
Explainable AI – The building block for trustworthy AI Systems

April 4, 2024

 

Author: Selventhiran Rengaraj is an Associate Technical Project Manager in the Mobility & Transportation Business Unit at MulticoreWare. He has hands-on experience in developing robotics stacks for ground and underwater robots and works on cutting-edge AI and ADAS perception stack optimization on leading automotive semiconductor platforms.


Introduction

Just a few decades ago, the idea of machines that could think belonged to the realm of science fiction. Today, machines have evolved beyond mere tools; they aid us in thinking, creating, and decision-making. Artificial Intelligence (AI) has altered many aspects of our lives. From virtual assistants such as Siri and Alexa to advanced machine learning algorithms predicting disease outbreaks, AI manifests in a multitude of applications. Advancements in Generative AI and Natural Language Processing are propelling technologies like ChatGPT, enabling AI conversations that feel remarkably human.

Safe AI

As AI becomes more involved in our everyday choices, it’s important to consider the ethical, transparent, and legal frameworks governing these systems.

Artificial Intelligence (AI) safety can be broadly defined as the endeavor to ensure that AI applications do not pose harm to humanity. It represents a proactive approach to developing, assessing, and deploying AI systems in a manner that prioritizes safety and ethics.

Responsible AI (RAI) and Explainable AI (XAI) serve as complementary guiding principles for fostering trustworthy and robust AI systems, ultimately contributing to the development of Safe AI.

What is the difference between Explainable AI and Responsible AI?

Picture this scenario: An autonomous vehicle faces a perplexing situation, such as a child unexpectedly running into the roadway.

Explainable AI – XAI helps AI scientists and users understand the car’s decision-making process: how and why the AI algorithm, the brain of the autonomous car, opted for a particular action (like braking or swerving), by analyzing the data the AI used (camera footage and sensor readings). Much like a black-box analysis after an incident, this proves beneficial for future improvements but does not avert the immediate incident.

Responsible AI – Responsible AI in autonomous mobility entails developing and deploying AI systems within self-driving vehicles with a focus on safety and reliability. This includes addressing challenges such as navigation, collision avoidance, pedestrian safety, and adherence to traffic regulations. Responsible AI endeavors to advance accountable, trustworthy, and secure autonomous mobility solutions by prioritizing safety-critical components and ethical frameworks.

Explainable AI (XAI) Methods for Computer Vision

Understanding AI decision-making is crucial for Safe AI systems, and the research community prioritizes advancements in XAI methodologies to address this need. To trust the predictions of any AI model, it is important to understand the rationale behind each prediction it generates. In computer vision, for example, XAI reveals which image elements influence model predictions (classification, detection, etc.). This is vital for AI models operating in complex environments (like Indian roads, which are often unpredictable and chaotic). XAI identifies potential weaknesses and biases in the model and reduces user anxiety about unexpected behavior.
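To make this concrete, below is a minimal sketch of how such an explanation can be generated for an image classifier, using the open-source pytorch-grad-cam package (pip install grad-cam). The ResNet-50 model, target class, and image path are illustrative assumptions, not the exact setup used for the figures in this post.

import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

# Pretrained classifier; the last convolutional block is a common CAM target
# because it is still spatial but already semantically rich.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
target_layers = [model.layer4[-1]]

img = Image.open("road_scene.jpg").convert("RGB").resize((224, 224))  # hypothetical image
rgb = np.float32(img) / 255.0
input_tensor = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])(transforms.ToTensor()(img)).unsqueeze(0)

# Ask: which pixels pushed the model towards the ImageNet class "sports car" (id 817)?
with GradCAM(model=model, target_layers=target_layers) as cam:
    grayscale_cam = cam(input_tensor=input_tensor,
                        targets=[ClassifierOutputTarget(817)])[0]

# Overlay the heatmap on the input to see what the prediction was based on.
overlay = show_cam_on_image(rgb, grayscale_cam, use_rgb=True)
Image.fromarray(overlay).save("gradcam_overlay.jpg")

Grad-CAM weights the target layer’s activation maps by the gradient of the chosen class score, so the resulting heatmap highlights the regions that most increased that score.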

Demonstrating the Explanation of Results from a Pretrained Model for Detection & Segmentation Tasks (using input images captured on Indian roads)

Explainability for 2D Object Detection

[Figure: original image, 2D object detection results, and an attempt to explain why the model has predicted the object as a car (using Ablation-CAM)]
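The detection figure above uses Ablation-CAM. Below is a hedged sketch of the same idea on a torchvision Faster R-CNN, following the pattern of the pytorch-grad-cam object-detection tutorial; the model choice, image path, and confidence threshold are assumptions, and the pretrained model behind our figure may differ.

import numpy as np
import torch
import torchvision
from PIL import Image
from pytorch_grad_cam import AblationCAM
from pytorch_grad_cam.ablation_layer import AblationLayerFasterRCNN
from pytorch_grad_cam.utils.model_targets import FasterRCNNBoxScoreTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

def fasterrcnn_reshape_transform(x):
    # The FPN backbone returns a dict of multi-scale feature maps; resize them
    # to one size and concatenate so the CAM can treat them as one activation.
    target_size = x["pool"].size()[-2:]
    return torch.cat([torch.nn.functional.interpolate(torch.abs(v), target_size, mode="bilinear")
                      for v in x.values()], dim=1)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

rgb = np.float32(Image.open("idd_road_scene.jpg").convert("RGB")) / 255.0  # hypothetical image
input_tensor = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)

# Run plain inference first: we explain the boxes the model actually predicted.
with torch.no_grad():
    det = model(input_tensor)[0]
keep = det["scores"] > 0.8  # assumed confidence threshold
boxes, labels = det["boxes"][keep], det["labels"][keep]

# Ablation-CAM zeroes groups of channels and measures how much each box's score
# drops; it needs no gradients, which suits the detection pipeline whose box
# outputs are not smoothly differentiable.
cam = AblationCAM(model,
                  target_layers=[model.backbone],
                  reshape_transform=fasterrcnn_reshape_transform,
                  ablation_layer=AblationLayerFasterRCNN())
grayscale_cam = cam(input_tensor,
                    targets=[FasterRCNNBoxScoreTarget(labels=labels, bounding_boxes=boxes)])[0]
Image.fromarray(show_cam_on_image(rgb, grayscale_cam, use_rgb=True)).save("ablationcam_car.jpg")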

Explainability for 2D Semantic Segmentation

[Figure: original image, semantic segmentation results, and an attempt to explain why the model has segmented the object as a car (using Grad-CAM)]

Original Image Source: India Driving Dataset (https://idd.insaan.iiit.ac.in/dataset/details/)
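The segmentation explanation above uses Grad-CAM. A similar sketch, adapted from the pytorch-grad-cam semantic-segmentation tutorial, is shown below; the DeepLabV3 model, the output wrapper, the custom target, and the image path are illustrative assumptions.

import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

class SegModelWrapper(torch.nn.Module):
    # torchvision's DeepLabV3 returns a dict; the CAM machinery needs a plain tensor.
    def __init__(self, model):
        super().__init__()
        self.model = model
    def forward(self, x):
        return self.model(x)["out"]

class SemanticSegmentationTarget:
    # Score to explain: sum of the class logits inside the predicted mask for that class.
    def __init__(self, category, mask):
        self.category = category
        self.mask = torch.from_numpy(mask)
    def __call__(self, model_output):
        return (model_output[self.category, :, :] * self.mask).sum()

model = SegModelWrapper(models.segmentation.deeplabv3_resnet50(weights="DEFAULT")).eval()

img = np.float32(Image.open("idd_road_scene.jpg").convert("RGB").resize((513, 513))) / 255.0
input_tensor = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(
    torch.from_numpy(img).permute(2, 0, 1)).unsqueeze(0)

CAR = 7  # "car" in the Pascal VOC label set this pretrained model uses
with torch.no_grad():
    car_mask = (model(input_tensor)[0].argmax(dim=0) == CAR).float().numpy()

# Ask: which pixels made the model label this region as "car"?
with GradCAM(model=model, target_layers=[model.model.backbone.layer4]) as cam:
    grayscale_cam = cam(input_tensor=input_tensor,
                        targets=[SemanticSegmentationTarget(CAR, car_mask)])[0]
Image.fromarray(show_cam_on_image(img, grayscale_cam, use_rgb=True)).save("gradcam_seg.jpg")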

Is Explainability the same as visualizing the model’s output (qualitative outputs of a model)?

No! Visualization helps us see the results in a clear, easily comprehensible format, but explainability delves into the primary factors influencing the model’s output: it uncovers the rationale behind the AI’s decision-making process.

Types of Explainable AI

XAI techniques are commonly grouped along a few axes: intrinsically interpretable models versus post-hoc explanation methods, model-specific versus model-agnostic methods, and local (per-prediction) versus global (whole-model) explanations. The Ablation-CAM and Grad-CAM examples above are post-hoc, local methods.

Challenges & Limitations

Though Explainable AI holds the promise of building trust and ensuring accountability in AI systems, developing it faces numerous challenges, such as the trade-off between model accuracy and interpretability, the computational cost of generating explanations, and ensuring that explanations faithfully reflect what the model actually computes.

Conclusion – Road Ahead for Safe & Explainable AI

As AI continues to evolve, prioritizing safety and explainability is crucial. Collaboration among researchers and developers is key to harnessing AI for positive impact while ensuring it navigates the complexities of the world with safety, transparency, and trustworthiness. The latest trends in Explainable AI, and the vital role SOTA Transformer-based AI models play in explainability, will be discussed in upcoming blogs.

To learn more about how we are building AI solutions responsibly, write to us at info@multicorewareinc.com
