Explainable AI – The building block for trustworthy AI Systems
April 4, 2024
Author: Selventhiran Rengaraj is an Associate Technical Project Manager in the Mobility & Transportation Business Unit at MulticoreWare. He has hands-on experience in developing Robotics Stacks for Ground & Underwater Robots and is working on cutting-edge AI and ADAS Perception Stack Optimization on leading Automotive Semiconductor platforms.
Introduction
Just a few decades ago, the idea of machines that could think belonged to the realm of science fiction. Today, machines have evolved beyond mere tools: they aid us in thinking, creating, and decision-making. Artificial Intelligence (AI) has altered many aspects of our lives. From virtual assistants such as Siri and Alexa to advanced machine learning algorithms predicting disease outbreaks, AI manifests in a multitude of applications. Advancements in Generative AI and Natural Language Processing are propelling technologies like ChatGPT, enabling AI conversations that feel remarkably human.
Safe AI
As AI becomes more involved in our everyday choices, it’s important to consider the ethical, transparent, and legal frameworks governing these systems.
AI safety can be broadly defined as an endeavor to ensure that AI applications do not pose harm to humanity.
It represents a proactive approach to developing, assessing, and deploying AI systems in a manner that prioritizes safety and ethics.
Responsible AI (RAI) and Explainable AI (XAI) serve as complementary guiding principles for fostering trustworthy and robust AI systems, ultimately contributing to the development of Safe AI.
What is the difference between Explainable AI and Responsible AI?
Picture this scenario: An autonomous vehicle faces a perplexing situation, such as a child unexpectedly running into the roadway.
Explainable AI – XAI helps AI scientists and users understand the car’s decision-making process: how and why the AI algorithm, the brain of the autonomous car, opted for a particular action (like braking or swerving), by analyzing the data the AI used (camera footage and sensor readings). Similar to a black-box analysis, this proves beneficial for future improvements but does not avert the immediate incident.
Responsible AI – Responsible AI in autonomous mobility entails developing and deploying AI systems within self-driving vehicles with a focus on safety and reliability. This includes addressing challenges such as navigation, collision avoidance, pedestrian safety, and adherence to traffic regulations. Responsible AI endeavors to advance accountable, trustworthy, and secure autonomous mobility solutions by prioritizing safety-critical components and ethical frameworks.
Explainable AI (XAI) Methods for Computer Vision
Understanding AI decision-making is crucial for Safe AI systems, and the research community prioritizes advancements in XAI methodologies to address this need. To trust the prediction of any AI model, it is important to understand the rationale behind each prediction it generates. In computer vision, for example, XAI reveals which image elements influence a model’s predictions (classification, detection, etc.). This is vital for AI models operating in complex environments (like Indian roads, which are often unpredictable and chaotic). XAI identifies potential weaknesses and biases in the model and reduces user anxieties about unexpected behaviours.
Demonstrating the Explanation of Results from a Pretrained Model for Detection & Segmentation Tasks (using input images captured on Indian roads)
[Figure] Explainability for 2D Object Detection
[Figure] Explainability for 2D Semantic Segmentation
Original Image Source: India Driving Dataset (https://idd.insaan.iiit.ac.in/dataset/details/)
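Saliency overlays like those above are typically produced with gradient-based attribution methods such as Grad-CAM. The sketch below is a minimal, from-scratch illustration of the idea in PyTorch; the model (a stock torchvision ResNet-50 classifier), the choice of target layer, and the function names are illustrative assumptions, not the exact pipeline used for the images above.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative choices: a pretrained ResNet-50 and its last residual
# block as the Grad-CAM target layer.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]

# Capture the target layer's activations (forward) and gradients (backward).
activations, gradients = {}, {}
target_layer.register_forward_hook(
    lambda mod, inp, out: activations.update(value=out.detach()))
target_layer.register_full_backward_hook(
    lambda mod, gin, gout: gradients.update(value=gout[0].detach()))

def grad_cam(x, class_idx=None):
    """x: preprocessed (1, 3, H, W) tensor. Returns a [0, 1] heatmap."""
    logits = model(x)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()  # gradients flow back to the target layer
    # Weight each feature map by the spatial mean of its gradient, keep
    # only positive evidence (ReLU), then upsample to the input size.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The returned heatmap highlights the pixels that most increased the score of the predicted class; overlaying it on the input image yields visualizations like the ones shown above.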
Is Explainability the same as visualizing the model’s output (qualitative outputs of a model)?
No! Visualization helps us see the results in a clear and easily comprehensible format, but explainability delves into understanding the primary factors influencing the model’s output. It uncovers the rationale behind an AI’s decision-making process.
Types of Explainable AI
- Intrinsic (model-based interpretability): The model itself is interpretable because its structure is deliberately constrained (e.g., short decision trees or sparse linear models). A minimal sketch follows this list.
- Post-hoc techniques build an additional model to analyze an existing trained model and provide insights into its decision-making process.
- Model-specific approaches can only be applied to a certain class of models (e.g., DeepLIFT, Grad-CAM, Integrated Gradients; the Grad-CAM sketch shown earlier is one such method).
- Model-agnostic methods can be applied to any machine learning model, regardless of its structure or type (e.g., LIME and SHAP). A LIME sketch follows this list.
- Local Explanation considers the model as a black box, focusing on the variables that contribute to a single decision. Generally, a local explanation focuses on a single input and the characteristic variables associated with it.
- Global Explanation focuses on understanding how the entire model works, analyzing how different pieces of data interact within the model to influence its final output.
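To make the intrinsic category concrete, here is a minimal sketch (assuming scikit-learn and its bundled iris dataset, both illustrative choices) of a deliberately shallow decision tree whose entire decision logic can be printed as human-readable rules:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# A deliberately shallow tree: the whole model is a handful of readable rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)
print(export_text(tree, feature_names=iris.feature_names))
```

Because the printed rules are the model, no separate explanation step is needed; that is what makes the approach intrinsic.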
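As a hedged sketch of a model-agnostic, local method, the snippet below uses LIME to explain a single prediction of the same shallow tree; the dataset, the sample index, and the num_features value are illustrative assumptions:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    mode="classification",
)
# LIME perturbs the sample, queries the model, and fits a small local
# surrogate: a local, model-agnostic explanation of one prediction.
exp = explainer.explain_instance(
    iris.data[0], model.predict_proba, labels=(0,), num_features=4)
print(exp.as_list(label=0))  # (feature rule, weight) pairs for this one sample
```

The same explainer would work unchanged on a neural network or gradient-boosted ensemble, since LIME only needs a predict function, which is precisely what "model-agnostic" means.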
Challenges & Limitations
Though Explainable AI holds the promise of building trust and ensuring accountability in AI systems, its development faces several challenges:
- Complexity of AI: Today’s AI models strive to improve at specific tasks, be it image recognition or autonomous navigation, by processing vast amounts of data. Translating these complex mathematical models into simple explanations for humans is difficult.
- Trade-off between explainability and performance: Most AI models prioritize efficiency, focusing on delivering results without devoting any effort to explaining their operations.
- Safeguarding privacy: Understanding AI decisions often requires analyzing sensitive data, which risks privacy and security without proper safeguards. Clear guidelines and protocols are crucial for secure data handling in XAI.
Conclusion – Road Ahead for Safe & Explainable AI
As AI continues to evolve, prioritizing safety and explainability is crucial. Collaboration among researchers and developers is key to harnessing AI for positive impact while ensuring it navigates the complexities of the world with safety, transparency, and trustworthiness. The latest trends in Explainable AI, and the vital role that SOTA Transformer-based AI models play in explainability, will be discussed in upcoming blogs.
To learn more about how we are building AI solutions responsibly, write to us at info@multicorewareinc.com