Functional Safety and AI for Autonomous Driving Systems
November 29, 2024
Author: Kavitha has over 20 years of experience heading firmware development for embedded systems across the automotive, industrial and medical domains. She is a TUV SUD certified L2 professional in Functional Safety. She currently works on Functional Safety, Cyber Security, SOTIF and safety for AI systems.
Introduction
Artificial Intelligence is the driving force behind the next level of autonomous vehicles, pushing us towards a future of truly “self-driving” cars. Functional safety is critical because these vehicles must emulate, and ultimately substitute for, human perception, intuition, intelligence and decision-making capabilities. Self-driving cars rely on artificial intelligence to perceive real-world scenarios through different sensors and navigate safely.
Functional safety is fundamentally about risk mitigation: ensuring the vehicle can be brought to a safe state when its systems deviate from predefined safety goals. This makes it vital to address the reliability and robustness of AI systems.
Challenges in AI Functional Safety
The inherent complexity and unpredictability of AI models present a distinct set of challenges for managing functional safety risks. While traditional safety-critical systems are guided by well-established rules and deterministic behaviours, AI models can be opaque, adaptive, and prone to unforeseen failures. Outlined below are the key challenges and their corresponding mitigation plans:
1. Lack of Explainability
AI models, particularly deep learning algorithms, are often considered “black boxes” due to the lack of transparency in understanding or qualifying their decision-making process. Without clear insight into how a model reaches its decisions, predicting its outcomes or ensuring functional safety becomes significantly harder.
Explainable AI (XAI) is gaining traction as a critical tool for understanding the behaviour of AI models, and some model architectures lend themselves to explanation better than others. Read our blog – Applications & Methodologies of XAI.
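To make this concrete, gradient-based saliency maps are one widely used XAI technique: they highlight which input pixels most influenced a classification. Below is a minimal sketch assuming a PyTorch image classifier; the ResNet-18 and the random input frame are placeholders, not a production perception model.

```python
# Minimal gradient-based saliency sketch (one common XAI technique).
# The model and input here are placeholders, not a real perception stack.
import torch
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for a perception model
model.eval()

frame = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy camera frame

scores = model(frame)
top_class = scores.argmax(dim=1).item()

# Backpropagate the top-class score to the input: large gradient magnitudes
# mark the pixels that most influenced this particular decision.
scores[0, top_class].backward()
saliency = frame.grad.abs().max(dim=1).values  # per-pixel importance map
print(saliency.shape)  # torch.Size([1, 224, 224])
```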
2. Data Quality, Quantity, Dependency and Bias
Just as the quality of a product depends on the quality of the raw materials used in its development, the effectiveness of deep learning systems rests on the quality of their training data. The principle of “more is better” also applies: larger and more diverse datasets enhance the system’s robustness. For example, an autonomous vehicle trained on data that does not comprehensively represent real-world conditions could behave unexpectedly in unfamiliar scenarios or corner cases, resulting in safety risks. Real-world scenarios vary across geographies, so adequate training data covering every deployment geography is essential.
Generative AI can be used for synthetic data generation to train the models more exhaustively. Read our blog – Gen AI integration in Robotics.
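Full generative pipelines (driving simulators, GANs, diffusion models) are beyond a short snippet, but the sketch below illustrates the principle at the simpler end of the spectrum: programmatically perturbing existing camera frames to approximate under-represented conditions such as low light or fog. The function names and parameter ranges are illustrative assumptions.

```python
# Illustrative sketch: approximating rare driving conditions by perturbing
# existing camera frames. Real synthetic-data pipelines typically use
# driving simulators or generative models; this only shows the principle.
import random
from PIL import Image, ImageEnhance

def simulate_low_light(frame: Image.Image) -> Image.Image:
    """Darken a frame to mimic dusk/night scenes missing from the dataset."""
    return ImageEnhance.Brightness(frame).enhance(random.uniform(0.2, 0.5))

def simulate_fog(frame: Image.Image) -> Image.Image:
    """Blend toward grey and reduce contrast to approximate fog."""
    grey = Image.new("RGB", frame.size, (200, 200, 200))
    hazy = Image.blend(frame, grey, alpha=random.uniform(0.3, 0.6))
    return ImageEnhance.Contrast(hazy).enhance(0.7)

# Hypothetical usage on an RGB dataset frame:
# frame = Image.open("sample_frame.png").convert("RGB")
# augmented = simulate_fog(simulate_low_light(frame))
```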
3. Continuous Learning and Adaptation
AI models are trained on large datasets and are designed to learn and dynamically adapt to newly perceived information. However, their behaviour can change when exposed to environments or scenarios that differ significantly from the training data. While this ability to learn continuously is required for autonomous driving, it raises concerns for functional safety. For instance, a self-driving car may adapt its decision-making based on new data, but without careful monitoring it could develop unsafe behaviours that were neither present nor tested at its initial deployment.
One mitigation is a safety monitor: essentially a non-AI module that can override the AI model’s decisions when it misinterprets a scene. The approach is still being researched and may require significant effort to deploy.
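The sketch below illustrates the concept, assuming a planner that emits steering and braking commands each control cycle. The envelope limits and interfaces are hypothetical; a real monitor would be specified and verified against the vehicle’s safety goals.

```python
# Minimal sketch of a rule-based safety monitor that gates an AI planner's
# commands. Limits and interfaces are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Command:
    steering_deg: float   # requested steering angle, degrees
    brake_pct: float      # requested braking, 0-100

MAX_STEER_DEG = 30.0      # assumed plausibility envelope at current speed
MAX_STEER_RATE = 5.0      # assumed max change per control cycle, degrees

def monitor(ai_cmd: Command, last_cmd: Command) -> Command:
    """Pass the AI command through only if it stays inside the validated
    envelope; otherwise substitute a deterministic safe fallback."""
    steer_ok = abs(ai_cmd.steering_deg) <= MAX_STEER_DEG
    rate_ok = abs(ai_cmd.steering_deg - last_cmd.steering_deg) <= MAX_STEER_RATE
    if steer_ok and rate_ok:
        return ai_cmd
    # Fallback: hold the last safe steering angle and apply gentle braking.
    return Command(steering_deg=last_cmd.steering_deg, brake_pct=30.0)
```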
4. Testing and Validation
Traditional safety-critical systems can be rigorously tested for functional safety through simulation, fault injection, and formal verification. Testing deep learning models, however, presents significant challenges. It is often impossible to test all scenarios exhaustively, as the model may encounter edge cases or situations it has never seen before. The dynamic nature of AI also makes it challenging to assure safety over the system’s lifetime as it continuously learns and evolves.
However, emerging testing tools designed to handle AI, ML and DL models are set to address this issue in the near future.
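Until such tools mature, property-based checks can provide partial coverage. The sketch below shows one metamorphic robustness test under an assumed, hypothetical classify() interface: predictions should stay stable under perturbations too small to matter physically.

```python
# Sketch of a metamorphic robustness test: a perception model's output
# should not flip under physically insignificant input perturbations.
# `classify` is a hypothetical stand-in for the model under test.
import numpy as np

def classify(frame: np.ndarray) -> int:
    """Placeholder for the real perception model's top-class output."""
    return int(frame.mean() > 0.5)

def test_noise_stability(frame: np.ndarray, trials: int = 100) -> float:
    """Return the fraction of small-noise perturbations that preserve
    the original prediction (1.0 = fully stable)."""
    baseline = classify(frame)
    rng = np.random.default_rng(0)
    stable = sum(
        classify(np.clip(frame + rng.normal(0, 0.01, frame.shape), 0, 1))
        == baseline
        for _ in range(trials)
    )
    return stable / trials

print(test_noise_stability(np.random.default_rng(1).random((64, 64))))
```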
5. Resource Constraints in Safety-Critical Systems
AI models can be computationally intensive, requiring significant processing power and memory. In safety-critical applications, constraints may be imposed on the hardware, which can limit the capability to deploy large, robust AI models. Ensuring that these models remain safe and reliable within such constraints is a major challenge, particularly under real-time processing requirements. Additionally, AI systems that rely on sensor inputs (e.g., LIDAR, RADAR, ToF, cameras) must be robust to sensor drift, failures and inaccuracies.
Embedded computing platforms are becoming increasingly capable of accommodating deep learning model deployments. However, challenges related to power consumption, size, cost, and other constraints still need to be addressed.
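As one illustration of fitting models into constrained hardware, post-training quantization trades a little accuracy for a much smaller footprint. The sketch below applies PyTorch’s dynamic int8 quantization to a toy network; the architecture is a placeholder, and in a safety context the quantized model would itself need re-verification.

```python
# Sketch: post-training dynamic quantization to shrink a model for embedded
# deployment. The toy network is a placeholder; a quantized model would
# need re-verification before use in a safety context.
import torch
import torch.nn as nn

# Toy stand-in for a perception head; real models would be far larger.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Convert Linear layers to dynamic int8 quantization: weights are stored
# as int8 (roughly 4x smaller) and activations are quantized on the fly.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.rand(1, 128)
print(model(x).shape, quantized(x).shape)  # same interface, smaller model
```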
6. Evolving Industry Standards and Regulatory Guidelines
Functional safety standards such as ISO 26262:2018 for automotive systems and IEC 61508 for industrial applications provide guidelines for ensuring safety in traditional systems. However, there are few well-established functional safety standards for AI systems, particularly those that use machine learning. This gap in regulatory frameworks, and the pace at which standards are evolving, makes it challenging for industries to adopt best practices and ensure the safety of AI-driven products and services.
While standards are being established, it might take a few years for them to become fully integrated into the deep learning model development and training processes.
ISO/PAS 8800 (expected to be released shortly):
- Defines safety-related properties and risk factors that can cause performance degradation or malfunction of AI models, in alignment with ISO 26262, ISO 21448, ISO/TR 4804 and guidance from ISO/IEC TR 5469
- Covers safety requirements, data quality and completeness, and failure mitigation
- Addresses tools to support AI-based development, verification and validation
- Describes the safety assurance argument and evidence fulfilling the objectives for the AI system
ISO/DTS 5083 Road Vehicles – Safety for Automated Driving Systems – Design, Verification and Validation (draft in approval phase) – will replace ISO/TR 4804:2020
7. Ethical and Accountability Issues
Within the realm of functional safety, ethical considerations play a significant role. If an AI system fails and poses a risk or causes harm to human life, determining accountability can be challenging. AI models are typically designed by a large team of data scientists and engineers, making it difficult to identify the exact cause of failure.
Establishing clear lines of accountability and ethical guidelines for AI systems is crucial in addressing functional safety concerns. Security risks are also involved, as attackers could take control of the system and create hazards. And in the case of an unavoidable accident, deciding whether to prioritize the safety of the vehicle’s occupants or of the people around it is a complex and debated issue.
MulticoreWare’s Unique Positioning
Ensure safety compliance with our expert engineers. We handle functional safety requirements, safety hardening of libraries, and application source code.
Advanced AI/ML Algorithms + Edge Optimization + Functional Safety Services
- Automotive SPICE: Gap Analysis, Process Improvement, and Pre-Assessment.
- Functional Safety (FuSa): Compliant software implementation, testing, V&V, FuSa kits, gap analysis, re-engineering recommendations, and development
- Safety of Systems with AI Models: Safety analysis, V&V of ML/DL models, and development of safety cages for fail-safe functionality.
Conclusion
Guaranteeing functional safety in AI models is a multifaceted challenge that requires a comprehensive approach encompassing transparency, rigorous testing, effective data quality management, and the development of new safety standards. We possess in-depth expertise in the ISO 26262 standard, ensuring the design and development of safe and reliable electrical and electronic systems. To know more, write to us: info@multicorewareinc.com
Know more: Automotive Safety Engineering | Industrial Functional Safety