Explainable AI for Intrusion Detection Systems in Automotive
Reference number |
Coordinator | RISE Research Institutes of Sweden AB
Funding from Vinnova | SEK 100 000
Project duration | January 2025 - June 2025
Status | Ongoing
Venture | 6G - Competence supply
Call | 6G - Supervision of degree work
Purpose and goal
The integration of additional communication solutions into automotive systems increases their complexity and their exposure to cyber threats. The goal of this thesis is to develop an explainable AI model for detecting intrusions in vehicle systems, combining high detection accuracy with clear insight into the model's decision-making process. The work focuses on advanced techniques for network-based intrusion detection and examines the trade-offs between explainability, performance, and computational efficiency.
Expected effects and result
Explainable AI models are expected to improve the transparency and reliability of AI decisions in Intrusion Detection Systems, making them more trustworthy in automotive settings. Insight into the decision-making process will help users understand and trust the AI's actions. The results are expected to highlight the trade-offs between explainability, performance, and computational efficiency, thereby contributing to the development of smart and intelligent networks and the 6G vision.
Planned approach and implementation
The plan is to design and evaluate different models and to investigate how different attacks affect them. The approach includes:
1. A review of the state of the art in explainable AI
2. A study of intrusion detection in automotive systems using existing algorithms and open datasets
3. Data preparation and investigation of advanced methods and tools for explainable AI
4. An evaluation of how integrating explainable AI affects the performance, reliability, and validity of intrusion detection systems
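Steps 3 and 4 above can be sketched with a model-agnostic explanation technique such as permutation feature importance: train an intrusion classifier, then measure how much held-out accuracy drops when each input feature is shuffled. The feature names, synthetic data, and labeling rule below are illustrative assumptions, not part of the project; real experiments would use open automotive intrusion datasets as in step 2.

```python
# Minimal sketch: an explainable network-intrusion classifier using
# permutation importance (scikit-learn). All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-message features from in-vehicle network traffic.
feature_names = ["inter_arrival_ms", "payload_entropy", "msg_id_freq"]
X = rng.normal(size=(n, 3))
# Synthetic label: "attacks" correlate with short inter-arrival times.
y = (X[:, 0] + 0.1 * rng.normal(size=n) < -0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# Model-agnostic explanation: how much does shuffling each feature
# hurt accuracy on held-out data?
result = permutation_importance(
    model, X_te, y_te, n_repeats=10, random_state=0
)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

The per-feature importance scores give the kind of decision-level insight the project targets, while the extra evaluation passes illustrate the computational cost side of the trade-off.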