
CLEARPATH – Multimodal Perception and AI for Safe Autonomous Navigation

Reference number 2025-01043
Coordinator Örebro universitet - Inst f naturvetenskap & teknik
Funding from Vinnova SEK 6 984 960
Project duration September 2025 - August 2028
Status Ongoing
Venture Advanced digitalization - Industrial needs-driven innovation
Call Advanced digitalization - Industrial innovation 2025

Purpose and goal

The project develops robust perception for autonomous machines in mining and construction. Using multimodal sensor fusion of 4D radar, thermal, visual, and lidar data, it enables safe navigation in dust and fog and with soiled or degraded sensors. The goal is to improve safety and efficiency under harsh conditions by advancing object recognition, terrain analysis, and localisation, combined with reliable hardware and trustworthy AI.
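For illustration only, since the project text does not describe its algorithms: the sketch below shows one simple form of multimodal depth fusion, where per-pixel estimates from each modality are weighted by a confidence score that drops when a sensor is degraded by dust, fog, or dirt. All names, weights, and array shapes are assumptions, not the project's actual method.

import numpy as np

def fuse_depth(depths, confidences, eps=1e-6):
    # depths: list of (H, W) depth maps, one per modality (e.g. lidar,
    # 4D radar); NaN marks missing returns.
    # confidences: list of (H, W) arrays in [0, 1], low where a sensor
    # is degraded (dust, fog, dirt on the optics).
    d = np.stack(depths)                     # (M, H, W)
    c = np.stack(confidences).astype(float)  # (M, H, W)
    c[np.isnan(d)] = 0.0                     # ignore missing returns
    d = np.nan_to_num(d)
    w = c / (c.sum(axis=0) + eps)            # per-pixel normalised weights
    return (w * d).sum(axis=0)

# Toy usage: lidar drops out in a foggy region; noisier radar fills in.
lidar = np.full((4, 4), 10.0)
lidar[:2, :] = np.nan                        # fog-induced dropouts
radar = np.full((4, 4), 10.5)
lidar_conf = np.where(np.isnan(lidar), 0.0, 0.9)
radar_conf = np.full((4, 4), 0.5)
print(fuse_depth([lidar, radar], [lidar_conf, radar_conf]))

Per-pixel weighting lets one modality take over smoothly where another fails, which is one plausible reading of the failure resilience the project targets.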

Expected effects and result

The project advances AI-driven automation and autonomy for mining and construction, improving efficiency, safety, and sustainability. It develops robust multimodal perception using 4D radar, lidar, thermal imaging, and generative AI for navigation, object detection, and semantic understanding. Research output at TRL 6 enables failure-resilient, trustworthy systems, reduces physical demands, and promotes diversity while supporting scalable, industry-ready automation.

Planned approach and implementation

The project includes development of radar-based perception and motion analysis; multimodal depth estimation, sensor fusion, and trustworthy perception; semantic mapping and traversability analysis (see the sketch below); collection and curation of robust datasets; and integration of autonomy and teleoperation into demonstrators, as well as project management, requirements analysis, evaluation, dissemination, and exploitation.
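As an illustration of the traversability-analysis work package (the project text does not specify a method, so everything here is an assumed sketch): slope thresholding on a gridded elevation map is a common geometric baseline for deciding where a machine can drive.

import numpy as np

def traversability(elevation, cell_size=0.5, max_slope_deg=20.0):
    # elevation: (H, W) heights in metres; cell_size in metres per cell.
    # Returns a boolean (H, W) mask, True where local slope is acceptable.
    gy, gx = np.gradient(elevation, cell_size)       # height change per metre
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))  # local slope angle
    return slope <= max_slope_deg

# Toy terrain: a flat bench on the left, a steep ramp on the right.
x = np.linspace(0.0, 10.0, 21)
terrain = np.tile(np.where(x < 5.0, 0.0, (x - 5.0) * 0.8), (21, 1))
mask = traversability(terrain, cell_size=0.5)
print(mask[0])   # True on the bench, False where slope exceeds 20 degrees

A deployed system would add semantic cues (surface material, obstacles) and uncertainty from the fused sensors; the slope check is only the geometric core.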

The project description has been provided by the project members themselves; the text has not been reviewed by our editors.

Last updated 22 September 2025
