Enhancing Autonomous Vehicle Perception through Multi-Sensor Fusion and Uncertainty-Aware Decision-Making

Authors

  • Vikash Kumar Verma
  • Dr. Sourabh Mandaloi

Keywords

Autonomous Driving, Multi-Sensor Fusion, Uncertainty-Aware Fusion, TransFuse Model, Deep Learning, RGB–LiDAR Fusion

Abstract

Autonomous driving requires precise perception and decision-making in dynamic, complex, and uncertain settings. Systems that rely on a single sensor are typically unreliable under low visibility, occlusion, or rapidly changing conditions, all of which ultimately compromise safe navigation. This research assesses whether multi-sensor fusion of RGB and depth/LiDAR data can improve perception accuracy, whether an explicit uncertainty model can be leveraged to weight each sensor by its estimated reliability, and whether these gains translate into better online decision-making for steering, throttle, and braking. To this end, the proposed Uncertainty-Aware TransFuse fuses RGB and depth features extracted by CNNs using Transformer-based attention, and at inference time weights each sensor's contribution according to its estimated reliability. Experimental results on the KITTI dataset demonstrate statistically significant improvements over the baseline in navigation accuracy, object detection, and lane detection, even in severely degraded scenes. The proposed multi-sensor fusion system runs in real time at 32 FPS and reduces false detections under occlusion and in low-light test conditions. In short, uncertainty-aware fusion of RGB and depth increases overall robustness and safety, and Uncertainty-Aware TransFuse provides a reliable perception model for real-world autonomous driving.
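
The fusion idea described in the abstract can be sketched in a few lines of PyTorch. The following is a minimal illustration only, not the authors' implementation: the module names, layer sizes, and the inverse-variance weighting scheme (UncertaintyAwareFusion, rgb_stem, feat_dim, and so on) are assumptions chosen for clarity. CNN stems extract RGB and depth features, per-branch heads estimate a log-variance, a softmax over the negated log-variances down-weights the less reliable sensor, and a Transformer encoder attends jointly over both token streams before a control head predicts steering, throttle, and brake.

```python
# Minimal sketch of uncertainty-weighted RGB-depth fusion (illustrative only;
# all architecture choices below are assumptions, not the published model).
import torch
import torch.nn as nn


class UncertaintyAwareFusion(nn.Module):
    def __init__(self, feat_dim: int = 256, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        # Small CNN stems standing in for the per-sensor backbones.
        self.rgb_stem = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.depth_stem = nn.Sequential(
            nn.Conv2d(1, feat_dim, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer-based attention over the joint token sequence.
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # One log-variance estimate per sensor branch (uncertainty heads).
        self.rgb_unc = nn.Linear(feat_dim, 1)
        self.depth_unc = nn.Linear(feat_dim, 1)
        # Control head for the steering / throttle / brake commands.
        self.control = nn.Linear(feat_dim, 3)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor):
        # Flatten each CNN feature map into a token sequence: (B, H*W, C).
        r = self.rgb_stem(rgb).flatten(2).transpose(1, 2)
        d = self.depth_stem(depth).flatten(2).transpose(1, 2)
        # Estimate per-branch log-variance from pooled features.
        r_logvar = self.rgb_unc(r.mean(dim=1))
        d_logvar = self.depth_unc(d.mean(dim=1))
        # Softmax over negated log-variances: the noisier sensor gets less weight.
        w = torch.softmax(torch.cat([-r_logvar, -d_logvar], dim=1), dim=1)
        r = r * w[:, 0:1].unsqueeze(-1)
        d = d * w[:, 1:2].unsqueeze(-1)
        # Joint self-attention across both sensor streams, pooled for control.
        fused = self.encoder(torch.cat([r, d], dim=1))
        return self.control(fused.mean(dim=1)), w


if __name__ == "__main__":
    model = UncertaintyAwareFusion()
    rgb = torch.randn(2, 3, 64, 192)    # downscaled KITTI-like frame
    depth = torch.randn(2, 1, 64, 192)  # depth map / projected LiDAR channel
    cmds, weights = model(rgb, depth)
    print(cmds.shape, weights)          # torch.Size([2, 3]) and per-sensor weights
```

Scaling each branch's tokens by an inverse-variance-style weight before joint attention is one straightforward way to realize "weighted reliance upon sensor reliability during inference"; gating the attention scores directly would be an equally plausible alternative reading of the abstract.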


Author Biographies

Vikash Kumar Verma

M.Tech Scholar

Department of Computer Science

SAM College, Bhopal, Madhya Pradesh, India

Dr. Sourabh Mandaloi

Associate Professor

Department of Computer Science

SAM College, Bhopal, Madhya Pradesh, India


Published

10/30/2025

How to Cite

Verma, V. K., & Mandaloi, D. S. (2025). Enhancing Autonomous Vehicle Perception through Multi-Sensor Fusion and Uncertainty-Aware Decision-Making. SMART MOVES JOURNAL IJOSCIENCE, 11(10), 20–30. Retrieved from https://ijoscience.com/index.php/ojsscience/article/view/577
