Employing Computer Vision on a Smartphone to Help the Visually Impaired Cross the Road
DOI: https://doi.org/10.1609/aaaiss.v6i1.36057

Abstract
This paper presents a smartphone-based hybrid computer vision system designed to assist visually impaired (VI) individuals in safely navigating pedestrian crosswalks. Existing assistive technologies often depend on controlled crossings and require external hardware, limiting their usability in diverse real-world scenarios. In contrast, this system leverages a standard smartphone camera to detect vehicles and recognize pedestrian traffic lights in real time. The proposed framework integrates two lightweight YOLOv11 models—one for vehicle detection and another for pedestrian traffic light classification—alongside MiDaS v2.1 for monocular depth estimation. These models were trained on public datasets (KITTI and blind-assist1), optimized using TensorFlow Lite, and deployed as two Android applications providing auditory feedback for real-time guidance. Performance evaluations demonstrate high accuracy in object detection and reliable depth estimation under various conditions. Usability testing further confirms the practicality of the system in live environments. By combining accessibility, mobility, and context-aware scene understanding, this work offers a low-cost, deployable alternative for improving independent mobility in the VI community.
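The abstract describes fusing two model outputs—the pedestrian-light class and per-vehicle depth estimates—into auditory guidance. The sketch below illustrates one way such a decision step could look; the function name, light labels, and distance threshold are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the guidance step: combine the pedestrian-light
# classification with MiDaS-style vehicle depth estimates (in metres) to
# produce a spoken cue. Thresholds and labels are illustrative only.

def crossing_advice(light_state, vehicle_depths_m, safe_distance_m=10.0):
    """Return a text cue suitable for text-to-speech output."""
    if light_state != "green":
        return "Wait: pedestrian light is not green."
    near = [d for d in vehicle_depths_m if d < safe_distance_m]
    if near:
        return f"Wait: vehicle about {min(near):.0f} metres away."
    return "Cross: light is green and no nearby vehicles."

print(crossing_advice("red", []))
print(crossing_advice("green", [6.5, 22.0]))
print(crossing_advice("green", [25.0, 40.0]))
```

In a deployed app, `light_state` would come from the traffic-light YOLOv11 model and `vehicle_depths_m` from vehicle detections combined with the depth map, with the returned string passed to the platform's text-to-speech engine.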
Published
2025-08-01
How to Cite
Ismail, M. I. M., & Mousa, M. A. A. (2025). Employing Computer Vision on a Smartphone to Help the Visually Impaired Cross the Road. Proceedings of the AAAI Symposium Series, 6(1), 227-234. https://doi.org/10.1609/aaaiss.v6i1.36057
Issue
Section
Human-AI Collaboration: Exploring Diversity of Human Cognitive Abilities and Varied AI Models for Hybrid Intelligent Systems