Background and Objective
Daily activities such as shopping and indoor navigation are challenging for people with visual impairment. Researchers have proposed a variety of solutions to help them navigate both indoors and outdoors.
Methods
We applied deep learning to help visually impaired people navigate indoors using markers. We propose a system that detects these markers with an improved Tiny-YOLOv3 model and uses them for indoor navigation. A dataset was created by collecting marker images from recorded videos and augmenting them with image processing techniques such as rotation, brightness adjustment, and blurring. After training and validating the model, its performance was evaluated on a held-out test set and on real videos.
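For illustration, the following is a minimal sketch of the kind of augmentation described above (rotation, brightness adjustment, and blurring), assuming OpenCV and NumPy; the function name, angles, brightness offsets, and kernel sizes are hypothetical and not taken from the paper.

```python
import cv2
import numpy as np

def augment_marker_image(image: np.ndarray) -> list:
    """Generate rotated, brightness-shifted, and blurred variants of a marker image.

    Illustrative sketch only; parameter values are assumptions, not the paper's settings.
    """
    h, w = image.shape[:2]
    variants = []

    # Rotation transformation: rotate around the image center.
    for angle in (-15, 15):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        variants.append(cv2.warpAffine(image, M, (w, h)))

    # Brightness processing: shift pixel intensities up and down.
    for beta in (-40, 40):
        variants.append(cv2.convertScaleAbs(image, alpha=1.0, beta=beta))

    # Blur processing: simulate defocus with a Gaussian kernel.
    for ksize in (3, 7):
        variants.append(cv2.GaussianBlur(image, (ksize, ksize), 0))

    return variants
```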
Results
The contributions of this paper are: (1) we developed a navigation system to help people with visual impairment navigate indoors using markers; (2) we implemented and tested a deep learning model based on Tiny-YOLOv3 to detect Aruco markers in challenging conditions; (3) we implemented and compared several modified versions of the original model to improve detection accuracy. The modified Tiny-YOLOv3 model achieved an accuracy of 99.31% in challenging conditions, compared to 96.11% for the original model.
Conclusion
The training and testing results show that the improved Tiny-YOLOv3 models outperform the original model.