|dc.description.abstract||Visually impaired and blind (VIB) people often face additional difficulties in their daily lives because they lack access to visual information, which reduces their quality of life. Due to their vision problems, it is hard for them to avoid obstacles and find target objects in different spaces. Assistive devices have been developed to help blind people avoid obstacles and navigate. However, many of these technologies require users to purchase additional devices and lack flexibility, inconveniencing VIB users. To address these issues, we propose a new approach, implementing a design, programming, and interface to create a Navigation Assistance through AR technology and Deep learning (NAAD) system. The NAAD system is based on MobileNet Single-Shot Detection (MobileNet-SSD), Augmented Reality (AR), and LiDAR, implementing obstacle detection, target object detection, distance calculation, and navigation to user-specified lost objects. It includes a mobile application that uses simple voice and gesture controls to aid navigation. This system aims to (i) help VIB people avoid obstacles in daily life, (ii) use computer vision to find user-specified objects quickly, and (iii) integrate LiDAR for AR/VR experiences, reducing the need for additional equipment and improving the accuracy of the distance measured between the obstacle and the user. Based on the NAAD system, we designed a safe mode and a query mode; we subsequently upgraded the system by adding a navigation function to the query mode.
In the first study in this dissertation, we discuss object detection integration and introduce the query mode of the NAAD system. The object detection function of the NAAD system is implemented through machine learning. This virtual assistant provides functions such as obstacle detection, distance estimation, navigation, and real-time environment analysis to help users detect and find their objects. The object detection feature also includes an interactive capability that lets the user query the device to find indoor objects, providing voice and vibration feedback and further offering voice navigation. Experimental results show that the system performs well, with a response time of 19 ms and a frame rate above 30 FPS, outperforming similar systems.
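The query-mode matching step described above can be sketched as follows. This is an illustrative simplification, not the dissertation's actual implementation: it assumes the MobileNet-SSD detector has already produced per-frame detections (label, confidence, bounding box), and the hypothetical `find_target` helper filters them against the user's spoken query.

```python
# Hypothetical sketch of query-mode filtering: keep only detections
# matching the user's requested label above a confidence threshold,
# and return the most confident match. All names are illustrative.
from typing import List, NamedTuple, Optional, Tuple

class Detection(NamedTuple):
    label: str
    confidence: float
    box: Tuple[int, int, int, int]  # (x, y, w, h) in pixels

def find_target(detections: List[Detection], query: str,
                min_conf: float = 0.5) -> Optional[Detection]:
    """Return the most confident detection whose label matches the query."""
    matches = [d for d in detections
               if d.label == query and d.confidence >= min_conf]
    return max(matches, key=lambda d: d.confidence) if matches else None

# Example: the user asks for "keys" among one frame's detections.
frame = [
    Detection("chair", 0.91, (10, 20, 80, 120)),
    Detection("keys", 0.62, (200, 150, 30, 20)),
    Detection("keys", 0.48, (300, 90, 25, 18)),  # below threshold, ignored
]
target = find_target(frame, "keys")
print(target)  # the 0.62-confidence "keys" detection
```

In the real system, the returned box would drive the voice and vibration feedback; here it is simply printed.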
In the second study in this dissertation, we discuss obstacle detection integration and introduce the safe mode of the NAAD system. In safe mode, the virtual assistant provides obstacle detection, distance estimation, and an alarm system that analyzes the environment in real time and alerts the user to avoid obstacles. To improve the accuracy of the distance detected between obstacles and the user, we adopted a LiDAR sensor instead of a depth camera, which actively senses the environment without being affected by ambient light. The experimental results show that the distance detection accuracy between the obstacle and the user is 96% within a five-meter range, surpassing similar projects.
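The safe-mode alarm logic can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the dissertation's code: it assumes LiDAR depth returns along the walking path are available as a list of distances in meters, and that the five-meter detection range reported above is used as the alert threshold.

```python
# Illustrative safe-mode alert check: take the nearest valid LiDAR
# return and alert when an obstacle falls inside the detection range.
# ALERT_RANGE_M is an assumed threshold matching the reported 5 m range.
ALERT_RANGE_M = 5.0

def obstacle_alert(depth_samples_m, alert_range=ALERT_RANGE_M):
    """Return (alert, nearest): alert is True if any valid LiDAR
    return is closer than alert_range meters; nearest is that
    distance, or None when no valid samples exist."""
    valid = [d for d in depth_samples_m if d > 0]  # drop invalid/zero returns
    if not valid:
        return (False, None)
    nearest = min(valid)
    return (nearest < alert_range, nearest)

print(obstacle_alert([7.2, 4.1, 0.0, 6.8]))  # (True, 4.1): obstacle at ~4 m
print(obstacle_alert([8.5, 9.1]))            # (False, 8.5): path clear
```

In the real system, a True result would trigger the voice or vibration alarm described above.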
In the third study in this dissertation, we optimize and upgrade the NAAD system. We discuss object-based navigation and introduce an object navigator in the query mode of the NAAD system. We designed the object navigation module by adding a memory storage unit that records the target object and the user's location in real time, which enables detection of and navigation to the target object even when the user is far away from it or in a different space. Finally, we introduce the framework and interface design of the upgraded final NAAD system. Experimental results show that the NAAD system can successfully locate and navigate to lost items in different spaces.||en_US