VIVID: Augmenting Vision-based Indoor Navigation System with Edge Computing

Indoor localization and navigation have great application potential, especially in large indoor spaces where people tend to get lost. The indoor localization problem is the foundation of any indoor navigation system. Existing research and commercial efforts have relied on wireless-based approaches to locate users in indoor environments. However, the predominant wireless technologies, such as WiFi and Bluetooth, remain unsatisfactory: they either do not support commodity devices or are vulnerable to environmental changes. These issues make such systems hard to deploy and maintain.

In this project, we present Vivid, a mobile-device-friendly indoor localization and navigation system that uses visual cues as the cornerstone of localization. By leveraging computation power at the extreme edge of the Internet, Vivid largely overcomes the difficulties posed by resource-intensive image-processing tasks. We propose a grid-based algorithm that transforms the feature map into a grid, on which the path between two positions can be computed efficiently (a small sketch of this idea is shown below). We also apply deep learning techniques to assist in automatic map maintenance, so the system adapts to visual changes in the environment. With edge computing, user privacy is preserved: the visual data is mainly processed locally, and detected dynamic objects are removed immediately.
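To make the grid-based path-planning idea concrete, here is a minimal sketch rather than the actual Vivid implementation. It assumes the feature map has already been projected onto a 2D occupancy grid (a hypothetical `grid` where 1 marks a traversable cell), and finds a path between two cells with a breadth-first search.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid  : list of lists, 1 = traversable cell, 0 = blocked
    start : (row, col) of the current position
    goal  : (row, col) of the destination
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking back through parents.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 1 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal not reachable from start

# Example: a 4x4 grid with a blocked region in the middle.
grid = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
print(shortest_path(grid, (0, 0), (2, 3)))
```

Once the feature map is discretized this way, navigation reduces to standard graph search; richer planners (e.g., A* with a distance heuristic) drop in with the same grid representation.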

The evaluation results show that: i) our system clearly outperforms existing solutions on COTS devices in localization accuracy, yielding decimeter-level error; ii) our chosen system architecture performs best among the alternatives we evaluated; iii) the automatic map maintenance mechanism effectively improves the localization robustness of the system.
