CUED Publications database

Vision-based global localization using a visual vocabulary

Wang, J and Cipolla, R and Zha, H (2005) Vision-based global localization using a visual vocabulary. Proceedings - IEEE International Conference on Robotics and Automation, 2005. pp. 4230-4235. ISSN 1050-4729

Full text not available from this repository.

Abstract

This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by SIFT descriptors are used as natural landmarks. These descriptors are indexed into two databases: an inverted index and a location database. The inverted index is built from a visual vocabulary learned from the feature descriptors. In the location database, each location is directly represented by a set of scale-invariant descriptors. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the inverted index is fast but not sufficiently accurate, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The combination of the coarse and fine stages makes fast and reliable localization possible. In addition, if necessary, the localization result can be verified by the epipolar geometry between the representative view in the database and the view to be localized. Experimental results show that our approach is efficient and reliable. ©2005 IEEE.
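The coarse stage described in the abstract (an inverted index over a learned visual vocabulary, queried by voting) can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's implementation: the quantization of SIFT descriptors into visual-word IDs is taken as already done, and TF-IDF-style weighting is used as one common scoring choice for such voting.

```python
from collections import defaultdict
import math

# Toy sketch of coarse localization via an inverted index of visual words.
# Each location is modelled as a set of visual-word IDs (quantized SIFT
# descriptors); word IDs and the scoring scheme are illustrative assumptions.

def build_inverted_index(locations):
    """locations: dict mapping location_id -> iterable of visual-word IDs.
    Returns an inverted index: word ID -> set of locations containing it."""
    index = defaultdict(set)
    for loc, words in locations.items():
        for w in words:
            index[w].add(loc)
    return index

def coarse_localize(query_words, locations, index, top_k=2):
    """Score each candidate location by TF-IDF-weighted voting: every
    visual word in the query votes for the locations that contain it,
    with rarer words casting stronger votes. Returns the top_k locations."""
    n_locs = len(locations)
    scores = defaultdict(float)
    for w in set(query_words):
        postings = index.get(w, set())
        if not postings:
            continue  # word unseen in any location: no vote
        idf = math.log(n_locs / len(postings))
        for loc in postings:
            scores[loc] += idf
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [loc for loc, _ in ranked[:top_k]]
```

In a full system the fine stage would then re-rank these few candidates by direct descriptor matching against the location database, which is why the coarse pass only needs to return a short candidate list rather than a single answer.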

Item Type: Article
Uncontrolled Keywords: Mobile robots; Scale invariant features; Vision-based localization; Visual vocabulary
Subjects: UNSPECIFIED
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 07 Mar 2014 12:15
Last Modified: 08 Dec 2014 02:13
DOI: 10.1109/ROBOT.2005.1570770