Wang, J. and Cipolla, R. and Zha, H. (2005) Vision-based global localization using a visual vocabulary. Proceedings - IEEE International Conference on Robotics and Automation, 2005. pp. 4230-4235. ISSN 1050-4729
This paper presents a novel coarse-to-fine global localization approach that is inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by SIFT descriptors are used as natural landmarks. These descriptors are indexed into two databases: an inverted index and a location database. The inverted index is built from a visual vocabulary learned from the feature descriptors. In the location database, each location is directly represented by a set of scale-invariant descriptors. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the inverted index is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The combination of the coarse and fine stages makes fast and reliable localization possible. In addition, if necessary, the localization result can be verified by the epipolar geometry between the representative view in the database and the view to be localized. Experimental results show that our approach is efficient and reliable. ©2005 IEEE.
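The two-stage retrieval scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the vocabulary, descriptor values, and the `build_inverted_index` / `coarse_localize` / `fine_localize` helper names are all assumptions, descriptors are quantized to their nearest vocabulary word, coarse candidates are scored by shared visual words, and the fine stage votes with a Lowe-style ratio test on raw descriptor distances.

```python
import numpy as np

def quantize(descs, vocabulary):
    # Assign each descriptor to its nearest visual word (Euclidean distance).
    return np.argmin(np.linalg.norm(descs[:, None] - vocabulary[None], axis=2), axis=1)

def build_inverted_index(location_descriptors, vocabulary):
    # Visual word id -> set of locations whose descriptors contain that word.
    index = {}
    for loc, descs in location_descriptors.items():
        for w in set(quantize(descs, vocabulary).tolist()):
            index.setdefault(w, set()).add(loc)
    return index

def coarse_localize(query_descs, vocabulary, index, top_k=2):
    # Fast stage: score locations by the number of visual words shared with the query.
    scores = {}
    for w in set(quantize(query_descs, vocabulary).tolist()):
        for loc in index.get(w, ()):
            scores[loc] = scores.get(loc, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def fine_localize(query_descs, candidates, location_descriptors, ratio=0.8):
    # Slower stage: vote with raw descriptors; a query descriptor votes for a
    # candidate location if its nearest neighbour passes the ratio test.
    best, best_votes = None, -1
    for loc in candidates:
        descs = location_descriptors[loc]
        votes = 0
        for q in query_descs:
            d = np.sort(np.linalg.norm(descs - q, axis=1))
            if len(d) > 1 and d[0] < ratio * d[1]:
                votes += 1
        if votes > best_votes:
            best, best_votes = loc, votes
    return best

# Toy example: 2-D "descriptors" instead of 128-D SIFT, two known locations.
vocabulary = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
locations = {
    "A": np.array([[0.1, 0.1], [10.1, 0.2]]),
    "B": np.array([[0.2, 9.9], [9.8, 10.1]]),
}
index = build_inverted_index(locations, vocabulary)
query = np.array([[0.15, 0.05], [10.0, 0.1]])
candidates = coarse_localize(query, vocabulary, index)
result = fine_localize(query, candidates, locations)
```

In this toy run the query shares visual words only with location "A", so the coarse stage already prunes "B" and the voting stage merely confirms the match; the paper's epipolar-geometry check would be a further verification step on the winning view.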
Uncontrolled Keywords: Mobile robots; Scale invariant features; Vision-based localization; Visual vocabulary
Divisions: Div F > Machine Intelligence
Date Deposited: 16 Jul 2015 13:34
Last Modified: 03 Aug 2015 04:48