CUED Publications database

Coarse-to-fine vision-based localization by indexing scale-invariant features

Wang, J and Zha, H and Cipolla, R (2006) Coarse-to-fine vision-based localization by indexing scale-invariant features. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 36. pp. 413-422. ISSN 1083-4419

Full text not available from this repository.

Abstract

This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by scale-invariant feature transform (SIFT) descriptors are used as natural landmarks. They are indexed into two databases: a location vector space model (LVSM) and a location database. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the LVSM is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The integration of the coarse and fine stages makes fast and reliable localization possible. If necessary, the localization result can be verified by epipolar geometry between the representative view in the database and the view to be localized. In addition, the localization system recovers the position of the camera by essential matrix decomposition. The localization system has been tested in indoor and outdoor environments. The results show that our approach is efficient and reliable. © 2006 IEEE.
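The abstract outlines an algorithmic pipeline: visual-word indexing, vector-space retrieval for coarse localization, and epipolar verification with essential-matrix decomposition for pose recovery. The Python sketch below is a minimal illustration of those ideas under stated assumptions, not the authors' implementation: it assumes a pre-built visual vocabulary, substitutes OpenCV's SIFT detector for the Harris-Laplace + SIFT combination used in the paper, omits the fine-stage voting over the location database, and all function names (build_lvsm, coarse_localize, recover_relative_pose) are illustrative.

# Hypothetical sketch of the coarse-to-fine localization pipeline described above.
# Names, parameters, and structure are illustrative, not taken from the paper.
import numpy as np
import cv2


def build_lvsm(word_histograms):
    """Build the location vector space model (LVSM): tf-idf weight each
    location's visual-word histogram, as in text retrieval."""
    counts = np.asarray(word_histograms, dtype=float)   # (n_locations, vocab_size)
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1e-12)
    df = np.maximum((counts > 0).sum(axis=0), 1)        # document frequency of each visual word
    idf = np.log(counts.shape[0] / df)
    return tf * idf, idf


def coarse_localize(query_hist, lvsm, idf, top_k=5):
    """Coarse stage: rank locations by cosine similarity in the LVSM and
    return the top candidates for the (omitted) fine voting stage."""
    q = np.asarray(query_hist, dtype=float)
    q = q / max(q.sum(), 1e-12) * idf
    sims = lvsm @ q / (np.linalg.norm(lvsm, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:top_k]


def recover_relative_pose(query_img, ref_img, K):
    """Verification and pose recovery: match SIFT descriptors between the
    query view and the representative view, fit an essential matrix with
    RANSAC, and decompose it into a rotation and translation direction."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(query_img, None)
    kp2, des2 = sift.detectAndCompute(ref_img, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

In this sketch the tf-idf weighting and cosine ranking stand in for the LVSM retrieval step, while recover_relative_pose corresponds to the epipolar verification and essential-matrix decomposition mentioned in the abstract; the translation returned by cv2.recoverPose is a direction only, so absolute position would require additional scale information.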

Item Type: Article
Uncontrolled Keywords: Coarse-to-fine localization; Scale-invariant features; Vector space model; Visual vocabulary
Subjects: UNSPECIFIED
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 07 Mar 2014 11:25
Last Modified: 12 Dec 2014 19:04
DOI: 10.1109/TSMCB.2005.859085