Chen, Y., Kim, T.-K., and Cipolla, R. (2011) Silhouette-based object phenotype recognition using 3D shape priors. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 25-32.
This paper tackles the novel and challenging problem of 3D object phenotype recognition from a single 2D silhouette. To bridge the large pose (articulation or deformation) and camera-viewpoint changes between the gallery images and the query image, we propose a novel probabilistic inference algorithm based on 3D shape priors. Our approach combines generative and discriminative learning. We use latent probabilistic generative models to capture 3D shape and pose variations from a set of 3D mesh models. Based on these 3D shape priors, we generate a large number of projections for different phenotype classes, poses, and camera viewpoints, and employ Random Forests to solve the shape and pose inference problems efficiently. By performing model selection based on the silhouette coherency between the query and the projections of 3D shapes synthesized from the galleries, we obtain both the phenotype recognition result and a fast approximate 3D reconstruction of the query. To verify the efficacy of the proposed approach, we present new datasets containing over 500 images of various human and shark phenotypes and motions. The experimental results clearly show the benefits of the 3D priors in the proposed method over previous 2D-based methods. © 2011 IEEE.
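The discriminative stage of the pipeline described above can be illustrated with a minimal sketch: silhouette descriptors extracted from synthesized projections are used to train a Random Forest that maps a query descriptor to a phenotype class. Everything below is a stand-in under stated assumptions; the synthetic Gaussian "descriptors", feature dimensionality, and class layout are illustrative only and do not reproduce the paper's actual shape priors, rendering step, or features.

```python
# Hedged sketch: phenotype inference from silhouette descriptors with a
# Random Forest, loosely following the pipeline in the abstract.
# The feature vectors below are synthetic stand-ins, NOT the paper's features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Simulate descriptor vectors for projections "rendered" from two phenotype
# classes across many poses/viewpoints (stand-in for the synthesized gallery).
n_per_class, n_features = 200, 32
class0 = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, n_features))
class1 = rng.normal(loc=1.5, scale=1.0, size=(n_per_class, n_features))
X = np.vstack([class0, class1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Train a Random Forest to map silhouette descriptors to phenotype labels.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# A "query" silhouette descriptor drawn near class 1: predict its phenotype.
query = rng.normal(loc=1.5, scale=1.0, size=(1, n_features))
pred = int(clf.predict(query)[0])
print(pred)
```

In the paper the forest outputs feed a further model-selection step that scores silhouette coherency between the query and the reprojected 3D shapes; that generative step is omitted here.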