Asthana, A., Goecke, R., Quadrianto, N. and Gedeon, T. (2009) Learning based automatic face annotation for arbitrary poses and expressions from frontal images only. In: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops 2009), pp. 1635-1642.
Statistical approaches for building non-rigid deformable models, such as the Active Appearance Model (AAM), have enjoyed great popularity in recent years, but typically require tedious manual annotation of training images. In this paper, a learning based approach for the automatic annotation of visually deformable objects from a single annotated frontal image is presented and demonstrated on the example of automatically annotating face images that can be used for building AAMs for fitting and tracking. The approach first learns the correspondences between landmarks in a frontal image and a set of training images containing faces in arbitrary poses. Using this learner, virtual images of unseen faces at any arbitrary pose for which the learner was trained can be reconstructed by predicting the new landmark locations and warping the texture from the frontal image. View-based AAMs are then built from the virtual images and used for automatically annotating unseen images, including images of different facial expressions, at any random pose within the maximum range spanned by the virtually reconstructed images. The approach is experimentally validated by automatically annotating face images from three different databases. © 2009 IEEE.
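The core idea of predicting new landmark locations from a frontal annotation can be sketched as a regression problem. The following is a minimal, hypothetical illustration (not the paper's actual learner): it fits a linear mapping from flattened frontal landmark coordinates to landmark coordinates at a target pose via least squares, then applies it to an unseen face. All data here is synthetic and the linear model is an assumption for illustration only.

```python
import numpy as np

# Hypothetical sketch of a landmark-correspondence learner: predict
# landmark positions at a target pose from frontal landmarks.
# Synthetic data; the linear model is an illustrative assumption.

rng = np.random.default_rng(0)

n_train, n_landmarks = 50, 8
# Frontal landmarks, flattened to (x1, y1, ..., xk, yk) vectors.
X_frontal = rng.normal(size=(n_train, 2 * n_landmarks))

# Synthetic "ground-truth" pose transform (unknown to the learner).
A_true = np.eye(2 * n_landmarks) + 0.1 * rng.normal(size=(2 * n_landmarks, 2 * n_landmarks))
Y_posed = X_frontal @ A_true

# Learn the frontal-to-pose mapping by least squares: Y ≈ X @ A.
A_hat, *_ = np.linalg.lstsq(X_frontal, Y_posed, rcond=None)

# Predict posed landmarks for an unseen frontal face; the predicted
# locations would then drive texture warping from the frontal image.
x_new = rng.normal(size=(1, 2 * n_landmarks))
y_pred = x_new @ A_hat
```

In the paper's pipeline, the predicted landmarks define the target shape into which the frontal texture is warped to synthesize the virtual posed image used for building view-based AAMs.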
Divisions: Div F > Computational and Biological Learning
Date Deposited: 02 Sep 2016 18:18
Last Modified: 28 Sep 2016 23:57