CUED Publications database

SceneNet: An annotated model generator for indoor scene understanding

Handa, A and Patraucean, V and Stent, S and Cipolla, R (2016) SceneNet: An annotated model generator for indoor scene understanding. In: UNSPECIFIED pp. 5737-5743.

Full text not available from this repository.


© 2016 IEEE. We introduce SceneNet, a framework for generating high-quality annotated 3D scenes to aid indoor scene understanding. SceneNet leverages manually-annotated datasets of real-world scenes, such as NYUv2, to learn statistics about object co-occurrences and their spatial relationships. These statistics are then exploited, via a hierarchical simulated annealing optimisation, to generate a potentially unlimited number of new annotated scenes, by sampling objects from existing databases of 3D objects such as ModelNet, and textures such as OpenSurfaces and ArchiveTextures. Depending on the task, SceneNet can be used directly in the form of annotated 3D models for supervised training and 3D reconstruction benchmarking, or in the form of rendered annotated sequences of RGB-D frames or videos.
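The abstract's scene-generation step can be illustrated with a minimal sketch. This is not the authors' implementation: the object names, the `PREFERRED` pairwise-distance table (a stand-in for spatial statistics learned from a dataset such as NYUv2), and the cooling schedule are all illustrative assumptions; it shows only the general shape of a simulated annealing layout optimisation.

```python
import math
import random

# Hypothetical pairwise "preferred distance" statistics (in metres), standing in
# for the object co-occurrence / spatial-relationship statistics the paper
# learns from annotated real-world scenes. These numbers are invented.
PREFERRED = {
    ("bed", "nightstand"): 0.5,
    ("bed", "desk"): 2.0,
    ("nightstand", "desk"): 2.0,
}

def energy(layout):
    """Sum of squared deviations from the preferred pairwise distances."""
    e = 0.0
    names = sorted(layout)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            pref = PREFERRED.get((a, b)) or PREFERRED.get((b, a))
            if pref is None:
                continue
            d = math.dist(layout[a], layout[b])
            e += (d - pref) ** 2
    return e

def anneal(layout, steps=5000, t0=1.0, seed=0):
    """Perturb one object position at a time under a Metropolis criterion."""
    rng = random.Random(seed)
    cur = dict(layout)
    cur_e = energy(cur)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6          # linear cooling schedule
        name = rng.choice(sorted(cur))
        x, y = cur[name]
        cand = dict(cur)
        cand[name] = (x + rng.gauss(0, 0.3), y + rng.gauss(0, 0.3))
        cand_e = energy(cand)
        # Accept improvements always; accept worse layouts with a probability
        # that shrinks as the temperature falls.
        if cand_e < cur_e or rng.random() < math.exp((cur_e - cand_e) / t):
            cur, cur_e = cand, cand_e
    return cur, cur_e

if __name__ == "__main__":
    start = {"bed": (0.0, 0.0), "nightstand": (5.0, 5.0), "desk": (0.1, 0.1)}
    final, final_e = anneal(start)
    print(energy(start), "->", final_e)
```

The paper's optimisation is hierarchical (operating over scene structure as well as placement) and samples objects and textures from external databases; the sketch above covers only the single-level placement refinement.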

Item Type: Conference or Workshop Item (UNSPECIFIED)
Divisions: Div F > Machine Intelligence
Div D > Construction Engineering
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 19:34
Last Modified: 17 May 2018 06:35