Huckvale, M., Howard, I. S. and Fagel, S. (2009) KLAIR: A virtual infant for spoken language acquisition research. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 696-699.
Recent research into the acquisition of spoken language has stressed the importance of learning through embodied linguistic interaction with caregivers rather than through passive observation. However, the necessity of interaction makes experimental work on the simulation of infant speech acquisition difficult because of the technical complexity of building real-time embodied systems. In this paper we present KLAIR: a software toolkit for building simulations of spoken language acquisition through interactions with a virtual infant. The main part of KLAIR is a sensori-motor server that supplies a client machine learning application with an on-screen virtual infant that can see, hear and speak. By encapsulating the real-time complexities of audio and video processing within a server that runs on a modern PC, we hope that KLAIR will encourage and facilitate more experimental research into spoken language acquisition through interaction. Copyright © 2009 ISCA.
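The client/server split described in the abstract can be illustrated with a minimal sketch. KLAIR's actual API is not given here, so all names below (`SensoriMotorServer`, `LearningClient`, the `get_audio`/`speak` methods, and the toy syllable data) are illustrative assumptions, not the toolkit's real interface:

```python
# Hypothetical sketch of the KLAIR architecture: a sensori-motor server
# that owns the virtual infant's I/O, and a machine-learning client that
# consumes sensory input and issues motor (speech) commands.

class SensoriMotorServer:
    """Stand-in for the server side: encapsulates real-time audio/video
    and exposes a simple request interface to the client."""

    def __init__(self):
        # Pretend caregiver speech, delivered as discrete syllables.
        self._heard = ["ba", "ba"]

    def get_audio(self):
        # Return the next chunk of "heard" audio, or None when silent.
        return self._heard.pop(0) if self._heard else None

    def speak(self, articulation):
        # Accept a motor command and report what the infant vocalised.
        return f"infant says: {articulation}"


class LearningClient:
    """Stand-in for the machine-learning application."""

    def __init__(self, server):
        self.server = server
        self.memory = []

    def step(self):
        heard = self.server.get_audio()
        if heard is None:
            return None
        self.memory.append(heard)        # "learn" by storing the input
        return self.server.speak(heard)  # respond by imitating it


server = SensoriMotorServer()
client = LearningClient(server)
# Run the interaction loop until the server falls silent.
responses = [r for r in iter(client.step, None)]
print(responses)
```

In the real toolkit the two halves run as separate processes, with the server handling the real-time constraints; here they are collapsed into one process purely to show the direction of data flow.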
Keywords: autonomous agent, machine learning, situated learning, speech acquisition, toolkit
Divisions: Div F > Computational and Biological Learning