CUED Publications database

Multimodal Classification of Driver Glance

Baumann, D and Mahmoud, M and Robinson, P and Dias, E and Skrypchuk, L (2017) Multimodal Classification of Driver Glance. In: International Conference on Affective Computing and Intelligent Interaction, 2017-10-23 to 2017-10-26, pp. 389-394.

Full text not available from this repository.

Abstract

This paper presents a multimodal approach to in-vehicle classification of driver glances. Driver glance is a strong predictor of cognitive load and a useful input to many applications in the automotive domain. Six descriptive glance regions are defined and a classifier is trained on video recordings of drivers from a single low-cost camera. Visual features such as head orientation, eye gaze and confidence ratings are extracted, then statistical methods are used to perform failure analysis and calibration on these visual features. Non-visual features such as steering wheel angle and indicator position are extracted from a RaceLogic VBOX system. The approach is evaluated on a dataset containing multiple 60-second samples from 14 participants recorded while driving in a natural environment. We compare our multimodal approach to separate unimodal approaches using both Support Vector Machine (SVM) and Random Forests (RF) classifiers. The RF Mean Decrease in Gini Index is used to rank the selected features, which gives insight into their relative contribution and improves classifier performance. We demonstrate that our multimodal approach yields significantly better results than the unimodal approaches. The final model achieves an average F1 score of 70.5% across the six classes.
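
The following is a minimal sketch of the kind of pipeline the abstract describes: visual features (head orientation, eye gaze, tracker confidence) fused with non-visual features (steering wheel angle, indicator position), a Random Forest trained to predict one of six glance regions, and features ranked by Mean Decrease in Gini. The feature names, the synthetic data, and the hyperparameters are illustrative assumptions only and are not taken from the paper; scikit-learn's impurity-based importances stand in for the Gini ranking.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical per-frame features (names assumed, not from the paper).
feature_names = [
    "head_yaw", "head_pitch", "gaze_x", "gaze_y", "tracker_confidence",  # visual
    "steering_angle", "indicator_position",                              # non-visual
]
glance_regions = ["road", "mirror_left", "mirror_right",
                  "rear_mirror", "centre_console", "instrument_cluster"]

# Synthetic stand-in data: fused multimodal feature vectors with 6-class labels.
X = rng.normal(size=(2000, len(feature_names)))
y = rng.integers(0, len(glance_regions), size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Random Forest trained on the fused (multimodal) feature vector.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)

# Macro-averaged F1 across the six glance classes.
pred = rf.predict(X_test)
print("macro F1:", f1_score(y_test, pred, average="macro"))

# Mean Decrease in Gini: rank features by impurity-based importance.
for name, imp in sorted(zip(feature_names, rf.feature_importances_),
                        key=lambda t: t[1], reverse=True):
    print(f"{name:22s} {imp:.3f}")

In practice the ranking would be used, as in the paper, to inspect which modalities drive the prediction and to prune low-importance features before retraining.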

Item Type: Conference or Workshop Item (UNSPECIFIED)
Subjects: UNSPECIFIED
Divisions: Div C > Engineering Design
Div C > Materials Engineering
Depositing User: Cron Job
Date Deposited: 12 Jun 2018 01:35
Last Modified: 10 Apr 2021 22:30
DOI: