CUED Publications database

Improving Interpretability and Regularisation in Deep Learning

Wu, C and Gales, M and Ragni, A and Karanasou, P and Sim, KC (2017) Improving Interpretability and Regularisation in Deep Learning. IEEE/ACM Transactions on Audio Speech and Language Processing, 26. pp. 256-265. ISSN 2329-9290

Full text not available from this repository.


Deep learning approaches yield state-of-the-art performance in a range of tasks, including automatic speech recognition. However, the highly distributed representations in a deep neural network (DNN) or other network variants are difficult to analyse, making further parameter interpretation and regularisation challenging. This paper presents a regularisation scheme acting on the activation function output to improve network interpretability and regularisation. The proposed approach, referred to as activation regularisation, encourages activation function outputs to satisfy a target pattern. By defining appropriate target patterns, different learning concepts can be imposed on the network. This method can aid network interpretability and also has the potential to reduce over-fitting. The scheme is evaluated on several continuous speech recognition tasks: the Wall Street Journal continuous speech recognition task, eight conversational telephone speech tasks from the IARPA Babel program, and a U.S. English broadcast news task. On all the tasks, activation regularisation achieved consistent performance gains over the standard DNN baselines.
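The abstract describes a penalty on activation function outputs that pulls them towards a chosen target pattern. The paper's exact formulation is not reproduced here; the following is a minimal sketch, assuming a mean-squared penalty added to the training loss with a hypothetical weight `lam` (the function name, target choice, and weighting are illustrative, not the authors' notation):

```python
import numpy as np

def activation_regulariser(activations, target_pattern, lam=0.1):
    """Illustrative penalty encouraging hidden-layer activations to
    match a target pattern; added to the usual training loss.
    activations, target_pattern: arrays of shape (batch, units)."""
    return lam * np.mean((activations - target_pattern) ** 2)

# Toy example: a sparsity-style target of all zeros pushes
# activations towards zero output.
acts = np.array([[0.9, 0.1, 0.8],
                 [0.2, 0.7, 0.1]])
target = np.zeros_like(acts)
penalty = activation_regulariser(acts, target)
```

In this sketch the total objective would be the task loss (e.g. cross-entropy) plus `penalty`; choosing a different `target_pattern` imposes a different learning concept on the hidden units, as the abstract notes.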

Item Type: Article
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 19 Jan 2018 20:14
Last Modified: 10 Apr 2021 22:30
DOI: 10.1109/TASLP.2017.2774919