CUED Publications database

Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators

Reagen, B and Whatmough, P and Adolf, R and Rama, S and Lee, H and Lee, SK and Hernandez-Lobato, JM and Wei, GY and Brooks, D (2016) Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators. In: 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 267-278.

Full text not available from this repository.


© 2016 IEEE. The continued success of Deep Neural Networks (DNNs) in classification tasks has sparked a trend of accelerating their execution with specialized hardware. While published designs easily give an order of magnitude improvement over general-purpose hardware, few look beyond an initial implementation. This paper presents Minerva, a highly automated co-design approach across the algorithm, architecture, and circuit levels to optimize DNN hardware accelerators. Compared to an established fixed-point accelerator baseline, we show that fine-grained, heterogeneous data type optimization reduces power by 1.5×; aggressive, in-line predication and pruning of small activity values further reduces power by 2.0×; and active hardware fault detection coupled with domain-aware error mitigation eliminates an additional 2.7× through lowering SRAM voltages. Across five datasets, these optimizations provide a collective average of 8.1× power reduction over an accelerator baseline without compromising DNN model accuracy. Minerva enables highly accurate, ultra-low power DNN accelerators (in the range of tens of milliwatts), making it feasible to deploy DNNs in power-constrained IoT and mobile devices.
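The "in-line predication and pruning of small activity values" mentioned in the abstract can be illustrated in software: activations whose magnitude falls below a threshold are treated as zero, so the corresponding multiply-accumulate operations can be skipped in hardware. The sketch below is an illustrative approximation only; the function name, threshold value, and layer shape are assumptions, not details from the paper.

```python
import numpy as np

def pruned_layer(x, w, b, threshold=0.05):
    """Fully connected ReLU layer that prunes near-zero input activations.

    Illustrative sketch of activation predication: inputs with magnitude
    below `threshold` are zeroed, standing in for the hardware skipping
    those multiply-accumulates. Names and threshold are hypothetical.
    """
    mask = np.abs(x) >= threshold          # predicate: which inputs matter
    x_pruned = np.where(mask, x, 0.0)      # zero out small activity values
    skipped = 1.0 - mask.mean()            # fraction of MACs avoided
    y = np.maximum(x_pruned @ w + b, 0.0)  # ReLU output
    return y, skipped
```

In the accelerator itself this predication happens before the datapath, so the skipped operations save dynamic power rather than merely producing zeros; the NumPy version only models the numerical effect.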

Item Type: Conference or Workshop Item (UNSPECIFIED)
Divisions: Div F > Computational and Biological Learning
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 18:58
Last Modified: 15 Sep 2020 05:41
DOI: 10.1109/ISCA.2016.32