CUED Publications database

Development of low entropy coding in a recurrent network.

Harpur, GF and Prager, RW (1996) Development of low entropy coding in a recurrent network. Network, 7. pp. 277-284. ISSN 0954-898X

Full text not available from this repository.


In this paper we present an unsupervised neural network in which units compete via inhibitory feedback. The network operates so as to minimize reconstruction error, both for individual patterns and over the entire training set. A key difference from networks that perform principal components analysis, or one of its variants, is the ability to converge to non-orthogonal weight values. We discuss the network's operation in relation to the twin goals of maximizing information transfer and minimizing code entropy, and show how the assignment of prior probabilities to network outputs can help to reduce entropy. We present results from two binary coding problems and from experiments with image coding.
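The abstract's mechanism, recurrent inhibitory feedback settling the outputs to minimize reconstruction error, followed by a Hebbian-style weight update on the residual, can be sketched as follows. This is a minimal illustration under assumed details (step sizes, iteration counts, and initialization are not specified in the abstract), not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def infer(W, x, n_steps=100, eta=0.1):
    """Settle output activities a by recurrent feedback.

    Each step subtracts the current reconstruction W @ a from the
    input (inhibitory feedback) and moves a to reduce the
    reconstruction error ||x - W a||^2. Step size eta is an assumption.
    """
    a = np.zeros(W.shape[1])
    for _ in range(n_steps):
        residual = x - W @ a          # feedback: input minus reconstruction
        a += eta * (W.T @ residual)   # gradient step on the error
    return a

def train(X, n_units, n_epochs=200, lr=0.1):
    """Learn weights by a Hebbian-style update on the residual.

    Nothing constrains the columns of W to be orthogonal, which is the
    key difference from PCA-like networks noted in the abstract.
    """
    n_inputs = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_inputs, n_units))
    for _ in range(n_epochs):
        for x in X:
            a = infer(W, x)
            residual = x - W @ a
            W += lr * np.outer(residual, a)  # reduce error over the set
    return W
```

For example, training on the four unit basis vectors with four output units drives the reconstruction error toward zero, since an exact (not necessarily orthogonal) solution exists.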

Item Type: Article
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 19:45
Last Modified: 21 Jun 2018 02:41