Harpur, G. F. and Prager, R. W. (1996) Development of low entropy coding in a recurrent network. Network, 7, pp. 277-284. ISSN 0954-898X.
In this paper we present an unsupervised neural network in which units compete via inhibitory feedback. The network operates so as to minimize reconstruction error, both for individual patterns and over the entire training set. A key difference from networks that perform principal components analysis, or one of its variants, is the ability to converge to non-orthogonal weight values. We discuss the network's operation in relation to the twin goals of maximizing information transfer and minimizing code entropy, and show how assigning prior probabilities to network outputs can help to reduce entropy. We present results from two binary coding problems and from experiments with image coding.
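The core idea described above can be illustrated with a generic sketch: outputs settle under inhibitory feedback so as to minimize reconstruction error for each pattern, and the weights are then adapted by a Hebbian-style rule to lower the error over the whole training set. This is only an illustrative reconstruction of the scheme, not the paper's exact algorithm; the dimensions, learning rates, and function names (`encode`, `mean_error`) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes, not taken from the paper
n_inputs, n_units, n_patterns = 16, 8, 200
X = rng.standard_normal((n_patterns, n_inputs))

# Weight rows are unit receptive fields; nothing forces them to be orthogonal
W = rng.standard_normal((n_units, n_inputs)) * 0.1

def encode(x, W, steps=50, rate=0.1):
    """Settle the outputs a to minimize ||x - W.T @ a||^2.

    The residual (x - W.T @ a) acts as inhibitory feedback: each unit's
    reconstruction is subtracted from the input that drives the others,
    so the units compete to explain the pattern."""
    a = np.zeros(W.shape[0])
    for _ in range(steps):
        residual = x - W.T @ a
        a += rate * (W @ residual)  # gradient descent on reconstruction error
    return a

def mean_error(X, W):
    return np.mean([np.sum((x - W.T @ encode(x, W)) ** 2) for x in X])

err0 = mean_error(X, W)

# Outer loop: adapt weights to lower error over the entire training set
for epoch in range(20):
    for x in X:
        a = encode(x, W)
        residual = x - W.T @ a
        W += 0.01 * np.outer(a, residual)  # Hebbian-style weight update

err1 = mean_error(X, W)
```

After training, the average reconstruction error `err1` should fall below the initial `err0`, since both the inner settling loop and the outer weight update descend the same squared-error objective.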