CUED Publications database

Cluster-seeking shrinkage estimators.

Srinath, KP and Venkataramanan, R (2016) Cluster-seeking shrinkage estimators. In: 2016 IEEE International Symposium on Information Theory (ISIT). pp. 845-849.

Full text not available from this repository.


This paper considers the problem of estimating a high-dimensional vector θ ∈ ℝn from a noisy one-time observation. The noise vector is assumed to be i.i.d. Gaussian with known variance. For the squared-error loss function, the James-Stein (JS) estimator is known to dominate the simple maximum-likelihood (ML) estimator when the dimension n exceeds two. The JS-estimator shrinks the observed vector towards the origin, and the risk reduction over the ML-estimator is greatest for θ that lie close to the origin. JS-estimators can be generalized to shrink the data towards any target subspace. Such estimators also dominate the ML-estimator, but the risk reduction is significant only when θ lies close to the subspace. This leads to the question: in the absence of prior information about θ, how do we design estimators that give significant risk reduction over the ML-estimator for a wide range of θ? In this paper, we attempt to infer the structure of θ from the observed data in order to construct a good attracting subspace for the shrinkage estimator. We provide concentration results for the squared-error loss and convergence results for the risk of the proposed estimators, as well as simulation results to support the claims. The estimators give significant risk reduction over the ML-estimator for a wide range of θ, particularly for large n.
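To make the background concrete, the sketch below implements the classical (positive-part) James-Stein estimator described in the abstract, together with its generalization that shrinks towards an arbitrary target subspace. This illustrates only the standard estimators the paper builds on, not the cluster-seeking estimators proposed in the paper itself (whose full text is not available here); the variable names and the Monte Carlo check are illustrative choices.

```python
import numpy as np

def james_stein(y, sigma2):
    """Positive-part James-Stein estimator: shrink y towards the origin.
    Dominates the ML estimator (y itself) in squared-error risk for n > 2."""
    n = y.size
    shrink = 1.0 - (n - 2) * sigma2 / np.dot(y, y)
    return max(shrink, 0.0) * y

def james_stein_subspace(y, sigma2, B):
    """Shrink y towards the subspace spanned by the columns of B
    (assumed orthonormal). Risk reduction is largest when the true
    theta lies close to this subspace."""
    n, d = y.size, B.shape[1]
    proj = B @ (B.T @ y)              # projection of y onto the subspace
    resid = y - proj                  # component of y outside the subspace
    shrink = 1.0 - (n - d - 2) * sigma2 / np.dot(resid, resid)
    return proj + max(shrink, 0.0) * resid

# Monte Carlo comparison of squared-error risk against the ML estimator,
# for a theta moderately close to the origin (where JS gains are large).
rng = np.random.default_rng(0)
n, sigma2, trials = 100, 1.0, 2000
theta = np.full(n, 0.5)
mse_ml = mse_js = 0.0
for _ in range(trials):
    y = theta + rng.normal(scale=np.sqrt(sigma2), size=n)
    mse_ml += np.sum((y - theta) ** 2) / trials
    mse_js += np.sum((james_stein(y, sigma2) - theta) ** 2) / trials
print(mse_ml, mse_js)  # JS risk is substantially below the ML risk here
```

For this choice of theta the approximate JS risk is n·sigma2 − (n−2)²·sigma2²/(‖theta‖² + n·sigma2), far below the ML risk of n·sigma2; when theta lies far from the origin the shrinkage factor approaches 1 and the two risks coincide, which is the gap the paper's data-driven choice of attracting subspace aims to close.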

Item Type: Article
Divisions: Div F > Signal Processing and Communications
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 20:01
Last Modified: 07 Aug 2018 03:46