Reed, C. and Ghahramani, Z. Scaling the Indian Buffet Process via Submodular Maximization. In ICML 2013: JMLR W&CP 28(3): 1013–1021, 2013.
Inference for latent feature models is inherently difficult, as the inference space grows exponentially with the size of the input data and the number of latent features. In this work, we use the maximization-expectation framework of Kurihara & Welling (2008) to perform approximate MAP inference for linear-Gaussian latent feature models with an Indian Buffet Process (IBP) prior. This formulation yields a submodular function of the features that corresponds to a lower bound on the model evidence. By adding a constant to this function, we obtain a nonnegative submodular function that can be maximized via a greedy algorithm that obtains at least a one-third approximation to the optimal solution. Our inference method scales linearly with the size of the input data, and we show the efficacy of our method on the largest datasets currently analyzed using an IBP model.
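The one-third guarantee mentioned in the abstract is the one achieved by the deterministic "double greedy" algorithm for unconstrained maximization of a nonnegative submodular set function (Buchbinder et al., 2012). As a minimal illustration of that generic scheme (not the paper's actual feature-selection objective, and using a toy graph-cut function as the submodular objective):

```python
def double_greedy(f, ground_set):
    """Deterministic double greedy for unconstrained maximization of a
    nonnegative submodular set function f; guarantees f(X) >= OPT / 3."""
    X = set()               # grows up from the empty set
    Y = set(ground_set)     # shrinks down from the full ground set
    for e in ground_set:
        gain_add = f(X | {e}) - f(X)   # marginal value of adding e to X
        gain_del = f(Y - {e}) - f(Y)   # marginal value of dropping e from Y
        if gain_add >= gain_del:
            X.add(e)
        else:
            Y.discard(e)
    return X  # X == Y once every element has been processed

# Example: the cut function of a graph is nonnegative and submodular
# (but not monotone), so the greedy guarantee applies.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
selected = double_greedy(cut, range(5))
```

Each element is either committed to the growing solution `X` or pruned from the shrinking solution `Y`, based on which marginal change is larger; a single pass over the ground set suffices, which is consistent with the linear scaling the abstract claims for the overall method.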
Keywords: stat.ML, cs.LG
Divisions: Div F > Computational and Biological Learning