%0 Conference Proceedings
%B Neural Information Processing Systems (NIPS'08)
%D 2008
%T Discriminative learning for dimensionality reduction and classification
%A Simon Lacoste-Julien
%A Fei Sha
%A Michael I. Jordan
%C Vancouver, Canada
%U http://books.nips.cc/papers/files/nips21/NIPS2008_0993.pdf
%X Probabilistic topic models (and their extensions) have become popular as models of latent structures in collections of text documents or images. These models are usually treated as generative models and trained using maximum likelihood estimation, an approach which may be suboptimal in the context of an overall classification problem. In this paper, we describe DiscLDA, a discriminative learning framework for such models as Latent Dirichlet Allocation (LDA) in the setting of dimensionality reduction with supervised side information. In DiscLDA, a class-dependent linear transformation is introduced on the topic mixture proportions. This parameter is estimated by maximizing the conditional likelihood using Monte Carlo EM. By using the transformed topic mixture proportions as a new representation of documents, we obtain a supervised dimensionality reduction algorithm that uncovers the latent structure in a document collection while preserving predictive power for the task of classification. We compare the predictive power of the latent structure of DiscLDA with unsupervised LDA on the 20 Newsgroups document classification task.
%8 12/2008