Simple item record

dc.contributor.author: Zhu, Xiaojin (Jerry)
dc.contributor.author: Blei, David
dc.contributor.author: Lafferty, John
dc.date.accessioned: 2012-03-15T17:20:05Z
dc.date.available: 2012-03-15T17:20:05Z
dc.date.created: 2006
dc.date.issued: 2006
dc.identifier.citation: TR1553
dc.identifier.uri: http://digital.library.wisc.edu/1793/60486
dc.description.abstract: Latent Dirichlet allocation models a document as a mixture of topics, where each topic is typically modeled by a unigram word distribution. However, documents often have known structure, and the same topic can exhibit different word distributions in different parts of that structure. We extend the latent Dirichlet allocation model by replacing its unigram word distributions with a factored representation conditioned on both the topic and the structure. In the resulting model each topic corresponds to a set of unigram distributions, one for each part of the structure a word can appear in. The proposed model is therefore more flexible in modeling the corpus, and the factored representation prevents combinatorial explosion, leading to efficient parameterization. We derive a variational optimization algorithm for the new model. The model shows improved perplexity on text and image data, but no significant accuracy improvement when used for classification.
dc.format.mimetype: application/pdf
dc.publisher: University of Wisconsin-Madison Department of Computer Sciences
dc.title: TagLDA: Bringing document structure knowledge into topic models
dc.type: Technical Report
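
The factored representation described in the abstract can be made concrete with a minimal sketch, not taken from the report itself: assume the word distribution for topic z under structure tag t is the normalized elementwise product of a per-topic factor and a per-tag factor. The names (beta, tau, word_dist) and the sizes K, T, V below are illustrative assumptions, not the report's notation.

    import numpy as np

    # Illustrative sizes (assumptions): K topics, T structure tags, vocabulary V.
    K, T, V = 10, 4, 5000
    rng = np.random.default_rng(0)

    # One unigram factor per topic and one per tag, each a point on the V-simplex.
    beta = rng.dirichlet(np.ones(V), size=K)  # topic-word factors, shape (K, V)
    tau = rng.dirichlet(np.ones(V), size=T)   # tag-word factors, shape (T, V)

    def word_dist(z, t):
        # Factored word distribution: p(w | topic z, tag t) proportional to
        # beta[z, w] * tau[t, w], renormalized over the vocabulary.
        p = beta[z] * tau[t]
        return p / p.sum()

    # Why this avoids combinatorial explosion: the factored form stores
    # (K + T) * V numbers, while one full unigram distribution per
    # (topic, tag) pair would store K * T * V.
    print("factored:", (K + T) * V, "full:", K * T * V)

Under these assumptions the model still yields a distinct word distribution for every (topic, tag) pair, but the parameter count grows additively in K and T rather than multiplicatively, which is the efficient parameterization the abstract refers to.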


This item appears in the following Collection(s)

  • CS Technical Reports
    Technical Reports Archive for the Department of Computer Sciences at the University of Wisconsin-Madison
