Seminar: Tracy Ke, Harvard

Event: Monday, February 14, 2022, 11:00 am

Abstract: The probabilistic topic model imposes a low-rank structure on the expectation of the corpus matrix. Therefore, singular value decomposition (SVD) is a natural tool for dimension reduction. We propose an SVD-based method for estimating a topic model. Our method constructs an estimate of the topic matrix from only a few leading singular vectors of the corpus matrix, and it offers a great advantage in memory use and computational cost for large-scale corpora. The core ideas behind our method include a pre-SVD normalization to tackle severe word-frequency heterogeneity, a post-SVD normalization to create a low-dimensional word embedding that manifests a simplex geometry, and a post-SVD procedure to construct an estimate of the topic matrix directly from the embedded word cloud. We provide an explicit rate of convergence for our method. We show that our method attains the optimal rate for long and moderately long documents, and that it improves on the rates of existing methods for short documents. The key to our analysis is a sharp row-wise large-deviation bound for empirical singular vectors, which is technically demanding to derive and potentially useful for other problems. We apply our method to a corpus of Associated Press news articles and a corpus of abstracts of statistical papers.
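
The abstract describes the pipeline only at a high level. The following is a minimal Python sketch of an SVD-based pipeline of this general kind, not the speaker's actual algorithm: the specific normalizations, the use of K-means as a stand-in for vertex hunting, and every name in the code (estimate_topics, D, K, and so on) are illustrative assumptions.

# Minimal sketch (assumed names and choices, not the speaker's exact method) of
# an SVD-based topic-estimation pipeline: pre-SVD normalization, truncated SVD,
# SCORE-type post-SVD embedding, vertex hunting, and a topic-matrix estimate.
import numpy as np
from sklearn.cluster import KMeans

def estimate_topics(D, K):
    """Estimate a p x K topic matrix from a p x n word-by-document count matrix D.

    Assumes the number of topics K >= 2 is known.
    """
    p, n = D.shape
    # Column-wise empirical word frequencies of each document.
    F = D / np.maximum(D.sum(axis=0, keepdims=True), 1)

    # Pre-SVD normalization: damp severe word-frequency heterogeneity by
    # rescaling each word (row) with its average frequency across documents.
    m = np.maximum(F.mean(axis=1), 1e-12)
    X = F / np.sqrt(m)[:, None]

    # Keep only the K leading left singular vectors (low-rank structure).
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    U = U[:, :K]
    if U[:, 0].sum() < 0:          # fix sign so the leading vector is nonnegative
        U = -U

    # Post-SVD normalization (SCORE-type ratios): a (K-1)-dimensional word
    # embedding that, under the model, concentrates near a simplex with K vertices.
    r = U[:, 1:] / np.maximum(U[:, [0]], 1e-12)

    # Vertex hunting: K-means centers used here as rough vertex estimates.
    vertices = KMeans(n_clusters=K, n_init=10, random_state=0).fit(r).cluster_centers_

    # Barycentric weights of each embedded word with respect to the vertices,
    # then undo the post- and pre-SVD scalings and renormalize the columns.
    V = np.vstack([vertices.T, np.ones(K)])          # (K, K) system
    R = np.vstack([r.T, np.ones(p)])                 # (K, p) right-hand sides
    W = np.clip(np.linalg.lstsq(V, R, rcond=None)[0].T, 0, None)   # (p, K)
    A = W * (U[:, [0]] * np.sqrt(m)[:, None])
    return A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)

Note that K-means here is only a rough substitute for a dedicated vertex-hunting step, and a full SVD is used for clarity; for large corpora a truncated or randomized SVD would be the natural choice.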

Zoom link:

Please click this URL to start or join: https://iastate.zoom.us/j/98782130243?pwd=ZlppMEpHZGZ1VEErZTdvL3F5OXVaZz09

Or go to https://iastate.zoom.us/join and enter meeting ID 987 8213 0243 and password 2022.
