Improving Topic Models with Latent Feature Word Representations

Probabilistic topic models are widely used to discover latent topics in document collections, while latent feature vector representations of words have been used to obtain high performance in many NLP tasks. In this paper, we extend two different Dirichlet multinomial topic models by incorporating latent feature vector representations of words trained on very large corpora to improve the word-topic mapping learnt on a smaller corpus. Experimental results show that by using information from the external corpora, our new models produce significant improvements on topic coherence, document clustering and document classification tasks, especially on datasets with few or short documents.


Introduction
Topic modeling algorithms, such as Latent Dirichlet Allocation (Blei et al., 2003) and related methods (Blei, 2012), are often used to learn a set of latent topics for a corpus, and to predict the probabilities of each word in each document belonging to each topic (Teh et al., 2006; Toutanova and Johnson, 2008; Porteous et al., 2008; Johnson, 2010; Xie and Xing, 2013; Hingmire et al., 2013).
Conventional topic modeling algorithms such as these infer document-to-topic and topic-to-word distributions from the co-occurrence of words within documents. But when the training corpus of documents is small or when the documents are short, the resulting distributions might be based on little evidence. Sahami and Heilman (2006) and Phan et al. (2011) show that it helps to exploit external knowledge to improve the topic representations. Sahami and Heilman (2006) employed web search results to improve the information in short texts. Phan et al. (2011) assumed that the small corpus is a sample of topics from a larger corpus like Wikipedia, and then used the topics discovered in the larger corpus to help shape the topic representations in the small corpus. However, if the larger corpus has many irrelevant topics, these will "use up" the topic space of the model. In addition, Petterson et al. (2010) proposed an extension of LDA that uses external information about word similarity, such as thesauri and dictionaries, to smooth the topic-to-word distribution.
Topic models have also been constructed using latent features (Salakhutdinov and Hinton, 2009; Srivastava et al., 2013; Cao et al., 2015). Latent feature (LF) vectors have been used for a wide range of NLP tasks (Glorot et al., 2011; Socher et al., 2013; Pennington et al., 2014). The combination of values permitted by latent features forms a high-dimensional space, which makes it well suited to modeling the topics of very large corpora.
Rather than relying solely on a multinomial or latent feature model, as in Salakhutdinov and Hinton (2009), Srivastava et al. (2013) and Cao et al. (2015), we explore how to take advantage of both latent feature and multinomial models by using a latent feature representation trained on a large external corpus to supplement a multinomial topic model estimated from a smaller corpus.
Our main contribution is that we propose two new latent feature topic models which integrate latent feature word representations into two Dirichlet multinomial topic models: a Latent Dirichlet Allocation (LDA) model (Blei et al., 2003) and a one-topic-per-document Dirichlet Multinomial Mixture (DMM) model (Nigam et al., 2000). Specifically, we replace the topic-to-word Dirichlet multinomial component, which generates the words from topics in each Dirichlet multinomial topic model, with a two-component mixture of a Dirichlet multinomial component and a latent feature component.
In addition to presenting a sampling procedure for the new models, we also compare using two different sets of pre-trained latent feature word vectors with our models. We achieve significant improvements on topic coherence evaluation, document clustering and document classification tasks, especially on corpora of short documents and corpora with few documents.

LDA model
The Latent Dirichlet Allocation (LDA) topic model (Blei et al., 2003) represents each document d as a probability distribution θ d over topics, where each topic z is modeled by a probability distribution φ z over words in a fixed vocabulary W .
As presented in Figure 1, where α and β are hyper-parameters and T is the number of topics, the generative process for LDA is:

$$\boldsymbol{\theta}_d \sim \text{Dir}(\alpha) \qquad z_{d_i} \sim \text{Cat}(\boldsymbol{\theta}_d) \qquad \boldsymbol{\phi}_z \sim \text{Dir}(\beta) \qquad w_{d_i} \sim \text{Cat}(\boldsymbol{\phi}_{z_{d_i}})$$

where Dir and Cat stand for a Dirichlet distribution and a categorical distribution, and $z_{d_i}$ is the topic indicator for the i-th word $w_{d_i}$ in document d. Here, the topic-to-word Dirichlet multinomial component generates the word $w_{d_i}$ by drawing it from the categorical distribution $\text{Cat}(\boldsymbol{\phi}_{z_{d_i}})$ for topic $z_{d_i}$.
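As a concrete illustration, this generative story can be simulated directly. The sizes below (3 topics, 8 word types, 5 documents of 10 words) are arbitrary toy values, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
T, V, D, doc_len = 3, 8, 5, 10   # hypothetical toy sizes
alpha, beta = 0.1, 0.01

phi = rng.dirichlet([beta] * V, size=T)               # phi_z ~ Dir(beta)

docs, topics = [], []
for d in range(D):
    theta_d = rng.dirichlet([alpha] * T)              # theta_d ~ Dir(alpha)
    z = rng.choice(T, size=doc_len, p=theta_d)        # z_di ~ Cat(theta_d)
    w = np.array([rng.choice(V, p=phi[t]) for t in z])  # w_di ~ Cat(phi_{z_di})
    docs.append(w)
    topics.append(z)
```

Each document is thus a bag of word ids drawn from the topic-specific word distributions.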
Figure 1: Graphical models of LDA and DMM.

We follow the Gibbs sampling algorithm for estimating LDA topic models described by Griffiths and Steyvers (2004). By integrating out θ and φ, the algorithm samples the topic $z_{d_i}$ for the current i-th word $w_{d_i}$ in document d using the conditional distribution $P(z_{d_i} \mid \boldsymbol{Z}_{\neg d_i})$, where $\boldsymbol{Z}_{\neg d_i}$ denotes the topic assignments of all the other words in the document collection D:

$$P(z_{d_i} = t \mid \boldsymbol{Z}_{\neg d_i}) \propto (N^t_{d_{\neg i}} + \alpha)\,\frac{N^{t,w_{d_i}}_{\neg d_i} + \beta}{N^t_{\neg d_i} + V\beta}$$

Notation: $N^{t,w}_d$ is the rank-3 tensor that counts the number of times word w is generated from topic t in document d by the Dirichlet multinomial component (of the LDA model in section 2.1, and of the DMM model in section 2.2). When an index is omitted, it indicates summation over that index (so $N_d$ is the number of words in document d).
We write the subscript ¬d for the document collection D with document d removed, and the subscript $\neg d_i$ for D with just the i-th word in document d removed, while the subscript $d_{\neg i}$ represents document d without its i-th word. For example, $N^t_{\neg d_i}$ is the number of words labelled with topic t, ignoring the i-th word of document d.
V is the size of the vocabulary, V = |W |.
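The collapsed Gibbs update above can be sketched in a few lines. This is an illustrative toy implementation with our own count-table names (`ndt`, `ntw`, `nt` for $N^t_d$, $N^{t,w}$, $N^t$), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
T, V = 3, 6                       # hypothetical toy sizes
alpha, beta = 0.1, 0.01
docs = [np.array([0, 1, 2, 1]), np.array([3, 4, 5, 3, 4])]

# Random initial topic assignments and the count tables they induce.
z = [rng.integers(T, size=len(ws)) for ws in docs]
ndt = np.zeros((len(docs), T)); ntw = np.zeros((T, V)); nt = np.zeros(T)
for d, ws in enumerate(docs):
    for i, w in enumerate(ws):
        ndt[d, z[d][i]] += 1; ntw[z[d][i], w] += 1; nt[z[d][i]] += 1

def gibbs_pass():
    """One collapsed Gibbs sweep over all word tokens."""
    for d, ws in enumerate(docs):
        for i, w in enumerate(ws):
            t_old = z[d][i]
            # Remove the current token, giving the "not d_i" counts.
            ndt[d, t_old] -= 1; ntw[t_old, w] -= 1; nt[t_old] -= 1
            # P(z_di = t | Z_neg) prop. (N^t_{d,-i}+a)(N^{t,w}_{-di}+b)/(N^t_{-di}+Vb)
            p = (ndt[d] + alpha) * (ntw[:, w] + beta) / (nt + V * beta)
            z[d][i] = rng.choice(T, p=p / p.sum())
            ndt[d, z[d][i]] += 1; ntw[z[d][i], w] += 1; nt[z[d][i]] += 1

for _ in range(10):
    gibbs_pass()
```

After every sweep the count tables remain consistent with the token assignments, which is the invariant collapsed samplers rely on.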

DMM model for short texts
Applying topic models to short or few documents for text clustering is more challenging because of data sparsity and the limited context in such texts. One approach is to combine short texts into long pseudo-documents before training LDA (Hong and Davison, 2010; Weng et al., 2010; Mehrotra et al., 2013). Another approach is to assume that there is only one topic per document (Nigam et al., 2000; Zhao et al., 2011; Yin and Wang, 2014). In the Dirichlet Multinomial Mixture (DMM) model (Nigam et al., 2000), each document is assumed to have only one topic. The process of generating a document d in the collection D, as shown in Figure 1, is to first select a topic assignment for the document, and then let the topic-to-word Dirichlet multinomial component generate all the words in the document from that same topic:

$$\boldsymbol{\theta} \sim \text{Dir}(\alpha) \qquad z_d \sim \text{Cat}(\boldsymbol{\theta}) \qquad \boldsymbol{\phi}_z \sim \text{Dir}(\beta) \qquad w_{d_i} \sim \text{Cat}(\boldsymbol{\phi}_{z_d})$$

Yin and Wang (2014) introduced a collapsed Gibbs sampling algorithm for the DMM model in which a topic $z_d$ is sampled for the document d using the conditional probability $P(z_d \mid \boldsymbol{Z}_{\neg d})$, where $\boldsymbol{Z}_{\neg d}$ denotes the topic assignments of all the other documents:

$$P(z_d = t \mid \boldsymbol{Z}_{\neg d}) \propto (M^t_{\neg d} + \alpha)\,\frac{\Gamma(N^t_{\neg d} + V\beta)}{\Gamma(N^t_{\neg d} + N_d + V\beta)} \prod_{w \in W} \frac{\Gamma(N^{t,w}_{\neg d} + N^w_d + \beta)}{\Gamma(N^{t,w}_{\neg d} + \beta)}$$

Notation: $M^t_{\neg d}$ is the number of documents assigned to topic t excluding the current document d; Γ is the Gamma function.
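A sketch of this DMM conditional, computed in log space with `lgamma` so the Gamma-function ratios stay numerically stable; the function and variable names are our own, not the paper's:

```python
import numpy as np
from math import lgamma

def dmm_doc_logprobs(doc_counts, mt, ntw, nt, alpha, beta):
    """log P(z_d = t | Z_neg_d), up to an additive constant, for one held-out doc.

    doc_counts: dict word id -> N^w_d for document d;
    mt[t] = M^t (documents on topic t, excluding d);
    ntw[t, w], nt[t]: word counts excluding document d.
    """
    T, V = ntw.shape
    nd = sum(doc_counts.values())
    logp = np.empty(T)
    for t in range(T):
        lp = np.log(mt[t] + alpha)
        # Gamma ratio over the topic total: G(N^t+Vb) / G(N^t+N_d+Vb)
        lp += lgamma(nt[t] + V * beta) - lgamma(nt[t] + nd + V * beta)
        # Per-word Gamma ratios: G(N^{t,w}+N^w_d+b) / G(N^{t,w}+b)
        for w, c in doc_counts.items():
            lp += lgamma(ntw[t, w] + c + beta) - lgamma(ntw[t, w] + beta)
        logp[t] = lp
    return logp
```

Exponentiating and normalizing `logp` gives the distribution from which $z_d$ is sampled.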

Latent feature vector models
Traditional count-based methods (Deerwester et al., 1990; Lund and Burgess, 1996; Bullinaria and Levy, 2007) for learning real-valued latent feature (LF) vectors rely on co-occurrence counts. Recent approaches based on deep neural networks learn vectors by predicting words given their window-based context (Collobert and Weston, 2008; Mikolov et al., 2013; Pennington et al., 2014). Mikolov et al. (2013)'s method maximizes the log likelihood of each word given its context. Pennington et al. (2014) used back-propagation to minimize the squared error of a prediction of the log-frequency of context words within a fixed window of each word. Word vectors can be trained directly on a new corpus. In our new models, however, in order to incorporate the rich information from very large datasets, we utilize pre-trained word vectors that were trained on external billion-word corpora.

New latent feature topic models
In this section, we propose two novel probabilistic topic models, which we call the LF-LDA and the LF-DMM, that combine a latent feature model with either an LDA or DMM model. We also present Gibbs sampling procedures for our new models. In general, LF-LDA and LF-DMM are formed by taking the original Dirichlet multinomial topic models LDA and DMM, and replacing their topic-to-word Dirichlet multinomial component that generates words from topics with a two-component mixture of a topic-to-word Dirichlet multinomial component and a latent feature component.
Informally, the new models have the structure of the original Dirichlet multinomial topic models, as shown in Figure 2, with the addition of two matrices τ and ω of latent feature weights, where $\boldsymbol{\tau}_t$ and $\boldsymbol{\omega}_w$ are the latent-feature vectors associated with topic t and word w respectively. Our latent feature model defines the probability that it generates a word given the topic as the categorical distribution CatE:

$$P(w \mid \boldsymbol{\tau}_t \boldsymbol{\omega}^{\top}) = \text{CatE}(w \mid \boldsymbol{\tau}_t \boldsymbol{\omega}^{\top}) = \frac{\exp(\boldsymbol{\tau}_t \cdot \boldsymbol{\omega}_w)}{\sum_{w' \in W} \exp(\boldsymbol{\tau}_t \cdot \boldsymbol{\omega}_{w'})}$$

CatE is a categorical distribution with log-space parameters, i.e. $\text{CatE}(w \mid \boldsymbol{u}) \propto \exp(u_w)$. As $\boldsymbol{\tau}_t$ and $\boldsymbol{\omega}_w$ are (row) vectors of latent feature weights, $\boldsymbol{\tau}_t \boldsymbol{\omega}^{\top}$ is a vector of "scores" indexed by words. ω is fixed because we use pre-trained word vectors.
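Since CatE(w | u) ∝ exp(u_w), the latent feature component is simply a softmax over the vocabulary scores $\boldsymbol{\tau}_t \boldsymbol{\omega}^{\top}$. A minimal sketch with hypothetical dimensions (50-dimensional vectors, a 1000-word vocabulary):

```python
import numpy as np

def cate(u):
    """CatE(w | u) prop. exp(u_w): a softmax over the vocabulary, computed stably."""
    u = u - u.max()          # shift for numerical stability; softmax is shift-invariant
    e = np.exp(u)
    return e / e.sum()

rng = np.random.default_rng(0)
tau_t = rng.normal(size=50)           # hypothetical topic vector tau_t
omega = rng.normal(size=(1000, 50))   # hypothetical fixed pre-trained word vectors
probs = cate(omega @ tau_t)           # P(w | z = t) under the latent feature component
```

The scores `omega @ tau_t` play the role of $\boldsymbol{\tau}_t \boldsymbol{\omega}^{\top}$ in the equation above.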
In sections 3.1 and 3.2, we explain the generative processes of our new models LF-LDA and LF-DMM. We then present our Gibbs sampling procedures for LF-LDA and LF-DMM in sections 3.3 and 3.4, respectively, and explain how we estimate τ in section 3.5.

Generative process for the LF-LDA model
The LF-LDA model generates a document as follows. A distribution over topics $\boldsymbol{\theta}_d$ is drawn for document d; then for each i-th word $w_{d_i}$ (in the sequential order in which words appear in the document), the model chooses a topic indicator $z_{d_i}$, samples a binary indicator variable $s_{d_i}$ from a Bernoulli distribution to determine whether the word $w_{d_i}$ is generated by the Dirichlet multinomial or the latent feature component, and finally generates the word from the chosen topic by the selected topic-to-word model. The generative process is:

$$\boldsymbol{\theta}_d \sim \text{Dir}(\alpha) \qquad z_{d_i} \sim \text{Cat}(\boldsymbol{\theta}_d) \qquad \boldsymbol{\phi}_z \sim \text{Dir}(\beta) \qquad s_{d_i} \sim \text{Ber}(\lambda)$$
$$w_{d_i} \sim (1 - s_{d_i})\,\text{Cat}(\boldsymbol{\phi}_{z_{d_i}}) + s_{d_i}\,\text{CatE}(\boldsymbol{\tau}_{z_{d_i}} \boldsymbol{\omega}^{\top})$$

where the hyper-parameter λ is the probability of a word being generated by the latent feature topic-to-word model and Ber(λ) is a Bernoulli distribution with success probability λ.
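The per-word mixture step can be sketched as follows, assuming the CatE probabilities for each topic have been precomputed into the rows of a hypothetical `cate_probs` array:

```python
import numpy as np

def generate_word(theta_d, phi, cate_probs, lam, rng):
    """One LF-LDA word: topic z ~ Cat(theta_d), indicator s ~ Ber(lam), then w
    drawn from Cat(phi_z) if s = 0 or from the precomputed CatE row if s = 1."""
    t = rng.choice(len(theta_d), p=theta_d)
    s = int(rng.random() < lam)
    dist = cate_probs[t] if s else phi[t]
    w = rng.choice(len(dist), p=dist)
    return t, s, w

# Degenerate toy check: one topic, lam = 0, phi puts all mass on word 0,
# so the Dirichlet multinomial component must generate word 0.
rng = np.random.default_rng(0)
t, s, w = generate_word(np.array([1.0]), np.array([[1.0, 0.0]]),
                        np.array([[0.5, 0.5]]), 0.0, rng)
```

With λ = 0 the model reduces to plain LDA; with λ = 1 every word comes from the latent feature component.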

Generative process for the LF-DMM model
Our LF-DMM model uses the DMM model assumption that all the words in a document share the same topic. Thus, the process of generating a document in a document collection with our LF-DMM is as follows: a distribution over topics θ is drawn for the document collection; then the model draws a topic indicator $z_d$ for the entire document d; for every i-th word $w_{d_i}$ in the document d, a binary indicator variable $s_{d_i}$ is sampled from a Bernoulli distribution to determine whether the Dirichlet multinomial or latent feature component will be used to generate the word $w_{d_i}$, and finally the word is generated from the same topic $z_d$ by the determined component. The generative process is summarized as:

$$\boldsymbol{\theta} \sim \text{Dir}(\alpha) \qquad z_d \sim \text{Cat}(\boldsymbol{\theta}) \qquad \boldsymbol{\phi}_z \sim \text{Dir}(\beta) \qquad s_{d_i} \sim \text{Ber}(\lambda)$$
$$w_{d_i} \sim (1 - s_{d_i})\,\text{Cat}(\boldsymbol{\phi}_{z_d}) + s_{d_i}\,\text{CatE}(\boldsymbol{\tau}_{z_d} \boldsymbol{\omega}^{\top})$$

Inference in LF-LDA model
Given the generative model of LF-LDA in Figure 2, we integrate out θ and φ and use the Gibbs sampling algorithm (Robert and Casella, 2004) to compute the conditional topic assignment probabilities for each word. The outline of the Gibbs sampling algorithm for the LF-LDA model is detailed in Algorithm 1.
Algorithm 1: An approximate Gibbs sampling algorithm for the LF-LDA model

    Initialize the word-topic variables $z_{d_i}$ using the LDA sampling algorithm
    for iteration iter = 1, 2, ... do
        for topic t = 1, 2, ..., T do
            $\boldsymbol{\tau}_t = \arg\max_{\boldsymbol{\tau}_t} P(\boldsymbol{\tau}_t \mid \boldsymbol{Z}, \boldsymbol{S})$
        for document d = 1, 2, ..., |D| do
            for word index i = 1, 2, ..., $N_d$ do
                sample $z_{d_i}$ and $s_{d_i}$ from $P(z_{d_i}, s_{d_i} \mid \boldsymbol{Z}_{\neg d_i}, \boldsymbol{S}_{\neg d_i})$

Here, S denotes the distribution indicator variables s for the whole document collection D. Instead of sampling $\boldsymbol{\tau}_t$ from the posterior, we perform MAP estimation as described in section 3.5.
To sample the topic $z_{d_i}$ and the binary indicator variable $s_{d_i}$ of the i-th word $w_{d_i}$ in document d, we integrate out $s_{d_i}$ in order to sample $z_{d_i}$, and then sample $s_{d_i}$ given $z_{d_i}$. We sample the topic $z_{d_i}$ using the conditional distribution:

$$P(z_{d_i} = t \mid \boldsymbol{Z}_{\neg d_i}, \boldsymbol{S}_{\neg d_i}) \propto (N^t_{d_{\neg i}} + K^t_{d_{\neg i}} + \alpha)\Big((1-\lambda)\,\frac{N^{t,w_{d_i}}_{\neg d_i} + \beta}{N^t_{\neg d_i} + V\beta} + \lambda\,\text{CatE}(w_{d_i} \mid \boldsymbol{\tau}_t \boldsymbol{\omega}^{\top})\Big)$$

Then we sample $s_{d_i}$ conditional on $z_{d_i} = t$ with:

$$P(s_{d_i} = s \mid z_{d_i} = t) \propto \begin{cases} (1-\lambda)\,\dfrac{N^{t,w_{d_i}}_{\neg d_i} + \beta}{N^t_{\neg d_i} + V\beta} & \text{for } s = 0 \\[1ex] \lambda\,\text{CatE}(w_{d_i} \mid \boldsymbol{\tau}_t \boldsymbol{\omega}^{\top}) & \text{for } s = 1 \end{cases}$$

Notation: Due to the new models' mixture architecture, we separate out the counts for each of the two components of each model. We define the rank-3 tensor $K^{t,w}_d$ as the number of times a word w in document d is generated from topic t by the latent feature component of the generative LF-LDA or LF-DMM model.
We also extend the earlier definition of the tensor $N^{t,w}_d$ as the number of times a word w in document d is generated from topic t by the Dirichlet multinomial component of our combined models (the LF-LDA model in section 3.3, and the LF-DMM model in section 3.4). For both tensors K and N, omitting an index refers to summation over that index, and negation ¬ indicates exclusion as before. So $N^w_d + K^w_d$ is the total number of times the word type w appears in document d.
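The two-step sampling of $z_{d_i}$ and then $s_{d_i}$ can be sketched for a single token as follows. This is toy code with our own names: `ndt_kdt[d]` stores the combined counts $N^t_{d_{\neg i}} + K^t_{d_{\neg i}}$, and `cate_probs[t, w]` the precomputed CatE probabilities:

```python
import numpy as np

def sample_token(w, d, ndt_kdt, ntw, nt, cate_probs, alpha, beta, lam, V, rng):
    """Sample z_di (with s_di integrated out), then s_di given z_di = t.

    ntw, nt: Dirichlet multinomial counts excluding the current token.
    """
    dm = (ntw[:, w] + beta) / (nt + V * beta)   # Dirichlet multinomial part, per topic
    lf = cate_probs[:, w]                       # latent feature part, per topic
    p = (ndt_kdt[d] + alpha) * ((1 - lam) * dm + lam * lf)
    t = rng.choice(len(p), p=p / p.sum())
    # s_di = 1 means the latent feature component generated w_di.
    p_lf = lam * lf[t] / ((1 - lam) * dm[t] + lam * lf[t])
    s = int(rng.random() < p_lf)
    return t, s
```

Note that with λ = 1 the mixture weight forces s = 1, i.e. every word is attributed to the latent feature component.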

Inference in LF-DMM model
For the LF-DMM model, we integrate out θ and φ, and then sample the topic $z_d$ and the distribution selection variables $\boldsymbol{s}_d$ for document d using Gibbs sampling, as outlined in Algorithm 2.
Algorithm 2: An approximate Gibbs sampling algorithm for the LF-DMM model

    Initialize the word-topic variables $z_d$ using the DMM sampling algorithm
    for iteration iter = 1, 2, ... do
        for topic t = 1, 2, ..., T do
            $\boldsymbol{\tau}_t = \arg\max_{\boldsymbol{\tau}_t} P(\boldsymbol{\tau}_t \mid \boldsymbol{Z}, \boldsymbol{S})$
        for document d = 1, 2, ..., |D| do
            sample $z_d$ and $\boldsymbol{s}_d$ from $Q(z_d, \boldsymbol{s}_d)$

As before in Algorithm 1, we use MAP estimation of τ as detailed in section 3.5 rather than sampling from the posterior. The conditional distribution of the topic and selection variables for document d is:

$$P(z_d = t, \boldsymbol{s}_d \mid \boldsymbol{Z}_{\neg d}, \boldsymbol{S}_{\neg d}) \propto \lambda^{K_d}(1-\lambda)^{N_d}(M^t_{\neg d} + \alpha)\,\frac{\Gamma(N^t_{\neg d} + V\beta)}{\Gamma(N^t_{\neg d} + N_d + V\beta)} \prod_{w \in W} \frac{\Gamma(N^{t,w}_{\neg d} + N^w_d + \beta)}{\Gamma(N^{t,w}_{\neg d} + \beta)} \prod_{w \in W} \text{CatE}(w \mid \boldsymbol{\tau}_t \boldsymbol{\omega}^{\top})^{K^w_d}$$

Unfortunately the ratios of Gamma functions make it difficult to integrate out $\boldsymbol{s}_d$ in this distribution P. As $z_d$ and $\boldsymbol{s}_d$ are not independent, it is computationally expensive to sample from this distribution directly, as there are $2^{N_d + K_d}$ different values of $\boldsymbol{s}_d$. So we approximate P with a distribution Q that factorizes across words:

$$Q(z_d = t, \boldsymbol{s}_d) \propto \lambda^{K_d}(1-\lambda)^{N_d}(M^t_{\neg d} + \alpha) \prod_{w \in W}\Big(\frac{N^{t,w}_{\neg d} + \beta}{N^t_{\neg d} + V\beta}\Big)^{N^w_d} \prod_{w \in W} \text{CatE}(w \mid \boldsymbol{\tau}_t \boldsymbol{\omega}^{\top})^{K^w_d}$$

This simpler distribution Q can be viewed as an approximation to P in which the topic-word "counts" are "frozen" within a document; this approximation is reasonably accurate for short documents. Q decouples $z_d$ and $\boldsymbol{s}_d$, which enables us to integrate out $\boldsymbol{s}_d$ in Q. We first sample the document topic $z_d$ for document d using $Q(z_d)$, marginalizing over $\boldsymbol{s}_d$:

$$Q(z_d = t) \propto (M^t_{\neg d} + \alpha) \prod_{i}\Big((1-\lambda)\,\frac{N^{t,w_{d_i}}_{\neg d} + \beta}{N^t_{\neg d} + V\beta} + \lambda\,\text{CatE}(w_{d_i} \mid \boldsymbol{\tau}_t \boldsymbol{\omega}^{\top})\Big)$$

Then we sample the binary indicator variable $s_{d_i}$ for each i-th word $w_{d_i}$ in document d conditional on $z_d = t$ from:

$$Q(s_{d_i} = s \mid z_d = t) \propto \begin{cases} (1-\lambda)\,\dfrac{N^{t,w_{d_i}}_{\neg d} + \beta}{N^t_{\neg d} + V\beta} & \text{for } s = 0 \\[1ex] \lambda\,\text{CatE}(w_{d_i} \mid \boldsymbol{\tau}_t \boldsymbol{\omega}^{\top}) & \text{for } s = 1 \end{cases}$$
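Sampling $z_d$ from the factorized approximation Q, with $\boldsymbol{s}_d$ summed out per word, can be sketched as follows (toy code with our own names; the count tables are assumed to already exclude document d):

```python
import numpy as np

def sample_doc_topic(words, mt, ntw, nt, cate_probs, alpha, beta, lam, rng):
    """Sample z_d from Q(z_d), with the per-word indicators s_di summed out.

    words: word ids of document d; mt[t]: documents on topic t excluding d;
    ntw, nt: Dirichlet multinomial counts excluding d ("frozen" within the doc);
    cate_probs[t, w]: precomputed CatE(w | tau_t omega^T).
    """
    T, V = ntw.shape
    logq = np.log(mt + alpha)
    for w in words:
        # Per-word mixture of the two components, evaluated for every topic.
        mix = (1 - lam) * (ntw[:, w] + beta) / (nt + V * beta) + lam * cate_probs[:, w]
        logq += np.log(mix)
    q = np.exp(logq - logq.max())   # normalize in log space for stability
    return rng.choice(T, p=q / q.sum())
```

Given the sampled topic, each $s_{d_i}$ is then drawn exactly as in the LF-LDA case.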

Learning latent feature vectors for topics
To estimate the topic vectors after each Gibbs sampling iteration through the data, we apply regularized maximum likelihood estimation. Applying MAP estimation to learn log-linear models for topic models is also used in SAGE (Eisenstein et al., 2011) and SPRITE (Paul and Dredze, 2015). However, unlike our models, those models do not use latent feature word vectors to characterize topic-word distributions. The negative log likelihood $\mathcal{L}$ of the corpus under our model factorizes topic-wise into factors $\mathcal{L}_t$ for each topic. With $L_2$ regularization for topic t, these are:

$$\mathcal{L}_t = -\sum_{w \in W} K^{t,w}\Big(\boldsymbol{\tau}_t \cdot \boldsymbol{\omega}_w - \log \sum_{w' \in W} \exp(\boldsymbol{\tau}_t \cdot \boldsymbol{\omega}_{w'})\Big) + \mu \,\lVert \boldsymbol{\tau}_t \rVert_2^2$$

The MAP estimate of the topic vector $\boldsymbol{\tau}_t$ is obtained by minimizing this regularized negative log likelihood. The derivative with respect to the j-th element of the vector for topic t is:

$$\frac{\partial \mathcal{L}_t}{\partial \tau_{t,j}} = -\sum_{w \in W} K^{t,w}\Big(\omega_{w,j} - \sum_{w' \in W} \omega_{w',j}\,\text{CatE}(w' \mid \boldsymbol{\tau}_t \boldsymbol{\omega}^{\top})\Big) + 2\mu\,\tau_{t,j}$$

We used L-BFGS (Liu and Nocedal, 1989) to find the topic vector $\boldsymbol{\tau}_t$ that minimizes $\mathcal{L}_t$.
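A sketch of this MAP step. For simplicity, plain gradient descent on the gradient above stands in for L-BFGS (the paper uses L-BFGS); the function and parameter names are hypothetical:

```python
import numpy as np

def fit_topic_vector(K_t, omega, mu=0.01, lr=0.1, steps=500):
    """MAP estimate of tau_t: minimize the regularized negative log likelihood
    L_t by gradient descent (L-BFGS would be used in practice).

    K_t[w]: count of times word w was generated from topic t by the latent
    feature component; omega: fixed pre-trained word vectors (V x dim).
    """
    total = K_t.sum()
    tau = np.zeros(omega.shape[1])
    for _ in range(steps):
        scores = omega @ tau
        scores -= scores.max()          # stability; log-softmax is shift-invariant
        probs = np.exp(scores)
        probs /= probs.sum()            # CatE(w' | tau omega^T)
        # dL_t/dtau = -(sum_w K^{t,w} omega_w - K^t * E_CatE[omega]) + 2 mu tau
        grad = -(K_t @ omega - total * (probs @ omega)) + 2 * mu * tau
        tau -= lr * grad
    return tau
```

Because the objective is convex in $\boldsymbol{\tau}_t$, any reasonable first-order or quasi-Newton optimizer converges to the same MAP estimate.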

Experiments
To investigate the performance of our new LF-LDA and LF-DMM models, we compared their performance against baseline LDA and DMM models on topic coherence, document clustering and document classification evaluations. The topic coherence evaluation measures the coherence of topic-word associations, i.e. it directly evaluates how coherent the assignment of words to topics is. The document clustering and document classification tasks evaluate how useful the topics assigned to documents are in clustering and classification tasks.
Because we expect our new models to perform comparatively well in situations where there is little data about topic-to-word distributions, our experiments focus on corpora with few or short documents. We also investigated which values of λ perform well, and compared the performance when using two different sets of pre-trained word vectors in these new models.

Distributed word representations
We experimented with two state-of-the-art sets of pre-trained word vectors here.
Google word vectors 3 are pre-trained 300-dimensional vectors for 3 million words and phrases. These vectors were trained on a 100-billion-word subset of the Google News corpus using the Google word2vec toolkit (Mikolov et al., 2013). Stanford vectors 4 are pre-trained 300-dimensional vectors for 2 million words. These vectors were learned from 42 billion tokens of Common Crawl web data using the Stanford GloVe toolkit (Pennington et al., 2014).
We refer to our LF-LDA and LF-DMM models using Google and Stanford word vectors as w2v-LDA, glove-LDA, w2v-DMM and glove-DMM.

Experimental datasets
We conducted experiments on the 20-Newsgroups dataset, the TagMyNews news dataset and the Sanders Twitter corpus.
The 20-Newsgroups dataset 5 contains about 19,000 newsgroup documents evenly grouped into 20 different categories. The TagMyNews news dataset 6 (Vitale et al., 2012) consists of about 32,600 English RSS news items grouped into 7 categories, where each news document has a news title and a short description. In our experiments, we also used a news title dataset which consists of just the news titles from the TagMyNews news dataset.
Each dataset was down-cased, and we removed non-alphabetic characters and stop-words found in the stop-word list of the Mallet toolkit (McCallum, 2002). We also removed words shorter than 3 characters and words appearing less than 10 times in the 20-Newsgroups corpus, and less than 5 times in the TagMyNews news and news titles datasets. In addition, words not found in both the Google and Stanford vector representations were also removed. 7 We refer to the cleaned 20-Newsgroups, TagMyNews news and news title datasets as N20, TMN and TMNtitle, respectively.

3 Download at: https://code.google.com/p/word2vec/
4 Download at: http://www-nlp.stanford.edu/projects/glove/
5 We used the "all-terms" version of the 20-Newsgroups dataset, available at http://web.ist.utl.pt/acardoso/datasets/ (Cardoso-Cachopo, 2007).
6 The TagMyNews news dataset is unbalanced: the largest category contains 8,200 news items while the smallest contains about 1,800 items. Download at: http://acube.di.unipi.it/tmn-dataset/
7 1366, 27 and 12 words were removed from the 20-Newsgroups, TagMyNews news and news title datasets, respectively.
We also performed experiments on two subsets of the N20 dataset. The N20short dataset consists of all documents from the N20 dataset with fewer than 21 words. The N20small dataset contains 400 documents: 20 randomly selected documents from each group of the N20 dataset. Finally, we also experimented on the publicly available Sanders Twitter corpus. This corpus consists of 5,512 Tweets grouped into four different topics (Apple, Google, Microsoft, and Twitter). Due to restrictions in Twitter's Terms of Service, the actual Tweets must be downloaded using the 5,512 Tweet IDs; 850 Tweets were not available to download. After removing the non-English Tweets, 3,115 Tweets remain. In addition to converting to lowercase and removing non-alphabetic characters, words were normalized using a lexical normalization dictionary for microblogs (Han et al., 2012). We then removed stop-words, along with words shorter than 3 characters or appearing less than 3 times in the corpus. The four words apple, google, microsoft and twitter were removed, as these occur in every Tweet in the corresponding topic. Moreover, words not found in both the Google and Stanford vector lists were also removed. In all our experiments, after removing words from documents, any document with a zero word count was also removed from the corpus. For the Twitter corpus, this left just 2,520 Tweets.
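The cleaning steps described above can be sketched as a small pipeline. This is an illustrative reimplementation (our own function names and thresholds as parameters), not the original preprocessing scripts:

```python
import re
from collections import Counter

def preprocess(docs, stopwords, min_len=3, min_count=10):
    """Lowercase, strip non-alphabetic characters, drop stop-words, words
    shorter than min_len, and words appearing fewer than min_count times;
    documents left empty after cleaning are removed from the corpus."""
    tokenized = [re.sub(r"[^a-z\s]", " ", d.lower()).split() for d in docs]
    freq = Counter(w for doc in tokenized for w in doc)
    cleaned = [[w for w in doc
                if w not in stopwords and len(w) >= min_len and freq[w] >= min_count]
               for doc in tokenized]
    return [doc for doc in cleaned if doc]
```

The word-vector filtering step would additionally intersect the surviving vocabulary with the pre-trained vector vocabularies.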

General settings
The hyper-parameter β used in the baseline LDA and DMM models was set to 0.01, as this is a common setting in the literature (Griffiths and Steyvers, 2004). We set the hyper-parameter α = 0.1, as this can improve performance relative to the standard setting α = 50/T, as noted by Lu et al. (2011) and Yin and Wang (2014).
We ran each baseline model for 2000 iterations and evaluated the topics assigned to words in the last sample. For our models, we ran the baseline models for 1500 iterations, then used the outputs from the last sample to initialize our models, which we ran for 500 further iterations.
We report the mean and standard deviation of the results of ten repetitions of each experiment (so the standard deviation is approximately 3 standard errors, or a 99% confidence interval).

Topic coherence evaluation
This section examines the quality of the topic-word mappings induced by our models. In our models, topics are distributions over words. The topic coherence evaluation measures to what extent the high-probability words in each topic are semantically coherent (Chang et al., 2009; Stevens et al., 2012).

Quantitative analysis
Newman et al. (2010), Mimno et al. (2011) and Lau et al. (2014) describe methods for automatically evaluating the semantic coherence of sets of words. The method presented in Lau et al. (2014) uses the normalized pointwise mutual information (NPMI) score and has a strong correlation with human-judged coherence. A higher NPMI score indicates that the topic distributions are semantically more coherent. Given a topic t represented by its top-N topic words $w_1, w_2, ..., w_N$, the NPMI score for t is:

$$\text{NPMI-Score}(t) = \sum_{1 \le i < j \le N} \frac{\log \frac{P(w_i, w_j)}{P(w_i)\,P(w_j)}}{-\log P(w_i, w_j)}$$

where the probabilities are derived from a 10-word sliding window over an external corpus. The NPMI score for a topic model is the average score over all topics. We compute the NPMI score based on the top-15 most probable words of each topic, using the English Wikipedia of 4.6 million articles (the Wikipedia-articles dump of July 8, 2014) as our external corpus. Figures 3 and 4 show NPMI scores computed for the LDA, w2v-LDA and glove-LDA models on the N20short dataset for 20 and 40 topics. We see that λ = 1.0 gives the highest NPMI score; in other words, using only the latent feature model produces the most coherent topic distributions.
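Given word and word-pair probabilities estimated from the external corpus, the NPMI score of a topic follows directly from the equation above. A minimal sketch (our own function names; the probability dictionaries are assumed precomputed from sliding-window counts):

```python
from math import log

def topic_npmi(top_words, p_single, p_pair, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words.

    p_single: dict word -> P(w); p_pair: dict frozenset({wi, wj}) -> P(wi, wj),
    both estimated from a sliding window over an external corpus; eps guards
    against pairs never observed together.
    """
    score, pairs = 0.0, 0
    for i in range(len(top_words)):
        for j in range(i + 1, len(top_words)):
            wi, wj = top_words[i], top_words[j]
            pij = p_pair.get(frozenset((wi, wj)), eps)
            pmi = log(pij / (p_single[wi] * p_single[wj]))
            score += pmi / -log(pij)    # normalize PMI into [-1, 1]
            pairs += 1
    return score / pairs
```

Two words that always co-occur score 1, independent words score 0, and words that never co-occur approach -1.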

Qualitative analysis
This section provides an example of how our models improve topic coherence. In Table 5, topic 1 of the DMM model consists of words related to the "nuclear crisis in Japan" together with other unrelated words. The w2v-DMM model produced a purer topic 1 focused on "Japan earthquake and nuclear crisis," presumably related to the "Fukushima Daiichi nuclear disaster." Topic 3 is about "oil prices" in both models; however, all top-15 words are qualitatively more coherent in the w2v-DMM model. While topic 4 of the DMM model is difficult to label manually, topic 4 of the w2v-DMM model is about the "Arab Spring" event.
Topics 5, 19 and 14 of the DMM model are not easy to label. Topic 5 relates to "entertainment", topic 19 is generally a mixture of "entertainment" and "sport", and topic 14 is about "sport" and "politics." However, the w2v-DMM model more clearly distinguishes these topics: topic 5 is about "entertainment", topic 19 is only about "sport" and topic 14 is only about "politics."

Document clustering evaluation
We compared our models to the baseline models in a document clustering task. After using a topic model to calculate the topic probabilities of a document, we assign every document the topic with the highest probability given the document (Cai et al., 2008;Lu et al., 2011;Xie and Xing, 2013;Yan et al., 2013). We use two common metrics to evaluate clustering performance: Purity and normalized mutual information (NMI): see (Manning et al., 2008, Section 16.3) for details of these evaluations. Purity and NMI scores always range from 0.0 to 1.0, and higher scores reflect better clustering performance. Figures 5 and 6 present Purity and NMI results obtained by the LDA, w2v-LDA and glove-LDA models on the N20short dataset with the numbers of topics T set to either 20 or 40, and the value of the mixture weight λ varied from 0.0 to 1.0.
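Purity and NMI can be computed from the gold labels and the induced cluster assignments as follows. This is a standard reimplementation of the Manning et al. (2008) definitions, not tied to any particular toolkit:

```python
from collections import Counter
from math import log

def purity(labels, clusters):
    """Purity: each cluster votes for its majority gold label."""
    correct = sum(
        Counter(l for l, k in zip(labels, clusters) if k == c).most_common(1)[0][1]
        for c in set(clusters))
    return correct / len(labels)

def entropy(assign):
    n = len(assign)
    return -sum((c / n) * log(c / n) for c in Counter(assign).values())

def nmi(labels, clusters):
    """NMI = I(labels; clusters) / ((H(labels) + H(clusters)) / 2)."""
    n = len(labels)
    lc, cc = Counter(labels), Counter(clusters)
    joint = Counter(zip(labels, clusters))
    mi = sum((c / n) * log(n * c / (lc[l] * cc[k])) for (l, k), c in joint.items())
    denom = (entropy(labels) + entropy(clusters)) / 2
    return mi / denom if denom > 0 else 1.0
```

Both metrics lie in [0, 1]; a clustering that exactly reproduces the gold partition scores 1.0 on both.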
We found that setting λ to 1.0 (i.e. using only the latent features to model words), glove-LDA produced 1%+ higher Purity and NMI scores than w2v-LDA when using 20 topics. However, the two models glove-LDA and w2v-LDA returned equivalent results with 40 topics, where they gain a 2%+ absolute improvement on both Purity and NMI against the baseline LDA model (using the Student's t-Test, the improvement is significant, p < 0.01). By varying λ, as shown in Figures 5 and 6, the w2v-LDA and glove-LDA models obtain their best results at λ = 0.6, where the w2v-LDA model does slightly better than glove-LDA. Both models significantly outperform their baseline LDA models; for example, with 40 topics, the w2v-LDA model attains 4.4% and 4.3% improvements over the LDA model on the Purity and NMI metrics, respectively.

Table 5: Examples of the 15 most probable topical words on the TMNtitle dataset with T = 20. InitDMM denotes the output from the 1500th sample produced by the DMM model, which we use to initialize the w2v-DMM model. Iter=1, Iter=2, Iter=3 and the like refer to the output of our w2v-DMM model after running 1, 2, 3 sampling iterations, respectively. The words found in InitDMM and not found in Iter=500 are underlined. Words found by the w2v-DMM model but not found by the DMM model are in bold.
We fix the mixture weight λ at 0.6, and report experimental results based on this value for the rest of this section. Tables 6, 7 and 8 show clustering results produced by our models and the baseline models on the remaining datasets with different numbers of topics.

Table 7: Purity and NMI results on the TMN and TMNtitle datasets with the mixture weight λ = 0.6.

New models vs. baseline models: On most tests, our models score higher than the baseline models, particularly on the small N20small dataset where we get a 6.0% improvement on NMI at T = 6, and on the short-text TMN and TMNtitle datasets where we obtain 6.1% and 2.5% higher Purity at T = 80. In addition, on the short and small Twitter dataset with T = 4, we achieve 3.9% and 5.3% improvements in Purity and NMI scores, respectively. These results show that an improved model of topic-word mappings also improves the document-topic assignments.

For small values of T (T ≤ 7), on the large datasets N20, TMN and TMNtitle, our models and the baseline models obtain similar clustering results. However, with higher values of T, our models perform better than the baselines on the short TMN and TMNtitle datasets, while on the N20 dataset the baseline LDA model attains slightly higher clustering results than ours. In contrast, on the short and small Twitter dataset, our models obtain considerably better clustering results than the baseline models with a small value of T.
Google word2vec vs. Stanford glove word vectors: On the small N20short and N20small datasets, using the Google pre-trained word vectors produces higher clustering scores than using the Stanford pre-trained word vectors. However, on the large datasets N20, TMN and TMNtitle, using the Stanford word vectors produces higher scores than using the Google word vectors when using a smaller number of topics, for example T ≤ 20. With more topics, for instance T = 80, the pre-trained Google and Stanford word vectors produce similar clustering results. In addition, on the Twitter dataset, both sets of pre-trained word vectors produce similar results.

Document classification evaluation
Unlike the document clustering task, the document classification task evaluates the distribution over topics for each document. Following Lacoste-Julien et al. (2009), Lu et al. (2011), Huh and Fienberg (2012) and Zhai and Boyd-graber (2013), we used Support Vector Machines (SVM) to predict the ground truth labels from the topic-proportion vector of each document. We used WEKA's implementation (Hall et al., 2009) of the fast Sequential Minimal Optimization algorithm (Platt, 1999) to learn a classifier with ten-fold cross-validation and WEKA's default parameters. We present the macro-averaged F1 score (Manning et al., 2008, Section 13.6) as the evaluation metric for this task.
Just as in the document clustering task, the mixture weight λ = 0.6 obtains the highest classification performance on the N20short dataset. For example, with T = 40, our w2v-LDA and glove-LDA obtain F1 scores of 40.0% and 38.9%, which are 4.5% and 3.4% higher, respectively, than the F1 score of 35.5% obtained by the LDA model.
We report classification results on the remaining experimental datasets with mixture weight λ = 0.6 in Tables 9, 10 and 11. Unlike the clustering results, the LDA model does better than the DMM model for classification on the TMN dataset. New models vs. baseline models: On most evaluations, our models perform better than the baseline models. In particular, on the small N20small and Twitter datasets, when the number of topics T is equal to the number of ground truth labels (i.e. 20 and 4, respectively), our w2v-LDA obtains a 5%+ higher F1 score than the LDA model. In addition, our w2v-DMM model achieves 5.4% and 2.9% higher F1 scores than the DMM model on the short TMN and TMNtitle datasets with T = 80, respectively. Google word2vec vs. Stanford glove word vectors: The comparison of the Google and Stanford pre-trained word vectors for classification is similar to the one for clustering.

Discussion
We found that the topic coherence evaluation produced the best results with a mixture weight λ = 1, which corresponds to using topic-word distributions defined in terms of the latent-feature word vectors. This is not surprising, since the topic coherence evaluation we used (Lau et al., 2014) is based on word co-occurrences in an external corpus (here, Wikipedia), and it is reasonable that the billion-word corpora used to train the latent feature word vectors are more useful for this task than the much smaller topic-modeling corpora, from which the topic-word multinomial distributions are trained.
On the other hand, the document clustering and document classification tasks depend more strongly on possibly idiosyncratic properties of the smaller topic-modeling corpora, since these evaluations reflect how well the document-topic assignments can group or distinguish documents within the topicmodeling corpus. Smaller values of λ enable the models to learn topic-word distributions that include an arbitrary multinomial topic-word distribution, enabling the models to capture idiosyncratic properties of the topic-modeling corpus. Even in these evaluations we found that an intermediate value of λ = 0.6 produced the best results, indicating that better word-topic distributions were produced when information from the large external corpus is combined with corpus-specific topic-word multinomials. We found that using the latent feature word vectors produced significant performance improvements even when the domain of the topic-modeling corpus was quite different to that of the external corpus from which the word vectors were derived, as was the case in our experiments on Twitter data.
We found that using either the Google or the Stanford latent feature word vectors produced very similar results. As far as we could tell, there is no reason to prefer either one of these in our topic modeling applications.

Conclusion and future work
In this paper, we have shown that latent feature representations can be used to improve topic models. We proposed two novel latent feature topic models, namely LF-LDA and LF-DMM, that integrate a latent feature model within the two topic models LDA and DMM. We compared the performance of our models LF-LDA and LF-DMM to the baseline LDA and DMM models on topic coherence, document clustering and document classification evaluations. In the topic coherence evaluation, our models outperformed the baseline models on all 6 experimental datasets, showing that our method for exploiting external information from very large corpora helps to improve the topic-to-word mapping. Meanwhile, the document clustering and document classification results show that our models improve the document-topic assignments compared to the baseline models, especially on datasets with few or short documents.
As an anonymous reviewer suggested, it would be interesting to identify exactly how the latent feature word vectors improve topic modeling performance. We believe that they provide useful information about word meaning extracted from the large corpora that they are trained on, but as the reviewer suggested, it is possible that the performance improvements arise because the word vectors are trained on context windows of size 5 or 10, while the LDA and DMM models view documents as bags of words, effectively using a context window that encompasses the entire document. In preliminary experiments where we trained latent feature word vectors from the topic-modeling corpus alone using context windows of size 10, we found that performance was degraded relative to the results presented here, suggesting that the use of a context window alone is not responsible for the performance improvements we reported. Clearly it would be valuable to investigate this further.
In order to use a Gibbs sampler in section 3.4, the conditional distributions needed to be distributions we can sample from cheaply, which is not the case for the ratios of Gamma functions. While we used a simple approximation, it is worth exploring other sampling techniques that can avoid approximations, such as Metropolis-Hastings sampling (Bishop, 2006, Section 11.2.2).
In order to compare the pre-trained Google and Stanford word vectors, we excluded words that did not appear in both sets of vectors. As suggested by anonymous reviewers, it would be interesting to learn vectors for these unseen words. In addition, it is worth fine-tuning the seen-word vectors on the dataset of interest.
Although we have not evaluated our approach on very large corpora, the corpora we have evaluated on do vary in size, and we showed that the gains from our approach are greatest when the corpora are small. A drawback of our approach is that it is slow on very large corpora. Variational Bayesian inference may provide an efficient solution to this problem (Jordan et al., 1999;Blei et al., 2003).