Neural Lattice Language Models

In this work, we propose a new language modeling paradigm that can perform both prediction and moderation of information flow at multiple granularities: neural lattice language models. These models construct a lattice of possible paths through a sentence and marginalize across this lattice to calculate sequence probabilities or optimize parameters. This approach allows us to seamlessly incorporate linguistic intuitions, including polysemy and the existence of multi-word lexical items, into our language model. Experiments on multiple language modeling tasks show that English neural lattice language models utilizing polysemous embeddings improve perplexity by 9.95% relative to a word-level baseline, and that a Chinese model handling multi-character tokens improves perplexity by 20.94% relative to a character-level baseline.


Introduction
Neural network models have recently contributed to a great amount of progress in natural language processing. These models typically share a common backbone: recurrent neural networks (RNNs), which have proven capable of tackling a variety of core natural language processing tasks (Hochreiter and Schmidhuber, 1997; Elman, 1990). One such task is language modeling, in which we estimate a probability distribution over sequences of tokens that corresponds to observed sentences (§2). Neural language models, particularly models conditioned on a particular input, have many applications, including machine translation (Bahdanau et al., 2016), abstractive summarization (Chopra et al., 2016), and speech processing (Graves et al., 2013). Similarly, state-of-the-art language models are almost universally based on RNNs, particularly long short-term memory (LSTM) networks (Jozefowicz et al., 2016; Inan et al., 2017; Merity et al., 2016).
While powerful, LSTM language models usually do not explicitly model many commonly-accepted linguistic phenomena. As a result, standard models lack linguistically informed inductive biases, potentially limiting their accuracy, particularly in low-data scenarios (Adams et al., 2017; Koehn and Knowles, 2017). In this work, we present a novel modification to the standard LSTM language modeling framework that allows us to incorporate some varieties of these linguistic intuitions seamlessly: neural lattice language models (§3.1). Neural lattice language models define a lattice over possible paths through a sentence, and maximize the marginal probability over all paths that lead to generating the reference sentence, as shown in Fig. 1. Depending on how we define these paths, we can incorporate different assumptions about how language should be modeled.
In the particular instantiations of neural lattice language models covered by this paper, we focus on two properties of language that could potentially be of use in language modeling: the existence of multi-word lexical units (Zgusta, 1967) (§4.1) and polysemy (Ravin and Leacock, 2000) (§4.2). Neural lattice language models allow the model to incorporate these aspects in an end-to-end fashion by simply adjusting the structure of the underlying lattices.
We run experiments to explore whether these modifications improve the performance of the model ( §5). Additionally, we provide qualitative visualizations of the model to attempt to understand what types of multi-token phrases and polysemous embeddings have been learned.

Language Models
Consider a sequence X for which we want to calculate its probability. Assume we have a vocabulary from which we can select a unique list of |X| tokens x_1, x_2, ..., x_{|X|} such that X = [x_1; x_2; ...; x_{|X|}], i.e. the concatenation of the tokens (with an appropriate delimiter). These tokens can be at either the character level (Hwang and Sung, 2017; Ling et al., 2015) or the word level (Inan et al., 2017; Merity et al., 2016). Using the chain rule, language models generally factorize p(X) as

p(X) = \prod_{t=1}^{|X|} p(x_t | x_1, x_2, \ldots, x_{t-1}).    (1)

Note that this factorization is exact only in the case where the segmentation is unique. In character-level models, it is easy to see that this property is maintained, because each token is unique and non-overlapping. In word-level models, this also holds, because tokens are delimited by spaces, and no word contains a space.
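As a concrete toy illustration of this factorization, the probability of a sequence is just the product of per-token conditional probabilities. The bigram conditionals below are invented purely for the example:

```python
# Toy bigram conditionals p(x_t | x_{t-1}); "<s>" marks the sentence start.
# All probability values are invented for illustration only.
cond_prob = {
    ("<s>", "the"): 0.5,
    ("the", "dog"): 0.2,
    ("dog", "barked"): 0.1,
    ("barked", "."): 0.7,
}

def sequence_prob(tokens):
    """Chain-rule factorization: p(X) = prod_t p(x_t | history)."""
    prob, prev = 1.0, "<s>"
    for tok in tokens:
        prob *= cond_prob[(prev, tok)]
        prev = tok
    return prob

print(sequence_prob(["the", "dog", "barked", "."]))  # 0.5 * 0.2 * 0.1 * 0.7 ≈ 0.007
```

A real language model replaces the lookup table with a learned conditional distribution over the full history.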

Recurrent Neural Networks
Recurrent neural networks have emerged as the state-of-the-art approach to approximating p(X). In particular, the LSTM cell (Hochreiter and Schmidhuber, 1997) is a specific RNN architecture which has been shown to be effective on many tasks, including language modeling (Press and Wolf, 2017; Jozefowicz et al., 2016; Merity et al., 2016; Inan et al., 2017).¹ LSTM language models recursively calculate the hidden and cell states (h_t and c_t respectively) given the input embedding e_{t-1} corresponding to token x_{t-1}:

h_t, c_t = LSTM(h_{t-1}, c_{t-1}, e_{t-1}; θ),    (2)

then calculate the probability of the next token given the hidden state, generally by performing an affine transform parameterized by W and b, followed by a softmax:

p(x_t | x_{<t}) = softmax(W h_t + b).    (3)


Neural Lattice Language Models

Language Models with Ambiguous Segmentations
To reiterate, the standard formulation of language modeling in the previous section requires splitting sentence X into a unique set of tokens x_1, ..., x_{|X|}. Our proposed method generalizes the previous formulation to remove the requirement of uniqueness of segmentation, similar to the approach used in non-neural n-gram language models such as Dupont and Rosenfeld (1997) and Goldwater et al. (2007). First, we define some terminology. We use the term "token", designated by x_i, to describe any indivisible item in our vocabulary that has no other vocabulary item as its constituent part. We use the term "chunk", designated by k_i or x_i^j, to describe a sequence of one or more tokens that represents a portion of the full string X, containing the unit tokens x_i through x_j:

x_i^j = [x_i; x_{i+1}; \ldots; x_j].

We also refer to the "token vocabulary", which is the subset of the vocabulary containing only tokens, and to the "chunk vocabulary", which similarly contains all chunks.
Note that we can factorize the probability of any sequence of chunks K using the chain rule, in precisely the same way as sequences of tokens:

p(K) = \prod_{t=1}^{|K|} p(k_t | k_1, k_2, \ldots, k_{t-1}).    (4)

We can factorize the overall probability of a token list X in terms of its chunks by using the chain rule, and marginalizing over all segmentations. For any particular token list X, we define a set of valid segmentations S(X), such that for every sequence s ∈ S(X), with s_0 = 1 and s_{|s|} = |X| + 1,

X = [x_{s_0}^{s_1 - 1}; x_{s_1}^{s_2 - 1}; \ldots; x_{s_{|s|-1}}^{s_{|s|} - 1}].

The factorization is:

p(X) = \sum_{s ∈ S(X)} \prod_{t=1}^{|s|} p(x_{s_{t-1}}^{s_t - 1} | x_{s_0}^{s_1 - 1}, \ldots, x_{s_{t-2}}^{s_{t-1} - 1}).    (5)

Note that, by definition, there exists a unique segmentation of X in which every chunk is a single token, in which case |s| = |X|. When only that one unique segmentation is allowed per X, S(X) contains only that one element, so the summation drops out, and therefore for standard character-level and word-level models, Eq. (5) reduces to Eq. (4), as desired. However, for models that license multiple segmentations per X, computing this marginalization directly is generally intractable. For example, consider segmenting a sentence using a vocabulary containing all words and all 2-word expressions. The size of S(X) would grow exponentially with the number of words in X, meaning we would have to marginalize over trillions of unique segmentations for even modestly-sized sentences.
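The exponential blow-up is easy to see by brute force. The sketch below (illustrative only) enumerates every segmentation with chunks up to a maximum length; with chunks of length at most 2, the count follows the Fibonacci sequence, i.e. it grows exponentially in sentence length:

```python
def segmentations(tokens, max_len):
    """Enumerate every split of `tokens` into chunks of length <= max_len."""
    if not tokens:
        yield []
        return
    for n in range(1, min(max_len, len(tokens)) + 1):
        head = tuple(tokens[:n])
        for rest in segmentations(tokens[n:], max_len):
            yield [head] + rest

sent = ["the", "dog", "barked", "."]
print(len(list(segmentations(sent, max_len=2))))      # 5 (Fibonacci growth)
print(len(list(segmentations(sent * 5, max_len=2))))  # 10946: already large at 20 tokens
```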

Lattice Language Models
To avoid this, it is possible to re-organize the computations in a lattice, which allows us to dramatically reduce the number of computations required (Dupont and Rosenfeld, 1997;Neubig et al., 2010).
All segmentations of X can be expressed as paths through a lattice over token-level prefixes of X: x_{<1}, x_{<2}, ..., X. The infimum is the empty prefix x_{<1}; the supremum is X; an edge from prefix x_{<i} to prefix x_{<j} exists if and only if there exists a chunk x_i^{j-1} in our chunk vocabulary such that [x_{<i}; x_i^{j-1}] = x_{<j}. Each path through the lattice from x_{<1} to X is a segmentation of X into the chunks on the traversed edges, as seen in Fig. 1.
The probability of a specific prefix, p(x_{<j}), is calculated by marginalizing over all segmentations of that prefix, i.e. all segmentations s with s_{|s|} = j:

p(x_{<j}) = \sum_{s ∈ S(x_{<j})} \prod_{t=1}^{|s|} p(x_{s_{t-1}}^{s_t - 1} | x_{s_0}^{s_1 - 1}, \ldots, x_{s_{t-2}}^{s_{t-1} - 1}).    (6)

The key insight that allows us to calculate this efficiently is that this formula is recursive: instead of marginalizing over all segmentations, we can marginalize over the immediate predecessor edges in the lattice, A_j. Each item in A_j is a location i, which indicates that the edge between prefix x_{<i} and prefix x_{<j}, corresponding to the chunk x_i^{j-1}, exists in the lattice. We can thus calculate p(x_{<j}) as

p(x_{<j}) = \sum_{i ∈ A_j} p(x_{<i}) · p(x_i^{j-1} | x_{<i}).    (7)

Since X is the supremum prefix node, we can use this formula to calculate p(X). In order to do this, we need to calculate the probability of each of its predecessors. Each of those takes up to |X| calculations, meaning that the computation for p(X) can be done in O(|X|²) time. If we can guarantee that each node will have a maximum number of incoming edges D so that |A_j| ≤ D for all j, then this bound can be reduced to O(D|X|) time.² The proposed technique is completely agnostic to the shape of the lattice, and Fig. 2 illustrates several potential varieties of lattices. Depending on how the lattice is constructed, this approach can be useful in a variety of different contexts, two of which we discuss in §4.
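The forward recursion above can be sketched in a few lines. The helper below computes log p(X) over a lattice; the uniform chunk model passed in is a stand-in for the conditional probabilities a trained LSTM would supply:

```python
import math

def lattice_logprob(tokens, chunk_logprob, max_len):
    """Forward recursion over a lattice: alpha[j] = log p(x_{<j})."""
    alpha = [-math.inf] * (len(tokens) + 1)
    alpha[0] = 0.0  # the empty prefix has probability 1
    for j in range(1, len(tokens) + 1):
        terms = []
        for i in range(max(0, j - max_len), j):  # candidate predecessor set A_j
            lp = chunk_logprob(tuple(tokens[i:j]), i)  # log p(chunk | x_{<i})
            if lp is not None:  # None: chunk not in vocabulary, so no edge
                terms.append(alpha[i] + lp)
        if terms:  # log-sum-exp over predecessor edges
            m = max(terms)
            alpha[j] = m + math.log(sum(math.exp(v - m) for v in terms))
    return alpha[-1]  # log p(X)

# Uniform toy model: every in-vocabulary chunk gets probability 0.1,
# independent of context (a stand-in for the LSTM's conditional).
vocab = {("the",), ("dog",), ("barked",), (".",), ("the", "dog")}
lp = lattice_logprob(["the", "dog", "barked", "."],
                     lambda c, i: math.log(0.1) if c in vocab else None, 2)
print(math.exp(lp))  # two paths: 0.1**4 + 0.1**3 ≈ 0.0011
```

Each prefix probability is computed once and reused, which is exactly what turns the exponential sum over segmentations into the quadratic (or, with bounded in-degree, linear) recursion.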

Neural Lattice Language Models
There is still one missing piece in our attempt to apply neural language models to lattices. Within our overall probability in Eq. (7), we must calculate the probability p(x_i^{j-1} | x_{<i}) of the next chunk given the history. However, given that there are potentially an exponential number of paths through the lattice leading to x_{<i}, this is not as straightforward as in the case where only one segmentation is possible. Previous work on lattice-based language models (Neubig et al., 2010; Dupont and Rosenfeld, 1997) utilized count-based n-gram models, which depend on only a limited historical context at each step, making it possible to compute the marginal probabilities in an exact and efficient manner through dynamic programming. On the other hand, recurrent neural models depend on the entire context, causing them to lack this ability. Our primary technical contribution is therefore to describe several techniques for incorporating lattices into a neural framework with infinite context, by providing ways to approximate the hidden state of the recurrent neural network.

Figure 2: Example of (a) a single-path lattice, (b) a sparse lattice, (c) a dense lattice with D = 2, and (d) a multilattice with D = 2, for the sentence "the dog barked ."

Direct Approximation
One approach to approximating the hidden state is the TreeLSTM framework described by Tai et al. (2015).³ In the TreeLSTM formulation, new states are derived from multiple predecessors by simply summing the individual hidden and cell state vectors of each of them. For each predecessor location i ∈ A_j, we first calculate the local hidden state h̃_i and local cell state c̃_i by combining the embedding of the chunk x_i^{j-1} with the hidden state of the LSTM at x_{<i} using the standard LSTM update function as in Eq. (2). We then sum the local hidden and cell states:

h_j = \sum_{i ∈ A_j} h̃_i,    c_j = \sum_{i ∈ A_j} c̃_i.

This framework has been used before for calculating neural sentence representations involving lattices by Su et al. (2016) and Sperber et al. (2017), but not for the language models that are the target of this paper. This formulation is powerful, but comes at the cost of sacrificing the probabilistic interpretation of which paths are likely. Therefore, even if almost all of the probability mass comes through the "true" segmentation, the hidden state may still be heavily influenced by all of the "bad" segmentations as well.
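A minimal sketch of this merge, with a stand-in for the learned LSTM update (the real Eq. (2) uses learned gates; the fixed affine map here only makes the shapes concrete):

```python
import numpy as np

def lstm_step(h_prev, c_prev, e):
    """Stand-in for the LSTM update of Eq. (2); a real model uses
    learned gates."""
    c = 0.5 * c_prev + 0.5 * e
    h = np.tanh(c + 0.1 * h_prev)
    return h, c

def direct_approximation(pred_states, chunk_embs):
    """TreeLSTM-style merge (Tai et al., 2015): one LSTM update per
    predecessor edge, then sum the local hidden and cell states."""
    local = [lstm_step(h, c, e) for (h, c), e in zip(pred_states, chunk_embs)]
    h_j = sum(h for h, _ in local)
    c_j = sum(c for _, c in local)
    return h_j, c_j

d = 4  # toy hidden dimension
preds = [(np.zeros(d), np.zeros(d)), (0.1 * np.ones(d), 0.1 * np.ones(d))]
embs = [0.3 * np.ones(d), 0.2 * np.ones(d)]
h_j, c_j = direct_approximation(preds, embs)
print(h_j.shape, c_j.shape)  # (4,) (4,)
```

Note that the sum is unweighted: every incoming edge contributes equally, regardless of how probable its path is, which is exactly the loss of probabilistic interpretation discussed above.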

Monte-Carlo Approximation
Another approximation that has been proposed is to sample one predecessor state from all possible predecessors, as seen in Chan et al. (2017). We can calculate the total probability that we reach some prefix x_{<j}, and we know how much of this probability comes from each of its predecessors in the lattice, so we can construct a probability distribution over predecessors:

M(x_{<i}) = p(x_{<i}) · p(x_i^{j-1} | x_{<i}) / p(x_{<j}).

Therefore, one way to update the LSTM is to sample one predecessor x_{<i} from the distribution M and simply set h_j = h̃_i and c_j = c̃_i. However, sampling is unstable and difficult to train: we found that the model tended to over-sample short tokens early on during training, and thus segmented every sentence into unigrams. This is similar to the outcome reported by Chan et al. (2017), who accounted for it by incorporating a mechanism that encourages exploration.
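The sampling update can be sketched as follows; the per-edge masses are invented for illustration:

```python
import random

# Unnormalized mass reaching prefix x_{<j} via each predecessor i:
# p(x_{<i}) * p(x_i^{j-1} | x_{<i}).  Values invented for illustration.
mass = {1: 0.02, 3: 0.06}
total = sum(mass.values())              # = p(x_{<j})
M = {i: m / total for i, m in mass.items()}

def sample_predecessor(M, rng=random):
    """Draw one predecessor i ~ M; the LSTM state for x_{<j} is then
    copied from that single edge's local state."""
    r, acc = rng.random(), 0.0
    for i, p in M.items():
        acc += p
        if r < acc:
            return i
    return i  # guard against floating-point round-off

print(M)  # ≈ {1: 0.25, 3: 0.75}
```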

Marginal Approximation
In another approach, which allows us to incorporate information from all predecessors while maintaining a probabilistic interpretation, we can utilize the probability distribution M to instead calculate the expected value of the hidden state:

h_j = \sum_{i ∈ A_j} M(x_{<i}) h̃_i,    c_j = \sum_{i ∈ A_j} M(x_{<i}) c̃_i.
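A sketch of this expectation, with an invented predecessor distribution and invented local states:

```python
import numpy as np

def marginal_approximation(M, local_states):
    """Expected LSTM state under the predecessor distribution M:
    h_j = sum_i M(x_{<i}) * h~_i, and likewise for the cell state."""
    h_j = sum(M[i] * h for i, (h, _) in local_states.items())
    c_j = sum(M[i] * c for i, (_, c) in local_states.items())
    return h_j, c_j

# Invented distribution and local states for two predecessor edges.
M = {1: 0.25, 3: 0.75}
states = {1: (np.array([1.0, 0.0]), np.array([0.5, 0.5])),
          3: (np.array([0.0, 1.0]), np.array([0.1, 0.1]))}
h_j, c_j = marginal_approximation(M, states)
print(h_j)  # [0.25 0.75]
```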

Gumbel-Softmax Interpolation
The Gumbel-Softmax trick, or concrete distribution, described by Jang et al. (2017) and Maddison et al. (2017), is a technique for incorporating discrete choices into differentiable neural computations. In this case, we can use it to select a predecessor. The Gumbel-Softmax trick works by taking advantage of the fact that adding Gumbel noise to the pre-softmax predecessor scores and then taking the argmax is equivalent to sampling from the probability distribution. By replacing the argmax with a softmax function scaled by a temperature τ, we can get this pseudo-sampled distribution through a fully differentiable computation:

N(x_{<i}) = softmax((log M(x_{<i}) + g_i) / τ),  where g_i ~ Gumbel(0, 1).

This new distribution can then be used to calculate the hidden state by taking a weighted average of the states of the possible predecessors:

h_j = \sum_{i ∈ A_j} N(x_{<i}) h̃_i,    c_j = \sum_{i ∈ A_j} N(x_{<i}) c̃_i.

When τ is large, the values of N(x_{<i}) are flattened out; therefore, all the predecessor hidden states are summed with approximately equal weight, equivalent to the direct approximation (§3.3.1). On the other hand, when τ is small, the output distribution becomes extremely peaky, and one predecessor receives almost all of the weight. Each predecessor x_{<i} has a chance of being selected equal to M(x_{<i}), which makes it identical to ancestral sampling (§3.3.2). By slowly annealing the value of τ, we can smoothly interpolate between these two approaches, and end up with a probabilistic interpretation that avoids the instability of pure sampling-based approaches.
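A NumPy sketch of the trick; M is an invented two-predecessor distribution, and the temperature values are chosen only to show the two extremes:

```python
import numpy as np

def gumbel_softmax_weights(log_M, tau, rng):
    """N ∝ softmax((log M + Gumbel noise) / tau): large tau gives
    near-uniform weights (direct approximation), small tau gives a
    near-one-hot vector (ancestral sampling)."""
    g = -np.log(-np.log(rng.uniform(size=log_M.shape)))  # Gumbel(0, 1) noise
    z = (log_M + g) / tau
    z = z - z.max()  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

rng = np.random.default_rng(0)
log_M = np.log(np.array([0.25, 0.75]))  # invented predecessor distribution
hot = gumbel_softmax_weights(log_M, tau=1e-6, rng=rng)    # nearly one-hot
flat = gumbel_softmax_weights(log_M, tau=100.0, rng=rng)  # nearly uniform
print(hot.round(3), flat.round(3))
```

Annealing τ from large to small during training corresponds to moving smoothly from the first regime to the second.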

Instantiations of Neural Lattice LMs
In this section, we introduce two instantiations of neural lattice language models aiming to capture features of language: the existence of coherent multi-token chunks, and the existence of polysemy.

Motivation
Natural language phrases often demonstrate significant non-compositionality: for example, in English, the phrase "rock and roll" is a genre of music, but this meaning is not obtained by viewing the words in isolation. In word-level language modeling, the network is given each of these words as input, one at a time; this means it must capture the idiomaticity in its hidden states, which is quite roundabout and potentially a waste of the limited parameters in a neural network model. A straightforward solution is to have an embedding for the entire multi-token phrase, and use this to input the entire phrase to the LSTM in a single timestep. However, it is also important that the model is able to decide whether the non-compositional representation is appropriate given the context: sometimes, "rock" is just a rock.
Additionally, by predicting multiple tokens in a single timestep, we are able to decrease the number of timesteps across which the gradient must travel, making it easier for information to be propagated across the sentence. This is even more useful in non-space-delimited languages such as Chinese, in which segmentation is non-trivial, but character-level modeling leads to many sentences being hundreds of tokens long.
There is also psycho-linguistic evidence which supports the idea that humans incorporate multi-token phrases into their mental lexicon. Siyanova-Chanturia et al. (2011) show that native speakers of a language have significantly reduced response time when processing idiomatic phrases, whether they are used in an idiomatic sense or not, while Bannard and Matthews (2008) show that children learning a language are better at speaking common phrases than uncommon ones. This evidence lends credence to the idea that multi-token lexical units are a useful tool for language modeling in humans, and so may also be useful in computational models.

Modeling Strategy
The underlying lattices utilized in our multi-token phrase experiments are "dense" lattices: lattices where every edge (below a certain length L) is present (Fig. 2, c). This is for two reasons. First, since every sequence of tokens is given an opportunity to be included in the path, all segmentations are candidates, which will potentially allow us to discover arbitrary types of segmentations without a prejudice towards a particular theory of which multi-token units we should be using. Second, using a dense lattice makes minibatching very straightforward by ensuring that the computation graphs for each sentence are identical. If the lattices were not dense, the lattices of various sentences in a minibatch could be different; it then becomes necessary to either calculate a differently-shaped graph for every sentence, preventing minibatching and hurting training efficiency, or calculate and then mask out the missing edges, leading to wasted computation. Since only edges of length L or less are present, the maximum in-degree D of any node in the lattice is no greater than L, giving us the time bound O(L|X|).
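The dense-lattice structure can be written down directly: since an edge i → j exists whenever j − i ≤ L, the predecessor sets are just index ranges. A minimal sketch:

```python
def dense_lattice_predecessors(n_tokens, L):
    """Predecessor sets A_j for a dense lattice: an edge i -> j exists for
    every chunk of length 1..L, so |A_j| <= L for all j."""
    return {j: list(range(max(0, j - L), j)) for j in range(1, n_tokens + 1)}

A = dense_lattice_predecessors(4, L=2)
print(A)  # {1: [0], 2: [0, 1], 3: [1, 2], 4: [2, 3]}
```

Because these sets depend only on sentence length and L, every sentence of the same length shares one computation graph, which is what makes minibatching trivial.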

Token Vocabularies
Storing an embedding for every possible multi-token chunk would require |V|^L unique embeddings, which is intractable. Therefore, we construct our multi-token embeddings by merging compositional and non-compositional representations.
Non-compositional Representation We first establish a priori a set of "core" chunk-level tokens that each have a dense embedding. In order to guarantee full coverage of sentences, we first add every unit-level token to this vocabulary, e.g. every word in the corpus for a word-level model. Following this, we also add the most frequent n-grams (where 1 < n ≤ L). This ensures that the vast majority of sentences will have several longer chunks appear within them, and so will be able to take advantage of tokens at larger granularities.
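Constructing this core chunk vocabulary can be sketched as follows (toy data; a real model counts n-grams over the full training corpus):

```python
from collections import Counter

def build_chunk_vocab(sentences, L, top_k):
    """Core chunk vocabulary: all unit-level tokens, plus the top_k most
    frequent n-grams with 1 < n <= L."""
    vocab = {(tok,) for sent in sentences for tok in sent}  # full coverage
    counts = Counter()
    for sent in sentences:
        for n in range(2, L + 1):
            for i in range(len(sent) - n + 1):
                counts[tuple(sent[i:i + n])] += 1
    vocab.update(ngram for ngram, _ in counts.most_common(top_k))
    return vocab

sents = [["the", "dog", "barked"], ["the", "dog", "ran"]]
v = build_chunk_vocab(sents, L=2, top_k=1)
print(("the", "dog") in v)  # True: the most frequent bigram is added
```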
Compositional Representation However, the non-compositional embeddings above only account for a subset of all n-grams, so we additionally construct compositional embeddings for each chunk by running a BiLSTM encoder over the individual embeddings of each unit-level token within it (Dyer et al., 2016). In this way, we can create a unique embedding for every sequence of unit-level tokens.
We use this composition function on chunks regardless of whether they are assigned non-compositional embeddings or not, as even high-frequency chunks may display compositional properties. Thus, for every chunk, we compute the chunk embedding vector x_i^j by concatenating the compositional embedding with the non-compositional embedding if it exists, or otherwise with an <UNK> embedding.
Sentinel Mixture Model for Predictions At each timestep, we want to use our LSTM hidden state h_t to assign some probability mass to every chunk with length less than L. To do this, we follow Merity et al. (2016) in creating a new "sentinel" token <s> and adding it to our vocabulary. At each timestep, we first use our neural network to calculate a score for each chunk C in our vocabulary, including the sentinel token. We take a softmax across these scores to assign a probability p_main(C_{t+1} | h_t; θ) to every chunk in our vocabulary, and also to <s>. For token sequences not represented in our chunk vocabulary, p_main(C_{t+1} | h_t; θ) = 0.
Next, the probability mass assigned to the sentinel value, p_main(<s> | h_t; θ), is distributed across all possible token sequences of length less than L, using another LSTM with parameters θ_sub. Similar to Jozefowicz et al. (2016), this sub-LSTM is initialized by passing in the hidden state of the main lattice LSTM at that timestep. This gives us a probability for each sequence, p_sub(c_1, c_2, ..., c_L | h_t; θ_sub).
The final formula for calculating the probability mass assigned to a specific chunk C is:

p(C | h_t) = p_main(C | h_t; θ) + p_main(<s> | h_t; θ) · p_sub(C | h_t; θ_sub).
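A numeric sketch of this mixture, with invented values standing in for the two softmax outputs of a trained model:

```python
# Sentinel mixture (following Merity et al., 2016). The values below are
# invented stand-ins for the main softmax and the sub-LSTM output.
p_main = {("prime",): 0.45, ("prime", "minister"): 0.30, "<s>": 0.25}

def p_sub(chunk):
    """Stand-in for the sub-LSTM's probability of spelling out `chunk`."""
    return {("prime", "rib"): 0.01}.get(chunk, 0.001)

def chunk_prob(chunk):
    in_vocab = p_main.get(chunk, 0.0)  # 0 for chunks outside the vocabulary
    return in_vocab + p_main["<s>"] * p_sub(chunk)

print(chunk_prob(("prime", "minister")))  # ≈ 0.30 + 0.25 * 0.001 = 0.30025
print(chunk_prob(("prime", "rib")))       # ≈ 0.00 + 0.25 * 0.01  = 0.0025
```

Out-of-vocabulary chunks like "prime rib" still receive nonzero probability, but only through the sentinel route.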

Motivation
A second shortcoming of current language modeling approaches is that each word is associated with only one embedding. For highly polysemous words, a single embedding may be unable to represent all meanings effectively.
There has been past work in word embeddings which has shown that using multiple embeddings for each word is helpful in constructing a useful representation. Athiwaratkun and Wilson (2017) represented each word with a multimodal Gaussian distribution and demonstrated that embeddings of this form were able to outperform more standard skipgram embeddings on word similarity and entailment tasks. Similarly, Chen et al. (2015) incorporate standard skip-gram training into a Gaussian mixture framework and show that this improves performance on several word similarity benchmarks.
When a polysemous word is represented using only a single embedding in a language modeling task, the multimodal nature of the true embedding distribution may cause the resulting embedding to be both high-variance and skewed from the positions of each of the true modes. Thus, it is likely useful to represent each token with multiple embeddings when doing language modeling.

Modeling Strategy
For our polysemy experiments, the underlying lattices are "multilattices": lattices which are also multigraphs, and can have any number of edges between any given pair of nodes (Fig. 2, d). Lattices set up in this manner allow us to incorporate multiple embeddings for each word. Within a single sentence, any pair of nodes corresponds to the start and end of a particular subsequence of the full sentence, and is thus associated with a specific token; each edge between them is a unique embedding for that token. While many strategies for choosing the number of embeddings exist in the literature (Neelakantan et al., 2014), in this work, we choose a number of embeddings E and assign that many embeddings to each word. This ensures that the maximum in-degree D of any node in the lattice is no greater than E, giving us the time bound O(E|X|).
In this work, we do not explore models that include both chunk vocabularies and multiple embeddings. However, combining these two techniques, as well as exploring other, more complex lattice structures, is an interesting avenue for future work.

Data
We perform experiments on two languages, English and Chinese, which provide an interesting contrast in linguistic features.⁴

In English, the most common recent benchmark for language modeling is the Penn Treebank, specifically the version preprocessed by Mikolov et al. (2010). However, this corpus is limited by being relatively small, containing only approximately 45,000 sentences, which we found to be insufficient to effectively train lattice language models.⁵ Thus, we instead used the Billion Word Corpus (Chelba et al., 2014). Past experiments on the BWC typically modeled every word without restricting the vocabulary, which results in a number of challenges regarding the modeling of open vocabularies that are orthogonal to this work. Thus, we create a preprocessed version of the data in the same manner as Mikolov, lowercasing the words, replacing numbers with <N> tokens, and <UNK>ing all words beyond the ten thousand most common. Additionally, we restricted the data set to only include sentences of length 50 or less, ensuring that large minibatches could fit in GPU memory. Our subsampled English corpus contained 29,869,166 sentences, of which 29,276,669 were used for training, 5,000 for validation, and 587,497 for testing. To validate that our methods scale up to larger language modeling scenarios, we also report a smaller set of large-scale experiments on the full billion word benchmark in Appendix A.

⁴ Code to reproduce datasets and experiments is available at: http://github.com/jbuckman/neural-lattice-language-models
⁵ Experiments using multi-word units resulted in overfitting, regardless of normalization and hyperparameter settings.
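The preprocessing described above can be sketched as follows (an illustrative approximation; the exact number-normalization rule used in the original Mikolov preprocessing may differ):

```python
import re
from collections import Counter

def preprocess(sentences, vocab_size=10000, max_len=50):
    """Sketch of the corpus preprocessing: lowercase, map numbers to <N>,
    keep the vocab_size most common words, <UNK> the rest, and drop
    sentences longer than max_len tokens."""
    norm = []
    for s in sentences:
        toks = [re.sub(r"^\d+([.,]\d+)*$", "<N>", w.lower()) for w in s.split()]
        if len(toks) <= max_len:
            norm.append(toks)
    counts = Counter(t for s in norm for t in s)
    keep = {w for w, _ in counts.most_common(vocab_size)}
    return [[t if t in keep else "<UNK>" for t in s] for s in norm]

out = preprocess(["The dog barked 42 times .", "Zyzzyva ."], vocab_size=5)
print(out[0])  # ['the', 'dog', 'barked', '<N>', '<UNK>', '.']
```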
In Chinese, we ran experiments on a subset of the Chinese GigaWord corpus. Chinese is also particularly interesting because unlike English, it does not use spaces to delimit words, so segmentation is non-trivial. Therefore, we used a character-level language model for the baseline, and our lattice was composed of multi-character chunks. We used sentences from Guangming Daily, again <UNK>ing all but the 10,000 most common tokens and restricting the selected sentences to only include sentences of length 150 or less. Our subsampled Chinese corpus included 934,101 sentences for training, 5,000 for validation, and 30,547 for testing.

Main Experiments
We compare a baseline LSTM model, dense lattices of size 1, 2, and 3, and a multilattice with 2 and 3 embeddings per word.
The implementation of our networks was done in DyNet. All LSTMs had 2 layers, each with a hidden dimension of 200. Variational dropout (Gal and Ghahramani, 2016) of .2 was used in the Chinese experiments, but hurt performance on the English data, so it was not used there. The 10,000 word embeddings each had dimension 256. For lattice models, chunk vocabularies were selected by taking the 10,000 words in the vocabulary and adding the most common 10,000 n-grams with 1 < n ≤ L. The weights on the final layer of the network were tied with the input embeddings, as done by Press and Wolf (2017) and Inan et al. (2017). In all lattice models, hidden states were computed using weighted expectation (§3.3.3) unless mentioned otherwise. In multi-embedding models, embedding sizes were decreased so as to maintain the same total number of parameters. All models were trained using the Adam optimizer with a learning rate of .01 on an NVIDIA K80 GPU. The results can be seen in Table 1 and Table 2.
In the multi-token phrase experiments, many additional parameters are accrued by the BiLSTM encoder and sub-LSTM predictive model, making them not strictly comparable to the baseline. To account for this, we include results for L = 1, which, like the baseline LSTM approach, fails to leverage multi-token phrases, but includes the same number of parameters as L = 2 and L = 3.
In both the English and Chinese experiments, we see the same trend: increasing the maximum lattice size decreases the perplexity, and for L = 2 and above, the neural lattice language model outperforms the baseline. Similarly, increasing the number of embeddings per word decreases the perplexity, and for E = 2 and above, the multiple-embedding model outperforms the baseline.

Hidden State Calculation Experiments
We compare the various hidden-state calculation approaches discussed in Section 3.3 on the English data using a lattice of size L = 2 and dropout of .2. These results can be seen in Table 3. For all hidden state calculation techniques, the neural lattice language models outperform the LSTM baseline. The ancestral sampling technique used by Chan et al. (2017) is worse than the others, which we found to be due to it getting stuck in a local minimum which represents almost everything as unigrams. There is only a small difference between the perplexities of the other techniques.

Discussion and Analysis
Neural lattice language models convincingly outperform an LSTM baseline on the task of language modeling. One interesting note is that in English, which is already tokenized into words and highly polysemous, utilizing multiple embeddings per word is more effective than including multi-word tokens. In contrast, in the experiments on the Chinese data, increasing the lattice size of the multi-character tokens is more important than increasing the number of embeddings per character. This corresponds to our intuition; since Chinese is not tokenized to begin with, utilizing models that incorporate segmentation and compositionality of elementary units is very important for effective language modeling.
To calculate the probability of a sentence, the neural lattice language model implicitly marginalizes across latent segmentations. By inspecting the probabilities assigned to various edges of the lattice, we can visualize these segmentations, as is done in Fig. 3. The model successfully identifies bigrams which correspond to non-compositional compounds, like "prime minister", and bigrams which correspond to compositional compounds, such as "a quarter". Interestingly, this does not occur for all high-frequency bigrams; it ignores those that are not inherently meaningful, such as "<UNK> in", yielding qualitatively good phrases.
Figure 3: Segmentation of three sentences randomly sampled from the test corpus, using L = 2. Green numbers show probability assigned to token sizes. For example, the first three words in the first sentence have a 59% and 41% chance of being "please let me" or "please let me" respectively. Boxes around words show greedy segmentation.

In the multiple-embedding experiments, it is possible to see which of the two embeddings of a word was assigned the higher probability for any specific test-set sentence. In order to visualize what types of meanings are assigned to each embedding, we select sentences in which one embedding is preferred, and look at the context in which the word is used. Several examples of this can be seen in Table 4; it is clear from looking at these examples that the system does learn distinct embeddings for different senses of the word. What is interesting, however, is that it does not necessarily learn intuitive semantic meanings; instead it tends to group the words by the context in which they appear. In some cases, like profile and edition, one of the two embeddings simply captures an idiosyncrasy of the training data.
Additionally, for some words, such as rodham in Table 4, the system always prefers one embedding. This is promising, because it means that in future work it may be possible to further improve accuracy and training efficiency by assigning more embeddings to polysemous words, instead of assigning the same number of embeddings to all words.

Related Work
Past work that utilized lattices in neural models for natural language processing centers around using these lattices in the encoder portion of machine translation. Su et al. (2016) utilized a variation of the Gated Recurrent Unit that operated over lattices, and preprocessed lattices over Chinese characters that allowed it to effectively encode multiple segmentations. Additionally, Sperber et al. (2017) proposed a variation of the TreeLSTM with the goal of creating an encoder over speech lattices in speech-to-text. Our work tackles language modeling rather than encoding, and thus addresses the issue of marginalization over the lattice.
Another recent work which marginalized over multiple paths through a sentence is Ling et al. (2016). The authors tackle the problem of code generation, where some components of the code can be copied from the input, via a neural network. Our work expands on this by handling multi-word tokens as input to the neural network, rather than passing in one token at a time.
Neural lattice language models improve accuracy by helping the gradient flow over shorter paths, preventing vanishing gradients. Many hierarchical neural language models have been proposed with a similar objective (Koutnik et al., 2014; Zhou et al., 2017). Our work is distinguished from these by the use of latent token-level segmentations that capture meaning directly, rather than simply being high-level mechanisms to encourage gradient flow. Chan et al. (2017) propose a model for predicting characters at multiple granularities in the decoder segment of a machine translation system. Our work expands on theirs by considering the entire lattice at once, rather than considering only a single path through the lattice via ancestral sampling. This allows us to train end-to-end without the model collapsing to a local minimum, with no exploration bonus needed. Additionally, we propose a broader class of models, including those incorporating polysemous words, and apply our model to the task of word-level language modeling, rather than character-level transcription.
Concurrently with this work, van Merriënboer et al. (2017) proposed a neural language model that can similarly handle multiple scales. Our work is differentiated in that it is more general: it utilizes an open multi-token vocabulary, proposes multiple techniques for hidden-state calculation, and handles polysemy using multi-embedding lattices.

Future Work
In the future, we would like to experiment with neural lattice language models in extrinsic evaluations, such as machine translation and speech recognition. Additionally, in the current model the non-compositional embeddings must be selected a priori, and this selection may be suboptimal. We are exploring techniques to store fixed embeddings dynamically, so that the non-compositional phrases can be selected as part of end-to-end training.

Conclusion
In this work, we have introduced the idea of a neural lattice language model, which allows us to marginalize over all segmentations of a sentence in an end-to-end fashion. In our experiments on the Billion Word Corpus and the Chinese GigaWord corpus, we demonstrated that the neural lattice language model outperforms an LSTM-based baseline at language modeling, both when used to incorporate multi-word phrases and when used to incorporate multi-embedding words. Qualitatively, we observed that the latent segmentations generated by the model correspond well to human intuitions about multi-word phrases, and that the varying usage of words with multiple embeddings also appears sensible.

A Large-Scale Experiments
To verify that our findings scale to state-of-the-art language models, we also compared a baseline model, dense lattices of size 1 and 2, and a multi-lattice with 2 embeddings per word on the full byte-pair-encoded Billion Word Corpus.
In this set of experiments, we take the full Billion Word Corpus and apply byte-pair encoding as described by Sennrich et al. (2015) to construct a vocabulary of 10,000 sub-word tokens. Our model consists of three LSTM layers, each with 1500 hidden units. We train the model for a single epoch over the corpus, using the Adam optimizer with learning rate 0.0001 on a P100 GPU. We use a batch size of 40 and variational dropout of 0.1. The 10,000 sub-word embeddings each have dimension 600. For lattice models, chunk vocabularies were selected by taking the 10,000 sub-words in the vocabulary and adding the most common 10,000 n-grams with 1 < n ≤ L. The weights on the final layer of the network were tied with the input embeddings, as done by Press and Wolf (2017) and Inan et al. (2017). In all lattice models, hidden states were computed using weighted expectation ( §3.3.3). In multi-embedding models, embedding sizes were decreased so as to maintain the same total number of parameters.
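The chunk-vocabulary selection described above can be sketched as a simple n-gram count. This is a hypothetical reimplementation under our own naming (`build_chunk_vocab` is not from the paper), assuming chunks are ranked purely by corpus frequency:

```python
from collections import Counter

def build_chunk_vocab(corpus, base_vocab, max_len, top_k=10_000):
    """Extend a base sub-word vocabulary with the top_k most frequent
    n-grams of sub-words, for 1 < n <= max_len."""
    counts = Counter()
    for sent in corpus:  # each sentence is a list of sub-word tokens
        for n in range(2, max_len + 1):
            for i in range(len(sent) - n + 1):
                counts[tuple(sent[i:i + n])] += 1
    chunks = [ngram for ngram, _ in counts.most_common(top_k)]
    return list(base_vocab) + chunks
```

With `max_len = L` and `top_k = 10_000` this mirrors the selection rule in the paragraph above, yielding a 20,000-entry lattice vocabulary of sub-words plus multi-sub-word chunks.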
Results of these experiments are shown in Table 5. The performance of the baseline model is roughly on par with that of state-of-the-art models on this dataset; differences can be explained by model size and hyperparameter tuning. The results show the same trend as our main experiments, indicating that the performance gains shown by our smaller neural lattice language models generalize to the much larger datasets used in state-of-the-art systems.

B Chunk Vocabulary Size
We compare a 2-lattice with a non-compositional chunk vocabulary of 10,000 phrases against a 2-lattice with a non-compositional chunk vocabulary of 20,000 phrases. The results can be seen in Table 6. Doubling the number of non-compositional embeddings decreases the perplexity, but only by a small amount. This is perhaps to be expected, given that doubling the number of embeddings corresponds to a large increase in the number of model parameters, allocated to phrases that may have less data with which to train them.
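A back-of-the-envelope calculation makes the parameter cost concrete. Assuming, purely for illustration, that the extra chunks use the 600-dimensional embeddings of the Appendix A setup (the Appendix B experiments may use a different size):

```python
# Each additional non-compositional chunk contributes one embedding vector.
extra_chunks = 10_000     # 20,000-phrase vocabulary vs. 10,000-phrase vocabulary
embedding_dim = 600       # dimension used in the Appendix A experiments
extra_params = extra_chunks * embedding_dim
print(extra_params)       # 6,000,000 additional parameters
```

Millions of added parameters, trained on the sparse tail of the phrase distribution, plausibly explain why the perplexity improvement from the larger chunk vocabulary is small.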