One Vector is Not Enough: Entity-Augmented Distributed Semantics for Discourse Relations

Discourse relations bind smaller linguistic units into coherent texts. Automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked arguments. A more subtle challenge is that it is not enough to represent the meaning of each argument of a discourse relation, because the relation may depend on links between lower-level components, such as entity mentions. Our solution computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree. We also perform a downward compositional pass to capture the meaning of coreferent entity mentions. Implicit discourse relations are then predicted from these two representations, obtaining substantial improvements on the Penn Discourse Treebank.


Introduction
The high-level organization of text can be characterized in terms of discourse relations between adjacent spans of text (Knott, 1996; Mann, 1984; Webber et al., 1999). Identifying these relations has been shown to be relevant to tasks such as summarization (Louis et al., 2010a; Yoshida et al., 2014), sentiment analysis (Somasundaran et al., 2009), and coherence evaluation. While the Penn Discourse Treebank (PDTB) now provides a large dataset annotated for discourse relations (Prasad et al., 2008), the automatic identification of implicit relations is a difficult task, with state-of-the-art performance at roughly 40% (Lin et al., 2009).
One reason for this poor performance is that predicting implicit discourse relations is a fundamentally semantic task, and the relevant semantics may be difficult to recover from surface-level features. For example, consider the implicit discourse relation between the following two sentences (also shown in Figure 1a):

(1) Bob gave Tina the burger. She was hungry.
While a connector like because seems appropriate here, there is little surface information to signal this relationship, unless the model has managed to learn a bilexical relationship between burger and hungry. Learning all such relationships from annotated data - including the relationship of hungry to knish, pierogie, pupusa, etc. - would require far more data than can possibly be annotated. We address this issue by applying a discriminatively-trained model of compositional distributional semantics to discourse relation classification (Socher et al., 2013b; Baroni et al., 2014a). The meaning of each discourse argument is represented as a vector (Turney et al., 2010), which is computed through a series of compositional operations over the syntactic parse tree. The discourse relation can then be predicted as a bilinear combination of these vector representations. Both the prediction matrix and the compositional operator are trained in a supervised large-margin framework (Socher et al., 2011), ensuring that the learned compositional operation produces semantic representations that are useful for discourse. We show that when combined with a small number of surface features, this approach outperforms prior work on the classification of implicit discourse relations in the PDTB.
Despite these positive results, we argue that purely vector-based representations are insufficiently expressive to capture discourse relations. To see why, consider what happens if we make a tiny change to example (1):

(2) Bob gave Tina the burger. He was hungry.
After changing the subject of the second sentence to Bob, the connective "because" no longer seems appropriate; a contrastive connector like although is preferred. But despite the radical difference in meaning, the distributional representation of the second sentence will be almost unchanged: the syntactic structure remains identical, and the words he and she have very similar word representations (see Figure 2). If we reduce each discourse argument span to a single vector, we cannot possibly capture the ways that discourse relations are signaled by entities and their roles (Cristea et al., 1998; Louis et al., 2010b). As Mooney (2014) puts it, "you can't cram the meaning of a whole %&!$# sentence into a single $&!#* vector!" We address this issue by computing vector representations not only for each discourse argument, but also for each coreferent entity mention. These representations are meant to capture the role played by the entity in the text, and so they must take the entire span of text into account. We compute entity-role representations using a novel feed-forward compositional model, which combines "upward" and "downward" passes through the syntactic structure, shown in Figure 1b. In the example, the downward representations for Tina and she are computed from a combination of the parent and sibling nodes in the binarized parse tree. Representations for these coreferent mentions are then combined in a bilinear product, and help to predict the implicit discourse relation. In example (2), we resolve he to Bob, and combine their vector representations instead, yielding a different prediction about the discourse relation.
Our overall approach achieves a 3% improvement in accuracy over the best previous work (Lin et al., 2009) on multiclass discourse relation classification, and also outperforms more recent work on binary classification. The novel entity-augmented distributional representation improves accuracy over the "upward" compositional model, showing the importance of representing the meaning of coreferent entity mentions.

Entity-augmented distributional semantics
We now formally define our approach to entity-augmented distributional semantics, using the notation shown in Table 1. For clarity of exposition, we focus on discourse relations between pairs of sentences. The extension to non-sentence arguments is discussed in Section 5.

Upward pass: argument semantics
Distributional representations for discourse arguments are computed in a feed-forward "upward" pass: each non-terminal in the binarized syntactic parse tree has a K-dimensional distributional representation that is computed from the distributional representations of its children, bottoming out in pre-trained representations of individual words. We follow the Recursive Neural Network (RNN) model of Socher et al. (2011). For a given parent node i, we denote the left child as l(i), and the right child as r(i); we compose their representations to obtain

    u_i = tanh(U [u_{l(i)}; u_{r(i)}]),    (1)

where tanh(·) is the element-wise hyperbolic tangent function (Pascanu et al., 2012), and U ∈ R^{K×2K} is the upward composition matrix. We apply this compositional procedure from the bottom up, ultimately obtaining the argument-level representation u_0. The base case is found at the leaves of the tree, which are set equal to pre-trained word vector representations. For example, in the second sentence of Figure 1, we combine the word representations of was and hungry to obtain u_1^{(r)}, and then combine u_1^{(r)} with the word representation of she to obtain u_0^{(r)}. Note that the upward pass is feedforward, meaning that there are no cycles and all nodes can be computed in linear time.

Table 1: Table of notation
  l(i), r(i)    left and right children of node i
  ρ(i), s(i)    parent and sibling of node i
  A(m, n)       set of aligned entities between arguments m and n
  Y             set of discourse relations
  y*            ground truth relation
  ψ(y)          decision function
  u             upward vector
  d             downward vector
  A_y           classification parameter associated with upward vectors
  B_y           classification parameter associated with downward vectors
  U             composition operator in the upward composition procedure
  V             composition operator in the downward composition procedure
  L(θ)          objective function
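To make the upward pass concrete, the following is a minimal sketch of the recursive computation over a binarized parse tree. The Node class, the toy vocabulary, and the random initialization are illustrative assumptions, not part of the trained system.

```python
import numpy as np

class Node:
    """A node in a binarized parse tree: either a leaf word or two children."""
    def __init__(self, word=None, left=None, right=None):
        self.word, self.left, self.right = word, left, right
        self.u = None  # upward vector, filled in by upward_pass

def upward_pass(node, U, word_vectors):
    """Compute u_i = tanh(U [u_l(i); u_r(i)]) bottom-up (Equation 1)."""
    if node.word is not None:                      # leaf: pre-trained word vector
        node.u = word_vectors[node.word]
    else:
        left = upward_pass(node.left, U, word_vectors)
        right = upward_pass(node.right, U, word_vectors)
        node.u = np.tanh(U @ np.concatenate([left, right]))
    return node.u

# Toy usage for "she (was hungry)" with random K-dimensional vectors.
K = 4
rng = np.random.default_rng(0)
word_vectors = {w: rng.normal(size=K) for w in ["she", "was", "hungry"]}
U = rng.normal(scale=0.1, size=(K, 2 * K))
tree = Node(left=Node(word="she"),
            right=Node(left=Node(word="was"), right=Node(word="hungry")))
u_root = upward_pass(tree, U, word_vectors)        # argument-level representation u_0
```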

Downward pass: entity semantics
As seen in the contrast between Examples 1 and 2, a model that uses a single vector representation for each discourse argument would find little to distinguish between she was hungry and he was hungry.
It would therefore almost certainly fail to identify the correct discourse relation for at least one of these cases, which requires tracking the roles played by the entities that are coreferent in each pair of sentences. To address this issue, we augment the representation of each argument with additional vectors, representing the semantics of the role played by each coreferent entity in each argument. For example, in (1a), Tina got the burger, and in (1b), she was hungry. Rather than represent this information in a logical form -which would require robust parsing to a logical representation -we represent it through additional distributional vectors.
The role of a constituent i can be viewed as a combination of information from two neighboring nodes in the parse tree: its parent ρ(i), and its sibling s(i). We can make a downward pass, computing the downward vector d_i from the downward vector of the parent d_{ρ(i)}, and the upward vector of the sibling u_{s(i)}:

    d_i = tanh(V [d_{ρ(i)}; u_{s(i)}]),    (2)

where V ∈ R^{K×2K} is the downward composition matrix. The base case of this recursive procedure occurs at the root of the parse tree, which is set equal to the upward representation, d_0 = u_0. This procedure is illustrated in Figure 1b: for Tina, the parent node is d_2^{(l)}, and the sibling is u_3^{(l)}. The up-down compositional algorithm is designed to maintain the feedforward nature of the neural network, so that we can efficiently compute all nodes without iterating. Each downward node d_i influences only other downward nodes d_j where j > i, meaning that the downward pass is feedforward. The upward pass is also feedforward: each upward node u_i influences only other upward nodes u_j where j < i. Since the upward and downward passes are each feedforward, and the downward nodes do not influence any upward nodes, the combined up-down network is also feedforward. This ensures that we can efficiently compute all u_i and d_i in time that is linear in the length of the input.
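As a companion to the upward sketch above, the following shows how the downward vectors could be filled in once every node's upward vector is available; it reuses the hypothetical Node class from the previous sketch.

```python
import numpy as np

def downward_pass(node, V, d_parent=None, u_sibling=None):
    """Compute d_i = tanh(V [d_rho(i); u_s(i)]) top-down (Equation 2).

    Assumes upward_pass has already filled node.u for every node."""
    if d_parent is None:
        node.d = node.u                  # base case at the root: d_0 = u_0
    else:
        node.d = np.tanh(V @ np.concatenate([d_parent, u_sibling]))
    if node.word is None:                # internal node: recurse into both children
        downward_pass(node.left, V, d_parent=node.d, u_sibling=node.right.u)
        downward_pass(node.right, V, d_parent=node.d, u_sibling=node.left.u)

# Usage, continuing the earlier sketch: V has the same shape as U.
# V = rng.normal(scale=0.1, size=(K, 2 * K))
# downward_pass(tree, V)   # every node now carries both node.u and node.d
```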

Connection to the inside-outside algorithm
In the inside-outside algorithm for computing marginal probabilities in a probabilistic context-free grammar (Lari and Young, 1990), the inside scores are constructed in a bottom-up fashion, like our upward nodes; the outside score for node i is constructed from a product of the outside score of the parent ρ(i) and the inside score of the sibling s(i), like our downward nodes. The standard inside-outside algorithm sums over all possible parse trees, but since the parse tree is observed in our case, a closer analogy would be to the constrained version of the inside-outside algorithm for latent variable grammars (Petrov et al., 2006). Cohen et al. (2014) describe a tensor formulation of the constrained inside-outside algorithm; similarly, we could compute the downward vectors by a tensor contraction of the parent and sibling vectors (Smolensky, 1990; Socher et al., 2013a). However, this would involve K^3 parameters, rather than the K^2 parameters in our matrix-vector composition.

Predicting discourse relations
To predict the discourse relation between an argument pair (m, n), the decision function is a sum of bilinear products,

    ψ(y) = (u_0^{(m)})^T A_y u_0^{(n)} + Σ_{(i,j) ∈ A(m,n)} (d_i^{(m)})^T B_y d_j^{(n)} + b_y,    (3)

where A_y ∈ R^{K×K} and B_y ∈ R^{K×K} are the classification parameters for relation y. A scalar b_y is used as the bias term for relation y, and A(m, n) is the set of coreferent entity mentions shared among the argument pair (m, n). The decision value ψ(y) of relation y is therefore based on the upward vectors at the root, u_0^{(m)} and u_0^{(n)}, as well as the downward vectors for each pair of aligned entity mentions. To avoid overfitting, we apply a low-dimensional approximation to each A_y,

    A_y ≈ a_{y,1} a_{y,2}^T + diag(a_{y,3}).    (4)

The same approximation is also applied to each B_y, reducing the number of classification parameters from 2 × |Y| × K^2 to 2 × |Y| × 3K.
Surface features Prior work has identified a number of useful surface-level features (Lin et al., 2009), and the classification model can easily be extended to include them. Defining φ(m,n) as the vector of surface features extracted from the argument pair (m, n), the corresponding decision function is modified as

    ψ(y) = (u_0^{(m)})^T A_y u_0^{(n)} + Σ_{(i,j) ∈ A(m,n)} (d_i^{(m)})^T B_y d_j^{(n)} + β_y^T φ(m,n) + b_y,    (5)

where β_y is the classification weight on surface features for relation y. We describe these features in Section 5.
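The following is a minimal sketch of how the decision value ψ(y) might be assembled from these pieces; the dictionary-based parameter packaging and the specific rank-one-plus-diagonal form of the low-dimensional approximation are assumptions made for illustration.

```python
import numpy as np

def low_rank(a1, a2, a3):
    """A 3K-parameter approximation of a K x K matrix, assumed here to take
    the form a1 a2^T + diag(a3)."""
    return np.outer(a1, a2) + np.diag(a3)

def score_relation(u_m, u_n, aligned_entities, phi, params_y):
    """Decision value psi(y) for one relation y (Equations 3-5), sketched.

    u_m, u_n         : upward root vectors of the two arguments
    aligned_entities : list of (d_i, d_j) downward-vector pairs for coreferent mentions
    phi              : surface feature vector for the argument pair
    params_y         : dict with per-relation parameters A, B, beta, b
    """
    A, B, beta, b = params_y["A"], params_y["B"], params_y["beta"], params_y["b"]
    score = u_m @ A @ u_n + beta @ phi + b
    for d_i, d_j in aligned_entities:
        score += d_i @ B @ d_j
    return score
```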

Large-margin learning framework
There are two sets of parameters to be learned: the classification parameters θ_class = {A_y, B_y, β_y, b_y}_{y∈Y}, and the composition parameters θ_comp = {U, V}. We use pre-trained word representations, and do not update them. While prior work shows that it can be advantageous to retrain word representations for discourse analysis (Ji and Eisenstein, 2014), our preliminary experiments found that updating the word representations led to serious overfitting in this model.
Following Socher et al. (2011), we define a large-margin objective, and use backpropagation to learn all parameters of the network jointly (Goller and Kuchler, 1996). Learning is performed using stochastic gradient descent (Bottou, 1998), so we present the learning problem for a single argument pair (m, n) with the gold discourse relation y*. The objective function for this training example is a regularized hinge loss,

    L(θ) = Σ_{y' ≠ y*} max(0, 1 − ψ(y*) + ψ(y')) + λ ||θ||_2^2,    (6)

where θ = θ_class ∪ θ_comp is the set of learning parameters. The regularization term λ||θ||_2^2 indicates that the squared values of all parameters are penalized by λ; this corresponds to penalizing the squared Frobenius norm for the matrix parameters, and the squared Euclidean norm for the vector parameters.
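A minimal sketch of this objective for a single training pair is given below; the dictionary-of-scores interface is an assumption for illustration, and the scores themselves would come from the decision function sketched earlier.

```python
import numpy as np

def hinge_loss(scores, y_star, params, lam):
    """Regularized multiclass hinge loss for one argument pair (Equation 6), sketched.

    scores : dict mapping each relation y to its decision value psi(y)
    y_star : gold relation for this pair
    params : iterable of parameter arrays, penalized by their squared norms
    """
    loss = sum(max(0.0, 1.0 - scores[y_star] + s)
               for y, s in scores.items() if y != y_star)
    loss += lam * sum(np.sum(p ** 2) for p in params)
    return loss
```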

Learning the classification parameters
In Equation 6, L(θ) = 0 if ψ(y*) − ψ(y') ≥ 1 holds for every y' ≠ y*. Otherwise, the loss is incurred by every y' such that y' ≠ y* and ψ(y*) − ψ(y') < 1. The gradient for the classification parameters therefore depends on the margin value between the ground truth label and all other labels. Specifically, taking one component of A_y, a_{y,1}, as an example, the derivative of the objective for y' ≠ y* is

    ∂L(θ)/∂a_{y',1} = δ(ψ(y*) − ψ(y') < 1) ∂ψ(y')/∂a_{y',1} + 2λ a_{y',1},    (7)

where δ(·) is the delta function. The derivative for y' = y* is

    ∂L(θ)/∂a_{y*,1} = − Σ_{y' ≠ y*} δ(ψ(y*) − ψ(y') < 1) ∂ψ(y*)/∂a_{y*,1} + 2λ a_{y*,1}.    (8)

During learning, the updating rule for A_y is

    A_y ← A_y − η ∂L(θ)/∂A_y,    (9)

where η is the learning rate. Similarly, we can obtain the gradient information and updating rules for the parameters {B_y, β_y, b_y}_{y∈Y}.
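The margin-gated update can be sketched as follows. For simplicity this sketch updates the full K x K matrices A_y rather than their low-dimensional factors, and it omits the analogous updates for B_y, β_y, and b_y; these are simplifying assumptions, not the exact procedure.

```python
import numpy as np

def update_bilinear_params(A, u_m, u_n, scores, y_star, eta, lam):
    """One stochastic update of the bilinear matrices {A_y}, gated by margin violations.

    For the full matrix, d psi(y) / d A_y = outer(u_m, u_n); with the low-rank
    approximation, the same chain rule would be applied to a_{y,1}, a_{y,2}, a_{y,3}."""
    grad_psi = np.outer(u_m, u_n)
    violations = [y for y in A
                  if y != y_star and scores[y_star] - scores[y] < 1.0]
    for y in A:
        grad = 2.0 * lam * A[y]                   # gradient of the regularizer
        if y in violations:
            grad += grad_psi                      # derivative for a violating y'
        if y == y_star:
            grad -= len(violations) * grad_psi    # derivative for the gold label y*
        A[y] -= eta * grad                        # updating rule (Equation 9)
```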

Learning the composition parameters
There are two composition matrices, U and V, corresponding to the upward and downward composition procedures respectively. Taking the upward composition parameter U as an example, the derivative of L(θ) with respect to U is

    ∂L(θ)/∂U = Σ_{y' ≠ y*} δ(ψ(y*) − ψ(y') < 1) [∂ψ(y')/∂U − ∂ψ(y*)/∂U] + 2λ U.    (10)

As with the classification parameters, the derivative depends on the margin between y' and y*. For every y ∈ Y, we have the unified derivative form

    ∂ψ(y)/∂U = ∂ψ(y)/∂u_0^{(m)} · ∂u_0^{(m)}/∂U + ∂ψ(y)/∂u_0^{(n)} · ∂u_0^{(n)}/∂U + Σ_{(i,j) ∈ A(m,n)} [∂ψ(y)/∂d_i^{(m)} · ∂d_i^{(m)}/∂U + ∂ψ(y)/∂d_j^{(n)} · ∂d_j^{(n)}/∂U].    (11)

The gradient information of U also depends on the gradient of ψ(y) with respect to every downward vector d, as shown in the last two terms in Equation 11. This is because the computation of each downward vector d_i includes the upward vector of the sibling node, u_{s(i)}, as shown in Equation 2. For an example, see the construction of the downward vectors for Tina and she in Figure 1b.
The partial derivatives of the decision function in Equation 11 are computed as

    ∂ψ(y)/∂u_0^{(m)} = A_y u_0^{(n)},    ∂ψ(y)/∂u_0^{(n)} = A_y^T u_0^{(m)},
    ∂ψ(y)/∂d_i^{(m)} = B_y d_j^{(n)},    ∂ψ(y)/∂d_j^{(n)} = B_y^T d_i^{(m)}.    (12)

The partial derivatives of the upward and downward vectors with respect to the upward composition operator U are obtained by backpropagation through the compositions in Equations 1 and 2, accumulating a contribution from every node in T(u_m), where T(u_m) is the set of all nodes in the upward composition model that help to generate u_m. For example, in Figure 1a, the set T(u_0^{(r)}) contains u_1^{(r)} together with the word representations of she, was, and hungry.
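A sketch of how the gradient with respect to U could be accumulated over T(u_m) is shown below. It covers only the purely upward contribution through one argument's root vector; the terms that flow through the downward vectors (the last two terms of Equation 11) are omitted, and the Node class is the hypothetical one from the earlier sketches.

```python
import numpy as np

def backprop_upward(node, delta, U, dU):
    """Accumulate the gradient of the decision value with respect to U over T(u_m).

    delta : gradient of the decision value with respect to node.u
            (for the root of argument m under relation y, this is A_y u_0^(n))
    dU    : gradient accumulator with the same shape as U, updated in place
    """
    if node.word is not None:
        return                                    # word vectors are held fixed
    child_vec = np.concatenate([node.left.u, node.right.u])
    delta_pre = delta * (1.0 - node.u ** 2)       # backprop through the tanh
    dU += np.outer(delta_pre, child_vec)          # contribution of this composition
    delta_children = U.T @ delta_pre              # error signal sent to the children
    K = node.left.u.shape[0]
    backprop_upward(node.left, delta_children[:K], U, dU)
    backprop_upward(node.right, delta_children[K:], U, dU)
```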

Implementation
Our implementation will be made available online after review. Training on the PDTB takes roughly three hours to converge. Convergence is faster if the surface feature weights β are trained separately first. We now describe some additional details of our implementation.
Learning During learning, we used AdaGrad (Duchi et al., 2011) to tune the learning rate in each iteration. To avoid the exploding gradient problem (Bengio et al., 1994), we used the norm clipping trick proposed by Pascanu et al. (2012), fixing the norm threshold at τ = 5.0.
Syntactic structure Our model requires that the syntactic structure for each argument be a binary tree. We run the Stanford parser (Klein and Manning, 2003) to obtain constituent parse trees of each sentence in the PDTB, and binarize all resulting parse trees. Argument spans in the Penn Discourse Treebank need not be sentences or syntactic constituents: they can include multiple sentences, non-constituent spans, and even discontinuous spans (Prasad et al., 2008). In all cases, we identify the syntactic subtrees within the argument span, and construct a right-branching superstructure that unifies them into a tree.
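A minimal sketch of a clipped AdaGrad step is shown below; the base learning rate and epsilon constant are illustrative assumptions, and only the threshold τ = 5.0 comes from the text.

```python
import numpy as np

def clipped_adagrad_step(param, grad, cache, eta=0.05, tau=5.0, eps=1e-8):
    """One AdaGrad update (Duchi et al., 2011) with gradient norm clipping
    at threshold tau (Pascanu et al., 2012). `cache` accumulates squared gradients."""
    norm = np.linalg.norm(grad)
    if norm > tau:
        grad = grad * (tau / norm)       # rescale the gradient to norm tau
    cache += grad ** 2                   # per-parameter AdaGrad accumulator
    param -= eta * grad / (np.sqrt(cache) + eps)
    return param, cache
```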
Coreference To extract entities from the PDTB, we ran the Berkeley coreference system (Durrett and Klein, 2013) on each document. For each argument pair, we simply ignore the non-coreferential entity mentions. Line 1 in Table 2 shows the proportion of the instances with shared entities in the PDTB training and test data. We also consider the intersection of the PDTB with the OntoNotes corpus (Pradhan et al., 2007), which contains gold coreference annotations. The intersection PDTB∩Onto contains 597 documents; the statistics for automatic and gold coreference are shown in lines 2 and 3 of Table 2.
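Constructing the set A(m, n) of aligned entity mentions can be sketched as follows; the (entity_id, node) representation of a mention is an assumption about how the coreference output might be stored.

```python
def aligned_entities(mentions_m, mentions_n):
    """Build A(m, n): pairs of mentions in the two arguments that corefer.

    mentions_m, mentions_n : lists of (entity_id, node) pairs, where entity_id
    identifies a coreference chain and node is the mention's parse-tree node.
    Mentions whose entity appears in only one argument are ignored."""
    pairs = []
    for ent_m, node_m in mentions_m:
        for ent_n, node_n in mentions_n:
            if ent_m == ent_n:           # same coreference chain in both arguments
                pairs.append((node_m, node_n))
    return pairs
```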
Additional features We supplement our classification model using additional surface features proposed by Lin et al. (2009). These include four categories: lexical features, constituent parse features, dependency parse features, and contextual features. Following this prior work, we use mutual information to select features in the first three categories, obtaining 500 lexical features, 100 constituent features, and 100 dependency features.
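The feature selection step could be sketched as below; this computes only the feature-present term of the mutual information between a binary feature and the relation label, which is a simplification of the criterion used by Lin et al. (2009).

```python
import numpy as np
from collections import Counter

def top_features_by_mi(instances, labels, k):
    """Return the k feature names with the highest (feature-present) mutual
    information with the relation label.

    instances : list of sets of binary feature names, one set per argument pair
    labels    : list of gold relation labels, aligned with instances"""
    n = len(instances)
    label_counts = Counter(labels)
    feat_counts = Counter(f for feats in instances for f in feats)
    joint = Counter((f, y) for feats, y in zip(instances, labels) for f in feats)
    mi = {}
    for f, nf in feat_counts.items():
        score = 0.0
        for y, ny in label_counts.items():
            nfy = joint[(f, y)]
            if nfy > 0:
                score += (nfy / n) * np.log(nfy * n / (nf * ny))
        mi[f] = score
    return sorted(mi, key=mi.get, reverse=True)[:k]
```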

Experiments
We evaluate our approach on the Penn Discourse Treebank (PDTB) (Prasad et al., 2008), which provides a discourse level annotation over the Wall Street Journal corpus. In the PDTB, each discourse relation is annotated between two argument spans. Identifying the argument spans of discourse relations is a challenging task (Lin et al., 2014), which we do not attempt here; instead, we use gold argument spans, as in most of the prior work on this task. PDTB relations may be explicit, meaning that they are signaled by discourse connectives (e.g., because); alternatively, they may be implicit, meaning that the connective is absent. Pitler et al. (2008) show that most connectives are unambiguous, so we focus on the more challenging problem of classifying implicit discourse relations.
The PDTB provides a three-level hierarchy of discourse relations. The first level consists of four major relation classes: TEMPORAL, CONTINGENCY, COMPARISON and EXPANSION. For each class, a second level of types is defined to provide finer semantic distinctions; there are sixteen such relation types. A third level of subtypes is defined for only some types, specifying the semantic contribution of each argument.
There are two main approaches to evaluating implicit discourse relation classification. Multiclass classification requires identifying the discourse relation from all possible choices. This task was explored by Lin et al. (2009), who focus on second-level discourse relations. More recent work has emphasized binary classification, where the goal is to build and evaluate separate "one-versus-all" classifiers for each discourse relation (Pitler et al., 2009). We primarily focus on multiclass classification, because it is more relevant for the ultimate goal of building a PDTB parser; however, to compare with recent prior work, we also evaluate on binary relation classification.

Multiclass classification
Our main evaluation involves predicting the correct discourse relation for each argument pair, from among the second-level relation types. Following Lin et al. (2009), we exclude five relation types that are especially rare: CONDITION, PRAGMATIC CONDITION, PRAGMATIC CONTRAST, PRAGMATIC CONCESSION and EXCEPTION. In addition, about 2% of the implicit relations in the PDTB are annotated with more than one type. During training, each argument pair that is annotated with two relation types is considered as two training instances, each with one relation type. During testing, if the classifier assigns either of the two types, it is considered to be correct.
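The handling of doubly-annotated instances can be sketched as two small helpers; the (features, relation_types) tuple format is an assumption for illustration.

```python
def expand_training(pairs):
    """Duplicate argument pairs annotated with two relation types, one instance per type."""
    expanded = []
    for features, relation_types in pairs:
        for y in relation_types:
            expanded.append((features, y))
    return expanded

def is_correct(predicted, gold_types):
    """At test time, a prediction is correct if it matches any annotated type."""
    return predicted in gold_types
```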

Baseline and competitive systems
Most common class The most common class is CAUSE, accounting for 26.03% of the implicit discourse relations in the PDTB test set.
Additive word representations Blacoe and Lapata (2012) show that simply adding word vectors can perform surprisingly well at assessing the meaning of short phrases. In this baseline, we represent each argument as a sum of its word representations, and estimate a bilinear prediction matrix.
Lin et al. (2009) To our knowledge, the best published accuracy on multiclass classification of second-level implicit discourse relations is from Lin et al. (2009), who apply feature selection to obtain a set of lexical and syntactic features over the arguments.
Surface feature model We re-implement the system of Lin et al. (2009), enabling a more precise comparison. The major difference is that we apply our online learning framework, rather than a batch classification algorithm.
Compositional Finally, we report results for the method described in this paper. Since it is a distributional compositional approach to discourse relations, we name it DISCO2.

Results
The latent dimension K is chosen from a development set (see Section 5). Table 3 presents results for multiclass identification of second-level PDTB relations. As shown in lines 7 and 8, DISCO2 outperforms both baseline systems and the prior state-of-the-art (line 3). The strongest performance is obtained by including the entity distributional semantics, with a 3.4% improvement over the accuracy reported by Lin et al. (2009) (p < .05). The improvement over our reimplementation of this work is even greater, which shows how the distributional representation provides additional value over the surface features. Because we have reimplemented this system, we can observe individual predictions, and can therefore use the more sensitive sign test for statistical significance. This test shows that even without entity semantics, DISCO2 significantly outperforms the surface feature model (p < .05). Test set performance for each setting of K is shown in Figure 3, with accuracies in a narrow range between 41.9% and 43.6%.

Coreference
The contribution of entity semantics is shown in Table 3 by the accuracy differences between lines 5 and 6, and between lines 7 and 8. On the subset of relations in which the arguments share at least one coreferent entity, the difference is substantially larger: the accuracy of DISCO2 is 44.9% with entity semantics, and 42.2% without. Considering that only 29.1% of the relations in the PDTB test set include shared entities, it therefore seems likely that a more sensitive coreference system could yield further improvements for the entity-semantics model. Indeed, gold coreference annotation on the intersection between the PDTB and the OntoNotes corpus shows that 40-50% of discourse relations involve coreferent entities (Table 2). Evaluating our model on just this intersection, we find that the inclusion of entity semantics yields an improvement in accuracy from 37.1% to 38.8%.

[Table 3: Experimental results on multiclass classification of level-2 discourse relations. Columns: Model, +Entity semantics, +Surface features, K, Accuracy (%); the full model with entity semantics and surface features (K = 50) reaches 43.56%. * significantly better than Lin et al. (2009) with p < 0.05; † significantly better than line 4 with p < 0.05. The results of Lin et al. (2009) are shown in line 3; the results for our reimplementation of this system are shown in line 4.]

Binary classification
Much of the recent work in PDTB relation detection has focused on binary classification, building and evaluating separate one-versus-all classifiers for each relation type (Pitler et al., 2009). This work has focused on recognition of the four first-level relations, grouping ENTREL with the EXPANSION relation. We follow this evaluation approach as closely as possible, using sections 2-20 of the PDTB as a training set, sections 0-1 as a development set for parameter tuning, and sections 21-22 for testing.

Classification method
We apply DISCO2 with the downward composition procedure and the same surface features listed in Section 5; this corresponds to the system reported in line 8 of Table 3. However, instead of employing a multiclass classifier for all four relations, we train four binary classifiers, one for each first-level discourse relation. We optimize the hyperparameters K, λ, η separately for each classifier (see Section 5 for details), by performing a grid search to optimize the F-measure on the development data. Following prior work, we obtain a balanced training set by resampling training instances in each class until the numbers of positive and negative instances are equal.
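The resampling step can be sketched as follows; the (features, relation) tuple format and the fixed random seed are assumptions for illustration.

```python
import random

def balanced_binary_data(pairs, target_relation, seed=0):
    """One-versus-all training data with the minority class resampled to parity.

    pairs : list of (features, relation) tuples; returns (features, 0/1 label) tuples,
    where label 1 marks the target relation."""
    rng = random.Random(seed)
    pos = [(x, 1) for x, y in pairs if y == target_relation]
    neg = [(x, 0) for x, y in pairs if y != target_relation]
    small, large = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    small = small + [rng.choice(small) for _ in range(len(large) - len(small))]
    data = small + large
    rng.shuffle(data)
    return data
```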

Competitive systems
We compare our model with the published results from several competitive systems. Since we are comparing with previously published results, we focus on systems which use the predominant training / test split, with sections 2-20 for training and 21-22 for testing. This means we cannot compare with recent work from Li and Nenkova (2014), who use sections 20-24 for testing. Pitler et al. (2009) present a classification model using linguistically-informed features, such as polarity tags and Levin verb classes. Another line of work predicts discourse connective words, and then uses these predicted connectives as features in a downstream model to predict relations. Park and Cardie (2012) show that the performance on each relation can be improved by selecting a locally-optimal feature set. Biran and McKeown (2013) reweight word pair features using distributional statistics from the Gigaword corpus, obtaining denser aggregated score features.

[Table 4: Evaluation on first-level discourse relation identification. The results of the competitive systems are reprinted.]

Related Work
This paper draws mainly on previous work in discourse relation detection and compositional distributional semantics.

Discourse relations
Many models of discourse structure focus on relations between spans of text (Knott, 1996), including rhetorical structure theory (RST; Mann and Thompson, 1988), lexical tree-adjoining grammar for discourse (D-LTAG; Webber, 2004), and even centering theory (Grosz et al., 1995), which posits relations such as CONTINUATION and SMOOTH SHIFT between adjacent spans. Consequently, the automatic identification of discourse relations has long been considered a key component of discourse parsing (Marcu, 1999). We work within the D-LTAG framework, as annotated in the Penn Discourse Treebank (PDTB; Prasad et al., 2008), with the task of identifying implicit discourse relations. The seminal work in this task is from Pitler et al. (2009) and Lin et al. (2009). Pitler et al. (2009) focus on lexical features, including linguistically motivated word groupings such as Levin verb classes and polarity tags. Lin et al. (2009) identify four different feature categories, based on the raw text, the context, and syntactic parse trees; the same feature sets are used in later work on end-to-end discourse parsing (Lin et al., 2014), which also includes components for identifying argument spans. Subsequent research has explored feature selection (Park and Cardie, 2012; Lin et al., 2014), as well as combating feature sparsity by aggregating features (Biran and McKeown, 2013). Our model includes surface features based on a reimplementation of the work of Lin et al. (2009), because they also undertake the task of multiclass relation classification; however, the techniques introduced in more recent research may be complementary to the distributional representation that constitutes the central contribution of this paper, and applying them could further improve performance.
Our contribution of entity-augmented distributional semantics is motivated by the intuition that entities play a central role in discourse structure. Centering theory draws heavily on referring expressions to entities over the discourse (Grosz et al., 1995; Barzilay and Lapata, 2008); similar ideas have been extended to rhetorical structure theory (Corston-Oliver, 1998; Cristea et al., 1998). In the specific case of identification of implicit PDTB relations, Louis et al. (2010b) explore a number of entity-based features, including grammatical role, syntactic realization, and information status. Despite the solid linguistic foundation for these features, they are shown to contribute little in comparison with more traditional word-pair features. This suggests that syntax and information status may not be enough, and that it is crucial to capture the semantics of each entity's role in the discourse. Our approach does this by propagating distributional semantics from throughout the sentence into the entity span, using our up-down compositional procedure.

Compositional distributional semantics
Distributional semantics begins with the hypothesis that words and phrases that tend to appear in the same contexts have the same meaning (Firth, 1957). The current renaissance of interest in distributional semantics can be attributed in part to the application of discriminative techniques, which emphasize predictive models (Bengio et al., 2006; Baroni et al., 2014b), rather than context-counting and matrix factorization (Turney et al., 2010). In addition, recent work has made practical the idea of propagating distributional information through linguistic structures (Smolensky, 1990; Collobert et al., 2011). In such models, the distributional representations and compositional operators can be fine-tuned by backpropagating supervision from task-specific labels, enabling accurate and fast models for a wide range of language technologies (Socher et al., 2011; Socher et al., 2013b; Chen and Manning, 2014).
The application of distributional semantics to discourse includes the use of latent semantic analysis for text segmentation (Choi et al., 2001) and coherence assessment, as well as paraphrase detection by the factorization of matrices of distributional counts (Kauchak and Barzilay, 2006; Mihalcea et al., 2006). These approaches essentially compute a distributional representation in advance, and then use it alongside other features. In contrast, our approach follows more recent work in which the distributional representation is driven by supervision from discourse annotations. For example, Ji and Eisenstein (2014) show that RST parsing can be performed by learning task-specific word representations, which perform considerably better than generic word2vec representations (Mikolov et al., 2013). Other recent work proposes a recurrent neural network approach to RST parsing, which is similar to the upward pass in our model. However, prior work has not applied these ideas to the classification of implicit relations in the PDTB, and does not consider the role of entities. As we argue in the introduction, a single vector representation is insufficiently expressive, because it obliterates the entity chains that help to tie discourse together.
More generally, our entity-augmented distributional representation can be viewed in the context of recent literature on combining distributional and formal semantics: by representing entities, we are taking a small step away from purely distributional representations, and towards more traditional logical representations of meaning. In this sense, our approach is "bottom-up", as we try to add a small amount of logical formalism to distributional representations; other approaches are "top-down", softening purely logical representations by using distributional clustering (Poon and Domingos, 2009; Lewis and Steedman, 2013) or Bayesian non-parametrics (Titov and Klementiev, 2011) to obtain types for entities and relations. Still more ambitious would be to implement logical semantics within a distributional compositional framework (Clark et al., 2011; Grefenstette, 2013). At present, these combinations of logical and distributional semantics have been explored only at the sentence level. In generalizing such approaches to multi-sentence discourse, we argue that it will not be sufficient to compute distributional representations of sentences: a multitude of other elements, such as entities, will also have to be represented.

Conclusion
Discourse relations are determined by the meaning of their arguments, and progress on discourse parsing therefore requires computing representations of the argument semantics. We present a compositional method for inducing distributional representations not only of discourse arguments, but also of the entities that thread through the discourse. In this approach, semantic composition is applied up the syntactic parse tree to induce the argument-level representation, and then down the parse tree to induce representations of entity spans. Discourse arguments can then be compared in terms of their overall distributional representation, as well as by the representations of coreferent entity mentions. This enables the compositional operators to be learned by backpropagation from discourse annotations. This approach outperforms previous work on classification of implicit discourse relations in the Penn Discourse Treebank. Future work may consider joint models of discourse structure and coreference, as well as representations for other discourse elements, such as event coreference and shallow semantics.