Constructing Datasets for Multi-hop Reading Comprehension Across Documents

Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently no resources exist to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence — effectively performing multi-hop, alias multi-step, inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced (available at http://qangaroo.cs.ucl.ac.uk), and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, and providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 54.5% on an annotated test set, compared to human performance at 85.0%, leaving ample room for improvement.


Introduction
Devising computer systems capable of answering questions about knowledge described using text has been a longstanding challenge in Natural Language Processing (NLP). Contemporary end-to-end Reading Comprehension (RC) methods can learn to extract the correct answer span within a given text and approach human-level performance (Kadlec et al., 2016; Seo et al., 2017).
However, for existing tasks, relevant information is often concentrated locally within a single sentence, emphasising the role of locating, matching, and aligning information between query and support text. For example, Weissenborn et al. (2017) observed that a simple binary word-in-query indicator feature boosted the relative accuracy of a baseline model by 27.9%.
We argue that, in order to further the ability of machine comprehension methods to extract knowledge from text, we must move beyond a scenario where relevant information is coherently and explicitly stated within a single document. Methods with this capability would benefit search and Question Answering (QA) applications where the required information cannot be found in one location. They would also aid Information Extraction (IE) applications, such as discovering drug-drug interactions by connecting protein interactions reported across different publications.
Figure 1 shows an example from WIKIPEDIA, where the goal is to identify the country property of the Hanging Gardens of Mumbai. This cannot be inferred solely from the article about them without additional background knowledge, since the answer is not stated explicitly. However, several of the linked articles mention the correct answer India (and other countries), but cover different topics (e.g. Mumbai, the Arabian Sea, etc.). Finding the answer requires multi-hop reasoning: figuring out that the Hanging Gardens are located in Mumbai, and then, from a second document, that Mumbai is a city in India.
We define a novel RC task in which a model should learn to answer queries by combining evidence stated across documents. We introduce a methodology to induce datasets for this task and derive two datasets. The first, WIKIHOP, uses sets of WIKIPEDIA articles where answers to queries about specific properties of an entity cannot be located in the entity's article. In the second dataset, MEDHOP, the goal is to establish drug-drug interactions based on scientific findings about drugs, proteins, and their interactions, found across multiple MEDLINE abstracts. For both datasets we draw upon existing Knowledge Bases (KBs), WIKIDATA and DRUGBANK, as ground truth, utilising distant supervision (Mintz et al., 2009) to induce the data, similar to Hewlett et al. (2016) and Joshi et al. (2017).
We establish that for 74.1% and 68.0% of the samples, the answer can be inferred from the given documents by a human annotator. Still, constructing multi-document datasets is challenging; we encounter and prescribe remedies for several pitfalls associated with their assembly, for example spurious co-locations of answers and specific documents.
For both datasets we then establish several strong baselines and evaluate the performance of two previously proposed competitive RC models (Seo et al., 2017; Weissenborn et al., 2017). We find that one can integrate information across documents, but neither excels at selecting relevant information from a larger set of documents, as their accuracy increases significantly when provided with only documents guaranteed to be relevant. The best model reaches 42.9%, compared to human performance at 74.0%, indicating ample room for improvement.
In summary, our key contributions are as follows: firstly, proposing a cross-document multi-step RC task, as well as a general dataset induction strategy; secondly, assembling two datasets from different domains and identifying dataset construction pitfalls and remedies; thirdly, establishing multiple baselines, including two recently proposed RC models, and analysing model behaviour in detail through ablation studies.

Task & Dataset Construction Method
We will now formally define the multi-hop RC task, and a generic methodology to construct multi-hop RC datasets. Later, in Sections 3 and 4, we will demonstrate how this method is applied in practice by creating datasets for two different domains.
Task Formalisation A model is given a query q, a set of supporting documents S_q, and a set of candidate answers C_q, all of which are mentioned in S_q. The goal is to identify the correct answer a* ∈ C_q by drawing on the support documents S_q. Queries could potentially have several true answers when not constrained to rely on a specific set of support documents, e.g. queries about the parent of a certain individual. However, in our setup each sample has only one true answer among C_q and S_q.
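To make the notation concrete, the following is a minimal sketch of what one task instance looks like; the field names are illustrative and not those of the released data files.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MultiHopSample:
    """One task instance: a query, support documents, and candidate answers."""
    query: Tuple[str, str, str]   # (subject entity s, relation r, "?")
    supports: List[str]           # support documents S_q, in random order, no metadata
    candidates: List[str]         # candidate answers C_q, all mentioned somewhere in S_q
    answer: str                   # the single correct answer a* in C_q

def is_correct(sample: MultiHopSample, prediction: str) -> bool:
    """Accuracy is measured as an exact match against the single gold answer."""
    return prediction == sample.answer
```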
Note that even though we will utilise background information during dataset assembly, such information will not be available to a model: the document set will be provided in random order and without any metadata. While access to such metadata would certainly be beneficial, it would distract from our goal of fostering end-to-end RC methods that infer facts by combining separate facts stated in text.

Dataset Assembly
We will now describe a generic method to construct datasets for the aforementioned task, such that finding the answer to a query requires multiple documents with distinct pieces of relevant information. We assume that there exists a document corpus D, together with a KB containing fact triples (s, r, o), with subject entity s, relation r, and object entity o. For example, one such fact could be (Hanging Gardens of Mumbai, country, India). We start with individual KB facts and transform them into query-answer pairs by leaving the object slot empty, i.e. q = (s, r, ?) and a* = o.
Next, we define a directed bipartite graph, where vertices on one side correspond to documents in D, and vertices on the other side are entities from the KB; see Figure 2 for an example. A document node d is connected to an entity e if e is mentioned in d, though there may be further constraints when defining the graph connectivity. For a given (q, a*) pair, the candidates C_q and support documents S_q ⊆ D are identified by traversing the bipartite graph using breadth-first search; the documents visited will become the support documents S_q.
As the traversal starting point, we use the node belonging to the subject entity s of the query q. As traversal end points, we use the set of all entity nodes that are type-consistent answers to q. Note that whenever there is another fact (s, r, o′) in the KB, i.e. a fact producing the same q but with a different a*, we do not include o′ in the set of end points for this sample. This ensures that precisely one of the end points corresponds to a correct answer to q. When traversing the graph starting at s, several of the end points will be visited, though generally not all; those visited define the candidate set C_q. If, however, the correct answer a* is not among them, we discard the entire (q, a*) pair. The documents visited to reach the end points define the support document set S_q. That is, S_q comprises chains of documents leading not only from the query subject to the correct answer candidate, but also to type-consistent false answer candidates.
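A sketch of this traversal is given below, assuming the bipartite graph is represented by two adjacency dictionaries; the function and variable names are illustrative. For simplicity it collects all documents visited during the search, whereas the actual construction keeps the chains of documents leading to end points.

```python
from collections import deque

def traverse(subject, end_points, entity_to_docs, doc_to_entities, max_doc_hops=3):
    """Breadth-first search over the bipartite document-entity graph.

    Starts at the query subject s, alternates entity -> document -> entity steps,
    and returns the documents visited together with the end points reached.
    `end_points` holds the type-consistent answer expressions for the query."""
    supports, reached = set(), set()
    seen = {subject}
    queue = deque([(subject, 0)])            # (entity, number of document hops so far)
    while queue:
        entity, hops = queue.popleft()
        if entity in end_points:
            reached.add(entity)
        if hops >= max_doc_hops:
            continue
        for doc in entity_to_docs.get(entity, []):
            supports.add(doc)
            for nxt in doc_to_entities.get(doc, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
    return supports, reached

# A (q, a*) pair is kept only if a* is among the reached end points;
# the reached end points then form the candidate set C_q.
```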
With this methodology, relevant textual evidence for (q, a*) will be spread out across documents along the chain connecting s and a*, ensuring that multi-hop reasoning goes beyond resolving coreference within a single document. Note that including other type-consistent candidates alongside a* as end points in the graph traversal, and thus into the support documents, renders the task considerably more challenging (Jia and Liang, 2017). Models could otherwise identify a* in the documents by simply relying on type-consistency heuristics. It is worth pointing out that by introducing false candidates we counterbalance a type-consistency bias, in contrast to Hermann et al. (2015) and Hill et al. (2015). We will next describe how we apply this generic dataset construction methodology in two domains to create the WIKIHOP and MEDHOP datasets.

WIKIHOP
WIKIPEDIA contains an abundance of human-curated cross-domain information and has several structured resources such as infoboxes and WIKIDATA (Vrandečić, 2012) associated with it. WIKIPEDIA has thus been used for a wealth of research to build datasets posing queries about a single sentence (Morales et al., 2016; Levy et al., 2017) or article (Yang et al., 2015a; Hewlett et al., 2016; Rajpurkar et al., 2016). However, no attempt has been made to construct a cross-document multi-step RC dataset based on WIKIPEDIA.
A recently proposed RC dataset is WIKIREADING (Hewlett et al., 2016), where WIKIDATA tuples (item, property, answer) are aligned with the WIKIPEDIA articles regarding their item. The tuples define a slot-filling task with the goal of predicting the answer, given an article and property. One problem with using WIKIREADING as an extractive RC dataset is that 54.4% of the samples do not state the answer explicitly in the given article (Hewlett et al., 2016). However, we observed that articles accessible by following hyperlinks from the given article often state the answer, alongside other plausible false answer candidates.

Assembly
We now apply the methodology from Section 2 to create a multi-hop dataset with WIKIPEDIA as the document corpus and WIKIDATA as the source of structured knowledge triples. In this setup, (item, property, answer) WIKIDATA triples correspond to (s, r, o) triples, and the item and property of each sample together form our query q, e.g. "(Hanging Gardens of Mumbai, country, ?)". Similar to Yang et al. (2015a), we only use the first paragraph of each article, since relevant information is more likely to be stated in the beginning. Starting with all samples in WIKIREADING, we first remove samples where the answer is stated explicitly in the WIKIPEDIA article about the item. The bipartite graph is structured as follows: (1) for edges from articles to entities, all articles mentioning an entity e are connected to e; (2) for edges from entities to articles, each entity e is only connected to the WIKIPEDIA article about the entity. Traversing the graph is then equivalent to iteratively following hyperlinks to new articles about the anchor text entities.
For a given query-answer pair, the item entity is chosen as the starting point for the graph traversal. A traversal will always pass through the article about the item, since this is the only document connected from there. The end point set includes the correct answer alongside other type-consistent candidate expressions, which are determined by considering all facts belonging to WIKIREADING training examples, selecting those triples with the same property as in q and keeping their answer expressions. As an example, for the WIKIDATA property country, this would be the set {France, Russia, . . .}.
Graph traversal is executed up to a maximum chain length of 3 documents. So as not to impose unreasonable computational constraints on RC models, examples with more than 64 different support documents or 100 candidates are discarded, resulting in a loss of ≈1% of the data.

Mitigating Dataset Biases
Dataset creation is always fraught with the risk of inducing unintended errors and biases (Chen et al., 2016; Schwartz et al., 2017). As Hewlett et al. (2016) only carried out limited analysis of their WIKIREADING dataset, we present an analysis of the downstream effects we observe on WIKIHOP.
Candidate Frequency Imbalance A first observation is that there is a significant bias in the answer distribution of WIKIREADING. For example, in the majority of samples the property country has the United States of America as the answer. A simple majority-class baseline would thus prove successful, but tell us little about multi-hop reasoning. To combat this issue, we sub-sampled the dataset to ensure that examples of any one particular answer candidate make up no more than 0.1% of the dataset, and we omitted articles about the United States.
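A possible implementation of this sub-sampling step is sketched below; samples are assumed to be dictionaries with an "answer" field, and the iterative strategy is one simple way to enforce the 0.1% cap, not necessarily the one used to build the released data.

```python
import random
from collections import Counter

def cap_answer_frequency(samples, max_fraction=0.001, seed=0):
    """Drop samples until no single answer accounts for more than `max_fraction`
    of the remaining dataset (0.1% in the text)."""
    rng = random.Random(seed)
    samples = list(samples)
    while True:
        counts = Counter(s["answer"] for s in samples)
        answer, count = counts.most_common(1)[0]
        cap = max(int(max_fraction * len(samples)), 1)
        if count <= cap:
            return samples
        # randomly drop enough samples of the over-represented answer
        indices = [i for i, s in enumerate(samples) if s["answer"] == answer]
        dropped = set(rng.sample(indices, count - cap))
        samples = [s for i, s in enumerate(samples) if i not in dropped]
```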

Document-Answer Correlations
A problem unique to our multi-document setting is the possibility of spurious correlations between candidates and documents induced by the graph traversal method. In fact, if we were not to address this issue, a model specifically designed to exploit these regularities could achieve 74.6% accuracy (detailed in Section 6).
Concretely, we observed that certain documents frequently co-occur with the correct answer, independently of the query. For example, if the article about London is present in S_q for a country query, the answer is likely to be United Kingdom, independent of the query type or entity in question. If the article about FIFA appears, the answer is likely to be association football. Appendix A contains a list with several additional examples.
We designed a statistic to measure this effect and then used it to sub-sample the dataset. The statistic counts how often a candidate c is observed as the correct answer when a certain document is present in S_q across training set samples. More formally, for a given document d and answer candidate c, let cooccurrence(d, c) denote the total count of how often c co-occurs with d in a sample where c is also the correct answer. We use this statistic to filter the dataset by discarding samples with at least one document-candidate pair (d, c) for which cooccurrence(d, c) > 20. This successfully mitigated the dataset bias, as empirically supported by the experiments in Section 6.
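A sketch of the statistic and the resulting filter follows, assuming samples are dictionaries with "supports" (document identifiers), "candidates", and "answer" fields; the names are illustrative.

```python
from collections import Counter

def cooccurrence_counts(train_samples):
    """cooccurrence(d, c): how often candidate c is the correct answer of a
    training sample whose support set contains document d."""
    counts = Counter()
    for s in train_samples:
        for d in set(s["supports"]):
            counts[(d, s["answer"])] += 1
    return counts

def filter_by_cooccurrence(samples, counts, threshold=20):
    """Discard samples containing at least one document-candidate pair (d, c)
    with cooccurrence(d, c) above the threshold (20 in the text)."""
    return [s for s in samples
            if all(counts[(d, c)] <= threshold
                   for d in s["supports"] for c in s["candidates"])]
```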

MEDHOP
Following the same general methodology, we next construct a second dataset for the domain of molecular biology, a field that has been undergoing exponential growth in the number of publications (Cohen and Hunter, 2004). The promise of applying NLP methods to cope with this increase has led to research efforts in IE (Hirschman et al., 2005; Kim et al., 2011) and QA for biomedical text (Hersh et al., 2007; Nentidis et al., 2017). There is a plethora of manually curated structured resources (Ashburner et al., 2000; The UniProt Consortium, 2017) which can either serve as ground truth or be used to induce training data for NLP systems via distant supervision (Craven and Kumlien, 1999; Bobic et al., 2012). Existing RC datasets are either severely limited in size (Hersh et al., 2007) or cover a very diverse set of query types (Nentidis et al., 2017), complicating the application of neural models that have seen success in other domains (Wiese et al., 2017).
A task that has received significant attention is detecting Drug-Drug Interactions (DDIs) (Gurulingappa et al., 2012). Existing DDI efforts have focused on explicit mentions of interactions in single sentences (Gurulingappa et al., 2012; Percha et al., 2012; Segura-Bedmar et al., 2013). However, as shown by Peng et al. (2017), cross-sentence relation extraction increases the number of available relations. It is thus likely that cross-document interactions would further improve recall, which is of particular importance considering interactions that are never stated explicitly but rather need to be inferred from separate pieces of evidence. The promise of multi-hop methods is finding and combining individual observations that can suggest previously unobserved DDIs, aiding the process of making scientific discoveries.
DDIs are caused by Protein-Protein Interaction (PPI) chains, forming biomedical pathways. If we consider PPI chains across documents, we find examples like the one in Figure 3. Here, the first document states that the drug Leuprolide causes GnRH receptor-induced synaptic potentiations, which can be blocked by the protein Progonadoliberin-1. The last document states that another drug, Triptorelin, is a superagonist of the same protein. Triptorelin is therefore likely to increase the potency of Leuprolide, describing a way in which the two drugs interact. Besides the true interaction, there is also a false candidate Urofollitropin which, although mentioned together with GnRH receptor within one document, provides no textual evidence indicating an interaction with Leuprolide.

Assembly
We construct MEDHOP using DRUGBANK (Law et al., 2014) as the structured knowledge resource and research paper abstracts from MEDLINE as documents. There is only a single relation type for DRUGBANK facts, interacts with, that connects pairs of drugs; an example of a MEDHOP query would thus be "(Leuprolide, interacts with, ?)". We start by processing the 2016 MEDLINE release using the preprocessing pipeline employed for the BioNLP 2011 Shared Task (Stenetorp et al., 2011).
We restrict the set of entities in the bipartite graph to drugs in DRUGBANK and human proteins in SWISS-PROT (Bairoch et al., 2004). That is, the graph has drugs and proteins on one side, and MEDLINE abstracts on the other.
The edge structure is as follows: (1) there is an edge from a document to all proteins mentioned in it; (2) there is an edge between a document and a drug if the document also mentions a protein known to be a target for the drug according to DRUGBANK. This edge is bidirectional, i.e. it can be traversed both ways, since there is no canonical document describing each drug; thus one can "hop" to any document mentioning the drug and its target. (3) There is an edge from a protein p to a document mentioning p, but only if the document also mentions another protein p′ which is known to interact with p according to REACTOME (Fabregat et al., 2016). Given our distant supervision assumption, these additional constraints err on the side of precision.
As a mention, similar to Percha et al. (2012), we consider any exact match of a name variant of a drug or human protein in DRUGBANK or SWISS-PROT. For a given DDI (drug_1, interacts with, drug_2), we then select drug_1 as the starting point for the graph traversal. As possible end points we consider any other drug, apart from drug_1 and those interacting with drug_1 other than drug_2. Similar to WIKIHOP, we exclude samples with more than 64 support documents and impose a maximum document length of 300 tokens plus title (the same restriction as the journal PLoS ONE).

Document Sub-sampling The bipartite graph for MEDHOP is orders of magnitude more densely connected than for WIKIHOP. This can lead to potentially large support document sets S_q, to a degree where it becomes computationally infeasible for a majority of existing RC models. After the traversal has finished, we thus sub-sample documents by first adding a set of documents that connects the drug in the query with its answer. We then iteratively add documents to connect false candidates until we reach the limit of 64 documents, while ensuring that all candidates have the same number of paths through the bipartite graph.
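The sketch below illustrates one greedy way to realise such sub-sampling; it assumes that, for every candidate, the document chains connecting the query drug to that candidate are already available, and it only approximates the per-candidate path balancing described above.

```python
def subsample_documents(answer, false_candidates, chains, limit=64):
    """Greedy support-document sub-sampling sketch.

    `chains` maps each candidate to a list of document chains (lists of document
    ids) connecting the query drug to that candidate. Documents supporting the
    true answer are added first; false candidates are then connected round-robin
    until the document limit is reached."""
    supports = set()
    for chain in chains.get(answer, [])[:1]:     # at least one chain to the true answer
        supports.update(chain)
    used = {c: 0 for c in false_candidates}
    progress = True
    while progress:
        progress = False
        for c in false_candidates:
            candidate_chains = chains.get(c, [])
            if used[c] < len(candidate_chains):
                chain = candidate_chains[used[c]]
                if len(supports | set(chain)) <= limit:
                    supports.update(chain)
                    used[c] += 1
                    progress = True
    return supports
```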
Mitigating Candidate Frequency Imbalance Some drugs interact with more drugs than others: Aspirin, for example, interacts with 743 other drugs, but Isotretinoin with only 34. This leads to similar candidate frequency imbalance issues as with WIKIHOP, but due to its smaller size, MEDHOP is difficult to sub-sample. Nevertheless, we can successfully combat this issue by masking entity names, as detailed in Section 6.2.

Dataset Analysis
Table 1 shows the dataset sizes. Note that WIKIHOP inherits the train, development, and test set splits from WIKIREADING, i.e. the full dataset creation, filtering, and sub-sampling pipeline is executed on each set individually. Also note that sub-sampling according to document-answer co-occurrence (Section 3.2) significantly reduces the size of WIKIHOP from ≈528,000 training samples to ≈44,000. Figure 4 illustrates the distribution of the number of support documents per sample. WIKIHOP shows Poisson-like behaviour, most likely due to structural regularities in WIKIPEDIA, whereas MEDHOP exhibits a bimodal distribution, in line with our observation that certain drugs and proteins have far more interactions and studies associated with them.

Qualitative Analysis
To establish the quality of the data and analyse potential distant supervision errors, we sample and annotate 100 samples from each development set.
WIKIHOP Table 2 lists characteristics along with the proportion of samples that exhibit them. For 45%, the true answer either uniquely follows from multiple texts directly or is suggested as likely. For 26%, more than one candidate is plausibly supported by the documents, including the correct answer. This is often due to hypernymy, where the appropriate level of granularity for the answer is difficult to predict, e.g. (west suffolk, administrative entity, ?) with candidates suffolk and england. This is a direct consequence of including type-consistent false answer candidates from WIKIDATA, which can lead to questions with several true answers. For 9% of cases a single document suffices; these samples contain a linked document that states enough information about item and answer together. Finally, although our task is significantly more complex than most previous tasks where distant supervision has been applied, the distant supervision assumption is violated for only 20% of the samples, a proportion similar to previous work (Riedel et al., 2010). These cases are either due to conflicting information between WIKIDATA and WIKIPEDIA (8%), e.g. when the date of birth for a person differs between WIKIDATA and what is stated in the WIKIPEDIA article, or because the answer is consistent but cannot be inferred from the support documents (12%). When answering 100 questions, the annotator knew the answer prior to reading the documents for 9%, and produced the correct answer after reading the document sets for 74% of the cases.
MEDHOP Since both document complexity and the number of documents per sample were significantly larger compared to WIKIHOP (see Figure 4), it was not feasible to ask an annotator to read all support documents for 100 samples. We thus opted to verify the dataset quality by providing only the subset of documents relevant to support the correct answer, i.e. those traversed along the path reaching the answer. The annotator was asked whether the answer to the query "follows", "is likely", or "does not follow", given the relevant documents. 68% of the cases were considered as "follows" or "is likely". The majority of cases violating the distant supervision assumption were due to the lack of a necessary PPI in one of the connecting documents.

Crowdsourced Human Annotation
We asked human annotators on Amazon Mechanical Turk to evaluate samples of the WIKIHOP development and test sets. Annotators were shown the query-answer pair as a fact, and the chain of relevant documents leading to the answer, similar to our qualitative analysis of MEDHOP. They were then instructed to answer (1) whether they knew the fact before, (2) whether the fact follows from the texts (with options "fact follows", "fact is likely", and "fact does not follow"), and (3) whether a single document or several documents are required. Each sample was shown to three annotators and a majority vote was used to aggregate the annotations. Annotators were familiar with the fact 4.6% of the time; prior knowledge of the fact is thus not likely to be a confounding effect on the other judgements. Inter-annotator agreement as measured with Fleiss' kappa is 0.253 for (2), and 0.281 for (3), indicating fair overall agreement. Overall, 9.5% of samples have no clear majority for (2). Among the samples with a majority judgement, 59.8% are cases where the fact "follows", for 14.2% the fact was judged as "likely", and for 25.9% as "does not follow". This again provides good justification for the distant supervision strategy employed during dataset construction.
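For reference, the aggregation used here (majority vote over three annotators) and Fleiss' kappa can be computed as in the sketch below, which relies on statsmodels; the example ratings are made up for illustration.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def majority_vote(labels):
    """Return the majority label among three annotators, or None when there is
    no clear majority (the 9.5% of cases mentioned above)."""
    values, counts = np.unique(labels, return_counts=True)
    return values[counts.argmax()] if counts.max() >= 2 else None

# one row per sample, one column per annotator (toy example)
ratings = np.array([["follows", "follows", "likely"],
                    ["likely", "does not follow", "does not follow"]])
table, _ = aggregate_raters(ratings)        # samples x categories count matrix
print([majority_vote(row) for row in ratings], fleiss_kappa(table))
```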
Among the samples with a majority vote for (2) of either "follows" or "likely", 55.9% were marked by majority vote as requiring multiple documents to infer the fact, and 44.1% as requiring only a single document. The latter number is larger than initially expected, given the construction of samples through graph traversal. However, when inspecting cases judged as "single" more closely, we observed that many indeed provide a clear hint about the correct answer within one document, but without stating it explicitly. For example, for the fact (witold cichy, country of citizenship, poland) with documents d_1: Witold Cichy (born March 15, 1986 in Wodzisław Śląski) is a Polish footballer [...] and d_2: Wodzisław Śląski [...] is a town in Silesian Voivodeship, southern Poland [...], the information provided in d_1 suffices for a human with the background knowledge that Polish is an attribute related to Poland, removing the need for d_2 to infer the answer.

Experiments
This section describes experiments on WIKIHOP and MEDHOP with the goal of establishing the performance of several baseline models, including recent neural RC models. We empirically demonstrate the importance of mitigating dataset biases, probe whether multi-step behaviour is beneficial for solving the task, and investigate whether RC models can learn to perform lexical abstraction.

Models
Random Selects a random candidate; note that the number of candidates differs between samples.

Max-mention Predicts the most frequently mentioned candidate in the support documents S_q of a sample, randomly breaking ties.
Majority-cand.-per-query-type Predicts the candidate c ∈ C_q that was most frequently observed as the true answer in the training set, given the query type of q. For WIKIHOP, the query type is the property p of the query; for MEDHOP there is only the single query type interacts with.
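A sketch of this baseline, assuming samples are dictionaries with "query_type", "candidates", and "answer" fields (the names are illustrative):

```python
from collections import Counter, defaultdict

def train_majority_baseline(train_samples):
    """Count, per query type, how often each expression is the correct answer."""
    counts = defaultdict(Counter)
    for s in train_samples:
        counts[s["query_type"]][s["answer"]] += 1
    return counts

def predict_majority(counts, sample):
    """Predict the candidate in C_q most frequently seen as a training answer
    for this sample's query type."""
    freq = counts.get(sample["query_type"], Counter())
    return max(sample["candidates"], key=lambda c: freq[c])
```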
TF-IDF Retrieval-based models are known to be strong QA baselines if candidate answers are provided (Clark et al., 2016; Welbl et al., 2017). They search for individual documents based on keywords in the question, but typically do not combine information across documents. The purpose of this baseline is to see if it is possible to identify the correct answer from a single document alone through lexical correlations. The model forms its prediction as follows: for each candidate c, the concatenation of the query q with c is fed as an OR query into the whoosh text retrieval engine; the candidate achieving the highest retrieval score is then predicted as the answer.
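The baseline above uses the whoosh engine; the snippet below is a simplified stand-in using scikit-learn TF-IDF vectors that illustrates the same idea of scoring each query-candidate concatenation against individual documents.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_predict(query_text, candidates, supports):
    """Score query + candidate against every single support document and predict
    the candidate whose best-matching document scores highest."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(supports)
    best_candidate, best_score = None, float("-inf")
    for c in candidates:
        q_vec = vectorizer.transform([query_text + " " + c])
        score = cosine_similarity(q_vec, doc_matrix).max()
        if score > best_score:
            best_candidate, best_score = c, score
    return best_candidate
```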
Document-cue Predicts the candidate that most frequently co-occurs with the documents in S_q, using the cooccurrence(d, c) statistic from Section 3.2 computed on the training set.

FastQA & BiDAF In our experiments we evaluate two recently proposed LSTM-based extractive QA models: the Bidirectional Attention Flow model (BiDAF, Seo et al., 2017) and FastQA (Weissenborn et al., 2017), which have shown robust performance across several extractive QA datasets. These models predict an answer span within a single document. In order to apply them in a multi-document setting, we concatenate all d ∈ S_q and add document separator tokens. During training, the first answer mention in the concatenated document serves as the gold span; at test time we measure accuracy based on exact string matching. The order of the concatenated support documents is randomised. In a preliminary experiment we trained models using various document order permutations, but found that the performance did not change significantly; we thus conclude that the specific order chosen does not have a major impact on the experiments we conducted.

For BiDAF, the default hyperparameters from the implementation of Seo et al. (2017) are used, with pretrained GloVe (Pennington et al., 2014) embeddings. However, we restrict the maximum document length to 8,192 tokens and the hidden size to 20, and train for 5,000 iterations with batch size 16 in order to fit the model into memory. For FastQA we use the implementation provided by the authors, also with pre-trained GloVe embeddings, no character embeddings, no maximum support length, hidden size 50, and batch size 64 for 50 epochs.

Lexical Abstraction: Candidate Masking
The presence of lexical regularities among answers is a problem in RC dataset assembly, a phenomenon already observed by Hermann et al. (2015).
When comprehending a text, the correct answer should become clear from its context, rather than from an intrinsic property of the answer expression. To evaluate the ability of models to rely on context alone, we created masked versions of the datasets: we replace any candidate expression randomly using 100 unique placeholder tokens, e.g. "Mumbai is the most populous city in MASK7." Masking is consistent within one sample, but generally different for the same expression if it appears in another sample. This not only removes answer frequency cues, it also removes statistical correlations between frequent answer strings and support documents. Models consequently cannot base their prediction on intrinsic properties of the answer expression, but have to rely on the context surrounding the mentions.
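One way to implement this masking is sketched below; the sample format and the simple regular-expression matching are illustrative, and not necessarily how the released masked data was produced.

```python
import random
import re

def mask_sample(sample, n_placeholders=100, seed=0):
    """Replace every candidate expression with a placeholder such as MASK7.

    The candidate-to-placeholder mapping is consistent within one sample; using a
    different seed per sample makes it differ across samples, as in the text."""
    rng = random.Random(seed)
    ids = rng.sample(range(n_placeholders), len(sample["candidates"]))
    mapping = {c: "MASK%d" % i for c, i in zip(sample["candidates"], ids)}

    def mask_text(text):
        for expression, token in mapping.items():
            text = re.sub(re.escape(expression), token, text, flags=re.IGNORECASE)
        return text

    return {
        "query": sample["query"],
        "supports": [mask_text(d) for d in sample["supports"]],
        "candidates": sorted(mapping.values()),
        "answer": mapping[sample["answer"]],
    }
```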

Results & Discussion
Table 3 shows the experimental outcomes for WIKIHOP and MEDHOP, together with results for the masked setting; we will first discuss the former. A first observation is that candidate mention frequency does not produce better predictions than a random guess. Predicting the answer most frequently observed at training time achieves strong results: as much as 38.8% and 58.4% on the two datasets. That is, a simple frequency statistic together with answer type constraints alone is a relatively strong predictor, and the strongest overall for the "unmasked" version of MEDHOP.
The TF-IDF retrieval baseline clearly performs better than random for WIKIHOP, but is not very strong overall. That is, the question tokens are helpful for detecting relevant documents, but exploiting only this information compares poorly to the other baselines. On the other hand, as no co-mention of an interacting drug pair occurs within any single document in MEDHOP, the TF-IDF baseline performs worse than random there. We conclude that lexical matching with a single support document is not enough to build a strong predictive model for either dataset.
The Document-cue baseline can predict more than a third of samples correctly for both datasets, even after sub-sampling frequent document-answer pairs in WIKIHOP. The relative strength of this and other baselines proves to be an important issue when designing multi-hop datasets, which we addressed through the measures described in Section 3.2. In Table 4 we compare the two relevant baselines on WIKIHOP before and after applying the filtering measures. The absolute strength of these baselines before filtering shows how vital addressing this issue is: 74.6% accuracy could be reached by exploiting the cooccurrence(d, c) statistic alone. This underlines the paramount importance of investigating and addressing dataset biases that would otherwise confound seemingly strong RC model performance on a given dataset. The relative drop demonstrates that the measures undertaken successfully mitigate the issue. A downside to this aggressive filtering is a significantly reduced dataset size, which renders it infeasible for smaller datasets like MEDHOP.
Among the two neural RC models, BiDAF is overall the strongest across both datasets; this is in contrast to the reported results for SQuAD, where their performance is nearly indistinguishable. This is possibly due to the iterative latent interactions in the BiDAF architecture: we hypothesise that these are of increased importance for our task, where information is distributed across documents. Overall it is worth emphasising that both FastQA and BiDAF, which are extractive QA models, do not rely on the candidate options C_q at all, unlike the other baselines, but predict the answer by extracting a span from the support documents.
In the masked setup all baseline models reliant on lexical cues fail in the face of the randomised answer expressions, since the same answer option has different placeholders in different examples. Especially on MEDHOP, where dataset sub-sampling is not a viable option, masking proves to be a valuable alternative, effectively circumventing spurious statistical correlations that RC models can learn to exploit.
Both neural RC models are able to largely retain or even improve their strong performance when answers are masked: they are able to leverage the context surrounding the masked answer expressions rather than intrinsic properties of the answer strings themselves.

Using only relevant documents
We conducted further experiments to examine the RC models when presented with only the relevant documents in S_q, i.e. the chain of documents leading to the correct answer. This allows us to investigate the hypothetical performance of the models if they were able to select and read only relevant documents; Table 5 summarises these results. Models improve greatly in this gold chain setup, with up to 81.2% on WIKIHOP in the masked setting for BiDAF. This demonstrates that RC models are capable of identifying the answer when few or no plausible false candidates are mentioned, which is particularly evident for MEDHOP, where documents tend to discuss only single drug candidates. On the other hand, it also shows that the models' answer selection process is not robust to the introduction of unrelated documents with type-consistent candidates. Lastly, the results indicate that learning to intelligently select relevant documents before RC may be among the most promising directions for future model development.

Removing relevant documents
To investigate whether the neural RC models can draw upon information requiring multi-step inference, we designed an experiment where we discard all documents from S_q that do not contain a candidate mention, including the first documents traversed. Table 6 shows the results: performance drops across the board for BiDAF. There is a significant drop of 3.3% on MEDHOP, but the drop for WIKIHOP is drastic at 10%, demonstrating that BiDAF is able to leverage cross-document information. FastQA shows a slight increase of 2.2% for WIKIHOP and a decrease of 2.7% on MEDHOP. While inconclusive, it is clear that FastQA, with fewer latent interactions than BiDAF, has problems integrating cross-document information.
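The ablation itself amounts to a simple filter over the support set, sketched here with a naive substring test standing in for proper mention detection.

```python
def keep_candidate_documents(sample):
    """Retain only support documents that mention at least one answer candidate,
    discarding the remaining (typically the first) documents of each chain."""
    return [doc for doc in sample["supports"]
            if any(c.lower() in doc.lower() for c in sample["candidates"])]
```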

Related Work
Related Datasets End-to-end text-based QA has witnessed a surge in interest with the advent of large-scale datasets, which have been assembled based on FREEBASE (Berant et al., 2013; Bordes et al., 2015b), WIKIPEDIA (Yang et al., 2015b; Rajpurkar et al., 2016; Hewlett et al., 2016), web search queries (Nguyen et al., 2016), news articles (Hermann et al., 2015; Onishi et al., 2016), books (Hill et al., 2015; Paperno et al., 2016), science exams (Welbl et al., 2017), and trivia (Boyd-Graber et al., 2012; Dunn et al., 2017). Besides TriviaQA (Joshi et al., 2017), all these datasets are confined to single documents, and RC typically does not require a combination of multiple independent facts. In contrast, WIKIHOP and MEDHOP are specifically designed for cross-document RC and multi-step inference. There exist other multi-hop RC resources, but they are either very limited in size, such as the FraCaS test suite, or based on synthetic language (Weston et al., 2015). Fried et al. (2015) have demonstrated that exploiting information from other related documents based on lexical semantic similarity is beneficial for re-ranking answers in open-domain non-factoid QA. Their method is related to ours, but their document connections are based on lexical semantic similarity between words, whereas ours are based on relations between specific entities. TriviaQA partly involves multi-step reasoning, but its complexity largely stems from parsing compositional questions. Our datasets centre around compositional inference from comparatively simple queries, and the cross-document setup ensures that multi-step inference goes beyond resolving co-reference.

Compositional Knowledge Base Inference
Combining multiple facts is common for structured knowledge resources which formulate facts using first-order logic. KB inference methods include Inductive Logic Programming (Quinlan, 1990; Pazzani et al., 1991; Richards and Mooney, 1991) and probabilistic relaxations of logic like Markov Logic (Richardson and Domingos, 2006; Schoenmackers et al., 2008). These approaches suffer from limited coverage and inefficient inference, though efforts to circumvent sparsity have been undertaken (Schoenmackers et al., 2008; Schoenmackers et al., 2010). A more scalable approach to composite rule learning is the Path Ranking Algorithm (PRA) (Lao and Cohen, 2010; Lao et al., 2011), which performs random walks to identify salient paths between entities. Gardner et al. (2013) circumvent the sparsity problems in PRA by introducing synthetic links via dense latent embeddings. Several other multi-fact inference methods based on dense representations have been proposed, using composition functions such as vector addition (Bordes et al., 2014), RNNs (Neelakantan et al., 2015; Das et al., 2016), and memory networks (Jain, 2016). Another approach is the Neural Theorem Prover (Rocktäschel and Riedel, 2017), which uses dense rule and symbol embeddings to learn a differentiable backward chaining algorithm. All these previous approaches centre around learning how to combine facts from a KB, i.e. facts in a structured form with a pre-defined schema. That is, they work as part of a pipeline, and either rely on the output of a previous IE step (Banko et al., 2007) or on direct human annotation (Bollacker et al., 2008), which tends to be costly and biased in coverage. However, recent neural RC methods (Seo et al., 2017; Shen et al., 2017a) have demonstrated that end-to-end language understanding approaches can infer answers directly from text, sidestepping intermediate query parsing and IE steps. Our work aims to evaluate whether end-to-end multi-step RC models can indeed operate on raw text documents only, while performing the kind of inference most commonly associated with logical inference methods operating on structured knowledge.

Text-Based Multi-Step Reading Comprehension
A rich collection of neural network models tailored towards multi-step RC has been developed over the last few years. Memory networks (Weston et al., 2014; Sukhbaatar et al., 2015; Kumar et al., 2016) define a generic model class that iteratively attends over memory items defined via text, and they show promising performance on synthetic tasks requiring multi-step reasoning (Weston et al., 2015). One common characteristic of neural multi-hop models is their rich structure, which enables matching and interaction between question, context, answer candidates, and combinations thereof (Peng et al., 2015; Weissenborn, 2016; Xiong et al., 2016; Liu and Perez, 2017); this is often iterated over several times (Sordoni et al., 2016; Mark et al., 2016; Seo et al., 2016; Hu et al., 2017) and may involve trainable stopping mechanisms (Graves, 2016; Shen et al., 2017b). All these methods show promise in single-document RC, and by design should be capable of integrating multiple facts across documents. However, thus far they have not been evaluated on a cross-document multi-step RC task, as in this work.
Learning Search Expansion Other research addresses expanding the document set available to a QA system, either in the form of web navigation (Nogueira and Cho, 2016), or via query reformulation techniques, which often use neural reinforcement learning (Narasimhan et al., 2016; Nogueira and Cho, 2017; Buck et al., 2017). While related, this line of work ultimately aims at reformulating queries to better acquire evidence documents, not at answering queries by combining facts.

Conclusions
We have introduced a new cross-document multi-hop RC task, devised a generic dataset derivation strategy, and applied it to two separate domains. The resulting datasets test RC methods in their ability to perform composite reasoning, something thus far limited to models operating on structured knowledge resources. In our experiments we found that contemporary RC models can leverage cross-document information, but a sizeable gap to human performance remains. Finally, we identified the selection of relevant document sets as the most promising direction for future research.

A Appendix: Document-Cue examples
Table 7 shows examples of answers and articles which frequently appear together in WIKIHOP before filtering.

B Appendix: Gold Chain Examples
Table 8 shows examples of document gold chains in WIKIHOP. Note that their lengths differ, with a maximum of 3 documents.
C Appendix: Query Types

Figure 1: A sample from the WIKIHOP dataset where it is necessary to combine information spread across multiple documents to infer the correct answer.

Figure 2: A bipartite graph connecting entities and documents mentioning them. The bold edges are those traversed for the first fact in the small KB on the right; yellow highlighting indicates documents in S_q and candidates in C_q. Check and cross indicate correct and false answer candidates.

Figure 5: Histogram for the number of candidates per sample in WIKIHOP.
Query: (the big broadcast of 1937, genre, ?) Answer: musical film
Text 1: The Big Broadcast of 1937 is a 1936 Paramount Pictures production directed by Mitchell Leisen, and is the third in the series of Big Broadcast movies. The musical comedy stars Jack Benny, George Burns, Gracie Allen, Bob Burns, Martha Raye, Shirley Ross [...]
Text 2: Shirley Ross (January 7, 1913 - March 9, 1975) was an American actress and singer, notable for her duet with Bob Hope, "Thanks for the Memory" from "The Big Broadcast of 1938" [...]
Text 3: The Big Broadcast of 1938 is a Paramount Pictures musical film featuring W.C. Fields and Bob Hope. Directed by Mitchell Leisen, the film is the last in a series of "Big Broadcast" movies [...]

Query: (cmos, subclass of, ?) Answer: semiconductor device
Text 1: Complementary metal-oxide-semiconductor (CMOS) [...] is a technology for constructing integrated circuits. [...] CMOS uses complementary and symmetrical pairs of p-type and n-type metal oxide semiconductor field effect transistors (MOSFETs) for logic functions. [...]
Text 2: A transistor is a semiconductor device used to amplify or switch electronic signals [...]

Query: (raik dittrich, sport, ?) Answer: biathlon
Text 1: Raik Dittrich (born October 12, 1968 in Sebnitz) is a retired East German biathlete who won two World Championships medals. He represented the sports club SG Dynamo Zinnwald [...]
Text 2: SG Dynamo Zinnwald is a sector of SV Dynamo located in Altenberg, Saxony [...] The main sports covered by the club are biathlon, bobsleigh, luge, mountain biking, and Skeleton (sport) [...]

Query: (minnesota gubernatorial election, office contested, ?) Answer: governor
Text 1: The 1936 Minnesota gubernatorial election took place on November 3, 1936. Farmer-Labor Party candidate Elmer Austin Benson defeated Republican Party of Minnesota challenger Martin A. Nelson.
Text 2: Elmer Austin Benson [...] served as the 24th governor of Minnesota, defeating Republican Martin Nelson in a landslide victory in Minnesota's 1936 gubernatorial election. [...]

Query: (ieee transactions on information theory, publisher, ?) Answer: institute of electrical and electronics engineers
Text 1: IEEE Transactions on Information Theory is a monthly peer-reviewed scientific journal published by the IEEE Information Theory Society [...] the journal allows the posting of preprints [...]
Text 2: The IEEE Information Theory Society (ITS or ITSoc), formerly the IEEE Information Theory Group, is a professional society of the Institute of Electrical and Electronics Engineers (IEEE) [...]

Query: (louis-philippe fiset, country of citizenship, ?) Answer: canada
Text 1: Louis-Philippe Fiset [...] was a local physician and politician in the Mauricie area [...]
Text 2: Mauricie is a traditional and current administrative region of Quebec. La Mauricie National Park is contained within the region, making it a prime tourist location. [...]
Text 3: La Mauricie National Park is located near Shawinigan in the Laurentian mountains, in the Mauricie region of Quebec, Canada [...]

Table 1: Dataset sizes for our respective datasets.

Table 3: Test accuracies for the WIKIHOP and MEDHOP datasets.

Table 4: Accuracy comparison for simple baseline models on WIKIHOP before and after filtering.
Scotland [...] is a country that is part of the United Kingdom and covers the northern third of the island of Great Britain.
Shanghai [...] often abbreviated as Hu or Shen, is one of the four direct-controlled municipalities of the People's Republic of China.

Table 7: Examples with the largest cooccurrence(d, c) statistic, before filtering. The Count column states cooccurrence(d, c); the last column states the corresponding relative proportion of training samples (total 527,773).

Table 8: Examples of document gold chains in WIKIHOP. Article titles are boldfaced, the correct answer is underlined.

Table 9: The 25 most frequent query types in WIKIHOP alongside their proportion in the training set.