Comparison of bibliographic data sources: Implications for the robustness of university rankings

Universities are increasingly evaluated, both internally and externally, on the basis of their outputs. Often these are converted to simple, and frequently contested, rankings based on quantitative analysis of those outputs. These rankings can have substantial implications for student and staff recruitment, research income and the perceived prestige of a university. Both internal and external analyses usually rely on a single data source to define the set of outputs assigned to a specific university. Although some differences between such databases are documented, few studies have explored them at the institutional scale or examined the implications of these differences for the metrics and rankings derived from them. We address this gap by performing detailed bibliographic comparisons between three key databases: Web of Science (WoS), Scopus and the recently relaunched Microsoft Academic (MSA). We analyse the differences between outputs with DOIs identified from each source for a sample of 155 universities and supplement this with a detailed manual analysis of the differences for fifteen universities. We find significant differences between the sources at the university level. Sources differ in the publication year of specific objects, the completeness of metadata, and their coverage of disciplines, outlets and publication types. We construct two simple rankings based on citation counts and open access status of the outputs for these universities and show dramatic changes in position based on the choice of bibliographic data source. The universities that experience the largest changes are frequently those from non-English-speaking countries and those outside the top positions in international university rankings. Overall, MSA has greater coverage than Scopus or WoS, but less complete affiliation metadata.
We suggest that robust evaluation measures need to consider the effect of the choice of data sources, and we recommend an approach in which data from multiple sources are integrated to provide a more robust dataset.

Bibliometric statistics are commonly used by university leadership, governments, funders and related industries to quantify academic performance. This in turn may define academic promotion, tenure, funding and other functional facets of academia. This obsession with excellence is highly correlated with various negative impacts on academic behaviour and with research bias (Anderson et al., 2007; Fanelli, 2010; van Wessel, 2016; Moore et al., 2017). Furthermore, these metrics (such as citation counts and impact factors) are often derived from one of the large bibliographic sources such as Web of Science (WoS), Scopus or Google Scholar (GS). Given the potential differences in their coverage of the scholarly literature, quantitative evaluations of research based on a single database present a risky basis on which to make policy decisions.
In a related manner, these bibliographic sources and metrics are also used in various university rankings. For example, Scopus is utilised by the QS World University Rankings and the THE World University Rankings for citation counts, while the Academic Ranking of World Universities makes use of WoS for a similar purpose. These rankings, and others, have been driving systematic transformations in higher education, including an increased focus on student satisfaction and changes in consumer behaviour. A focus on performance according to the narrow set of measures reflected in university rankings comes with a number of side effects, such as institutional homogenization, distorted disciplinary balance and altered institutional focus (Shin & Toutkoushian, 2011; Hazelkorn, 2007). As a result of heavy criticism by the scientific community, university rankings (together with impact factors) have recently been boycotted by some academic stakeholders (Stergiou & Lessenich, 2014). This also includes domestic rankings. Nevertheless, they are still widely marketed and used, without necessarily being carefully comprehended by decision makers (e.g., policymakers, students).
Bibliographic data sources evidently have a significant impact on the academic landscape. This makes the selection and use of such databases important to various stakeholders. As such, a number of research questions arise: 1. Are there differences across bibliographic databases? 2. If there are differences, can we characterise them? 3. Do these differences matter, and how? 4. And to whom do these differences matter?
Answers to these questions may shed light on better and more robust ways of understanding scholarly outputs. For all of these questions our concern is how these different analytical instruments differ in the completeness, comparability and precision of the information they provide at the institutional level. Our focus is not on reconstructing a 'true' view of scholarly outputs but on a comparison of this set of tools.

Literature review
Citation indexing of academic publications began in the 1960s, with the introduction of the Science Citation Index (SCI) by Eugene Garfield. This was followed by the annual release, starting from 1975, of Impact Factors through the Journal Citation Reports. The Impact Factor was initially developed to select additional journals for inclusion in the SCI. At that stage, much of the citation extraction was done manually (e.g., using punched cards as input to early computers) and results were restricted to a niche selection of articles and journals. However, with the explosion of the Internet in the 1990s, citation indexing became automated, leading to the creation of CiteSeer (Giles et al., 1998), the first automatic public citation indexing system.
The rapid up-scaling of citation records created opportunities for new research explorations and bibliographic services. The former is often driven by citation analysis in the fields of bibliometrics and scientometrics, where quantitative evaluations of the academic literature play major roles. The latter is evidenced by the rise of large bibliographic and citation databases. Some of the most popular databases include WoS, Scopus, GS, and, more recently, Microsoft Academic (MSA).
WoS was the only systematic source for citation counts until 2004, when Scopus and GS were introduced. One of the earliest comparisons of these three sources was done by Jacsó (2005). The article reported on search results for citations to an article, citations to a journal and citations to the 30 most cited papers in a particular journal. At that time, WoS had the highest number of records simply because of its longer time span, Scopus had the widest coverage for more recent years, and GS had the lowest number of records, with very limited search functions and incoherent metadata records.
Other early studies showed that Scopus offered 20% more coverage of citations than WoS, while GS (although with good coverage) had inconsistent accuracy in its results (Falagas et al., 2008). A number of studies have shown that average citation counts across disciplines varied by source (Bakkalbasi et al., 2006; Yang & Meho, 2006; Kulkarni et al., 2009). It was also shown that, for a small list of researchers, the h-index calculated from these three sources gave very different results (Bar-Ilan, 2008). The latest large-scale comparison showed that GS had significantly more coverage of citations than WoS and Scopus, though the rank correlations were high (Martín-Martín et al., 2018). Interestingly, Archambault et al. (2009) also showed that rankings of countries by number of papers and citations were highly correlated between results extracted separately from WoS and Scopus. Mongeon & Paul-Hus (2016) found that the journal coverage of both WoS and Scopus was biased towards the Natural Sciences, Engineering and Biomedical Research. More importantly, their overall coverage differed significantly. Similar findings were obtained by Harzing & Alakangas (2016) when GS was added to the comparison, although for a much smaller sample of objects. Franceschini et al. (2016) also studied database errors in both Scopus and WoS, and found that the distributions of errors were very different between these two sources.
MSA was re-launched (in beta version) in 2016 as the newly improved incarnation of the outdated Microsoft Academic Services. MSA obtains bibliographic data through web pages crawled by Bing. MSA's emergence and fast growth (at a reported rate of 1.3 million records per month) have spurred its use in several bibliometrics studies (De Domenico et al., 2016; Portenoy et al., 2016; Sandulescu & Chiru, 2016; Wesley-Smith et al., 2016; Vaccario et al., 2017; Portenoy & West, 2017; Effendy & Yap, 2017). At the same time, various papers have tracked changes in the MSA database and compared it to other bibliographic sources (Paszcza, 2016; Harzing, 2016; Harzing & Alakangas, 2017a; Harzing & Alakangas, 2017b). Its rapid development over the past two years, especially in correcting some vital errors, and its strength in coverage have been very encouraging. Tsay et al. (2017) indicated that MSA had similar coverage to GS and the Astrophysics Data System for publications of a sample of Physics Nobel Laureates from 2001 to 2013, with MSA having a much lower internal overlap percentage than GS. MSA has also recently been used to predict Article Influence scores for open access (OA) journals (Norlander et al., 2018). Thelwall (2018), using samples of publications, showed there was uniformity between citation analyses done via MSA and Scopus. Harzing & Alakangas (2017a) also showed, for individual researchers, that citation counts from MSA were similar to or higher than those from Scopus and WoS, varying across disciplines.

What is different in this study?
As discussed by Neylon & Wu (2009), using a single article-level or journal-level metric as a filter for scientific literature is deeply flawed, and incorporating diverse, effective measurement tools is a necessary practice. In a similar vein, using a single bibliographic source for evaluating specific aspects of academia can be very misleading. Given the immense social and academic impacts of the results of such evaluations, and the unlikeliness of them (as either part of research quantification or rankings) being completely discarded anytime soon, one ought to be cautious in both interpreting and constructing such evaluation frameworks. With this in mind, we aim to provide a deep exploration comparing the coverage of research objects with DOIs (digital object identifiers) in WoS, Scopus and MSA, in terms of both volume and various bibliographic variables, at the institutional level. In particular, a sample of fifteen universities is selected (ranging in geography, prestige and size) and data affiliated with each university are drawn from all three sources (from 2000 to 2018). Less detailed data are also collected for another 140 universities to be used as a supplementary set where applicable. An automated process is used to compare the coverage of the sources and the discrepancies in recorded publication years. In parallel, manual online searches were deployed to validate affiliation correctness and plausibility for samples of DOIs. The focus on DOIs also provides broader opportunities for cross-validation of bibliographic variables, such as OA status and document types from Unpaywall, and citation data from OpenCitations. This assists in further understanding the differences between these sources and the kinds of biases they may lead to.
Previous studies comparing WoS, Scopus and MSA were limited to publications linked to an individual researcher, a small group of researchers, or one university. These comparisons were also mostly drawn in relation to citation counts. This article extends the literature by expanding the study set to several universities and drawing institutional comparisons across a larger selection of characteristics and measures. The study further includes analyses of the potential effects of exclusively selecting one source when evaluating a set of bibliographic metrics, i.e., potential effects on the ranking of universities. The use of secondary data sources, i.e., Unpaywall and OpenCitations, to construct metrics for OA and citations is another variation from previous work. This gives standardised contrasting sets of records for comparisons across bibliographic sources and potentially reduces the level of dissimilarity caused by internal bias. The results lead to the main message that it is essential to integrate diverse data sources in any institutional evaluation framework.
The remainder of this article is structured as follows: Section 2 gives an overview of some global characteristics across the various bibliographic databases. Section 3 provides detailed descriptions of our data collection and manual cross-validation processes. All analyses and results are presented in Section 4. Sections 5 and 6 are discussions on limitations and conclusions, respectively.

Global comparison of features and characteristics across WoS, Scopus & MSA
WoS and Scopus are both online subscription-based academic indexing services. WoS was originally produced by the Institute for Scientific Information (ISI), but was later acquired by Thomson Reuters, and then by Clarivate Analytics (formerly a part of Thomson Reuters). It contains a list of several databases, where access (full or partial) to each depends on the selected subscription model. The search functionality can also vary according to which databases are selected (for example, the "Organization-Enhanced" search option is not available when all WoS databases are included). On the other hand, Scopus (provided by Elsevier) seems to offer one unified database of all document types (the only exception is data on patents, which appears as a separate list in search results). A quick manual online search reveals a wider variety of document types in WoS. For example, it contains items listed as "poetry", which does not seem to fit into any of the types in Scopus.

MSA is open to the public through the Academic Knowledge API, though both a rate limit and a monthly usage cap apply to this free version (see https://dev.labs.cognitive.microsoft.com/products/5636d970e597ed0690ac1b3f). The subscription version is documented as relatively cheap at $0.32 per 1000 transactions (see https://azure.microsoft.com/en-au/pricing/details/cognitive-services/academic-knowledge-api/). Its semantic search functionality and ability to cater for natural language queries are among the main differences from the other two bibliographic sources. Its coverage of patents has greatly increased through the recent inclusion of Lens.org metadata.

As a preliminary examination, we take a look at some global characteristics and features across the three sources. Table 1 provides an overview of coverage and comparative strengths in each source. Figures are as per website searches or reports on 7 August 2018, as permitted through the advanced search functions in WoS and Scopus on that date; the numbers reported are not necessarily the same as the total number of user-accessible records (for estimates of user-accessible records, see Gusenbauer, 2018). While this article was being prepared, Elsevier announced an agreement to use Unpaywall data (https://www.elsevier.com/connect/elsevier-impactstory-agreement-will-make-open-accessarticles-easier-to-find-on-scopus) and later implemented it (https://blog.scopus.com/posts/scopus-makes-millions-of-open-access-articles-easily-discoverable).
WoS has several databases from which it extracts data. The most commonly used version is WoS Core, which allows for more functionality. On the other hand, WoS All Databases includes all databases listed by WoS (with increased coverage for the Social Sciences and local languages, for example), but due to varying levels of availability of information its functionality is limited, e.g., fewer search query options. Scopus does not seem to index Arts & Humanities, while MSA appears to have significantly more coverage in the Social Sciences and Arts & Humanities than WoS Core and Scopus. With higher coverage of journals and conferences, MSA tracks a significantly larger set of records. It is also interesting to note that MSA had approximately 127 million documents only a couple of years ago (Herrmannova & Knoth, 2016). The annual total numbers of objects for the various sources from 1970 to 2017 are displayed in Figure 1. In comparison to Jacsó (2005), and other studies mentioned earlier, there seem to have been significant increases in both Scopus and WoS, in terms of both growth over time and backfilling. However, both sources still have significantly lower total counts than MSA. The figure also shows a high degree of correlation between Scopus, WoS Core and WoS All. However, this figure does not provide any information on internal or external overlaps across the sources (which we shall explore).
To get a better overview of research disciplines covered by each source, the percentage spread of objects across disciplines, for each source, is displayed in Figure 2. Evidently all sources are dominated by the sciences, as commonly noted in the literature. However, MSA does seem to have relatively higher proportions for both Social Sciences and Arts & Humanities.

Methodology & data
To perform a more detailed comparison of sources, we gather output for a selected set of fifteen universities (which range in geography, prestige and size) from each bibliographic source, i.e., WoS, Scopus and MSA. This is done through the use of the APIs for each source.
We supplement this with Unpaywall and OpenCitations data. Unpaywall is used to query the OA status and document type of (Crossref) DOIs. For this article, we only require the general OA status and not the type of OA (e.g., gold OA, green OA, etc.). Hence, we only use the "is_oa" field in the Unpaywall metadata to determine the OA status of DOIs in our data. Document type is determined via the data field "genre". OpenCitations records citation links between Crossref DOIs. By querying and merging all links to a DOI, it allows us to determine the number of citations that DOI receives. We gather this information for a set of DOIs of interest (e.g., DOIs from WoS affiliated to one university) and obtain total citation counts for this set. This total can then be divided by the number of (Crossref) DOIs affiliated to this university to produce an average citation count.

A manual process is followed for checking characteristic 6. The procedure for the manual validation is focused on the non-overlapping parts of the three sources (i.e., the shaded sections in Figure 3). The overlapping parts indicate agreement by at least two sources, over both affiliation and publication year records (when filtered down to a particular year). Given the different ways in which the sources gather data, the reliability of information for these parts is much more convincing. In contrast, the non-overlapping sections are not validated by other sources (at least as far as the data gathering process reveals). This leads to the need for the manual validation process.
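The steps above can be sketched as plain functions over already-fetched records; the record layouts mirror the public Unpaywall fields named in the text ("is_oa", "genre"), while the function names and the citation-count dictionary are our own illustrative assumptions.

```python
def oa_status(unpaywall_record):
    """Binary OA flag from an Unpaywall metadata record (its "is_oa" field)."""
    return bool(unpaywall_record.get("is_oa"))

def document_type(unpaywall_record):
    """Crossref-reported document type via Unpaywall's "genre" field."""
    return unpaywall_record.get("genre", "unknown")

def average_citations(citation_counts, university_dois):
    """Total citation links to a university's DOIs divided by its DOI count.

    citation_counts: dict mapping DOI -> number of incoming citation links,
    as merged from OpenCitations queries; DOIs absent from it count as zero.
    """
    if not university_dois:
        return 0.0
    total = sum(citation_counts.get(doi, 0) for doi in university_dois)
    return total / len(university_dois)
```

In practice the records would come from per-DOI API queries; the aggregation itself is independent of how the metadata was fetched.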
The publication year can be one reason for discrepancies in coverage, due to inconsistencies in which date is recorded. For example, in the case of a journal article, a source may choose to record the date of the journal issue, the publication date of the article, or the date on which the article first appeared online. Hence, our first step is to check whether DOIs from the non-overlapping sections are indeed in another source but fall in a different year. After removing the DOIs identified via this comparison to adjacent years, we sample the remaining DOIs from each non-overlapping section for manual validation (Figure 3). This process is applied to DOIs from 2016.
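The adjacent-year check can be sketched as follows, assuming (our assumption, not the authors' implementation) that each source's holdings are held as per-year DOI sets:

```python
def match_in_adjacent_years(exclusive_dois, other_dois_by_year, year, window=1):
    """Return the subset of `exclusive_dois` (from one source in `year`) that
    another source indexes under a nearby year within +/- `window` years.

    other_dois_by_year: dict mapping year -> set of DOIs in the other source.
    """
    matched = set()
    for offset in range(1, window + 1):
        for other_year in (year - offset, year + offset):
            matched |= exclusive_dois & other_dois_by_year.get(other_year, set())
    return matched
```

DOIs returned by this function are removed before sampling; the remainder go to the manual validation step.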

Figure 3: Non-overlapping sections (in grey) of the spread of DOIs from 3 sources for an institution in a particular year.
The process that leads to the manual validation is summarised in the flowchart given in Figure 4. Once DOIs are sampled from each non-overlapping section, they are compared against the other two sources (via DOI and title searches on each source's webpage) and also against the original document (online versions).

Assume we have three sources A, B and C, and that the current set of DOIs is from source A. A series of questions is then asked as part of the manual checking process, with an analogous procedure used for DOIs from the other two sources. The numbers of DOIs sampled for each institution are 30, 30 and 40 from (exclusively) WoS, Scopus and MSA, respectively, after removal of DOIs that are found in another source under a different year.

Analysis & discussion
In this section, we proceed with the comparisons across sources. We start by exploring the coverage of DOIs by each source. This is followed by examining the degree of agreement, or disagreement, in the publication year recorded by each bibliographic source. We then analyse document types, citation counts and OA percentages by source. Lastly, a manual cross-validation procedure is employed for samples extracted from the non-overlapping sections of the Venn diagrams for each institution in our sample of 15 institutions.

Coverage and distribution of DOIs
Here we explore the spread of the DOIs across the sources. Figure 5 shows the Venn diagrams of DOI counts for our initial set of 15 universities, combined from 2000 to 2018 and for just 2016 (dates as per each source's metadata), respectively. Evidently, the central regions (the overlap of all three sources) have the highest count in each Venn diagram. These are DOIs that have been indexed by all three sources and, given the intended global coverage of these sources, the relatively higher counts here are not at all surprising. However, there are also significant portions of DOIs accessible exclusively via a single source in both Venn diagrams. This points to the potential biases in any bibliometric measure calculated from a single source. This pattern of difference in coverage is mirrored at the institutional level. Appendix 3 contains two Venn diagrams for each institution, both for 2016. In each case, the Venn diagram on the left records all DOIs as per bibliographic source and the one on the right is the subset of these DOIs that are also in the Unpaywall database. The two Venn diagrams for each institution are quite similar, due to the high coverage of these DOIs by Unpaywall. The only exception is the Scopus coverage of DOIs for DUT, for which the DOIs exclusively indexed by Scopus decrease significantly when moving from the left Venn diagram to the one on the right. This is consistent with what we observed earlier, with many of these DOIs (provided by agencies other than Crossref) not indexed in Unpaywall. The overall pattern is that there appear to be significant portions of DOIs indexed by only a single source. Hence, pulling these sources together can greatly enhance coverage. Interestingly, for most institutions, MSA has the largest number of exclusively indexed DOIs, with UCL being the only exception.
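The seven exclusive Venn regions underlying these diagrams reduce to plain set operations on the three per-source DOI sets; this is a minimal sketch (section labels follow Figure 3, while the function names and region ordering are our own):

```python
def venn_row(wos, scopus, msa):
    """Counts of DOIs in the seven exclusive Venn regions of three DOI sets,
    ordered as (WSM, WS, WM, SM, W, S, M)."""
    return (
        len(wos & scopus & msa),        # WSM: indexed by all three sources
        len((wos & scopus) - msa),      # WS: WoS and Scopus only
        len((wos & msa) - scopus),      # WM: WoS and MSA only
        len((scopus & msa) - wos),      # SM: Scopus and MSA only
        len(wos - scopus - msa),        # W: exclusive to WoS
        len(scopus - wos - msa),        # S: exclusive to Scopus
        len(msa - wos - scopus),        # M: exclusive to MSA
    )

def venn_proportions(row):
    """Convert a count row to proportions of the institution's total DOIs."""
    total = sum(row)
    return tuple(c / total for c in row) if total else row
```

Stacking one such row per institution yields the kind of institutions-by-regions table analysed below.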
To get a better overview of how the coverage of these three sources varies across institutions, we perform several analyses as follows. First, we identify each institution with the seven different counts as per its own Venn diagram of all DOIs (the Venn diagrams on the left in Appendix 3). We also include another 140 universities for comparison. We view each (GRID ID, DOI) pair as a distinct object. Hence, we obtain a 155 by 7 contingency table.
Each column of this table represents the number of DOIs falling in the respective section of the Venn diagram, e.g., column 1 is the number of DOIs in section WSM of the Venn diagram (refer to Figure 3). We can also convert these counts to proportions by dividing them by the total number of DOIs for each institution. Figure 6 shows the distribution of these proportions for each section of the Venn diagram. The higher proportion in the central region (section WSM) of the Venn diagram is again observed. The general pattern that emerges is that, for all sections of the Venn diagram, there appears to be a concentrated central location with many extreme cases (excess kurtosis of 2.29, 9.72, 5.96, 1.82, 22.24, 11.49 and 6.88 for sections WSM, WS, WM, SM, W, S and M, respectively) and substantial skewness. Again, the pattern of a high central peak, skewness and heavy tails is observed when the proportions are aggregated by source. The peakedness and heavy tails are confirmed by excess kurtosis values of 4.29, 3.34 and 8.60 for WoS, Scopus and MSA, respectively. The left skewness, with a number of extreme cases, highlights the low degree of coverage for some universities. Meanwhile, a correlation analysis of the proportions for the three sources is quite intriguing (see Table 3). Both Spearman's rank correlation and Pearson's correlation matrices are presented. There appears to be a negative correlation between coverage by WoS and coverage by MSA, i.e., when there is a high proportion of coverage by WoS, the coverage by MSA is relatively low. There is also a low correlation between WoS and Scopus. While much of this may be attributed to the different methodological structures and focus across WoS, Scopus and MSA, the degree of non-alignment is still quite a surprise. To quantify these differences, we use three measures: d1, the sum of absolute differences across the whole Venn diagram; d2, the sum of differences across the inner regions; and d3, the sum of differences across the outer regions of the Venn diagram.
We calculate values for these three measures for each university's Venn diagram and compare their distributions to those produced by randomly generated Venn diagrams. Firstly, they are compared to randomly generated symmetrical Venn diagrams (with p_wos = p_scopus = p_msa generated from a uniform distribution truncated at ⅓ and 1). The resulting distributions are presented in Figure 8. It is quite obvious that the results from our data do not correspond to those of the generated symmetrical Venn diagrams. As further contrasts, we also compare these measures against Venn diagrams generated from various other distributions (see Appendix 5). As expected, our data are better represented by other distributions than by symmetrical Venn diagrams. (None of the cells in the associated contingency tables has an expected count of less than 10; simulation used the sampling procedure of Fisher's exact test with 5000 replicates, see https://www.rdocumentation.org/packages/stats/versions/3.6.1/topics/chisq.test.) Furthermore, there appear to be some differences in distributions across d1, d2 and d3, which we do not further examine and leave for future exploration. Now that we have confirmed the differences in DOI distributions across institutions and the negative-to-low correlations between the non-symmetrical coverages of the three bibliographic sources, a follow-up question is whether there are groupings amongst these universities. We proceed with a hierarchical cluster analysis for both the sample of 15 universities and all 155 universities, using dissimilarities between the proportions of the

Venn diagrams as clustering criteria. At the same time, we colour code the universities by their regions and by their positions in the 2019 THE World University Rankings. Some of these dendrograms are presented in Appendix 6. While no striking patterns emerge, there do appear to be some interesting groupings. For example, there seems to be a block of European and American universities towards the left of the dendrogram coloured by region. Perhaps unsurprisingly, around the same area of the dendrogram coloured by THE ranking, there is also a rough cluster of the most highly ranked universities. The contrasts may be more apparent for the smaller sample of 15 universities. An example of this is presented in Figure 9. ITB is clearly an outlier from the rest of the group (we shall come across this again later) and the two highest ranked universities are placed quite close to each other. Seven of the universities ranked 201 or beyond are placed on the right of the dendrogram (perhaps in two clusters), one of which consists mainly of universities from non-English speaking regions (Loughborough being the exception). In general, there appear to be some patterns of prestige and regional clustering. However, we may need a bigger set of universities for a full analysis.

Comparison of publication years
As mentioned earlier, discrepancies in the publication year recorded by different bibliographic sources are possible, given that there is no universal standard for the definition of publication year (or publication date, for that matter). It could potentially refer to various dates linked to a research output. This poses a problem when one would like to combine sources to evaluate and track a bibliometric variable (or metric) over time. If not dealt with, a DOI can be double-counted, i.e., counted two or more times in different years via different sources. In the following, we explore the amount of agreement (or disagreement) on publication years by WoS, Scopus and MSA. The overall numbers are presented in the accompanying table, which records the number of DOIs jointly indexed by pairs of bibliographic sources (columns 2 to 4) and by all three bibliographic sources (column 5) in row 3. The corresponding numbers and percentages of DOIs for which the sources agree on publication years are given in rows 2 and 4, respectively. It should be noted that these percentages are calculated over different sets of DOIs (i.e., different denominators). For example, the number of DOIs common to all three sources (404710) is less than the number of DOIs common to Scopus and MSA only (522026).
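A pairwise agreement percentage of this kind can be computed from DOI-to-year maps for two sources; this is a sketch under our own naming, with the denominator being the jointly indexed DOIs as described above:

```python
def year_agreement(years_a, years_b):
    """Proportion of jointly indexed DOIs on which two sources record
    the same publication year.

    years_a, years_b: dicts mapping DOI -> recorded publication year.
    """
    common = years_a.keys() & years_b.keys()
    if not common:
        return 0.0
    agreeing = sum(1 for doi in common if years_a[doi] == years_b[doi])
    return agreeing / len(common)
```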

Figure 10: Number (in log scale) of 2016 DOIs from each source (exclusively) that falls in another source but in a different year (15 universities combined).
It is clear that the overall levels of agreement are very high. However, two follow-up questions arise: 1) for DOIs that do lie in a different source under a different year, what is the spread of these DOIs over the years? 2) while the overall agreement on publication years is high, does that carry over to individual institutions?
To answer these questions, we now focus our attention on the year 2016 and the DOIs exclusively indexed by a single source for that year. Figure 10 displays the spread of such DOIs from a particular source when matched against the other sources for different years. These are again DOIs from our sample of 15 institutions combined. The majority of the discrepancies are within one year (i.e., falling in 2015 and 2017), while extending a further year in both directions covers almost all remaining cases. We also note some differences across the sources. Next, we explore how these discrepancies in publication year are distributed for individual institutions. Table 6 records, for each source, the percentages of DOIs from 2016 that lie in the other two sources but differ by one year and by two years, respectively. For WoS, the percentage of matches over one year is consistently small for all institutions, ranging from 0.8% to 2%. This also decreases significantly when moving to the two-year gap. In contrast, Scopus and MSA show more varied results for the one-year gap across institutions, with generally higher percentages than those of WoS.
The one standout case is ITB, an Indonesian university situated in the city of Bandung. Its results for WoS are similar to those of other institutions, but one-year comparisons from Scopus and MSA yielded 24.6% and 25.6%, respectively. We believe that this may be due to two reasons. Firstly, WoS has significantly less coverage for ITB (see the Venn diagrams for ITB in Appendix 3) than Scopus and MSA, and there is also a much lower number of DOIs exclusively indexed by WoS. Secondly, Indonesia has an extraordinarily large number of local journals owned by universities, many of them OA (with or without an OA license). This is largely driven by government policy that requires academics and students to publish research results and theses in academic journals. Many of these journals are also linked to conference output. This may have resulted in a systematic difference in how publication years (or dates) are recorded (or defined). The other two cases that stand out, although less extreme, are Cairo and IISC.
In Appendix 7, the directions of the comparisons are displayed in more detail for the three standout cases (i.e., Cairo, IISC and ITB). The comparisons are also narrowed down to just Scopus and MSA. It is immediately clear that the difference between Scopus and MSA is the main contributor to these standout cases. It also appears that MSA tends to record the publication year one year earlier than Scopus. This is in line with our earlier comments regarding MSA recording the date of first online publication and regarding the publishing venues in Indonesia.
Let us now focus on the outer parts of the Venn diagrams (i.e., DOIs that appear to be exclusively indexed by a single source). Results for these sets of DOIs are presented in Table 7. Columns 2, 5 and 8 list the numbers of 2016 DOIs exclusively indexed by WoS, Scopus and MSA, respectively (compare these again with the Venn diagrams in Appendix 3), without checking against DOIs listed in other years. The subsequent columns list the percentages of these DOIs that can be matched against DOIs in the other sources within a one-year and a two-year gap, respectively. Consistent with Table 6, significantly higher proportions of DOI matches occur after incorporating the first one-year gap, as compared to extending a further year on both sides. In relative terms, the most affected university is ITB, which corresponds to the observation made in Table 6. In general, the effect on these exclusive sets of DOIs varies considerably across institutions and sources (more so than observed in Table 6, as expected).

Document Types
Another important bibliographic variable is the document type (e.g., journal article, proceedings paper, book chapter, etc.) associated with each DOI. In particular, the coverage of different document types can lead to insights into potential disciplinary biases in data sources and differences in institutional focus on output types.
For this study, we use the "genre" variable in Unpaywall metadata to determine the document type of each DOI. These are Crossref-reported types for all DOI objects in the Crossref database. Journal articles make up the highest proportion of the DOIs, both overall and for individual parts of the Venn diagram (see Figure 3 for the labelling of the Venn diagram), which is not unexpected. Note that the counts here differ from those in Figure 5, because here we only include DOIs that are also recorded in Unpaywall. The scenario is again more interesting when we consider the outer parts of the Venn diagram (sections W, S and M). The set of DOIs exclusive to MSA contains significantly more book chapters and proceedings papers relative to any other part. It also provides almost all thesis entries in our data and is the only source to provide posted-content. On the other hand, Scopus provides many books and monographs not indexed by the other two sources. Again, we would like to examine how the situation plays out for individual institutions. After filtering the sets of DOIs down to each institution and to the year 2016, we follow the same procedure as above to produce the spread of document types across each part of an institution's Venn diagram. These are recorded in Appendix 8. As we observed for the combined data set, journal articles make up the highest proportion of the DOIs for each institution. The next two most common document types are book chapters and proceedings papers, the only exception being ITB, where there are slightly more proceedings papers than journal articles. Interestingly, a few universities have more book chapters than proceedings papers, namely Curtin, UNAM, UCL, UCT, Giessen and WSU.
There are high proportions of book chapters indexed exclusively by MSA for all institutions. MSA also has the highest proportion of exclusively indexed journal articles, except for MIT, UCL and Giessen (where WoS has the highest such proportion). It is also observed that MSA and Scopus bring in more additional proceedings papers than WoS (the only exception being UNAM, where all three sources have similar exclusive coverage of proceedings papers). Scopus also often adds books and monographs not indexed by the other two sources. For all universities, journal articles make up the majority of DOIs exclusively indexed by WoS. In contrast, the document types of DOIs exclusively indexed by Scopus or MSA are more diverse. Overall, we observe that each source has a different exclusive coverage of document types and that this coverage also varies across institutions.
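The per-section tabulation described above can be sketched in a few lines, given a DOI-to-genre mapping from Unpaywall and the DOI sets of each source. All records below are illustrative; `exclusive_genres` is our own helper name, not part of the study's code.

```python
from collections import Counter

# Sketch: tabulate Unpaywall/Crossref "genre" values for the section of the
# Venn diagram exclusive to each source. DOIs and genres are synthetic.

genres = {  # DOI -> Unpaywall "genre" (Crossref-reported type)
    "10.1/a": "journal-article", "10.1/b": "journal-article",
    "10.1/c": "book-chapter",    "10.1/d": "proceedings-article",
    "10.1/e": "posted-content",
}
sources = {
    "WoS":    {"10.1/a", "10.1/b"},
    "Scopus": {"10.1/a", "10.1/c"},
    "MSA":    {"10.1/a", "10.1/d", "10.1/e"},
}

def exclusive_genres(src, sources, genres):
    """Genre counts for DOIs indexed only by `src` and found in Unpaywall."""
    others = set().union(*(s for k, s in sources.items() if k != src))
    only = sources[src] - others
    return Counter(genres[d] for d in only if d in genres)

print(exclusive_genres("MSA", sources, genres))
```

Filtering the DOI sets by institution and year before calling the helper would reproduce the per-institution breakdowns of Appendix 8.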

Citation counts
One set of commonly used bibliographic metrics in the evaluation of academic output is that relating to citation counts. These include metrics such as the h-index, impact factor and eigenfactor. However, these citation metrics can be calculated from different sources: WoS, Scopus and MSA all record and maintain their own citation data. While some research has shown that citation counts across these sources are highly correlated at the author and journal levels (Harzing, 2016; Harzing & Alakangas, 2017a; Harzing & Alakangas, 2017b), the corresponding effects on a set of universities remain relatively unknown. These analyses were also performed using the internal citation counts of each source. In this study, we instead use a standard set of citation links applied to all three sources of DOIs. As such, we introduce a further reference set of data from OpenCitations. We match each DOI against the list of DOI citation links in OpenCitations and obtain its total citation count (where one exists). In Table 9, we present the results combining DOIs for our initial set of 15 universities and for all years from 2000 to 2018.
The results show that the total number of citations to MSA DOIs is slightly lower than for WoS and Scopus, despite MSA having an already larger set of (Unpaywall/Crossref) DOIs. Hence, MSA yields a lower average citation count from the OpenCitations citation links.
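The effect of a broader DOI set on the average can be made concrete with a small sketch: a single external list of (citing, cited) DOI pairs, in the spirit of the OpenCitations links, applied to two source DOI sets of different sizes. The links and DOI sets below are illustrative only.

```python
from collections import Counter

# Sketch: per-source total and average citation counts from one shared set
# of citation links (citing DOI, cited DOI). Data is synthetic.

citation_links = [
    ("10.9/x", "10.1/a"), ("10.9/y", "10.1/a"), ("10.9/z", "10.1/b"),
]
cited_counts = Counter(cited for _, cited in citation_links)

def totals(dois):
    """Total and average citation count for a source's DOI set."""
    total = sum(cited_counts.get(d, 0) for d in dois)
    return total, total / len(dois)

wos = {"10.1/a", "10.1/b"}                       # smaller set, all cited
msa = {"10.1/a", "10.1/b", "10.1/c", "10.1/d"}   # broader set with uncited DOIs

print(totals(wos))  # (3, 1.5)
print(totals(msa))  # (3, 0.75) - the larger denominator lowers the average
```

This mirrors the observation above: near-identical totals can still produce very different averages when coverage differs.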
As a further analysis, we investigate how the change of bibliographic source influences the perceived performance of an institution. Figure 11 presents two charts, showing total citations and ranks by average citations, for each of the sample of 15 universities. UCL and MIT experience the biggest changes in total citation counts: decreases of 34% and 38%, respectively (left chart in Figure 11), when shifting from WoS to MSA. While the total citation counts of the remaining universities change to a lesser degree across sources, the differing coverage of DOIs (i.e., the different number of DOIs recorded) by each source can still significantly change the average citation counts. This is evidenced in the second chart of Figure 11. Only four universities' rankings remain unchanged across sources (the top three and last place). All other universities' positions shift at least once across the three sources, with the biggest changes affecting IISC, USP and UNAM. For further insight into the distribution of shifts across sources, we summarise the pairwise changes to average citations and to rankings by average citations in the box plots of Figure 13. The median change to average citations when moving from WoS to Scopus is just below zero, while the corresponding medians for WoS to MSA and Scopus to MSA are both just above zero. The corresponding mean values are -0.2, 1.2 and 1.3, respectively. As for the changes to rankings, the median and mean values are all close to zero. The distributions in these box plots are characterised by a concentrated centre with long tails, again signifying the existence of two contrasting groups: universities that are less affected by shifts in bibliographic source, and those whose performance levels, in terms of average citations, can be greatly altered depending on the choice of source.
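The pairwise rank shifts summarised in the box plots can be sketched as follows. The numbers and helper names are our own illustrative assumptions (and ties in average citations are not handled, for simplicity).

```python
# Sketch: rank universities by average citations under each source and
# compute pairwise rank shifts. Values are synthetic.

avg_citations = {  # university -> {source: average citations}
    "U1": {"WoS": 12.0, "Scopus": 11.5, "MSA": 9.0},
    "U2": {"WoS": 10.0, "Scopus": 11.8, "MSA": 9.5},
    "U3": {"WoS":  5.0, "Scopus":  4.0, "MSA": 6.0},
}

def ranks(source):
    """1 = highest average citations under `source`."""
    order = sorted(avg_citations, key=lambda u: -avg_citations[u][source])
    return {u: i + 1 for i, u in enumerate(order)}

def rank_shift(src_a, src_b):
    """Positive values mean a university drops when moving from src_a to src_b."""
    ra, rb = ranks(src_a), ranks(src_b)
    return {u: rb[u] - ra[u] for u in ra}

print(rank_shift("WoS", "Scopus"))  # {'U1': 1, 'U2': -1, 'U3': 0}
```

Collecting these shift values over all source pairs and universities is what the box plots in Figure 13 summarise.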

OA status
A recent topic of interest is the amount of OA publications produced at different levels of the academic system. In particular, universities may wish to evaluate their OA standing for compliance with funder policies and OA initiatives. For objects with DOIs (and, in particular, Crossref DOIs), various information on accessibility can be queried through Unpaywall (https://unpaywall.org/). We match all DOIs from the sample of 15 universities to the Unpaywall metadata and calculate the percentage of OA output for each bibliographic source and for all (unique) DOIs combined. This is presented in Table 10. There do not appear to be substantial changes to the overall OA percentage when shifting across sources for the combined sets of DOIs. However, we should keep in mind that there are significant differences in each source's DOI coverage, as observed earlier.
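The per-source OA share can be sketched directly from Unpaywall-style metadata via its boolean `is_oa` field. The records below are synthetic and the helper is our own; a real computation would query the Unpaywall data dump or API.

```python
# Sketch: OA share of each source's DOI set from Unpaywall-style metadata.

unpaywall = {  # DOI -> `is_oa` flag as reported by Unpaywall
    "10.1/a": True, "10.1/b": False, "10.1/c": True, "10.1/d": False,
}

def oa_percentage(dois):
    """OA share among a source's DOIs that Unpaywall knows about."""
    known = [d for d in dois if d in unpaywall]
    return 100 * sum(unpaywall[d] for d in known) / len(known)

print(oa_percentage({"10.1/a", "10.1/b"}))            # 50.0
print(oa_percentage({"10.1/a", "10.1/c", "10.1/d"}))  # ~66.7
```

Because each source contributes a different DOI set to `oa_percentage`, the same Unpaywall data can yield different OA shares per source, which is exactly the institutional-level effect explored below.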
To see whether this consistency in OA percentages carries over to the institutional level for 2016, we again filter the data down to each university. Figure 14 provides the percentages of OA output and the corresponding relative ranks for each institution, based on the set of 2016 DOIs indexed by each source and also recorded in Unpaywall. We observe that, for quite a few universities, the OA percentages vary considerably depending on which source is used to obtain the sets of DOIs. The most extreme case is again ITB, which shows a drop of about 20% when moving from WoS to Scopus. The direction of OA percentage changes also differs across universities: for example, the OA percentage for MIT decreased when moving from Scopus to MSA, while the opposite occurred for USP. This is especially critical if one is to compare relative OA status across universities, which can vary according to the source of DOIs used. As for OA ranks, the results indicate one group of universities unaffected by changing source, while another group has its ranks shifted significantly. The most affected cases are USP, ITB and UNAM. The effects on OA levels and ranks are more difficult to express directly for the larger set of 155 universities. Again, instead of labelling the full set of universities, we highlight only those that shifted by 20 positions or more at least once. This is displayed in Figure 15. There are 24 out of 155 universities that shifted at least 20 positions in OA ranking when moving across sources. Seventeen of these are from non-English-speaking regions, including six Latin American universities (out of seven in the full set). This is an indication of the potential difference in coverage of the three sources due to language.
Analogously to the earlier analysis of citations, we calculate differences in OA percentages and OA ranks when shifting from one source to another and present these as box plots in Figure 16.
Evidently, the median OA% changes when shifting from WoS to Scopus, WoS to MSA, and Scopus to MSA are all positive. The corresponding mean changes are also positive, at 3.4%, 4.9% and 1.5%, respectively. The median and mean changes to rankings are all close to zero. However, for both OA% and OA rank changes, there are many extreme points (both negative and positive). These include an OA% change as large as 31.1% (moving from WoS to MSA) and an extreme drop in OA rank of 96 positions (MSA to WoS). The general distributions of both the changes to OA% and the changes to OA rankings are characterised by high central peaks and long tails. This implies that, while changes are small for the bulk of the universities, there is also a significant number of cases where universities are greatly affected by shifts in data sources.

Manual cross-validation
This section provides a summary of our manual cross-validation results for DOIs exclusively indexed by each source. For each of the 15 institutions, we randomly sampled 40, 30 and 30 DOIs from their sets of 2016 DOIs exclusively indexed by WoS, Scopus and MSA, respectively (i.e., sections W, S and M of the Venn diagram in Figure 3). This was done after the removal of DOIs that match up with the other sources in a different year (including the neighbouring two years, i.e., 2014, 2015, 2017 and 2018). Subsequently, these lists of DOIs went through a thorough manual cross-validation process. Various questions were asked of each DOI and compared across the three bibliographic sources. These are summarised in a table in Appendix 9.
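The sampling step can be sketched as follows. The set sizes and DOI strings are synthetic, and the fixed seed is our own addition for reproducibility; the study does not specify how its random draws were made.

```python
import random

# Sketch: draw a per-source validation sample from the 2016-exclusive DOI
# set, after removing DOIs matched to another source in a neighbouring year.

def validation_sample(exclusive_dois, year_mismatched, k, seed=0):
    pool = sorted(set(exclusive_dois) - set(year_mismatched))
    random.seed(seed)  # fixed seed so the draw is reproducible
    return random.sample(pool, min(k, len(pool)))

wos_only = {f"10.1/{i}" for i in range(60)}
mismatched = {f"10.1/{i}" for i in range(10)}  # found elsewhere in 2014-2018

sample = validation_sample(wos_only, mismatched, k=40)
print(len(sample))  # 40
```

Repeating the draw with k=40, 30 and 30 over the W, S and M sections of each institution's Venn diagram would produce the validation lists described above.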
In the following, we highlight some of the main findings in a few simple charts, with further detailed analysis provided in Appendix 9. Firstly, we focus on the plausibility of the affiliation associated with each DOI. In Figure 17, we present results relating to the affiliation of each DOI as per source. For each DOI, the target affiliation is checked against its original online document. When the original document is not accessible (e.g., not OA), the affiliation is matched against the other two sources. The affiliation is deemed plausible when the target affiliation (i.e., the affiliation as per our data collection process) appears exactly on the document (including obvious versions of the university name), when a plausible affiliation name variant appears on the document, or when the affiliation is confirmed by at least one of the other two bibliographic sources (if found). This should (roughly) inform us about whether each source has correctly assigned these DOIs to the target affiliations.
The results show that all sources correctly assigned only roughly 80% of their respective sampled DOIs to the target affiliations, with very little difference in performance across the sources. When this is filtered down per university, we see a more varied performance across universities. (Document checking was done via doi.org as a first pass, followed by a manual title search online. The decision of whether an affiliation is a plausible variant of the target affiliation was made somewhat subjectively, but informed via simple online searches; variants may include subdivisions of the target affiliation, such as departments or research groups, as well as aliases. The strategy is that this should be a simple decision via a quick online search; otherwise a negative response is recorded.)
Interestingly, the percentages are not uniformly high across the universities. This is especially apparent for DUT and IISC, where MSA appears to have affiliated many DOIs with these two institutions without the target affiliations actually appearing on the original documents or being confirmed by another source. Similarly, of the DOIs assigned to MSU and UNAM by Scopus, only 46.7% (for both institutions) have a plausible affiliation match.
We have also checked each DOI against the DOI string actually recorded on the original document (where applicable) or via doi.org. The percentages of correct DOIs are 93.1%, 98.2% and 96.7% for WoS, Scopus and MSA, respectively (all 15 institutions combined). While these numbers are relatively high, the significant number of errors suggests that DOIs are not systematically checked against authoritative sources such as Crossref, which we find surprising. In addition, the nature of these errors, which in some cases appear to be transcription or OCR errors, is concerning (see Supplementary Information in Appendix 10). We now take an overview of the results from DOI and title matching, given in Figure 19. As an initial analysis, no affiliation information is considered here, and the results represent all DOIs for the 15 universities combined. Each bar represents the percentage of output corresponding to DOIs (that initially appear to be) exclusively indexed by one source that can be found in another source by DOI matching and title matching (via manual searches online). For example, the first bar corresponds to objects with DOIs sampled from Scopus: the height of the blue bar shows the percentage of these objects that can be found in WoS by DOI matching, and the orange bar indicates how many more can be found by title matching.
We found that in all cases where there is a DOI match, there is also a title match. However, the opposite is not necessarily true. Hence, title matching increases the coverage slightly in all scenarios. This implies that all three sources have missing DOIs in their metadata, though there appear to be fewer such cases for Scopus. Scopus also seems to have good coverage of DOIs from WoS. More strikingly, a very high proportion of DOIs and titles from WoS and Scopus are found in MSA. In contrast, far fewer MSA DOIs and titles are covered by WoS and Scopus.
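The DOI-then-title matching strategy can be sketched as below. The normalisation rule (lowercase, strip punctuation, collapse whitespace) and all records are our own illustrative assumptions; the study performed this matching manually.

```python
import re

# Sketch: fall back to a normalised title match when a DOI string is
# absent from a source's metadata. Data is synthetic.

def norm(title):
    """Lowercase, strip punctuation, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

target = {"doi": None, "title": "Open Access in 2016: A Review"}  # DOI missing
candidates = [
    {"doi": "10.1/a", "title": "open access in 2016 - a review"},
    {"doi": "10.1/b", "title": "Something else entirely"},
]

def find_match(record, candidates):
    """Return the matching record and whether it matched by DOI or title."""
    for c in candidates:
        if record["doi"] and record["doi"] == c["doi"]:
            return c, "doi"
        if norm(record["title"]) == norm(c["title"]):
            return c, "title"
    return None, None

match, how = find_match(target, candidates)
print(how, match["doi"])  # title 10.1/a
```

As in the manual results, every DOI match implies a title match, but a title match can succeed where the DOI record is missing.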
In Figure 20, we add affiliation matching to the mix, i.e., we check whether the target affiliation (the affiliation as per our data collection process) appears in the metadata of the matching source after an object is found by DOI or title match. This decreases the coverage in all cases, indicating potential disagreement on affiliation across sources. MSA is the most affected of the three sources.
The general picture that emerges is that MSA has good coverage of DOIs that initially appeared to be exclusive to WoS or Scopus. However, it falls short in getting the affiliations correct and in recording the corresponding DOIs. MSA also brings in more objects that genuinely appear to be exclusive to MSA; the correctness of the affiliation metadata for these is high overall but varies across institutions.

Limitations and challenges
One obvious limitation is our focus on DOIs and our dependence on the uniqueness of DOIs. We note that there may be research objects with multiple DOIs, and related objects may also be assigned a common DOI (books, for example, can fall into both cases). A related matter is the correctness of DOIs, i.e., whether they were recorded correctly (as per doi.org) in each source's metadata; DOIs that did not generate Unpaywall returns could include such cases. While our manual cross-validation process did check our samples against doi.org, it is not clear what the scale of this issue is for the overall data.
Our manual cross-validation process was carried out over a number of months after the initial data collection. This means that there may be discrepancies between metadata content at the time of collection and at the time of manual search. However, we expect such cases to be few, given that we focused on 2016 data, and a number of manual spot checks did not reveal any obvious cases.

Conclusion
This article has taken on the task of comparing and cross-validating various bibliographic characteristics (including coverage, publication date, OA status, document type, citations and affiliation) across three major research output indexing databases: WoS, Scopus and MSA. This was done mainly with a focus on identifying institutional-level differences and the corresponding effects of using different data sources when comparing institutions. Our data consist of all objects with DOIs extracted from the three bibliographic sources for an initial sample of 15 universities and a further, supplementary 140 universities (used only where applicable).
Firstly, we found that not only does the coverage of DOIs differ across the three sources, but the relative coverages are also asymmetrical, and the distribution of DOIs across the sources varies from institution to institution. This means that the sole use of one bibliographic source can seriously disadvantage some institutions, and advantage others, in terms of total number of outputs. While the general level of agreement on publication year is high across sources, individual universities showed large differences in coverage per year. The comparison of document types showed that different sources can systematically add coverage of selected research output types. This may be important when considering the coverage of different research discipline areas.
Our subsequent analyses further showed that while the aggregate citation counts and OA levels (i.e., for the 15 universities combined) varied little across sources, there are significant impacts at the institutional level. There were clear examples of universities shifting dramatically in both of these metrics when moving across sources, some in opposite directions. This makes any ranking comparison of citations or OA levels strongly dependent on the selection of bibliographic source.
Finally, we implemented a manual cross-validation process to check the metadata records for samples of DOIs that initially appeared to be exclusive to each source, for each of the 15 universities. The records were compared across the three bibliographic sources and against (where accessible) the corresponding online research documents. The process revealed cases of missing links between the metadata and the search functionalities within each database (for both affiliation and DOI). This means the real coverage of each source is unnecessarily truncated. Overall, it appears that MSA has the highest coverage of objects that initially appeared exclusive to the other sources. However, it often has missing DOIs and affiliations that do not match WoS, Scopus or the online documents.
There is also strong evidence that the effects of shifting sources may be more prominent for non-English-speaking and non-European universities. Similar signs were observable for universities that are medium-ranked in both citations and OA levels, while those that achieve high rankings in these measures show much smaller shifts in position when the data source is changed. Universities that are highly ranked on these measures also tend to be highly ranked in general rankings such as the THES, suggesting a bias in reliability, and therefore curation effort, towards prestigious universities.
Our concluding message is: any institutional evaluation framework that is serious about coverage should consider incorporating multiple bibliographic sources. The challenge lies in combining unstandardised data infrastructures that do not necessarily agree with each other. For example, one primary task would be standardising publication dates, especially for longitudinal studies. This may be possible, to a certain degree, using Crossref or Unpaywall metadata as an external reference set. Such problems are by no means trivial, but addressing them has the potential to greatly enhance the delivery of fairer and more robust evaluation.
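A minimal sketch of such an integration, under our own assumptions: union the per-source DOI records and take the publication year from an external reference set (e.g., Crossref) where available, falling back to the first year seen. The merge policy and all records below are illustrative, not the paper's prescribed method.

```python
# Sketch: merge per-source DOI records into one union dataset, reconciling
# publication years against an external reference. Data is synthetic.

sources = {
    "WoS":    {"10.1/a": 2016, "10.1/b": 2016},
    "Scopus": {"10.1/a": 2015, "10.1/c": 2016},
    "MSA":    {"10.1/a": 2015, "10.1/d": 2017},
}
crossref_year = {"10.1/a": 2016, "10.1/c": 2016}  # external reference set

def merge(sources, reference):
    merged = {}
    for name, records in sources.items():
        for doi, year in records.items():
            entry = merged.setdefault(doi, {"indexed_by": set(), "year": None})
            entry["indexed_by"].add(name)
            # prefer the reference year; otherwise keep the first year seen
            entry["year"] = reference.get(doi, entry["year"] or year)
    return merged

merged = merge(sources, crossref_year)
print(len(merged))                              # 4 unique DOIs
print(merged["10.1/a"]["year"])                 # 2016 (from the reference set)
print(sorted(merged["10.1/a"]["indexed_by"]))   # ['MSA', 'Scopus', 'WoS']
```

Even this toy merge illustrates the two benefits argued for above: the union is larger than any single source, and disagreements (here, the year of 10.1/a) are settled against a shared external reference.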
Figure A4.2: Histograms of d1, d2 and d3 (left to right, respectively) for our data (in red) and for randomly generated Venn diagrams (in purple) from a multivariate skew Cauchy distribution (parameters estimated from the data; values generated using the mscFit command in the R package fMultivar). Figure A4.3: Histograms of d1, d2 and d3 (left to right, respectively) for our data (in red) and for randomly generated Venn diagrams (in purple) from a multivariate skew normal distribution (parameters estimated from the data; values generated using the msnFit command in the R package fMultivar). The top chart indicates that a relatively high proportion of DOIs from WoS are indexed in both Scopus and MSA. This implies that Scopus and MSA actually have good coverage of DOIs that initially seemed exclusive to WoS, but that these were not assigned to the target affiliations by Scopus and MSA. In contrast, far fewer DOIs from MSA can be found in WoS and Scopus (bottom chart). As for DOIs from Scopus (middle chart), a relatively high proportion of them can be found in MSA, but much fewer in WoS. An extreme exception is DUT, where neither WoS nor MSA has very high coverage of DOIs exclusive to Scopus (recall that Scopus had exclusive coverage of many DOIs registered with Chinese DOI agencies). Overall, it appears that MSA has a broader coverage of all DOIs, but the completeness of its affiliation metadata is lower.
While a DOI may be missing from a database, the object to which the DOI is assigned may actually still be in the database; i.e., the DOI was simply not recorded in the metadata of the research object. Hence, we also performed a "title match" (instead of matching DOI strings) across sources. For each sampled DOI, we check whether the corresponding document title can be found in the other two sources. The results are summarised in the three charts of Figure A9.2. All percentages in Figure A9.2 are equal to or greater than the corresponding percentages in Figure A9.1, because in every case where the DOI was found, the corresponding title was also found (i.e., various objects have correct titles but no record of their DOIs, never the other way around). The results also further highlight the extent of MSA's high coverage of objects whose DOIs initially appeared to be exclusively indexed by the other two sources. Otherwise, the general pattern is similar to what we observed earlier, with WoS titles mostly covered by both MSA and Scopus, MSA having greater coverage (than WoS) of Scopus titles, and both Scopus and WoS having relatively lower coverage of titles from MSA.
Having the correct affiliation recorded in metadata is not necessarily the same as having the correct affiliation linkage (e.g., an object may have the correct metadata but not show up in an affiliation search). To gain some insight into the extent of this issue, we match affiliations across sources. When two sources both match a DOI to its target affiliation, we refer to this as an "affiliation match". Figure A9.3 presents the findings, again via three charts. Each bar represents the percentage of DOIs from one source having both a title match and an affiliation match with another source. For example, the green bars in the top chart of Figure A9.3 denote the percentages of exclusive WoS titles that were found in Scopus and also have affiliation metadata in Scopus matching the target affiliations. It certainly appears that more WoS DOIs have title and affiliation matches in the other two sources. One can also note the decrease in percentages compared to Figure A9.2: a clear indication that many of these titles were simply not assigned to their target affiliations by the contrasting sources. However, the numbers here also include title matches that do not necessarily have DOI matches, which means we cannot tell how many of these should have been collected from the contrasting sources via our data collection process. Hence, we now filter down to objects that have both DOI matches and affiliation matches. These are presented in Figure A9.4 and indicate the numbers of DOIs that our data collection process should have captured (but did not) from each contrasting source, given that they have DOIs and are plausibly affiliated with the target affiliations. The likely reason for these to be missing from our collection is that the affiliation linkages are broken.
This could have various causes, but the most prominent one seems to be that the metadata (as per the source website) is not synchronised with the API returns we gathered. Another possible cause would be metadata changes between the time of data collection and the time of manual cross-validation; however, given that we primarily use 2016 data, the scale of this is expected to be relatively small, and manual spot checks did not find any cases where the metadata appears to have changed.