On this page, a summary of the data collection methodology of the CWTS Leiden Ranking 2014 is provided. It should be emphasized that, in general, universities did not verify and approve the publication data of their institution and that publications have been assigned to universities on the basis of the institutional affiliations mentioned by the authors of the publications. However, assigning publications on the basis of these affiliations is by no means a straightforward task. A university may be referred to by many different (non-English) name variants and abbreviations. In addition, the definition and delimitation of universities as separate entities is not always obvious.
The criteria that have been adopted to define universities for the Leiden Ranking are not very formal. Typically, a university is characterized by a combination of education and research tasks in conjunction with a doctorate-granting authority. However, these characteristics do not mean that universities are particularly homogeneous entities that allow for international comparison in every respect. The focus of the Leiden Ranking on scientific research does ensure that the institutions included in the ranking have a high degree of research intensity in common. Nevertheless, the ranking scores for each institution should be evaluated in the context of its particular mission and responsibilities. These missions and responsibilities in turn are strongly linked to the national and regional academic systems in which universities operate. Academic systems - and the role of universities therein - differ substantially from one another and are constantly changing. Inevitably, the outcomes of the Leiden Ranking reflect these differences and changes.
The international variety in the organization of academic systems also poses difficulties in terms of identifying the proper unit of analysis. In many countries, there are collegiate universities, university systems, or federal universities. Again, instead of applying formal criteria, we followed, where possible, common practice based on the way these institutions are perceived locally. Consequently, we treated the University of Cambridge and the University of Oxford as single entities, but in the case of the University of London we distinguished between the constituent colleges. For the United States, university systems (e.g. the University of California) were split up into separate universities. The higher education sector in France, as in many other countries, has gone through many reorganizations in recent years. Many French institutions of higher education have been grouped together in Pôles de Recherche et d'Enseignement Supérieur (PRES) or in consortia. In most cases, the Leiden Ranking still distinguishes between the constituent institutions, but in particular cases of very tight integration, consortia were treated as if they were a single university (e.g. Grenoble INP).
Publications are assigned to universities based on their most recent configuration. Changes in the organizational structures of universities up to 2013 have been taken into account. For example, in the Leiden Ranking 2014, the University of Lisbon, which merged with the Technical University of Lisbon in 2013, encompasses all publications assigned to the old University of Lisbon as well as those previously assigned to the Technical University of Lisbon.
A key challenge in the compilation of a university ranking is the handling of publications originating from research institutes and hospitals associated with universities. Academic systems vary widely in the types of relations universities maintain with these affiliated institutions. Usually, these relationships are shaped by local regulations and practices, and they affect the comparability of universities on a global scale. As there is no easy solution for this issue, it is important that producers of university rankings employ a transparent methodology in their treatment of affiliated institutions.
CWTS distinguishes three different types of affiliated institutions:
In the case of components, the affiliated institution is actually part of the university, or so tightly integrated with it or with one of its faculties that the two can be considered a single entity. The University Medical Centres in the Netherlands are examples of components. All teaching and research tasks in the field of medicine that were traditionally the responsibility of the universities have been delegated to these separate organizations, which combine the medical faculties and the university hospitals.
Joint research facilities or organizations are the same as components except for the fact that they are administered by more than one organization. The Brighton & Sussex Medical School, the joint medical faculty of the University of Brighton and the University of Sussex, and Charité, the medical school of both the Humboldt University and the Freie Universität Berlin, are examples of this type of affiliated institution.
The third type of affiliated institution is the associated organization which is more loosely connected to the university. This organization is an autonomous institution that collaborates with one or more universities based on a joint purpose but at the same time has separate missions and tasks. In many countries, hospitals that operate as teaching or university hospitals fall into this category. Massachusetts General Hospital, one of the teaching hospitals of Harvard Medical School, is an example of an associated organization.
The treatment of university hospitals in particular is of substantial consequence as medical research has a strong presence in the Web of Science. The importance of associated organizations is growing as universities present themselves more and more frequently as network organizations. As a result, researchers formally employed by the university but working at associated organizations may not always mention the university in publications. On the other hand, as universities become increasingly aware of the significance of their visibility in research publications, they actively exert pressure on researchers to mention their affiliation with the university in their publications.
In the Leiden Ranking 2014, publications from affiliated institutions of the first two types are considered output of the university. A different procedure has been followed for publications from associated organizations. A distinction is made between publications from associated organizations that also mention the university and publications from associated organizations that do not contain such a university affiliation. In the latter case, publications are not counted as publications originating from the university. In the event that a publication contains affiliations from a particular university as well as affiliations from its associated organization(s), both types of affiliation are credited to that university's contribution to the publication in the fractional counting method.
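The crediting rules above can be sketched in a few lines of code. This is a minimal illustration, not the actual CWTS system: the institution kinds and the data layout are our own assumptions for the sake of the example.

```python
def credited_count(affiliations, university):
    """Number of a publication's affiliations credited to `university`.

    Each affiliation is a (parent_university, kind) pair, where kind is one of
    "university" (the university's own address), "component", "joint", or
    "associated"; parent_university is None for unrelated addresses.
    (Illustrative data model, not CWTS's actual one.)
    """
    # Components and joint facilities always count as the university itself.
    direct = sum(1 for parent, kind in affiliations
                 if parent == university
                 and kind in ("university", "component", "joint"))
    associated = sum(1 for parent, kind in affiliations
                     if parent == university and kind == "associated")
    # Affiliations of associated organizations are credited only when the
    # publication also mentions the university (or a component/joint facility).
    return direct + (associated if direct > 0 else 0)
```

For example, a publication listing Harvard University together with Massachusetts General Hospital credits both addresses to Harvard, whereas the same hospital address without a Harvard affiliation credits nothing.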
The 750 universities that appear in the Leiden Ranking have been selected based on their contribution to articles and review articles published in international scientific journals in the period 2009–2012. The contribution of a university to an article is calculated based on the number of affiliations mentioned in the article. If an article mentions three different affiliations, of which two belong to a particular university, then the contribution of that university to the article is counted as two thirds. Only publications in core journals are included. A university had to produce the equivalent of more than 1,000 papers to be ranked among the 750 universities with the largest scientific output.
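The fractional counting arithmetic described above can be made concrete in a short sketch. The function names are ours, not CWTS terminology, and affiliations are represented simply as strings for illustration.

```python
def fractional_contribution(affiliations, university):
    """Share of an article credited to `university`: the number of its
    affiliations divided by the total number of affiliations listed."""
    if not affiliations:
        return 0.0
    return sum(1 for a in affiliations if a == university) / len(affiliations)

def total_output(publications, university):
    """A university's fractionally counted output: the sum of its
    fractional contributions over all publications."""
    return sum(fractional_contribution(p, university) for p in publications)

# The example from the text: an article with three affiliations,
# two of which belong to one university, yields a contribution of 2/3.
article = ["Leiden University", "Leiden University", "Utrecht University"]
```

Summing these fractions over all of a university's publications gives the "equivalent number of papers" used for the selection threshold.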
It is important to highlight that the assignment of publications to universities is not free of errors. There are generally two types of errors: 'false positives', which are publications that have been assigned to a university when they do not in fact belong to that university, and 'false negatives', which are publications that should have been assigned to a university but were not. Considerably more false negatives than false positives should be expected, especially since the 5% least frequently occurring addresses in the database may not have been manually checked. This 5% can be considered a reasonable upper bound on the number of such errors, since the majority of these addresses are probably non-university addresses.