Indicators

The CWTS Leiden Ranking 2014 is based on publications in Thomson Reuters' Web of Science database (Science Citation Index Expanded, Social Sciences Citation Index, and Arts & Humanities Citation Index) in the period 2009–2012. Book publications, publications in conference proceedings, and publications in journals not indexed in the Web of Science database are not included. Within the Web of Science database, only publications in international scientific journals are included. In addition, only publications of the Web of Science document types article and review are considered.
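As a concrete illustration, this selection step amounts to a simple filter over publication records. The sketch below is in Python; the field names 'year' and 'doc_type' are illustrative and do not reflect the actual Web of Science schema.

```python
# A minimal sketch of the publication selection step. The record fields
# 'year' and 'doc_type' are illustrative, not the actual Web of Science schema.

def select_publications(records):
    """Keep only 2009-2012 publications of document type article or review."""
    return [
        r for r in records
        if 2009 <= r["year"] <= 2012
        and r["doc_type"] in ("Article", "Review")
    ]
```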

Impact indicators

The Leiden Ranking offers the following indicators of the scientific impact of a university:

  • MCS (mean citation score). The average number of citations of the publications of a university.
  • MNCS (mean normalized citation score). The average number of citations of the publications of a university, normalized for field differences and publication year. An MNCS value of 2, for instance, means that the publications of a university have on average been cited twice the world average.
  • PP(top 10%) (proportion of top 10% publications). The proportion of the publications of a university that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited.

Citations are counted until the end of 2013 in the above indicators. Author self-citations are excluded. Both the MNCS indicator and the PP(top 10%) indicator correct for differences in citation practices between scientific fields. In total, 828 fields are distinguished. These fields are defined at the level of individual publications: using a computer algorithm, each publication in the Web of Science database has been assigned to a field based on its citation relations with other publications. Because the PP(top 10%) indicator is more stable than the MNCS indicator, the PP(top 10%) indicator is regarded as the most important impact indicator of the Leiden Ranking.
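To make these definitions concrete, the following minimal sketch computes all three indicators. It assumes each publication comes with its citation count (self-citations already removed), the world-average citation count of its field and publication year, and a flag marking top 10% membership; these inputs are taken as given, and all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Pub:
    citations: int          # citations until the end of 2013, self-citations excluded
    field_year_mean: float  # world-average citation count in the same field and year
    is_top10: bool          # among the top 10% most cited in its field and year

def mcs(pubs):
    """MCS: average number of citations per publication."""
    return sum(p.citations for p in pubs) / len(pubs)

def mncs(pubs):
    """MNCS: average of citations normalized by the field/year world average,
    so a value of 2 means twice the world average."""
    return sum(p.citations / p.field_year_mean for p in pubs) / len(pubs)

def pp_top10(pubs):
    """PP(top 10%): proportion of publications in the top 10% of their field and year."""
    return sum(p.is_top10 for p in pubs) / len(pubs)
```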

Collaboration indicators

The following indicators of scientific collaboration are provided in the Leiden Ranking:

  • PP(collab) (proportion of interinstitutional collaborative publications). The proportion of the publications of a university that have been co-authored with one or more other organizations.
  • PP(int collab) (proportion of international collaborative publications). The proportion of the publications of a university that have been co-authored by researchers from two or more countries.
  • PP(UI collab) (proportion of collaborative publications with industry). The proportion of the publications of a university that have been co-authored with one or more industrial partners. For more details, see University-Industry Research Connections 2013.
  • PP(<100 km) (proportion of short distance collaborative publications). The proportion of the publications of a university with a geographical collaboration distance of less than 100 km, where the geographical collaboration distance of a publication equals the largest geographical distance between two addresses mentioned in the publication's address list (a sketch of this distance computation follows the list).
  • PP(>1000 km) (proportion of long distance collaborative publications). The proportion of the publications of a university with a geographical collaboration distance of more than 1000 km.
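The geographical collaboration distance can be sketched as follows, assuming each address has already been geocoded to (latitude, longitude) coordinates. The haversine great-circle distance is used here as a plausible choice, not necessarily the exact CWTS implementation.

```python
from itertools import combinations
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # mean Earth radius of about 6371 km

def collaboration_distance(addresses):
    """Largest pairwise distance between a publication's address coordinates;
    0 for a publication with a single address."""
    if len(addresses) < 2:
        return 0.0
    return max(haversine_km(a, b) for a, b in combinations(addresses, 2))

# PP(<100 km) and PP(>1000 km) follow by thresholding this distance at 100 and 1000 km.
```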

Core journals

A journal is considered a core journal if it meets the following two conditions:

  • The journal publishes in English and has an international scope, as reflected by the countries in which researchers publishing in the journal and citing the journal are located.
  • The journal has a sufficiently large number of references to other core journals in the Web of Science database, indicating that in terms of citation traffic the journal is well-connected to these other journals. Many journals in the humanities do not meet this condition. The same applies to trade journals and popular magazines.

In the calculation of the Leiden Ranking indicators, only publications in core journals are included. Excluding publications in non-core journals makes the MNCS and PP(top 10%) indicators significantly more accurate. About 16% of the publications in the Web of Science database are excluded because they have appeared in non-core journals. A list of core and non-core journals is available in this Excel file.

Size-dependent vs. size-independent indicators

The Leiden Ranking by default reports size-independent indicators. These indicators provide average statistics per publication, such as a university's average number of citations per publication. The advantage of size-independent indicators is that they enable comparisons between smaller and larger universities. As an alternative to size-independent indicators, the Leiden Ranking can also report size-dependent indicators, which provide overall statistics of the publications of a university. An example is the total (rather than the average) number of citations of the publications of a university. Size-dependent indicators are strongly influenced by the size of a university (i.e., a university's total publication output) and therefore tend to be less useful for comparison purposes.
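The difference is easy to see in code. Reusing the illustrative Pub records from the sketch above, the size-dependent total and the size-independent mean differ only in the final division, and only the mean allows a small university to be compared with a large one.

```python
def total_citations(pubs):
    """Size-dependent: the total grows with a university's publication output."""
    return sum(p.citations for p in pubs)

def mean_citations(pubs):
    """Size-independent: the average does not depend on output size."""
    return total_citations(pubs) / len(pubs)

# A small and a large university with identical citation rates:
small = [Pub(10, 5.0, False)] * 100      # 100 publications, 10 citations each
large = [Pub(10, 5.0, False)] * 10_000   # 10,000 publications, 10 citations each
assert mean_citations(small) == mean_citations(large)          # same average: 10.0
assert total_citations(large) == 100 * total_citations(small)  # driven by size alone
```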

Counting method

The impact indicators included in the Leiden Ranking can be calculated using either a full counting method or a fractional counting method. The full counting method gives equal weight to all publications of a university. The fractional counting method gives less weight to collaborative publications than to non-collaborative ones. For instance, if the address list of a publication contains five addresses and two of these addresses belong to a particular university, then the publication has a weight of 2 / 5 = 0.4 in the calculation of the indicators for this university. The fractional counting method leads to a more accurate field normalization of impact indicators and to fairer comparisons between universities active in different fields. Fractional counting is therefore regarded as the preferred counting method in the Leiden Ranking. Collaboration indicators are always calculated using the full counting method.
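The two counting methods can be sketched as per-publication weights, reproducing the 2-out-of-5 example above. The weighted MNCS below reuses the illustrative Pub records from the earlier sketch; this is a simplified rendering of the idea, not the exact CWTS implementation.

```python
def full_weight(n_univ_addresses, n_total_addresses):
    """Full counting: a publication with at least one address of the
    university counts fully, whatever its total number of addresses."""
    return 1.0 if n_univ_addresses > 0 else 0.0

def fractional_weight(n_univ_addresses, n_total_addresses):
    """Fractional counting: the weight is the university's share of addresses."""
    return n_univ_addresses / n_total_addresses

assert fractional_weight(2, 5) == 0.4  # the 2-out-of-5 example from the text

def weighted_mncs(pubs, weights):
    """MNCS computed with per-publication weights (full or fractional)."""
    return (sum(w * p.citations / p.field_year_mean for p, w in zip(pubs, weights))
            / sum(weights))
```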

Stability intervals

A stability interval indicates a range of values of an indicator that are likely to be observed when the underlying set of publications changes. For instance, the MNCS indicator may be equal to 1.50 for a particular university, with a stability interval from 1.40 to 1.65. This means that the observed value of the MNCS indicator equals 1.50 for this university, but that changes in the set of publications of the university may relatively easily lead to MNCS values in the range from 1.40 to 1.65. The Leiden Ranking employs 95% stability intervals constructed using a statistical technique known as bootstrapping.
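The following minimal sketch shows how such an interval can be obtained by bootstrapping, reusing the illustrative mncs function from the earlier sketch. The number of resamples and the percentile construction are common bootstrap choices, not necessarily the exact CWTS procedure.

```python
import random

def stability_interval(pubs, indicator, n_resamples=1000, seed=0):
    """Bootstrap a 95% stability interval: resample the publication set with
    replacement, recompute the indicator, and take the 2.5th and 97.5th
    percentiles of the resampled values."""
    rng = random.Random(seed)
    values = sorted(
        indicator([rng.choice(pubs) for _ in range(len(pubs))])
        for _ in range(n_resamples)
    )
    return values[int(0.025 * n_resamples)], values[int(0.975 * n_resamples)]

# Usage: stability_interval(pubs, mncs) might return, e.g., (1.40, 1.65).
```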

More information

More information on the Leiden Ranking methodology can be found in a number of publications by CWTS researchers. An extensive discussion of the Leiden Ranking is offered by Waltman et al. (2012). This publication relates to the 2011/2012 edition of the Leiden Ranking. Although no longer entirely up to date, it still provides a lot of relevant information on the Leiden Ranking. The bottom-up approach taken in the Leiden Ranking to define scientific fields is described in detail by Waltman and Van Eck (2012). The methodology adopted in the Leiden Ranking for identifying core journals is outlined by Waltman and Van Eck (2013a, 2013b).

  • Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E.C.M., Tijssen, R.J.W., Van Eck, N.J., Van Leeuwen, T.N., Van Raan, A.F.J., Visser, M.S., & Wouters, P. (2012). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419-2432.
  • Waltman, L., & Van Eck, N.J. (2012). A new methodology for constructing a publication-level classification system of science. Journal of the American Society for Information Science and Technology, 63(12), 2378-2392.
  • Waltman, L., & Van Eck, N.J. (2013a). Source normalized indicators of citation impact: An overview of different approaches and an empirical comparison. Scientometrics, 96(3), 699-716.
  • Waltman, L., & Van Eck, N.J. (2013b). A systematic empirical comparison of different approaches for normalizing citation impact indicators. Journal of Informetrics, 7(4), 833-849.