Indicators

The CWTS Leiden Ranking 2015 is based on publications in Thomson Reuters’ Web of Science database (Science Citation Index Expanded, Social Sciences Citation Index, and Arts & Humanities Citation Index) in the period 2010–2013. Book publications, publications in conference proceedings, and publications in journals not indexed in Web of Science are not included. Within Web of Science, only so-called core publications are included, which are publications in international scientific journals. In addition, only publications of the Web of Science document types article and review are considered.

Size-dependent vs. size-independent indicators

Except for the publication output indicator P, all indicators included in the Leiden Ranking have two variants: a size-dependent and a size-independent variant. Size-dependent indicators are obtained by counting the absolute number of publications of a university that have a certain property, while size-independent indicators are obtained by calculating the proportion of a university’s publications with that property. For instance, the number of highly cited publications of a university and the number of publications of a university co-authored with other organizations are size-dependent indicators. The proportion of a university’s publications that are highly cited and the proportion of its publications co-authored with other organizations are size-independent indicators. In the case of size-dependent indicators, universities with a larger publication output tend to perform better than universities with a smaller publication output. Size-independent indicators correct for the size of a university’s publication output, so when they are used, both larger and smaller universities may perform well.
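
To make the distinction concrete, here is a minimal sketch in Python with hypothetical data (not CWTS code). Each publication is reduced to a flag indicating whether it has the property of interest; the size-dependent variant is then a count and the size-independent variant a proportion:

    # Hypothetical data: for each university, one boolean flag per
    # publication, indicating whether it has the property of interest
    # (e.g., being highly cited).
    universities = {
        "Large University": [True, False, False, True, False, False, False, False],
        "Small University": [True, False, False, False],
    }

    for name, flags in universities.items():
        p = len(flags)        # publication output P
        count = sum(flags)    # size-dependent variant: absolute count
        prop = count / p      # size-independent variant: proportion
        print(f"{name}: P = {p}, count = {count}, proportion = {prop:.1%}")

Both hypothetical universities score 25% on the size-independent variant, but the larger one scores twice as high on the size-dependent variant, which is exactly the size effect described above.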

Impact indicators

The Leiden Ranking offers the following indicators of the scientific impact of a university:

  • P(top 1%) and PP(top 1%). The number and the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 1% most frequently cited.
  • P(top 10%) and PP(top 10%). The number and the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited.
  • P(top 50%) and PP(top 50%). The number and the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 50% most frequently cited.
  • TCS and MCS. The total and the average number of citations of the publications of a university.
  • TNCS and MNCS. The total and the average number of citations of the publications of a university, normalized for field and publication year. An MNCS value of two, for instance, means that the publications of a university have on average been cited twice as frequently as the average publication in their field and publication year.

Citations are counted until the end of 2014 in the calculation of the above indicators. Author self citations are excluded. All indicators except for TCS and MCS are normalized for differences in citation practices between scientific fields. For the purpose of this field normalization, about 4000 fields are distinguished. These fields are defined at the level of individual publications. Using a computer algorithm, each publication in Web of Science is assigned to a field based on its citation relations with other publications. By default, the Leiden Ranking ranks universities based on either the size-dependent P(top 10%) indicator or the size-independent PP(top 10%) indicator.
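
As an illustration of how such field normalization works, the following Python sketch computes TNCS and MNCS from hypothetical publication records. The actual CWTS computation uses the roughly 4000 algorithmically defined fields and differs in detail:

    from statistics import mean

    # Hypothetical records: (field, year, citations). In the actual ranking,
    # fields are the ~4000 algorithmically defined publication clusters.
    world = [
        ("f1", 2010, 0), ("f1", 2010, 2), ("f1", 2010, 5), ("f1", 2010, 41),
        ("f2", 2010, 10), ("f2", 2010, 30), ("f2", 2010, 50), ("f2", 2010, 110),
    ]

    # Normalization baseline: mean citation count per (field, year).
    groups = {}
    for field, year, cites in world:
        groups.setdefault((field, year), []).append(cites)
    baseline = {key: mean(vals) for key, vals in groups.items()}

    # Publications of a hypothetical university, drawn from the same world.
    univ = [("f1", 2010, 41), ("f2", 2010, 10)]

    normalized = [cites / baseline[(field, year)] for field, year, cites in univ]
    tncs = sum(normalized)   # total normalized citation score
    mncs = mean(normalized)  # mean normalized citation score
    print(f"TNCS = {tncs:.2f}, MNCS = {mncs:.2f}")

An MNCS of about 1.8 in this example means that the university’s publications are cited, on average, 1.8 times as frequently as publications in the same field and year.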

Note that the TCS, MCS, TNCS, and MNCS indicators are not available on the main ranking page. To view these indicators, click on the name of a university; this opens an overview of all bibliometric statistics for that university, including the TCS, MCS, TNCS, and MNCS indicators.

Collaboration indicators

The following indicators of scientific collaboration are provided in the Leiden Ranking:

  • P(collab) and PP(collab). The number and the proportion of a university’s publications that have been co-authored with one or more other organizations.
  • P(int collab) and PP(int collab). The number and the proportion of a university’s publications that have been co-authored by organizations located in two or more different countries.
  • P(industry) and PP(industry). The number and the proportion of a university’s publications that have been co-authored with one or more industrial partners. For more details, see University-Industry Research Connections 2014.
  • P(<100 km) and PP(<100 km). The number and the proportion of a university’s publications with a geographical collaboration distance of less than 100 km, where the geographical collaboration distance of a publication equals the largest geographical distance between two addresses mentioned in the publication’s address list (see the sketch after this list).
  • P(>5000 km) and PP(>5000 km). The number and the proportion of a university’s publications with a geographical collaboration distance of more than 5000 km.
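
As a minimal sketch of the distance computation (assuming geocoded addresses; the coordinates below are hypothetical and this is not CWTS code), the geographical collaboration distance of a publication is the maximum over all pairs of its addresses of the great-circle distance between them:

    from itertools import combinations
    from math import asin, cos, radians, sin, sqrt

    def haversine_km(a, b):
        """Great-circle distance in km between two (lat, lon) points in degrees."""
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(h))  # mean Earth radius of ~6371 km

    def collaboration_distance_km(addresses):
        """Largest pairwise distance between the addresses of one publication."""
        if len(addresses) < 2:
            return 0.0
        return max(haversine_km(a, b) for a, b in combinations(addresses, 2))

    # Hypothetical address list: Leiden, Amsterdam, and Boston (lat, lon).
    addresses = [(52.16, 4.49), (52.37, 4.90), (42.36, -71.06)]
    d = collaboration_distance_km(addresses)
    print(f"{d:.0f} km, <100 km: {d < 100}, >5000 km: {d > 5000}")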

Core publications

The Leiden Ranking does not simply take into account all publications in Web of Science. Instead, the ranking is based on so-called core publications, which are a subset of all publications in Web of Science. Core publications are publications in international scientific journals in fields that are suitable for citation analysis.

In order to be classified as a core publication, a publication must satisfy the following criteria:

  • The publication has been written in English.
  • The publication has one or more authors. (Anonymous publications are not allowed.)
  • The publication has not been retracted.
  • The publication has appeared in a core journal.

The last criterion is a very important one. In the Leiden Ranking, a journal is considered a core journal if it meets the following conditions:

  • The journal has an international scope, as reflected by the countries in which researchers publishing in the journal and citing the journal are located.
  • The journal has a sufficiently large number of references to other core journals, indicating that the journal is in a field that is suitable for citation analysis. Many journals in the arts and humanities do not meet this condition. The same applies to trade journals and popular magazines.

In the calculation of the Leiden Ranking indicators, only core publications are included. Excluding non-core publications ensures that the Leiden Ranking is based on a relatively homogeneous set of publications, namely publications in international scientific journals in fields that are suitable for citation analysis, and it makes field-normalized impact indicators such as P(top 10%) and PP(top 10%) more accurate. It should be emphasized that non-core publications are excluded not because they are considered less important than core publications; non-core publications may have substantial scientific value. About one-sixth of the publications in Web of Science are classified as non-core and are therefore excluded.
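
Taken together, the four publication criteria above amount to a simple filter over publication records, with the core journal test reduced to a lookup against a given journal list. The following sketch encodes them in Python; the record fields and the set of core journals are hypothetical stand-ins for the actual Web of Science data and the CWTS core journal list:

    # Hypothetical stand-in for the CWTS list of core journals.
    core_journals = {"Scientometrics", "Journal of Informetrics"}

    def is_core_publication(pub):
        """Apply the four core-publication criteria listed above."""
        return (
            pub["language"] == "English"
            and len(pub["authors"]) >= 1      # anonymous publications excluded
            and not pub["retracted"]
            and pub["journal"] in core_journals
        )

    pub = {"language": "English", "authors": ["A. Author"],
           "retracted": False, "journal": "Scientometrics"}
    print(is_core_publication(pub))  # True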

A list of core and non-core journals is available in this Excel file.

Counting method

The impact indicators in the Leiden Ranking can be calculated using either a full counting or a fractional counting method. The full counting method gives equal weight to all publications of a university. The fractional counting method gives less weight to collaborative publications than to non-collaborative ones. For instance, if the address list of a publication includes five addresses and two of these addresses belong to a particular university, the publication has a weight of 2 / 5 = 0.4 in the calculation of the impact indicators for this university. The fractional counting method leads to more accurate field normalization of impact indicators and therefore to fairer comparisons between universities active in different fields. For this reason, fractional counting is the preferred counting method for the impact indicators in the Leiden Ranking. Collaboration indicators are always calculated using the full counting method.
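
A minimal sketch of the two counting methods, following the worked example above with a hypothetical address list:

    # Address list of one publication; two of the five addresses belong
    # to the university of interest.
    addresses = ["Univ A", "Univ A", "Univ B", "Univ C", "Univ D"]
    university = "Univ A"

    # Full counting: the publication counts fully for every contributing
    # university. Fractional counting: it counts in proportion to the
    # university's share of the addresses.
    full_weight = 1.0 if university in addresses else 0.0
    fractional_weight = addresses.count(university) / len(addresses)

    print(full_weight)        # 1.0
    print(fractional_weight)  # 2 / 5 = 0.4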

Trend analysis

To facilitate trend analyses, the Leiden Ranking provides statistics not only based on publications from the period 2010–2013, but also based on publications from four earlier periods: 2006–2009, 2007–2010, 2008–2011, and 2009–2012. The statistics for the different periods are calculated in a fully consistent way. For each period, citations are counted until the end of the first year after the period has ended. For instance, in the case of the period 2006–2009 citations are counted until the end of 2010, while in the case of the period 2010–2013 citations are counted until the end of 2014.
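
The citation window rule is simple enough to state in a few lines of code:

    # Citations for each publication period are counted until the end of
    # the first year after the period ends.
    periods = [(2006, 2009), (2007, 2010), (2008, 2011), (2009, 2012), (2010, 2013)]
    for start, end in periods:
        print(f"{start}-{end}: citations counted until the end of {end + 1}")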

Stability intervals

Stability intervals aim to provide some insight into the uncertainty in bibliometric statistics. A stability interval indicates a range of values of an indicator that are likely to be observed when the underlying set of publications changes. For instance, the PP(top 10%) indicator may be equal to 15.3% for a particular university, with a stability interval ranging from 14.1% to 16.5%. This means that the PP(top 10%) indicator equals 15.3% for this university, but that changes in the set of publications of the university may relatively easily lead to PP(top 10%) values in the range from 14.1% to 16.5%. The Leiden Ranking employs 95% stability intervals constructed using a statistical technique known as bootstrapping.
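
As an illustration of the bootstrapping idea (a sketch under simplified assumptions, not the exact CWTS procedure), each publication is reduced to a flag indicating top 10% membership, the publication set is resampled with replacement many times, and the 2.5th and 97.5th percentiles of the resulting PP(top 10%) values form the 95% stability interval:

    import random

    def pp_top10(flags):
        """Proportion of publications flagged as belonging to the top 10%."""
        return sum(flags) / len(flags)

    def stability_interval(flags, n_samples=1000, seed=42):
        """Approximate 95% bootstrap interval for PP(top 10%)."""
        rng = random.Random(seed)
        values = sorted(
            pp_top10([rng.choice(flags) for _ in flags])
            for _ in range(n_samples)
        )
        # 2.5th and 97.5th percentiles of the bootstrap distribution.
        return values[int(0.025 * n_samples)], values[int(0.975 * n_samples)]

    # Hypothetical university: 400 publications, 61 of them in the top 10%.
    flags = [True] * 61 + [False] * 339
    low, high = stability_interval(flags)
    print(f"PP(top 10%) = {pp_top10(flags):.1%}, interval = [{low:.1%}, {high:.1%}]")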

More information

More information on the Leiden Ranking methodology can be found in a number of publications by CWTS researchers. A detailed discussion of the Leiden Ranking is presented by Waltman et al. (2012). This publication relates to the 2011/2012 edition of the Leiden Ranking; although no longer entirely up to date, it still provides much relevant information on the Leiden Ranking. The algorithmic approach taken in the Leiden Ranking to define scientific fields is described in detail by Waltman and Van Eck (2012). Field normalization of impact indicators based on algorithmically defined fields is studied by Ruiz-Castillo and Waltman (2015). The methodology adopted in the Leiden Ranking for identifying core publications and core journals is outlined by Waltman and Van Eck (2013a, 2013b). Finally, the importance of using fractional rather than full counting in the calculation of field-normalized impact indicators is explained by Waltman and Van Eck (2015).

  • Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E.C.M., Tijssen, R.J.W., Van Eck, N.J., Van Leeuwen, T.N., Van Raan, A.F.J., Visser, M.S., & Wouters, P. (2012). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419–2432.
  • Waltman, L., & Van Eck, N.J. (2012). A new methodology for constructing a publication-level classification system of science. Journal of the American Society for Information Science and Technology, 63(12), 2378–2392.
  • Waltman, L., & Van Eck, N.J. (2013a). Source normalized indicators of citation impact: An overview of different approaches and an empirical comparison. Scientometrics, 96(3), 699–716.
  • Waltman, L., & Van Eck, N.J. (2013b). A systematic empirical comparison of different approaches for normalizing citation impact indicators. Journal of Informetrics, 7(4), 833–849.
  • Ruiz-Castillo, J., & Waltman, L. (2015). Field-normalized citation impact indicators using algorithmically constructed classification systems of science. Journal of Informetrics, 9(1), 102–117.
  • Waltman, L., & Van Eck, N.J. (2015). Field-normalized citation impact indicators and the choice of an appropriate counting method. Preprint.