Thing 19: Altmetrics

Citations, as I tell students in academic skills workshops, are a way to acknowledge where their information comes from. Any direct quotes, paraphrases or summaries, theories or arguments (or even viewpoints), statistics, case studies, diagrams, and so on that they did not create themselves must be followed by a reference to the source of that work. In these sessions we talk about how citation is about acknowledgement: about contributing to the academic community by enabling others to discover the sources of their ideas, and about thanking the people who came up with those ideas in the first place. Because I work with arts and humanities students, we don’t talk about citation metrics at any point, either in conversations about referencing their sources or when we discuss how to decide what scholarship to cite in the first place.

It wasn’t until I started to become more interested in research librarianship and went to the brilliant conference ‘Libraries on the Move’ as an Erasmus+ exchange participant in 2019 (an experience I wrote about here and here) that I first heard terms such as “citation metrics”, “responsible metrics”, or “altmetrics”. This post will focus on the last of these, the subject of Thing 19, but in order to understand what altmetrics are, we need to understand the wider context of how research impact is assessed.

Citation Metrics

How does a scholar know what impact their work has had? One way to measure it is by counting citations. Citations can be counted at the level of an individual article, a particular journal, or a particular author. But simply counting the number of times a work is cited is insufficient: a scientific paper in a major international journal is likely to get many, many more citations than a paper on place-names published in the journal of a local history society. Similar problems appear when one tries to make a one-to-one comparison of the citation counts of journals or authors without taking into account disciplinary differences.

One attempt to provide a citation metric that can be compared across different journals is the impact factor (IF), which is produced by a company called Clarivate. A journal’s IF for a given year is the number of citations received that year by the articles it published in the previous two years, divided by the number of articles it published in those two years. A journal with a higher impact factor is usually seen as more prestigious, though criticism of the IF has focused on the ways it can be manipulated by editors and authors looking to move up the rankings; critics also point out that it does not take into account differences between disciplines and subfields, which can affect where, when, and what work is published.
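
To make the arithmetic concrete, here is a minimal sketch in Python. The figures are invented purely for illustration and are not taken from Clarivate’s data.

    # Hedged sketch: computing an impact factor from made-up figures.
    # A journal's 2023 IF is the number of 2023 citations to the items it
    # published in 2021-2022, divided by the number of citable items it
    # published in 2021-2022. All numbers below are hypothetical.

    citations_in_2023_to_2021_2022_items = 450   # hypothetical
    citable_items_published_2021_2022 = 150      # hypothetical

    impact_factor = (citations_in_2023_to_2021_2022_items
                     / citable_items_published_2021_2022)
    print(f"Impact factor: {impact_factor:.1f}")  # -> Impact factor: 3.0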

Indeed, because of the importance of forms of publication which don’t take place in journals, such as books and book chapters, and the fact that research can remain relevant for long periods of time (or be ‘rediscovered’ after a period of minimal citation), Clarivate doesn’t include arts and humanities journals in its Journal Citation Reports.

A sampling of journal rankings by impact factor in my fields, history (here) and classics (here), is a nice demonstration of why impact factors aren’t particularly useful in these fields: the data they collect just isn’t meaningful in the context of these disciplines. A qualitative approach seems to be more useful, particularly when deciding where to publish. This fantastic guide to evaluating peer-reviewed journals in the humanities and social sciences, made by Princeton University graduate students, provides an excellent example of how researchers in these fields might evaluate journals. The Humanities Journals Wiki provides a crowdsourced assessment of the experience of publishing in particular journals, and seems worth the attention of new researchers or those looking to publish in fields that are new to them.

The impact factor was originally designed to help librarians get the best value for money from their journal subscriptions, so information and training about citations and metrics are often part of an academic librarian’s role, especially for those who work in research-intensive institutions. As information professionals who understand the limitations of citation metrics, librarians often also attempt to educate members of the academic community about different ways to measure the impact of their work. Library guides, such as this one by Christina Miskey at the University of Nevada Las Vegas, provide a good place to begin learning about the subject; recognising the differences between disciplines, librarians often write their guides for researchers in specific fields.

Even though IF and citation metrics don’t tell an accurate story about the importance of humanities research, it is vitally important that researchers in these fields have a basic level of literacy in them. Even if metrics aren’t widely used in our fields, they are used in others. To cite a notorious recent example, managers at the University of Liverpool used citation metrics as one factor in deciding which staff in the department of health and life sciences to make redundant.

Enter Altmetrics

Recognition of the abuse and limitations of citation metrics has led scholars to search for other ways to assess the impact of research. A method that has gained popularity in recent years is altmetrics. If you’ve seen the logo below, you’ve been looking at the altmetrics for a particular piece of research.

[Image: the word ‘Altmetric’ next to a rainbow circle]

“File:Altmetric rgb.png” by Altmetric is licensed under CC BY 4.0

Altmetrics aim to provide a more holistic assessment of the impact an article has had. Unlike citation metrics, altmetrics take into account social media activity and news stories when assessing the impact of an article. A work is then given a weighted ‘attention score’ which reflects the number of times the article is mentioned, the types of places where it is mentioned, and the sources of those mentions. This score is meant to provide information about the amount of engagement a particular research output has generated, not to provide a way to judge quality. Which makes sense if you think about it: a paper that provokes a strong negative response might have a high attention score, but that doesn’t make it a good paper.
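
As a rough sketch of what a weighted score of this kind looks like, here is a minimal Python example. The source types, counts, and weights below are invented for illustration; they are not Altmetric’s actual (proprietary) weighting, which also takes into account who is doing the mentioning.

    # Hedged sketch of a weighted 'attention score': count mentions by
    # source type and weight them by how much each source is assumed to
    # matter. All counts and weights are invented for illustration.

    mentions = {"news": 2, "blog": 3, "twitter": 40, "policy_document": 1}  # hypothetical counts
    weights  = {"news": 8, "blog": 5, "twitter": 1, "policy_document": 3}   # illustrative weights only

    attention_score = sum(weights[source] * count
                          for source, count in mentions.items())
    print(attention_score)  # -> 74 with the made-up numbers above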

Researchers can use altmetrics to answer questions about the use of their work that traditional citation metrics can’t, such as where readers of their work are located and what impact their work is having in wider society. Because altmetrics don’t rely on multiple years’ worth of data, but change and develop with audience engagement, they are also available much more quickly than traditional citation metrics.

To begin exploring altmetrics, researchers can install the Altmetric bookmarklet tool in a web browser. Library guides to altmetrics, such as this one by Aimee Sgourakis and this one by Robin Chin Roemer and Rachel Borchardt, are a valuable tool for getting to grips with the terminology altmetrics use and with what they can and can’t tell us. Altmetric, the leading company in the field, has produced a number of how-to resources focused on libraries, including Altmetrics for Librarians (here), Altmetrics for Scholarly Communications Librarians (here) and a list of ways librarians can support altmetrics (here); these also offer researchers a good overview of what altmetrics can and can’t do.

As Nick Scott points out, altmetrics are not a new and improved substitute for IF or other citation metrics. What they do well is provide visibility for non-traditional forms of research output, but there are a number of areas they don’t fully cover, such as citations in the media and activities that social media tracking can’t follow (like bookmarking or emailing papers). Their assessment of social media engagement can be seen as both a strength and a weakness: on the one hand, they provide evidence of impact and the possibility of collecting and analysing data that was previously difficult to access (such as the extent to which research makes an impact outside its home discipline); on the other, they lack standardisation and can potentially be manipulated.

Once again, it seems like there isn’t a one-size-fits-all solution for assessing the impact of research.

Reflections

Impact factor and altmetrics are here to stay as part of the research landscape, and each adds something different to it. Altmetrics provide evidence of engagement via social media and news stories, while the impact factor provides concise data about the reach of a piece of research within academia. Despite all of the careful writing and educating that has been done about the importance of taking these scores in context, it seems all too easy for those outside a particular field or subfield to take the numbers at face value and ignore the need for a discipline-specific approach. It is clear that neither of them tells a complete story on its own.

It is a story that librarians and academics need to be talking about. If we don’t tell our own stories by contextualising metrics within our disciplines, others will do it for us.
