What if we let social media rate research?

11/Mar/2016

With citation indexes being routinely questioned, ‘alternative metrics’ could gain ground as a new indicator of research success. But can they be trusted? By Roland Fischer

(From "Horizons" no. 108 March 2016)​​​

What were the most successful research publications of 2015? It’s a seemingly harmless question, but until recently no one would have asked it. Research has been measured painstakingly for decades, but the impact indicators used were almost always based on citations, which are notoriously slow to accumulate: it can take months or even years for the first citations to appear.

But at the turn of this year, reports were coming in everywhere about the ‘10 most talked-about science stories of 2015’ or the ‘Findings that caused a stir in the (social) media in 2015’. This didn’t happen by chance. The London-based company Altmetric had sent out a report at the end of the year that included a list of the top 100 publications. In these days of digital tidbits and ‘listicles’, this science hit-parade was a welcome arrival for those working in the media.

Ebola, bankers and 3D printing

The ranking of Swiss articles according to their Altmetric score for 2014/2015 reflects the general trend in popular topics. Biomedical fields are clearly dominant, and physics also snatched a top place. All the other disciplines are in the ‘also ran’ category. One exception confirms the rule: third place in the cumulative rankings went to ‘Business culture and dishonesty in the banking industry’ from Ernst Fehr’s research group at the University of Zurich – which is not surprising, given the current zeitgeist.

It’s striking that the impact factor still holds sway here: in the upper regions of the chart we primarily find articles from top journals such as Nature and Science. This isn’t surprising, because these journals do a lot of media work to promote themselves. What’s interesting are the differences between the channels, as shown by our detailed analysis (available online). On Facebook in particular, less prominent publications can still succeed. The highest-ranking paper there is "No scientific consensus on GMO safety" by the ETH Zurich researcher Angelika Hilbeck. It’s followed by papers on Ebola and early history, but also by tips on how to finish your PhD. It’s here that the discussion mixes amateurs and experts most, whereas on Twitter a nerd’s paper breaks the pattern: "Open labware: 3-D printing your own lab equipment".

Kathrin Altwegg scored a hit with a paper about the discovery of oxygen on the Chury comet. But she’d never heard of alternative metrics until she was told of her success. She laughed about it, but was happy for her work to be noticed beyond the confines of her own field. It certainly acts as an encouragement, she says. Just behind Altwegg in the rankings was Michel C. Milinkovitch from the University of Geneva with his paper on chameleons changing their colours. He’d known about altmetrics for longer, but he was still a little surprised to see how widely his article had been shared. "Of course I’m also happy if my results reach a broader public". But his main task, he insists, is to produce good science.

Method

The analysis considers articles published between July 2014 and June 2015 in which at least one Swiss institution participated.

Altmetric is currently the most successful provider of so-called alternative metrics, or ‘altmetrics’ for short, hence the company’s name. These metrics go a step beyond simply adding up citations by drawing on all sorts of other freely available indicators of an article’s success, above all on social networks: they count downloads, tweets, Facebook posts, blog entries and media reports.
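To make the counting concrete, here is a minimal Python sketch of such an attention score as a weighted sum of mentions per channel. The channels, weights and counts are invented for illustration; Altmetric’s real scoring scheme is more involved and is not documented here.

```python
# Minimal sketch of an attention score as a weighted sum of mentions.
# All channels, weights and counts are invented for illustration;
# this is not Altmetric's actual scheme.

# Hypothetical weights: a news report counts for more than a tweet,
# which in turn counts for more than a Facebook share.
WEIGHTS = {
    "news": 8.0,
    "blogs": 5.0,
    "twitter": 1.0,
    "facebook": 0.25,
}

def attention_score(mentions):
    """Sum each channel's mention count multiplied by its weight."""
    return sum(WEIGHTS.get(channel, 0.0) * count
               for channel, count in mentions.items())

# Example: a paper with two news reports, one blog post,
# 40 tweets and 12 Facebook shares.
paper = {"news": 2, "blogs": 1, "twitter": 40, "facebook": 12}
print(attention_score(paper))  # 2*8 + 1*5 + 40*1 + 12*0.25 = 64.0
```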

Altmetrics are supposed to measure the scientific and societal impact of a research publication more precisely and more comprehensively. They’re also said to register a publication’s success far sooner: if people tweet about a new research article, the discussion gets off the ground much faster than it would in a specialist journal.

A digital press review

But there’s more. While altmetrics might provide an alternative to established indicators such as the impact factor and the number of citations, they could also offer relief to the peer-review system that’s under so much strain these days. In a manifesto of 2010, the pioneers of the ‘alternative metrics’ movement, headed by information scientist Jason Priem, cheekily suggested that "With altmetrics, we can crowdsource peer-review". Their idea is that large numbers of amateurs – joined by a healthy number of professionals – could use clicks and shares in their social networks to decide whether a research article is of interest. If evaluated and analysed properly, these clicks could function along the lines of an implicit peer-review process.
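Purely as a thought experiment, a crude version of such an implicit review signal might look like the sketch below: shares are tallied separately for experts and amateurs and combined into one interest score. Every name, weight and number here is invented; the manifesto proposes no concrete algorithm.

```python
# Toy illustration of 'crowdsourced peer review' from share counts.
# Everything here is invented; the altmetrics manifesto specifies
# no concrete algorithm.

from dataclasses import dataclass

@dataclass
class Shares:
    experts: int   # shares by users identified as experts in the field
    amateurs: int  # shares by everyone else

def interest_score(s, expert_weight=5.0):
    """Weight an expert's share more heavily than an amateur's."""
    return expert_weight * s.experts + s.amateurs

# A paper shared by 12 experts and 300 amateurs:
print(interest_score(Shares(experts=12, amateurs=300)))  # 5*12 + 300 = 360.0
```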

This approach is interesting because there are indeed far more potential peers for a research publication than the select few who are currently asked for their opinion. If everyone could somehow be drawn into quality control, peer review would become more efficient and less prone to error. Scientists are in fact already very active on social media, and Twitter is especially popular among researchers. A survey in early 2015 among members of the American Association for the Advancement of Science (AAAS) showed that 47 percent of them use social media to keep up with research or research results.

Euan Adie, the boss of Altmetric, puts it like this: "Today, scientific publications are already being subjected to critical discussion in blogs and similar channels. A new system could establish itself here that is better able to identify substandard research". But he adds: "Altmetrics are a complement to citation analyses and peer reviews. At present I don’t see how they could actually replace the old methods".

Adie is more concerned with documenting the impact of research beyond the realm of science and scholarship: a kind of digital press review. "Our index measures how much attention a paper has received. It’s not an indicator of quality", says Adie. Stefanie Haustein, an information scientist at the University of Montreal who has been studying altmetrics intensively for several years, has reached the same conclusion: "At the moment, altmetrics aren’t measuring research quality at all". She even allows herself a little heresy: "To say social media equals social impact is simply not true". This challenges the notion that altmetrics can offer an elegant measurement of a paper’s social impact.

Competing measurements

So the basic question arises: what do altmetrics actually measure? Something that’s relevant, or merely the volume of whatever is readily available and can be counted by automated means? Thanks to the 2013 DORA initiative, which gave voice to the widespread mistrust of the classical impact factor as a quality indicator, the wheels have begun turning in research politics. The result, it seems, won’t be a turn away from quantitative evaluation, but a move towards more complex methods, such as altmetrics. A British report on the state of science evaluation in 2015 already spoke of a ‘metric tide’ in its title.

Deciding what criteria and methods should be used to evaluate the quality and impact of research will probably only get more complicated for the research community in future, and for those in charge of research policy too. Metric methods are never simply objective indicators: they’re also always policy levers, creating incentive systems that can subtly transform the research landscape. So should research really have to perform on social media channels? What about the many research results that may be of high quality, but are ill-suited to the hectic everyday churn of social media?

In a recent publication, Stefanie Haustein raised several questions about the relevance and robustness of these new evaluation methods. She showed that scientific publications have a rather modest presence on digital channels: 21.5 percent of papers get at least one tweet, fewer than five percent are shared on Facebook and only two percent are mentioned in blogs. Compare that with the 66.8 percent that are cited at least once in the traditional manner.

These new metric methods are still largely a black box. We need to examine them more closely to understand what they are actually measuring and how the old indicators relate to the new, especially when it comes to the million-dollar question of whether altmetrics should replace the old methods or complement them.

 

Roland Fischer is a science journalist in Bern.

For more extensive data

www.snf.ch/Ho_altmetrics