Is the era of the research paper as the centrepiece of scientific literature now drawing to a close? Jason Priem wants us to think so. In Beyond the Paper, he takes it for granted not only that other forms of online communication will increase in significance, but that they will edge the traditional research paper out from its dominant position:
“Today’s publication silos will be replaced by a set of decentralized, interoperable services that are built on a core infrastructure of open data.”
But there are problems. Most of the new online forms of communication provide no formal segregation of serious scientific information from the blogging, tweeting, liking, following, spamming and trolling of the unwashed masses. What’s a scientist to do? How to filter the signal from the noise? And another thing: academic scientists have traditionally relied upon lists of published research papers bearing their names as authors to indicate the scale of their research achievements. If the research paper is set to dissolve into a “constellation of data points”, such lists will become things of the past. How will the worth of a scientist then be demonstrated?
Priem’s solution to both problems is a system of “alternative metrics” (altmetrics) of scholarly influence that seeks to replace or amend the established standards of peer review, citation and “impact factors”. He tells his readers that “We now have a unique opportunity as scholars to guide the evolution of our tools in directions that honour our values and benefit our communities” and exhorts scientists to “resist the urge to cling to the trappings of scientific excellence rather than excellence itself”.
The distinction between excellence as a real thing and its mere trappings is an interesting one because, unless one is a self-funded gentleman scientist or the lucky beneficiary of generous personal patronage, it’s a distinction that’s rather hard to make. Excellence is whatever the paying customer says it is. If there’s a need to discuss the distinction between excellence and its trappings, there is perhaps some confusion about who the customer is or about what they are paying for.
Priem’s article focuses on scholarly communication and by implication treats science as an essentially scholarly or academic activity. This means it doesn’t apply so readily to those who work in industry. There, the measure of scientific achievement generally lies in coming up with a working solution to a specific technical problem. The customer (or employer) defines the problem, pays for the research and decides whether the solution is good enough. The role of the scientist could be said to lie in deciding how experience gained by other people in other contexts and situations could be applied to the problem under consideration. The scientific literature is the repository of that experience.

When large parts of the literature come from people one does not know, one needs to filter it not only for relevance but also for trustworthiness. This highlights one fundamental function of the institution of science: the building and maintenance of networks of trust. As in other fields of human activity, such networks are built through mutually trusted third parties who can certify or vouch for one party to another to whom that party was not previously known. No doubt the advocates of altmetrics (and of the ‘old’ metrics too) would claim that this is part of what they aim to achieve; scientific journals with strong brands, for instance, effectively act as mutually trusted third parties. However, the intended function of “metrics” is rarely, if ever, described that way. Rather, the driving concern appears to lie in deciding what is worthy of attention (because it’s what everyone else is paying attention to) and who is getting the most of it.
Is it not time that science metrics shifted their focus to problem solving: who has a good track record of solving problems, and what information in the literature can be regarded as trustworthy because successful problem solvers have relied upon it?