This is part II of my personal OAI8 summary. This part concerns scholarly evaluation.
At OAI8 several talks were about or touched upon how scholarly research is evaluated. Scholarly communication is evaluated in two ways: first, by the peer review system, and second, by metrics and altmetrics. Whereas the peer review system normally intervenes before publication and decides what should be communicated and what should not, metrics and altmetrics are tools used to evaluate content that is already available (communicated).
Thus the oldest and simplest metric, the impact factor, uses a simple formula based on citation data to compare journals' impact on the scholarly community. This is supposed to quantify a journal's scientific reputation. As Johan Bollen, a computer scientist from Indiana University, rightly said in his overview, the scientific community already knows the reputation of journals and does not need metrics to establish it. As an indicator of reputation, metrics serve outsiders such as funding institutions and university administrations.
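The "simple formula" is just a ratio: citations received in a given year to a journal's articles from the two preceding years, divided by the number of citable items the journal published in those two years. A minimal sketch (the journal and its citation counts below are invented for illustration):

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Classic two-year impact factor: citations received this year
    to items from the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 480 citations in 2013 to its 2011-2012
# articles, of which there were 240 citable items.
print(impact_factor(480, 240))  # → 2.0
```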
According to Bollen, metrics thus seek to operationalize social reputation. Social reputation is the foundation of the gift economy of scientific output (instead of being traded for other goods, scientific work is given to the community in exchange for an increase in reputation). Seen this way, the impact factor does not deserve the status it has today, since there are many different possible ways to operationalize reputation: other citation metrics, surveys, behavioral data such as usage statistics (e.g. downloads), and finally "attention" data (e.g. tweets mentioning an article). Much more complex, combined operationalizations of reputation are thus now available. The term 'altmetrics' designates metrics based on behavioral data mixed with "attention" data.
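To make "combined operationalizations" concrete, here is a toy sketch that mixes a citation signal, a behavioral signal (downloads), and an attention signal (tweets) into one score. The signals and weights are purely my own hypothetical illustration, not the formula of any real altmetrics provider:

```python
def combined_score(citations: int, downloads: int, tweets: int,
                   w_cite: float = 1.0, w_use: float = 0.1,
                   w_attn: float = 0.05) -> float:
    """Weight each operationalization of reputation and sum.
    The weights here are arbitrary placeholders."""
    return w_cite * citations + w_use * downloads + w_attn * tweets

# Hypothetical article: 30 citations, 500 downloads, 40 tweets.
print(combined_score(citations=30, downloads=500, tweets=40))  # → 82.0
```

The point is not the particular weights but the structure: once several data sources are available, reputation can be operationalized in many competing ways.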
Bollen closed his talk with an interesting thought:
Scientific impact, as reputation inside the scientific community, is known to the community (of course). Administrators, however, need to use metrics to make funding decisions. They rely mainly on the impact factor, even though many alternative measures have an equal claim to capture scientific impact. In general, it is not clear to decision-makers how well any of these metrics captures true scientific impact, i.e. reputation. Bollen's vision, then, is that scientists, who know the scientific impact (reputation) of articles, journals, authors etc. even without metrics, should have decision-making autonomy. This would eliminate the need for metrics. According to Bollen, this would be better than administrators relying more or less arbitrarily on metrics.
At OAI8 opinions on the success of the peer review system varied widely, from those thinking that it is a complete failure, to those taking it to be the only viable tool to guarantee the quality of scientific output.
The talk by Jelte Wicherts from Tilburg University was a plea for transparency about the peer-review process. On the one hand, he investigated transparency criteria in open-access and traditional journals. He found that transparency about the review process corresponds quite well with a journal's perceived quality in the community: the better the journal's reputation, the more likely it is to disclose more and better information about its review process. Importantly, the transparency test very clearly distinguishes serious open-access journals from predatory OA journals (journals with low selection standards that publish almost anything in order to generate revenue).
On the other hand, he pleaded for even more transparency. In particular, he presented a model (already applied in some places, I believe) in which reviews are published alongside the published paper, signed by the reviewer. Similarly, lists of rejected papers (author, title) are published alongside the signed reviews that led to the decision. Reviews can be rated by the community, and these ratings can feed into reviewer profiles, which can become part of a reviewer's CV. Experiments conducted with such a process have led to reviews of higher quality (independently assessed by experts, of course), which tend to be shorter and more careful.
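The workflow described here (signed reviews, community ratings, reviewer profiles) can be sketched as a small data model. All names and fields below are my own invention, not taken from any existing reviewing platform:

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class SignedReview:
    """A published, signed review with community ratings (e.g. 1-5)."""
    reviewer: str
    paper_title: str
    text: str
    ratings: list = field(default_factory=list)


def reviewer_profile(reviews: list) -> dict:
    """Aggregate a reviewer's community ratings into a CV-ready summary."""
    all_ratings = [rating for r in reviews for rating in r.ratings]
    return {
        "reviews_written": len(reviews),
        "mean_rating": mean(all_ratings) if all_ratings else None,
    }


revs = [
    SignedReview("A. Reviewer", "Paper X", "Clear methods, minor issues.", [4, 5]),
    SignedReview("A. Reviewer", "Paper Y", "Statistics need work.", [3]),
]
print(reviewer_profile(revs))  # → {'reviews_written': 2, 'mean_rating': 4}
```

The interesting design choice is the feedback loop: because ratings accumulate on a named profile, reviewers have an incentive to write careful reviews, which matches the quality improvements reported above.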
Fully electronic publishing opens up new technical possibilities for the pre-publication process: no limitations due to the cost of printing on paper, powerful search tools across texts, and versioning of texts and reviews. This should lead the community to rethink these processes and test novelties such as full transparency after review, or participation of a wider community of experts at some stage of the process (e.g. commenting on and discussing reviews). Pre-publication evaluation could be about more than access to the journal's reputation: improving the published papers, making the reviewers' efforts visible to the community, and giving access to the publisher's work on metadata, machine-readability, etc. in order to generate visibility and usability on the internet. New open-access journals should try to promote these values and qualities, since they often do not have the reputation of the traditional journals.
Of course, at the moment, at least in traditional journals in the Humanities, pre-publication selection is all about access to the journal's reputation, which is then formalized on the CV. But hopefully other criteria, such as the transparency of the reviewing process, are becoming more important (alongside such things as open-access policy).