Formative peer review

Back in 2013 I wrote a long blog post on peer review, summarizing what I had heard at a few conferences on the subject of scholarly evaluation.

Since then, of course, the discussion has evolved and a few philosophers have weighed in. Revisions and alternatives to peer review have also surfaced as part of the Swiss National Science Foundation's Open Access strategy.

Last year a new philosophy journal, the Public Philosophy Journal (PPJ), launched with a new peer review process they call "formative peer review". This takes an interesting direction. (Links to the journal and all other references are below.)

In my post from 2013 I spent some time laying out the different kinds of data collected for evaluation metrics (citation data, behavioral data, and attention data, the latter two often referred to as "altmetrics"), as well as the new technical possibilities for peer review that come with the move to digital publication (open peer review, community participation at some stage of the review process, or post-publication review altogether).

I also referred to some critical opinions of the blind peer review system and of evaluation metrics. The starting point of the Public Philosophy Journal's editors is clearly a critical attitude towards both metrics and blind peer review, and even towards academic culture as a whole.

Let's start with metrics. As mentioned before, metrics, whether based on citation data, behavioral data (downloads, the time readers spend engaging with a text, etc.), or attention data (e.g. Twitter mentions), are a way to evaluate a work without engaging with it, without reading it. They are therefore used by funding agencies and non-peers. There is, of course, another use, which one might critically call narcissistic, or, in a lighter tone, one could note how it speaks to our "gaming instincts" (and also triggers gaming behavior). Some providers currently sell you this data at a premium price, which you pay in order to satisfy your narcissism or to play the game. But it is probably only a question of time until such metrics are used by funding bodies as well. Metrics are simply the technical cornerstone of the academic mirror of the current neoliberal attention economy. Still, metrics might indicate scientific value, but only if the popularity of a text or a researcher indicates scientific value rather than marketing success. (The publication in a scientific journal with the highest altmetrics score in 2016 was by then-President of the U.S. Barack Obama, on healthcare reform. So the metric measured the influence of the author more than the text's scientific relevance. This is, of course, an extreme example, taken from Martina Franzen; reference below.) To sum up the dangers with Christopher Long: "Scholarly metrics […] incentivize clickbait scholarship".

But let's now move to blind peer review. There have long been advocates of moving to open peer review and breaking the "veil of anonymity" (Claire Skea). And there have been implementations, especially in newer Open Access publications that did not already have a reputation to risk by moving to a new system of review.

There are some problems with anonymity, but also some advantages. Sometimes anonymity invites poor-quality, hasty reviews focused exclusively on criticism and evaluation. This is unsatisfying foremost for authors, but arguably also for reviewers, who might intrinsically prefer to go into detail and deliver constructive criticism; under the veil of anonymity, however, there is little incentive to invest much time in reviewing. On the positive side, anonymity can protect reviewers from retaliation and thus cancel out the effects of hierarchy and power in academia for the sake of scientific quality. Imagine a graduate student reviewing a paper by one of the main scholars in her PhD subject: negative criticism might harm her career prospects.

Instead of simply weighing pros and cons of blind vs. open peer review, the editors of the Public Philosophy Journal take a somewhat more holistic approach to the question. They don’t just want to find the most effective method. They want to champion a different academic culture, less based on competition and evaluation and more on collaborative relationships.

How does it work? In formative peer review, the author of a draft selects a peer reviewer to publicly engage with the draft. A more involved process then begins: another reviewer is selected, and they write two reviews, one private and one to be published with the paper. Along the way, both the paper and the published reviews are modified in response to mutual criticisms and replies. The intended direction is that reviewing and submitting become more interesting and more responsible intellectual activities. If I interpret the editors' phrase that "reviewers are asked to bring their best selves to the process" correctly, one of the ideas behind it is that anonymity brings out their worst selves. I am not sure about anonymity in itself; maybe hierarchy brings out our worst selves more than anonymity does. And in traditional blind peer review, reviewers and editors are given anonymous hierarchical power over authors. So to me, non-anonymous collaborative or "formative" review sounds like a very good idea.

Scientific evaluation is a complicated subject, not least because scientific quality is itself a complicated subject. It is not clear that competition and adversarial relations actually further scientific quality as much as the current institutions (of evaluation and quantification) imply. It is therefore very good to see some criticism of academic culture being put into practice in alternative evaluation (or collaboration) processes.

Franzen, M.; Joy, E.; Long, C. (2018). 'Humane Metrics/Metrics Noir'.

Long, C., (2017). ‘Practising public scholarship’, Public Philosophy Journal, Vol.1, No.1, pp.1-6.

Skea, C. (2018). ‘The veil of anonymity: the perils of peer review’, Philosophical Musings (blog).
