Shakespeare Quarterly "Open Peer Review" Experiment

Some thoughts on 'open peer review', after the New York Times reports on a Shakespeare Quarterly experiment in opening up their peer review process.

A Whyte | 26 August 2010

"What's in a name? That which we call a rose

By any other name would smell as sweet."

(Romeo and Juliet II, ii, 1-2)

Some might ask whether the Shakespeare Quarterly experiment in opening up the peer review process carries the sweet smell of openness far enough. A New York Times report of 24th August has Katherine Rowe, a Renaissance specialist and media historian who edited the journal's experimental issue, commenting that it is the first traditional humanities journal to open its reviewing to the World Wide Web. The NYT goes on to say:

"Mixing traditional and new methods, the journal posted online four essays not yet accepted for publication, and a core group of experts — what Ms. Rowe called “our crowd sourcing” — were invited to post their signed comments on the Web site MediaCommons, a scholarly digital network. Others could add their thoughts as well, after registering with their own names. In the end 41 people made more than 350 comments, many of which elicited responses from the authors. The revised essays were then reviewed by the quarterly’s editors, who made the final decision to include them in the printed journal, due out Sept. 17."

The journal articles were open for comment by anyone registered on MediaCommons, though I'm not clear whether the 41 commenters include the invited 'core group of experts'. The more general question is: how open is "Open Peer Review"? By extension from open data, open source software and open access publishing, one might think 'open to anyone'. Actually, as the Wikipedia article on the subject makes clear, 'peer' is the key word here. Journals have mostly sought to widen participation in refereeing to accredited members of their community, with occasional forays to collect wider public comments; these lack obvious success stories, though that itself raises the question of what success means (e.g. Nature's 2006 experiment).

One pioneer here is the Journal of Interactive Media in Education (JIME), an open peer review practitioner since 1996. In JIME's current model, reviewing begins with a 'private' phase conducted by reviewers who are editor-selected but publicly identified. The editor then decides on acceptance of the 'pre-print', opens it to public comment, moderates those comments, and may demand further changes prior to 'publication' alongside the moderated comments.

The issue in 'open peer review' appears to be less about what is made publicly accessible than about when, and who can participate in decisions on what happens to it next. So, for example, the arXiv model for open access to e-prints in physics and mathematics disavows 'peer review' in favour of 'moderation', although the role of moderators is only to categorise rather than to weed out, as nothing is deleted provided the author meets criteria for 'endorsement'. By contrast, the British Medical Journal in 2005 marked publication of its 50,000th 'rapid response' by raising the bar on submission, likening its website to a garden 'overrun with weeds'.

The 'who weeds the garden' question, when applied to data (or other research materials), is not, I think, one that intermediaries like DCC can answer... if the Shakespeare Quarterly experiment comes out smelling like a rose to Shakespearian scholars then it probably is one! What might help make sense of the question is a way to categorise the shades of grey in 'open peer review'. I've tried to do something like that in the forthcoming Open Science Case Studies report. In a similar vein, some 'phases' of openness in peer review could be summarised as:

1. Transparent governance - an editorial board is in effect a 'trusted third party' whose governance confers accountability and legitimacy on what would otherwise be the informal exchange of research material and comments. The 'transparency' here would refer to the public web accessibility of the model used, the criteria used for submission and for review, or the reviewers' identities.

2. Community review - extension of the commenting or rating to others qualifying as peers. This might also involve others with a stake in the review outcomes, such as patient groups in medical research, but falls short of the general public. Openness here would refer to the public web accessibility of criteria: both the participants' 'terms of engagement' and the criteria for them to apply to the material in question. Openness could also refer to clarity of the possible outcomes, access to the means to participate, or the results themselves.

3. Public engagement - further extension of the commenting or rating to self-selected 'interested amateurs', with some due process for moderation and for the governing body to respond to the results.

To return to the gardening analogy, I think 'openness' in data, as in the peer review process, is as much about who builds the walls, and how high, as about who does the gardening. Or is it just about knocking walls down?