Adapted from messages I recently wrote in the Center for Open Science (COS) Ambassadors list chat:
A need for consensus about what will replace journals' blind peer review
Most in the open science community seem to agree on the main strategies for the future:
- more preregistration
- more sharing of information
- earlier sharing of information
- continuous editing
- better statistical inference
But one gap remains where the strategy for the future hasn't been clarified: what will replace journals' blind peer review, which is currently the main mechanism for research evaluation.
Jon Tennant's excellent paper (earlier version: https://f1000research.com/articles/6-1151/v3 ; a follow-up version is here: https://osf.io/preprints/socarxiv/c29tm/ (post-review, just resubmitted)) sets out many possible strategies, with one summary shown in this table (taken from the earlier, F1000 version).
But the community hasn't yet formed a consensus around backing one single model. (In contrast, it has formed a consensus on, and backed, the Registered Reports (RR) format - and RRs have now been adopted by 100+ journals.)
I, and I'm sure many others, would be quite happy to see one model backed, and would then change my research practices to align more closely with that model. Just as we're all making use of preprints, trying to share open data, trying to do RRs for confirmatory research, etc., we could also try to use whatever new workflow is proposed to replace, or serve as a useful alternative to, journals' blind peer review.
Practitioner-oriented research in applied social science fields
As a person working in education, an applied social science, my main concern right now is: if you produce something useful, how do you know that anyone (i.e. relevant researchers and practitioners) will see it? Under the current system, the best way to reach and influence other researchers and practitioners is to get a paper into the SCI/SSCI journals, which means going through the journal blind review system that we all recognise as rather flawed and rather slow.
On the other hand, with current digital tools and increased awareness of the OSF and other open repositories, perhaps it's enough just to make sure an article is tagged with the right keywords? Then most people with an interest should see it. Sub-disciplinary communities generally aren't overwhelmingly large. If the right keywords are used in the articles and in the searches, and if the open repository you use is popular enough, then you can be almost as confident that the relevant readership will see your open paper as you would be with an SCI/SSCI journal paper. The reader can then do her own, tacit review and evaluation of the paper. If she sees that your paper was preregistered, and provides all the data and materials, she may even evaluate it more favourably than a non-preregistered, closed-data paper in the journals.
Practitioners in particular, when choosing which research papers to base their practice on, are unlikely to care as much about authors' or journals' "eminence". They're more likely to be pragmatic and focus on evidence of effectiveness rather than eminence, and they're less likely to want to play the eminence game themselves (e.g. citing eminent authors in order to promote one's own research). So a robust, open-data article on the OSF could influence practitioners' decision-making even more than the average journal article does.
In that case, when thinking about practices to promote across the researcher and practitioner community, the open science movement should prioritise:
- new literature search practices, so that open repositories are taken into account as much as e.g. Scopus,
- research evaluation training - e.g. it would be quite easy for the OSF and other open repositories to provide guidance to users on the hierarchy of evidence, evaluating statistical inferences, etc., with all research articles on the site linking to that guidance