We think of a new document as something like a stem cell. The same piece of content can turn into functionally very different things: a blog post, a research paper, a thesis, a conference proceeding, or a book. What distinguishes these forms is the human processes that occur around and to the document, and reviews are a big part of that.
To support this diversity of document types, we likewise want to support a diversity of review types. We are designing reviews to be a very general item in PubPub. To do so, we identify a few key components:
- Review Policy
- Reviewers
- Review Map
A review policy outlines the plan for a review. It may contain details such as:
- how many reviewers it will have
- what constraints will be placed on the review reports (single blind, double blind, etc.)
- what metadata is expected
- how long it should last
The review policy is something that can be made into a template for easy re-use and simple communication. Often, review procedures are written in plain language on journal or conference sites. PubPub Review Policies simply provide a structure for codifying them so they can be shared, reused, and programmatically implemented.
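As a rough sketch of what codifying a policy might look like (the field names here are our own illustration, not a PubPub schema), a review policy template could be a small structured record:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewPolicy:
    """A hypothetical structure for a reusable review policy template."""
    num_reviewers: int                  # how many reviewers the review will have
    blinding: str                       # e.g. "open", "single-blind", "double-blind"
    expected_metadata: list[str] = field(default_factory=list)  # fields reviewers fill in
    duration_days: int = 60             # how long the review should last

# A policy codified once can be reused across many documents.
standard_policy = ReviewPolicy(
    num_reviewers=3,
    blinding="double-blind",
    expected_metadata=["summary", "recommendation"],
    duration_days=60,
)
```

Because the plan is data rather than prose, it can be shared between journals and checked programmatically.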
Reviewers are the people and machines that provide feedback on a document. In traditional paths, this is a set of 2-5 people with relevant expertise who will provide a written response and perhaps enter some form field information. In addition, we envision machine reviewers that can function as autoformatters, language checkers, spell checkers, and grammar checkers. We’re excited about the places for expansion along this path: tools that help you find related but missing citations, tools that encourage best practices around data sharing, and tools that help push you toward language with an appropriate level of jargon.
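To illustrate the machine-reviewer idea, here is a toy spell-check reviewer. This is purely a sketch of the shape such a tool might take, not a PubPub API: it takes a document's text and returns a list of flagged words as its "report".

```python
def spell_check_reviewer(text: str, dictionary: set[str]) -> list[str]:
    """A toy machine reviewer: flags words not found in a known-word dictionary."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    return [w for w in words if w and w not in dictionary]

# A machine reviewer produces its report automatically, with no human in the loop.
report = spell_check_reviewer("The quick broun fox.", {"the", "quick", "fox"})
# → ["broun"]
```

More ambitious machine reviewers (citation finders, data-sharing checkers) would follow the same pattern: document in, structured feedback out.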
A review map is the record of what actually happened. In the ideal case, it simply verifies that what was stated in the Review Policy actually occurred. In reality, though, there are many reasons why a Review Policy and a Review Map might differ: the number of reviewers can change, the time taken can change, the expected metadata can change. Review maps are something we hope other publishers will begin to adopt and share, so that a common language of ‘what actually happened’ can be understood.
Building tools to cleanly manage and enable reviews is known to be exceedingly complex and difficult. Many review management tools have existed and grown into tools to be loathed; many have simply failed because they became too bloated. One difference we think matters here is a top-down vs. bottom-up approach. Most tools that struggle seem to have built structures that precisely solve a problem for an existing workflow or journal. In trying to then scale and adapt into a tool that serves a wide variety of publishers, they become too broad and suffer from conflicting preferences and constraints.
Our approach is to build a set of tools that let publishers themselves craft their process. We’re not prescriptive in giving stages names or requiring follow-on steps. One tradeoff this entails is that the PubPub review system will likely not be set up for your exact workflow out of the box. We’re hoping that in time a number of templates and best practices will emerge to reduce such hurdles, so please share your feedback and ideas. The benefit (we hope) of taking this approach is that we can scale a review system that remains flexible yet powerful.
We’ll be talking with many of you in the coming months and slowly rolling out Review features. To share a bit of visual thought on the matter, a set of early, unpolished mockups can be seen here.