The January 4 issue of Science published two extremely interesting short essays in the Letters section. You can read them both on the first page
of this PDF.
The first of the two letters, by William F. Perrin, raises a well-known problem in academic publishing, namely the serious difficulty of finding reviewers for scientific articles submitted to journals. Beyond the grave implications Perrin mentions in the letter (editors often have to settle for less knowledgeable reviewers and lower-quality reviews), one other obvious consequence is that the refereeing period increases considerably. I have faced this problem myself with one of my papers submitted for publication: after an incredibly long period without hearing anything, I contacted the journal's editor only to find out that no fewer than five people he had initially approached had refused to review the manuscript and, worse, that one of the two who had accepted eventually gave up after a few months, leaving the editor no option but to search for a new referee.
How are we to approach this dilemma? What possible solutions can one think of? Perrin appeals to academic ethics: "Doing a fair share of peer reviews should be a recognized and expected part of the job for scientific professionals; it should be written into the job descriptions of salaried scientists and be considered in evaluating junior faculty for tenure. The caution should be 'Publish and review, or perish'." While I agree with Perrin's normative ideal, I do not think this would be sufficient to provide the right incentives: after all, in most high-level academic institutions the above is already implicitly understood as part of the job of a professional scientist; I doubt any of these scientists would disagree with Perrin if asked. The problem is that such a rule cannot really be enforced: since peer review is typically done under anonymity, there is no way to tell how many times a potential referee declines to review. And simply mandating a minimum number of reviews (e.g., 12-16 reviews per scientist per year, according to Perrin's back-of-the-envelope computation) will not work either, for related reasons: one cannot observe how many refereeing requests a particular scientist receives in a given period, and that number very likely has high variance. So, can we do anything else? Here is something I see as a straightforward solution:
why not pay the referees every time? The obvious way to make this sustainable is to charge a submission fee for every paper submitted (some scientific journals already practice such a submission-fee policy) and to use most of that fee to pay referees,
provided they deliver their report within the requested time. I am not aware of any study comparing paid and unpaid referee-report practices (both exist nowadays, though to the best of my knowledge the former is rather exceptional) in terms of (a) the time needed to obtain two or three referee reports, the number depending on the journal's policy, and (b) the quality of the reports. My intuition is that the reviewing period would shorten considerably and that editors would find referees faster under a pay policy; I doubt the quality of the reports would increase under a pay system, but this remains an empirical question (e.g., one could expect people to deliver higher-quality reports in order to be asked to referee again, but one could equally expect people to accept reviews even when they know they could not do a good job on that particular paper for
real lack of time, expertise, etc.). There are also questions regarding the amount and form of this eventual payment, but those are second-stage considerations and would not present insurmountable problems (e.g., if it turns out the referees must be paid more, the submission fees could be raised).
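To make the fee-funded scheme above concrete, here is a purely illustrative sketch of its arithmetic. All figures (the fee size, the share earmarked for referees, the number of reports) are hypothetical placeholders of my own, not numbers from either letter:

```python
# Illustrative economics of a fee-funded referee-payment scheme.
# All numbers are hypothetical assumptions, not proposals from the letters.

def referee_payment(submission_fee, referee_share, n_referees):
    """Portion of one submission fee paid to each referee who delivers
    a report on time; the remainder covers editorial overhead."""
    return submission_fee * referee_share / n_referees

# E.g., a $300 fee with half earmarked for referees, split over 2 reports:
print(referee_payment(300, 0.5, 2))  # -> 75.0

def required_fee(target_payment, referee_share, n_referees):
    """The 'second-stage' knob: the submission fee needed to fund a
    given per-referee payment at a fixed split and report count."""
    return target_payment * n_referees / referee_share

# E.g., to pay each of 3 referees $150 while keeping the 50% split:
print(required_fee(150, 0.5, 3))  # -> 900.0
```

The second function is just the first one inverted, which is the point made in the text: if referees need to be paid more, the fee is the variable the journal can raise.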
The second letter tackles the concern that reviews may have become
too critical and demanding (e.g., to the point of virtually demanding that the entire paper be rewritten), a view shared by many others (see also
Preston McAfee's idea of revolutionizing the reviewing policy of the journal to which he has recently been appointed editor). Robert S. Zucker makes some great points in his letter to Science (they might seem obvious, but you'd be surprised how often they are grossly violated in actual practice), which I quote verbatim below in list form, a 'peer-review how-to':
- Reviewers should highlight a paper's strengths and weaknesses, but they need not delineate strengths in very weak papers nor stress minor weaknesses in strong papers.
- Reviews should be prompt and thorough and should avoid sharp language and invective.
- [Do not] reflexively demand that more be done:
  - Suggest an additional experiment, further analysis, or altered specification, but do not make publication contingent on these changes.
  - If the conclusions cannot stand without additional work or if the evidence does not distinguish between reasonably likely alternatives, recommend that the editor reject the manuscript.
- Seek a balance among criteria in making a recommendation:
  - Do not reject a manuscript simply because its ideas are not original, if it offers the first strong evidence for an old but important idea.
  - Do not reject a paper with a brilliant new idea simply because the evidence was not as comprehensive as could be imagined.
  - Do not reject a paper simply because it is not of the highest significance, if it is beautifully executed and offers fresh ideas with strong evidence.
- Step back from your own scientific prejudices, in order to judge each paper on its merits and in the context of the journal that has solicited your advice.