Anatomy of a Bad Review
- Mishkat Bhattacharya
- 2 days ago
- 4 min read
This post is about one of my favorite topics: academic peer review. I have written about it before, comparing different fields and suggesting alternatives. Today I will analyze some classic characteristics that I think most low-quality reviews share.
I am writing this in the hope that it will be helpful to people just entering the process. It is often not until much later in their careers that researchers learn to recognize these defects and acquire the decisiveness to call them out. Too often authors are simply paralyzed by the power imbalance of the review process, and by how much they have at stake in the proceedings.
It's all wrong: This type of review provides detailed evidence that everything, absolutely everything, in the entire grant proposal/paper (which one or more competent scientists have spent more than six months writing) is totally wrong. In fact, if the review is to be believed, not only the paper/grant that was sent in but the entire life of the author(s) is wrong.
Since most fair and balanced reviews mix criticism with approval, it is of course obvious that such a referee has a conflict of interest with the author(s). They want to make sure that not a single approving word slips in, even by mistake. They want to take every pain to ensure that the grant/paper is killed completely and has no chance whatsoever of being awarded/published.
Remarkably, I have never seen any pushback from the editors on such reviews. They are happy to side with the referees on everything they say. This is probably because the editors are swamped with submissions and have no time for nuance.
Moving the goalposts: This is a kind of review which in the first round has some specific criticisms. If the authors' rebuttal looks like it essentially satisfies those criticisms, the referee in the next round immediately switches to a new, unrelated, set of objections and rejects the paper on the basis of these new objections.
For example, a theoretical proposal for an experiment may first be criticized on the validity of the mathematical approximations it invokes. Once the authors show these approximations to be valid, the referee then says the proposed experiment is unrealistic (usually without saying why) and the paper cannot therefore be accepted. If the latter objection is important enough to justify rejection of the paper, should it not have been mentioned in the first round of review?
What is happening, of course, is that the referee has taken an intuitive dislike to the paper, has a conflict of interest, etc., and is now casting about for any reason to reject it. And they know they can do so. They know all too well that the editor is not going to step in and enforce fairness - the editor has no time for that: they are simply looking for a yea or nay.
Denying the fundamentals: There is scarcely a better trick for rejecting a grant or paper than to deny that its subject exists at all. Whether true or not, this argument takes the wind out of the authors' sails and ensures that the limited (two or three) rounds of review are all spent in philosophical arguments, which are usually inconclusive enough for the editor to reject the paper (only enthusiastic approvals from the referees result in acceptance).
For example, you may write a paper on the quantum theory of the laser (a topic on which many papers have been written), but your referee could claim that the laser is essentially a classical device - good luck arguing with that.
Being/pretending to be stupid/incompetent: Another damping mechanism leading to rejection is for the referee to be 'slow': not checking the references originally provided in the paper; not reading them even when they are pointed out in the rebuttal; responding only when relevant excerpts from these references are pasted in extenso into the rebuttal. By which time you are dead meat, my friend. A variation on this theme: 'talking past' or not paying attention to what the authors argued in the rebuttal.
Blatantly false statements: "Papers already published in journal X cannot be used as an example for the type of papers that should be published in journal X in the future." Huh? Even if the glaring contradiction in this statement is not clear to the referee, did they not read the journal X masthead? It typically says something like "Authors should familiarize themselves with previous issues to see what kind of papers are published in journal X". Where else can we find papers suitable for publication in journal X?
Judging without supporting evidence or arguments: Making statements like "I don't think this topic is of interest to the community." Really? Do you have the citation figures to prove it - or at least to make the criticism justifiable?
Personal insults: While several journal mastheads allow strong language for critical purposes, referees often overstep it, straying into the personal realm under cover of anonymity. For example, one of our referees said something like: "I can understand the authors have had trouble publishing their work on this topic before".
This is not only false - a quick look at the available literature would have disabused the referee of their notion - it is irrelevant, vicious and cowardly. Not to mention unprofessional.
If it were solely up to me, I would decide after the first round of reports whether to carry on rebutting any specific paper (one can usually tell if the referee is dead set against the work and no rebuttal is going to change their mind). Then I would just try submitting to a different journal: playing what I like to call 'referee roulette'.
But typically I publish not just by myself but with students, postdocs and collaborators. They have their own reasons for pushing ahead with the process, and I do not interfere. There is also a chance that pulling the plug at a journal might incur the displeasure of the handling editor; essentially I would be telling them that their choice of referee was wrong. This might have consequences for our future submissions.
Summary
The review process is loaded against most authors - maybe not those from a big group or a famous university, though I have heard such people gripe as well. The current system has no checks and balances against the inequities listed above. I have suggested alternatives elsewhere.