In 2020 I decided to try to refine my reviews. The impetus for this is that I think I have greater clarity about what my role should be, or more correctly, what my role is not.
My wife once had an internship at a publishing company. Her job was to go through the bin of unsolicited submissions and be ruthless. The company could only publish a set number of books a year, and they solicited most of the books they published. Thus her role was to reject almost all submissions. I think many reviewers think they have this job too. Many reviewers also believe they are the defender of the purity of science. This is a role I used to play. I believed that my field was a disaster and only I could fix it by standing in the way of as many articles as I could. My aim was to expunge the various sins I saw my field committing. What hubris!
I no longer believe that is my role. Ultimately, I think the role of a reviewer is a) to detect fatal flaws (a flaw that no amount of revision would fix); b) to identify any fundamental issue that should prevent publication of any kind (e.g., plagiarism); and c) to determine if the article would look out-of-place among other articles in the field.
Ultimately, the role of a reviewer is to catch malfeasance and monsters.
The role of determining whether an article is important, impactful, or paradigm shifting belongs to readers.
With this refined sense of what a reviewer should be, I have aimed to introduce the following to my own reviews:
My review distribution will become increasingly bi-modal, focused on either outright rejection or acceptance/minor conditional acceptance.
When I reject, my reviews are short. I outline what I think the fatal flaw was and nothing more. If an article is unsalvageable, advising on how certain paragraphs should be phrased or how APA styling should have been handled is a waste of time and confusing to the authors. The language here should be clear. There is no “I think the authors should consider…” or “Have the authors thought of…”. I am also clear in the first sentence that I do not think the article should be accepted and that I do not believe a revision could resolve the fundamental flaws I see in the paper.
If I give a recommendation of conditional acceptance, I am careful to distinguish between the few things I believe are the conditions of acceptance and the areas I think might improve the article. I am clear that the latter are suggestions and the authors are free to ignore them. I then try to phrase these points as questions rather than commands.
If an author refuses to adjust their article in relation to something I think they should adjust, and they give reasons that are not preposterous, I let it go. You have likely received a review from me if you read “I don’t agree with the authors’ position on this issue, but my job is not to make authors write the paper how I want it written. I suggest the paper move forward to publication.”
Dr Seuss wisely stated “shorth is better than length”. And it seems academia is slowly getting the message. Brief reports are here and will likely play an increasing role in Educational and Developmental Psychology. Having spent a while working with people in public health, I have been infected with their obsession with the brief. As Dr Seuss also said:
A writer who breeds more words than he needs creates a chore for the reader who reads.
With this advice in mind I have tended to target brief reports and encouraged others to do so. There is an issue I am seeing, though. Unlike in public health, Ed and Dev reviewers are not really sure how to review brief reports. Today, for example, a post-doc at my institute was hauled over the coals by three reviewers, all of whom said she had not done a thorough enough review of the literature. The problem? The format she submitted to only allows for six references. My general experience is that reviewers are bringing across their expectations from long-form articles and seem unwilling, unable, or unsure of how to adapt their reviews to the brief report format.
Some of the problem might lie with the editors. Maybe they don’t communicate expectations about brief reports to reviewers clearly enough. Maybe some of it is the fault of publishers who don’t do a sufficient job of signposting that an article is a brief report. Some of it is also likely teething problems as the Ed and Dev community starts to come to terms with the brief report format. Whatever the reason, I think we need to address this if we are to embrace this format. And I think we should embrace the format. Generally my writing gets better and my ideas clearer when I am forced to whittle them down to the bare minimum.
So what should we do? In the long term I think there needs to be a rethink about the way different article types are flagged to reviewers by publishing systems, and editors likely need to get better at: a) signalling to potential reviewers that a paper is a brief report and what that means for a given journal; b) providing authors clear directions on how to address reviewers who have requested changes that break with the brief report format; and c) providing reviewers with feedback. In the short term, when you review, pay attention to the article’s submission type and find out what the implications are (e.g., is there a limit on the number of references allowed?). As an author, I think it does not hurt to alert a reviewer to the fact that you have written a brief report by using language like “In this brief report…” rather than “In this paper…”.
There was a great blog post this week from Sara Chipps of Stack Overflow. She discussed the ‘pile on’ effect: the phenomenon where the collective (even when constructive) criticism from many people can be crushing. I see the same thing in the review process. There are arsehole reviewers out there. But my experience has been more of the soul-crushing effect of 3-5 reviews—all from reviewers who mean well and have good things to say.
It is this pile on effect that I think is so destructive for early career and doctoral researchers. This has always been the case. Many of us survived it and have the battle scars to prove it. But I think the pile on effect is even more dangerous now because of the pressure to publish, and to publish in good journals. No top-10-percentile journal articles on your CV often means no chance at a meaningful and secure career in academia. So what to do?
Individually there are some things we could all do:
Don’t be an arsehole reviewer.
If you see something, say something. If you review a paper and you see arsehole behaviour from other reviewers, let the editor know that it is not OK.
If you are a new reviewer, chances are you will be an arsehole, at least in your first few reviews. Get feedback from experienced researchers. Specifically, get feedback on how to write constructive reviews.
What little training in reviewing we receive encourages us to give people a shit sandwich: generic nice first sentence, destruction, patronizing ‘good effort’ closing sentence. This is transparent. Better to spend the time finding something that you genuinely learnt from the paper. Even in awful papers there is generally an interesting idea, a nice way of visualizing data, or a nice turn of phrase. Point this out. Genuine compliments are best.
When your ECRs experience arsehole reviewer behaviour, don’t just comfort them by saying ‘we have all been there’. Let them know that the reviewer’s behaviour is unacceptable. ECRs will become reviewers, and they need to know what behaviour is and is not OK.
I think these are all reasonable points, but they do not get around the pile on effect. For that I think there needs to be structural change. We can do a much better job of making our field more welcoming to newcomers. This might include:
Wrap-around support for early career and doctoral researchers (ECRs). Supervisors should be ready to support their people, and editors should be made aware when a paper is from an ECR and curate feedback from reviewers more aggressively (i.e., edit out the mean bullshit that many reviewers, for some bizarre reason, think is OK to write).
Reviewers could be told that a paper is from an ECR as a nudge to be nicer. I am not suggesting ECRs get a free ride. The same standards should apply to all. But we could be more welcoming.
I have reviewed some 200-ish articles. I have not once received feedback from a journal about my reviews. I KNOW I was an arsehole when I first started. No one bothered to tell me. The lack of feedback from journals to reviewers is unforgivable.
Postgraduate training should include courses on how to review and what the review process will be like.
While I think acting on these suggestions would make things better, it won’t completely fix the feeling of being ganged up on. To this I would only say to my ECR friends, I am truly sorry.
You are not a proofreader. Chances are also high that the rules you are certain are correct are anything but. Split infinitives? Turns out they’re fine. So don’t waste your valuable time on something you are unlikely to be good at. Even if you are good at it, it is still a waste of your time. Academic publishers make ludicrous profits from free academic labor. They can afford to pay for proofreading. And they should.
You are not a typesetter. Reviewers have spilt rivers of ink demanding that authors follow a particular system (e.g., APA 6th). Worse, reviewers almost always demand that authors follow their own idiosyncratic interpretation of these rules. They shouldn’t bother. The publisher will strip an academic paper of this styling and apply their own. They pay people to do this. Don’t waste your time. Does the author’s style, or lack of it, intrude on your ability to read the paper? Fine, say something. But otherwise leave it to the pros who will get the accepted manuscript.
You are not an acquisitions editor. That is the editor’s job. Your job is to determine if the article has sufficient scientific merit to justify publication. Your job is not to decide whether a paper will be highly cited, be a pivotal piece in the field, or be ‘important’.
You are not a co-author. Your job is not to make the author write the paper the way you would have written it. Your job is to determine whether the paper would be out-of-place sitting next to the existing literature in the field. You can suggest stuff. But if the author does not want to do it, and it does not affect the merit of the paper, then back off. Better yet, after you write a comment, ask yourself: “Am I imposing my style on the author, or does my comment address an issue of scientific merit?” If it’s the former, it’s better not to include the comment at all.
I enjoy being a reviewer. It is my chance to be anonymously self-righteous. One of my pet peeves is researchers who motivate their writing by academic circle jerking. This includes opening sentences like “researchers have yet to consider”, “we aim to resolve a tension in the literature”, “we are the first to”, or “we aim to integrate”. Such openings almost guarantee the remaining paper will focus on esoteric issues; there will be precious little of substance on how actual people think, feel, or behave.
So you can imagine my surprise when a reviewer proclaimed that this was exactly what I was doing. On reflection, they were right. I concentrated my whole opening on winning theoretical points—researchers were focusing on the wrong thing and making false assumptions, and I would put them right. This was interesting to me. But it wasn’t person centred, nor do I think it would be interesting to more than maybe a handful of people. My focus was on proving researchers wrong, rather than on the main issues:
Scientists, and thus policy makers and not-for-profits, assume that poor kids are deficient in academic motivation, interests, and self-beliefs. They make policy and develop interventions based on this assumption.
A whole pile of money is being wasted on running motivation, interest, and self-belief interventions for disadvantaged children. This is money that could be spent on advocating for better educational policy that really serves poor children.
This was a good reminder that applied research should always start with why. But that ‘why’ should be for a broad audience—people who could use the research in practical and theoretical ways. In my case, my ‘why’ should have been focused on policy makers. Policy makers need empirical evidence to guide them when deciding how to use a limited budget to create an education system that works for all. They need to know what to focus on. But equally, they need research that tells them what to avoid if they want to make the best use of their limited resources. I should have written my research with that as the most important concern.