I was a track change jerk last week. Someone did something minor that I didn’t like. So I showed my distaste via the comment function in Word. I know better.
Like most people working on a collaboration with a big team, I eagerly await people’s comments on my comments. And while most of you won’t admit it, I too get a thrill out of seeing a change I made via track changes accepted by the lead author.
This means collaborative track changes are not low stakes. And yet we treat them like they are. I add comments on papers that are as dismissive as they are uninformative (“awkward sentence”, “this makes no sense”). I change whole sentences or paragraphs without once explaining why I thought what they had was wrong. I treat the comments section as if it is a conversation between me and the person without acknowledging that these comments will be visible to the whole team.
This is not a blog post aimed at self-flagellation. It is more a call for discussion. Do we need track change etiquette? And if we do, what should it be? A few thoughts:
Acknowledge that track changes in big teams are public documents and that it doesn’t hurt to be nice.
Acknowledge that you are not a professional proofreader (yes, that means you). So if you change something, add a comment explaining why. A great colleague this week pointed out a split infinitive via comments but also acknowledged that he was not sure whether split infinitives matter anymore.
Point out the superb bits. The academic mentality is so optimised for criticism that we find it really hard to acknowledge good work. Recognising good work is as much a critical skill as recognising bad work.
If you have something controversial or sensitive to say, do it in person, via Skype, or—if you really really have to—via email. Don’t do it in track changes or comments.
Before changing something ask “is this just my personal preference?”.
My commitment is to be a better track change colleague from here on out.
I came across a tweet last week by an academic at a conference (I can’t remember who). They were indignant that presenters were using the word ‘predict’ to describe a correlation. My first reaction was to sigh. Prediction has no causal connotation. When you go to the fair and some huckster offers to guess your age for a price, they are making a prediction based on your physical appearance. This prediction does not require a belief that your physical appearance caused your age. Such a belief is absurd. Yet prediction is still the right word.
This was my first reaction. My second was to reflect on my own use of prediction in reporting research results. While I believe ‘to predict’ requires no causal beliefs, it does imply a certain level of accuracy. I have used the word predict to describe a correlation of .20. On reflection this seems wrong. Not because I am implying causation; I am not. But because I am implying a level of accuracy in predicting y from x. The implication is that by knowing a person’s value on x I can make a good guess at their value of y. But a correlation of .20, or even .40, as the basis of such a prediction would be atrociously inaccurate. Reporting such weak results using the word predict leads the public, who rightly read ‘predict’ as ‘predict accurately’, to vastly overstate the significance of the finding.
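A back-of-the-envelope simulation makes the point concrete. This is a hedged sketch with illustrative numbers, not data from any real study: it generates two variables correlated at .20 and asks how much knowing x actually improves a guess at y.

```python
import numpy as np

# Illustrative sketch: how much does knowing x improve a prediction
# of y when the correlation is .20? (Simulated data, not real results.)
rng = np.random.default_rng(0)
n, r = 100_000, 0.20
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)  # corr(x, y) ≈ .20

baseline_rmse = y.std()          # naive strategy: predict the mean for everyone
model_rmse = (y - r * x).std()   # best linear prediction of y from x

print(f"variance explained (R^2): {r**2:.2f}")
print(f"reduction in prediction error: {1 - model_rmse / baseline_rmse:.1%}")
```

Knowing x explains 4% of the variance in y and shrinks the typical prediction error by about 2%. That is the level of accuracy the word ‘predict’ is being asked to carry.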
The social sciences are well known for being terrible at prediction.
Predictive accuracy is often woeful, even on the data the researchers used to build the statistical model (see here for example). The social sciences often seem not to know about the importance of, let alone test, the predictive accuracy of a model on unseen data, which is really the only metric of predictive accuracy that matters.
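The gap between in-sample fit and accuracy on unseen data is easy to demonstrate. A minimal sketch, with made-up simulated data and an over-flexible model chosen purely for illustration:

```python
import numpy as np

# Hedged illustration: in-sample fit flatters a model; unseen data do not.
rng = np.random.default_rng(1)

def r2(y, yhat):
    """Proportion of variance explained by predictions yhat."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# A weak true signal plus noise, and a deliberately over-flexible model.
x_train = rng.uniform(-1, 1, 30)
y_train = 0.5 * x_train + rng.standard_normal(30)
coefs = np.polyfit(x_train, y_train, deg=9)   # fits noise as well as signal

# Fresh data drawn from the same process the model never saw.
x_test = rng.uniform(-1, 1, 1000)
y_test = 0.5 * x_test + rng.standard_normal(1000)

print("in-sample R^2: ", r2(y_train, np.polyval(coefs, x_train)))
print("out-of-sample R^2:", r2(y_test, np.polyval(coefs, x_test)))
```

The in-sample number always looks better than the out-of-sample one; reporting only the former is how woeful predictive accuracy hides in plain sight.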
In his fantastic paper, Shmueli argues that the social sciences have neglected the prediction component of science in favor of a complete focus on explanation. Mostly this is because of the mistaken belief that explanation is synonymous with prediction.
And here lies the problem. The social sciences are scathing about anyone who uses the word prediction outside of RCT research. But this fit of pique is misdirected. The willing skeptic at the fair may say “I have $10 with your name on it if you can guess my age to within a year”. So too we should call out authors on their use of “predict” when their models are scarcely better than chance.
Correlation is not causation. So comes the inevitable refrain in response to anyone who presents a correlational study as evidence in a debate. There is good reason for this. People have long extrapolated from correlation to causation. Bad science, and often bad policy, follows. But a healthy respect for what we can claim about causality has given way to abject fear of any language that even hints at causality.
There is no danger in being overly cautious I hear you say.
But there have been unintended consequences of barring causal language. First, few social scientists now understand much about causality, mistakenly thinking it is simply whatever comes out of an RCT. Second, theory has become sloppy. Why waste time constructing a detailed theory of why x leads to y when a reviewer will make you tear it up?
Evidence that something has gone wrong
The biggest evidence I see that something is amiss is how reviewers and writers now interact. It is not uncommon for a reviewer to demand that a writer remove all causal language from their manuscript. I have seen this include purging the word ‘effect’ from a manuscript entirely; even named theories are not immune (the Big-Fish-Little-Pond effect becomes the Big-Fish-Little-Pond association). But authors should advance causal theories in introductions!
Reviewers also display a lack of understanding about causation when they claim that only an RCT can provide evidence of causality. RCTs neither provide definitive evidence of causation nor are they the only way of providing evidence of causality.
Writers also make mistakes. Writers of papers I have reviewed refuse to explain how x leads to y because they didn’t do an RCT. One wonders, if they think this way, why they bothered to do the study at all. And if they are so scared of advancing a causal explanation why use a regression model that so strongly suggests that x leads to y?
Setting the record straight
In Hunting Causes and Using Them, Nancy Cartwright emphasizes that causality is not a single thing (her book Evidence-Based Policy is also worth reading). So heterogeneous are the things we call causality that we might do better to abandon the term entirely. We likely need to match method and evidence to the type of causality we are chasing.
Judea Pearl in The Book of Why claims science would become worse, not better, if we were to believe that only RCTs have the power to provide evidence of causation. In such a world, how would we know that smoking causes cancer?
The danger in the current social science landscape comes from a belief that causation is a dichotomy: if you did an RCT you can advance causal claims, and if you didn’t, you can’t. But causality is not a dichotomy. RCTs often can’t provide evidence of causation, and sometimes provide poor evidence. RCTs are critical, but we need to be both more conservative (RCTs provide some evidence, sometimes) and more liberal in allowing other designs (regression discontinuity, instrumental variables, Granger causality) to provide evidence of causality.
What to do about it
Treat causality as a spectrum along which researchers can marshal evidence that pushes the needle toward a causal interpretation or away from it.
View no single piece of evidence as incontrovertible evidence of causality.
Write about clear and simple causal mechanisms in introductions and literature reviews.
In the method section, social science papers should discuss the degree to which the results can provide evidence of causation, and perhaps also the type of causation the authors have in mind. This should include a discussion of design, context, strength of theory, and methodology. In other words, researchers should have to make a case for their specific research rather than relying on general social science tropes.
As Cartwright suggests, we should replace general terms like cause with more specific terms like repel, pull, excite, or suppress that give a better idea of what is being claimed.
I hate academic conferences. What seems like a chance for free travel to an exotic location turns out to be an endless bore in a stuffy room. For an introvert, the need to be constantly ‘on’ when talking to students, peers, and that big name you are desperate to collaborate with is tiring. The point being, I am not usually in the best of moods when at conferences. Which is probably why I found a particular presentation so irksome.
Why so much person-centered research seems so hollow to me
The presenter at this hot and stuffy conference gets up and smugly states that previous, crappy, social science has used a variable-centered approach to research. He, however, would use a person-centered approach. The motivation was, I confess, solid.
Person-centered analysis starts with the assumption that within any large group of people there are likely smaller distinct groups (within a school there are jocks, goths, nerds, etc). Too much research treats humans as a bunch of mini clones that are driven by the same processes and differ only in degree. I can get behind this sentiment.
I was surprised, then, not to hear a single person mentioned for the rest of the presentation. No explanation was given of how people in the different groups think, believe, feel, or act differently from each other. Nor was there discussion about whether people chose to be a member of their group or whether they were forced into it. Did they jump or were they pushed? Instead, the entire presentation focused on various configurations of variables. This was not, to me at least, person-centered.
This is a disturbing trend in person-centered research: the almost total absence of people.
The overall impression I get from most person-centered analysis is that researchers believe human diversity has been ill-served by regression-like approaches, and that by applying something like cluster analysis they will magically fix the problem. In my experience, researchers don’t seem to put much thought into how these approaches better represent real people or what the results really say about them. They tend not to describe a prototypical human from each of their groups, and they apply little imagination to what drives the people in different groups.
Greater attention to this could genuinely transform the social sciences. A truly person-centered ontology and epistemology could serve disadvantaged groups better. Researchers could better acknowledge that the experience of, say, an Indigenous girl is qualitatively different from that of a South East Asian Australian boy. But to do this, person-centeredness needs to be about more than methods. And it needs to be motivated less by an appeal to what it isn’t (e.g., “Unlike previous research we use person-centered approaches” is not a convincing rationale).
Give me person-centered person-centered analysis
A move in the right direction would be to consider what Rob Brockman and I have recently called the four S’s of person-centered analysis:
Specificity. Once you have your groups, can you describe what makes them distinct? By this I don’t mean a profile graph of the variables used to create the groups. I mean a deeper insight into what these groups of people are like. What do their members do? How do they think? What do they want?
Selectivity. How do people end up in these groups? By what process does a person end up in group A and not group B? Were people born into different groups? Did some person, institution, or cultural practice push them into their group? Or is their group membership their choice?
Sensitivity. Do these same groups occur in different samples? If not, why not? Do differences in groupings across—for example—countries illuminate how people’s context shapes grouping, or do differences just reflect unreliable research findings?
Superiority. The beauty of cluster analysis is that it will always return the number of groups you asked for. And like a Rorschach test, it is easy to make something out of whatever the computer gives you. Researchers should attempt to show that their groups tell us something we did not already know. And researchers need to show us that groups really differ from each other qualitatively rather than merely quantitatively.
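The Superiority point can be shown directly: cluster analysis will impose groups even where none exist. A minimal sketch, using a hand-rolled k-means on deliberately structureless simulated data (all numbers here are illustrative):

```python
import numpy as np

# Hedged illustration: k-means returns k "groups" even when the data
# are a single Gaussian blob with no group structure by construction.
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 2))   # one blob, zero real clusters

def kmeans(X, k, iters=50):
    """A bare-bones k-means: assign points to nearest center, update centers."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new_centers = []
        for j in range(k):
            pts = X[labels == j]
            # keep the old center if a cluster empties out
            new_centers.append(pts.mean(axis=0) if len(pts) else centers[j])
        centers = np.array(new_centers)
    return labels

for k in (2, 3, 4):
    labels = kmeans(X, k)
    print(k, np.bincount(labels, minlength=k))  # dutifully reports k "clusters"
```

Ask for two groups, get two; ask for four, get four. The algorithm never says “there are no groups here”, which is why the burden of showing the groups are real falls on the researcher.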
There was a great blog post this week from Sara Chipps of Stack Overflow. She discussed the ‘pile on’ effect: the phenomenon whereby collective (even constructive) criticism from many people can be crushing. I see the same thing in the review process. There are arsehole reviewers out there. But my experience has been more of the soul-crushing effect of 3-5 reviews, all from reviewers who mean well and have good things to say.
It is this pile on effect that I think is so destructive for early career and doctoral researchers. This has always been the case. Many of us survived it and have the battle scars to prove it. But I think the pile on effect is even more dangerous now because of the pressure to publish, and to publish in good journals. No top-10-percentile journal articles on your CV often means no chance at a meaningful and secure career in academia. So what to do?
Individually there are some things we could all do:
Don’t be an arsehole reviewer.
If you see something, say something. If you review a paper and you see arsehole behaviour from other reviewers, let the editor know that it is not OK.
If you are a new reviewer, chances are you will be an arsehole, at least in your first few reviews. Get feedback from experienced researchers, specifically on how to write constructive reviews.
What little training in reviewing we receive encourages us to give people a shit sandwich: generic nice first sentence, destruction, patronizing ‘good effort’ closing sentence. This is transparent. Better to spend the time finding something that you genuinely learnt from the paper. Even in awful papers there is generally an interesting idea, a nice way of visualizing data, or a nice turn of phrase. Point this out. Genuine compliments are best.
When your ECRs experience arsehole reviewer behaviour, don’t just comfort them by saying ‘we have all been there’. Let them know that the reviewer’s behaviour is unacceptable. ECRs will become reviewers, and they need to know what behaviour is and is not OK.
I think these are all reasonable points, but they do not get around the pile on effect. For that I think there needs to be structural change. We could do a much better job of making our field more welcoming to newcomers. This might include:
Wrap-around support for early career and doctoral researchers (ECRs). Supervisors should be ready to support their people, and editors should be made aware when a paper is from an ECR and curate feedback from reviewers more aggressively (i.e., edit out the mean bullshit that many reviewers for some bizarre reason think is OK to write).
Reviewers could be told that a paper is from an ECR as a nudge to be nicer. I am not suggesting ECRs get a free ride. The same standards should apply to all. But we could be more welcoming.
I have reviewed some 200-ish articles. I have not once received feedback from a journal about my reviews. I KNOW I was an arsehole when I first started. No one bothered to tell me. The lack of feedback from journals to reviewers is unforgivable.
Postgraduate training should include courses on how to review and what the review process will be like.
While I think acting on these suggestions would make things better, they won’t completely fix the feeling of being ganged up on. To that I would only say to my ECR friends: I am truly sorry.
You are not a proofreader. Chances are also high that the rules you are certain are correct are anything but. Split infinitives? Turns out they’re fine. So don’t waste your valuable time on something you are unlikely to be good at. Even if you are good at it, it is still a waste of your time. Academic publishers make ludicrous profits from free academic labor. They can afford to pay for proofreading. And they should.
You are not a typesetter. Reviewers have spilt rivers of ink demanding that authors follow a particular style system (e.g., APA 6th). Worse, reviewers almost always demand that authors follow their own idiosyncratic interpretation of these rules. They shouldn’t bother. The publisher will strip an academic paper of this styling and apply their own. They pay people to do this. Don’t waste your time. Does the author’s style, or lack of it, intrude on your ability to read the paper? Fine, say something. But otherwise leave it to the pros who will get the accepted manuscript.
You are not an acquisitions editor. That is the editor’s job. Your job is to determine if the article has sufficient scientific merit to justify publication. Your job is not to decide whether a paper will be highly cited, be a pivotal piece in the field, or be ‘important’.
You are not a co-author. Your job is not to make the author write the paper the way you would have written it. Your job is to determine whether the paper would not be out of place sitting next to the existing literature in the field. You can suggest stuff. But if the author does not want to do it, and it does not affect the merit of the paper, then back off. Better yet, after you write a comment, ask yourself: “Am I imposing my style on the author, or does my comment address an issue of scientific merit?” If it’s the former, it’s better not to include the comment at all.
“Those who know that they are profound strive for clarity. Those who would like to seem profound to the crowd strive for obscurity. For the crowd believes that if it cannot see to the bottom of something it must be profound.” (Nietzsche, The Gay Science)