Thingification: Bad Writing Leads to Bad Theory

Repression, according to Freud, is a common phenomenon. Note that repression is a noun here. People don’t repress. Rather, repression is the name of a state that seems to happen all on its own. This is the point Michael Billig makes in his book Learn to Write Badly: How to Succeed in the Social Sciences. Billig points out that Freud’s theory turns the verb to repress into the noun repression. Freud does this to make his theory sound scientific. But in doing so we lose a heap of important theoretical information. Who does the repressing? How does it happen? What processes result in it?

My colleague in my writing circle calls this thingification. But in keeping with the theme of this post I suppose I should talk about when researchers thingify an abstract process.

I see this in my field a lot. “Growth mindset is associated with persistence”. This sounds like a sufficiently science-like expression that we let it go by unexamined. We then expect the scientist to collect survey data on growth mindset and persistence and then test their relationship to determine the strength and direction of the association. But the statement “growth mindset is associated with persistence” leaves so much unsaid.

A better statement would be: children who believe their ability is fixed, and thus cannot change, are unlikely to persist in overcoming obstacles to their learning. Notice how I replaced the weak term ‘associated’ with richer causal language? See how this language specifies a process for the association? The child is now an actor in the sentence who believes things and acts accordingly.

In the original statement, it is the variables doing things to each other. But as John Goldthorpe states, “variables don’t do things, people do”. Notice also that this sentence invites further specification? We can now ask:

  • How did the child come to believe this? 
  • Do they believe only their own ability is fixed, or everyone’s? 
  • What do such children make of a school system that demands they practice, practice, practice?

Social scientists thingify to sound more scientific. But in doing so we have created a myriad of under-specified theories and a science about people that is almost entirely devoid of people.

Person Centered Analysis: Where are all the People?

I hate academic conferences. What seems like a chance for free travel to an exotic location turns out to be an endless bore in a stuffy room. For an introvert, the need to be constantly ‘on’ when talking to students, peers, and that big name you are desperate to collaborate with is tiring. The point being, I am not usually in the best of moods when at conferences. Which is probably why I found a particular presentation so irksome.

Why so much Person-centered Research Seems so Hollow to Me

The presenter at this hot and stuffy conference gets up and smugly states that previous, crappy, social science has used a variable-centered approach to research. He, however, would use a person-centered approach. The motivation was, I confess, solid.

Person-centered analysis starts with the assumption that within any large group of people there are likely smaller distinct groups (within a school there are jocks, goths, nerds, etc.). Too much research treats humans as a bunch of mini clones that are driven by the same processes and differ only in degree. I can get behind this sentiment.

I was surprised then to not hear mention of a single person for the rest of the presentation. No explanation was given of how people in the different groups think, believe, feel, or act differently from each other. Nor was there discussion about whether people chose to be a member of their group or whether they were forced into it. Did they jump or were they pushed? Instead, the entire presentation focused on various configurations of variables. This was not, to me at least, person-centered.

This is a disturbing trend in person-centered research: the almost total absence of people.

The overall impression I get from most person-centered analysis is that researchers believe human diversity has been ill-treated by regression-like approaches. But many assume that applying cluster analysis or something similar will magically fix this problem. In my experience, researchers don’t put much thought into how these approaches better represent real people or what the results are really saying about them. They tend not to describe a prototypical human from each of their groups, and they apply little imagination to what drives people in different groups.

Greater attention to this could truly transform the social sciences. A truly person-centered ontology and epistemology could serve disadvantaged groups better. Researchers could better acknowledge that the experience of, say, an Indigenous girl is qualitatively different from a South East Asian Australian boy. But to do this, person-centeredness needs to be about more than methods. And it needs to be motivated less by an appeal to what it isn’t (e.g., “Unlike previous research we use person-centered approaches” is not a convincing rationale).

Give me person-centered person-centered analysis

A move in the right direction would be to consider what Rob Brockman and I have recently called the four S’s of person-centered analysis:

  1. Specificity. Once you have your groups, can you describe what makes these groups distinct? By this I don’t mean a profile graph of variables used to create the groups. I mean a deeper insight into what these groups of people are like. What do their members do? How do they think? What do they want?
  2. Selectivity. How do people end up in these groups? By what process does a person end up in group A and not group B? Were people born into different groups? Did some person, institution, or cultural practice push them into their group? Or is their group membership their choice?
  3. Sensitivity. Do these same groups occur in different samples? If not, why not? Do differences in groupings across—for example—countries illuminate how people’s context shapes grouping, or do differences just reflect unreliable research findings?
  4. Superiority. The beauty of cluster analysis is that it will always return the number of groups you asked for. And like a Rorschach test, it is easy to make something out of whatever the computer gives you. Researchers should attempt to show that their groups tell us something we did not already know. And researchers need to show us that groups really differ from each other qualitatively rather than merely quantitatively.
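The Superiority point is easy to demonstrate in a few lines of R. The snippet below is a purely synthetic illustration (not data from any study): k-means will dutifully return exactly as many “groups” as you request, even when the data contain no groups at all.

```r
# Illustrative only: one homogeneous blob of points, by construction a
# single cluster, yet k-means returns whatever number of groups we ask for.
set.seed(1)
blob <- matrix(rnorm(200 * 2), ncol = 2)

for (k in 2:4) {
  fit <- kmeans(blob, centers = k)
  cat("Requested", k, "groups; k-means returned", length(fit$size), "\n")
}
```

Whether those returned groups mean anything is exactly the question the four S’s are meant to force.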

STEM Gender Gaps in Motivation, Interest, and Self-belief are Huge, Right?

We recently had a meta-analysis on STEM gender differences in motivation, interest, and self-belief published in Educational Psychology Review. We could not be more thrilled. And a big thank-you to my former PhD student Brooke for all her work on this. The results are in the poster download below. But first, some context for why there is a download in the first place.

I have been thinking about using Kudos for new papers and this seemed like a good paper to give it a try. I spent longer than I’d like setting up a design brief for this. But now it is done, I have a template in InDesign I can use for all new papers, as well as themes for ggplot and a standard color palette. My design choices were:

  1. Use of only three colors, all blues. I think this is elegant, but it is also advantageous for me as I am color blind.
  2. For plots I have modified the Economist white theme from ggthemes. So from here on out all my plots will be consistent.
  3. I used a combination of serif and sans-serif fonts that work nicely together. I chose Avenir Book and EB Garamond. I am not super happy with these, but I don’t like the idea of paying $400 for the fonts I really want. I may swap out EB Garamond for Nanum Myeongjo to get a crisper feel. Not sure yet.
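For what it’s worth, the ggplot side of this boils down to a small reusable theme function. This is only a sketch of the idea: the hex codes and the function name `theme_house` are placeholders I made up, not my actual palette.

```r
library(ggplot2)
library(ggthemes)

# Placeholder three-blue palette (illustrative hex codes, not the real ones).
blues <- c("#1f4e79", "#2e75b6", "#9dc3e6")

# House theme: ggthemes' Economist white theme, tweaked once, reused everywhere.
theme_house <- function(base_size = 11) {
  theme_economist_white(base_size = base_size, gray_bg = FALSE)
}

# Example use, so every plot picks up the same look:
p <- ggplot(mtcars, aes(wt, mpg, colour = factor(cyl))) +
  geom_point(size = 2) +
  scale_colour_manual(values = blues) +
  theme_house()
```

Defining the theme once and adding it to every plot is what keeps the figures consistent across papers.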

Anyway, you can see the result here:

Comments welcome, particularly on fonts, general look, and plot theme, as I will want to roll these out for other papers. I still need a lot of work on distilling the message of my papers down to 100 or so sticky words. And my InDesign skills are weak (though I think I am getting better with my R to Illustrator workflow).

The Pile On Effect: Why Receiving Constructive Reviews Still Sucks

There was a great blog post this week from Sara Chipps of Stack Overflow. She discussed the ‘pile on’ effect: the phenomenon whereby the collective (even when constructive) criticism of many people can be crushing. I see the same thing in the review process. There are arsehole reviewers out there. But my experience has been more of the soul-crushing effect of 3-5 reviews, all from reviewers who mean well and have good things to say.

It is this pile on effect that I think is so destructive for early career and doctoral researchers. This has always been the case. Many of us survived it and have the battle scars to prove it. But I think the pile on effect is even more dangerous now because of the pressure to publish, and to publish in good journals. No top-10-percentile journal articles on your CV often means no chance at a meaningful and secure career in academia. So what to do?

Individually there are some things we could all do:

  1. Don’t be an arsehole reviewer.
  2. If you see something, say something. If you review a paper and you see arsehole behaviour from other reviewers, let the editor know that it is not OK.
  3. If you are a new reviewer, chances are you will be an arsehole, at least in your first few reviews. Get feedback from experienced researchers. Specifically, get feedback on how to write constructive reviews.
  4. What little training in reviewing we receive encourages us to give people a shit sandwich. Generic nice first sentence, destruction, patronizing ‘good effort’ closing sentence. This is transparent. Better to spend the time finding something that you genuinely learnt from the paper. Even in awful papers there is generally an interesting idea, a nice way of visualizing data, or a nice turn of phrase. Point this out. Genuine compliments are best.
  5. When your ECRs experience arsehole reviewer behaviour, don’t just comfort them by saying ‘we have all been there’. Let them know that the reviewer’s behaviour is unacceptable. ECRs will become reviewers themselves, and they need to know what behaviour is and is not OK.

I think these are all reasonable points, but they do not get around the pile on effect. For that, I think there needs to be structural change. We can do a much better job of making our field more welcoming to newcomers. This might include:

  1. Wrap-around support for early career and doctoral researchers (ECRs). Supervisors should be ready to support their people, and editors should be made aware when a paper is from an ECR and curate feedback from reviewers more aggressively (i.e., edit out the mean bullshit that many reviewers for some bizarre reason think is OK to write).
  2. Reviewers could be told that a paper is from an ECR as a nudge to be nicer. I am not suggesting ECRs get a free ride. The same standards should apply to all. But we could be more welcoming.
  3. I have reviewed some 200-ish articles. I have not once received feedback from a journal about my reviews. I KNOW I was an arsehole when I first started. No one bothered to tell me. The lack of feedback from journals to reviewers is unforgivable.
  4. Postgraduate training should include courses on how to review and what the review process will be like.

While I think acting on these suggestions would make things better, they won’t completely fix the feeling of being ganged up on. To that I can only say to my ECR friends: I am truly sorry.

Want to be a good reviewer? Learn what your job isn’t.

  1. You are not a proofreader. Chances are also high that the rules you are certain are correct are anything but. Split infinitives? Turns out they’re fine. So don’t waste your valuable time on something you are unlikely to be good at. Even if you are good at it, it is still a waste of your time. Academic publishers make ludicrous profits from free academic labor. They can afford to pay for proofreading. And they should.
  2. You are not a typesetter. Reviewers have spilt rivers of ink demanding that authors follow a particular style system (e.g., APA 6th). Worse, reviewers almost always demand that authors follow their own idiosyncratic interpretation of these rules. They shouldn’t bother. The publisher will strip an academic paper of this styling and apply their own. They pay people to do this. Don’t waste your time. Does the author’s style, or lack of it, intrude on your ability to read the paper? Fine, say something. But otherwise leave it to the pros who will get the accepted manuscript.
  3. You are not an acquisitions editor. That is the editor’s job. Your job is to determine if the article has sufficient scientific merit to justify publication. Your job is not to decide whether a paper will be highly cited, be a pivotal piece in the field, or be ‘important’.
  4. You are not a co-author. Your job is not to make the author write the paper the way you would have written it. Your job is to determine whether the paper would not be out of place sitting next to the existing literature in the field. You can suggest things. But if the author does not want to do it, and it does not affect the merit of the paper, then back off. Better yet, after you write a comment, ask yourself: “Am I imposing my style on the author, or does my comment address an issue of scientific merit?” If it’s the former, it’s better not to include the comment at all.

Motivating Research

I enjoy being a reviewer. It is my chance to be anonymously self-righteous. One of my pet peeves is researchers who motivate their writing by academic circle-jerking. This includes opening sentences that start with “researchers have yet to consider”, “we aim to resolve a tension in the literature”, “we are the first to”, or “we aim to integrate”. Such openings almost guarantee that the rest of the paper will focus on esoteric issues, with precious little of substance on how actual people think, feel, or behave.

So you can imagine my surprise when a reviewer proclaimed that this was exactly what I was doing. On reflection, they were right. I had concentrated my whole opening on winning theoretical points: researchers were focusing on the wrong thing and making false assumptions, and I would put them right. This was interesting to me. But it wasn’t person-centered, nor do I think it would be interesting to more than maybe a handful of people. My focus was on proving researchers wrong, rather than on the main issues:

  1. Scientists, and thus policy makers and not-for-profits, assume that poor kids are deficient in academic motivation, interest, and self-beliefs. They make policy and develop interventions based on this assumption.
  2. A whole pile of money is being wasted on running motivation, interest, and self-belief interventions for disadvantaged children. This is money that could be spent on advocating for better educational policy that really serves poor children.

This was a good reminder that applied research should always start with why. But that ‘why’ should be for a broad audience—people that could use the research in practical and theoretical ways. In my case, my ‘why’ should have been focused on policy makers. Policy makers need empirical evidence to guide them when deciding how to use a limited budget to create an education system that works for all. They need to know what to focus on. But equally, they need research that tells them what to avoid if they want to make best use of their limited resources. I should have written my research with that as the most important concern.