STEM Gender Gaps in Motivation, Interest, and Self-belief Are Huge, Right?

We recently published a meta-analysis of STEM gender differences in motivation, interest, and self-belief in Educational Psychology Review. We could not be more thrilled. And a big thank you to my former PhD student Brooke for all her work on this. The results are in the paper poster download below. But first, some context for why there is a download in the first place.

I have been thinking about using Kudos for new papers, and this seemed like a good paper to give it a try. I spent longer than I would like setting up a design brief for this. But now that it is done, I have a template in InDesign I can use for all new papers, as well as themes for ggplot and a standard color palette. My design choices were:

  1. Use of only three colors, all blues. I think this is elegant, but it is also advantageous for me as I am color blind.
  2. For plots, I modified the Economist white theme from ggthemes. So from here on out, all my plots will be consistent.
  3. I used a serif and a sans-serif font that work nicely together. I chose Avenir Book and EB Garamond. I am not super happy with these, but I don't like the idea of paying $400 for the fonts I really want. I may swap out EB Garamond for Nanum Myeongjo to get a crisper feel. Not sure yet.

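For the curious, the kind of ggplot theme tweak described in point 2 might look something like the sketch below. To be clear, the specific modifications, the hex values, and the theme_blog() name are my illustration of the general idea; the post does not list the actual changes.

```r
# A sketch of a reusable plot theme: start from ggthemes' Economist white
# theme and restrict the palette to three blues. The hex values and the
# theme_blog() function name are hypothetical, for illustration only.
library(ggplot2)
library(ggthemes)

blues <- c("#08306B", "#2171B5", "#6BAED6")  # three-blue palette (assumed values)

theme_blog <- function(base_size = 11, base_family = "Avenir") {
  theme_economist_white(base_size = base_size, base_family = base_family) +
    theme(plot.title = element_text(family = "EB Garamond"))
}

# Usage: apply the theme and palette to any plot for a consistent look.
# ggplot(mtcars, aes(wt, mpg, colour = factor(cyl))) +
#   geom_point() +
#   scale_colour_manual(values = blues) +
#   theme_blog()
```

The payoff of wrapping the theme in a function is consistency: every figure in every paper gets the same look with one extra line.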
Anyway, you can see the result here:

Comments welcome, particularly on fonts, general look, and plot theme, as I will want to roll these out for other papers. I still need a lot of work on distilling the message of my papers down to 100 or so sticky words. And my InDesign skills are weak (though I think I am getting better with my R to Illustrator workflow).

The Pile-On Effect: Why Receiving Constructive Reviews Still Sucks

There was a great blog post this week from Sara Chipps of Stack Overflow. She discussed the 'pile-on' effect: the phenomenon whereby the collective (even when constructive) criticism from many people can be crushing. I see the same thing in the review process. There are arsehole reviewers out there. But my experience has been more of the soul-crushing effect of 3-5 reviews, all from reviewers who mean well and have good things to say.

It is this pile-on effect that I think is so destructive for early career and doctoral researchers. This has always been the case. Many of us survived it and have the battle scars to prove it. But I think the pile-on effect is even more dangerous now because of the pressure to publish, and to publish in good journals. No top-10-percentile journal articles on your CV often means no chance at a meaningful and secure career in academia. So what to do?

Individually there are some things we could all do:

  1. Don’t be an arsehole reviewer.
  2. If you see something, say something. If you review a paper and you see arsehole behaviour from other reviewers, let the editor know that it is not OK.
  3. If you are a new reviewer, chances are you will be an arsehole, at least in your first few reviews. Get feedback from experienced researchers. Specifically, get feedback on how to write constructive reviews.
  4. What little training in reviewing we receive encourages us to give people a shit sandwich: generic nice first sentence, destruction, patronizing 'good effort' closing sentence. This is transparent. Better to spend time finding something that you genuinely learnt from the paper. Even in awful papers there is generally an interesting idea, a nice way of visualizing data, or a nice turn of phrase. Point this out. Genuine compliments are best.
  5. When your ECRs experience arsehole reviewer behaviour, don't just comfort them by saying 'we have all been there'. Let them know that the reviewer's behaviour is unacceptable. ECRs will become reviewers, and they need to know what behaviour is and is not OK.

I think these are all reasonable points, but they do not get around the pile-on effect. For that, I think there needs to be structural change. We can do a much better job of making our field more welcoming to newcomers. This might include:

  1. Wrap-around support for early career and doctoral researchers (ECRs). Supervisors should be ready to support their people, and editors should be made aware when a paper is from an ECR and curate feedback from reviewers more aggressively (i.e., edit out the mean bullshit that many reviewers, for some bizarre reason, think is OK to write).
  2. Reviewers could be told that a paper is from an ECR as a nudge to be nicer. I am not suggesting ECRs get a free ride. The same standards should apply to all. But we could be more welcoming.
  3. I have reviewed some 200-ish articles. I have not once received feedback from a journal about my reviews. I KNOW I was an arsehole when I first started. No one bothered to tell me. The lack of feedback from journals to reviewers is unforgivable.
  4. Postgraduate training should include courses on how to review and on what the review process will be like.

While I think acting on these suggestions would make things better, it won't completely fix the feeling of being ganged up on. To that, I would only say to my ECR friends: I am truly sorry.

Want to be a good reviewer? Learn what your job isn't.

  1. You are not a proofreader. Chances are also high that the rules you are certain are correct are anything but. Split infinitives? Turns out they're fine. So don't waste your valuable time on something you are unlikely to be good at. Even if you are good at it, it is still a waste of your time. Academic publishers make ludicrous profits from free academic labor. They can afford to pay for proofreading. And they should.
  2. You are not a typesetter. Reviewers have spilt rivers of ink demanding that authors follow a particular style system (e.g., APA 6th). Worse, reviewers almost always demand that authors follow their own idiosyncratic interpretation of these rules. They shouldn't bother. The publisher will strip an academic paper of this style and apply their own. They pay people to do this. Don't waste your time. Does the author's style, or lack of it, intrude on your ability to read the paper? Fine, say something. But otherwise, leave it to the pros who will get the accepted manuscript.
  3. You are not an acquisitions editor. That is the editor’s job. Your job is to determine if the article has sufficient scientific merit to justify publication. Your job is not to decide whether a paper will be highly cited, be a pivotal piece in the field, or be ‘important’.
  4. You are not a co-author. Your job is not to make the author write the paper the way you would have written it. Your job is to determine whether the paper would not be out of place sitting next to the existing literature in the field. You can suggest stuff. But if the author does not want to do it, and it does not affect the merit of the paper, then back off. Better yet, after you write a comment, ask yourself: 'Am I imposing my style on the author, or does my comment address an issue of scientific merit?' If it's the former, it's better not to include the comment at all.

Motivating Research

I enjoy being a reviewer. It is my chance to be anonymously self-righteous. One of my pet peeves is researchers who motivate their writing by academic circle jerking. This includes opening sentences that start with "researchers have yet to consider", "we aim to resolve a tension in the literature", "we are the first to", or "we aim to integrate". Such openings almost guarantee the remaining paper will focus on esoteric issues, with precious little of substance on how actual people think, feel, or behave.

So you can imagine my surprise when a reviewer proclaimed that this was exactly what I was doing. On reflection, they were right. I had concentrated my whole opening on winning theoretical points: researchers were focusing on the wrong thing and making false assumptions, and I would put them right. This was interesting to me. But it wasn't person centred, nor do I think it would be interesting to more than maybe a handful of people. My focus was on proving researchers wrong rather than on the main issues:

  1. Scientists, and thus policy makers and not-for-profits, assume that poor kids are deficient in academic motivation, interest, and self-belief. They make policy and develop interventions based on this assumption.
  2. A whole pile of money is being wasted on running motivation, interest, and self-belief interventions for disadvantaged children. This is money that could be spent on advocating for better educational policy that really serves poor children.

This was a good reminder that applied research should always start with why. But that 'why' should be for a broad audience: people who could use the research in practical and theoretical ways. In my case, my 'why' should have been focused on policy makers. Policy makers need empirical evidence to guide them when deciding how to use a limited budget to create an education system that works for all. They need to know what to focus on. But equally, they need research that tells them what to avoid if they want to make the best use of their limited resources. I should have written my research with that as the most important concern.

User Stories

I presented my blog to my writing circle last week. The feedback: who is this blog actually for? They challenged me to write a set of user stories to make this clear. After much procrastination, I realised that the blog was, more or less, for me. A chance to yell at the clouds. And there is little point to that. But I think I have something to say, and I think there are people who might find what I have to say useful, maybe even interesting. Here I present to you, dear reader, my user stories.

Brief Interlude: What Are You Talking About?

But first, as this is academia and not software development, a brief interlude on what user stories are. There is a movement in software development called agile (or scrum). I won't go into the messy details here, other than to say this is the way we run many of our teams at the Institute for Positive Psychology and Education. The bit I want to talk about is the dedicated focus on the end users of the content we produce. To do this, we write short (1-2 sentence) stories about a particular person and the problem they would like solved. The team then sets about solving that problem. For example, we might consider the problem of an education minister who is unsure whether to increase the number of selective high schools. We then go about conducting research that could inform that decision.

My User Stories

  1. My reader is a social scientist who worries they aren't smart enough. They read their boss's impenetrable prose and worry that their own simple writing will never achieve this level of 'elevation' (good, I hope it never does!).
  2. My reader wants their work to impact people. They want to do research that people can use, not research that merely sits in some journal few will ever read.

This is me. My writing still drips with false complexity, with affected sophistication. And I wonder if any of the people I research could read what I write and apply it to their life in some tangible way. Maybe as I try to wake up from this social science stupor, I might have something interesting to share with you along the way.

Participant Focused Research

What do we owe our research participants? I know we often pay participants, but doesn't justice require more? A few bucks and a thank you do not seem sufficient. Participants don't participate for the Benjamins. Part of the problem is that research results are locked away behind paywalls, a point well made by the remarkable documentary The Internet's Own Boy. But I mean more than this. Even when research is free, isn't it a problem that most research is hidden within an impenetrable fog of flabby prose, timid hedging, and needless abstractions? Shouldn't the research we produce be participant focused? Shouldn't it exist to serve the people we survey? Don't we owe people more?

Participant Centered Research

I believe the answer to these questions is participant centered research. Participant centered research builds on the way healthcare has been made more compassionate through patient centered care. Patient centered care treats patients as persons. Each person has inherent dignity and deserves respect. This means patients should be informed, be listened to, be treated as whole persons rather than as diseases, and be treated as partners in their own care. Patient centered care is not just about the right to be treated with dignity by your doctor but also about being treated as a person by the healthcare system itself.

The following are the areas I think need to be improved within the social sciences in order to provide participant centered research.

Giving Participants a Voice

It goes almost without saying that our research should be in the public domain. But I am more and more convinced that most research should also be put on pre-print servers before publication. I think this for a few reasons. First, publication is a glacial process, and thus participants may have to wait years before being able to read the fruits of their labor. Second, while publicly accessible research is nice, it is a finished product. The participants have no avenue for input into how their contributions to the research project are being used. Pre-prints provide an avenue for participants to have their say. This does not mean participants should be able to veto research they don't like. But it does mean that we can give participants a meaningful avenue to contribute to the process. I believe well-advertised pre-prints that provide participants with a clear channel to be heard are one such avenue.

Justice Demands Plain English

Social science writing is notoriously bad. So much so that bad writing may be a job requirement; see the brilliant book Learn to Write Badly: How to Succeed in the Social Sciences. It is a problem that so much social science writing is so impenetrable that even some reviewers and editors cannot tell the difference between densely written nonsense and actual science. I believe that such impenetrable writing is unjust to the participants who have given their time to take part in our research. It is unfair that such poor writing excludes most of the participants we seek to serve.

The argument against plain English is that social science is complex and simple writing would miss the nuances required to do the subject matter justice. I agree. People are complex. And even more so when there are many of them, interacting with each other, as well as with banks, schools, churches, colleges, and cultures. But I do not agree that this requires overly complex writing.

Of course, who am I to speak? My writing is far from perfect, and in the past I have often written in the same impenetrable manner I am criticizing here. Which is why I am working on it. I have read about half a dozen books on the topic in the last 12 months (do yourself a favor and read everything Helen Sword has ever written). I also attend a weekly writing circle. I will never write with as much impact as Matthew Desmond or with the clarity of Richard Dawkins, but I think justice demands that I try.

People Must be at the Center of People Science

A good case can be made, and in fact has been made by Michael Billig and John Goldthorpe, that our bad writing, and in particular our love affair with abstract nouns, leads to woolly thinking. John Goldthorpe, in his fantastic book On Sociology, makes a simple point: "variables don't do things, people do". Meaning that social science must translate theory into how humans actually act, think, and behave. Likewise, Michael Billig laments that 'elevated' social science prose is almost unpopulated by people doing things. The argument in favor of impenetrable social science writing is that this is what technical science requires. I do not believe this is true. In fact, I think the removal of people from social science writing leads to crappy research and poor theory. But that is a matter for another blog post. For now, I want to ask: what are our participants to make of this sort of writing? What possible relevance could our abstract writing have for their lives?

We survey and interview participants who we believe have troubles our research could help alleviate. I research the beliefs people have about themselves because I believe I can help people evaluate themselves better. I study educational inequality because I want more people to have a chance to experience the life-changing magic of a university education. If the people I wish my research to serve cannot read it, let alone gain some useful insight from it, then something is wrong. It is worse still if participants cannot see themselves as thinking, acting, and believing human beings in the pages of our prose. This does not mean dumbing down research. It means writing with participants in mind. It means writing research that is populated by humans doing human things rather than merely packing our prose with variables, factors, and theoretical concepts. It means plainly stating what the implications of our research are. It means repaying participants' time and energy with a clear statement about what can be done to improve their lives and about what changes they could agitate for. It also means being honest when our results don't turn out how we hoped, so we don't mislead people about important parts of their lives.
