The Research Files Episode 23: Bias in grading

Thank you for downloading this episode of The Research Files podcast series, brought to you by Teacher magazine – I'm Rebecca Vukovic. Are teachers ever influenced by bias when it comes to grading student work? This is the focus of a new report [published in the Australian Journal of Education] from researchers at the University of New England, which looks at different types of potential bias in grading including gender, race and physical attractiveness. Associate Professor John Malouff joins me in this episode to discuss the findings from his research and what schools can do to minimise bias in the classroom.

Rebecca Vukovic: Professor John Malouff, thanks for joining Teacher magazine.

John Malouff: Hi, I'm glad to be with you.

RV: OK to start off, could you talk me through the key findings from your research into bias in grading?

JM: Well, we looked for every well-done study that examined whether markers are affected by bias, and we found 20 experimental studies from all over the world. They looked at different types of potential bias – whether the student was female, whether the student was attractive, whether the student had an educational label (which could be a positive one or a negative one), the prior performance of the student – lots of factors that ought not to affect the mark we assign. We want to mark only the quality of the work. And what these studies found is, yes, there is an influence. It's a significant influence – not huge, but in a typical task where we're assigning grades, it might amount to four or five points on a zero-to-100 scale.

RV: And so could you talk me through in some more detail how you went about undertaking this research?

JM: Well, we just did a search, like bloodhounds, for every potential study. We searched every database, and then, if we found a recent study, we contacted the authors and asked them, ‘Do you have anything else in press?' We were pretty comprehensive in our search for the studies.

Now, the typical study was set up like this: markers received several different essays from students – some good, some medium, some bad – and every marker marked all the essays. But some of the markers, randomly assigned, were given extraneous information – a photograph of the student, an educational classification, or the student's sex or ethnic background – while the others either weren't told this information or were given information putting the student who supposedly submitted the work in the more favoured category, which might be the dominant ethnic background of the culture, it might be males, or it might be, say, a student labelled gifted in primary school. All of the studies, as it turned out, involved marking either primary school work or university student work.

RV: So you did all this reading and research, and then could you talk me through exactly what you found through all your research?

JM: Well, a typical study would find that the students whose work was in the disfavoured category received statistically significantly lower scores than the students in the favoured group. Now, you've got to keep in mind this is exactly the same work – that's the beauty of this experimental method: everybody was marking the same papers.

So what would explain the difference? The only difference was this extra bit of information that some markers had, which may have either helped or hurt a student depending on what exactly it was. And so our conclusion was that there is a potential for bias. Only certain types of bias were actually examined, such as the ones I've mentioned – sex, ethnic background, prior performance. Others weren't examined – how nice the student has been, how tall the student is; there are lots of other things that could potentially bias a marker. Those weren't studied, but we would speculate from our findings that other factors could also influence marking if the marker is aware of them.

And that's the crucial thing. Our recommendation, based on our findings, is that markers and teachers – whoever is doing the marking – keep the students anonymous wherever that's possible.

RV: Interesting. And John I was hoping you could tell me a little bit more about what you mean when you talk about ‘the halo effect'?

JM: Well, we wondered why these markers are affected by this extraneous information, and we think there's a ‘halo effect'. Usually people think of a positive halo effect, like an angel, but scientists also talk about a negative halo effect – I don't know, maybe that would be like horns, like the devil. We have expectations about people based on, perhaps, how attractive they are, their ethnic background, their sex.

More obvious for us is how a student has done before. I had an experience with a student who had done very well in one of my units and then went on to fourth year, where we have two markers independently mark the thesis. I gave him a very high score initially, and then I talked to the other marker, who pointed out all these errors I hadn't noticed, and I thought, ‘Well, I gave him too high a score because I expected him to do well!'

That's not what we want to do – we don't want to grade on our expectations – and that's ‘the halo effect'. So he benefited from my positive halo effect. Now, we had a relatively good system there, with two independent markers who then had to agree on the final mark. A better system, which we now have at our university in Psychology, is that we don't know who the student is. So if the work came from a student who had done very well in my unit before, or who had been exceptionally nice or helpful to me, I don't know that – so how can I be influenced by it?

RV: That's interesting too, because do you think that's what schools should be doing – that they should all be doing blind marking when it comes to tests and exams?

JM: I think that's a good idea! Some universities have that policy, some schools have that policy, and there are many reasons to use it. Individual instructors can obviously do it themselves, but it's also good to have a policy so it applies across the board.

It's sometimes hard to convince teachers of this, because they would swear in a court of law that they're fair and unbiased. They may believe that, but it may not be true, because these biases are unconscious. There are a lot of studies outside of what we covered showing that when you do any sort of assessment – whether it's who you're going to vote for in an election, or who you're going to hire, or who you would favour hiring – biases creep in that we're not aware of. And so the fairest way to treat students is to not let these biases affect us, and the surest way to do that is to keep the students anonymous.

There may be other things, like using a rubric or a very good marking system that makes the marking more objective. So, for instance, we don't recommend keeping students anonymous if a multiple choice test is going to be scored by a computer. There might be some small possibility for bias at the end point, if there's some controversy over how the students responded on this item or that item, but it's very, very small, perhaps trivial. It's not so trivial when there's a subjective judgement about the quality of the student's performance.
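The anonymous-marking workflow described here can be sketched in code. This is a minimal, hypothetical illustration – the function name and data shapes are my own, not from the study: each submission is keyed by a random code before marking, and the code-to-name key stays sealed until marks are final.

```python
import secrets

def anonymise(submissions):
    """Replace student names with random codes before marking.

    `submissions` maps student name -> submitted work.
    Returns (coded_work, key): markers see only `coded_work`,
    and `key` (code -> name) is kept sealed until marking ends.
    """
    coded_work, key = {}, {}
    for name, work in submissions.items():
        code = secrets.token_hex(4)  # e.g. 'a3f09c12'
        coded_work[code] = work
        key[code] = name
    return coded_work, key

submissions = {"Alice": "Essay A", "Bob": "Essay B"}
coded, key = anonymise(submissions)
# Markers grade `coded`; names are restored afterwards via `key`.
```

The point of the design is that the marker's view (`coded`) contains nothing that identifies the student, so none of the extraneous cues discussed above can reach them.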

RV: So John are there any other strategies that teachers could employ to minimise the impact of bias?

JM: Well, the less you know about the individual, the better. Another thing to do is to try to make the marking as objective as possible by using a rubric, where you have clear indicators of where the student fits on this criterion and that criterion; or, something similar, you can have detailed marking criteria. Those are probably always good strategies if you can set them up.
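A rubric of the kind described here can be made concrete as a weighted set of criteria, so the final mark is a mechanical combination of per-criterion scores rather than one holistic judgement. The criteria, weights, and scores below are hypothetical, purely for illustration:

```python
# Hypothetical rubric: criterion -> (weight, score out of 10).
# Weights sum to 1.0, so the total is also out of 10.
rubric = {
    "argument":  (0.4, 7),
    "evidence":  (0.3, 8),
    "structure": (0.2, 6),
    "mechanics": (0.1, 9),
}

total = sum(weight * score for weight, score in rubric.values())
# 0.4*7 + 0.3*8 + 0.2*6 + 0.1*9 = 7.3
```

Because each criterion is scored separately against stated indicators, there is less room for an overall impression of the student to drift into the mark.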

They may be particularly helpful for reducing marking bias, but also for reducing other subjective elements as much as possible – not only to make the marking fairer but to make it look fairer. Some students don't think deeply about this, but some do, and they prefer to have just their work marked. That seems fairer to them: they're putting the effort into their work, not into their appearance, or how helpful they are in class, or how pleasant they are, or how much they smile, and so on. They want their work to be graded, and so this, to them, gives the impression of ‘we're doing this as fairly as we can, and we're marking what you submit'.

RV: Fantastic. Well, John Malouff, thank you for sharing your work with The Research Files.

You've been listening to an episode of The Research Files, from Teacher magazine. To download all of our podcasts for free, visit acer.ac/teacheritunes or www.soundcloud.com/teacher-acer. To find out more about the research discussed in this podcast, and to access the latest articles, videos and infographics visit www.teachermagazine.com.au

Research links

Malouff, J. M., & Thorsteinsson, E. B. (2016). Bias in grading: A meta-analysis of experimental research findings. Australian Journal of Education. https://doi.org/10.1177/0004944116664618

In what ways do you currently avoid bias when grading student work?

Which strategies from this discussion with John Malouff could you attempt to employ in your own school setting?