IMPLICIT BIAS IN STUDENT EVALUATIONS OF INSTRUCTION
A growing body of research documents systematic differences in how students evaluate college instructors: women, non-native English speakers, and members of racial and ethnic minority groups receive systematically lower ratings. This holds true even when course content and learning experiences are otherwise identical, including when instructor gender and race (as perceived by students) are experimentally manipulated. Given the weight placed on student evaluations in high-stakes reappointment, tenure, and promotion decisions, such biases could produce significant downstream disparities in employment opportunities and career progression for members of these historically underrepresented groups.
The purpose of this study is to assess whether modified introductory language can mitigate implicit bias in student evaluations of instruction. A prior study conducted at Iowa State University and published in PLOS One found that students assigned to treatment group (3) rated female instructors significantly higher than did students taught by the same instructors who did not receive the prompt, with no effect on the ratings of male faculty.
The study is being conducted at The Ohio State University during the Spring 2021 semester by Joyce Chen, Brandon Genetin, Vladimir Kogan, and Alan Kalish with support from the Bluenotes Group and The Ohio State University's Office of Diversity and Inclusion.
This study has now closed. Please check back here for findings, expected in Fall 2021. Registration details are available in the AEA RCT Registry.
**If you would like to update your gender identity and/or race/ethnicity information from what is included in HR's Workday system, please contact Brandon Genetin.**