June 2024

IZA DP No. 17077: Decoding Gender Bias: The Role of Personal Interaction

Subjective performance evaluation is an important part of hiring and promotion decisions. We combine experiments with administrative data to understand what drives gender bias in such evaluations in the technology industry. Our results highlight the role of personal interaction. Leveraging 60,000 mock video interviews on a platform for software engineers, we find that average ratings for code quality and problem solving are 12 percent of a standard deviation lower for women than for men. Half of these gaps remain unexplained when we control for automated measures of coding performance. To test for statistical and taste-based bias, we analyze two field experiments. Our first experiment shows that providing evaluators with automated performance measures does not reduce the gender gap. Our second experiment removes video interaction and compares blind to non-blind evaluations; no gender gap is present in either case. These results rule out traditional economic models of discrimination. Instead, we show that gender gaps widen with extended personal interaction and are larger for evaluators educated in regions where implicit association test scores are higher.