November 2024

IZA DP No. 17488: We Need to Talk: Audio Surveys and Information Extraction

Understanding individuals' beliefs, preferences, and motivations is essential in the social sciences. Recent technological advancements, notably large language models (LLMs) for analyzing open-ended responses and the diffusion of voice messaging, have the potential to significantly enhance our ability to elicit these dimensions. This study investigates the differences between oral and written responses to open-ended survey questions. Through a series of randomized controlled trials across three surveys (focused on AI, public policy, and international relations), we assigned respondents to answer either by audio or by text. Respondents who provided audio answers gave longer, though lexically simpler, responses than those who typed. Leveraging LLMs, we evaluated answer informativeness and found that oral responses differ in both quantity and quality, offering more information and containing more personal experiences than written responses. These findings suggest that oral responses to open-ended questions can capture richer, more personal insights, presenting a valuable method for understanding individual reasoning.
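
As an illustration of the kind of analysis described in the abstract, the sketch below compares response length and lexical diversity (a simple type-token ratio) between a transcribed audio answer and a typed answer, and shows one way an LLM could be prompted to rate informativeness. It is not the authors' actual pipeline: the toy responses, the use of the OpenAI API, the model name, the prompt wording, and the 1-5 scale are all assumptions made for this example.

    # Illustrative sketch only: toy data and an assumed OpenAI model stand in
    # for the paper's surveys and its LLM-based informativeness evaluation.
    from openai import OpenAI

    def word_count(text: str) -> int:
        return len(text.split())

    def type_token_ratio(text: str) -> float:
        # Simple lexical-diversity proxy: unique words / total words.
        tokens = [w.lower().strip(".,!?") for w in text.split()]
        return len(set(tokens)) / len(tokens) if tokens else 0.0

    def llm_informativeness(question: str, answer: str, client: OpenAI,
                            model: str = "gpt-4o-mini") -> str:
        # Ask an LLM to rate how much relevant information the answer contains.
        # Scale and prompt wording are assumptions for illustration.
        prompt = (
            f"Survey question: {question}\n"
            f"Respondent answer: {answer}\n"
            "On a scale from 1 (no relevant information) to 5 (very informative), "
            "rate how much information about the respondent's reasoning and "
            "personal experience this answer contains. Reply with the number only."
        )
        reply = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        return reply.choices[0].message.content

    if __name__ == "__main__":
        # Toy examples standing in for a transcribed audio answer and a typed answer.
        audio_answer = ("Well, I think AI is going to change my job a lot, like last "
                        "year my team started using it and honestly it was a bit "
                        "scary at first but then it actually helped me quite a bit.")
        text_answer = "AI will transform many jobs; regulation should keep pace."
        for label, ans in [("audio", audio_answer), ("text", text_answer)]:
            print(label, "words:", word_count(ans),
                  "TTR:", round(type_token_ratio(ans), 2))

Running the script prints the word count and type-token ratio for each toy answer; the llm_informativeness function is defined but only called if an API client is configured, since the choice of provider is itself an assumption here.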