Understanding individuals' beliefs, preferences, and motivations is essential in the social sciences. Recent technological advancements, notably large language models (LLMs) for analyzing open-ended responses and the diffusion of voice messaging, have the potential to significantly enhance our ability to elicit these dimensions. This study investigates the differences between oral and written responses to open-ended survey questions. In a series of randomized controlled trials across three surveys (focused on AI, public policy, and international relations), we assigned respondents to answer either by audio or by text. Respondents who provided audio answers gave longer, though lexically simpler, responses than those who typed.
Leveraging LLMs, we evaluated answer informativeness and found that oral responses differ from written ones in both quantity and quality, offering more information and drawing more on personal experiences. These findings suggest that oral responses to open-ended questions can capture richer, more personal insights, presenting a valuable method for understanding individual reasoning.
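The abstract does not describe how informativeness was scored. As a purely illustrative sketch, and not the authors' actual pipeline, an LLM can be prompted to rate each transcribed or typed answer on a fixed scale. The sketch below assumes the OpenAI Python SDK, a hypothetical 1-to-5 rubric, and an arbitrary model choice.

```python
# Illustrative sketch only: the study does not disclose its scoring prompt or model.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Rate how informative the following survey answer is on a 1-5 scale, "
    "where 1 = no substantive content and 5 = rich, specific, detailed content. "
    "Reply with a single digit."
)

def score_informativeness(question: str, answer: str, model: str = "gpt-4o-mini") -> int:
    """Ask an LLM to rate the informativeness of one open-ended answer (1-5)."""
    response = client.chat.completions.create(
        model=model,       # arbitrary model choice for the sketch
        temperature=0,     # deterministic scoring aids reproducibility
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    return int(response.choices[0].message.content.strip()[0])

# Hypothetical comparison of a transcribed audio answer and a typed answer.
question = "What concerns, if any, do you have about AI?"
audio_answer = ("I mostly worry about my kids. Their school started using an AI tutor "
                "last year and nobody explained what happens to the data it collects.")
typed_answer = "Privacy issues."
print(score_informativeness(question, audio_answer),
      score_informativeness(question, typed_answer))
```

In practice such scores would be validated against human coders and averaged over multiple prompts or runs; none of these design choices are documented in the abstract itself.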