Large Language Models Can Be Used to Estimate the Latent Positions of Politicians
In this working paper, we show that the large language model (LLM) ChatGPT can be used to scale the ideology of the senators of the 116th U.S. Congress.
Citation
Wu, Patrick, Jonathan Nagler, Joshua A. Tucker, and Sol Messing. “Large Language Models Can Be Used to Estimate the Latent Positions of Politicians.” arXiv (2023). https://doi.org/10.48550/arXiv.2303.12057
Date Posted
Sep 26, 2023
Abstract
Existing approaches to estimating politicians' latent positions along specific dimensions often fail when relevant data is limited. We leverage the embedded knowledge in generative large language models (LLMs) to address this challenge and measure lawmakers' positions along specific political or policy dimensions. We prompt an instruction/dialogue-tuned LLM to pairwise compare lawmakers and then scale the resulting graph using the Bradley-Terry model. We estimate novel measures of U.S. senators' positions on liberal-conservative ideology, gun control, and abortion. Our liberal-conservative scale, used to validate LLM-driven scaling, strongly correlates with existing measures and offsets interpretive gaps, suggesting LLMs synthesize relevant data from internet and digitized media rather than memorizing existing measures. Our gun control and abortion measures -- the first of their kind -- differ from the liberal-conservative scale in face-valid ways and predict interest group ratings and legislator votes better than ideology alone. Our findings suggest LLMs hold promise for solving complex social science measurement problems.
Updated version of the pre-print originally published on arXiv in March 2023.
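The scaling step described in the abstract can be illustrated with a minimal sketch. Assuming pairwise "which lawmaker is more liberal?" judgments have already been collected (in the paper, by prompting an LLM), the resulting win counts can be fit with a Bradley-Terry model to recover latent positions. The lawmaker names, win counts, and the `neg_log_likelihood` helper below are invented for illustration and are not the authors' code or data.

```python
# Sketch: fit a Bradley-Terry model to hypothetical pairwise judgments.
# wins[i][j] = number of times lawmaker i was judged "more liberal" than j.
import numpy as np
from scipy.optimize import minimize

lawmakers = ["Senator A", "Senator B", "Senator C", "Senator D"]
wins = np.array([
    [0, 3, 4, 5],
    [1, 0, 3, 4],
    [0, 1, 0, 3],
    [0, 0, 1, 0],
], dtype=float)

def neg_log_likelihood(theta):
    """Bradley-Terry negative log-likelihood.

    P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j)).
    """
    nll = 0.0
    n = len(theta)
    for i in range(n):
        for j in range(n):
            if i != j and wins[i, j] > 0:
                p_ij = np.exp(theta[i]) / (np.exp(theta[i]) + np.exp(theta[j]))
                nll -= wins[i, j] * np.log(p_ij)
    return nll

# The likelihood is invariant to a constant shift in theta, so we fit
# freely and then center the estimates at zero to fix the scale.
result = minimize(neg_log_likelihood, np.zeros(len(lawmakers)), method="BFGS")
scores = result.x - result.x.mean()

for name, score in sorted(zip(lawmakers, scores), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```

Under these assumptions, the fitted scores order the hypothetical lawmakers from most to least liberal; the paper applies the same idea separately to liberal-conservative ideology, gun control, and abortion comparisons.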