Measuring Political Bias in Large Language Models
Abstract: Large language models (LLMs) assist millions of users in learning and writing about diverse topics. While they can expose users to new perspectives, they may also reinforce existing opinions, raising concerns about political bias. In this talk, Dr. Röttger will outline the challenges of evaluating political bias in LLMs and introduce IssueBench, a newly developed dataset designed for robust and realistic bias measurement. He will present key findings on issue-level bias, including evidence of consistent political leanings across models.

Bio: Dr. Paul Röttger is a postdoctoral researcher at Bocconi University's MilaNLP Lab. His work focuses on evaluating and improving the alignment and societal impact of large language models. His research has received accolades including an Outstanding Paper Award at ACL and a Best Paper Award at the NeurIPS Datasets & Benchmarks track. Dr. Röttger earned his PhD from the University of Oxford and co-founded Rewire, an AI startup for content moderation that was acquired in 2023.