Political persuasion by artificial intelligence

10 12 2025 | 04:23 | Lisa P. Argyle / SCIENCE

Large-scale studies of persuasive artificial intelligence reveal an extensive threat of misinformation

Democratic systems of government depend on persuasion to gain and maintain authority. In an ideal world, policymakers and voters ought to consider the evidence supporting a range of viewpoints and change their opinions and actions to align with the “unforced force of the better argument” [(1), p. 159]. However, this ideal process only works if people are able to consider reliable information about many positions. Technological advances have layered another concern into this arena: Will artificial intelligence (AI) technologies supercharge the spread of misinformation and the manipulation of public opinion to the detriment of democratic governance? Hackenburg et al. (2), on page 1016 of this issue, and Lin et al. (3) report a varying capacity of generative large language models (LLMs) to persuade citizens about political matters. These studies find that AI can be effectively—although not extraordinarily—persuasive, and they raise important concerns about the scope and effect of AI-generated misinformation.
LLMs are advanced statistical models that generate text by predicting the probability of the next word in a given sequence. When trained at incredible scale, and typically post-trained or “aligned” to improve performance or reduce unwanted behaviors, “frontier” LLMs are capable of having diverse, responsive, and natural conversations with human counterparts. A growing literature demonstrates that these models, in addition to their many other uses, are highly proficient in producing persuasive text about political topics (4, 5). As people increasingly interact directly with LLMs that are built into their search engines, operating systems, and other apps, the potential for AI to influence users’ political opinions—and, by extension, collective democratic outcomes—is further amplified.
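To make that next-word mechanism concrete, the short sketch below is a minimal illustration, assuming the open-source Hugging Face transformers library and the small, publicly available GPT-2 model rather than any of the frontier systems discussed here; the prompt text is also invented for illustration.

```python
# Minimal illustration of next-token prediction with a small open model (GPT-2),
# not any of the frontier systems studied in these papers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The most persuasive argument for this policy is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocabulary_size)

# Convert the final position's logits into a probability distribution over the vocabulary
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)   # five most likely continuations

for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(i)):>12s}  p = {p.item():.3f}")
```

Frontier models repeat this prediction step token by token, at vastly larger scale and after alignment training, which is what makes their conversational output fluent and responsive.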
Hackenburg et al. and Lin et al. conducted large-scale experiments in which survey respondents each had one short, text-based, and multiturn interaction with an LLM that was instructed to persuade the human respondent about a political issue or candidate. Hackenburg et al. conducted more than 77,000 surveys of UK-based respondents, testing the relative persuasiveness of 19 different LLMs and eight different persuasive strategies across ~700 political issues. Lin et al. tested the ability of LLMs to persuade more than 5800 people about candidates for president or prime minister during elections in the US, Canada, and Poland and 500 people about a local ballot measure in the US. Both Hackenburg et al. and Lin et al. asked respondents to rate the relevant issue or candidate on a 0 to 100 scale before and after the conversation, and both found that interactions with state-of-the-art LLMs move attitudes about a specific political issue roughly 10 points. Additionally, Lin et al. compared issue-based persuasion with persuasion about candidates for public office and report that the effects of LLM persuasion on attitudes toward candidates are less consistent and are several points smaller on average.
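The headline quantity in both studies is simply the before-versus-after difference on that 0 to 100 scale, averaged across respondents. The toy calculation below uses made-up ratings to illustrate the measurement; the published analyses rely on randomized control conditions and more elaborate estimators.

```python
# Toy illustration of the outcome measure: the mean change in a 0-100 attitude
# rating from before to after a persuasive conversation. Ratings are invented
# for illustration and are not data from either study.
import numpy as np

pre  = np.array([35, 50, 62, 20, 48, 71])   # ratings before the LLM conversation
post = np.array([47, 58, 70, 31, 55, 80])   # ratings after the LLM conversation

shift = post - pre
print(f"mean attitude change: {shift.mean():.1f} points on the 0-100 scale")
print(f"standard error:       {shift.std(ddof=1) / np.sqrt(len(shift)):.2f}")
```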
Existing research on political persuasion documents important tensions: People are reliably persuaded by information in political messages (6), but the average persuasive effect of a political campaign in the US is often zero (7, 8), and, in some cases, attempts to persuade can backfire, leading people to become more committed to their preexisting opinions (9, 10). Thus, not all political attitudes and not all political messaging can be treated interchangeably. Some political attitudes—depending on the context but often including partisanship, vote choice, out-group prejudices, or highly salient or moralized issues—are resistant to change, whereas attitudes about specific lower-salience policy matters are likely more flexible. Lin et al. report limitations in LLM persuasion consistent with these expectations. When an LLM tried to improve participants’ opinions of a specific national candidate for executive office—for whom salience is high and party cues are clear—it was persuasive only for people who initially opposed the candidate. The LLM was also less effective when discussing the personality of the candidate instead of their policy positions and was least effective in the highly polarized US context. Given those conditions, the persuasion documented by Hackenburg et al. and Lin et al. can be interpreted as showing that AI is a very good, but likely not superhuman, persuader about the average political issue.
LLMs can produce high-quality text in response to detailed prompts almost instantaneously, which enables efficient and scalable personalized messaging. However, there are concerns that if messages from LLMs are highly personalized, they might degrade political reasoning by narrowing the range of arguments to which someone is exposed and by appealing to idiosyncratic personal biases. This builds on previous concerns about online “echo chambers” (11). Both Hackenburg et al. and Lin et al. incorporated personalization into their experimental designs: In a randomly assigned subset of conversations, the LLM received personal information about the user, such as their existing attitudes, partisanship, or other demographic traits, and was prompted to tailor its messages specifically to that individual. Hackenburg et al. found that although personalization does increase the persuasiveness of a message, the effect is typically very small, less than 1 percentage point. Lin et al. did not detect any persuasive gains from personalization of the message, which is consistent with other recent work (4, 12).
In addition to examining message personalization, Hackenburg et al. and Lin et al. document the central role of information provision in the persuasiveness of LLM interactions. Of the eight persuasive strategies tested by Hackenburg et al., the most effective was instructing the model to persuade by providing as much information as possible. They estimated that each fact-checkable claim was associated with ~0.3 percentage points of attitude change (individual messages could contain upward of 20 claims) and concluded that information density is the single most important mechanism accounting for variation in the success of LLM persuasion across models and prompts. Testing the inverse approach, Lin et al. found a substantial reduction in persuasive capacity when an LLM was prompted not to use any factual claims in the persuasive interaction. The fact-based nature of LLM persuasion contrasts with evidence that information provision is not the dominant persuasive strategy in human interactions (13), which further highlights the complexity and contingency of persuasion research.
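A back-of-the-envelope calculation (an illustration, not a figure reported by either team) shows how the per-claim estimate scales up when a message is densely packed with claims:

```latex
% Illustrative only: scaling the estimated per-claim effect to a claim-dense message.
\[
  \underbrace{20 \ \text{claims}}_{\text{a dense message}}
  \times
  \underbrace{0.3 \ \text{percentage points per claim}}_{\text{estimated per-claim effect}}
  \approx 6 \ \text{percentage points}
\]
```

This is on the same order as the roughly 10-point shifts reported for the most persuasive prompts.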
Unfortunately, a substantial portion of the information provided by the LLMs in these exchanges was false. Hackenburg et al. used separate, search-enabled LLMs to fact-check more than 460,000 claims made by LLMs in the persuasive exchanges. Depending on the model, between 15 and 40% of informational claims made by the LLM were likely misinformation. Lin et al. used a similar process and discovered that, in all three countries, LLMs are more likely to produce misinformation in support of candidates or positions on the ideological right. In both studies, the persuasiveness of informational claims did not depend on their accuracy—respondents were just as likely to be persuaded by false information as by true claims. Political decision-making on the basis of fabricated information, particularly when the generation of that information is infused with asymmetric ideological bias, is a fundamental threat to the legitimacy of democratic governance.
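The fact-checking step in both papers relied on separate LLMs rather than human coders. The sketch below shows one minimal way such an automated pipeline can be set up; it assumes the OpenAI Python client with a placeholder model name and invented example claims, it omits the web-search grounding the authors used, and it is not their actual pipeline.

```python
# Minimal sketch of LLM-assisted fact-checking of extracted claims.
# Assumptions: the OpenAI Python client is installed and an API key is configured;
# the model name is a placeholder. The studies additionally used search-enabled
# models to ground verdicts in retrieved evidence.
from openai import OpenAI

client = OpenAI()

claims = [
    "The national unemployment rate fell last quarter.",    # example claims,
    "The candidate voted against the 2019 budget bill.",    # not from the studies
]

def check_claim(claim: str) -> str:
    """Ask the model for a one-word verdict on a single factual claim."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Classify the factual claim as TRUE, FALSE, or UNVERIFIABLE, "
                        "and answer with that single word."},
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content.strip()

for claim in claims:
    print(f"{check_claim(claim):>12s}  |  {claim}")
```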
Persuasion is a complex, dynamic, and social phenomenon, and a complete understanding of persuasive communication requires simultaneous attention to the speaker, topic, message content, medium, setting, intended outcome, message receiver, and more (14). The growing use of LLMs by citizens seeking information, and by campaign organizations and public officials seeking to spread their message, introduces a “speaker” with unclear motivations, variable credibility, and potential representational biases into the democratic system. Simultaneously, this speaker is highly persuasive—and seemingly even more so because it is often unconstrained by the truth. Although Hackenburg et al. and Lin et al. offer some reassurance that manipulation through personalization yields limited marginal returns, they also demonstrate that LLMs are able to use a high density of often fabricated information to produce attitude change at scale. Researchers, policymakers, and citizens alike urgently need to attend to the potential negative effects of AI-propagated misinformation in the political sphere and to how they can be counteracted.


Cover photo: Britannica
