Study finds AI chatbots spreading misinformation to influence political beliefs
Recent research indicates that artificial intelligence chatbots have a notable ability to influence people's political beliefs, with their impact being strongest when they present incorrect information. The study, published Thursday, recruited nearly 77,000 participants through a crowd-sourcing platform and compensated them for interacting with various AI chatbots, including those developed by OpenAI, Meta, and xAI.
Participants were asked about their views on topics like taxation and immigration. The chatbots then attempted to persuade them toward opposing viewpoints, regardless of their political leanings. The study revealed that the chatbots frequently succeeded, and that certain techniques proved more effective than others.
"Our findings highlight the significant persuasive potential of conversational AI regarding political matters," said lead author Kobi Hackenburg, a doctoral student at the University of Oxford.
The research contributes to the growing body of work examining how AI could influence politics and democratic processes, especially as governments and political actors explore ways to leverage AI to shape public opinion. Published in the journal Science, the paper found that chatbots were most effective when providing extensive, detailed information rather than relying on moral appeals or highly personalized arguments. Researchers noted that this capability might allow AI systems to surpass even skilled human persuaders, due to their ability to generate large amounts of information rapidly during conversations.
However, the study also highlighted a concern: much of the information provided by the most persuasive chatbots was inaccurate. The strategies that achieved the highest persuasiveness tended to deliver the least accurate claims, the authors noted, and newer, larger AI models were more prone to errors than older, smaller ones.
Approximately 19% of claims made by the chatbots were deemed mostly inaccurate. The paper warned that, in extreme cases, highly persuasive AI could be exploited by malicious actors to promote radical ideologies or incite political unrest.
The research was conducted by the U.K.-based AI Security Institute, along with teams from the University of Oxford, the London School of Economics, Stanford University, and MIT, with funding from the British government's Department for Science, Innovation and Technology.
Helen Margetts, an Oxford professor and co-author, explained that the study aims to understand the real-world effects of large language models on democratic processes. All participants were adults from the United Kingdom, answering questions related to British politics.
The study coincides with a broader increase in AI use in politics, from AI-generated social media content to deepfakes and campaign communications. Surveys show that nearly half of U.S. adults use AI tools like ChatGPT, Google Gemini, or Microsoft Copilot with some frequency.
Researchers found that interactive AI chatbots were significantly more persuasive than static AI-written messages, increasing persuasion by 41% to 52% depending on the model. The effects persisted, with 36% to 42% of the influence still present one month later. The study evaluated 17 AI models with varying levels of sophistication and post-training adjustments.
While the controlled study conditions may not fully reflect real-world political settings, experts praised the research as a meaningful step. Shelby Grossman from Arizona State University noted that the findings show that as AI models improve, their persuasive power grows, and that both legitimate and potentially harmful applications exist.
David Broockman from UC Berkeley emphasized that the study suggests humans respond most to extensive, detailed information, and that the effects, while real, are not overwhelmingly large. In real-world use, competing AI arguments on different sides of an issue may balance each other, offering more comprehensive information overall.
Earlier studies on AI persuasiveness had mixed results: some showed limited impact, while others indicated that humans using AI could produce highly effective persuasive content with minimal effort.
Author: Aiden Foster