The AI landscape is changing extremely fast. ChatGPT was first released to the public only a few months ago. We did an episode about it fairly early on.
This new AI is already being used across many industries and applications, including writing code, policies, legal briefs, plans, job descriptions, analyses, and more.
Researchers quickly realized the potential of large language models like ChatGPT to play a role in political communication, such as online commenting, texting voters, and writing to legislators.
So how effective is AI at influencing humans’ political attitudes? Some researchers at Stanford set out to answer that question.
The AI has been upgraded since the research was conducted. But it’s important to note that even with this early release, the researchers found that messages created by AI could be as persuasive as those created by humans in changing people’s positions on issues, even polarizing ones like gun control and a carbon tax. This has potentially significant ramifications.
The researchers write:
“Due to the availability of LLMs, anyone can now “write” unlimited amounts of persuasive messages. It is now much easier to create misinformation campaigns targeting voters and legislators, threatening accurate perceptions of politicized events.
These can ultimately undermine “shared reality” in the US and beyond. Our results call for immediate attention to potential regulation of AI’s use in political activities.”
The research is titled “Artificial Intelligence Can Persuade Humans on Political Issues.”
Lead author Hui “Max” Bai grew up in Beijing, then went to the University of Minnesota – Twin Cities in 2009 for his undergraduate education and continued there as a social psychology Ph.D. student.
Links and References
Polarization and Social Change Lab:
Stanford Impact Labs:
Original Research: Open access.
“Artificial Intelligence Can Persuade Humans on Political Issues” by Hui “Max” Bai et al., OSF Preprints