Campaigns are now rapidly embracing artificial intelligence for composing and producing ads and donor solicitations.
Political campaign ads and donor solicitations have long been deceptive. In 2004, for example, U.S. presidential candidate John Kerry, a Democrat, aired an ad stating that Republican opponent George W. Bush "says sending jobs overseas 'makes sense' for America." Bush had never said such a thing.
The next day Bush responded by releasing an ad saying Kerry "supported higher taxes." This too was false.
These days, deception pervades online political ads. Ads often pose as polls and have misleading clickbait headlines.
Campaign fundraising solicitations are also rife with deception. An analysis of 317,366 political emails sent during the 2020 election in the U.S. found that deception was widespread. For example, a campaign manipulates recipients into opening the emails by lying about the sender's identity and using subject lines that trick the recipient into thinking the sender is replying to the donor, or claims the email is "NOT asking for money" but then asks for money. Both tactics are common.
Campaigns are now rapidly embracing artificial intelligence for composing and producing ads and donor solicitations. The results are impressive: Democratic campaigns found that donor letters written by AI were more effective than letters written by humans at crafting personalized text that persuades recipients to click and send donations.
And AI has beneficial applications, such as helping staffers organize their emails from constituents or helping government officials summarize testimony.
But there are fears that AI will make politics more deceptive than ever.
Here are six things to look out for. I base this list on my own experiments testing the effects of political deception. I hope that voters can be equipped with what to expect and what to watch out for, and learn to be more skeptical, as the U.S. heads into the next presidential campaign.
Bogus Custom Campaign Promises
Research on the 2020 presidential election revealed that the choice voters made between Biden and Trump was driven by their perceptions of which candidate "proposes realistic solutions to problems" and "says out loud what I am thinking," based on 75 items in a survey. These are two of the most important qualities for a candidate to have in order to project a presidential image and win.
AI chatbots, such as ChatGPT by OpenAI, the Bing chatbot by Microsoft, and Bard by Google, could be used by politicians to generate customized campaign promises deceptively microtargeting voters and donors.
Currently, when people scroll through news feeds, the articles are logged in their computer history, which is tracked. The user is tagged as liberal or conservative, and also tagged with certain interests. Political campaigns can place an ad spot in real time on the person's feed with a customized title.
Campaigns can use AI to develop a repository of articles written in different styles, making different campaign promises. Campaigns could then embed an AI algorithm in the process (courtesy of automated commands already plugged in by the campaign) to generate bogus tailored campaign promises at the end of the ad posing as a news article or donor solicitation.
ChatGPT, for instance, could hypothetically be prompted to add material based on text from the last articles that the voter was reading online. The voter then scrolls down and reads the candidate promising exactly what the voter wants to see, word for word, in a tailored tone. My experiments have shown that if a presidential candidate can align the tone of word choices with a voter's preferences, the politician will seem more sincere and convincing.
Exploiting The Tendency To Believe One Another
Humans tend to automatically believe what they are told. They have what scholars call a "truth-default." They even fall prey to seemingly implausible lies.
In my experiments I found that people who are exposed to a presidential candidate's deceptive messaging believe the untrue statements. Given that text produced by ChatGPT can shift people's attitudes, it would be easy to exploit voters' truth-default when bots stretch the limits of credulity with even more implausible assertions than humans would conjure.
More Lies, Less Accountability
Generative AI tools such as ChatGPT are prone to making up stuff that is factually inaccurate or totally nonsensical. That means more lies, deceptive messaging and misleading ads. While the most unscrupulous human campaign operative may still have a smidgen of accountability, AI has none. And OpenAI acknowledges flaws with ChatGPT that lead it to provide biased information, disinformation and outright falsehoods.
If campaigns deploy AI without oversight or a moral compass, lies could get worse and more out of control.
Coaxing Voters To Cheat On Their Candidate
A New York Times columnist had a lengthy chat with Microsoft's Bing chatbot. Eventually, the bot professed its love for him. "Sydney" told the reporter repeatedly "I'm in love with you," and "You're married, but you don't love your spouse ... you love me. ... Actually you want to be with me."
Imagine millions of these sorts of encounters, but with a bot trying to ply voters to leave their candidate for another.
AI chatbots do not hold neutral political views. When tested, they currently tend to skew far more left politically, holding liberal biases and expressing 99% support for Biden, with far less diversity of opinions than the general population.
In 2024, Republicans and Democrats will have the opportunity to fine-tune models that inject political bias and even chat with voters to sway them.
Manipulating Candidate Photos
AI can manipulate images. So-called "deepfake" videos and pictures are common in politics, and they are increasingly realistic. Donald Trump has used AI to create a fake image of himself down on one knee, praying.
Photos can be tailored more precisely to influence voters more subtly. In my experiments I found that a communicator's appearance can be as influential, and deceptive, as what someone actually says. My research also revealed that Trump was perceived as "presidential" in the 2020 election when voters thought he seemed "sincere." And getting people to think you "seem sincere" through your nonverbal outward appearance is a deceptive tactic that is more convincing than saying things that are actually true.
Using Trump as an example, let's assume he wants voters to see him as sincere, trustworthy and likable. Certain alterable features of his appearance make him look insincere, untrustworthy and unlikable: He scowls when he speaks and rarely smiles, which makes him come across as unfriendly.
The campaign could use AI to tweak a Trump image or video to make him appear smiling and friendly, which would make voters think he is more sincere and a winner, and ultimately more likely to earn their votes.
Evading Blame
AI provides campaigns with added deniability when they mess up. Typically, if politicians get in trouble they blame their staff. If staffers get in trouble they blame their interns. If interns get in trouble they can now blame ChatGPT.
A campaign might shrug off missteps by blaming an inanimate object notorious for making things up. When Ron DeSantis' campaign circulated doctored photos, staffers did not even acknowledge the malfeasance nor respond to reporters' requests for comment. No human needed to, it appears, if a machine could hypothetically take the fall.
Not all of AI's applications in politics are potentially harmful. AI could serve voters politically, helping educate them about issues, for example. However, plenty of horrifying things could happen as campaigns adopt AI. I hope these six points will help you prepare for, and avoid, deception in ads and donor solicitations.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Support Independent, Unbiased News
Civil Beat is a nonprofit, reader-supported newsroom based in Hawaiʻi. When you give, your donation is combined with gifts from thousands of your fellow readers, and together you help power the strongest team of investigative journalists in the state.