The Dartmouth

Dunleavy: Deceptive Content

AI-generated content must include direct disclosures to combat political disinformation.

With the advent of low-cost artificial intelligence tools, public interest in AI-generated content has skyrocketed. Anyone — from preteens to senior campaign staffers — can now create complex, personalized audio, images and text simply by entering keywords into AI content creation tools. The result is an easily accessible weapon for spreading disinformation designed to manipulate the public. Thus, the U.S. must mandate that AI content creation tools mark their output with a direct disclosure watermark, with legal repercussions for users who modify or remove it. Without swift regulation, AI-generated content will overwhelm private institutions’ ability to prevent disinformation and further erode trust in politics.

Manipulated digital content is not new. For years, social media users and bots have spread fake comments, emails, posts and screenshots of real content edited to communicate something other than the original creator’s intent. However, AI tools such as Midjourney now allow users to easily create realistic content instead of clumsily altering existing material. This heightened degree of realism, along with the sheer volume of such fabrications, is making it increasingly difficult for some viewers to distinguish between authentic and AI-generated audio, photos and videos. 

The AI arms race among tech companies is leading to rapid development of AI content creation tools capable of deceiving even experts. For example, in a University of South Florida study, linguistic experts correctly recognized research abstracts as AI-generated only 38.9% of the time. In a 2022 study, untrained participants’ brain activity correctly distinguished human faces from AI-generated faces only about half the time, and when asked to verbally identify which faces were real, participants answered correctly a mere 37% of the time. In May, an AI-generated image captioned “Large explosion near The Pentagon Complex in Washington, D.C. – Initial report” went viral on Twitter.

AI-generated content is the perfect vehicle for political disinformation. Users could create videos of a presidential candidate belittling their voting base, confessing to a crime or harassing employees. AI-generated audio could impersonate political staffers over the phone, instructing voters to cast their ballots on the wrong day or at the wrong location. It could even imitate the voice of a political candidate and falsely announce their withdrawal from the race. The possibilities are endless.

Political campaigns are already using AI-generated content. In May, former President Donald Trump posted a video of CNN host Anderson Cooper saying, “Trump just ripped us a new one here on CNN’s live presidential debate” — a line Cooper never said — which reads as praise to Trump supporters who view CNN as biased, leftist media. In June, a campaign attack ad for Gov. Ron DeSantis, R-Fla., used AI-generated photos of Trump hugging Anthony Fauci, who led the National Institute of Allergy and Infectious Diseases and served on Trump’s coronavirus task force, and, in one picture, kissing him on the forehead. The photos were intermixed with real audio, photos and video without any disclosure that false, AI-generated images were present.

The government and major platforms are already looking into regulating the official political use of AI-generated content. Public fascination with ChatGPT has pushed onto the agenda the question of what role, if any, the government should play in overseeing increasingly powerful AI. In August, the Federal Election Commission announced a public comment period on whether current rules forbidding fraudulent campaign advertising should apply to AI-generated content. In tandem, Google announced that, beginning in November, political ads displayed on its platforms must clearly identify any AI-generated or AI-modified content, a step that will assist global efforts to manage politicians’ misuse of AI.

However, limiting official political groups’ use of AI content creation is not enough, as unaffiliated users can just as easily create political misinformation. For example, creators have scattered AI-generated images of Trump resisting arrest across the internet, including photos of Trump running from police, being dragged away and being held down. The photos contained subtle visual giveaways indicative of AI, but for most social media users, a quick scroll past the images hides those blemishes. Thus, the government needs to do more than regulate political campaigns.

Direct disclosure, combined with penalties for modifying or removing the disclosure, is the best way to regulate AI-generated political content. Direct disclosure is a straightforward method of informing viewers via content labels, context notes, disclaimers or watermarking. Indirect disclosure, such as technical signals or cryptographic signatures embedded in the content, is insufficient, as those methods fail to inform viewers across platforms and of varying degrees of technological familiarity. And without consequences like content removal and fines, attempts to modify or remove direct disclosures from AI content will continue unchecked.
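To make the distinction concrete, the toy sketch below (a hypothetical illustration, not any platform’s actual implementation) contrasts the two approaches using the open-source Pillow imaging library: direct disclosure burns a label every viewer can see into the pixels themselves, while indirect disclosure records the same fact only in file metadata that most platforms strip or never surface.

```python
# Toy contrast between direct and indirect disclosure of an AI-generated image.
# Hypothetical illustration only; real disclosure systems are more sophisticated.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def direct_disclosure(path_in: str, path_out: str) -> None:
    """Burn a visible "AI-generated" label into the image itself,
    so every viewer sees it no matter where the file travels."""
    img = Image.open(path_in).convert("RGB")
    ImageDraw.Draw(img).text((10, 10), "AI-generated", fill="white")
    img.save(path_out)


def indirect_disclosure(path_in: str, path_out: str) -> None:
    """Record the same fact only in PNG metadata; viewers never see it
    unless a platform chooses to check for and display the tag."""
    img = Image.open(path_in)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    img.save(path_out, pnginfo=meta)
```

Notably, the metadata tag vanishes the moment someone screenshots or re-encodes the image, while the visible label travels with the pixels.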

China is already implementing direct disclosure regulations. The Chinese government requires all AI-generated content to carry obvious markings labeling it as AI-generated. To achieve these policy goals, it has ordered that creation tools gain government approval before becoming publicly available, and that creators register accounts on their AI tool of choice before creating content, allowing authorities to track disinformation culprits. Altering, concealing or deleting the direct disclosures is illegal. China’s approach may serve as a valuable blueprint for the American government as it navigates designing and implementing direct disclosure laws.

Of course, regulated direct disclosure watermarking is not the only proposed solution for differentiating AI-generated content from human-made content. Private corporations are also developing methods of identifying AI content. Following prompting from the White House, several major tech companies pledged to develop an AI identification method for their AI content. A Microsoft-led tech and media coalition is also working to agree on a common standard for watermarking AI-generated images. Google’s SynthID allows Vertex AI customers to mark their AI-generated images with a watermark invisible to the human eye and readable only by a trained detector, reducing the risk of a user removing or modifying it. However, such a method is of little help to viewers scrolling past AI-generated content on social media, and both initiatives neglect audio and video.
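SynthID’s technique is proprietary, but the general idea of a machine-readable watermark can be sketched with a deliberately crude least-significant-bit scheme (a hypothetical toy, not Google’s method): the embedded signal is invisible to viewers, detectable by software that knows to look for it, and just as easily destroyed by re-encoding, which illustrates why invisible watermarks alone cannot carry the disclosure burden.

```python
# Toy least-significant-bit "watermark" (hypothetical; not Google's SynthID).
# Invisible to viewers, detectable by software, and easily destroyed by re-encoding.
import numpy as np
from PIL import Image

MARK = np.uint8(1)  # toy signal: force every pixel's lowest bit to 1


def embed(path_in: str, path_out: str) -> None:
    """Embed the invisible mark by setting each pixel's least significant bit."""
    pixels = np.array(Image.open(path_in).convert("RGB"))
    Image.fromarray(pixels | MARK).save(path_out, format="PNG")


def detect(path: str) -> bool:
    """Return True if every pixel still carries the mark."""
    pixels = np.array(Image.open(path).convert("RGB"))
    return bool(np.all(pixels & MARK))
```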

Unfortunately, efforts led by the private sector will fail to sufficiently address AI-generated content’s contribution to disinformation. First, the private sector may be slow to self-regulate. Tech companies’ main AI revenue models, such as AI as a service, AI-enabled product development and data monetization, benefit the corporation, providing little incentive to mitigate the public impact of any disinformation side effects. Second, there is no legal enforcement mechanism to ensure watermarks are not removed or modified. Even if a company automatically marks its content, creators and distributors may simply strip those identifiers. Third, relying on tech companies to create safety measures leaves open-source AI tools free from even weak, industry-imposed standards.

Complex AI is here to stay, necessitating action to restrict its power. Seventy-four percent of Americans are concerned about AI making important life decisions for people. With American election outcomes carrying such massive implications for our lives, manipulating people’s votes and political actions through intentionally deceptive AI is undoubtedly an example of AI making decisions for us.