Google wants clear AI labeling in political ads to combat deepfakes

Google (NASDAQ: GOOGL) wants advertisers on its platform to clearly label artificial intelligence (AI) elements in political ads as part of a broader effort to crack down on deepfakes.

According to an update to its political content policy in September, Google’s new rules will require election advertisers to disclose the presence of AI-generated content that “inauthentically depicts real or realistic-looking people or events.”

The new rules will take effect in mid-November and will apply to image, video, and audio content. While Google said the restrictions target inauthentic depictions of people and events, the company signaled that the requirement will be applied broadly, suggesting detailed scrutiny of each advert.

Google noted in the update that the disclosure should be clear and placed in a prominent location in the advertisement. Despite the robust nature of the rules, the tech giant makes provisions for exemptions in cases where the use of AI does not materially alter the ad’s claims.

“Ads that contain synthetic content altered or generated in such a way that is inconsequential to the claims made in the ad will be exempt from these disclosure requirements,” read the update.

Google cited AI-based editing techniques as examples of the exemptions. Advertisers that rely on AI only for image resizing, brightening, cropping, and certain background edits “that do not create realistic depictions of actual events” will be able to run ads without the AI labeling requirement.

The tech behemoth’s new guidelines come ahead of the 2024 U.S. general elections, with AI-generated content predicted to play a role in the build-up to the polls. The Federal Election Commission (FEC) has already flagged the threat that deepfakes, ultra-realistic AI-generated depictions, pose to the electoral process.

To crack down on misuse, the FEC is scrambling to roll out rules governing the use of deepfakes by political campaigners. While seeking to stamp out the spread of misinformation, the FEC says it is treading carefully, aiming for “precision of regulation” so as not to gag U.S. citizens’ freedom of expression.

Ahead of the electioneering season, the U.S. has launched guidelines for digital currency donations, focusing on transparency while barring foreign contributions.

Putting a label on AI

With generative AI becoming mainstream, regulators and developers have seen the need to properly label AI-generated content. The requirement of conspicuous labeling is enshrined in the European Union’s (EU) incoming AI rules, while members of the U.S. Congress have written to leading AI firms on the issue.

Google has seized the initiative with a plan to watermark AI-generated images in a way that is invisible to the naked eye but can easily be spotted using specific tools. Google CEO Sundar Pichai has stated that the hidden watermarking tool, dubbed SynthID, will not compromise image quality.
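Google has not published SynthID’s internals, so the sketch below is only a rough illustration of the general idea of an imperceptible, machine-detectable watermark. It uses a classic least-significant-bit (LSB) technique, not SynthID’s actual method, and the WATERMARK payload, embed, and detect names are hypothetical.

```python
# Conceptual illustration only: a naive least-significant-bit (LSB) watermark.
# SynthID's real approach is a learned, pixel-level watermark and is not public;
# this toy example just shows how a mark can be invisible to the eye yet
# trivially recoverable by a detector that knows where to look.
import numpy as np

WATERMARK = np.frombuffer(b"AI-GENERATED", dtype=np.uint8)
WATERMARK_BITS = np.unpackbits(WATERMARK)  # 96 bits

def embed(image: np.ndarray) -> np.ndarray:
    """Hide the watermark bits in the least significant bit of the first pixels."""
    flat = image.reshape(-1).copy()
    n = WATERMARK_BITS.size
    flat[:n] = (flat[:n] & 0xFE) | WATERMARK_BITS  # each value changes by at most 1
    return flat.reshape(image.shape)

def detect(image: np.ndarray) -> bool:
    """Recover the hidden bits and check whether they spell the watermark."""
    flat = image.reshape(-1)
    bits = flat[:WATERMARK_BITS.size] & 1
    return bool(np.array_equal(bits, WATERMARK_BITS))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed(original)
    # The per-pixel change is at most 1/255 -- imperceptible to the naked eye...
    print("max pixel change:", int(np.abs(marked.astype(int) - original.astype(int)).max()))
    # ...but the detector finds the mark instantly, and only in the marked image.
    print("watermark detected:", detect(marked))
    print("watermark in original:", detect(original))
```

Unlike this toy example, a production watermark also has to survive resizing, cropping, and compression, which is why Google describes SynthID as being embedded directly into the image data rather than its metadata.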

Watch: Blockchain can bring accountability to AI as discussed by nChain’s Owen Vaughan & Alessio Pagani


Source: https://coingeek.com/google-wants-clear-ai-labeling-in-political-ads-to-combat-deepfakes/