AI can make a digital clone of a politician say whatever its creator wants, which opens the door to a host of potential problems this election season.
That’s why Google is taking steps to mitigate the political dangers of the technology by requiring advertisers to disclose when they use deepfakes, or realistic AI versions of people, in election ad campaigns.
In a Monday update to its political content policy, Google asked election advertisers to “prominently disclose” when their ads inaccurately portray people or events, if these advertisers are located “in regions where [election ads] verification is required.”
This policy applies to the U.S.; as of Monday, Google also requires verification for all advertisers who run U.S. election ads across federal, state, and territory campaigns. Some local campaigns are also affected.
Google’s update requires advertisers to check a box for “altered or synthetic content” if they’re running ads containing deepfakes. An example would be an ad that alters existing video footage to make a political candidate appear to say something they never actually said.
The “altered or synthetic” media disclosure statement in ads has to be noticeable and clear, per Google’s requirements. It applies to images, video, and audio.
Certain changes will not trigger the disclosure, including editing techniques like cropping and color correction.
Deepfakes have shocked the public in recent years with their realistic depictions of everyone from Selena Gomez to the Pope.
In May, hackers attempted to obtain money and personal details from the largest advertising agency in the world, WPP, by using the cloned voice and likeness of its CEO.
Google will automatically generate a “Paid for by” statement for election ads, according to the same update.