By David Klepper

Facebook and Instagram will require political ads running on their platforms to disclose whether they were created using artificial intelligence, their parent company announced on Wednesday.

Under Meta's new policy, labels acknowledging the use of AI will appear on users' screens when they click on ads. The rule takes effect Jan. 1 and will be applied worldwide.

Microsoft unveiled its own election-year initiatives on Tuesday, including a tool that will allow campaigns to insert a digital watermark into their ads. These watermarks are intended to help voters understand who created the ads, while also ensuring the ads can't be digitally altered by others without leaving evidence.

The development of new AI programs has made it easier than ever to quickly generate lifelike audio, images and video. In the wrong hands, the technology could be used to create fake videos of a candidate or frightening images of election fraud or polling place violence. When strapped to the powerful algorithms of social media, these fakes could mislead and confuse voters on a scale never seen.

Meta Platforms Inc. and other tech companies have been criticized for not doing more to address this risk. Wednesday's announcement by Meta — which comes on the day House lawmakers hold a hearing on deepfakes — isn't likely to assuage those concerns.

While officials in Europe are working on comprehensive regulations for the use of AI, time is running out for lawmakers in the United States to pass regulations ahead of the 2024 election.

Earlier this year, the Federal Election Commission began a process to potentially regulate AI-generated deepfakes in political ads before the 2024 election. President Joe Biden's administration last week issued an executive order intended to encourage responsible development of AI. Among other provisions, it will require AI developers to share safety data and other information about their programs with the government.

Democratic U.S. Rep. Yvette Clarke of New York is the sponsor of legislation that would require candidates to label any ad created with AI that runs on any platform, as well as a bill that would require watermarks on synthetic images, and make it a crime to create unlabeled deepfakes inciting violence or depicting sexual activity. Clarke said the actions by Meta and Microsoft are a good start, but not sufficient.

“We stand at the precipice of a new era of disinformation warfare aided by the use of new A.I. tools,” she said in an emailed statement. “Congress must establish safeguards to not only protect our democracy but also curb the tide of deceptive AI-generated content that can potentially deceive the American people.”

The U.S. isn't the only nation holding a high-profile vote next year: National elections are also scheduled in countries including Mexico, South Africa, Ukraine, Taiwan, India and Pakistan.

AI-generated political ads have already made an appearance in the U.S. In April, the Republican National Committee released an entirely AI-generated ad meant to show the future of the United States if Biden, a Democrat, is reelected. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic. The ad was labeled to inform viewers that AI was used.

In June, Florida Gov. Ron DeSantis’ presidential campaign shared an attack ad against his GOP primary opponent Donald Trump that used AI-generated images of the former president hugging infectious disease expert Dr. Anthony Fauci.

“It’s gotten to be a very difficult job for the casual observer to figure out: What do I believe here?” said Vince Lynch, an AI developer and CEO of the AI company IV.AI. Lynch said some combination of federal regulation and voluntary policies by tech companies is needed to protect the public. “The companies need to take responsibility,” Lynch said.

Meta's new policy will cover any advertisement for a social issue, election or political candidate that includes a realistic image of a person or event that has been altered using AI. More modest uses of the technology, such as resizing or sharpening an image, would be allowed without disclosure.

Besides labels informing a viewer when an ad contains AI-generated imagery, information about the ad's use of AI will be included in Facebook's online ad library. Meta, which is based in Menlo Park, California, says content that violates the rule will be removed.

Google unveiled a similar AI labeling policy for political ads in September. Under that rule, political ads that play on YouTube or other Google platforms will have to disclose the use of AI-altered voices or imagery.

Along with its new policies, Microsoft released a report noting that nations such as Russia, Iran and China will try to harness the power of AI to interfere with elections in the U.S. and elsewhere, and warning that the U.S. and other nations need to prepare.

Groups working for Russia are already at work, concluded the report from the Redmond, Washington-based tech giant.

“Since at least July 2023, Russia-affiliated actors have utilized innovative methods to engage audiences in Russia and the west with inauthentic, but increasingly sophisticated, multimedia content,” the report's authors wrote. “As the election cycle progresses, we expect these actors’ tradecraft will improve while the underlying technology becomes more capable.”
