By Kelvin Chan

The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, a top official said Monday.

EU Commission Vice President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds raises “fresh challenges for the fight against disinformation.”

She said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed up to the 27-nation bloc's voluntary agreement on combating disinformation to work to tackle the AI problem.

Online platforms that have integrated generative AI into their services, such as Microsoft's Bing search engine and Google's Bard chatbot, should build safeguards to prevent “malicious actors” from generating disinformation, Jourova said at a briefing in Brussels.

Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to “recognize such content and clearly label this to users,” she said.

Google, Microsoft, Meta and TikTok did not respond immediately to requests for comment.

Jourova said EU regulations are aimed at protecting free speech, but when it comes to AI, “I don’t see any right for the machines to have the freedom of speech.”

The swift rise of generative AI technology, which has the capability to produce human-like text, images and video, has amazed many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a lead role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still needs final approval and won't take effect for several years.

Officials in the EU, which also is bringing in a separate set of rules this year to safeguard people from harmful online content, are worried that they need to act faster to keep up with the rapid development of generative AI.

Recent examples of debunked deepfakes include a realistic picture of Pope Francis in a white puffy jacket and an image of billowing black smoke next to a building accompanied with a claim that it showed an explosion near the Pentagon.

Politicians have even enlisted AI to warn about its dangers. Danish Prime Minister Mette Frederiksen used OpenAI’s ChatGPT to craft the opening of a speech to Parliament last week, saying it was written “with such conviction that few of us would believe that it was a robot — and not a human — behind it.”

European and U.S. officials said last week that they're drawing up a voluntary code of conduct for artificial intelligence that could be ready within weeks as a way to bridge the gap before the EU's AI rules take effect.

Similar voluntary commitments in the bloc's disinformation code will become legal obligations by the end of August under the EU's Digital Services Act, which will force the biggest tech companies to better police their platforms to protect users from hate speech, disinformation and other harmful material.

Jourova said, however, that those companies should start labeling AI-generated content immediately.

Most digital giants are already signed up to the EU disinformation code, which requires companies to measure their work on combating false information and issue regular reports on their progress.

Twitter dropped out last month in what appeared to be the latest move by Elon Musk to loosen restrictions at the social media company after he bought it last year.

The exit drew a stern rebuke, with Jourova calling it a mistake.

“Twitter has chosen the hard way. They chose confrontation,” she said. “Make no mistake, by leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently.”

Twitter will face a major test later this month when European Commissioner Thierry Breton heads to its San Francisco headquarters with a team to carry out a “stress test” meant to measure the platform’s ability to comply with the Digital Services Act.

Breton, who’s in charge of digital policy, told reporters Monday that he also will visit other Silicon Valley tech companies including OpenAI, chipmaker Nvidia and Meta.

AP reporter Jan M. Olsen contributed from Copenhagen, Denmark.
