By Kelvin Chan

The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, a top official said Monday.

EU Commission Vice President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds raises “fresh challenges for the fight against disinformation.”

She said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed up to the 27-nation bloc's voluntary agreement on combating disinformation to work to tackle the AI problem.

Online platforms that have integrated generative AI into their services, such as Microsoft's Bing search engine and Google's Bard chatbot, should build safeguards to prevent “malicious actors” from generating disinformation, Jourova said at a briefing in Brussels.

Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to “recognize such content and clearly label this to users,” she said.

Google, Microsoft, Meta and TikTok did not respond immediately to requests for comment.

Jourova said EU regulations are aimed at protecting free speech, but when it comes to AI, “I don’t see any right for the machines to have the freedom of speech.”

The swift rise of generative AI technology, which has the capability to produce human-like text, images and video, has amazed many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a lead role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still needs final approval and won't take effect for several years.

Officials in the EU, which also is bringing in a separate set of rules this year to safeguard people from harmful online content, are worried that they need to act faster to keep up with the rapid development of generative AI.

Recent examples of debunked deepfakes include a realistic picture of Pope Francis in a white puffy jacket and an image of billowing black smoke next to a building accompanied by a claim that it showed an explosion near the Pentagon.

Politicians have even enlisted AI to warn about its dangers. Danish Prime Minister Mette Frederiksen used OpenAI’s ChatGPT to craft the opening of a speech to Parliament last week, saying it was written “with such conviction that few of us would believe that it was a robot — and not a human — behind it.”

European and U.S. officials said last week that they're drawing up a voluntary code of conduct for artificial intelligence that could be ready within weeks as a way to bridge the gap before the EU's AI rules take effect.

Similar voluntary commitments in the bloc's disinformation code will become legal obligations by the end of August under the EU's Digital Services Act, which will force the biggest tech companies to better police their platforms to protect users from hate speech, disinformation and other harmful material.

Jourova said, however, that those companies should start labeling AI-generated content immediately.

Most digital giants are already signed up to the EU disinformation code, which requires companies to measure their work on combating false information and issue regular reports on their progress.

Twitter dropped out last month in what appeared to be the latest move by Elon Musk to loosen restrictions at the social media company after he bought it last year.

The exit drew a stern rebuke, with Jourova calling it a mistake.

“Twitter has chosen the hard way. They chose confrontation,” she said. “Make no mistake, by leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently.”

Twitter will face a major test later this month when European Commissioner Thierry Breton heads to its San Francisco headquarters with a team to carry out a “stress test,” meant to measure the platform's ability to comply with the Digital Services Act.

Breton, who’s in charge of digital policy, told reporters Monday that he also will visit other Silicon Valley tech companies including OpenAI, chipmaker Nvidia and Meta.

AP reporter Jan M. Olsen contributed from Copenhagen, Denmark.
