By Kelvin Chan

The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, a top official said Monday.

EU Commission Vice President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds raises “fresh challenges for the fight against disinformation.”

She said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed up to the 27-nation bloc's voluntary agreement on combating disinformation to tackle the AI problem.

Online platforms that have integrated generative AI into their services, such as Microsoft's Bing search engine and Google's Bard chatbot, should build safeguards to prevent “malicious actors” from generating disinformation, Jourova said at a briefing in Brussels.

Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to “recognize such content and clearly label this to users,” she said.

Google, Microsoft, Meta and TikTok did not respond immediately to requests for comment.

Jourova said EU regulations are aimed at protecting free speech, but when it comes to AI, “I don’t see any right for the machines to have the freedom of speech.”

The swift rise of generative AI technology, which has the capability to produce human-like text, images and video, has amazed many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a lead role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still needs final approval and won't take effect for several years.

Officials in the EU, which also is bringing in a separate set of rules this year to safeguard people from harmful online content, are worried that they need to act faster to keep up with the rapid development of generative AI.

Recent examples of debunked deepfakes include a realistic picture of Pope Francis in a white puffy jacket and an image of billowing black smoke next to a building accompanied by a claim that it showed an explosion near the Pentagon.

Politicians have even enlisted AI to warn about its dangers. Danish Prime Minister Mette Frederiksen used OpenAI’s ChatGPT to craft the opening of a speech to Parliament last week, saying it was written “with such conviction that few of us would believe that it was a robot — and not a human — behind it.”

European and U.S. officials said last week that they're drawing up a voluntary code of conduct for artificial intelligence that could be ready within weeks as a way to bridge the gap before the EU's AI rules take effect.

Similar voluntary commitments in the bloc's disinformation code will become legal obligations by the end of August under the EU's Digital Services Act, which will force the biggest tech companies to better police their platforms to protect users from hate speech, disinformation and other harmful material.

Jourova said, however, that those companies should start labeling AI-generated content immediately.

Most digital giants have already signed up to the EU disinformation code, which requires companies to measure their work on combating false information and issue regular reports on their progress.

Twitter dropped out last month in what appeared to be the latest move by Elon Musk to loosen restrictions at the social media company after he bought it last year.

The exit drew a stern rebuke, with Jourova calling it a mistake.

“Twitter has chosen the hard way. They chose confrontation,” she said. “Make no mistake, by leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently.”

Twitter will face a major test later this month when European Commissioner Thierry Breton heads to its San Francisco headquarters with a team to carry out a “stress test,” meant to measure the platform's ability to comply with the Digital Services Act.

Breton, who’s in charge of digital policy, told reporters Monday that he also will visit other Silicon Valley tech companies including OpenAI, chipmaker Nvidia and Meta.

AP reporter Jan M. Olsen contributed from Copenhagen, Denmark.
