By Matt O'Brien

The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too," OpenAI CEO Sam Altman said at a Senate hearing.

Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT's use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there's no immediate sign Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recording that sounded like the senator but was actually a voice clone, trained on Blumenthal's floor speeches, reciting opening remarks that ChatGPT had written after he asked the chatbot to compose them.

The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how future AI systems could destabilize the job market.

Pressed on his own worst fear about AI, Altman mostly avoided specifics, except to say that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”

But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild” — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, OpenAI has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM's chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT.

The panel's ranking Republican, Sen. Josh Hawley of Missouri, said the technology has big implications for elections, jobs and national security. He said Tuesday's hearing marked “a critical first step towards understanding what Congress should do.”

A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. Altman and Marcus both called for an AI-focused regulator, preferably an international one, with Altman citing the precedent of the U.N.’s nuclear agency and Marcus comparing it to the U.S. Food and Drug Administration. But IBM's Montgomery instead asked Congress to take a “precision regulation" approach.

“We think that AI should be regulated at the point of risk, essentially,” Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.
