The parents of teenagers who killed themselves after interactions with artificial intelligence chatbots are planning to testify to Congress on Tuesday about the dangers of the technology.
Matthew Raine, the father of 16-year-old Adam Raine of California, and Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida, are set to speak at a Senate hearing on the harms posed by AI chatbots.
Raine’s family sued OpenAI and its CEO Sam Altman last month, alleging that ChatGPT coached the boy in planning to take his own life in April. Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.
___
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
___
Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that enable parents to set “blackout hours” when a teen can’t use ChatGPT. Child advocacy groups criticized the announcement as not enough.
“This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a group advocating for children’s online safety.
“What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”
The Federal Trade Commission said last week it had launched an inquiry into several companies about the potential harms to children and teenagers who use their AI chatbots as companions.
The agency sent letters to Character, Meta and OpenAI, as well as to Google, Snap and xAI.
Spain's government has fined Airbnb 64 million euros ($75 million) for advertising unlicensed tourist rentals. The consumer rights ministry announced the fine on Monday, stating that many listings lacked proper license numbers or included incorrect information. The move is part of Spain's ongoing efforts to regulate short-term rental companies amid a housing affordability crisis, especially in popular urban areas. The ministry ordered Airbnb in May to remove around 65,000 listings for similar violations. The government's consumer rights minister emphasized the impact on families struggling with housing. Airbnb said it plans to challenge the fine in court.
The Islamic State group and other militant organizations are experimenting with artificial intelligence as a tool to boost recruitment and refine their operations. National security experts say that just as businesses, governments and individuals have embraced AI, extremist groups will also look to harness its power. That means working to improve their cyberattacks, break into sensitive networks and create deepfakes that spread confusion and fear. Leaders in Washington have responded with calls to investigate how militant groups are using AI and to seek ways to encourage tech companies to share more about how their products may be misused.
President Donald Trump has signed an executive order to block states from regulating artificial intelligence. He argues that heavy regulations could stifle the industry, especially given competition from China. Trump says the U.S. needs a unified approach to AI regulation to avoid complications from state-by-state rules. The order directs the administration to draw up a list of problematic regulations for the Attorney General to challenge. States with such laws could lose access to broadband funding, according to the text of the order. Some states have already passed AI laws focusing on transparency and limiting data collection.
Waymo's self-driving taxis have been in the spotlight for both negative and positive reasons. This week, the automated ride-hailing taxis went viral after a San Francisco woman gave birth inside a Waymo taxi while on her way to the hospital. A Waymo spokesperson on Wednesday confirmed the unusual delivery, saying the company's rider support team detected unusual activity inside the vehicle and alerted 911. The taxi arrived safely at the hospital before emergency services did. Waymo's popularity is growing despite heightened scrutiny following an illegal U-turn and the death of a San Francisco cat. The company, owned by Alphabet, says it is proud to serve riders of all ages.
OpenAI has appointed Slack CEO Denise Dresser as its first chief of revenue. Dresser will oversee global revenue strategy and help businesses integrate AI into daily operations. OpenAI CEO Sam Altman recently emphasized improving ChatGPT, which now has over 800 million weekly users. Despite its success, OpenAI faces competition from companies like Google and concerns about profitability. The company earns money from premium ChatGPT subscriptions but has not ventured into advertising. Altman also recently announced delays in developing new products like AI agents and a personal assistant.
President Donald Trump says he will allow Nvidia to sell its H200 computer chip, used in the development of artificial intelligence, to “approved customers” in China. Trump said Monday on his social media site that he had informed China’s leader Xi Jinping and “President Xi responded positively!” There had been concerns about allowing advanced computer chips into China, as the technology could help the country compete against the U.S. in building out AI capabilities. But there has also been a desire to develop the global AI ecosystem around American companies such as chipmaker Nvidia.