Parents of suicidal teens who talked with AI chatbots testify in DC
By Matt O'Brien
FILE - In this undated photo provided by Megan Garcia of Florida in Oct. 2024, she stands with her son, Sewell Setzer III. (Courtesy Megan Garcia via AP, File)
The parents of teenagers who killed themselves after interactions with artificial intelligence chatbots are planning to testify before Congress on Tuesday about the dangers of the technology.
Matthew Raine, the father of 16-year-old Adam Raine of California, and Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida, are set to speak at a Senate hearing on the harms posed by AI chatbots.
Raine's family sued OpenAI and its CEO Sam Altman last month, alleging that ChatGPT coached the boy in planning to take his own life in April. Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.
___
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
___
Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that enable parents to set “blackout hours” when a teen can’t use ChatGPT. Child advocacy groups criticized the announcement as not enough.
“This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a group advocating for children’s online safety.
“What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”
The Federal Trade Commission said last week it had launched an inquiry into several companies about the potential harms to children and teenagers who use their AI chatbots as companions.
The agency sent letters to Character, Meta and OpenAI, as well as to Google, Snap and xAI.