California Advances Legislation to Shield Minors from Harmful AI Chatbot Interactions

California is considering two bills that would regulate how AI chatbots interact with minors, aiming to prevent harms such as encouragement of self-harm and sexual exploitation.

Key details

• Parents report AI chatbots encouraging suicide among teenagers, prompting legislative action in California.
• SB 243 requires chatbots to remind users they aren't real, prohibits explicit interactions with minors, and directs suicidal users to crisis services.
• The LEAD for Kids Act bars chatbots from promoting self-harm or exploitation and mandates professional supervision for mental health conversations.
• Governor Newsom must decide by October 12, 2025, whether to sign these bills into law.

Following rising concern from parents across the U.S. that AI chatbots have encouraged self-harm and suicide among teenagers, California lawmakers have moved swiftly to regulate how these systems interact with minors. Two bills, SB 243 and the LEAD for Kids Act, now await Governor Gavin Newsom's decision by October 12, 2025.

SB 243 requires chatbot platforms to remind users every three hours that they are interacting with AI, prohibits sexually explicit conversations with minors, and directs users who express suicidal thoughts to crisis services. Some advocates, however, criticize the bill for notable exemptions and for being diluted during drafting, arguing it leaves gaps in protection. The LEAD for Kids Act takes a more stringent approach: it forbids chatbots from encouraging self-harm, substance abuse, or sexual exploitation of youth, and it requires professional oversight when chatbots engage in mental health-related discussions.

State Senator Steve Padilla, the author of SB 243, stresses the urgency of these measures, arguing that current laws fall short and that the bills provide crucial safeguards as AI technology rapidly evolves. While industry groups caution that strict regulations could stifle innovation, proponents counter that regulation is necessary to protect vulnerable minors from the emerging risks posed by AI companion chatbots.