Several States Enact Restrictions on AI Use in Mental Health Therapy

States impose restrictions on AI in mental health therapy amid concerns about treatment reliability and growing dependency on AI among healthcare professionals.

Key Points

  • Multiple U.S. states restrict AI use in mental health therapy.
  • Regulators express concerns about the reliability and efficacy of AI-driven treatments.
  • Studies suggest doctors might quickly become dependent on AI for diagnostic procedures.
  • Over-reliance on AI could undermine clinical judgment.

As of August 19, 2025, multiple U.S. states have implemented new restrictions on the use of artificial intelligence (AI) in mental health therapy, reflecting significant concerns about the reliability and efficacy of AI-driven treatments. The regulatory movement comes amid growing fears that reliance on AI may diminish human oversight in crucial healthcare decisions.

In particular, state regulators are worried about the risks associated with automated mental health therapies that may lack the human touch essential for effective treatment. Experts argue that while AI can offer benefits in processing information and providing insights, it cannot replace the foundational trust and empathy built through interpersonal interactions between patients and therapists. "AI should support, not replace, the therapeutic relationship that is vital for patient recovery," said a mental health advocate.

The heightened regulatory scrutiny follows various studies suggesting that the industry may be moving too rapidly without sufficient safety frameworks, prompting calls for more stringent oversight. The new regulations range from requiring human oversight in treatment decisions to limiting the scope of AI applications in assessments.

A concurrent development has raised questions about healthcare professionals' own dependency on AI technologies. Research indicates that doctors, particularly those performing procedures such as colonoscopy, can rapidly become reliant on AI for diagnostic support. This presents a dual challenge: as AI tools become more integrated into healthcare, physicians may defer too heavily to the technology, undermining their clinical judgment. A leading health expert pointed out that "over-reliance on AI can lead to potential misdiagnoses or missed diagnoses, as most medical professionals are no longer exercising their own evaluative skills."

As these regulatory measures unfold, the healthcare industry faces pivotal questions about AI integration. Striking a balance between embracing innovation and safeguarding patient safety and the integrity of clinical practice remains a critical challenge.