Skepticism Grows Among Doctors Towards Generative AI in Medicine

Doctors remain skeptical about the reliability of generative AI in clinical settings.

Key details

  • 21% of doctors trust AI-generated medical guidance
  • Concerns about AI misinterpretation of data
  • Skepticism about replacing human judgment with AI
  • Call for guidelines before AI adoption in healthcare

As of September 15, 2025, pronounced skepticism about the use of generative AI in medical decision-making has emerged among health professionals. Many doctors voice concerns about the reliability of AI-generated recommendations and the consequences of acting on them in clinical settings.

Recent surveys indicate that many medical practitioners view peers who incorporate generative AI into their practice with caution. Only about 21% of doctors say they are confident in AI-generated medical guidance, reflecting limited trust in its accuracy and dependability. Some physicians argue that while AI can assist with diagnostics, it should not replace the human element in patient care. "AI can help augment our abilities, but it should never take the place of our judgment and expertise as clinicians," one doctor remarked, a sentiment common within the community.

This skepticism is rooted in the complexity of medical information and the potential for AI tools to misinterpret data, which could produce incorrect treatment recommendations. Many physicians believe that, without comprehensive oversight and validation, integrating AI into clinical workflows poses more risk than benefit.

The discussion surrounding generative AI's role in healthcare continues, with many in the medical field advocating for more stringent guidelines and standards before widespread adoption can occur. As these conversations evolve, they highlight the ongoing tension between innovation and safety in medical practice.