Study Reveals AI Coding Tools Slow Down Developers Despite Optimistic Perceptions

A new study indicates that AI coding tools actually slow down developers by 19%, contradicting their perceived efficiency.

Key Points

  • AI tools such as Cursor and GitHub Copilot are perceived to increase speed, yet a study found they actually slowed developers down by 19%.
  • Developers expected a 24% speed boost and, even afterward, believed they had completed tasks 20% faster, despite evidence to the contrary.
  • Researchers identified five factors behind the slowdown, including time spent writing prompts and reviewing AI outputs instead of coding.
  • The study notes that further advancements in AI tools may improve productivity over time.

A recent study conducted by the Model Evaluation & Threat Research (METR) group has revealed that AI coding tools, such as Cursor and GitHub Copilot, may hinder rather than enhance developer productivity. Although developers initially expected a 24% time savings when using these tools, the study found the tools actually increased task completion time by 19%. This paradox highlights a significant gap between perception and reality regarding the effectiveness of AI in coding environments.

The METR study, which ran from February to June 2025, involved 16 experienced developers who worked on 246 real-world coding tasks split into two categories: those that allowed AI tools and those that did not. Notably, while 94% of the developers had used web-based large language models (LLMs), only 56% had prior experience with Cursor, the primary AI tool tested. This uneven familiarity may have contributed to the study's unexpected results.

Researchers pinpointed five key factors behind the slowdown, including the extensive time developers spent prompting the AI, waiting for its responses, and reviewing AI-generated output rather than actively coding. Even so, developers believed they had completed tasks faster than they actually had: after the study, they reported feeling 20% faster, an optimistic bias at odds with the measured results.

Other studies have reached mixed conclusions about AI tools' impact on productivity, often noting that while these tools offer assistance, they create additional work for developers, who must validate the AI's outputs. The METR authors, including Joel Becker, Nate Rush, Beth Barnes, and David Rein, warned against drawing definitive conclusions from their findings, suggesting instead that the results may be context-specific, reflecting the participants' familiarity with the codebases and tools involved.

As one of the researchers put it, "While we found that AI tools slowed developers down in our study, we acknowledge that this does not negate the potential long-term benefits of these technologies. Future advancements could very well change the effectiveness of AI in coding." The study serves as a reminder that while AI coding tools hold promise, developers should remain cautious in their expectations regarding immediate productivity gains.