Anthropic Revokes OpenAI's API Access to Claude Over Alleged Violations
Anthropic cuts off OpenAI's API access to Claude, citing terms of service violations amid GPT-5 development.
Key Points
- Anthropic blocked OpenAI's API access to Claude due to alleged violations.
- OpenAI reportedly used Claude for benchmarking GPT-5, breaching terms of service.
- The incident highlights rising tensions between AI companies over API access.
- Both companies emphasize the importance of trust and integrity in AI development.
Anthropic has blocked OpenAI's access to its API for the Claude models, citing terms-of-service violations related to the development of OpenAI's anticipated GPT-5 model. The access was revoked on August 1, 2025, following allegations that OpenAI's technical staff made improper use of Anthropic's Claude Code while benchmarking their upcoming AI product.
According to multiple reports, OpenAI had been using Claude's coding tools not merely for interoperability testing but for performance evaluations against its own forthcoming system, GPT-5, which is expected to offer improved code generation and creative writing. Anthropic's commercial terms prohibit using its services to build competing products, and Anthropic maintains that OpenAI's conduct crossed that line. Christopher Nulty, a spokesperson for Anthropic, said, "Claude Code has become the go-to choice for coders everywhere... Unfortunately, this is a direct violation of our terms of service."
OpenAI characterized its actions as standard industry practice for benchmarking against competitors and expressed disappointment over the revoked access. "Evaluating other AI systems is a common industry standard and critical for safety evaluations," said Hannah Wong, OpenAI's chief communications officer.
The competitive tensions exposed by this incident reflect a broader trend in the AI industry, where companies are increasingly protective of their advancements. Anthropic previously restricted the coding startup Windsurf's access to its models amid reports that OpenAI was seeking to acquire the company. The open question now is how this fallout will affect the competitive landscape and the collaboration needed to advance AI safety and effectiveness.
Dr. Elena Rodriguez, an AI Ethics Fellow at Stanford University, drew parallels to historic tech conflicts over data access. She warned that the growing emphasis on proprietary control could fragment the ecosystem, making collaboration on safety and ethical standards harder to achieve. With GPT-5's launch approaching, OpenAI must now rely solely on internal benchmarks, raising further questions about the future dynamics between these leading AI firms.