Anthropic Accuses OpenAI of Unethical Practices Ahead of GPT-5 Launch
Anthropic raises alarms over OpenAI's alleged misuse of its AI tools for GPT-5 development.
Key Points
- Anthropic accuses OpenAI of using its Claude coding tools for GPT-5 training.
- OpenAI allegedly violated terms of service via unauthorized access to Claude's APIs.
- Anthropic has blocked OpenAI's access to Claude models but allows API access for safety evaluations.
- OpenAI maintains that assessing competing systems is industry standard.
Anthropic has intensified its rivalry with OpenAI by accusing it of gaining unauthorized access to Claude coding tools to train GPT-5. The allegations surface as the launch of OpenAI's GPT-5 approaches, highlighting competitive tensions in the AI sector. Anthropic claims that OpenAI engineers used Claude's developer APIs to benchmark their own models, which it describes as a clear violation of its terms of service.

Christopher Nulty, a spokesperson for Anthropic, noted that Claude Code has become popular among developers, which he says prompted OpenAI's technical staff to exploit the tools. In response, Anthropic has cut off OpenAI's access to Claude models, though it will continue to provide API access specifically for benchmarking and safety evaluations, as is customary in the industry.

Meanwhile, OpenAI's Chief Communications Officer, Hannah Wong, expressed disappointment, asserting that evaluating competing AI systems is standard industry practice, and noted that OpenAI's own API remains available to Anthropic.
The move mirrors an earlier action in which Anthropic blocked another AI startup from accessing Claude over concerns about a potential acquisition by OpenAI, underscoring a pattern of tightening control over proprietary tools as competition in the AI arena intensifies.