Anthropic Enhances Code Security with Automated Reviews and New Tools

Anthropic has added automated security reviews and new tooling to Claude Code, aimed at strengthening code security.

Key Points

  • Introduction of automated security reviews using Claude Code.
  • New /security-review command and GitHub Actions integration for vulnerability checks.
  • Launch of 'Claude Code Security Reviewer' as an open-source tool on GitHub.
  • Industry trends show a rise in AI coding assistant adoption, expected to reach 75% by 2028.

On August 6, 2025, Anthropic announced significant updates to Claude Code, introducing automated security review features aimed at enhancing code security for developers. The new functionality includes a `/security-review` command, which lets developers run ad-hoc security analyses directly from their terminal. Coupled with a GitHub Actions integration, it automates security reviews of pull requests so that vulnerabilities are caught before deployment. Notably, Anthropic has already used these features in its own workflows, identifying issues such as remote code execution vulnerabilities and SSRF attack risks in its own tools.

Additionally, Anthropic has released an open-source tool, the 'Claude Code Security Reviewer', which scans code in a variety of programming languages for security vulnerabilities. Available on GitHub under the MIT license, the tool filters out false positives, adds comments in code review discussions, and focuses exclusively on modified files.
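To illustrate how the GitHub Actions integration might be wired up, here is a minimal workflow sketch that runs the open-source reviewer on every pull request. The action path (`anthropics/claude-code-security-review`), the `claude-api-key` input, and the secret name are assumptions based on the repository described above; consult the repo's README for the authoritative configuration.

```yaml
# Hypothetical sketch of .github/workflows/security-review.yml;
# action name and inputs are assumed, not confirmed by this article.
name: Claude Security Review
on: pull_request  # review only changed code in PRs

jobs:
  security-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-security-review@main
        with:
          # Assumed input name; store your key as a repository secret.
          claude-api-key: ${{ secrets.CLAUDE_API_KEY }}
```

Triggering on `pull_request` matches the tool's design of scanning only modified files, keeping review noise and API usage low.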

These advancements arrive as AI coding assistants become increasingly prevalent: Gartner predicts that by 2028, 75% of software engineers will use such tools, up from under 10% in 2023. While AI tools promise faster project delivery, security concerns persist, and many developers face mounting pressure to ship despite the risks of insecure code.

As Anthropic continues to innovate in the AI security space, it also faces industry tensions, exemplified by its recent move to block OpenAI's access to its Claude models over an alleged contract breach. Anthropic will also introduce usage limits for Claude subscribers, signaling tighter control over access to its AI technologies.