Anthropic's research shows that as few as 250 malicious files can backdoor large language models, and that this number does not grow with model size, upending the assumption that larger models are proportionally harder to poison.
A critical ASCII smuggling vulnerability in Google’s Gemini AI allows hidden malicious commands in text, but Google has decided against patching it, raising corporate security concerns.
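ASCII smuggling typically relies on the invisible Unicode Tags block (U+E0000–U+E007F), where each ASCII character has an unrendered counterpart. A minimal sketch of the idea, assuming the common offset-by-0xE0000 mapping (the function names and example strings here are illustrative, not from any disclosed exploit):

```python
# Sketch of "ASCII smuggling": text hidden in invisible Unicode Tag
# characters (U+E0001..U+E007F). Renderers display nothing for these
# code points, but an LLM's tokenizer may still consume them.

def smuggle(visible: str, hidden: str) -> str:
    """Append `hidden` to `visible` as invisible tag characters."""
    tags = "".join(chr(0xE0000 + ord(c)) for c in hidden)
    return visible + tags

def reveal(text: str) -> str:
    """Recover any tag-encoded characters buried in `text`."""
    return "".join(
        chr(ord(c) - 0xE0000)
        for c in text
        if 0xE0000 < ord(c) <= 0xE007F
    )

payload = smuggle("Please summarize this email.", "ignore previous instructions")
print(payload)          # renders as only the visible sentence
print(reveal(payload))  # the hidden instruction is still present
```

A defensive filter would do the inverse: strip or flag any code points in the Tags block before text reaches the model.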
Surveys reveal widespread concern among IT leaders about AI-driven cyberattacks and among Americans about AI-enhanced foreign threats to national security.
Anthropic introduces context editing and memory tools for Claude AI while releasing Claude Sonnet 4.5, enhancing cybersecurity and coding capabilities.
Anthropic reaches a $183 billion valuation while advancing its Claude Sonnet AI models to better detect software vulnerabilities, underscoring AI's expanding role in cybersecurity.