Anthropic Enhances AI Security Through Collaboration with US and UK Institutes



Peter Zhang
Oct 28, 2025 03:10

Anthropic partners with US CAISI and UK AISI to strengthen AI safeguards. The collaboration focuses on testing and improving AI security measures, including the development of robust defense mechanisms.



Anthropic Enhances AI Security Through Collaboration with US and UK Institutes

Anthropic, a company focused on AI safety and research, has announced a strategic collaboration with the US Center for AI Standards and Innovation (CAISI) and the UK AI Security Institute (AISI). This partnership aims to bolster the security and integrity of AI systems through rigorous testing and evaluation processes, according to Anthropic.

Strengthening AI Safeguards

The collaboration began with initial consultations and has evolved into a comprehensive partnership. CAISI and AISI teams have been granted access to Anthropic’s AI systems at various development stages, allowing for continuous security assessments. The expertise of these government bodies in areas such as cybersecurity and threat modeling has been instrumental in evaluating potential attack vectors and enhancing defense mechanisms.

One of the key areas of focus has been the testing of Anthropic’s Constitutional Classifiers, which are designed to detect and prevent system jailbreaks. CAISI and AISI have evaluated several iterations of these classifiers on models like Claude Opus 4 and 4.1, identifying vulnerabilities and suggesting improvements.

Key Findings and Improvements

The collaboration has uncovered several vulnerabilities, including prompt injection attacks and sophisticated obfuscation methods, which have since been addressed. For instance, government red-teamers identified weaknesses in early classifiers that allowed prompt injection attacks, which involve hidden instructions that trick models into unintended behaviors. These vulnerabilities have been patched, and the safeguard architecture has been restructured to prevent similar issues.

Additionally, the partnership has led to the development of automated systems that refine attack strategies, enabling Anthropic to enhance its defenses further. The insights gained have not only improved specific security measures but have also strengthened Anthropic’s overall approach to AI safety.

Lessons and Ongoing Collaboration

Through this partnership, Anthropic has learned valuable lessons about engaging effectively with government research bodies. Providing comprehensive model access to red-teamers has proven essential for discovering sophisticated vulnerabilities. This approach includes pre-deployment testing, multiple system configurations, and extensive documentation access, which have collectively enhanced the effectiveness of vulnerability discovery.

Anthropic emphasizes that ongoing collaboration is crucial for making AI models secure and beneficial. The company encourages other AI developers to engage with government bodies and share their experiences to advance the field of AI security collectively. As AI capabilities continue to evolve, independent evaluations of mitigations become increasingly vital.

Image source: Shutterstock


Source: https://blockchain.news/news/anthropic-ai-security-collaboration-us-uk