Project Glasswing and the New Baseline for Cybersecurity Assessment
By Dritan Saliovski
Anthropic's Project Glasswing, announced on April 7, 2026, deploys an unreleased AI model to find and patch vulnerabilities across the world's most critical software infrastructure. The initiative brings together Amazon, Apple, Microsoft, Google, CrowdStrike, Palo Alto Networks, and others, backed by $100 million in usage credits and $4 million in direct donations to open-source security organizations. For professional services firms that advise on cybersecurity risk, technology due diligence, or IT posture assessments, the announcement resets the baseline for what a competent assessment looks like. For the underlying technical context on the model powering Glasswing, see our companion analysis of Claude Mythos Preview and the decision not to release it.
Key Takeaways
- Claude Mythos Preview found vulnerabilities that survived decades of human review and millions of automated security tests, including flaws in every major operating system and browser
- The Linux Foundation's CEO described the initiative as enabling AI-augmented security to become accessible to maintainers who previously could not afford dedicated security teams
- CrowdStrike, Palo Alto Networks, and Microsoft, companies with their own proprietary AI security tools, publicly endorsed Anthropic's model as superior for vulnerability discovery
- The 12 launch partners plus 40 additional organizations with access represent significant portions of global software infrastructure
- Anthropic plans to eventually release models with these capabilities broadly, with new safeguards, meaning this capability gap is temporary, not permanent
- Organizations relying on conventional vulnerability assessments completed before April 2026 are now benchmarked against a demonstrably lower standard
The Gap Between Current Practice and Current Capability
Most cybersecurity assessments delivered by professional services firms follow a well-established methodology: automated vulnerability scanning, manual penetration testing, configuration review, and compliance mapping against frameworks like ISO 27001, NIST CSF, or SOC 2. These assessments are competent for the threat landscape they were designed to address.
The problem is that the threat landscape just shifted.
Claude Mythos Preview identified a line of vulnerable code in FFmpeg (one of the most widely used video processing libraries in the world) that automated testing tools had executed five million times without catching the issue. The vulnerability had existed for 16 years. In OpenBSD, a system specifically engineered for security, the model found a flaw that had been present for 27 years. These are not esoteric edge cases. These are production systems running in enterprise environments today.
The model did not require human guidance for most of these discoveries. It found and reported vulnerabilities autonomously, including chaining multiple Linux kernel flaws together to achieve full system control. External testers confirmed that it completed end-to-end corporate network attack simulations that would take a skilled human over 10 hours.
When CrowdStrike, Palo Alto Networks, and Microsoft (companies that have built their businesses on proprietary AI-powered security) publicly endorse a competitor's model as the standard for vulnerability discovery, the signal is clear. The current generation of security assessment tools and methodologies, including those sold by the endorsing companies, has a ceiling that AI has moved past.
Figure: Assessment capability before and after AI-augmented vulnerability discovery. Synthesized from Anthropic system card and consortium partner disclosures, April 2026.
What This Means for Advisory Firms
Three practical implications apply to any firm delivering cybersecurity advisory, technology due diligence, or IT risk assessments.
Assessment scope needs to expand beyond known vulnerability databases. Conventional scans check against databases of known vulnerabilities (CVEs). The vulnerabilities that Mythos Preview found were zero-days. They did not exist in any database. A scan that reports "no critical vulnerabilities found" against CVE databases tells you nothing about whether zero-day exposure exists. This distinction matters in every engagement where a client relies on assessment results to make investment, insurance, or compliance decisions. For M&A deal teams, this issue now sits at the center of the cybersecurity due diligence framework rather than at the periphery.
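To make the distinction concrete, here is a minimal sketch of what a conventional known-vulnerability lookup looks like, using the public OSV database API (osv.dev). The helper names are illustrative, not part of any standard tooling. The key point is in the return value: an empty result means "nothing in the database," which is exactly where zero-days live.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def build_osv_query(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    """Build a query payload for the OSV known-vulnerability database."""
    return {"version": version, "package": {"name": name, "ecosystem": ecosystem}}

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the *known* vulnerabilities recorded for a package version.

    An empty list means "nothing in the database," not "no vulnerabilities":
    zero-days are, by definition, absent from any such lookup.
    """
    payload = json.dumps(build_osv_query(name, version, ecosystem)).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])
```

A scan built on lookups like this is a census of documented flaws, nothing more. Reporting its clean result as "no critical vulnerabilities" without that qualifier is the scoping gap described above.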
The definition of "reasonable security measures" is shifting. Regulatory frameworks and industry standards generally require organizations to implement "reasonable" or "proportionate" security measures. What qualifies as reasonable is benchmarked against prevailing practices. When AI systems can identify vulnerabilities that entire security teams and automated tools have missed for decades, the prevailing-practice benchmark moves. Organizations that could demonstrate reasonable care last quarter may face harder questions next quarter, not because they did anything wrong, but because the definition of adequate diligence evolved. The four-framework regulatory alignment across NIS2, DORA, CRA, and the revised CSA becomes more complex when the underlying "state of the art" reference point shifts mid-assessment cycle.
Due diligence reports need a capability disclaimer. Any cybersecurity assessment or technology due diligence report delivered from this point forward should address whether AI-augmented vulnerability discovery was used, or explicitly state that it was not. Acquirers, investors, and boards who rely on these reports deserve to understand the methodology's limitations relative to what is now technically possible. This is not a marketing pitch for new tools. It is a disclosure obligation for anyone providing professional opinions on security posture.
The Open-Source Dimension
The Linux Foundation's involvement in Project Glasswing highlights a structural vulnerability in the software ecosystem. Jim Zemlin, the Foundation's CEO, was direct: security expertise has historically been a luxury available to organizations with large security teams, while open-source maintainers (whose software underpins most of the world's critical infrastructure) have been left to handle security independently.
This matters for advisory work because virtually every enterprise technology stack depends on open-source components. A client's security posture is only as strong as the weakest link in its dependency chain. If the open-source libraries embedded in a client's systems contain undiscovered zero-days, and Anthropic's results suggest many do, then assessments that stop at the client's proprietary code boundary are incomplete by design. This ties directly into the bidirectional supply chain risk AI development tools create, where dependency compromise is already a primary attack vector.
Anthropic's $4 million donation to open-source security organizations through the Linux Foundation, including $2.5 million to Alpha-Omega and OpenSSF and $1.5 million to the Apache Software Foundation, is a starting point. But the scale of the problem (undiscovered vulnerabilities in software running on billions of devices) requires more than donations. It requires a structural change in how open-source security is funded, assessed, and maintained.
The Timeline Question
Anthropic has stated that it does not plan to make Claude Mythos Preview generally available but that its eventual goal is to enable users to deploy models with these capabilities at scale, with appropriate safeguards. The system card indicates that Anthropic is developing new safeguards using an upcoming Claude Opus model, allowing them to refine protections with a less risky system before applying them to Mythos-class capabilities. Security professionals affected by those safeguards will be able to apply to a forthcoming Cyber Verification Program.
The practical implication: the capability gap between Project Glasswing participants and the rest of the market is temporary. Within 12 to 18 months, possibly sooner, AI-powered vulnerability discovery at this level will likely be accessible to any organization willing to adopt it. The question for advisory firms is whether they want to be ahead of that curve or behind it.
Firms that begin building AI-augmented assessment methodologies now will have tested workflows, documented case studies, and client trust by the time these capabilities are widely available. Firms that wait will spend that period explaining why their assessments missed what was findable. The same dynamic we covered in our guide to cybersecurity due diligence for PE firms applies here: the cost of adoption is far lower than the cost of being the firm that delivered the clean assessment before the AI-augmented audit found the problems.
Practical Steps
For firms advising on cybersecurity, technology risk, or IT due diligence, four actions apply immediately.
First, review how your current assessment methodology accounts for zero-day exposure. If it does not, document that limitation and communicate it to clients. Second, evaluate whether your tooling pipeline can integrate AI-augmented vulnerability discovery when it becomes broadly available. Anthropic has indicated this is a matter of when, not whether. Third, update your engagement scoping to address open-source dependency analysis. If your assessments do not map the client's open-source supply chain, you are leaving a known gap. Fourth, monitor the Cyber Verification Program that Anthropic intends to launch. Early access to AI-powered security tools will differentiate firms that move first.
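As a starting point for the third action, mapping a client's open-source supply chain begins with an inventory of declared dependencies. The sketch below assumes pip-style requirements files; the parser is deliberately minimal and the function names are illustrative. It separates exactly pinned dependencies (which can be assessed against a vulnerability database) from unpinned entries (which cannot be assessed reliably and should be flagged in scoping).

```python
import re

# Matches only exact pins of the form "name==version"; anything else
# (ranges, extras, URLs) is treated as unassessable and flagged.
PIN_RE = re.compile(r"^\s*([A-Za-z0-9][A-Za-z0-9._-]*)\s*==\s*([^\s#]+)")

def inventory(requirements_text: str) -> tuple[list[tuple[str, str]], list[str]]:
    """Split requirement lines into (name, version) pins and unpinned entries."""
    pinned, unpinned = [], []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        m = PIN_RE.match(line)
        if m:
            pinned.append((m.group(1), m.group(2)))
        else:
            unpinned.append(line)
    return pinned, unpinned
```

For example, `inventory("requests==2.31.0\nflask>=2.0\n")` returns the pin `("requests", "2.31.0")` and flags `flask>=2.0` as unpinned. A real engagement would extend this across ecosystems (lockfiles, SBOMs, transitive dependencies), but even this first pass makes the open-source dependency surface visible in the scoping document.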
The full Intelligence Brief covers the detailed Project Glasswing partner analysis, AI-augmented assessment methodology frameworks, open-source dependency risk mapping, and a comparative timeline for when these capabilities become broadly accessible.
Update Your Cybersecurity Assessment Methodology
Innovaiden works with leadership teams deploying AI agents across their organizations, from initial setup and training to security framework alignment and governance readiness. Reach out to discuss how we can help your team.
Frequently Asked Questions
What is Project Glasswing?
Project Glasswing is an Anthropic-led consortium announced in April 2026 that deploys the unreleased Claude Mythos Preview AI model to find and patch vulnerabilities across critical software infrastructure. Launch partners include Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. An additional 40 organizations have access, backed by $100 million in usage credits and $4 million in direct donations to open-source security organizations.
Why does Project Glasswing matter for cybersecurity assessment practice?
The model powering Project Glasswing found vulnerabilities that survived decades of human and automated review, including a 27-year-old flaw in OpenBSD and a 16-year-old vulnerability in FFmpeg. When competitors like CrowdStrike, Palo Alto Networks, and Microsoft publicly endorse Anthropic's model as superior for vulnerability discovery, the signal is clear: conventional assessment tools and methodologies have a ceiling that AI has moved past.
What assessment gaps does this create for professional services firms?
Three gaps: assessment scope needs to expand beyond known CVE databases (zero-days by definition are not in any database), the definition of reasonable security measures is shifting (prevailing-practice benchmarks move when AI can find previously invisible flaws), and due diligence reports need a capability disclaimer disclosing whether AI-augmented vulnerability discovery was used.
How long will the capability gap last?
Anthropic has stated that its eventual goal is to enable broader deployment of models with these capabilities, using an upcoming Claude Opus model to refine safeguards before applying them at the Mythos level. The practical implication is that the capability gap between Project Glasswing participants and the rest of the market is temporary, likely 12 to 18 months before AI-powered vulnerability discovery at this level is broadly accessible.
What should cybersecurity advisory firms do immediately?
Four actions: review how current assessment methodology accounts for zero-day exposure and document gaps, evaluate whether your tooling pipeline can integrate AI-augmented vulnerability discovery, update engagement scoping to include open-source dependency analysis, and monitor Anthropic's forthcoming Cyber Verification Program for early access to AI-powered security tools.
Sources
- Anthropic. Project Glasswing announcement and consortium disclosure. anthropic.com. 2026.
- Anthropic. Claude Mythos Preview system card. anthropic.com. 2026.
- Linux Foundation. Statement on Project Glasswing participation. linuxfoundation.org. 2026.
- CrowdStrike. Public endorsement of Project Glasswing. crowdstrike.com. 2026.
- Palo Alto Networks. Consortium partner statement. paloaltonetworks.com. 2026.
- Microsoft Security. Project Glasswing partner disclosure. microsoft.com. 2026.
- Apache Software Foundation / OpenSSF Alpha-Omega. Donation acknowledgments. apache.org, openssf.org. 2026.