Project Glasswing Reflects Developments in the Cybersecurity Landscape


“Project Glasswing” is a new initiative that should command the immediate attention of every C-suite leader, privacy officer, information security professional, and compliance executive in health care and life sciences, financial services, and other critical infrastructure industries, and their legal counsel.

Announced in April 2026 by Anthropic, the self-identified “AI safety and research company” best known for its generative artificial intelligence (AI) tool Claude, Project Glasswing reflects a significant development in the cybersecurity landscape. It is a coalition of leading technology and cybersecurity providers united around a single urgent objective: deploying frontier AI capabilities for defensive cybersecurity before malicious actors can exploit similar capabilities offensively to attack first-party and open-source software. The project relies on Anthropic’s unreleased Mythos Preview AI model.

Seeing the Unseen: Old Susceptibilities Identified

The Mythos Preview model and similar autonomous AI capabilities in current and future tools are transforming the cybersecurity risk landscape, given the rapid pace of AI development and its growing accessibility. According to Anthropic’s announcement, the Mythos Preview model was able to detect thousands of critical, previously unknown security vulnerabilities, including flaws in every major operating system and web browser. These systems are essential to our interconnected electronic systems and our ability to communicate securely. Some of those vulnerabilities, according to Anthropic, had survived undetected for decades.

The Mythos Preview model is now being deployed as part of Project Glasswing to a select group of organizations, under carefully controlled conditions, to protect the world’s most important and foundational software. The initiative explicitly acknowledges, however, that if these capabilities are not harnessed for defense now, they could be weaponized against critical infrastructure, including health care, financial services, and the Internet itself. Although AI-driven threat detection and other defensive platforms are well-established solutions, Project Glasswing foreshadows a new era in which autonomous AI becomes a formidable weapon in the wrong hands.

The AI Threat Landscape Has Been Rapidly Evolving

As we previously analyzed here, Anthropic itself reported the first large-scale cyberattack executed without substantial human intervention—a fully automated campaign targeting technology companies, financial institutions, manufacturing, and government agencies. This event showcased the convergence of multi-modal AI and agentic AI to launch a sophisticated and alarming automated cyberattack. The FBI also reported that, in 2025, it received 22,000 complaints referencing AI, resulting in nearly $900 million in losses. We anticipate losses attributable to AI attacks against our employment, health care, technology, and other critical infrastructure clients to grow exponentially as AI tools are increasingly used against institutions and become more available to a wider range of attackers.

Such attackers may be foreign. On April 23, 2026, the Executive Office of the President, Office of Science and Technology Policy issued a memorandum for the heads of executive departments and agencies warning them of threats from foreign entities engaged in “deliberate, industrial-scale campaigns” to attack U.S. frontier AI systems, “leveraging tens of thousands of proxy accounts to evade detection[.]” These methods can be applied to virtually any institution and should be anticipated and proactively addressed.

The legal and operational implications for organizations – particularly those in health care, financial services, and technology – are severe. Health care systems, for example, are increasingly digital, interconnected, and powered by a complicated supply chain of vendors and technology. This sprawling digital ecosystem driven by sensitive patient information represents the kind of high-value, complexity-rich environment that agentic attackers are designed to exploit. Although many organizations seek to timely patch vulnerabilities as part of good cybersecurity hygiene practices, the current speed and cadence of such patching may represent a gap in the age of AI.

Of course, an organization cannot patch what it is not aware of, which is one of the chief concerns highlighted by Project Glasswing. Current practices may not be enough because the window between a vulnerability’s discovery and its exploitation is shrinking to the point where AI may enable the immediate exploitation of previously undiscovered, so-called “zero-day,” vulnerabilities (e.g., Log4j), which have historically led to some of the largest potential data breaches. At the same time, the movement toward companies increasingly leveraging AI for business reasons necessarily means a greater reliance on connectivity of systems and data and a larger attack surface for AI tools to potentially exploit.

In addition to software vulnerabilities, we have highlighted previously (here and here) that AI has become increasingly sophisticated in social engineering and other identity-based exploitations, including those that allow attackers to exploit biometric authentication mechanisms through use of deepfakes that synthesize voice, facial, and behavioral data. These multi-modal AI-powered attacks are leading to increased spoofing of biometric identity verification, a common multi-factor authentication method in health care and financial services.

Clearly, the types and volume of cyber risks to organizations are escalating in a new era where AI becomes prevalent and accessible.

Existing Legal and Regulatory Standards Require Consideration of AI Risks

Organizations that collect and process protected data, including health, financial, employment, and other identifying information, operate under overlapping state and federal cybersecurity and privacy requirements. As we have discussed in many of our prior writings, these existing legal obligations [e.g., the New York SHIELD Act, the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act, and the New York State Department of Financial Services (NYDFS) Cybersecurity Regulation, among many others] generally require that organizations act reasonably to address anticipated cybersecurity threats. State frameworks—including New York’s SHIELD Act and NYDFS Cybersecurity Regulation, and California’s Privacy Protection Agency regulations—impose risk-based information security program requirements on their covered organizations.

These regulatory regimes require organizations to consider risks from emerging cybersecurity threats. But effective AI-powered attacks are no longer emerging; they have arrived, and organizations need to consider appropriate countermeasures to thwart these threats under the regulatory frameworks. Recognized security practices, such as those described in the National Institute of Standards and Technology (NIST) Cybersecurity Framework 2.0 and NIST’s AI Risk Management Framework, merit renewed emphasis in light of evolving threat landscapes and technological advancement.

Accordingly, boards and executive leadership should ask whether their existing safeguards are sufficient and what role AI should play in proactive cybersecurity efforts. Risk assessments should contemplate agentic and other AI attack vectors, such as deepfakes and identity-based attacks.

Key Next Steps for Critical Infrastructure Organizations

All critical infrastructure organizations should reassess their cybersecurity governance models and information risk frameworks and processes to ensure that they remain legally compliant as AI reshapes the cyberthreat landscape. Although AI technologies are becoming ever more powerful and impactful, existing legal obligations already guide, and require, organizational compliance measures to protect sensitive data and communications.

Organizations that operate or maintain critical software infrastructure should take steps that include:

  • Maintaining a written information security program that evaluates risks associated with AI-specific cyber threats in accordance with recognized frameworks and guidance, including: the Open Worldwide Application Security Project (OWASP) Agentic AI and LLM Top 10 frameworks; NIST AI Risk Management guidance; and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) risk management guidance.
  • Evaluating biometric systems, multi-factor authentication implementations, and identity verification workflows against the risk from multi-modal AI attack scenarios as part of routine risk assessments. Behavioral biometrics and continuous authentication architectures—rather than static biometrics alone—may provide meaningfully stronger defenses against AI-synthesized identity spoofing targeting clinical, administrative, and workforce systems.
  • Funding automated AI-driven vulnerability detection and response tools. The defensive AI tools being made available through Project Glasswing may help identify complex vulnerabilities that prior-generation automated tools consistently miss. Organizations should accelerate procurement evaluation and deployment of AI-augmented vulnerability detection, particularly for legacy systems managing Protected Health Information and clinical operations, as well as any other sensitive or personal data subject to existing regulations.
  • Revising incident response playbooks to prepare for autonomous AI-driven attacks.
  • Sensitizing and training their workforce on AI-powered threats, particularly social engineering, identity spoofing, and business email compromise.
  • Hiring trained cybersecurity professionals who can effectively manage and document the organization’s defensive measures and assess potential AI threats.
  • Managing supply chain risk through effective and tailored contractual arrangements. Business associate agreements and technology vendor contracts should be reviewed for AI cybersecurity representations, breach notification obligations, and indemnification provisions that contemplate AI-augmented threat scenarios.

The Strategic and Legal Imperative: Defense Cannot Wait

For organizations in financial services, health care and life sciences, and other critical infrastructure sectors, the calculus is both legal and existential. Patient data, sensitive financial data, clinical operations, intellectual property, employee data, and medical device integrity are all at stake. Project Glasswing signals that the cybersecurity industry recognizes it is in a race with no finish line in sight. Organizational leadership should recognize that they are in that race too — whether they choose to be or not.

The organizations that engage now with AI-powered defensive capabilities, modernize their risk frameworks, and partner with the right cybersecurity-minded stakeholders will be materially better positioned to face the impending risks that an AI-powered threat landscape poses. Those that wait for the threat to actually materialize may find themselves explaining to regulators, plaintiffs, consumers, and patients why they did not act reasonably when the warning signs were this clear. Measures to address this new era should be solidly grounded in existing legal obligations and frameworks, which provide the strongest foundation for defending against and responding to AI-powered attacks.

Epstein Becker Green Staff Attorneys Elizabeth A. Ledkovsky and Ann W. Parks contributed to the preparation of this post.


