AIArtificial IntelligenceIn the News

Irregular Raises $80 Million to Bolster Frontier AI Security

Irregular team celebrating $80 million funding to secure frontier AI models

In a significant fundraise that underscores the market demand for more security innovation, Israeli cybersecurity startup Irregular — which has built a behavioral firewall solution designed to spot and defend against sophisticated threats — announced it has closed an $80 million round of funding.

The round also featured participation from Assaf Rappaport, CEO of Wiz, and pegs Irregular’s valuation at about $450 million.


What Does Irregular Do?

Irregular builds tools, frameworks, and testing systems to assess and secure state-of-the-art AI models — often referred to as frontier models. These advanced AI systems promise tremendous capabilities but also introduce significant risks.

The company:

  • Evaluates attack surfaces, performs stress-testing, and develops security metrics to detect new threats before models are widely deployed.

Key Offerings

  • SOLVE Framework
    A method for evaluating model insecurity under distribution shifts. This metric is increasingly cited in industry rankings.
  • Simulated Adversarial Worlds
    AI systems can take on both attacking and defending roles, enabling Irregular to test how models perform under adversarial pressure.
  • Frontier Model Assessments
    Irregular has participated in safety evaluations of models such as Anthropic’s Claude 3.7 Sonnet and OpenAI’s o3 and o4-mini.

Why This Funding Matters

The fresh capital strengthens Irregular’s ability to expand its work in several key areas:

1. Emergent Risk Detection

The aim is not only to catch known vulnerabilities but also to anticipate unknown or emerging risks — behaviors or failure modes that have not yet fully revealed themselves. The capacity to foresee problems before models are widely deployed is increasingly crucial.

2. Scaling Simulations and Testing

With further investment, Irregular can develop more sophisticated simulation environments, upgrade its infrastructure, and raise safety and robustness standards. These simulations stress-test AI systems under attack, misuse, or unexpected inputs.

3. Industry Collaboration & Influence

Irregular already partners with leading AI labs such as OpenAI, Anthropic, and Google to help shape security practices. Its frameworks and assessments are used in model security reviews, and this raise positions the company to wield broader influence on safety standards.

4. Regulatory & Risk Landscape

As frontier models grow more powerful and widely deployed, risks from poor code, model hallucinations, or adversarial misuse increase. Governments and organizations are moving toward regulation and safety assurance, making strong security evaluation and mitigation tools a competitive necessity.


Challenges Ahead

Despite the promising funding, Irregular faces several hurdles:

  • The Moving Target
    Frontier models evolve rapidly, with each new iteration introducing fresh attack surfaces. Keeping pace is difficult; as one co-founder notes, security is a “moving target.”
  • Emergent Behaviors & Unknown Unknowns
    Some behaviors appear only in real-world use or unexpected combinations, beyond the reach of simulations.
  • Openness vs. Security Trade-offs
    Many AI labs prioritize openness for collaboration and rapid iteration, yet overexposure (e.g., open models or APIs) can invite threats. Striking the right balance is challenging.
  • Regulatory Uncertainty
    Different jurisdictions have varying standards, and enforceable rules for AI safety remain limited. While Irregular’s work may help shape regulation, it also risks being outpaced by regulatory lag.

Implications for the AI Industry

Irregular’s funding and growing role point to several longer-term trends:

  • Security Integrated Early
    Security evaluation and testing are becoming integral to model development pipelines, especially for frontier systems.
  • Common Standards and Metrics
    Tools like SOLVE could establish shared benchmarks for labs, regulators, and enterprises to measure model safety and readiness.
  • Broader Collaboration
    Expect more cooperation between AI labs, security researchers, academia, and governments on data sharing, best practices, and threat modeling.
  • Increased Investment in AI Safety
    As frontier systems become more powerful, the market will reward companies that provide risk mitigation, safety testing, and assurance. Irregular’s successful raise may inspire similar ventures.
  • Potential Regulatory Momentum
    Evidence of effective safety tools may encourage regulators to mandate audits, evaluations, or even pre-deployment testing.

Final Thoughts

Irregular’s $80 million financing is a significant milestone for AI security infrastructure. It demonstrates how seriously investors now regard the security of frontier AI models — not just in theory, but in tangible investments, tools, and collaborations.

Leave a Response

Prabal Raverkar
I'm Prabal Raverkar, an AI enthusiast with strong expertise in artificial intelligence and mobile app development. I founded AI Latest Byte to share the latest updates, trends, and insights in AI and emerging tech. The goal is simple — to help users stay informed, inspired, and ahead in today’s fast-moving digital world.