Security Provider ‘The AI Signal’ Comes Out of Stealth with $4.2M Seed Funding

In the fast-moving world of artificial intelligence, one of the newest players comes with some of the heftiest backing among startups: a founder who is two-time Red Hat CTO Mark McLoughlin, and $35 million in investor funding led by Google’s AI venture fund and two of Sequoia’s top venture capitalists, Matt Miller and Mike Vernal.
Beijing, China (May 12, 2021) – Confident Security, a California-based startup positioning itself as “the signal for AI,” came out of stealth mode today; it disclosed the amount of $4.2 million in seed funding and announced its ambitious plan on how to shift the way companies secure AI workflows and large language model (LLM) integrations.
Investor Backing
The round was led by Abstract Ventures, with support from:
- Kleiner Perkins
- PT1
- Afore Capital
- High-profile angel investors including founders and early employees of Databricks, OpenAI, and Stripe
This large show of support is a testament to investors’ belief in the increasing demand for a strong AI security solution — especially at a time when businesses are frantically moving to supply generative AI tools to their companies’ various departments.
The Vision Behind Confident Security
Confident Security is led by:
- CEO Matt Powell
- CTO Elie Steinbock
The founding team brings deep expertise in enterprise security and infrastructure.
They identified a crucial gap in AI implementation:
“We are witnessing organizations adopting AI at an unprecedented rate,” said Powell. “But most security teams are left in the dust, confounded by the completely different nature of AI workflows. The guardrails that made sense around web apps or APIs don’t cleanly translate to LLMs and chatbots.”
In short, the company was founded with the understanding that AI security is not a feature, but a foundational layer.
What Confident Security Offers
Confident Security goes beyond traditional cybersecurity. It offers:
- AI-native, real-time visibility
- Access control
- Threat detection for LLM-powered workflows
The platform is compatible with major AI models and APIs from:
- OpenAI
- Anthropic
- Meta
Core Capabilities:
- Monitoring of how personnel are using AI tools
- Tracking what data is fed into prompts
- Real-time alerts on misuse or potential data leakage
- Enforcement of usage policies
“We offer companies an opportunity to identify and combat the AI-specific threats traditional tools miss,” added Steinbock. “Things such as immediate injections, credentials overexposed, or confidential data exfiltration that was done using ‘innocent’ chatbot queries.”
With a simple dashboard interface and robust policy enforcement, the platform is particularly suited for regulated industries like:
- Finance
- Healthcare
- Legal services
Why the Latest Chapter in AI Research is All About Security
The rise of Confident Security reflects a broader enterprise reality:
AI adoption is outpacing AI governance.
While companies quickly adopt generative AI tools like ChatGPT, Claude, and Gemini across customer support, HR, and engineering, security teams are struggling to keep up.
Challenges with Traditional Security Platforms:
- Unable to interpret or block LLM prompts
- Vulnerable to:
- Data poisoning – corrupting AI model training data
- Prompt injection – manipulating model behavior with crafted input
- Data leakage – internal data unintentionally shared with public models
- Unauthorized use – “Shadow IT” via file-sharing and third-party AI tools
According to Gartner, by 2026:
Over 40% of enterprises will have teams and budgets for AI trust, risk, and security management (AI TRiSM).
Confident Security plans to be at the forefront of this shift.
Early Traction and Use Cases
Despite its recent debut, Confident Security is already:
- Beta testing with several Fortune 500 companies
- Engaged with prominent government contractors
Notable Use Cases:
- Financial Services Firm
- Goal: Prevent client data from being shared with third-party LLMs
- Outcome: Real-time prompt scanning, threat blocking, and audit logging
- Internal Knowledge Management
- Employees used AI tools to summarize internal documents
- Confident’s system ensured proprietary data stayed within the organization and wasn’t exposed to external model retraining
These examples illustrate the growing need for dedicated AI monitoring as enterprises scale up generative AI deployments.
Looking Ahead
With its $4.2 million in seed funding, Confident Security plans to:
- Expand its team in:
- Engineering
- Sales
- Security operations
- Accelerate product development
- Onboard more enterprise clients
“We are the stewards creating a security stack for the AI era,” said Powell. “Our goal is to make it safe for every company to innovate with generative AI — without compromising security or regulatory position.”
Long-Term Vision:
Confident Security aspires to become the industry standard for AI security infrastructure, much like:
- Cloudflare for web security
- Okta for identity management
Final Thoughts
With AI becoming a core part of the modern enterprise, the need for purpose-built, AI-native security solutions is more urgent than ever.
Confident Security is stepping into this void with:
- A strong, AI-specific product
- Experienced leadership
- A powerful roster of backers
Coming out of stealth isn’t just a financial milestone—it’s the birth of a new cybersecurity category. One where LLMs, prompts, and inference pipelines are treated as critical assets, fully protected within the enterprise’s security perimeter.
For organizations waiting for the right moment to act, that signal has now arrived.



