OpenAI Bought Promptfoo — AI Agent Security Just Got Real
OpenAI just bought an AI security company — and you should pay attention
On March 9, OpenAI announced its acquisition of Promptfoo, a startup that helps companies find security vulnerabilities in AI systems before attackers do. Promptfoo’s platform simulates thousands of attacks against AI agents — prompt injections, jailbreaks, data exfiltration, unauthorized tool use — and reports what breaks.
This is not an acqui-hire or a talent grab. Promptfoo reached over 150,000 developers and 25% of Fortune 500 companies before the deal. OpenAI is buying a tool that enterprises already depend on because the company knows AI agents have a security problem — and that problem is about to reach small businesses too.
What OpenAI bought and why
Promptfoo is an open-source platform that acts as a red team for AI applications. It automatically generates adversarial prompts — thousands of them — and tests how an AI system responds. Think of it as a penetration test, but for the AI tools your business uses instead of your network.
The platform tests for over 50 vulnerability types, aligned with frameworks from OWASP, NIST, and MITRE ATLAS. The key categories:
| Vulnerability type | What it means |
|---|---|
| Prompt injection | An attacker tricks the AI into ignoring its instructions and following theirs instead |
| Data exfiltration | The AI leaks sensitive information — customer records, internal documents, API keys |
| Jailbreaking | Someone bypasses the AI’s safety filters to generate harmful or unauthorized content |
| Tool misuse | The AI is manipulated into using its connected tools (email, databases, APIs) in unauthorized ways |
| Memory poisoning | Malicious input corrupts the AI’s long-term memory, affecting future interactions |
OpenAI is integrating Promptfoo directly into its Frontier platform for enterprise AI deployment. The open-source version remains MIT licensed and free.
The timing is deliberate. On the same day, OpenAI launched Codex Security for scanning AI-generated code. The company is racing to make its platform secure enough for businesses to trust AI agents with real data and real decisions.
The growing security surface of AI agents
Here is the problem in plain terms: the more useful an AI agent becomes, the more dangerous it is when compromised.
A chatbot that answers FAQs is low-risk. An AI agent that books appointments, accesses your CRM, sends emails, and processes payments has access to everything an attacker wants. And unlike a human employee, an AI agent can be manipulated through carefully crafted text alone — no phishing email or stolen password required.
Indirect prompt injection is the most underappreciated risk. An attacker hides malicious instructions inside a document, email, or webpage. When the AI agent processes that content, it follows the hidden instructions — potentially leaking internal data to an external server without the user ever knowing something went wrong.
This is not theoretical. Researchers have demonstrated these attacks against every major AI platform. And as more businesses deploy AI agents — 42% of organizations now run them in production — the attack surface grows daily.
Our take
OpenAI buying Promptfoo signals that AI security is no longer a nice-to-have. It is infrastructure. The largest AI company in the world just admitted, through a nine-figure acquisition, that the agents it builds need dedicated security tooling.
The bottom line: If OpenAI does not trust its own agents without security testing, you should not trust any AI tool without asking how it is secured.
What is missing from the conversation
- Small businesses are deploying AI agents without any security review. Enterprise companies have security teams and compliance requirements that force them to test. A restaurant using an AI phone agent or a contractor using AI dispatch is running the same technology with none of the safeguards.
- Vendor security matters more than your own. Most small businesses do not build their own AI — they buy it. The question is whether your AI vendor is testing for prompt injection, data leakage, and tool misuse. Start asking.
Questions that remain
- Will AI security testing become a standard part of vendor due diligence, the way SOC 2 compliance is today?
- How quickly will affordable, small-business-friendly security tools emerge now that the open-source foundation exists?
What you should do
Immediate actions
- Ask your AI vendors about security testing. Whether you use an AI chatbot, scheduling tool, or phone agent, ask the vendor: “Do you test for prompt injection and data leakage?” If they cannot answer clearly, that is a red flag.
- Limit AI agent permissions. Every AI tool should have the minimum access it needs to do its job. An AI that answers customer questions does not need access to your financial records. Review what your tools can reach.
- Audit what data your AI tools can see. List every AI tool your business uses and what data it has access to. Customer names, phone numbers, payment info, internal documents — know what is exposed.
- Keep humans in the loop for high-stakes actions. AI agents should flag, not execute, any action that involves money, personal data, or irreversible decisions.
Watch for
- Promptfoo’s open-source tools becoming easier to use. The open-source version is already free. As the community grows, expect simpler setups that non-technical business owners can run.
- AI vendors advertising security certifications. This will become a competitive differentiator. Prefer vendors who can show their AI has been tested.
Resources
- OpenAI’s acquisition announcement — the official statement
- OWASP AI Agent Security Cheat Sheet — practical security guidelines
- Agentic AI: the top cyber threat for small business — our earlier deep dive on AI-specific cyber threats
- AI found 22 Firefox bugs in two weeks — why AI security testing works
AI agents need security — and now they are getting it
OpenAI’s Promptfoo acquisition is a milestone. It means the tools to secure AI agents are becoming standard infrastructure, not aftermarket add-ons. For small businesses, the priority is straightforward: know what AI tools you use, know what data they touch, and demand that your vendors take security as seriously as OpenAI now does.
If you are deploying AI tools and want to make sure your setup is secure, we can help you audit your AI stack and build a plan that fits your budget.