
7AI: When AI Agents Defend Against AI Attacks

The $130M Series A validates a thesis: only autonomous AI agents can fight AI-driven threats. Inside the Cybereason founders' bet on Agentic Security.

MMNTM Research

Fifty-one seconds.

That's the fastest recorded "breakout time" for an eCrime adversary—the window between initial compromise and lateral movement to other systems. CrowdStrike logged it in their 2025 Global Threat Report.

The average time for a human Security Operations Center to detect, triage, and investigate an alert? Hours. Sometimes days.

The math doesn't work anymore.

Every enterprise SOC runs on the same model: collect telemetry, aggregate into a SIEM, hire analysts to watch dashboards and triage alerts. This funnel worked when attacks were human-speed. It's collapsing now that attackers have AI.

7AI, a Boston-based startup founded by Cybereason veterans, raised $130 million in Series A funding—the largest cybersecurity A-round in history. Their thesis: the only viable defense against AI-powered offense is fully autonomous, AI-powered defense.

This isn't incremental improvement. It's a category shift.

The 1000x Problem

The "1000x problem" is the central premise driving autonomous security. Generative AI enables threat actors to scale attacks by orders of magnitude without proportional cost increases.

Consider spear-phishing. A sophisticated campaign once required manual reconnaissance—reading a target's LinkedIn, analyzing their company's earnings calls, crafting personalized messages. Today, LLMs ingest that digital footprint and generate thousands of hyper-personalized lures in seconds. Phishing attacks linked to generative AI surged over 1,000% in early 2025.

The case study that crystallizes the threat: engineering firm Arup lost $25 million in a deepfake-enabled fraud. An employee transferred funds after a video conference with AI-generated deepfakes of the company's CFO and other colleagues. Not a phishing email. A synthetic reality.

Beyond social engineering, AI is automating the technical attack chain:

Polymorphic malware no longer just changes signatures—it rewrites its own logic to adapt to the specific defenses of the target environment. Researchers have demonstrated reinforcement learning agents trained to modify malware binaries to bypass leading EDR solutions.

Autonomous vulnerability scanning turns the "scouting" phase of attacks into fire-and-forget operations. Attackers deploy agents that continuously probe infrastructure, identify weaknesses, and attempt exploitation using vulnerability libraries. No human attacker sitting at a keyboard. Just software running indefinitely.

The speed differential is fatal. An attacker executing at machine speed, a defender processing at human speed. The window for defense collapses to nothing.

The Human Bottleneck

The defensive side faces its own crisis: 4.8 million unfilled cybersecurity roles globally as of 2025—a record workforce gap.

The statistics paint a grim picture:

  • 66% of cybersecurity professionals report their job is more stressful than five years ago
  • Nearly half of security leaders expect to change jobs by 2025 due to burnout
  • Tier 1 analysts spend 50% of their time on false positives—"stitching" data from disparate tools rather than hunting threats

This creates a vicious cycle. The shortage overworks existing staff. Overwork drives burnout and turnover. Turnover worsens the shortage.

7AI's founders identified this not as a recruitment problem but as an automation failure. Their thesis: humans shouldn't be doing "robot work." The SOC crisis isn't a labor deficit—it's a technological deficit.

The Company: Cybereason's Second Act

7AI was founded in 2023 by two titans of Israeli cybersecurity: Lior Div (CEO) and Yonatan Striem-Amit (CTO).

Both are Unit 8200 veterans—the elite intelligence unit of the Israel Defense Forces. Both previously co-founded Cybereason, the company that pioneered Endpoint Detection and Response and defined the "malop" (malicious operation) concept. Div received a Medal of Honor for his service. Striem-Amit invented the behavioral analysis engine at Cybereason.

Their pedigree matters. These are security veterans applying AI to a problem they spent a decade defining—not AI researchers trying to learn security.

They left Cybereason with a specific insight: EDR solved the visibility problem but created a data overload problem. Organizations had more telemetry than ever and were drowning in it.

The "assistive" AI wave—Copilots—was insufficient. A copilot can answer a question. It cannot wake up at 3 AM to stop a ransomware encryption process. The copilot still requires a human driver.

7AI was founded to build autonomous agents—software entities capable of performing the job of a security analyst end-to-end. The company positions itself as delivering "Service as Software"—selling outcomes (investigations completed) rather than tools (a query language).

The $130M Signal

In December 2025, 7AI announced the largest Series A in cybersecurity history: $130 million, bringing total funding to $166 million.

The investor syndicate validates the infrastructure thesis:

Index Ventures led the round. Partner Shardul Shah joined the board. Index has backed category-defining security companies like Wiz and Adallom—suggesting they view 7AI as the next platform shift.

Blackstone Innovations Investments participated strategically. Blackstone isn't just a financier—it's a massive potential customer with hundreds of portfolio companies. Their CISO noted that 7AI helps "fundamentally reimagine how security operations function at scale." The investment is driven by operational necessity as much as financial return.

Greylock, CRV, and Spark Capital—seed investors—doubled down, signaling strong insider confidence.

While the exact valuation wasn't disclosed, a Series A of this magnitude typically implies a valuation north of $500 million, possibly approaching unicorn status. This isn't a feature acquisition play. It's a standalone platform bet.

The Architecture: Swarming Agents

7AI's differentiation lies in its "swarming" agent architecture. Unlike a monolithic LLM that tries to do everything—and hallucinates or fails at complex reasoning—7AI decomposes security operations into discrete tasks handled by specialized agents.

This mirrors the organizational structure of a mature SOC. A Tier 1 analyst triages an alert. A malware specialist analyzes the binary. A network engineer reviews traffic logs. Specialization drives efficiency.

The Swarm Model

When an alert is ingested, 7AI triggers a swarm of purpose-built agents:

  • Device Enrichment Agent: Pulls context about the affected endpoint—user, OS, patch level
  • IP Correlation Agent: Checks external IP reputation against threat intelligence feeds
  • File Investigation Agent: Analyzes file hashes, metadata, and behaviors
  • Storyline Agent: Synthesizes findings into a coherent narrative timeline—effectively writing the incident report
  • Hunting Agent: Proactively searches for Indicators of Compromise without waiting for alerts

This Mixture of Agents approach reduces hallucinations because each agent is constrained to a specific domain and data source. The agents communicate and share context, building a "Case" as a single source of truth.

The key insight: if the File Agent finds a malicious hash, it informs the Device Agent to check if that file executed on other machines. Dynamic feedback loops, not static playbooks.
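
To make that feedback loop concrete, here is a minimal sketch of a swarm dispatcher. The Case object, FileAgent, and DeviceAgent are invented illustrations of the pattern, not 7AI's actual interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """Shared single source of truth the agents build up together."""
    alert_id: str
    findings: dict = field(default_factory=dict)
    followups: list = field(default_factory=list)  # queued (agent, task) pairs

class FileAgent:
    def run(self, case: Case, task: dict) -> None:
        # Hypothetical lookup: classify the hash against threat intel.
        verdict = "malicious" if task["sha256"].startswith("bad") else "benign"
        case.findings["file"] = verdict
        if verdict == "malicious":
            # Dynamic feedback loop: ask the device agent where else it ran.
            case.followups.append(("device", {"sha256": task["sha256"]}))

class DeviceAgent:
    def run(self, case: Case, task: dict) -> None:
        # Hypothetical fleet query: which endpoints executed this hash?
        case.findings["device"] = f"searched fleet for {task['sha256']}"

def run_swarm(alert: dict) -> Case:
    agents = {"file": FileAgent(), "device": DeviceAgent()}
    case = Case(alert_id=alert["id"], followups=[("file", alert)])
    while case.followups:
        agent_name, task = case.followups.pop(0)
        agents[agent_name].run(case, task)
    return case

case = run_swarm({"id": "A-1", "sha256": "bad0123"})
print(case.findings)  # both agents contributed to one shared Case
```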

The Workflow

The platform operates across the full security lifecycle:

  1. Detection: Connects to existing tools (EDR, SIEM, cloud, identity) via APIs. No rip-and-replace required.

  2. Investigation: Autonomous enrichment with transparent reasoning. The system shows its chain of thought—why an agent decided a file was malicious or benign. Explainability is non-negotiable for enterprise adoption.

  3. Conclusion: A definitive verdict ("This is a True Positive: Ransomware Precursor"), not a probability score. Binary decision: actionable threat or false positive.

  4. Response: Autonomous mode can isolate endpoints, disable accounts, or block IPs automatically. Human-in-the-loop mode presents one-click approval with full investigation context.
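
To make the four stages concrete, here is a hedged sketch of the lifecycle as one pipeline. The connector callables, isolate_endpoint, and queue_for_approval are invented stand-ins, not 7AI's actual API:

```python
from enum import Enum

class Verdict(Enum):
    TRUE_POSITIVE = "true positive"
    FALSE_POSITIVE = "false positive"

def isolate_endpoint(host):
    print(f"isolating {host}")  # stand-in for an EDR containment call

def queue_for_approval(alert, reasoning):
    print("one-click approval queued with context:", reasoning)

def handle_alert(alert, connectors, autonomous=False):
    # 1. Detection: pull context from existing tools over their APIs.
    context = {name: fetch(alert) for name, fetch in connectors.items()}
    # 2. Investigation: record the chain of thought for every enrichment step.
    reasoning = [f"checked {name}: {evidence}" for name, evidence in context.items()]
    # 3. Conclusion: a binary verdict, not a probability score.
    malicious = any("malicious" in str(e) for e in context.values())
    verdict = Verdict.TRUE_POSITIVE if malicious else Verdict.FALSE_POSITIVE
    # 4. Response: act autonomously, or present a one-click approval.
    if verdict is Verdict.TRUE_POSITIVE:
        if autonomous:
            isolate_endpoint(alert["host"])
        else:
            queue_for_approval(alert, reasoning)
    return verdict, reasoning

connectors = {"edr": lambda a: "malicious process tree", "idp": lambda a: "normal logins"}
print(handle_alert({"host": "ws-07"}, connectors, autonomous=True))
```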

Performance Claims

In their first 10 months of deployment:

  • 2.5 million+ alerts processed
  • 650,000+ investigations completed
  • 95-99% reduction in false positives reaching human analysts
  • Investigation time reduced from hours to minutes

DXC Technology, a Fortune 500 customer, reported saving between 30 minutes and 2.5 hours of investigation time per alert.

The Federated Data Model

Unlike legacy SIEMs that require centralizing all data (at great cost and latency), 7AI queries data where it lives—in the EDR console, cloud log store, identity provider. No massive ETL process.

This positions 7AI as an intelligence overlay rather than a storage layer, reducing total cost of ownership for customers already paying for Splunk or Snowflake storage.
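
A minimal sketch of the federated pattern, with invented Connector stand-ins for the EDR, cloud-log, and identity APIs. The point is that queries fan out in parallel and only answers come back; no central copy of the data is built:

```python
import concurrent.futures

class Connector:
    """Queries one tool's API in place; nothing is copied to a central store."""
    def __init__(self, name, query_fn):
        self.name, self.query_fn = name, query_fn
    def query(self, indicator):
        return self.query_fn(indicator)

# Hypothetical stand-ins for EDR / cloud-log / identity-provider APIs.
sources = [
    Connector("edr", lambda ioc: f"edr hits for {ioc}"),
    Connector("cloud_logs", lambda ioc: f"cloud events for {ioc}"),
    Connector("idp", lambda ioc: f"logins tied to {ioc}"),
]

def federated_lookup(indicator):
    # Fan the query out to every source in parallel; merge only the answers.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(s.query, indicator): s.name for s in sources}
        return {futures[f]: f.result() for f in concurrent.futures.as_completed(futures)}

print(federated_lookup("203.0.113.7"))
```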

The Autonomy Spectrum: PLAID Framework

The most contentious aspect of 7AI's proposition is autonomy. Security teams are risk-averse by nature. A firewall rule that blocks legitimate traffic can cost millions in revenue. The fear of "automating the outage" is a real psychological barrier.

7AI addresses this with their PLAID (People-Led, AI-Driven) framework. Autonomy is a spectrum, not a binary switch. For how PLAID maps to agent identity and access governance, see Agent Identity Crisis.

Level 1: Recommendations Only
The AI investigates and suggests an action. The human pushes the button. This is the entry point for conservative enterprises.

Level 2: Guidance + AI-Executed Actions
The human pre-approves categories of actions ("Always block known bad IPs from non-business countries"). The AI executes within those guardrails.

Level 3: Elite Response Team (Managed)
7AI provides a managed service in which its experts oversee the AI, acting as the human-in-the-loop for the customer. This effectively outsources the liability: customers buy outcomes rather than software.
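
One way the spectrum could be encoded, sketched with invented names; PRE_APPROVED plays the role of a customer's Level 2 guardrail configuration:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    RECOMMEND = 1    # Level 1: human pushes the button
    GUARDRAILED = 2  # Level 2: AI executes pre-approved action categories
    MANAGED = 3      # Level 3: 7AI's own experts act as the human in the loop

# Hypothetical pre-approval list a customer might configure at Level 2.
PRE_APPROVED = {"block_ip", "quarantine_file"}

def dispatch(action: str, level: AutonomyLevel) -> str:
    if level == AutonomyLevel.RECOMMEND:
        return f"recommend {action}; awaiting analyst click"
    if level == AutonomyLevel.GUARDRAILED and action in PRE_APPROVED:
        return f"auto-executing {action} inside guardrails"
    return f"escalating {action} to a human reviewer"

print(dispatch("block_ip", AutonomyLevel.GUARDRAILED))          # auto-executes
print(dispatch("isolate_endpoint", AutonomyLevel.GUARDRAILED))  # escalates
```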

Enterprise Insights

The agents ingest organizational context—who is a VIP, what is a critical server, what's normal behavior for marketing versus engineering. Grounding decisions in business context, not just threat intelligence, minimizes disruptive false positives.

The DevOps team routinely spins up servers with open ports? Normal for that group, critical alert for Finance.
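
A toy version of that context grounding, with an invented per-group baseline table; the same telemetry yields different verdicts depending on the group:

```python
# Hypothetical per-group baselines an agent might be grounded in.
BASELINES = {
    "devops":  {"open_ports_on_new_servers": True},
    "finance": {"open_ports_on_new_servers": False},
}

def score_event(event: dict) -> str:
    baseline = BASELINES.get(event["group"], {})
    if event["kind"] == "new_server_open_ports":
        # Identical telemetry, different verdict: business context decides.
        return "normal" if baseline.get("open_ports_on_new_servers") else "critical alert"
    return "unknown"

print(score_event({"group": "devops", "kind": "new_server_open_ports"}))   # normal
print(score_event({"group": "finance", "kind": "new_server_open_ports"}))  # critical alert
```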

The Liability Question

Who's responsible when an agent makes a mistake? If 7AI erroneously shuts down a production server, is 7AI liable?

Currently, most software terms of service limit vendor liability. But as agents gain agency, legal frameworks may shift. The EU AI Act and emerging US regulations scrutinize "high-risk" AI systems.

The PLAID model acts as a liability buffer. By keeping a human—either the customer's or 7AI's—in the loop for critical decisions, they maintain a chain of accountability. This human-in-the-loop requirement will likely persist as a regulatory and insurance necessity, even as AI capability improves.

For the pattern library on human oversight design, see HITL Firewall.

The AI vs AI Arms Race

7AI exists because of the AI arms race. Attackers adopt AI; defenders must follow. This creates complex adversarial dynamics.

Adversarial Machine Learning

The primary risk: attackers train their AI to evade the defender's AI. Model evasion attacks use gradient descent to find inputs—changing a few bytes in a malware file—that cause defensive models to misclassify threats as benign.
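
As a toy illustration of the idea, here is a minimal FGSM-style evasion step against a linear classifier, where the gradient trick is transparent. The weights and features are invented; real EDR models and malware feature spaces are vastly more complex:

```python
import numpy as np

# Toy linear "malware classifier": score > 0 means flagged as malicious.
w = np.array([0.9, -0.3, 0.6])
b = -0.2
def score(x): return float(w @ x + b)

x = np.array([1.0, 0.2, 0.8])  # feature vector of a malicious sample
print(score(x))                # positive: detected

# Evasion step: for a linear model the gradient of the score with respect
# to the input is just w, so nudge each feature against the gradient sign.
eps = 0.7
x_adv = x - eps * np.sign(w)
print(score(x_adv))            # now negative: misclassified as benign
```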

7AI addresses this through context-aware agents rather than simple file classifiers. An attacker might fool a malware detector by padding a binary. Fooling a swarm of agents simultaneously looking at lateral movement, identity anomalies, and network traffic is harder. Even if one signal is evaded, the broader behavioral pattern remains visible.

The Red Queen Effect

In evolutionary biology, the Red Queen hypothesis states that organisms must constantly adapt just to survive against evolving predators. In cybersecurity, AI models must be continuously retrained.

7AI likely uses Reinforcement Learning from Human Feedback (RLHF)—every time an analyst corrects an agent, the model learns.
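
If that speculation is right, the plumbing might resemble this generic sketch: capture every analyst override as a labeled example for later retraining. All names are invented:

```python
from dataclasses import dataclass
import json

@dataclass
class Correction:
    case_id: str
    agent_verdict: str    # what the agent concluded
    analyst_verdict: str  # what the human said it should have been
    evidence: dict        # the context the agent saw

def log_correction(path: str, c: Correction) -> None:
    # Append each analyst override as a labeled example for later fine-tuning
    # or reward modeling; the delta between verdicts is the training signal.
    with open(path, "a") as f:
        f.write(json.dumps(c.__dict__) + "\n")

log_correction("feedback.jsonl", Correction(
    case_id="A-1", agent_verdict="benign",
    analyst_verdict="true_positive", evidence={"edr": "suspicious parent process"},
))
```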

But attackers are using RL too. Research demonstrates RL agents trained to automatically generate adversarial examples that bypass WAFs and EDRs. The future: cyber warfare at machine speed between autonomous agents—offensive bots probing for weaknesses, defensive bots patching them in real-time.

The Equilibrium Question

Will there be a winner, or perpetual stalemate?

Game theory suggests a perpetual arms race. But defenders have a structural advantage: control of the terrain. 7AI's agents have "home field advantage"—access to internal logs, identity context, and historical baselines that attackers cannot see.

The goal isn't perfect security. It's raising the cost of attack to the point where it becomes economically unviable for adversaries.

The Competition

Is 7AI a feature or a platform? Incumbents are all rolling out their own AI agents. The question is whether "AI-native" beats "AI-augmented."

CrowdStrike (Charlotte AI)

CrowdStrike's Charlotte AI is primarily a generative AI security analyst—a chatbot interface for querying the Falcon platform using natural language. Powerful, but fundamentally assistive. It helps humans do their job faster. It doesn't replace them.

7AI's differentiation: "agentic" rather than "generative." Charlotte answers questions. 7AI performs workflows.

CrowdStrike is moving toward an "Agentic SOC" vision, but 7AI is building it as native architecture from day one—unburdened by legacy codebases or the need to protect existing human-centric service revenue.

SentinelOne (Purple AI)

SentinelOne's Purple AI is the closest competitor—CEO Tomer Weingarten has declared their platform the "first fully agentic AI SOC." Direct collision course.

The battleground: vendor lock-in. SentinelOne's agents work best within the Singularity Data Lake ecosystem.

7AI's differentiation: vendor agnosticism. As an overlay platform integrating with multiple vendors (Splunk, Microsoft, CrowdStrike), 7AI appeals to large enterprises with heterogeneous stacks who don't want walled gardens.

Palo Alto Networks (Precision AI)

Palo Alto focuses on "Precision AI"—ML deeply integrated into network and cloud infrastructure. Their approach is platformization: consolidate all security tools into one stack.

7AI's differentiation: operations-centric versus infrastructure-centric. PANW secures the pipes. 7AI replaces the process. 7AI functions as the brain sitting on top of PANW's muscle.

The AI-Native Advantage

7AI argues it's AI-Native—the entire platform built around autonomous agents. Incumbents are AI-Augmented—bolting AI onto existing workflows.

The architectural difference matters. 7AI's agents can dynamically generate queries based on unfolding investigations. Legacy SOAR platforms rely on static, linear playbooks that break when threats deviate from the script.
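
The contrast can be sketched in a few lines. The query policy below is invented, but it shows an agent deriving its next query from the last answer, where a static playbook would march through fixed steps regardless of what it finds:

```python
# A static SOAR playbook runs the same fixed steps no matter what it finds.
STATIC_PLAYBOOK = ["lookup_hash", "check_ip_reputation", "close_ticket"]

def dynamic_next_query(findings: dict) -> str | None:
    """Invented agent policy: the next query depends on what was just learned."""
    if findings.get("hash_verdict") == "malicious" and "lateral_hosts" not in findings:
        return f"SELECT host FROM process_events WHERE sha256 = '{findings['sha256']}'"
    if findings.get("lateral_hosts"):
        return f"SELECT user FROM logins WHERE host IN {tuple(findings['lateral_hosts'])}"
    return None  # nothing left to pivot on

findings = {"sha256": "bad0123", "hash_verdict": "malicious"}
print("agent issues:", dynamic_next_query(findings))  # pivots on the hash
findings["lateral_hosts"] = ["srv-12", "srv-40"]      # pretend results arrive
print("agent issues:", dynamic_next_query(findings))  # pivots again on new hosts
```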

For the broader pattern of AI-native beating AI-augmented, see Vertical Agents Are Winning.

The Skeptic's View

Legitimate concerns exist about autonomous security. The 7AI thesis is not universally accepted.

The Black Box Problem

Despite claims of transparent reasoning, neural networks are inherently opaque. If an agent misses a breach, explaining why to a regulator or board is difficult. "The model didn't weigh that feature correctly" is not a legally defensible answer.

The industry needs standardized agent auditing frameworks to verify decision-making logic.

Hallucination Risk

LLMs hallucinate. In security, a hallucination—imagining a file exists when it doesn't, inventing a threat intel record—creates chaos.

7AI claims agents are "architecturally grounded" to eliminate hallucinations, likely through retrieval-augmented generation on trusted internal data rather than open-ended generation. But "elimination" is a strong claim.
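
What "architecturally grounded" plausibly means in practice is a retrieval gate like the following sketch (store and names invented): the agent may only assert what a trusted record supports, and abstains otherwise:

```python
# Hypothetical trusted store the agent is allowed to cite.
THREAT_INTEL = {
    "203.0.113.7": {"reputation": "malicious", "source": "internal-feed-42"},
}

def grounded_answer(indicator: str) -> str:
    record = THREAT_INTEL.get(indicator)
    if record is None:
        # No retrieved evidence: refuse rather than let the model free-generate.
        return f"no trusted record for {indicator}; deferring to a human"
    return f"{indicator} is {record['reputation']} (source: {record['source']})"

print(grounded_answer("203.0.113.7"))   # grounded verdict with provenance
print(grounded_answer("198.51.100.9"))  # abstains instead of hallucinating
```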

A single high-profile failure—where an agent deletes critical data based on a hallucination—could set the entire category back. The hallucination tax applies here with unusually high stakes.

Skill Atrophy

If junior analysts are replaced by agents, how does the industry train the next generation of senior analysts? A SOC run entirely by bots and a few senior experts lacks a talent pipeline.

The industry may need to shift from on-the-job triage training to simulation-based training (cyber ranges) to build human expertise.

Regulatory Headwinds

GDPR, the EU AI Act, and emerging SEC cybersecurity rules impose strict governance on automated decision-making. The requirement for human oversight in "high-risk" AI applications may limit the theoretical ceiling of autonomy in regulated industries.

See Agent Safety Stack for the governance frameworks emerging around these constraints.

The Signal

What does a $130 million Series A tell us?

It signals that venture capital and enterprise buyers believe the Agentic SOC is the inevitable future of cybersecurity. The investment isn't just in a company—it's in a thesis: the only way to secure the digital future is to remove the human bottleneck.

The core thesis holds: the speed and volume of AI-powered attacks have rendered human-speed defense obsolete. The 1000x asymmetry requires automated response. 7AI's swarming agent architecture represents a credible technical approach—moving beyond chatbots to true operational autonomy.

The trust gap remains the largest barrier to adoption. CISOs must be convinced to hand over the keys to software. The PLAID framework and managed service options are designed to bridge this gap incrementally.

Ultimately, 7AI represents a bet that security is becoming a data science problem rather than an analyst problem. If they succeed, they won't just build a successful company. They'll fundamentally alter the labor economics of the cybersecurity industry.

The SOC of the future won't be a room full of people staring at screens. It will be a server room of agents humming in the dark—with a few humans watching the watchers.

For related analysis on the security implications of agent deployment, see Agent Attack Surface and Agent Failure Modes.
