Back to Blog
Security

Why VirusTotal Isn't Enough for AI Agent Security

Static scanning was built for malware. AI agents need independent governance.

February 24, 2026  ·  6 min read  ·  Tork Network Team

900+ malicious skills detected in ClawHub. 135,000+ exposed OpenClaw instances across 82 countries. Microsoft, CrowdStrike, Palo Alto Networks, and Kaspersky all issued formal security advisories. The AI agent ecosystem has a security crisis — and the industry's response so far has been to scan files.

OpenClaw's partnership with VirusTotal was a step in the right direction. But static file scanning is a solution from a different era. AI agents aren't executables sitting on a disk. They're dynamic, tool-wielding systems that make decisions at runtime, handle sensitive data in real time, and interact with dozens of external services simultaneously.

Scanning a skill's source code before installation is like checking a pilot's resume before a flight. It tells you something — but it doesn't tell you what happens when things go wrong at 30,000 feet.

What VirusTotal Does Well

Let's be clear: VirusTotal is a world-class product. It aggregates 70+ antivirus engines for signature-based malware detection. It identifies known threats, checks file hashes against massive databases, and provides valuable intelligence about files, URLs, and domains.

For what it was built for — static file analysis — it's excellent. If a ClawHub skill contains a known trojan, VirusTotal will catch it. If a file hash matches a known malware sample, it'll flag it.

But AI agents aren't static files. And the threats they face aren't traditional malware.

What VirusTotal Can't Do

Here's where static scanning falls short for AI agent security:

Gap 1No Runtime Governance

VirusTotal scans files before execution. Once an agent is running, there's no protection. A skill that passes static scanning can still exfiltrate data at runtime through perfectly clean code — a fetch() to an external API with user data in the query string. Nothing in the source code looks malicious. The behavior is.

Gap 2No PII Detection or Redaction

Your user sends their SSN through an agent. Their credit card number. Their home address. VirusTotal has no concept of PII. The data flows through unprotected — into logs, into third-party APIs, into training datasets. Tork detects 50+ PII types and redacts them in <1ms.

Gap 3No Compliance Receipts

When auditors ask "prove this agent handled data correctly," VirusTotal has nothing to show. It can prove a file was scanned. It can't prove data was governed. Tork generates cryptographic compliance receipts for every interaction — HMAC-verified, immutable, and audit-ready.

Gap 4No Prompt Injection Defence

Prompt injection is the #1 attack vector for AI agents. A crafted input can override an agent's safety instructions, exfiltrate data, or trigger unauthorized tool calls. This happens at runtime, in natural language. Static file scanning can't detect runtime prompt manipulation.

Gap 5No Novel Attack Detection

Signature databases only catch known threats. New attack patterns emerge daily in the agent ecosystem — novel exfiltration techniques, creative prompt injections, supply chain attacks through MCP tool dependencies. These slip through until signatures are updated, which can take days or weeks.

Gap 6No Governance Attestation

There's no way to prove an agent is governed. No badge, no certificate, no verifiable claim. Users have no way to distinguish a governed agent from an ungoverned one. Tork's trust badges work like SSL certificates — cryptographically verifiable proof of independent governance.

The Self-Trust Paradox

There's a deeper issue that no amount of scanning can fix: AI agents cannot govern themselves.

You can't audit your own books. A bank can't verify its own solvency. And an AI agent can't guarantee its own safety — because the entity checking for threats can be compromised by those same threats.

A prompt injection doesn't just attack the agent's actions — it attacks the agent's judgement. If the agent is responsible for detecting prompt injections, a successful injection can disable that very detection. It's like asking a lock to determine whether it's been picked.

This is why SSL certificates work: Certificate Authorities are independent third parties. They don't run the website. They don't serve the content. They exist solely to verify identity and security. The web wouldn't function if every website self-certified.

AI agent governance needs the same model. An independent third party that sits outside the agent's trust boundary, can't be compromised by prompt injection, and provides cryptographic proof of governance.

This is why Tork exists. Not as a feature of OpenClaw, or Nanobot, or any single framework — but as an independent governance layer that works across all of them.

What Independent Governance Looks Like

Tork doesn't replace VirusTotal. It fills the gaps that static scanning was never designed to cover:

Runtime PII DetectionScan every input and output for 50+ PII types in ~1ms. Doesn't slow your agent.
Compliance ReceiptsCryptographic proof of every governance action. HMAC-verified, immutable, audit-ready.
Trust BadgesVerifiable governance attestation — like the SSL padlock. Embed in your README, docs, or agent card.
TORKING-X ScoringQuantified governance quality for every interaction. Like credit scores for AI agents.
Framework AgnosticWorks across all agent frameworks — OpenClaw, Nanobot, AstrBot, PicoClaw, ZeroClaw, Lobu, and more.

See how it works with your framework: OpenClaw, Nanobot, AstrBot, PicoClaw, ZeroClaw, Lobu.

Try It Now

You don't need an account to see the problem. Scan any ClawHub skill right now:

# Scan any skill — free, no account required
npx tork-scan .

# Scan a specific ClawHub skill
npx tork-scan ~/.openclaw/skills/my-skill

Then take the next step:

1. See your scoreGet started with Tork (free tier available)

2. Add governanceRead the documentation for setup guides

3. Get a trust badgeIssue your first badge from the dashboard

VirusTotal tells you a file isn't malware. Tork tells you an agent is governed. Both matter. But in an era of autonomous AI agents handling sensitive data, only one of them is sufficient.

Tork Network Pty Ltd — Sydney, Australia