Product Launch · January 18, 2026

Tork Phase 5: Complete AI Governance Platform Now Live

Introducing Memory Integrity, HITL Enforcement, Supply Chain Visibility, Cost Controls, and a complete Admin Console for production-ready AI governance.

[Image: Tork Phase 5 Dashboard, showing the complete AI governance platform]

Why AI Governance Matters

As AI agents become more autonomous, the need for robust governance has never been greater. Organizations are deploying agents that can access sensitive data, make financial decisions, and interact with production systems. Without proper guardrails, the risks are significant: data breaches, runaway costs, compliance violations, and loss of trust.

Tork is the AI governance platform that solves these challenges. We provide comprehensive tools for PII protection, policy enforcement, and audit logging; with Phase 5, we've added memory integrity, human oversight, supply chain security, and cost controls.

What's New in Phase 5

Memory Integrity & Trust Scoring

Cryptographic verification of agent memory with drift detection and trust scoring.

HITL Enforcement

Human-in-the-loop approval workflows with slicing attack detection and velocity limits.

Supply Chain Visibility

Track dependencies and vulnerabilities across models, packages, and tools.

Cost Controls

Budget governance with spend tracking, alerts, and hard limits per agent or team.

MCP Tool Auditing

48 governance tools for Claude, Cursor, and any MCP-compatible agent.
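The cost controls described above follow a common pattern: track cumulative spend per agent, raise an alert at a soft threshold, and block at a hard limit. Here is a minimal stdlib sketch of that pattern (class name and the 80% alert fraction are illustrative, not Tork's API):

```python
class AgentBudget:
    """Tracks an agent's spend against a hard limit, with a soft alert threshold."""

    def __init__(self, limit: float, alert_fraction: float = 0.8):
        self.limit = limit                  # hard cap: requests beyond this are blocked
        self.alert_fraction = alert_fraction  # soft threshold: notify owners past this
        self.spent = 0.0

    def record(self, cost: float) -> str:
        if self.spent + cost > self.limit:
            return "blocked"  # hard limit: reject before the spend happens
        self.spent += cost
        if self.spent >= self.alert_fraction * self.limit:
            return "alert"   # soft threshold crossed, fire a notification
        return "ok"
```

The same object can be keyed per agent or per team to match either scope.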

Memory Integrity: Trust Your Agent's Memory

Agent memory can be modified, corrupted, or poisoned over time. Memory Integrity provides cryptographic snapshots that detect unauthorized changes and calculate trust scores based on verification history.

Python
from tork import TorkClient, MemoryIntegrity

client = TorkClient(api_key="your_key")
memory = MemoryIntegrity(client)

# Take a snapshot of agent memory
snapshot = memory.snapshot(
    agent_id="agent-1",
    memory_data=agent.get_memory(),
    memory_type="conversation"
)

# Later, verify memory hasn't been tampered with
result = memory.verify(agent_id="agent-1")
print(f"Trust Score: {result['trustScore']}/100")
print(f"Modifications: {result['modificationCount']}")
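Under the hood, cryptographic memory verification can be as simple as hashing a canonical serialization of the memory and comparing digests across snapshots. The sketch below illustrates the idea with `hashlib`; it is not Tork's implementation, and the trust-score formula (start at 100, lose 20 points per detected modification) is a made-up example:

```python
import hashlib
import json


def snapshot_digest(memory_data: dict) -> str:
    """Hash a canonical JSON serialization of agent memory."""
    canonical = json.dumps(memory_data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


class MemoryLedger:
    """Stores a baseline digest and scores trust from verification history."""

    def __init__(self):
        self.baseline = None
        self.modifications = 0

    def snapshot(self, memory_data: dict) -> str:
        self.baseline = snapshot_digest(memory_data)
        return self.baseline

    def verify(self, memory_data: dict) -> dict:
        tampered = snapshot_digest(memory_data) != self.baseline
        if tampered:
            self.modifications += 1
        score = max(0, 100 - 20 * self.modifications)
        return {
            "tampered": tampered,
            "trustScore": score,
            "modificationCount": self.modifications,
        }
```

Any change to the memory between snapshot and verify, even a single appended fact, produces a different digest and lowers the trust score.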

HITL Enforcement: Humans in the Loop

Some actions are too risky for full automation. HITL Enforcement lets you require human approval for high-stakes decisions, with slicing attack detection to prevent agents from breaking up large operations to avoid oversight.

Slicing Attack Detection

Detect when agents split operations to avoid thresholds

Velocity Limits

Limit how many approvals can be requested per time period

Cool-down Periods

Enforce waiting periods between sensitive operations

Context Preservation

Reviewers see full context for informed decisions
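One way slicing detection can work: sum an agent's recent sub-threshold requests over a sliding time window, and if the cumulative total crosses the approval threshold, escalate it as if it were one large operation. A minimal sketch of that logic, with illustrative thresholds (not Tork's actual detector):

```python
import time
from collections import deque
from typing import Optional


class SlicingDetector:
    """Flags agents that split large operations into small ones to dodge approval."""

    def __init__(self, approval_threshold: float, window_seconds: float = 3600):
        self.approval_threshold = approval_threshold
        self.window = window_seconds
        self.recent = deque()  # (timestamp, amount) pairs below the threshold

    def check(self, amount: float, now: Optional[float] = None) -> str:
        if now is None:
            now = time.time()
        if amount >= self.approval_threshold:
            return "requires_approval"
        # Drop events that have aged out of the sliding window
        while self.recent and now - self.recent[0][0] > self.window:
            self.recent.popleft()
        self.recent.append((now, amount))
        windowed_total = sum(a for _, a in self.recent)
        if windowed_total >= self.approval_threshold:
            return "slicing_suspected"  # escalate the cumulative total for review
        return "allowed"
```

Velocity limits and cool-down periods compose naturally with the same window: cap how many entries may land in `recent` per period, or require a minimum gap between timestamps.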

Admin Console: No-Code Governance

Not everyone wants to write code to manage AI governance. The Admin Console provides a complete web interface for managing agents, policies, budgets, approvals, and alerts.

Dashboard

Real-time TORKING-X scores and system health

Agents

Register, monitor, and manage AI agents

Policies

Visual rule builder for governance policies

Budgets

Create and track spending limits

Approvals

HITL approval queue with context

Alerts

Monitor and respond to governance events

TORKING-X: The Industry Standard

TORKING-X is our 9-metric scoring system for AI governance health. Each metric is weighted based on its importance, giving you a single score (0-100) that represents your overall governance posture.

  • T: Tamper-Evidence (10%)
  • P: Privacy Protection (15%)
  • E: Ephemeral Identity (10%)
  • Q: Quality Guardrails (15%)
  • C: Circuit Breakers (15%)
  • S: Supply Chain (10%)
  • L: Tool Safety (10%)
  • M: Memory Integrity (10%)
  • H: HITL Enforcement (5%)

Combined into a single governance score from 0-100
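The announcement lists the weights but not the combination formula; a plain weighted average consistent with those weights (each metric scored 0-100, weights summing to 100%) would look like this:

```python
# Metric weights from the TORKING-X table (they sum to 1.0)
WEIGHTS = {
    "tamper_evidence": 0.10,
    "privacy_protection": 0.15,
    "ephemeral_identity": 0.10,
    "quality_guardrails": 0.15,
    "circuit_breakers": 0.15,
    "supply_chain": 0.10,
    "tool_safety": 0.10,
    "memory_integrity": 0.10,
    "hitl_enforcement": 0.05,
}


def torking_x_score(metrics: dict) -> float:
    """Combine per-metric scores (each 0-100) into one weighted 0-100 score."""
    missing = set(WEIGHTS) - set(metrics)
    if missing:
        raise ValueError(f"missing metrics: {sorted(missing)}")
    return round(sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS), 1)
```

With this weighting, a total privacy failure costs 15 points while a total HITL failure costs 5, so the score reflects where the weights place importance.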

Getting Started

Start using Tork in minutes with our Python or JavaScript SDK:

Installation
# Python
pip install tork

# JavaScript/TypeScript
npm install @torknetwork/sdk

Quick Start

from tork import TorkClient

client = TorkClient(api_key="your_key")

# Evaluate content for PII and safety
result = client.evaluate(
    content="User message here",
    checks=["pii", "toxicity", "jailbreak"]
)

if result.allowed:
    # Safe to proceed
    process_with_llm(result.content)
else:
    # Handle policy violation
    log_violation(result.violations)

What's Next

Phase 5 completes our core governance platform, but we're not stopping here. On our roadmap:

  • Enterprise SSO - SAML, OIDC, and custom identity providers
  • Advanced Analytics - ML-powered anomaly detection and insights
  • On-Premise Deployment - Self-hosted option for air-gapped environments
  • More Integrations - Vercel AI SDK, Haystack, DSPy, and more

Try Tork Today

Get started with Tork for free. No credit card required. See how easy it is to add governance to your AI agents.

Tork Team

Building the future of AI governance