Tutorial

Add AI Governance to Your AI Agent in 5 Minutes

A hands-on tutorial: install the SDK, add PII redaction, enforce policies, and integrate with LangChain — all in under 5 minutes.

February 12, 2026  ·  6 min read  ·  Tork Network

“Your AI agent just leaked a customer's Social Security number to a third-party API.”

That's not a hypothetical. It happened last month to a production agent with no governance layer. The fix took 30 seconds — adding one middleware call. The cleanup took three weeks.

This tutorial walks you through adding real governance to your AI agent. Not a toy demo — actual PII redaction, policy enforcement, and compliance receipts that hold up under audit. You'll be done before your coffee gets cold.

Prerequisites

- Python or Node.js installed
- A Tork API key (the free tier covers everything in this tutorial)

Step 1: Install the SDK

Pick your language:

# Python
pip install tork-governance
 
# JavaScript / Node.js
npm install tork-governance

Set your API key as an environment variable:

export TORK_API_KEY="tork_jwt_your_key_here"

Step 2: Add PII Redaction

The most common governance failure is PII leakage — user data passing through to LLMs, logs, or third-party APIs unredacted. Tork detects 50+ PII types across 13 regional formats and redacts them before they leave your system.

Python

from tork_governance import Tork
import os
 
tork = Tork(api_key=os.environ["TORK_API_KEY"])
 
# Scan for PII before sending to the LLM
result = tork.govern(
  content="Email john@acme.com, SSN 123-45-6789",
  direction="input"
)
 
# result.governed → "Email [EMAIL_REDACTED], SSN [SSN_REDACTED]"
# result.pii_detected → [{"type": "email", ...}, {"type": "ssn", ...}]
safe_content = result.governed

JavaScript

import { Tork } from 'tork-governance';
 
const tork = new Tork({ apiKey: process.env.TORK_API_KEY });
 
// Scan for PII before sending to the LLM
const result = await tork.govern({
  content: 'Email john@acme.com, SSN 123-45-6789',
  direction: 'input',
});
 
// result.governed → "Email [EMAIL_REDACTED], SSN [SSN_REDACTED]"
const safeContent = result.governed;

The governed field always contains the redacted version. The original is never logged or stored by Tork — redaction happens in-memory.
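To build intuition for what the redaction step does before wiring up the SDK, here's a toy local sketch using plain regexes. This is illustrative only — two patterns, nowhere near the 50+ types and 13 regional formats Tork detects, and every name here is ours, not part of the SDK:

```python
import re

# Toy patterns — real PII detection needs far more than two regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def toy_redact(text: str) -> str:
    """Replace each matched PII span with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text

print(toy_redact("Email john@acme.com, SSN 123-45-6789"))
# → Email [EMAIL_REDACTED], SSN [SSN_REDACTED]
```

The placeholder-per-type output mirrors what you see in the governed field above; the difference is that a production engine also handles overlapping matches, regional formats, and context-dependent types like names.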

Step 3: Enforce Policies

PII redaction keeps data safe. Policy enforcement keeps your agent safe. Define what your agent is — and isn't — allowed to do:

Python

result = tork.govern(
  content=user_message,
  policy="strict",
  direction="input"
)
 
if result.action == "block":
  print(f"Blocked: {result.reason}")
  # e.g. "Blocked: Prompt injection attempt detected"
elif result.action == "flag":
  print(f"Warning: {result.reason}")
  # Continue but log the warning
else:
  # action == "allow" — safe to proceed
  llm_response = call_llm(result.governed)

JavaScript

const result = await tork.govern({
  content: userMessage,
  policy: 'strict',
  direction: 'input',
});
 
if (result.action === 'block') {
  throw new Error(`Blocked: ${result.reason}`);
}
 
const llmResponse = await callLLM(result.governed);

Policies can be strict (block on any violation), standard (warn and log), or minimal (log only). You can also block specific tool calls:

// tork.config.json
{
  "blocked_tools": ["shell_execute", "file_delete", "send_email"],
  "require_human_approval": ["database_write", "payment_process"]
}
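On the agent side, a gate over that config might look like the following sketch. The dispatch logic and function names are ours for illustration — they're not part of the SDK:

```python
import json

# Parse the same config shown above (tork.config.json).
CONFIG = json.loads("""
{
  "blocked_tools": ["shell_execute", "file_delete", "send_email"],
  "require_human_approval": ["database_write", "payment_process"]
}
""")

def check_tool(tool_name: str) -> str:
    """Return 'block', 'approve', or 'allow' for a proposed tool call."""
    if tool_name in CONFIG["blocked_tools"]:
        return "block"
    if tool_name in CONFIG["require_human_approval"]:
        return "approve"  # pause the agent and wait for a human
    return "allow"

print(check_tool("shell_execute"))   # → block
print(check_tool("database_write"))  # → approve
print(check_tool("web_search"))      # → allow
```

The important design point is that the agent consults the gate before executing a tool, not after — a blocked tool call should never reach the tool itself.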

Step 4: Govern Both Input and Output

Governing input catches prompt injection and PII in user messages. But LLMs can also hallucinate PII or leak training data in their responses. Govern both directions:

Python

# Govern the user's input
input_result = tork.govern(content=user_msg, direction="input", policy="strict")
if input_result.action == "block":
  return "I can't process that request."
 
# Call the LLM with governed input
llm_response = call_llm(input_result.governed)
 
# Govern the LLM's output before returning to the user
output_result = tork.govern(content=llm_response, direction="output")
return output_result.governed

JavaScript

// Govern input
const inputResult = await tork.govern({
  content: userMsg, direction: 'input', policy: 'strict'
});
if (inputResult.action === 'block') return 'I can\'t process that request.';
 
// Call LLM with governed input
const llmResponse = await callLLM(inputResult.governed);
 
// Govern output before returning to the user
const outputResult = await tork.govern({
  content: llmResponse, direction: 'output'
});
return outputResult.governed;
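The two-direction pattern above factors nicely into a single helper. Here's a minimal sketch with the govern and LLM calls injected as callables so it stays framework-agnostic — the function names and the dict-shaped results are illustrative, not SDK API:

```python
def governed_roundtrip(user_msg, govern, call_llm):
    """Govern input, call the LLM, govern output.
    `govern` and `call_llm` are injected so this works with any client."""
    input_result = govern(content=user_msg, direction="input", policy="strict")
    if input_result["action"] == "block":
        return "I can't process that request."
    llm_response = call_llm(input_result["governed"])
    output_result = govern(content=llm_response, direction="output")
    return output_result["governed"]

# Stub govern/LLM calls to show the control flow end to end:
def fake_govern(content, direction, policy=None):
    return {"action": "allow",
            "governed": content.replace("123-45-6789", "[SSN_REDACTED]")}

def fake_llm(prompt):
    return f"Summary of: {prompt}"

print(governed_roundtrip("SSN 123-45-6789", fake_govern, fake_llm))
# → Summary of: SSN [SSN_REDACTED]
```

Notice the SSN is redacted before the LLM ever sees it, and the response passes through governance again on the way out — the same shape as the Python and JavaScript examples above.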

Step 5: Integrate with LangChain

If you're using LangChain (or CrewAI, AutoGen, or any of the 116 supported frameworks), Tork plugs in as a callback or middleware — no refactoring required:

Python (LangChain)

from tork_governance.adapters import TorkLangChainCallback
from langchain_openai import ChatOpenAI
 
llm = ChatOpenAI(model="gpt-4")
tork_cb = TorkLangChainCallback(api_key=os.environ["TORK_API_KEY"])
 
# Every LLM call is now governed automatically
response = llm.invoke(
  "Summarize this customer record",
  config={ "callbacks": [tork_cb] }
)

JavaScript (LangChain.js)

import { TorkLangChainCallback } from 'tork-governance/adapters';
import { ChatOpenAI } from '@langchain/openai';
 
const llm = new ChatOpenAI({ model: 'gpt-4' });
const torkCb = new TorkLangChainCallback({
  apiKey: process.env.TORK_API_KEY
});
 
// Every LLM call is now governed automatically
const response = await llm.invoke(
  'Summarize this customer record',
  { callbacks: [torkCb] }
);

The callback intercepts every LLM call, governs input and output, and writes a compliance receipt — zero changes to your existing chain logic.

Step 6: Verify in the Dashboard

Run your agent and head to the Tork dashboard. You'll see each governed request, the PII types detected and redacted, any policy blocks or flags, and a compliance receipt for every call.

Compliance receipts are the part auditors care about. Each receipt is cryptographically chained to the previous one, so tampering with any single entry invalidates the entire chain. Hand these to your compliance team for HIPAA, GDPR, SOC 2, or PCI-DSS reporting.
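The chaining idea is simple to demonstrate: each receipt's hash covers the previous receipt's hash, so altering any entry changes every hash after it. A minimal sketch of the concept — ours, not Tork's actual receipt format:

```python
import hashlib
import json

def chain_receipts(receipts):
    """Hash each receipt together with the previous hash (hash chain)."""
    prev = "0" * 64  # genesis value
    chained = []
    for r in receipts:
        payload = prev + json.dumps(r, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        chained.append({"receipt": r, "hash": digest})
        prev = digest
    return chained

log = chain_receipts([{"action": "allow"}, {"action": "block"}])

# Tamper with the first entry and recompute: every later hash diverges.
tampered = chain_receipts([{"action": "block"}, {"action": "block"}])
print(log[1]["hash"] != tampered[1]["hash"])  # → True
```

Because the second hash depends on the first, a verifier can detect tampering anywhere in the log by recomputing the chain from the start.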

What You Just Built

In under 5 minutes, you added:

- PII redaction on both input and output
- Policy enforcement with block, flag, and allow actions
- Automatic governance of every LangChain call
- Tamper-evident compliance receipts

All at ~20ms per call. Your agent is now governed, auditable, and compliant — without a single architectural change.

Next Steps

Ready to govern your agent?

Free tier includes 5,000 API calls per month. No credit card required.

Get Your API Key

Tork Network Pty Ltd — Sydney, Australia