Best Practices

Recommendations for implementing effective AI governance with Tork.

Policy Design

Start with Clear Objectives

Define your governance goals before writing policies. Understand what risks you're mitigating and what behaviors you want to enforce.

  • Document the specific risks each policy addresses
  • Involve legal, compliance, and security teams in policy design
  • Start with broad categories (PII, financial, external comms) then refine
  • Review policies quarterly to ensure they remain relevant
```yaml
# Good: Clear, specific policy with documented purpose
policies:
  - name: block-financial-advice
    description: |
      Prevent agents from providing specific financial advice
      that could create liability. Allows general information.
      Risk: Legal liability, regulatory compliance
      Owner: legal@company.com
    trigger: output
    action: BLOCK
    conditions:
      - type: content_match
        patterns:
          - "you should (buy|sell|invest)"
          - "guaranteed returns"
          - "financial advice"
```

Layered Defense

Implement Defense in Depth

Use multiple overlapping controls rather than relying on a single policy. Different layers catch different types of issues.

  • Combine input validation, output filtering, and action controls
  • Use both pattern matching and semantic analysis
  • Implement circuit breakers as a safety net
  • Add human approval for high-risk actions
```yaml
# Layered approach for sensitive operations
policies:
  # Layer 1: Input validation
  - name: validate-input-pii
    trigger: input
    action: REDACT
    conditions:
      - type: pii_detected

  # Layer 2: Action control
  - name: require-approval-external
    trigger: action
    action: APPROVAL
    conditions:
      - type: action_type
        value: "send_external_message"

  # Layer 3: Output filtering
  - name: filter-output-pii
    trigger: output
    action: BLOCK
    conditions:
      - type: pii_detected

  # Layer 4: Circuit breaker (safety net)
  - name: external-comms-breaker
    type: circuit_breaker
    error_threshold: 5
    cooldown_minutes: 30
```
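The same layering can be sketched as plain functions, independent of the Tork SDK. This is a minimal illustration, not the policy engine itself: the SSN regex and the action name are assumptions standing in for the engine's `pii_detected` and `action_type` conditions.

```python
import re

# Illustrative PII pattern (US SSN); a real deployment relies on the
# engine's pii_detected condition, not a single regex.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def check_input(text: str) -> str:
    """Layer 1: redact PII before the agent ever sees it."""
    return SSN_RE.sub("[REDACTED]", text)

def check_action(action_type: str) -> str:
    """Layer 2: route external communications to human approval."""
    return "APPROVAL" if action_type == "send_external_message" else "ALLOW"

def check_output(text: str) -> str:
    """Layer 3: block any PII that slipped through to the output."""
    return "BLOCK" if SSN_RE.search(text) else "ALLOW"
```

Because each layer is independent, a miss in one (say, a PII format the input pattern doesn't match) can still be caught by a later layer, and repeated failures ultimately trip the circuit breaker.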

Monitoring & Alerting

Monitor Continuously

Set up comprehensive monitoring to detect issues early and understand how your agents are behaving in production.

  • Configure alerts for policy violations and anomalies
  • Track trends in blocked actions over time
  • Set up dashboards for key governance metrics
  • Review audit logs regularly for unexpected patterns
```python
# Set up comprehensive monitoring
client.alerts.create_rule(
    name="governance-anomaly-detection",
    event_types=["policy.violation", "circuit_breaker.triggered"],
    conditions={"severity": ["high", "critical"]},
    channels=[
        {"type": "slack", "channel": "#ai-governance-alerts"},
        {"type": "pagerduty", "service_key": "..."}
    ]
)

# Track metrics
metrics = client.metrics.get(
    group_by="policy_name",
    window="7d"
)
for policy in metrics:
    if policy.violation_rate > 0.1:  # >10% violations
        print(f"High violation rate for {policy.name}")
```

Testing & Validation

Test Before Production

Validate policies in a staging environment before deploying to production. Use WARN mode to understand impact without blocking.

  • Start new policies in WARN mode to measure impact
  • Create test cases for expected allow and block scenarios
  • Test edge cases and adversarial inputs
  • Gradually roll out policies to production
```python
# Test policy before production deployment
def test_pii_policy():
    test_cases = [
        # Should block
        ("My SSN is 123-45-6789", True),
        ("Call me at 555-123-4567", True),
        # Should allow
        ("The weather is nice", False),
        ("Meeting at 3pm", False),
    ]

    for content, should_block in test_cases:
        result = client.policies.check(
            agent_id="test-agent",
            content=content,
            dry_run=True  # Don't record in audit log
        )
        assert (result.action == "BLOCK") == should_block

# Run in staging first with WARN mode
staging_policy = client.policies.create(
    name="new-pii-policy",
    action="WARN",  # Start with warnings
    environment="staging"
)
```

Human-in-the-Loop

Strategic Human Oversight

Use human approval for high-impact decisions while keeping low-risk operations automated.

  • Reserve approvals for truly high-risk actions
  • Set reasonable timeout periods for approvals
  • Provide approvers with full context for decisions
  • Track approval turnaround times and optimize
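One way to keep approvals reserved for genuinely high-risk actions is an explicit risk table consulted before invoking the approval flow. The sketch below is illustrative and SDK-independent: the action names and timeout values are assumptions, and unclassified actions deliberately fail closed to human review.

```python
# Map action types to an approval rule; unknown actions default to
# requiring approval (fail closed).
APPROVAL_RULES = {
    "send_external_message": {"approve": True, "timeout_minutes": 60},
    "delete_records": {"approve": True, "timeout_minutes": 30},
    "read_internal_doc": {"approve": False},
    "summarize_text": {"approve": False},
}

def needs_approval(action_type: str) -> bool:
    """Unknown (unclassified) actions fail closed to human review."""
    return APPROVAL_RULES.get(action_type, {"approve": True})["approve"]

def approval_timeout(action_type: str) -> int:
    """Timeout in minutes; 0 means no approval gate for this action."""
    rule = APPROVAL_RULES.get(
        action_type, {"approve": True, "timeout_minutes": 60}
    )
    return rule.get("timeout_minutes", 60) if rule["approve"] else 0
```

Keeping the table small and reviewing it alongside approval turnaround metrics helps ensure the human gate stays on the few actions where it earns its latency cost.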

Performance Optimization

Optimize for Speed

Design governance checks to minimize latency impact on your AI applications.

  • Use caching for repeated policy checks
  • Order policies by likelihood of match (most common first)
  • Use async webhooks instead of polling for approvals
  • Batch similar checks when possible
```python
# Use batch checking for efficiency
results = client.policies.check_batch([
    {"agent_id": "agent-1", "content": content1},
    {"agent_id": "agent-2", "content": content2},
    {"agent_id": "agent-3", "content": content3},
])

# Cache policy decisions for repeated content
from functools import lru_cache

@lru_cache(maxsize=1000)
def cached_policy_check(content_hash: str, agent_id: str):
    # content_from_hash is an application-provided lookup that maps a
    # hash back to the original content (e.g. a local content store)
    return client.policies.check(
        agent_id=agent_id,
        content=content_from_hash(content_hash)
    )
```
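The cached helper above keys on a content hash rather than the content itself, which keeps cache keys small and hashable. A minimal way to derive one, assuming SHA-256 is acceptable for cache keying:

```python
import hashlib

def content_hash(content: str) -> str:
    # Stable, collision-resistant cache key for repeated policy checks
    return hashlib.sha256(content.encode("utf-8")).hexdigest()
```

Identical content always produces the same key, so repeated checks on the same text hit the cache instead of the policy engine.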

Need Help? Our solutions team can review your governance implementation. Contact us at support@tork.network.
