Tork

Feature

Policy Enforcement

Define governance rules in human-readable YAML and enforce them consistently across all your AI operations. Block, warn, redact, or allow based on content, user context, or custom conditions.

How It Works

Tork's Policy Engine evaluates every AI input and output against your defined rules before they're processed or returned. This happens in milliseconds, adding negligible latency while providing complete governance control.

policy.yaml

policies:
  # Block harmful content
  - name: content-safety
    trigger: output
    action: BLOCK
    conditions:
      - type: contains_harmful_content
        threshold: 0.8
    message: "Content blocked due to safety policy"

  # Redact PII before sending to AI
  - name: pii-protection
    trigger: input
    action: REDACT
    patterns:
      - email
      - phone
      - ssn
      - credit_card

  # Warn on sensitive topics
  - name: sensitive-topics
    trigger: output
    action: WARN
    keywords:
      - medical_advice
      - legal_advice
      - financial_advice
    escalate_to: compliance@company.com

Policy Actions

ALLOW

Let the content pass through unchanged. Use for approved patterns or trusted sources.

BLOCK

Reject the request entirely. Returns a configurable error message to the user.

REDACT

Remove or mask sensitive content while allowing the request to proceed.

WARN

Allow but flag for review. Optionally notify compliance teams via webhook.
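The four actions above map naturally to a small dispatch in application code. The sketch below is illustrative only: the `PolicyResult` dataclass is a hypothetical stand-in for whatever result object tork actually returns (the Integration Example below shows `result.action`, `result.content`, and `result.message` being read), and the response dicts are arbitrary.

```python
from dataclasses import dataclass

# Hypothetical result shape for illustration -- field names mirror the
# Integration Example (action, content, message) but are an assumption,
# not tork's documented API.
@dataclass
class PolicyResult:
    action: str        # "ALLOW" | "BLOCK" | "REDACT" | "WARN"
    content: str       # original or redacted content
    message: str = ""  # configurable error message for BLOCK

def handle_result(result: PolicyResult) -> dict:
    """Map each policy action to an application response."""
    if result.action == "BLOCK":
        # Reject entirely, returning the policy's configurable message.
        return {"error": result.message or "Blocked by policy"}
    if result.action == "REDACT":
        # Proceed with the masked content.
        return {"response": result.content, "redacted": True}
    if result.action == "WARN":
        # Allow but flag for review (a compliance webhook could fire here).
        return {"response": result.content, "flagged": True}
    # ALLOW: pass through unchanged.
    return {"response": result.content}
```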

Integration Example

Python

from tork import PolicyEngine

# Load your policies
engine = PolicyEngine.from_yaml("policy.yaml")

def governed_chat(user_input: str) -> dict:
    # Evaluate input before sending to AI
    input_result = engine.evaluate(user_input, trigger="input")

    if input_result.action == "BLOCK":
        return {"error": input_result.message}

    # Safe (possibly redacted) input - proceed with the AI call
    ai_response = call_your_ai_model(input_result.content)

    # Evaluate output before returning to user
    output_result = engine.evaluate(ai_response, trigger="output")

    if output_result.action == "BLOCK":
        return {"error": "Response blocked by policy"}

    # Return governed response
    return {"response": output_result.content}

result = governed_chat("Contact me at john@example.com")

Use Cases

Healthcare AI

Block medical diagnoses, ensure HIPAA compliance, redact patient identifiers

Financial Services

Prevent unauthorized financial advice, comply with SEC regulations

Customer Support

Ensure brand-safe responses, block competitor mentions, enforce tone guidelines

Education

Age-appropriate content filtering, prevent academic dishonesty patterns

Legal Tech

Block unauthorized legal advice, ensure jurisdiction-appropriate responses

Why Policy-as-Code?

Version Controlled

Track policy changes in Git. Review, approve, and rollback like code.

Environment Parity

Same policies in dev, staging, and production. No configuration drift.

Auditable

Every policy change is logged. Know who changed what and when.

Testable

Unit test your policies before deployment. CI/CD integration ready.

Declarative

Define what you want, not how. Tork handles the implementation.

Portable

Move between frameworks without rewriting rules.
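To make the "Testable" point concrete, the snippet below pins down a PII-redaction rule with plain pytest-style tests. It is a standalone sketch, not tork's implementation: `redact_email` is a hypothetical stand-in for the behavior the `pii-protection` policy's `email` pattern would enforce.

```python
import re

# Standalone illustration of a unit-testable redaction rule -- not tork's
# implementation. Masks email addresses the way a REDACT policy might.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_email(text: str) -> str:
    """Replace each email address with a [REDACTED] placeholder."""
    return EMAIL.sub("[REDACTED]", text)

def test_email_is_masked():
    assert redact_email("Contact me at john@example.com") == \
        "Contact me at [REDACTED]"

def test_plain_text_unchanged():
    assert redact_email("no pii here") == "no pii here"
```

Because the policy file itself is just YAML in Git, tests like these can run in CI on every policy change before it reaches production.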

Ready to enforce policies?

Get started with Tork in under 5 minutes.

Start Free Trial View on GitHub