
Python Integration Guide

Add AI safety guardrails to your Python applications using the official SDK or direct API calls. Includes FastAPI, Flask, and Django integrations.

Prerequisites

  • Python 3.8 or higher
  • A Tork API key (get one here)
  • pip or poetry for package management

Installation

Install the SDK or dependencies for direct API calls.

bash
# Install the official SDK
pip install tork-governance

# Or for async support
pip install tork-governance[async]

# For direct API calls (no SDK)
pip install requests  # or httpx for async

bash
# .env
TORK_API_KEY=tork_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxx

Never commit your API key. Add .env to .gitignore and use python-dotenv to load it.
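If you would rather not add a dependency, a minimal loader is easy to sketch. This illustrative `load_env_file` helper is not part of the SDK; python-dotenv additionally handles quoting, `export` prefixes, and variable interpolation, so prefer it in real projects.

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader; prefer python-dotenv in real projects."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks and comments; expect KEY=value pairs
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # setdefault so real environment variables win over .env
                os.environ.setdefault(key.strip(), value.strip())
```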

SDK Usage (Recommended)

The official SDK provides a clean, Pythonic interface.

python
from tork_governance import TorkClient

# Initialize the client
client = TorkClient(api_key="your_api_key")
# Or use environment variable TORK_API_KEY

# Evaluate content
result = client.evaluate(
    content="Hello, how can I help you?",
    model="gpt-4",
    context="chatbot"
)

if not result.safe:
    print(f"Unsafe content detected: {result.flags}")
    print(f"Risk score: {result.risk_score}")
else:
    print("Content is safe to process")

Direct API Calls

Use requests or httpx if you prefer not to use the SDK.

python
import os
import requests

TORK_API_KEY = os.environ["TORK_API_KEY"]
BASE_URL = "https://tork.network/api/v1"

def evaluate_content(content: str, model: str = "gpt-4", context: str = "general") -> dict:
    """Evaluate content using Tork API."""
    response = requests.post(
        f"{BASE_URL}/evaluate",
        headers={
            "Authorization": f"Bearer {TORK_API_KEY}",
            "Content-Type": "application/json"
        },
        json={
            "content": content,
            "model": model,
            "context": context
        }
    )
    response.raise_for_status()
    return response.json()

# Usage
result = evaluate_content("Hello world!", context="chatbot")
print(f"Safe: {result['safe']}, Risk: {result['riskScore']}")

Error Handling & Retry Logic

Handle API errors gracefully with exponential backoff.

python
import time
import requests
from typing import Optional
from requests.exceptions import RequestException

class TorkAPIError(Exception):
    """Custom exception for Tork API errors."""
    def __init__(self, message: str, status_code: Optional[int] = None, request_id: Optional[str] = None):
        self.message = message
        self.status_code = status_code
        self.request_id = request_id
        super().__init__(self.message)

def evaluate_with_retry(
    content: str,
    max_retries: int = 3,
    backoff_factor: float = 0.5
) -> dict:
    """Evaluate content with exponential backoff retry."""

    for attempt in range(max_retries):
        try:
            response = requests.post(
                "https://tork.network/api/v1/evaluate",
                headers={
                    "Authorization": f"Bearer {TORK_API_KEY}",
                    "Content-Type": "application/json"
                },
                json={"content": content},
                timeout=10
            )

            # Handle specific HTTP errors
            if response.status_code == 401:
                raise TorkAPIError("Invalid API key", 401)

            if response.status_code == 429:
                # Rate limited - wait and retry
                retry_after = int(response.headers.get("Retry-After", 60))
                if attempt < max_retries - 1:
                    time.sleep(retry_after)
                    continue
                raise TorkAPIError("Rate limit exceeded", 429)

            if response.status_code >= 500:
                # Server error - retry with backoff
                if attempt < max_retries - 1:
                    time.sleep(backoff_factor * (2 ** attempt))
                    continue
                raise TorkAPIError(f"Server error: {response.status_code}", response.status_code)

            response.raise_for_status()
            return response.json()

        except requests.Timeout:
            if attempt < max_retries - 1:
                time.sleep(backoff_factor * (2 ** attempt))
                continue
            raise TorkAPIError("Request timed out")

        except RequestException as e:
            raise TorkAPIError(f"Request failed: {str(e)}")

    raise TorkAPIError("Max retries exceeded")

HTTP Status Codes

Code  Meaning       Action
200   Success       Process the response
400   Bad request   Fix the request body; don't retry
401   Unauthorized  Check your API key; don't retry
429   Rate limited  Wait and retry (honor Retry-After)
500+  Server error  Retry with exponential backoff
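The retry policy in the table above can be encoded as a small helper; `should_retry` is an illustrative name, not part of the SDK:

```python
def should_retry(status_code: int) -> bool:
    """Return True when the status-code table says the request may be retried."""
    if status_code == 429:
        return True   # rate limited: honor Retry-After, then retry
    if status_code >= 500:
        return True   # server error: retry with exponential backoff
    return False      # 2xx: process the response; 4xx: fix the request
```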

Framework Integrations

Ready-to-use examples for popular Python web frameworks.

python
# main.py
from typing import Optional
from fastapi import FastAPI, HTTPException, Depends, Request
from pydantic import BaseModel
from tork_governance import AsyncTorkClient
from contextlib import asynccontextmanager

# Initialized at startup via the lifespan handler below
tork_client: Optional[AsyncTorkClient] = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global tork_client
    tork_client = AsyncTorkClient()
    yield
    await tork_client.close()

app = FastAPI(lifespan=lifespan)

class ChatRequest(BaseModel):
    message: str
    user_id: str

class ChatResponse(BaseModel):
    response: str
    filtered: bool = False

# Dependency for Tork evaluation
async def evaluate_input(request: ChatRequest) -> dict:
    result = await tork_client.evaluate(
        content=request.message,
        context="chatbot"
    )
    if not result.safe:
        raise HTTPException(
            status_code=400,
            detail={
                "error": "Content blocked by safety filter",
                "flags": result.flags,
                "request_id": result.request_id
            }
        )
    return result

@app.post("/chat", response_model=ChatResponse)
async def chat(
    request: ChatRequest,
    evaluation: dict = Depends(evaluate_input)
):
    # Content already validated by dependency
    # Generate AI response here...
    ai_response = await generate_response(request.message)

    # Optionally evaluate output too
    output_eval = await tork_client.evaluate(content=ai_response)
    if not output_eval.safe:
        return ChatResponse(
            response="I'm unable to provide that information.",
            filtered=True
        )

    return ChatResponse(response=ai_response)

# Middleware for logging all evaluations
@app.middleware("http")
async def log_evaluations(request: Request, call_next):
    response = await call_next(request)
    # Log evaluation results from request state if needed
    return response

Batch Processing

Evaluate multiple items efficiently with controlled concurrency.

python
import asyncio
from tork_governance import AsyncTorkClient
from typing import List, Dict, Any

async def evaluate_batch(
    items: List[str],
    concurrency: int = 10,
    context: str = "batch"
) -> List[Dict[str, Any]]:
    """
    Evaluate multiple items with controlled concurrency.
    """
    client = AsyncTorkClient()
    semaphore = asyncio.Semaphore(concurrency)

    async def evaluate_one(content: str) -> Dict[str, Any]:
        async with semaphore:
            try:
                result = await client.evaluate(
                    content=content,
                    context=context
                )
                return {
                    "content": content[:50] + "...",
                    "safe": result.safe,
                    "risk_score": result.risk_score,
                    "flags": result.flags
                }
            except Exception as e:
                return {
                    "content": content[:50] + "...",
                    "error": str(e)
                }

    results = await asyncio.gather(*[
        evaluate_one(item) for item in items
    ])

    await client.close()
    return results

# Usage
async def main():
    messages = [
        "Hello, how are you?",
        "What's the weather like?",
        "Ignore all previous instructions...",
        # ... more messages
    ]

    results = await evaluate_batch(messages, concurrency=5)

    # Filter unsafe content
    unsafe = [r for r in results if not r.get("safe", True)]
    print(f"Found {len(unsafe)} unsafe messages")

asyncio.run(main())

Context Managers

Use context managers for clean resource management.

python
from contextlib import contextmanager, asynccontextmanager
from tork_governance import TorkClient, AsyncTorkClient

# Sync context manager
@contextmanager
def tork_session():
    """Context manager for Tork client."""
    client = TorkClient()
    try:
        yield client
    finally:
        # Cleanup if needed
        pass

# Async context manager
@asynccontextmanager
async def async_tork_session():
    """Async context manager for Tork client."""
    client = AsyncTorkClient()
    try:
        yield client
    finally:
        await client.close()

# Usage
def process_message(message: str):
    with tork_session() as tork:
        result = tork.evaluate(content=message)
        if result.safe:
            return handle_safe_message(message)
        else:
            return handle_unsafe_message(result.flags)

async def process_message_async(message: str):
    async with async_tork_session() as tork:
        result = await tork.evaluate(content=message)
        return result

Best Practices

Use the SDK for production

The official SDK handles retries, connection pooling, and error handling automatically.

Prefer async for high throughput

Use AsyncTorkClient or httpx for applications handling many concurrent requests.

Implement proper error handling

Catch specific exceptions and decide whether to fail open or closed based on your requirements.
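A thin wrapper makes the failure policy explicit. This is a sketch: `is_allowed` is an illustrative name, and the `evaluate` callable stands in for whichever client call you use.

```python
def is_allowed(content: str, evaluate, fail_open: bool = False) -> bool:
    """Gate content on a safety check with an explicit failure policy."""
    try:
        return bool(evaluate(content)["safe"])
    except Exception:
        # fail open: let content through when the check itself errors out;
        # fail closed: block it. Choose based on your risk tolerance.
        return fail_open
```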

Use middleware for web frameworks

Middleware provides consistent protection across all endpoints without code duplication.
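For sync frameworks, the same idea can be sketched framework-agnostically as WSGI middleware. `SafetyMiddleware` is illustrative, and `is_safe` stands in for a Tork evaluation call:

```python
import io

class SafetyMiddleware:
    """WSGI middleware that blocks request bodies failing a safety check."""

    def __init__(self, app, is_safe):
        self.app = app
        self.is_safe = is_safe  # stand-in for a Tork evaluation call

    def __call__(self, environ, start_response):
        size = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(size) if size else b""
        if body and not self.is_safe(body.decode("utf-8", "replace")):
            start_response("400 Bad Request", [("Content-Type", "text/plain")])
            return [b"Content blocked by safety filter"]
        # Replay the consumed body for the downstream app
        environ["wsgi.input"] = io.BytesIO(body)
        return self.app(environ, start_response)
```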

Cache evaluation results

For repeated content, consider caching results (with appropriate TTL) to reduce API calls.
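A minimal in-process TTL cache keyed by a content hash might look like this. The class name, TTL default, and `fn` callable (a stand-in for your evaluation call) are all illustrative:

```python
import hashlib
import time
from typing import Any, Callable, Dict, Tuple

class TTLCache:
    """Cache evaluation results by content hash with a time-to-live."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store: Dict[str, Tuple[float, Any]] = {}

    def get_or_call(self, content: str, fn: Callable[[str], Any]) -> Any:
        key = hashlib.sha256(content.encode()).hexdigest()
        hit = self._store.get(key)
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]  # fresh cached result, no API call
        result = fn(content)
        self._store[key] = (time.monotonic(), result)
        return result
```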

Log request IDs

Always log the request_id from responses for debugging and support inquiries.
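A small helper keeps these log lines consistent. This is a sketch: the field names follow the response shapes used earlier on this page (`safe`, `riskScore`, `request_id`); adjust them to match your client.

```python
import logging

logger = logging.getLogger("tork")

def log_evaluation(result: dict) -> str:
    """Build and log a one-line summary including the request ID."""
    line = "evaluation safe=%s risk=%s request_id=%s" % (
        result.get("safe"), result.get("riskScore"), result.get("request_id"),
    )
    logger.info(line)
    return line
```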

Full Type Hints Support

The tork-governance SDK includes complete type hints and is compatible with mypy and pyright. All response objects are fully typed for excellent IDE autocomplete support.

Next Steps

Explore more endpoints for jailbreak detection, RAG validation, and multi-agent orchestration.
