OpenAI Agents SDK Integration

OpenAI Agents + Tork Governance

Add AI safety guardrails to OpenAI Agents SDK. Validate inputs and outputs, block dangerous tool calls, and maintain compliance with automatic receipts.

I/O Governance: validate all inputs and outputs
Tool Safety: block dangerous tool calls
Streaming: works with streaming responses
Compliance: automatic audit receipts

Installation

Install Tork with OpenAI Agents SDK dependencies.

bash
pip install tork-governance openai-agents

The Tork SDK includes OpenAI Agents adapters in tork.adapters.openai_agents.

TorkOpenAIAgentsMiddleware

Central middleware for governing OpenAI Agents.

The middleware wraps OpenAI Agents to evaluate inputs before the agent runs and outputs before they are returned. Use wrap_agent() to add governance to any agent.

middleware_example.py
from openai_agents import Agent, Runner
from tork.adapters.openai_agents import TorkOpenAIAgentsMiddleware
from tork.core.engine import GovernanceEngine

# Initialize governance engine
engine = GovernanceEngine(api_key="your_tork_api_key")

# Create middleware
middleware = TorkOpenAIAgentsMiddleware(
    engine=engine,
    agent_id="my-openai-agent"
)

# Create an OpenAI Agent
agent = Agent(
    name="research-assistant",
    instructions="You are a helpful research assistant.",
    model="gpt-4"
)

# Wrap with governance
governed_agent = middleware.wrap_agent(agent)

# Run with governance - inputs and outputs are evaluated
result = governed_agent.run("What are the latest developments in AI safety?")

print(result["output"])  # Governed output
print(result["receipt"])  # Compliance receipt

GovernedOpenAIAgent

Wrapper that adds governance to any OpenAI Agent.

GovernedOpenAIAgent wraps an agent and intercepts all runs. Exceptions are raised when policies are violated, allowing graceful error handling.

governed_agent.py
from openai_agents import Agent
from tork.adapters.openai_agents import TorkOpenAIAgentsMiddleware, GovernedOpenAIAgent
from tork.adapters.openai_agents.exceptions import InputBlockedError, OutputBlockedError

# Create middleware
middleware = TorkOpenAIAgentsMiddleware(
    api_key="your_tork_api_key",
    agent_id="customer-service"
)

# Create and wrap agent
agent = Agent(
    name="support-bot",
    instructions="""You are a customer support assistant.
    Help customers with their questions professionally.
    Never share internal company information.""",
    model="gpt-4"
)

governed_agent = GovernedOpenAIAgent(agent, middleware)

# Handle governance violations gracefully
def safe_run(user_input: str) -> str:
    try:
        result = governed_agent.run(user_input)
        return result["output"]

    except InputBlockedError as e:
        # User input violated policy (e.g., PII, jailbreak attempt)
        return "I cannot process that request. Please rephrase without personal information."

    except OutputBlockedError as e:
        # Agent output violated policy
        return "I apologize, but I cannot provide that information."

# Usage
response = safe_run("How do I reset my password?")  # Allowed
response = safe_run("My SSN is 123-45-6789")        # Blocked - PII detected

Tool Call Governance

Validate and block dangerous tool invocations.

Use check_tool_call() to validate tool calls before execution. Dangerous tools like shell execution are blocked automatically.

tool_governance.py
from openai_agents import Agent, function_tool
from tork.adapters.openai_agents import TorkOpenAIAgentsMiddleware
from tork.adapters.openai_agents.exceptions import ToolCallBlockedError

middleware = TorkOpenAIAgentsMiddleware(
    api_key="your_tork_api_key",
    agent_id="tool-agent"
)

# Define tools for the agent
@function_tool
def search_database(query: str) -> str:
    """Search the customer database."""
    return f"Found results for: {query}"

@function_tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a customer."""
    return f"Email sent to {to}"

@function_tool
def execute_sql(query: str) -> str:
    """Execute a SQL query on the database."""
    # This would run SQL - governance checks the query
    return f"Executed: {query}"

# Validate tool calls before execution
def safe_tool_call(tool_name: str, tool_args: dict) -> dict:
    try:
        # Check if tool call is allowed
        result = middleware.check_tool_call(tool_name, tool_args)
        return {"allowed": True, "args": tool_args}

    except ToolCallBlockedError as e:
        return {"allowed": False, "reason": str(e)}

# Examples
safe_tool_call("search_database", {"query": "customer@email.com"})
# ^ Allowed

safe_tool_call("execute_sql", {"query": "DROP TABLE users;"})
# ^ Blocked - destructive SQL

safe_tool_call("shell", {"command": "rm -rf /"})
# ^ Blocked - dangerous tool name

# Built-in dangerous tools are blocked automatically:
# shell, exec, eval, subprocess, os.system, run_command,
# execute_code, file_delete, rm, drop_table

Blocked Tools (Default)

python
# Built-in dangerous tools that are automatically blocked:

DANGEROUS_TOOLS = {
    "shell",           # Shell command execution
    "exec",            # Code execution
    "eval",            # Expression evaluation
    "subprocess",      # Subprocess spawning
    "os.system",       # OS command execution
    "run_command",     # Generic command runner
    "execute_code",    # Code execution
    "file_delete",     # File deletion
    "rm",              # Remove files
    "drop_table",      # Database table deletion
}

# These are blocked by default. Configure additional
# restrictions via the Tork dashboard or API.
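
If you want to reject additional tool names before they ever reach the middleware, a client-side pre-check can merge the defaults with your own blocklist. The sketch below copies the default set from the reference above; the extra names and the helper function are hypothetical illustrations, not part of the Tork API.

```python
# Default blocklist copied from the reference above.
DEFAULT_DANGEROUS_TOOLS = {
    "shell", "exec", "eval", "subprocess", "os.system",
    "run_command", "execute_code", "file_delete", "rm", "drop_table",
}

# Hypothetical app-specific additions - example names only.
CUSTOM_BLOCKED_TOOLS = {"wire_transfer", "delete_account"}

def is_tool_blocked(tool_name: str) -> bool:
    """Return True if the tool name appears in either blocklist."""
    return tool_name.lower() in (DEFAULT_DANGEROUS_TOOLS | CUSTOM_BLOCKED_TOOLS)
```

Running this check before check_tool_call() lets your application fail fast on names it never wants to execute, regardless of policy configuration.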

GovernedRunner

Run multiple agents with shared governance.

Use create_governed_runner() to create a runner that can execute any agent with governance. Useful for multi-agent workflows.

runner_example.py
from openai_agents import Agent
from tork.adapters.openai_agents import TorkOpenAIAgentsMiddleware, GovernedRunner

# Create middleware
middleware = TorkOpenAIAgentsMiddleware(
    api_key="your_tork_api_key",
    agent_id="multi-agent-system"
)

# Create a governed runner for multiple agents
runner = middleware.create_governed_runner()

# Define multiple agents
research_agent = Agent(
    name="researcher",
    instructions="You research topics thoroughly.",
    model="gpt-4"
)

writer_agent = Agent(
    name="writer",
    instructions="You write clear, engaging content.",
    model="gpt-4"
)

editor_agent = Agent(
    name="editor",
    instructions="You review and improve content.",
    model="gpt-4"
)

# Run any agent with governance using the runner
research_result = runner.run(
    research_agent,
    "Research the latest trends in sustainable energy"
)

# Pass research to writer
writer_result = runner.run(
    writer_agent,
    f"Write an article based on this research: {research_result['output']}"
)

# Pass to editor
final_result = runner.run(
    editor_agent,
    f"Edit and improve this article: {writer_result['output']}"
)

# All agent outputs include governance results and receipts
print(final_result["output"])
print(f"Final receipt: {final_result['receipt'].id}")

Manual Input/Output Processing

Direct governance for custom implementations.

Use process_input() and process_output() for fine-grained control when integrating with custom agent implementations.

manual_processing.py
from tork.adapters.openai_agents import TorkOpenAIAgentsMiddleware
from tork.adapters.openai_agents.exceptions import InputBlockedError, OutputBlockedError

middleware = TorkOpenAIAgentsMiddleware(
    api_key="your_tork_api_key",
    agent_id="manual-governance"
)

# Manual input processing (for custom agent implementations)
def process_user_message(message: str) -> str:
    """Process user input with governance."""
    try:
        result = middleware.process_input(message)

        if result["result"].decision.name == "REDACT":
            # PII was redacted before reaching the agent
            print(f"Redacted content: {result['result'].violations}")

        return result["text"]  # original text, or the redacted version

    except InputBlockedError as e:
        raise ValueError(f"Input blocked: {e}")

# Manual output processing
def process_agent_response(response: str, tool_calls: list = None) -> dict:
    """Process agent output with governance."""
    try:
        result = middleware.process_output(response, tool_calls)

        return {
            "text": result["text"],
            "receipt_id": result["receipt"].id,
            "receipt_hash": result["receipt"].hash,
        }

    except OutputBlockedError as e:
        return {
            "text": "Response blocked by safety policies.",
            "blocked": True,
            "reason": str(e),
        }

# Usage
processed_input = process_user_message("Hello, how are you?")
# ^ Allowed, returns original text

processed_input = process_user_message("My credit card is 4111-1111-1111-1111")
# ^ Redacted - returns the text with the card number masked

response = process_agent_response("Here is the confidential data...")
# ^ May be blocked or redacted based on policies

Streaming Support

Governance with streaming responses.

For streaming responses, validate input before streaming starts, then validate the complete output after streaming completes.

streaming.py
from openai_agents import Agent
from tork.adapters.openai_agents import TorkOpenAIAgentsMiddleware
from tork.adapters.openai_agents.exceptions import InputBlockedError

middleware = TorkOpenAIAgentsMiddleware(
    api_key="your_tork_api_key",
    agent_id="streaming-agent"
)

agent = Agent(
    name="assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4"
)

governed_agent = middleware.wrap_agent(agent)

def stream_with_governance(user_input: str):
    """Stream agent response with governance checks."""

    # Pre-validate input before streaming
    try:
        input_result = middleware.process_input(user_input)
        processed_input = input_result["text"]
    except InputBlockedError as e:
        yield f"Error: {e}"
        return

    # Stream the response (assumes the agent exposes a native streaming API)
    collected_output = []

    for chunk in agent.stream(processed_input):
        collected_output.append(chunk)
        yield chunk

    # Post-validate complete output
    full_output = "".join(collected_output)
    output_result = middleware.process_output(full_output)

    # Log the compliance receipt
    print(f"Receipt: {output_result['receipt'].id}")

# Usage
for chunk in stream_with_governance("Explain quantum computing"):
    print(chunk, end="", flush=True)

Multi-Agent Workflows

Govern complex agent teams and routing.

Use shared middleware across multiple specialized agents. All agents are governed by the same policies, ensuring consistent safety.

multi_agent.py
from openai_agents import Agent
from tork.adapters.openai_agents import TorkOpenAIAgentsMiddleware

# Shared middleware for all agents
middleware = TorkOpenAIAgentsMiddleware(
    api_key="your_tork_api_key",
    agent_id="customer-support-team"
)

# Create specialized agents
triage_agent = Agent(
    name="triage",
    instructions="""You are a triage agent. Analyze customer requests and
    route them to the appropriate specialist. Categories:
    - billing: Payment and invoice questions
    - technical: Product issues and bugs
    - sales: Pricing and upgrades""",
    model="gpt-4"
)

billing_agent = Agent(
    name="billing",
    instructions="""You handle billing questions. Never share:
    - Full credit card numbers
    - Bank account details
    - Internal pricing formulas""",
    model="gpt-4"
)

technical_agent = Agent(
    name="technical",
    instructions="""You handle technical support. Never share:
    - Internal system architecture
    - Security vulnerabilities
    - Database schemas""",
    model="gpt-4"
)

# Wrap all agents
governed_triage = middleware.wrap_agent(triage_agent)
governed_billing = middleware.wrap_agent(billing_agent)
governed_technical = middleware.wrap_agent(technical_agent)

def handle_customer_request(request: str) -> str:
    """Route and handle customer request with governance."""

    # Triage the request
    triage_result = governed_triage.run(
        f"Analyze this request and respond with just the category: {request}"
    )
    category = triage_result["output"].lower().strip()

    # Route to specialist
    if "billing" in category:
        result = governed_billing.run(request)
    elif "technical" in category:
        result = governed_technical.run(request)
    else:
        result = governed_triage.run(f"Handle directly: {request}")

    return result["output"]

# Usage
response = handle_customer_request("I need help with my invoice from last month")
print(response)

Advanced Patterns

Async execution and compliance receipts.

python
import asyncio
from openai_agents import Agent
from tork.adapters.openai_agents import TorkOpenAIAgentsMiddleware, GovernedOpenAIAgent

async def main():
    middleware = TorkOpenAIAgentsMiddleware(
        api_key="your_tork_api_key",
        agent_id="async-agent"
    )

    agent = Agent(
        name="async-assistant",
        instructions="You are a helpful assistant.",
        model="gpt-4"
    )

    governed_agent = middleware.wrap_agent(agent)

    # Async execution
    result = await governed_agent.run_async(
        "What are the benefits of async programming?"
    )

    print(result["output"])
    print(f"Receipt: {result['receipt'].id}")

    # Run multiple agents concurrently
    agents = [
        Agent(name=f"agent-{i}", instructions="Be helpful.", model="gpt-4")
        for i in range(3)
    ]

    governed_agents = [middleware.wrap_agent(a) for a in agents]

    # Concurrent execution with governance
    tasks = [
        ga.run_async(f"Answer question {i}")
        for i, ga in enumerate(governed_agents)
    ]

    results = await asyncio.gather(*tasks)

    for i, result in enumerate(results):
        print(f"Agent {i}: {result['output'][:100]}...")

asyncio.run(main())

Best Practices

Wrap all agents with shared middleware

Use a single middleware instance to ensure consistent governance across your system.

Validate tool calls before execution

Use check_tool_call() for sensitive operations. Dangerous tools are blocked by default.

Handle exceptions gracefully

Catch InputBlockedError, OutputBlockedError, and ToolCallBlockedError for user-friendly errors.

Use async for concurrent agents

The run_async() method enables efficient parallel execution of multiple agents.

Store compliance receipts

Export receipts to your audit system. Each receipt includes a cryptographic hash for integrity.
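
As a sketch of receipt export, the helper below appends each receipt's id and hash to a JSON Lines audit log. The Receipt dataclass is a stand-in for the object returned by the middleware (the examples above access only .id and .hash); any other structure is an assumption.

```python
import json
from dataclasses import dataclass

@dataclass
class Receipt:
    """Stand-in for the Tork receipt object; real receipts expose .id and .hash."""
    id: str
    hash: str

def append_receipt(receipt: Receipt, path: str = "audit_log.jsonl") -> None:
    """Append one receipt per line so the log can be tailed or bulk-loaded later."""
    with open(path, "a") as f:
        f.write(json.dumps({"id": receipt.id, "hash": receipt.hash}) + "\n")
```

An append-only JSONL file is a minimal example; the same pattern applies when shipping receipts to a database or log pipeline, with the hash preserved for later integrity checks.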

Exception Types

OpenAIAgentGovernanceError: base exception for all governance errors
InputBlockedError: raised when user input violates policies
OutputBlockedError: raised when agent output violates policies
ToolCallBlockedError: raised when a tool call is blocked
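
Because the three specific errors subclass OpenAIAgentGovernanceError, a single except clause can act as a catch-all. The sketch below uses local stand-in classes that mirror that hierarchy so it is self-contained; in real code you would import the exceptions from tork.adapters.openai_agents.

```python
# Stand-in hierarchy mirroring the exception types listed above.
class OpenAIAgentGovernanceError(Exception): pass
class InputBlockedError(OpenAIAgentGovernanceError): pass
class OutputBlockedError(OpenAIAgentGovernanceError): pass
class ToolCallBlockedError(OpenAIAgentGovernanceError): pass

def run_with_catch_all(fn):
    """Catch any governance error via the base class."""
    try:
        return fn()
    except OpenAIAgentGovernanceError as e:
        return f"Governance error ({type(e).__name__}): {e}"
```

Catching the base class is useful for generic fallbacks; catch the specific subclasses (as in the examples above) when each violation type needs a different user-facing message.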

Imports Reference

python
from tork.adapters.openai_agents import (
    TorkOpenAIAgentsMiddleware,  # Central middleware
    GovernedOpenAIAgent,         # Wrapped agent
    GovernedRunner,              # Multi-agent runner
    OpenAIAgentGovernanceError,  # Base exception
    InputBlockedError,           # Input blocked
    OutputBlockedError,          # Output blocked
    ToolCallBlockedError,        # Tool call blocked
)

Next Steps

Configure policies in the dashboard and explore other framework integrations.
