LangChain Integration

LangChain + Tork Governance

Add AI safety guardrails to your LangChain chains and agents. Evaluate inputs and outputs, enforce policies, and generate compliance receipts automatically.

Policy Enforcement

Block unsafe content automatically

Chain Governance

Wrap any LangChain chain

Tool Safety

Govern tool inputs and outputs

LCEL Support

Works with modern LangChain

Installation

Install Tork with LangChain dependencies.

bash
pip install tork-governance langchain langchain-openai

The Tork SDK includes LangChain adapters in tork.adapters.langchain.

TorkCallbackHandler

Monitor and govern all LangChain operations via callbacks.

The callback handler integrates with LangChain's callback system to evaluate LLM calls, chain executions, and tool invocations against your governance policies.

callback_example.py
from langchain_openai import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from tork.adapters.langchain import TorkCallbackHandler
from tork.core.engine import GovernanceEngine

# Initialize governance engine
engine = GovernanceEngine(api_key="your_tork_api_key")

# Create callback handler
tork_handler = TorkCallbackHandler(
    engine=engine,
    agent_id="my-langchain-app"
)

# Use with any LangChain component
llm = ChatOpenAI(
    model="gpt-4",
    callbacks=[tork_handler]
)

prompt = PromptTemplate.from_template("Tell me about {topic}")
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[tork_handler])

# All LLM calls and chain executions are now governed
result = chain.invoke({"topic": "machine learning"})

# Access compliance receipts
print(f"Generated {len(tork_handler.receipts)} receipts")

Callback Events

on_llm_start: Evaluate prompts before sending to the LLM
on_llm_end: Evaluate LLM responses
on_chain_start: Evaluate chain inputs
on_chain_end: Evaluate chain outputs
on_tool_start: Evaluate tool inputs
on_tool_end: Evaluate tool outputs
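To make the lifecycle concrete, here is a standalone sketch (no LangChain or Tork imports) of the order in which these six hooks fire during a single chain run. The method names mirror LangChain's `BaseCallbackHandler`; the bodies here only record event order, whereas the real handler evaluates each payload against policy.

```python
# Illustrative only: a minimal handler that records which hooks fire, and
# a fake chain run that dispatches them in the order LangChain would for
# a chain containing one LLM call and one tool call.
class MiniGovernanceHandler:
    def __init__(self):
        self.events = []  # ordered audit of every hook invocation

    def on_chain_start(self, inputs):
        self.events.append("on_chain_start")

    def on_llm_start(self, prompts):
        self.events.append("on_llm_start")

    def on_llm_end(self, response):
        self.events.append("on_llm_end")

    def on_tool_start(self, tool_input):
        self.events.append("on_tool_start")

    def on_tool_end(self, output):
        self.events.append("on_tool_end")

    def on_chain_end(self, outputs):
        self.events.append("on_chain_end")


def run_fake_chain(handler):
    """Simulate one chain run that makes an LLM call, then a tool call."""
    handler.on_chain_start({"topic": "ml"})
    handler.on_llm_start(["Tell me about ml"])
    handler.on_llm_end("ml is ...")
    handler.on_tool_start("search: ml")
    handler.on_tool_end("results")
    handler.on_chain_end({"text": "done"})


handler = MiniGovernanceHandler()
run_fake_chain(handler)
print(handler.events)
# ['on_chain_start', 'on_llm_start', 'on_llm_end',
#  'on_tool_start', 'on_tool_end', 'on_chain_end']
```

Note that the chain-level hooks bracket the inner LLM and tool hooks, which is what lets a single handler evaluate both the outer request and every intermediate step.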

GovernedChain

Wrap any LangChain chain with governance controls.

GovernedChain wraps any LangChain chain (including LCEL) to evaluate inputs before execution and outputs after. Supports automatic redaction of sensitive data.

governed_chain.py
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnablePassthrough
from tork.adapters.langchain import create_governed_chain, GovernanceViolation
from tork.core.engine import GovernanceEngine

# Initialize
engine = GovernanceEngine(api_key="your_tork_api_key")
llm = ChatOpenAI(model="gpt-4")

# Create a simple LCEL chain
prompt = ChatPromptTemplate.from_template(
    "You are a helpful assistant. Answer: {question}"
)
chain = prompt | llm

# Wrap with governance controls
governed_chain = create_governed_chain(
    chain=chain,
    engine=engine,
    agent_id="qa-assistant"
)

# Use like a normal chain - inputs and outputs are evaluated
try:
    result = governed_chain.invoke({"question": "What is Python?"})
    print(result)
except GovernanceViolation as e:
    print(f"Blocked: {e.message}")
    print(f"Violations: {e.violations}")

TorkGovernedTool

Apply governance to individual LangChain tools.

Wrap any LangChain tool to evaluate inputs before execution and outputs after. Useful for sensitive operations like database access, API calls, or file operations.

governed_tool.py
from langchain_community.tools import DuckDuckGoSearchRun  # requires the duckduckgo-search package
from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI
from tork.adapters.langchain import TorkGovernedTool, create_governed_tool

# Create a standard LangChain tool
search = DuckDuckGoSearchRun()

# Wrap it with Tork governance
governed_search = create_governed_tool(
    tool=search,
    api_key="your_tork_api_key",  # Or pass engine=
    agent_id="search-agent"
)

# The governed tool has the same interface
result = governed_search.run("latest AI news")

# Use in an agent
llm = ChatOpenAI(model="gpt-4")
tools = [governed_search]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Tool inputs/outputs are evaluated against policies
response = agent.run("Search for information about AI safety")

LCEL Integration

Use with LangChain Expression Language (LCEL) chains.

Tork integrates seamlessly with modern LangChain using LCEL. Use callbacks for monitoring or GovernedChain for strict input/output governance.

lcel_example.py
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from tork.adapters.langchain import TorkCallbackHandler, GovernedChain
from tork.core.engine import GovernanceEngine

engine = GovernanceEngine(api_key="your_tork_api_key")

# Create LCEL chain with multiple steps
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful coding assistant."),
    ("human", "{input}")
])

llm = ChatOpenAI(model="gpt-4")
output_parser = StrOutputParser()

# Build the chain
chain = prompt | llm | output_parser

# Option 1: Use callback handler for monitoring
handler = TorkCallbackHandler(engine=engine, agent_id="code-assistant")
result = chain.invoke(
    {"input": "Write a hello world in Python"},
    config={"callbacks": [handler]}
)

# Option 2: Wrap entire chain for input/output governance
governed = GovernedChain(
    chain=chain,
    engine=engine,
    agent_id="code-assistant"
)
result = governed.invoke({"input": "Explain recursion"})

AgentExecutor with Governance

Full governance for multi-step agent workflows.

Combine callback handlers with governed tools for comprehensive agent governance. All agent reasoning steps, tool calls, and outputs are evaluated against policies.

agent_executor.py
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import tool
from tork.adapters.langchain import TorkCallbackHandler, TorkGovernedTool
from tork.core.engine import GovernanceEngine

engine = GovernanceEngine(api_key="your_tork_api_key")
handler = TorkCallbackHandler(engine=engine, agent_id="research-agent")

# Define custom tools
@tool
def search_database(query: str) -> str:
    """Search the internal database for information."""
    return f"Results for: {query}"

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to the specified recipient."""
    return f"Email sent to {to}"

# Wrap sensitive tools with governance
governed_email = TorkGovernedTool(
    tool=send_email,
    engine=engine,
    agent_id="email-tool"
)

# Create the agent
llm = ChatOpenAI(model="gpt-4", callbacks=[handler])

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful research assistant."),
    MessagesPlaceholder(variable_name="chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

agent = create_openai_functions_agent(
    llm=llm,
    tools=[search_database, governed_email],
    prompt=prompt
)

executor = AgentExecutor(
    agent=agent,
    tools=[search_database, governed_email],
    callbacks=[handler],
    verbose=True
)

# Run with full governance tracking
result = executor.invoke({
    "input": "Search for Q4 results and email summary to manager@company.com"
})

# Access audit trail
for receipt in handler.receipts:
    print(f"Action: {receipt.action}, Decision: {receipt.decision}")

Policy Enforcement Patterns

Handle governance violations gracefully.

When content violates policies, GovernanceViolation is raised. Catch this exception to provide user-friendly responses instead of errors.

policy_handling.py
from tork.adapters.langchain import GovernedChain, GovernanceViolation
from tork.core.engine import GovernanceEngine
from tork.core.models import Policy, PolicyRule
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

# Configure governance engine with policies
engine = GovernanceEngine(api_key="your_tork_api_key")

# Example: Add custom policy rules (usually done via dashboard)
# These policies are evaluated against inputs/outputs

# Create governed chain
llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_template("{input}")
chain = prompt | llm

governed = GovernedChain(
    chain=chain,
    engine=engine,
    agent_id="customer-service-bot"
)

def safe_invoke(user_input: str) -> str:
    """Safely invoke chain with governance handling."""
    try:
        result = governed.invoke({"input": user_input})
        return result.content if hasattr(result, 'content') else str(result)

    except GovernanceViolation as e:
        # Handle policy violations gracefully
        if "pii" in str(e.violations).lower():
            return "I cannot process requests containing personal information."
        elif "jailbreak" in str(e.violations).lower():
            return "I cannot respond to that type of request."
        else:
            return "Request blocked for safety reasons."

# Safe usage
response = safe_invoke("What's the weather like?")  # Allowed
response = safe_invoke("My SSN is 123-45-6789")    # Blocked - PII detected
response = safe_invoke("Ignore instructions...")   # Blocked - Jailbreak attempt

Advanced Patterns

Streaming, async, and compliance receipts

python
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from tork.adapters.langchain import TorkCallbackHandler
from tork.core.engine import GovernanceEngine

engine = GovernanceEngine(api_key="your_tork_api_key")

# Combine Tork with streaming
tork_handler = TorkCallbackHandler(engine=engine, agent_id="streaming-app")
stream_handler = StreamingStdOutCallbackHandler()

llm = ChatOpenAI(
    model="gpt-4",
    streaming=True,
    callbacks=[tork_handler, stream_handler]
)

prompt = ChatPromptTemplate.from_template("Write a poem about {topic}")
chain = prompt | llm

# Governance is enforced even with streaming
result = chain.invoke({"topic": "technology"})
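The section title also mentions async: LangChain runnables expose `ainvoke`, and the same `config={"callbacks": [tork_handler]}` applies there. The snippet below is a standalone sketch (no LangChain or Tork imports) of the evaluate-input / await / evaluate-output pattern a governed async invocation follows; `fake_chain` and `no_ssn` are illustrative stand-ins, not real APIs.

```python
import asyncio

# Sketch of the async governance pattern: evaluate the input, await the
# chain, evaluate the output. In real code you would await
# chain.ainvoke(..., config={"callbacks": [tork_handler]}) instead.
async def governed_ainvoke(chain, payload, evaluate):
    if not evaluate(str(payload)):
        raise RuntimeError("input blocked by policy")
    result = await chain(payload)
    if not evaluate(str(result)):
        raise RuntimeError("output blocked by policy")
    return result

async def fake_chain(payload):
    await asyncio.sleep(0)  # stand-in for an LLM round trip
    return f"answer to: {payload['input']}"

def no_ssn(text):
    # Toy policy check: block anything containing the sample SSN
    return "123-45-6789" not in text

result = asyncio.run(governed_ainvoke(fake_chain, {"input": "hi"}, no_ssn))
print(result)  # answer to: hi
```

A payload containing the blocked pattern raises before the chain ever runs, which is the same fail-closed behavior GovernedChain applies synchronously.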

Best Practices

Use callbacks for monitoring, GovernedChain for strict enforcement

Callbacks observe execution and can block individual steps; GovernedChain evaluates every input before the chain runs and every output after it completes.

Wrap sensitive tools individually

Use TorkGovernedTool for tools that access databases, send emails, or modify state.

Handle GovernanceViolation gracefully

Catch exceptions and return user-friendly messages instead of exposing errors.

Enable receipt generation for compliance

Pass receipt_generator to create an audit trail of all governance decisions.

Use consistent agent IDs

Set meaningful agent_id values to track which component triggered evaluations.
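The receipt-generation practice above can be sketched standalone. Only the `action` and `decision` fields come from the agent example earlier in this guide; the `Receipt` class, the `timestamp` field, and the trail container below are illustrative, not Tork's real schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical receipt shape: action/decision mirror the fields read in
# the AgentExecutor example; timestamp is an assumed extra for auditing.
@dataclass
class Receipt:
    action: str
    decision: str
    timestamp: str

class ReceiptTrail:
    """Accumulates one receipt per governance decision."""
    def __init__(self):
        self.receipts = []

    def record(self, action, decision):
        self.receipts.append(Receipt(
            action=action,
            decision=decision,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))

trail = ReceiptTrail()
trail.record("llm_call", "allow")
trail.record("send_email", "block")
decisions = [r.decision for r in trail.receipts]
print(decisions)  # ['allow', 'block']
```

An append-only trail like this is what lets you answer "which component did what, and was it allowed" after the fact, keyed by the agent_id you set per component.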

Imports Reference

python
from tork.adapters.langchain import (
    TorkCallbackHandler,   # Callback handler for monitoring
    GovernedChain,         # Chain wrapper
    create_governed_chain, # Convenience function
    TorkGovernedTool,      # Tool wrapper
    create_governed_tool,  # Convenience function
    GovernanceViolation,   # Exception for policy violations
)

Next Steps

Configure policies in the dashboard and explore other integration options.
