Feature
One governance layer for all your AI. 100 native adapters spanning 11 language SDKs. Write policies once, enforce everywhere.
Native integration across 11 SDKs: Python, JavaScript, Ruby, Go, Rust, Java, PHP, .NET, Swift, Elixir, and Kotlin
[Adapter grid - the framework names were rendered as card graphics and did not survive extraction; only the card taglines remain, grouped by the category headers that are still visible.]
LLM Provider SDKs: Direct API, Claude API, Gemini API, Multi-model, Enterprise, Chat/Embed
Orchestration: RAG pipelines, NLP pipelines, Microsoft SDK, 100+ LLMs, Local LLMs, High-perf
Multi-agent: Microsoft, Swarm SDK, Multi-agent, Task-driven, Autonomous, Browser
Remaining taglines (original order): Type-safe, Structured, Declarative, Templates, Query lang, Constrained, AI functions, Validation, NVIDIA, Visual, No-code, LLMOps, AI-native, Managed, Vector search, Similarity, Scalable, Tracing, Analytics, Arize, Monitoring, Experiments, ML Ops, Gateway, Prompts, Optimize, Async, Full-stack, Micro, Node.js web, Fast & low overhead, Next-gen Node, Ultrafast web, Plugin-based, Enterprise Node, React framework, Full stack React, Svelte framework, Vue framework, Content sites, Bun framework, Deno framework, Bun native, JS chains, AI SDK, Type-safe APIs, GraphQL server, Real-time, WebSocket, Full-stack Ruby, Micro framework, API framework, HTTP framework, High perf web, Express-like, Lightweight router, Powerful actor, Tokio-based, Type-safe web, Enterprise Java, Cloud native, Modular JVM, Full-stack PHP, Enterprise PHP, Standalone, Web framework, Standalone
```python
from tork.adapters import LangChainAdapter
from langchain.chains import LLMChain

adapter = LangChainAdapter("policy.yaml")

# Wrap any chain
chain = LLMChain(llm=llm, prompt=prompt)
governed_chain = adapter.wrap(chain)

# Use normally - governance is automatic
result = governed_chain.run("User input here")
```

```python
from tork.adapters import CrewAIAdapter
from crewai import Crew, Agent

adapter = CrewAIAdapter("policy.yaml")

# Wrap entire crew
crew = Crew(agents=[agent1, agent2], tasks=[task1])
governed_crew = adapter.wrap(crew)

# Agent communications are governed
result = governed_crew.kickoff()
```

```python
from tork.adapters import OpenAIAdapter
from openai import OpenAI

adapter = OpenAIAdapter("policy.yaml")
client = OpenAI()

# Wrap the client
governed_client = adapter.wrap(client)

# All completions are governed
response = governed_client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
```

Same policy.yaml works across all frameworks. No rewriting rules when you switch.
Whether you're running LangChain in production or testing with raw OpenAI calls, the same governance applies.
Move from one framework to another without rebuilding your compliance layer.
Running CrewAI for agents and LangChain for RAG? One Tork instance governs both.
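As a concrete illustration, a shared policy file might look like the sketch below. The key names and structure here are assumptions for illustration only, not Tork's documented schema:

```yaml
# Hypothetical policy.yaml - keys are illustrative, not Tork's real schema
policies:
  - name: redact-pii
    applies_to: [inputs, outputs]
    action: redact
    patterns: [email, ssn, credit_card]
  - name: block-prohibited-topics
    applies_to: [inputs]
    action: block
    topics: [self_harm, malware]
```

Because every adapter reads the same file, a rule added here would apply to the LangChain, CrewAI, and OpenAI examples alike.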
New framework released? Our SDK lets you add governance in hours, not weeks.
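The wrap-and-intercept pattern the adapters rely on can be sketched generically. Everything below is a hypothetical illustration - the class, method names, and policy keys are assumptions, not Tork's actual SDK:

```python
# Hypothetical sketch - NOT Tork's real adapter API, just the general
# wrap-and-intercept pattern a framework adapter follows.

class GovernedCallable:
    """Wraps any callable and enforces simple input-side policy checks."""

    def __init__(self, fn, policy):
        self.fn = fn
        self.policy = policy  # e.g. a dict loaded from policy.yaml

    def __call__(self, prompt: str) -> str:
        # Input-side check: refuse prompts containing blocked terms
        for term in self.policy.get("blocked_terms", []):
            if term.lower() in prompt.lower():
                raise ValueError(f"policy violation: blocked term {term!r}")
        result = self.fn(prompt)
        # Output-side checks (redaction, length caps, ...) would go here
        return result


def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call
    return f"echo: {prompt}"


governed = GovernedCallable(fake_llm, {"blocked_terms": ["ssn"]})
print(governed("Hello"))  # -> echo: Hello
```

Wrapping at the call boundary like this is why a new framework can be covered quickly: only the thin adapter changes, while the policy checks stay shared.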