Tork Governance Microservice
Deploy Tork as a standalone HTTP service for non-Python applications. Call it from Node.js, Go, Java, or any other language that can make HTTP requests.
Language Agnostic
Call from any language via HTTP
Docker Ready
Containerized deployment
Scalable
Horizontal scaling with load balancing
Observable
Health checks and metrics
Architecture Pattern
Deploy Tork as a sidecar container or as a centralized service. Your application calls it over a REST API.
FastAPI Service
Complete governance microservice implementation.
Dependencies
Python package requirements.
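A plausible requirements file for the sketch above; the version pins are placeholders, not tested minimums:

```text
fastapi>=0.110
uvicorn[standard]>=0.29
pydantic>=2.6
prometheus-client>=0.20
```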
Docker Deployment
Containerize and deploy with Docker or Kubernetes.
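A minimal Dockerfile sketch; the module path `app.main:app` and port are assumptions to adapt to your layout:

```dockerfile
# Sketch only; adjust module path and port to your project.
FROM python:3.12-slim
WORKDIR /srv
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```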
Client Examples
Call the service from any programming language.
Sidecar Pattern
Deploy Tork alongside your application containers.
Run Tork as a sidecar container that your app reaches over localhost. This keeps request latency low and keeps governance traffic off the external network.
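A Kubernetes pod spec sketch for the sidecar pattern; image names, ports, and resource figures are placeholders:

```yaml
# Sketch only; image names and limits are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest                 # your application container
      env:
        - name: TORK_URL
          value: "http://localhost:8000"   # sidecar is reachable on localhost
    - name: tork
      image: my-org/tork-service:latest    # placeholder image name
      ports:
        - containerPort: 8000
      resources:
        limits:
          cpu: "500m"
          memory: "256Mi"
```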
Load Balancing
Scale horizontally with load balancing.
For high-throughput scenarios, deploy multiple Tork instances behind a load balancer.
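One way to front multiple instances is an nginx upstream; the backend hostnames below are placeholders:

```nginx
# Sketch only; backend addresses are placeholders.
upstream tork_backends {
    least_conn;                  # route to the least-busy instance
    server tork-1:8000;
    server tork-2:8000;
    server tork-3:8000;
    keepalive 32;                # reuse upstream connections
}

server {
    listen 80;
    location / {
        proxy_pass http://tork_backends;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";
    }
}
```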
Environment Configuration
Configure the service via environment variables.
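A small stdlib sketch of reading settings with safe defaults; the `TORK_*` variable names are illustrative, not an official contract:

```python
import os

# Variable names below are illustrative; match them to your deployment.
def load_config(env=None):
    """Read service settings from environment variables with safe defaults."""
    env = os.environ if env is None else env
    return {
        "host": env.get("TORK_HOST", "0.0.0.0"),
        "port": int(env.get("TORK_PORT", "8000")),
        "policy_path": env.get("TORK_POLICY_PATH", "/etc/tork/policies.yaml"),
        "log_level": env.get("TORK_LOG_LEVEL", "info"),
    }
```

Accepting an explicit `env` mapping keeps the loader trivially testable without mutating the process environment.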
Observability
Add Prometheus metrics for monitoring.
Instrument the service so you can track request rates, latencies, and error counts, and alert on regressions.
API Endpoints
Production Best Practices
Use health checks
Configure liveness and readiness probes for your orchestrator (Kubernetes, ECS, etc.).
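In Kubernetes this might look like the following; the `/health` path, port, and timings are assumptions:

```yaml
# Probe sketch; path, port, and timings are assumptions.
livenessProbe:
  httpGet:
    path: /health
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: 8000
  periodSeconds: 5
```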
Set resource limits
Define CPU and memory limits to prevent resource exhaustion.
Enable connection pooling
Use HTTP keepalive and connection pooling in your clients for better performance.
Implement retry logic
Add exponential backoff retries for transient failures.
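A stdlib sketch of exponential backoff with jitter; the retriable exception set and timings are illustrative:

```python
import random
import time

def with_retries(fn, attempts=4, base_delay=0.2,
                 retriable=(ConnectionError, TimeoutError)):
    """Call fn, retrying transient failures with exponential backoff + jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise  # exhausted; surface the last failure
            # 0.2s, 0.4s, 0.8s, ... plus jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))
```

Only retry errors you know are transient; retrying a policy denial would just repeat the same answer.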
Use batch endpoints
For high throughput, use /batch/evaluate to process multiple requests in one call.
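Batching amortizes per-request HTTP overhead. The payload field names below are assumptions about the /batch/evaluate body, shown with a helper that caps batch size:

```python
import json

# Field names are assumptions about the /batch/evaluate payload shape.
def build_batch_payload(prompts, user_id):
    """Serialize many governance checks into one request body."""
    return json.dumps({
        "user_id": user_id,
        "items": [{"id": i, "prompt": p} for i, p in enumerate(prompts)],
    })

def chunked(items, size=50):
    """Cap batch size so a single call stays within request-size limits."""
    for start in range(0, len(items), size):
        yield items[start:start + size]
```

Per-item `id` fields let you correlate each decision in the response with the prompt that produced it.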
Monitor metrics
Collect Prometheus metrics and set up alerts for latency and error rates.
Next Steps
Configure policies in the dashboard or explore native SDK integrations.