Python SDK
The ThinkHive Python SDK (thinkhive) provides decorator-based tracing for Python AI applications. Add observability to your agent with just a few lines of code.
Installation
pip install thinkhive
Or with poetry:
poetry add thinkhive
Optional Dependencies
# For LangChain integration
pip install thinkhive[langchain]
# For LlamaIndex integration
pip install thinkhive[llamaindex]
# All integrations
pip install thinkhive[all]
Requirements
- Python: 3.8 or higher
- Dependencies: opentelemetry-api, opentelemetry-sdk, requests
Quick Start
Set your API key
export THINKHIVE_API_KEY=thk_your_api_key
Initialize the SDK
import thinkhive

thinkhive.init(
    service_name="my-ai-agent",
)
Trace your first LLM call
from openai import OpenAI

client = OpenAI()

@thinkhive.trace_llm(model_name="gpt-4", provider="openai")
def chat(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}],
    )
    return response.choices[0].message.content

# Automatically traced with model, tokens, and latency
answer = chat("What is ThinkHive?")
print(answer)
View your traces
Open app.thinkhive.ai/traces to see your trace with span details, token usage, and latency breakdown.
Configuration Options
import thinkhive

thinkhive.init(
    # Required
    service_name="my-ai-agent",
    # Optional: API key (can also use the THINKHIVE_API_KEY env var)
    api_key="thk_your_api_key",
    # Optional: custom endpoint (for self-hosted instances)
    endpoint="https://app.thinkhive.ai",
    # Optional: agent identification
    agent_id="agent_123",
    # Optional: debug mode (logs trace data to the console)
    debug=True,
    # Optional: PII redaction
    pii_redact=True,
    pii_mode="redact",  # "detect", "redact", or "hash"
)
Configuration Reference
| Option | Type | Default | Description |
|---|---|---|---|
| service_name | str | Required | Identifier for your service |
| api_key | str | env THINKHIVE_API_KEY | Your ThinkHive API key |
| endpoint | str | https://app.thinkhive.ai | API endpoint URL |
| agent_id | str | None | Agent identifier |
| debug | bool | False | Enable debug logging |
| pii_redact | bool | False | Enable PII redaction |
| pii_mode | str | "redact" | PII handling mode: "detect", "redact", or "hash" |
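The SDK's actual redaction internals aren't shown here, but the three pii_mode values can be illustrated with a minimal sketch. Everything below (the regex, the apply_pii_mode helper) is hypothetical, not a ThinkHive API:

```python
import hashlib
import re

# Hypothetical illustration of the three pii_mode behaviors for one PII type.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def apply_pii_mode(text: str, mode: str) -> str:
    if mode == "detect":
        # Leave the text unchanged; only flag that PII was found.
        print(f"pii_detected={bool(EMAIL_RE.search(text))}")
        return text
    if mode == "redact":
        # Replace each match with a fixed placeholder.
        return EMAIL_RE.sub("[REDACTED]", text)
    if mode == "hash":
        # Replace each match with a stable, non-reversible token,
        # so the same address still correlates across traces.
        return EMAIL_RE.sub(
            lambda m: hashlib.sha256(m.group().encode()).hexdigest()[:12], text
        )
    raise ValueError(f"unknown pii_mode: {mode}")

print(apply_pii_mode("Contact alice@example.com", "redact"))
```

"hash" is useful when you need to group traces by user without storing the raw identifier; "detect" is useful for alerting without mutating payloads.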
Core Decorators
The SDK provides three main decorators for common AI operations:
@trace_llm — Language Model Calls
@thinkhive.trace_llm(model_name="gpt-4", provider="openai")
def generate_response(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

Captures: model, provider, token counts, latency, finish reason, errors.
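Conceptually, a decorator like this wraps the call in a span that records timing and outcome. A rough sketch of the mechanics (simplified; printing a dict where the real SDK emits an OpenTelemetry span, so this is not the SDK's actual implementation):

```python
import functools
import time

def trace_llm_sketch(model_name: str, provider: str):
    """Simplified stand-in for @trace_llm: records model, latency, and errors."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"model": model_name, "provider": provider}
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                span["status"] = "ok"
                return result
            except Exception as exc:
                span["status"] = "error"
                span["error.type"] = type(exc).__name__
                raise
            finally:
                # Latency and status are recorded whether the call succeeds or not.
                span["latency_ms"] = (time.perf_counter() - start) * 1000
                print(span)
        return wrapper
    return decorator

@trace_llm_sketch(model_name="gpt-4", provider="openai")
def fake_llm(prompt: str) -> str:
    return f"echo: {prompt}"

print(fake_llm("hi"))
```

Because the wrapper re-raises, your application's error handling is unaffected; the span just records what happened on the way through.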
@trace_retrieval — Search & Retrieval
@thinkhive.trace_retrieval(query="dynamic")
def search_documents(query: str, top_k: int = 5) -> list:
    results = vector_db.search(query, top_k=top_k)
    return [{"id": r.id, "content": r.text, "score": r.score} for r in results]

Captures: query, document count, scores, latency.
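If you don't have a vector database wired up yet, any function returning the same id/content/score shape works for exercising retrieval tracing. A toy in-memory keyword-overlap scorer (purely illustrative; the corpus and scoring are made up):

```python
# Toy in-memory corpus standing in for a vector database.
DOCS = [
    {"id": "d1", "content": "ThinkHive traces LLM calls"},
    {"id": "d2", "content": "Vector search with embeddings"},
    {"id": "d3", "content": "LLM token usage and latency"},
]

def search_documents(query: str, top_k: int = 2) -> list:
    """Score each doc by the fraction of query terms it contains."""
    terms = set(query.lower().split())
    scored = []
    for doc in DOCS:
        words = set(doc["content"].lower().split())
        score = len(terms & words) / len(terms)
        scored.append({"id": doc["id"], "content": doc["content"], "score": score})
    scored.sort(key=lambda d: d["score"], reverse=True)
    return scored[:top_k]

print(search_documents("LLM latency"))
```

The decorator only cares about the returned shape, so swapping this for Pinecone, FAISS, or pgvector later requires no tracing changes.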
@trace_tool — Tool & Function Calls
import requests

@thinkhive.trace_tool(tool_name="web_search")
def search_web(query: str) -> dict:
    response = requests.get("https://api.example.com/search", params={"q": query})
    return response.json()

Captures: tool name, inputs, outputs, errors.
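All three decorators follow the same wrapping pattern, and (as noted under Async Support below) they accept both sync and async functions. That typically comes down to dispatching on the function type at decoration time; a minimal sketch, with hypothetical names and print statements standing in for real span export:

```python
import asyncio
import functools
import inspect

def trace_tool_sketch(tool_name: str):
    """Simplified stand-in for @trace_tool that wraps sync and async functions alike."""
    def decorator(fn):
        if inspect.iscoroutinefunction(fn):
            @functools.wraps(fn)
            async def async_wrapper(*args, **kwargs):
                print(f"span start: {tool_name}")
                try:
                    return await fn(*args, **kwargs)
                finally:
                    print(f"span end: {tool_name}")
            return async_wrapper

        @functools.wraps(fn)
        def sync_wrapper(*args, **kwargs):
            print(f"span start: {tool_name}")
            try:
                return fn(*args, **kwargs)
            finally:
                print(f"span end: {tool_name}")
        return sync_wrapper
    return decorator

@trace_tool_sketch(tool_name="adder")
async def add_async(a: int, b: int) -> int:
    return a + b

print(asyncio.run(add_async(2, 3)))
```

The dispatch matters: wrapping a coroutine function in a plain sync wrapper would return an un-awaited coroutine and break span timing.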
Building a Traced RAG Pipeline
Combine decorators to trace a complete RAG pipeline:
import thinkhive
from openai import OpenAI

thinkhive.init(service_name="rag-agent")
client = OpenAI()

@thinkhive.trace_retrieval()
def retrieve(query: str) -> list:
    # Your retrieval logic (Pinecone, FAISS, pgvector, etc.)
    return vector_db.search(query, top_k=3)

@thinkhive.trace_llm(model_name="gpt-4", provider="openai")
def generate(question: str, context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Answer based on:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def answer_question(question: str) -> str:
    """Full RAG pipeline — retrieval and generation are automatically traced."""
    tracer = thinkhive.get_tracer()
    with tracer.start_as_current_span("rag-pipeline") as span:
        span.set_attribute("question", question)
        docs = retrieve(question)
        context = "\n\n".join(doc["content"] for doc in docs)
        answer = generate(question, context)
        return answer

# Result: rag-pipeline → retrieve → generate (nested spans)
result = answer_question("How does ThinkHive detect hallucinations?")
Async Support
All decorators work with async functions:
from openai import AsyncOpenAI

async_client = AsyncOpenAI()

@thinkhive.trace_llm(model_name="gpt-4", provider="openai")
async def async_chat(message: str) -> str:
    response = await async_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}],
    )
    return response.choices[0].message.content

# Usage
import asyncio
result = asyncio.run(async_chat("Hello!"))
Accessing the Tracer
For advanced use cases, access the OpenTelemetry tracer directly:
tracer = thinkhive.get_tracer()

with tracer.start_as_current_span("custom_operation") as span:
    span.set_attribute("custom.key", "value")
    span.add_event("processing_started", {"step": 1})
    # Your code here
    result = do_something()
    span.set_attribute("result.status", "success")
Error Handling
Errors are automatically captured in trace spans:
@thinkhive.trace_llm(model_name="gpt-4", provider="openai")
def risky_call(prompt: str) -> str:
    try:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    except Exception:
        # The error is automatically recorded: error.type, error.message, error.stack
        raise
Environment Variables
| Variable | Description |
|---|---|
| THINKHIVE_API_KEY | Your API key (required if not passed to init) |
| THINKHIVE_ENDPOINT | Custom API endpoint |
| THINKHIVE_SERVICE_NAME | Default service name |
| THINKHIVE_AGENT_ID | Default agent ID |
Type Hints
The SDK is fully typed for IDE autocompletion:
from thinkhive import TraceOptions, TraceLLMOptions

options: TraceLLMOptions = {
    "model_name": "gpt-4",
    "provider": "openai",
}
V4 API Modules
The Python SDK v4 includes these API modules:
client = thinkhive.ThinkHive(api_key="thk_your_key")
# Core tracing
client.trace(trace_data)
client.get_trace(trace_id)
# Analysis
client.analyze_trace(trace_id)
client.explainer.analyze(traces=[...])
# Quality metrics
client.quality.get_rag_scores(trace_id)
client.quality.detect_hallucinations(data)
# ROI analytics
client.analytics.get_roi_summary()
# Guardrails
thinkhive.guardrails.scan(input="...", scanners=["pii"])

Some advanced modules are currently available in the JavaScript SDK only: humanReview, nondeterminism, evalHealth, deterministicGraders, conversationEval, transcriptPatterns, businessMetrics, apiKeys, linking, customerContext. See the SDK comparison for details.
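Conceptually, a guardrails scan runs the requested scanners over the input and returns per-scanner verdicts. The sketch below is illustrative only: the scanner logic and result shape are hypothetical, not the SDK's actual schema.

```python
import re

# Hypothetical scanner registry; each scanner returns True if it flags the text.
SCANNERS = {
    "pii": lambda text: bool(re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)),
    "secrets": lambda text: "thk_" in text or "sk-" in text,
}

def scan(input: str, scanners: list) -> dict:
    """Run the named scanners and aggregate an overall flagged verdict."""
    results = {name: SCANNERS[name](input) for name in scanners}
    return {"flagged": any(results.values()), "results": results}

print(scan("My key is thk_abc123", scanners=["pii", "secrets"]))
```

Running scanners before a prompt reaches the model (and again on the response) is the usual pattern; consult the Guardrails SDK page for the real call signature and result fields.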
Next Steps
- Decorators — Detailed decorator reference with advanced patterns
- V4 APIs — New V4 API reference
- Guardrails SDK — Real-time content scanning
- Framework Integrations — LangChain, CrewAI, LlamaIndex, Anthropic
- Examples — Complete working examples
- Multi-Agent Tracing — Trace multi-agent systems
- API Reference — REST API documentation
Need Help? Check the Troubleshooting guide or contact support@thinkhive.ai.