V4 APIs (Python)
v4.0.1 (Stable) — Last updated: March 2026
The Python SDK v4 provides access to ThinkHive’s analysis, quality, and ROI APIs through the ThinkHive client.
Client Initialization
import thinkhive

client = thinkhive.ThinkHive(
    api_key='thk_your_api_key',
    endpoint='https://app.thinkhive.ai'  # optional
)

Trace Management
from thinkhive import TraceData, SpanData

# Submit a trace
result = client.trace(TraceData(
    agent_id='agent_abc123',
    spans=[
        SpanData(
            name='customer-chat',
            type='llm',
            input='How do I reset my password?',
            output='To reset your password...',
            model='gpt-4',
            tokens={'input': 25, 'output': 150}
        )
    ],
    outcome='success'
))
print(result.trace_id)  # 'tr_xyz789'

# Retrieve a trace
trace = client.get_trace('tr_xyz789')

Explainability Analysis
# Analyze traces
analysis = client.explainer.analyze(
    traces=[{
        'userMessage': 'How do I reset my password?',
        'agentResponse': 'To reset your password...',
        'retrievedDocuments': [{'content': '...', 'score': 0.92}]
    }],
    options={
        'tier': 'full_llm',
        'includeRagEvaluation': True,
        'includeHallucinationDetection': True
    }
)
print(analysis.overall_score)
print(analysis.rag_evaluation.groundedness)

Quality Metrics
# Get RAG quality scores for a trace
rag_scores = client.quality.get_rag_scores('tr_xyz789')
print(rag_scores.groundedness)
print(rag_scores.faithfulness)

# Detect hallucinations
hallucination_report = client.quality.detect_hallucinations({
    'input': user_message,
    'output': agent_response,
    'context': retrieved_context
})
if hallucination_report.detected:
    for h in hallucination_report.hallucinations:
        print(f'{h.type}: {h.description} (confidence: {h.confidence})')

ROI Analytics
# Get ROI summary
roi = client.analytics.get_roi_summary()
print(f'Total ROI: {roi.total_roi}')
print(f'Cost savings: {roi.cost_savings}')

# Per-agent ROI
agent_roi = client.analytics.get_roi_by_agent('agent_abc123')

Feedback
from thinkhive import Feedback

# Submit feedback on a trace
client.feedback(Feedback(
    trace_id='tr_xyz789',
    rating=5,
    comment='Accurate response'
))

Auto-Instrumentation
# Automatically trace LLM calls from supported frameworks
thinkhive.auto_instrument(
    frameworks=['openai', 'langchain', 'llamaindex', 'anthropic']
)

# Now all OpenAI/LangChain/etc. calls are automatically traced
from openai import OpenAI

openai_client = OpenAI()  # named to avoid shadowing the ThinkHive client above
response = openai_client.chat.completions.create(...)  # Automatically traced

JS-Only Features
The following modules are currently available only in the JavaScript SDK:
- humanReview — Human review queue management
- nondeterminism — Pass@k analysis and reliability testing
- evalHealth — Evaluation health monitoring
- deterministicGraders — Rule-based evaluation
- conversationEval — Multi-turn conversation analysis
- transcriptPatterns — Transcript pattern detection
- businessMetrics — Business metric tracking
- apiKeys — API key management (v2)
- issues — Issue management (v2)
- analyzer — Trace analysis engine (v2)
- linking — Ticket linking (7 methods)
- customerContext — Customer context snapshots
You can access these features via the REST API directly from Python.
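As a sketch of that approach, a JS-only module can be reached from Python with the standard library's urllib. The /api/issues route, the request payload, and the Bearer auth scheme below are all assumptions for illustration; confirm the exact routes and auth format in the API Reference.

```python
import json
import urllib.request

API_BASE = 'https://app.thinkhive.ai'  # same endpoint as the SDK client

def build_request(path, api_key, payload):
    """Build an authenticated JSON POST request for a ThinkHive REST endpoint.

    The Bearer scheme and JSON body shape are assumptions; check the
    API Reference before relying on them.
    """
    return urllib.request.Request(
        url=f'{API_BASE}{path}',
        data=json.dumps(payload).encode('utf-8'),
        headers={
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json',
        },
        method='POST',
    )

# Hypothetical route for the issues module; the real path may differ.
req = build_request('/api/issues', 'thk_your_api_key', {'trace_id': 'tr_xyz789'})
# urllib.request.urlopen(req) would send it; omitted here to stay offline.
```

The same helper works for any of the modules listed above by swapping the path and payload.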
Next Steps
- Decorators — Decorator-based tracing patterns
- Guardrails SDK — Real-time content scanning
- Framework Integrations — LangChain, LlamaIndex, and more
- API Reference — Full REST API documentation