How It Works

This page explains how ThinkHive processes your AI agent data from ingestion to actionable insights.

Data Flow

1. Trace Ingestion

Your agent sends trace data to ThinkHive using one of:

  • ThinkHive SDKs — JavaScript or Python
  • OpenTelemetry Protocol (OTLP) — standard POST /v1/traces endpoint
  • Third-party format adapters — LangSmith, Langfuse, Helicone, and 20+ other formats

Each trace contains spans — individual operations like LLM calls, retrieval queries, and tool invocations.
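As a concrete illustration, here is a minimal sketch of sending one span over OTLP/HTTP JSON using only the Python standard library. The endpoint URL and API-key header shown are placeholders, not ThinkHive's actual values; consult your project settings for the real endpoint and authentication scheme.

```python
import json
import urllib.request

def build_otlp_payload(trace_id: str, span_id: str, name: str) -> dict:
    """Build a minimal OTLP/HTTP JSON trace payload containing one span."""
    return {
        "resourceSpans": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": "my-agent"}}
            ]},
            "scopeSpans": [{
                "spans": [{
                    "traceId": trace_id,
                    "spanId": span_id,
                    "name": name,  # e.g. an LLM call, retrieval query, or tool invocation
                    "kind": 1,     # SPAN_KIND_INTERNAL
                    "startTimeUnixNano": "1700000000000000000",
                    "endTimeUnixNano": "1700000001000000000",
                }]
            }]
        }]
    }

def prepare_request(payload: dict, endpoint: str, api_key: str) -> urllib.request.Request:
    """Prepare a POST to the traces endpoint (URL and auth header are placeholders)."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

payload = build_otlp_payload("0af7651916cd43dd8448eb211c80319c",
                             "b7ad6b7169203331", "llm.call")
req = prepare_request(payload, "https://example.invalid/v1/traces", "YOUR_API_KEY")
# urllib.request.urlopen(req) would actually send the trace
```

In practice the SDKs handle batching, retries, and authentication for you; raw OTLP is mainly useful when you already have an OpenTelemetry exporter in place.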

2. Processing Pipeline

Once ingested, traces pass through:

Stage            What Happens
PII Redaction    Sensitive data is detected and redacted before storage
Normalization    Traces from different formats are normalized to a common schema
Enrichment       Metadata is extracted (model, provider, token counts, latency)
Indexing         Data is indexed for fast search and filtering
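The stages above compose into a simple sequential pipeline. The sketch below is illustrative only: the field names, alias table, and the email-only redaction rule are assumptions for the example, not ThinkHive's actual schema or detectors.

```python
import re

def redact_pii(span: dict) -> dict:
    """Stage 1 (illustrative): redact email addresses before storage."""
    text = span.get("input", "")
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED_EMAIL]", text)
    return {**span, "input": redacted}

def normalize(span: dict) -> dict:
    """Stage 2: map vendor-specific field names onto a common schema."""
    aliases = {"model_name": "model", "latency_ms": "duration_ms"}  # hypothetical
    return {aliases.get(k, k): v for k, v in span.items()}

def enrich(span: dict) -> dict:
    """Stage 3: derive metadata, e.g. a total token count."""
    out = dict(span)
    out["total_tokens"] = out.get("prompt_tokens", 0) + out.get("completion_tokens", 0)
    return out

def process(span: dict) -> dict:
    """Run a span through redaction, normalization, and enrichment in order."""
    for stage in (redact_pii, normalize, enrich):
        span = stage(span)
    return span

raw = {"input": "Contact alice@example.com", "model_name": "gpt-4o",
       "latency_ms": 412, "prompt_tokens": 120, "completion_tokens": 30}
clean = process(raw)
```

Redaction runs first by design: sensitive values never reach the normalized, indexed representation.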

3. Analysis

ThinkHive runs several analysis passes on stored traces:

Explainability Engine — AI-powered analysis that examines:

  • What the agent did and why
  • Claims made in responses (facts vs. inferences)
  • Quality signals (groundedness, faithfulness, relevance)
  • Potential hallucinations and errors

Case Clustering — Automatic grouping of similar failures:

  • Semantic similarity-based clustering
  • Pattern extraction across failure groups
  • Severity assessment and prioritization
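To make the clustering idea concrete, here is a toy sketch of grouping failures by cosine similarity of their embeddings, using a greedy threshold rule. This is a simplification for illustration; the actual clustering algorithm, embedding model, and threshold are internal to ThinkHive.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def cluster(embeddings: list[list[float]], threshold: float = 0.8) -> list[list[int]]:
    """Greedy clustering: join the first cluster whose seed embedding is
    within the similarity threshold, otherwise start a new cluster."""
    clusters: list[list[int]] = []
    for i, emb in enumerate(embeddings):
        for c in clusters:
            if cosine(embeddings[c[0]], emb) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy embeddings: two near-duplicate failure descriptions plus one outlier.
embs = [[1.0, 0.0], [0.95, 0.1], [0.0, 1.0]]
print(cluster(embs))  # → [[0, 1], [2]]
```

Once failures are grouped, pattern extraction and severity scoring operate per cluster rather than per trace, which is what makes large failure volumes reviewable.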

Evaluation Pipeline (ThinkEval) — Structured quality measurement:

  • Deterministic graders for objective checks
  • LLM judges for subjective quality assessment
  • Jury mode for high-stakes consensus scoring
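The three evaluation modes can be sketched as follows. The citation check and the median-based jury aggregation here are illustrative stand-ins: real deterministic graders and LLM judges are configured per suite, and the jury's actual consensus rule is not shown in this document.

```python
from statistics import median

def deterministic_grader(response: str) -> float:
    """Objective pass/fail check — here, whether the response cites a source.
    The "[source:" marker is a hypothetical convention for this example."""
    return 1.0 if "[source:" in response else 0.0

def jury_score(judge_scores: list[float]) -> float:
    """Jury mode sketch: aggregate several independent judge scores by
    median, which is robust to a single outlier judge."""
    return median(judge_scores)

response = "The refund policy allows returns within 30 days [source: policy.md]."
objective = deterministic_grader(response)      # 1.0 — citation present
consensus = jury_score([0.8, 0.9, 0.2])         # 0.8 — outlier judge outvoted
```

Deterministic graders are cheap and reproducible, so they run on every trace; LLM judges and juries are reserved for subjective or high-stakes checks.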

4. Insights & Actions

Analysis results surface in the dashboard and API:

  • Cases — Clustered failure patterns with AI-generated fix proposals
  • Quality Metrics — Scores, trends, and distributions over time
  • Drift Alerts — Notifications when quality degrades
  • Shadow Tests — Validate fixes before deploying
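The shadow-test idea can be illustrated with a small sketch: replay recorded inputs through both the current and the candidate agent version, score both, and only promote the fix if quality does not regress. The replay mechanics, scoring function, and promotion rule below are assumptions for the example.

```python
def shadow_test(traces, current_agent, candidate_agent, score):
    """Replay recorded inputs through both agent versions and compare
    average quality scores; promote only if the candidate does not regress."""
    cur = [score(current_agent(t["input"])) for t in traces]
    cand = [score(candidate_agent(t["input"])) for t in traces]
    cur_avg, cand_avg = sum(cur) / len(cur), sum(cand) / len(cand)
    return {"current": cur_avg, "candidate": cand_avg,
            "promote": cand_avg >= cur_avg}

# Toy example: the candidate fix normalizes casing, which the scorer rewards.
traces = [{"input": "hello"}, {"input": "WORLD"}]
current = lambda s: s                 # old behavior passes input through
candidate = lambda s: s.lower()       # proposed fix lowercases output
score = lambda out: 1.0 if out.islower() else 0.0
result = shadow_test(traces, current, candidate, score)
print(result["promote"])  # → True
```

Because the comparison runs on recorded traces rather than live traffic, a bad fix is caught before it reaches production.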

Architecture Components

Frontend

The dashboard is a React single-page application providing:

  • Real-time trace exploration with timeline and tree views
  • Case management and fix tracking
  • Evaluation suite configuration (ThinkEval wizard)
  • Analytics dashboards with quality trends
  • Settings for API keys, webhooks, and compliance

Backend

The API server handles:

  • OTLP trace ingestion with format auto-detection
  • RESTful API for all platform features
  • Background job processing for analysis and evaluation
  • Webhook delivery with retry and circuit breaker logic
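The retry and circuit-breaker pattern mentioned above can be sketched as follows. The thresholds, backoff schedule, and the `case.created` event name are illustrative assumptions, not the server's actual configuration.

```python
import time

class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive failures; while
    open, delivery attempts are skipped until `reset_after` seconds pass."""
    def __init__(self, max_failures: int = 3, reset_after: float = 60.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None   # half-open: permit one trial delivery
            self.failures = 0
            return True
        return False

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()

def deliver(send, payload, breaker, retries: int = 3, base_delay: float = 0.01) -> bool:
    """Attempt delivery with exponential backoff, honoring the breaker."""
    for attempt in range(retries):
        if not breaker.allow():
            return False
        ok = send(payload)
        breaker.record(ok)
        if ok:
            return True
        time.sleep(base_delay * 2 ** attempt)   # 1x, 2x, 4x, ...
    return False

breaker = CircuitBreaker(max_failures=2, reset_after=999.0)
delivered = deliver(lambda p: False, {"event": "case.created"}, breaker)  # endpoint down
healthy = deliver(lambda p: True, {"event": "case.created"}, CircuitBreaker())
```

The breaker protects both sides: a consistently failing endpoint stops consuming retry capacity, and deliveries resume automatically once the reset window elapses.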

Database

PostgreSQL stores all platform data:

  • Traces, spans, and agent metadata
  • Cases, fixes, and evaluation results
  • User accounts, API keys, and settings
  • Audit logs for compliance

Integration Points

Integration      Protocol     Purpose
OTLP Ingestion   HTTP/gRPC    Receive traces
REST API         HTTPS        Platform features
Webhooks         HTTPS        Event notifications
Auth0            OAuth 2.0    Enterprise authentication
Stripe           HTTPS        Billing and credits

Supported Trace Formats

ThinkHive accepts traces from 25+ observability platforms:

Category               Platforms
Native                 ThinkHive SDK, OTLP
LLM Observability      LangSmith, Langfuse, Helicone, Braintrust, HoneyHive
Agent Frameworks       CrewAI, AutoGen, LangGraph
ML Platforms           MLflow, Weights & Biases, Opik
General Observability  Datadog, OpenTelemetry, Jaeger
Other                  AgentOps, Portkey, TruLens, Lunary, LangWatch

Next Steps