Tracing

ThinkHive provides four tracing functions for capturing different types of AI operations: traceLLM, traceRetrieval, traceTool, and traceChain.

traceLLM

Trace calls to language models (OpenAI, Anthropic, etc.).

import { traceLLM } from '@thinkhive/sdk';
 
const response = await traceLLM({
  name: 'chat-completion',
  modelName: 'gpt-4',
  provider: 'openai',
  input: userMessage,
  metadata: {
    temperature: 0.7,
    customerId: 'cust_123',
  },
}, async () => {
  return await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userMessage }],
    temperature: 0.7,
  });
});

Options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Name for this LLM call |
| modelName | string | Yes | Model identifier (e.g., 'gpt-4') |
| provider | string | Yes | Provider name (e.g., 'openai', 'anthropic') |
| input | string | No | Input prompt or message |
| metadata | object | No | Additional attributes to record |

Automatic Capture

The SDK automatically captures the following from OpenAI-compatible responses:

  • Token counts: prompt_tokens, completion_tokens, total_tokens
  • Response content: The generated text
  • Finish reason: Why generation stopped
  • Model: Actual model used (may differ from requested)
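These fields map directly onto the standard OpenAI-compatible response shape. As an illustrative sketch (the function and the attribute keys such as 'llm.model' are placeholders for this example, not the SDK's internal implementation or its actual key names):

```typescript
// Illustrative only: extracting the fields listed above from an
// OpenAI-compatible chat completion response. The real SDK performs
// this capture internally; the attribute keys here are placeholders.
interface ChatCompletionLike {
  model: string;
  choices: { message: { content: string }; finish_reason: string }[];
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

function extractLLMAttributes(response: ChatCompletionLike) {
  return {
    'llm.model': response.model,                         // actual model used
    'llm.output': response.choices[0]?.message.content,  // generated text
    'llm.finish_reason': response.choices[0]?.finish_reason,
    'llm.prompt_tokens': response.usage?.prompt_tokens,
    'llm.completion_tokens': response.usage?.completion_tokens,
    'llm.total_tokens': response.usage?.total_tokens,
  };
}
```

Note that 'llm.model' is read from the response, not the request, which is why the recorded model can differ from the one you asked for (e.g., a dated snapshot like 'gpt-4-0613').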

traceRetrieval

Trace retrieval operations (vector search, database queries).

import { traceRetrieval } from '@thinkhive/sdk';
 
const documents = await traceRetrieval({
  name: 'vector-search',
  query: userQuery,
  topK: 10,
  metadata: {
    collection: 'knowledge_base',
    filter: { category: 'support' },
  },
}, async () => {
  return await vectorDB.search({
    query: userQuery,
    limit: 10,
  });
});

Options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Name for this retrieval |
| query | string | Yes | The search query |
| topK | number | No | Number of results to retrieve |
| metadata | object | No | Additional attributes |

Automatic Capture

  • Document count: Number of retrieved documents
  • Document content: If documents have a content field
  • Relevance scores: If returned by the vector database
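The conditional captures above can be modeled as follows (an illustrative sketch with placeholder attribute keys, not the SDK's internal code): documents without a content field are simply skipped, as are missing scores.

```typescript
// Illustrative sketch of retrieval capture: count all documents, but
// only record content and scores where those fields are present.
// Attribute keys are placeholders, not ThinkHive's actual key names.
interface RetrievedDoc {
  content?: string;
  score?: number;
}

function extractRetrievalAttributes(docs: RetrievedDoc[]) {
  return {
    'retrieval.document_count': docs.length,
    'retrieval.documents': docs
      .filter((d) => d.content !== undefined)
      .map((d) => d.content),
    'retrieval.scores': docs
      .filter((d) => d.score !== undefined)
      .map((d) => d.score),
  };
}
```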

traceTool

Trace tool/function calls.

import { traceTool } from '@thinkhive/sdk';
 
const result = await traceTool({
  name: 'web-search',
  toolName: 'google_search',
  parameters: { query: 'ThinkHive AI observability' },
}, async () => {
  return await searchAPI.search('ThinkHive AI observability');
});

Options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Name for this tool call |
| toolName | string | Yes | The tool being called |
| parameters | object | No | Parameters passed to the tool |
| metadata | object | No | Additional attributes |

traceChain

Trace a workflow or chain of operations.

import { traceChain, traceLLM, traceRetrieval } from '@thinkhive/sdk';
 
const answer = await traceChain({
  name: 'rag-pipeline',
  input: { question: userQuestion },
}, async () => {
  // Nested spans are automatically associated
  const docs = await traceRetrieval({
    name: 'retrieve',
    query: userQuestion,
  }, async () => vectorDB.search(userQuestion));
 
  const response = await traceLLM({
    name: 'generate',
    modelName: 'gpt-4',
    provider: 'openai',
  }, async () => openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: `Context: ${docs.join('\n')}` },
      { role: 'user', content: userQuestion },
    ],
  }));
 
  return response.choices[0].message.content;
});

Options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Name for this chain |
| input | object | No | Input to the chain |
| metadata | object | No | Additional attributes |

Adding Custom Attributes

Add custom attributes to any span:

await traceLLM({
  name: 'chat',
  modelName: 'gpt-4',
  provider: 'openai',
  metadata: {
    // Business context
    customerId: 'cust_123',
    channel: 'chat',
    intent: 'product_inquiry',
 
    // Technical context
    version: '1.2.3',
    environment: 'production',
 
    // Quality hints
    expectedOutcome: 'success',
    criticalPath: true,
  },
}, async () => { /* ... */ });

Error Handling

Errors are automatically captured:

try {
  await traceLLM({
    name: 'risky-call',
    modelName: 'gpt-4',
    provider: 'openai',
  }, async () => {
    throw new Error('API rate limit exceeded');
  });
} catch (error) {
  // Error is recorded in the span with:
  // - error.type: 'Error'
  // - error.message: 'API rate limit exceeded'
  // - error.stack: full stack trace
  console.error(error);
}
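Conceptually, every trace function wraps your callback in record-and-rethrow logic equivalent to the sketch below (a simplified model, not the SDK source; withErrorCapture is a hypothetical name used only for this illustration). The important property is that the error is recorded on the span and then propagated unchanged, so your own try/catch still sees it.

```typescript
// Simplified model of record-and-rethrow semantics: the error's type,
// message, and stack are attached to the span, then the original error
// propagates to the caller's catch block unchanged.
async function withErrorCapture<T>(
  record: (attrs: Record<string, string | undefined>) => void,
  fn: () => Promise<T>,
): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    const e = err as Error;
    record({
      'error.type': e.name,
      'error.message': e.message,
      'error.stack': e.stack,
    });
    throw err; // rethrown so callers still observe the failure
  }
}
```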

Span Context

Access the current span for advanced use cases:

import { getActiveSpan, setSpanAttribute } from '@thinkhive/sdk';
 
await traceLLM({ /* ... */ }, async () => {
  const span = getActiveSpan();
 
  // Add attributes dynamically
  setSpanAttribute('custom.metric', 42);
 
  // Add events
  span?.addEvent('cache_hit', { key: 'user_context' });
 
  return await llmCall();
});

Nested Spans

Spans automatically nest based on call hierarchy:

await traceChain({ name: 'parent' }, async () => {
  // This becomes a child of 'parent'
  await traceLLM({ name: 'child-1' }, async () => { /* ... */ });
 
  // This also becomes a child of 'parent'
  await traceRetrieval({ name: 'child-2' }, async () => { /* ... */ });
 
  // Nested chains work too
  await traceChain({ name: 'child-3' }, async () => {
    // This becomes a child of 'child-3'
    await traceTool({ name: 'grandchild' }, async () => { /* ... */ });
  });
});

Resulting hierarchy:

parent (chain)
├── child-1 (llm)
├── child-2 (retrieval)
└── child-3 (chain)
    └── grandchild (tool)
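This automatic nesting works because the active span travels with the async call context. A simplified model of the mechanism, using Node's AsyncLocalStorage (this is an illustration of the idea, not the SDK's implementation; trace and recordedSpans are names invented for this sketch):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Simplified model of context-based nesting: each traced callback runs
// with its own span as the active context, so any span started inside
// it automatically sees it as the parent.
interface Span {
  name: string;
  parent?: string;
}

const activeSpan = new AsyncLocalStorage<Span>();
const recordedSpans: Span[] = []; // stand-in for the SDK's exporter

async function trace<T>(name: string, fn: () => Promise<T>): Promise<T> {
  const span: Span = { name, parent: activeSpan.getStore()?.name };
  recordedSpans.push(span);
  // Run the callback with this span as the active context; the context
  // is preserved across await boundaries inside fn.
  return activeSpan.run(span, fn);
}
```

Because the context propagates through awaits, sibling spans started sequentially inside the same callback all attach to the same parent, which is exactly the behavior shown in the hierarchy above.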

Performance Tips

Minimize Overhead

  1. Use autoInstrument for OpenAI/LangChain instead of manual tracing.
  2. Traces are batched and exported asynchronously; sending never blocks your code.
  3. The SDK adds roughly 1-2 ms of overhead per span.

Next Steps