# Framework Integrations
ThinkHive provides automatic instrumentation for popular AI frameworks. Enable it once and all calls are traced automatically.
## OpenAI

### Auto-Instrumentation

The easiest way to trace OpenAI calls:
```typescript
import { init } from '@thinkhive/sdk';
import OpenAI from 'openai';

// Enable auto-instrumentation
init({
  serviceName: 'my-app',
  autoInstrument: true,
  frameworks: ['openai'],
});

// All OpenAI calls are now automatically traced
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});
// Trace automatically created with model, tokens, and latency
```

### Manual Instrumentation

For more control, use the instrumentation helper:
```typescript
import { instrumentOpenAIClient } from '@thinkhive/sdk/instrumentation/openai';
import OpenAI from 'openai';

const openai = new OpenAI();

instrumentOpenAIClient(openai, {
  captureInput: true,    // Include prompts in traces
  captureOutput: true,   // Include responses in traces
  captureMetadata: true, // Include model, tokens, etc.
});

// Now traced
const response = await openai.chat.completions.create({ /* ... */ });
```

### What's Captured
| Attribute | Description |
|---|---|
| `llm.model` | Model name (e.g., `gpt-4`) |
| `llm.provider` | Always `openai` |
| `llm.input_tokens` | Prompt token count |
| `llm.output_tokens` | Completion token count |
| `llm.total_tokens` | Total tokens used |
| `llm.finish_reason` | Why generation stopped |
| `llm.temperature` | Temperature setting |
| `llm.latency_ms` | Response time in milliseconds |
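To make the table concrete, here is a sketch of how these attributes could be assembled from a chat-completion response. The response shape mirrors the OpenAI API, but `buildLLMAttributes` is an illustrative helper, not part of the SDK:

```typescript
// Sketch: deriving the captured attributes from a (mocked) chat-completion
// response. `buildLLMAttributes` is hypothetical; the SDK does this internally.
interface ChatCompletionLike {
  model: string;
  choices: { finish_reason: string }[];
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

function buildLLMAttributes(res: ChatCompletionLike, latencyMs: number) {
  return {
    'llm.model': res.model,
    'llm.provider': 'openai',
    'llm.input_tokens': res.usage.prompt_tokens,
    'llm.output_tokens': res.usage.completion_tokens,
    'llm.total_tokens': res.usage.total_tokens,
    'llm.finish_reason': res.choices[0]?.finish_reason,
    'llm.latency_ms': latencyMs,
  };
}

const attrs = buildLLMAttributes(
  {
    model: 'gpt-4',
    choices: [{ finish_reason: 'stop' }],
    usage: { prompt_tokens: 12, completion_tokens: 34, total_tokens: 46 },
  },
  250
);
console.log(attrs['llm.total_tokens']); // 46
```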
### Streaming Support
Streaming completions are also traced:
```typescript
const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Trace includes the full streamed response
```

## LangChain
### Auto-Instrumentation
```typescript
import { init } from '@thinkhive/sdk';
import { ChatOpenAI } from '@langchain/openai';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatPromptTemplate } from '@langchain/core/prompts';

init({
  serviceName: 'my-langchain-app',
  autoInstrument: true,
  frameworks: ['langchain'],
});

// LangChain calls are now traced
const model = new ChatOpenAI({ modelName: 'gpt-4' });
const prompt = ChatPromptTemplate.fromTemplate('Tell me about {topic}');
const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke({ topic: 'AI observability' });
// Full chain traced with each step
```

### Callback Handler
For more control, use the callback handler:
```typescript
import { setupLangChainCallback } from '@thinkhive/sdk/instrumentation/langchain';

const callback = setupLangChainCallback({
  runName: 'my-chain-run',
  metadata: { environment: 'production' },
});

const result = await chain.invoke(
  { topic: 'AI observability' },
  { callbacks: [callback] }
);
```

### What's Captured
| Component | Captured Data |
|---|---|
| ChatModels | Model, tokens, messages, response |
| Chains | Chain name, inputs, outputs |
| Retrievers | Query, documents, scores |
| Tools | Tool name, inputs, outputs |
| Agents | Agent type, steps, final answer |
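As an illustration of what gets recorded per component, the standalone sketch below accumulates events the way a callback handler might. The method names echo LangChain's callback conventions, but the class has no dependency on the library and the signatures are simplified, not the real interface:

```typescript
// Standalone sketch of per-component capture. Not the real LangChain
// callback interface -- signatures are simplified for illustration.
type CapturedEvent = { component: string; data: Record<string, unknown> };

class RecordingHandler {
  readonly events: CapturedEvent[] = [];

  // ChatModels: model, tokens, response
  handleLLMEnd(model: string, totalTokens: number, response: string) {
    this.events.push({ component: 'ChatModel', data: { model, totalTokens, response } });
  }

  // Retrievers: query, documents, scores
  handleRetrieverEnd(query: string, documents: { text: string; score: number }[]) {
    this.events.push({ component: 'Retriever', data: { query, documents } });
  }

  // Tools: tool name, inputs, outputs
  handleToolEnd(tool: string, input: string, output: string) {
    this.events.push({ component: 'Tool', data: { tool, input, output } });
  }
}

const handler = new RecordingHandler();
handler.handleRetrieverEnd('observability', [{ text: 'ThinkHive docs', score: 0.92 }]);
handler.handleToolEnd('calculator', '42 * 17', '714');
console.log(handler.events.map((e) => e.component)); // [ 'Retriever', 'Tool' ]
```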
### Agent Tracing
Agents with multiple steps are fully traced:
```typescript
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';

// createOpenAIFunctionsAgent returns a Promise, so await it
const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools: [searchTool, calculatorTool],
  prompt: agentPrompt,
});

const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools: [searchTool, calculatorTool],
});

const result = await executor.invoke(
  { input: 'What is 42 * 17?' },
  { callbacks: [callback] }
);
// Trace shows: Agent Decision -> Tool Call -> Agent Decision -> Final Answer
```

## Vercel AI SDK
```typescript
import { init } from '@thinkhive/sdk';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

init({
  serviceName: 'my-vercel-app',
  autoInstrument: true,
  frameworks: ['vercel-ai'],
});

const { text } = await generateText({
  model: openai('gpt-4'),
  prompt: 'What is ThinkHive?',
});
// Automatically traced
```

## Custom Frameworks
Create custom instrumentation for any framework:
```typescript
import { traceLLM, traceRetrieval } from '@thinkhive/sdk';

// Wrap your custom LLM client
class MyLLMClient {
  async chat(message: string) {
    return traceLLM({
      name: 'my-llm-chat',
      modelName: 'my-model',
      provider: 'my-provider',
      input: message,
    }, async () => {
      // Your actual LLM call
      const response = await this.internalChat(message);
      return response;
    });
  }
}

// Wrap your custom retriever
class MyRetriever {
  async search(query: string) {
    return traceRetrieval({
      name: 'my-retriever-search',
      query: query,
      topK: 10,
    }, async () => {
      // Your actual search
      const docs = await this.internalSearch(query);
      return docs;
    });
  }
}
```

## Disabling Auto-Instrumentation
Disable for specific calls or globally:
```typescript
import { withoutTracing } from '@thinkhive/sdk';

// Disable for a specific call
const response = await withoutTracing(async () => {
  return await openai.chat.completions.create({ /* ... */ });
});

// Disable globally
init({
  serviceName: 'my-app',
  autoInstrument: false, // Disable all auto-instrumentation
});
```

⚠️ **Performance Note:** Auto-instrumentation adds minimal overhead (~1–2 ms per call), but if you're making thousands of calls per second, consider using sampling or selective tracing.
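For the high-volume case, sampling can be sketched as a deterministic keep/drop decision per trace. This illustrates the technique, not the SDK's own sampler; the `shouldSample` helper and the rate threshold are hypothetical, so check the SDK's configuration options for the supported knobs:

```typescript
// Sketch: deterministic trace sampling to bound overhead at high call volume.
// Hashing the trace ID keeps the keep/drop decision stable across retries
// and services. The FNV-1a hash and `sampleRate` knob are illustrative.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

// Keep roughly `sampleRate` (0.0 - 1.0) of traces
function shouldSample(traceId: string, sampleRate: number): boolean {
  return fnv1a(traceId) / 0xffffffff < sampleRate;
}

// A given trace ID always gets the same decision
console.log(shouldSample('trace-abc', 0.1) === shouldSample('trace-abc', 0.1)); // true
```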