Guardrails SDK
v4.1.0 (Beta) · Last updated: March 2026
The guardrails module provides real-time content scanning (PII, secrets, keywords, topics, and regex patterns) and tool call validation.
Quick Start
import { init, guardrails } from '@thinkhive/sdk';
init({ apiKey: process.env.THINKHIVE_API_KEY });
// Scan user input before sending to LLM
const result = await guardrails.scan({
input: userMessage,
scanners: ['pii', 'secrets'],
config: {
pii: { action: 'redact', entities: ['email', 'ssn', 'phone'] },
secrets: { action: 'block' }
}
});
if (result.action === 'block') {
throw new Error(`Content blocked: ${result.actionReason}`);
}
// Use redacted content
const safeInput = result.redactedInput ?? userMessage;
Available Methods
guardrails.scan(options)
Scan content against one or more scanners.
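The full option reference follows; first, a local sketch of the tool_call scanner's allow-list rule (illustrative only; the hosted scanner is authoritative):

```typescript
// Local sketch of the tool_call allow-list check: a call passes only if
// its tool name is allowed and every required argument is present.
interface AllowedTool { name: string; requiredArgs?: string[] }
interface ToolCall { name: string; arguments: Record<string, unknown> }

function toolCallVerdict(call: ToolCall, allowed: AllowedTool[]): 'pass' | 'block' {
  const rule = allowed.find((t) => t.name === call.name);
  if (!rule) return 'block';
  const missing = (rule.requiredArgs ?? []).filter((arg) => !(arg in call.arguments));
  return missing.length > 0 ? 'block' : 'pass';
}
```

Under this rule, the `delete_user` call in the example below would be blocked, since only `search` is allow-listed.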
const result = await guardrails.scan({
// Content to scan (at least one required)
input: 'User message text',
output: 'Agent response text',
toolCall: { name: 'delete_user', arguments: { userId: '123' } },
// Scanners to run (optional if using policyId)
scanners: ['pii', 'secrets', 'keywords', 'regex', 'topic', 'tool_call'],
// Named policy (overrides scanners)
policyId: 'policy_abc123',
// Per-scanner configuration
config: {
pii: { action: 'redact', entities: ['email', 'phone'] },
secrets: { action: 'block' },
keywords: { keywords: ['confidential', 'internal'], caseSensitive: false },
regex: { patterns: [{ pattern: '\\bINT-\\d+\\b', name: 'internal_id', action: 'flag' }] },
topic: { topics: ['medical_advice'], action: 'block' },
tool_call: { allowedTools: [{ name: 'search', requiredArgs: ['query'] }] }
},
// Execution options
options: {
timeout: 3000, // Per-scanner timeout (ms)
failOpen: false, // On timeout: true=pass, false=block
shortCircuit: true, // Stop on first block
returnRedacted: true // Include redacted content
}
});
Returns:
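Each finding carries character offsets into the scanned text; as a sketch, those offsets can be applied locally to mask matches (the returnRedacted option already produces redacted content for you; field names follow the schema below):

```typescript
// Hypothetical finding shape, per the scan result schema.
interface Finding { type: string; value: string; start: number; end: number; confidence: number }

// Replace each found span with a placeholder like [EMAIL].
// Apply right-to-left so earlier offsets stay valid after replacement.
function maskFindings(text: string, findings: Finding[]): string {
  const sorted = [...findings].sort((a, b) => b.start - a.start);
  let out = text;
  for (const f of sorted) {
    out = out.slice(0, f.start) + `[${f.type.toUpperCase()}]` + out.slice(f.end);
  }
  return out;
}
```

For example, masking an email finding at offsets 8–15 in `'Contact a@b.com now'` yields `'Contact [EMAIL] now'`.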
{
action: 'pass' | 'flag' | 'redact' | 'block',
actionReason: string,
redactedInput?: string,
redactedOutput?: string,
results: {
[scannerName: string]: {
scanner: string,
status: 'completed' | 'timeout' | 'error',
action: string,
findings: Array<{
type: string,
value: string,
start: number,
end: number,
confidence: number
}>,
latencyMs: number
}
},
metadata: {
scanId: string,
totalLatencyMs: number,
scannersExecuted: number,
cached: boolean
}
}
guardrails.listScanners()
List all available scanners.
const scanners = await guardrails.listScanners();
// Returns: { scanners: [{ name: 'pii', description: '...' }, ...] }
Middleware Pattern
Integrate guardrails as middleware in your agent pipeline:
import { init, guardrails } from '@thinkhive/sdk';
init({ apiKey: process.env.THINKHIVE_API_KEY });
async function processMessage(userMessage: string) {
// 1. Scan input
const inputScan = await guardrails.scan({
input: userMessage,
policyId: 'production-input-policy'
});
if (inputScan.action === 'block') {
return { error: 'Your message contains content that cannot be processed.' };
}
const safeInput = inputScan.redactedInput ?? userMessage;
// 2. Get LLM response
const agentResponse = await callLLM(safeInput);
// 3. Scan output
const outputScan = await guardrails.scan({
output: agentResponse,
policyId: 'production-output-policy'
});
if (outputScan.action === 'block') {
return { error: 'The response was filtered for safety.' };
}
return { response: outputScan.redactedOutput ?? agentResponse };
}
Error Handling
import { ThinkHiveApiError, RateLimitError } from '@thinkhive/sdk';
try {
const result = await guardrails.scan({ input: text, scanners: ['pii'] });
} catch (error) {
if (error instanceof RateLimitError) {
// Back off and retry
await new Promise((resolve) => setTimeout(resolve, error.retryAfter * 1000));
} else if (error instanceof ThinkHiveApiError) {
console.error(`API error: ${error.message} (${error.statusCode})`);
} else {
throw error; // Re-throw unexpected errors instead of swallowing them
}
}
Next Steps
- Guardrails API Reference — Full endpoint documentation
- Guardrail Policies Guide — Creating and managing policies
- Compliance & Scanning — Compliance features
The Guardrails API is in beta. Endpoints and response schemas may change. Pin your SDK version for stability.
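While the API is in beta, transient failures and rate limits are worth planning for. A hedged sketch of a generic retry wrapper with exponential backoff (the attempt count and delays are illustrative, not SDK defaults):

```typescript
// Retry an async operation up to maxAttempts times, doubling the wait
// between attempts (baseDelayMs, 2x, 4x, ...). Rethrows the last error.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Usage: `const result = await withRetry(() => guardrails.scan({ input: text, scanners: ['pii'] }));`.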