Tekimax SDK

Core Concepts

The SDK is built around two primitives: the Tekimax client and the AIProvider interface.

The Client

The Tekimax client is the unified entry point. It organizes capabilities into namespaces — one per modality — so auto-complete guides you to the right method without memorizing the API surface.

```typescript
import { Tekimax, OpenAIProvider } from 'tekimax-omat';

const client = new Tekimax({
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
});

client.text   // Chat, completions, embeddings
client.images // Image generation and vision analysis
client.audio  // Text-to-speech and transcription
client.videos // Video analysis (Gemini)
```

Chat (Text)

Two equivalent calling styles — use whichever reads better in your codebase:

```typescript
// OpenAI-style dot-chain
const response = await client.text.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Direct method (renamed here to avoid redeclaring `response`)
const same = await client.text.generate({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Both return ChatResult with a flat shape — no choices array.
console.log(response.message.content);
console.log(response.usage?.totalTokens);
```

Providers

Providers translate the SDK's unified types to each upstream API format and normalize responses back. All implement the AIProvider interface. Additional capabilities (vision, image generation, transcription) use Capability Interfaces — providers explicitly opt in.

```typescript
// Base provider — text chat only
interface AIProvider {
  name: string;
  chat: (options: ChatOptions) => Promise<ChatResult>;
  chatStream: (options: ChatOptions) => AsyncIterable<StreamChunk>;
}

// Providers opt into additional capabilities
interface VisionCapability {
  analyzeImage: (options: ImageAnalysisOptions) => Promise<ImageAnalysisResult>;
}

interface TranscriptionCapability {
  transcribeAudio: (options: TranscriptionOptions) => Promise<TranscriptionResult>;
}
```

Calling .images.generate() on a text-only provider is a compile-time TypeScript error, not a runtime failure — no surprises in production.
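The same capability split can also be checked at runtime with a user-defined type guard. A minimal sketch, assuming the interface shapes shown above; the `supportsVision` helper is illustrative, not part of the SDK:

```typescript
// Simplified stand-ins for the interfaces on this page.
interface ChatResult { message: { content: string } }
interface AIProvider {
  name: string;
  chat: (options: unknown) => Promise<ChatResult>;
}
interface VisionCapability {
  analyzeImage: (options: unknown) => Promise<unknown>;
}

// Runtime check that also narrows the static type at the call site.
function supportsVision(p: AIProvider): p is AIProvider & VisionCapability {
  return typeof (p as Partial<VisionCapability>).analyzeImage === 'function';
}

const textOnly: AIProvider = {
  name: 'text-only',
  chat: async () => ({ message: { content: 'hi' } }),
};

console.log(supportsVision(textOnly)); // false — analyzeImage is absent
```

Inside the `if (supportsVision(p))` branch, TypeScript lets you call `p.analyzeImage` without a cast.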

Streaming

Streaming uses a dedicated method rather than a stream: true flag. TypeScript infers the return type correctly at the call site.

```typescript
const stream = client.text.chat.completions.createStream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }],
});

for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}
```

Multi-Turn Conversations

The Conversation class maintains message history automatically. Each send() call appends the user message, gets the assistant response, appends that too, and returns — so the next call has full context.

```typescript
import { Conversation, OpenAIProvider } from 'tekimax-omat';

const provider = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });

const convo = new Conversation(provider, {
  model: 'gpt-4o',
  system: 'You are a helpful intake coordinator for a nonprofit job training program.',
});

const r1 = await convo.send('I want to learn software development');
console.log(r1.message.content); // Asks about background, goals, etc.

const r2 = await convo.send('I have no prior experience but I can commit 20 hours a week');
console.log(r2.message.content); // Tailored program recommendations

console.log(convo.turnCount); // 2
console.log(convo.history);   // All messages including system prompt
```
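The bookkeeping that send() performs can be sketched with a toy class. This is a hypothetical reimplementation of the behavior described above, not the SDK's Conversation source:

```typescript
type Message = { role: 'system' | 'user' | 'assistant'; content: string };
type ChatFn = (messages: Message[]) => Promise<string>;

class MiniConversation {
  private history: Message[] = [];

  constructor(private chat: ChatFn, system?: string) {
    if (system) this.history.push({ role: 'system', content: system });
  }

  // Append the user turn, call the model with the full history, append the reply.
  async send(content: string): Promise<string> {
    this.history.push({ role: 'user', content });
    const reply = await this.chat(this.history);
    this.history.push({ role: 'assistant', content: reply });
    return reply;
  }

  get turnCount() {
    return this.history.filter((m) => m.role === 'user').length;
  }
}

// Usage with a stub model that echoes the latest message:
const convo = new MiniConversation(async (msgs) => `echo: ${msgs.at(-1)!.content}`, 'Be brief.');
const reply = await convo.send('hello');
console.log(reply);          // echo: hello
console.log(convo.turnCount); // 1
```

Because the history array is passed whole on every call, each turn automatically sees everything that came before it.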

Conversation Methods

```typescript
// Send a message (auto-appends to history)
const result = await convo.send('What programs do you offer?');

// Stream a response (chunks assembled into history after completion)
for await (const chunk of convo.stream('Tell me more about the web track')) {
  process.stdout.write(chunk.delta);
}

// Inject a message without a model call (seed context or replay history)
convo.inject({ role: 'assistant', content: 'Previous session summary...' });

// Export and restore (persist to database, reload later)
const snapshot = convo.export(); // Message[]
await db.save(userId, snapshot);

const restored = new Conversation(provider, { model: 'gpt-4o' });
restored.restore(await db.load(userId));

// Clear history (system prompt is kept)
convo.clear();
```

Tool Calling

Tool definitions use the OpenAI function-calling schema. The SDK normalizes to each provider's format internally (Gemini's functionDeclarations, Anthropic's tool_use blocks, etc.).

```typescript
const response = await client.text.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Check eligibility for the summer youth program' }],
  tools: [{
    type: 'function',
    function: {
      name: 'check_eligibility',
      description: 'Check if a participant meets program eligibility requirements',
      parameters: {
        type: 'object',
        properties: {
          age: { type: 'number' },
          zipCode: { type: 'string' },
          householdIncome: { type: 'number' },
        },
        required: ['age', 'zipCode'],
      },
    },
  }],
});

// Uniform tool call shape — same regardless of provider
const toolCalls = response.message.toolCalls;
if (toolCalls) {
  console.log(toolCalls[0].function.name);      // "check_eligibility"
  console.log(toolCalls[0].function.arguments); // '{"age":17,"zipCode":"94103"}'
}
```
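Executing the returned tool calls is up to your application code. A minimal dispatch sketch, assuming the uniform shape shown above; the handler table and its eligibility logic are illustrative, not part of the SDK:

```typescript
type ToolCall = { id: string; function: { name: string; arguments: string } };

// Map tool names to local implementations (hypothetical example handler).
const handlers: Record<string, (args: any) => Promise<unknown>> = {
  check_eligibility: async ({ age, zipCode }) => ({
    eligible: age >= 16 && age <= 24,
    zipCode,
  }),
};

async function runToolCalls(toolCalls: ToolCall[]) {
  const results = [];
  for (const call of toolCalls) {
    const handler = handlers[call.function.name];
    if (!handler) throw new Error(`No handler for tool: ${call.function.name}`);
    // Arguments arrive as a JSON string; parse before dispatch.
    results.push({ id: call.id, result: await handler(JSON.parse(call.function.arguments)) });
  }
  return results;
}

const out = await runToolCalls([
  { id: 'call_1', function: { name: 'check_eligibility', arguments: '{"age":17,"zipCode":"94103"}' } },
]);
console.log(out[0].result); // { eligible: true, zipCode: '94103' }
```

Each result would then be sent back to the model as a tool message keyed by the call id, which is exactly the loop the next section automates.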

Agentic Loops with generateText

For multi-step tool-calling workflows, generateText handles the tool execution loop automatically.

```typescript
import { generateText, OpenAIProvider } from 'tekimax-omat';

const provider = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });

const result = await generateText({
  adapter: provider,
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Find available job training programs for veterans in Oakland' }],
  tools: {
    search_programs: {
      type: 'function',
      function: {
        name: 'search_programs',
        description: 'Search for programs matching criteria',
        parameters: {
          type: 'object',
          properties: {
            population: { type: 'string' },
            city: { type: 'string' },
            category: { type: 'string' },
          },
        },
      },
      execute: async ({ population, city, category }) => {
        return await db.searchPrograms({ population, city, category });
      },
    },
  },
  maxSteps: 5, // Prevents runaway loops — default is 1
});

console.log(result.text);  // Final answer after tool execution
console.log(result.aiTag); // { source: 'ai', operation: 'read', confidence: 'high', ... }
```

AI Action Tagging

Every ChatResult and GenerateTextResult includes an aiTag field that identifies what kind of operation the AI performed. Use this to distinguish AI-generated content from human content and to log CRUD operations for audit trails.

```typescript
const result = await client.text.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Create a new case file for participant #1024' }],
});

console.log(result.aiTag);
// {
//   source: 'ai',
//   operation: 'create',  // 'create' | 'read' | 'update' | 'delete' | 'none'
//   confidence: 'low',    // 'high' (from tool name) | 'low' (inferred from text)
//   model: 'gpt-4o',
//   timestamp: 1710000000000
// }
```
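For intuition, a low-confidence tag could come from a keyword heuristic along these lines. This is a hypothetical illustration of text-based inference, not the SDK's actual classifier:

```typescript
type Operation = 'create' | 'read' | 'update' | 'delete' | 'none';

// Ordered patterns: first match wins (assumed precedence for this sketch).
const KEYWORDS: Array<[Operation, RegExp]> = [
  ['create', /\b(create|add|new|open)\b/i],
  ['update', /\b(update|edit|change|modify)\b/i],
  ['delete', /\b(delete|remove|close)\b/i],
  ['read', /\b(show|list|find|get|search)\b/i],
];

function inferOperation(text: string): Operation {
  for (const [op, pattern] of KEYWORDS) {
    if (pattern.test(text)) return op;
  }
  return 'none';
}

console.log(inferOperation('Create a new case file for participant #1024')); // create
console.log(inferOperation('Thanks for your help!'));                        // none
```

Keyword matching is inherently fuzzy, which is why text-derived tags carry confidence: 'low' while tags derived from an executed tool name carry confidence: 'high'.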

Add the AIActionTagPlugin to get audit callbacks and high-confidence tagging via tool call names:

```typescript
import { AIActionTagPlugin } from 'tekimax-omat';

const client = new Tekimax({
  provider,
  plugins: [
    new AIActionTagPlugin({
      onTag: (tag, context) => {
        auditLog.record({
          operation: tag.operation,
          model: tag.model,
          userId: context.requestOptions?.userId,
          timestamp: tag.timestamp,
        });
      },
    }),
  ],
});
```

Reasoning Models

Capture chain-of-thought reasoning from models like DeepSeek R1, Claude extended thinking, or o1.

```typescript
const response = await client.text.chat.completions.create({
  model: 'deepseek-r1',
  messages: [{ role: 'user', content: 'Should we expand the after-school program to two locations?' }],
  think: true,
});

console.log(response.message.thinking); // Reasoning trace (collapsible in UI)
console.log(response.message.content);  // Final recommendation
```

Model Context Window Management

The SDK knows the context window for 50+ models. TokenAwareContextPlugin automatically trims history when approaching the limit — always preserving the system prompt.

```typescript
import { TokenAwareContextPlugin } from 'tekimax-omat';

const client = new Tekimax({
  provider,
  plugins: [
    new TokenAwareContextPlugin({
      truncationStrategy: 'last_messages', // Keep most recent context
      contextUsageFraction: 0.85,          // Use up to 85% of the context window
      reserveOutputTokens: 2048,           // Reserve space for the response
    }),
  ],
});
```

The plugin auto-detects context windows from a built-in registry (GPT-4o: 128K, Claude: 200K, Gemini 1.5 Pro: 1M, etc.) and falls back to a live OpenRouter API lookup for models not in the registry.
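The budget math behind those settings can be sketched as follows, assuming a rough 4-characters-per-token estimate; the helper names and the estimator are illustrative, not the plugin's implementation:

```typescript
type Msg = { role: 'system' | 'user' | 'assistant'; content: string };

// Crude token estimate (assumption: ~4 characters per token).
const estimateTokens = (m: Msg) => Math.ceil(m.content.length / 4);

function trimToBudget(messages: Msg[], contextWindow: number, usageFraction = 0.85, reserveOutput = 2048): Msg[] {
  const budget = Math.floor(contextWindow * usageFraction) - reserveOutput;
  const system = messages.filter((m) => m.role === 'system'); // always preserved
  const rest = messages.filter((m) => m.role !== 'system');
  let used = system.reduce((n, m) => n + estimateTokens(m), 0);
  const kept: Msg[] = [];
  // 'last_messages' strategy: walk newest-first, keep turns while the budget allows.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i]);
    if (used + cost > budget) break;
    used += cost;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}

// With a 3000-token window, the oldest user turn no longer fits:
const history: Msg[] = [
  { role: 'system', content: 'S'.repeat(400) },     // ~100 tokens
  { role: 'user', content: 'A'.repeat(4000) },      // ~1000 tokens
  { role: 'assistant', content: 'B'.repeat(4000) }, // ~1000 tokens
  { role: 'user', content: 'C'.repeat(400) },       // ~100 tokens
];
const trimmed = trimToBudget(history, 3000, 0.85, 1000);
console.log(trimmed.map((m) => m.role)); // [ 'system', 'assistant', 'user' ]
```

Walking newest-first guarantees the most recent context survives, while filtering system messages out of the candidate list guarantees the prompt is never dropped.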

React Hooks

```typescript
import { useChat, useAssessment } from 'tekimax-omat/react';
```

See the React Integration guide and OMAT Assessment guide for full usage.
