Tekimax SDK

Core Concepts

The Tekimax SDK is built around two main primitives: the Tekimax client and the AIProvider interface.

The Tekimax Client

The Tekimax class is the main entry point. It handles:

  • Request validation.
  • Response normalization.
  • Error handling.
  • Streaming logic.
```typescript
import { Tekimax } from 'tekimax-ts';

const client = new Tekimax({
  provider: /* ... */,
  baseURL: 'https://api.your-proxy.com', // Optional override
});
```

Providers

Providers act as translation layers. They convert strict Tekimax types into the specific format required by the upstream API (e.g., OpenAI, Anthropic) and normalize the response back.

All providers implement the AIProvider interface:

```typescript
interface AIProvider {
  chat(
    request: ChatCompletionRequest
  ): Promise<ChatCompletionResponse>;
  chatStream(
    request: ChatCompletionRequest
  ): AsyncIterable<ChatCompletionChunk>;
}
```
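As an illustrative sketch (the type shapes below are simplified stand-ins for the SDK's real request and response types), a minimal in-memory provider satisfying this interface might look like:

```typescript
// Simplified stand-ins for the SDK's ChatCompletion* types.
interface ChatMessage { role: string; content: string; }
interface ChatCompletionRequest { model: string; messages: ChatMessage[]; }
interface ChatCompletionResponse { choices: { message: ChatMessage }[]; }
interface ChatCompletionChunk { choices: { delta: { content?: string } }[]; }

interface AIProvider {
  chat(request: ChatCompletionRequest): Promise<ChatCompletionResponse>;
  chatStream(request: ChatCompletionRequest): AsyncIterable<ChatCompletionChunk>;
}

// A toy provider that echoes the last user message. A real provider
// would translate the request for an upstream API and normalize the reply.
class EchoProvider implements AIProvider {
  async chat(request: ChatCompletionRequest): Promise<ChatCompletionResponse> {
    const last = request.messages[request.messages.length - 1];
    return {
      choices: [{ message: { role: 'assistant', content: `echo: ${last.content}` } }],
    };
  }

  // Streaming variant: yield the same text word by word as delta chunks.
  async *chatStream(request: ChatCompletionRequest): AsyncIterable<ChatCompletionChunk> {
    const { choices } = await this.chat(request);
    for (const word of choices[0].message.content.split(' ')) {
      yield { choices: [{ delta: { content: word + ' ' } }] };
    }
  }
}
```

Because both methods take the same strict request type, the client can swap providers without touching call sites.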

Streaming

Streaming is handled automatically. If you set stream: true in the request, the client returns an AsyncIterable of chunks instead of a single response.

```typescript
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
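Because the stream is a plain AsyncIterable, collecting the full text is just a loop. A small helper sketch (the chunk shape mirrors the example above; collectStream is not part of the SDK):

```typescript
interface ChatCompletionChunk {
  choices: { delta: { content?: string } }[];
}

// Accumulate streamed delta chunks into one complete string.
async function collectStream(
  stream: AsyncIterable<ChatCompletionChunk>
): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}
```

You could pass the result of a stream: true call directly to collectStream when you want the final text rather than incremental output.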

React Hooks

If you are using React, we provide built-in hooks for instant integration.

```typescript
import { useChat } from 'tekimax-ts/react';
```
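Conceptually, a chat hook manages the message list and the send-and-append cycle for you. A framework-free sketch of that cycle (the names and shapes here are illustrative, not the hook's actual API):

```typescript
interface ChatMessage { role: 'user' | 'assistant'; content: string; }

// Sketch of the loop a chat hook runs internally: append the user's
// input, ask the model for a reply, append the assistant's answer.
// `complete` stands in for a call through the Tekimax client.
async function sendMessage(
  messages: ChatMessage[],
  input: string,
  complete: (history: ChatMessage[]) => Promise<string>
): Promise<ChatMessage[]> {
  const next = [...messages, { role: 'user' as const, content: input }];
  const reply = await complete(next);
  return [...next, { role: 'assistant' as const, content: reply }];
}
```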

See the React Integration guide for more details.

Tekimax Native Adapter

The tekimax provider is a special adapter designed to work with the Tekimax Inference Engine (OpenResponse). It supports advanced features like native tool calling and multimodal inputs.

Tool Calling

The adapter automatically flattens tool definitions to match the OpenResponse spec.

```typescript
const client = new Tekimax({
  provider: new TekimaxProvider({
    baseUrl: 'https://api.your-inference-engine.com',
  }),
});

const response = await client.chat.completions.create({
  model: 'meta-llama/Llama-2-70b-chat',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get current weather',
        parameters: {
          type: 'object',
          properties: { location: { type: 'string' } },
          required: ['location'],
        },
      },
    },
  ],
});

// Access tool calls
const toolCalls = response.choices[0].message.tool_calls;
if (toolCalls) {
  console.log(toolCalls[0].function.name); // "get_weather"
}
```
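After receiving tool_calls, an application typically executes each requested tool and sends the results back as tool-role messages in the next request. A hedged sketch of that dispatch step (the handler map and helper below are illustrative, not SDK APIs; the tool-call shape follows the OpenAI-compatible convention of arguments as a JSON string):

```typescript
interface ToolCall {
  id: string;
  function: { name: string; arguments: string }; // arguments is a JSON string
}

type ToolHandler = (args: Record<string, unknown>) => string;

// Run each requested tool and build tool-role result messages
// ready to append to the conversation history.
function runToolCalls(
  toolCalls: ToolCall[],
  handlers: Record<string, ToolHandler>
) {
  return toolCalls.map((call) => {
    const handler = handlers[call.function.name];
    const args = JSON.parse(call.function.arguments) as Record<string, unknown>;
    return {
      role: 'tool' as const,
      tool_call_id: call.id,
      content: handler ? handler(args) : `Unknown tool: ${call.function.name}`,
    };
  });
}
```

The resulting messages are appended after the assistant message that requested the tools, and the conversation is sent back to the model for its final answer.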

OpenResponse Format

Under the hood, this adapter communicates using the OpenResponse standard. It maps standard OpenAI-compatible requests into the OpenResponse Message and Item format, ensuring compatibility with any OpenResponse-compliant backend.
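As a rough illustration only (the actual OpenResponse Message and Item shapes are defined by the spec, not by this sketch; the field names below are assumptions), the mapping is a pure transformation from chat messages to items:

```typescript
interface ChatMessage { role: string; content: string; }

// Hypothetical OpenResponse-style item: field names here are
// placeholders for illustration, not the normative spec.
interface Item {
  type: 'message';
  role: string;
  content: { type: 'text'; text: string }[];
}

// Map OpenAI-compatible chat messages into item records.
function toItems(messages: ChatMessage[]): Item[] {
  return messages.map((m) => ({
    type: 'message',
    role: m.role,
    content: [{ type: 'text', text: m.content }],
  }));
}
```

Because the transformation is stateless, any OpenResponse-compliant backend can consume the adapter's output without knowing which client produced it.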
