Tekimax SDK

Tekimax organizes AI capabilities into modalities. Each modality maps to a namespace on the client. Providers opt into capabilities explicitly — calling an unsupported modality is a compile-time TypeScript error, not a runtime 404.
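A minimal sketch of how capability-gated namespaces can be expressed in TypeScript. The names here (`Capability`, `ProviderLike`, `makeClient`) are illustrative, not the SDK's actual internals — the point is only that a mapped type over the provider's declared capabilities makes an unsupported namespace a type error rather than a runtime failure:

```typescript
// Hypothetical sketch — names are illustrative, not the SDK's real types.
type Capability = 'text' | 'images' | 'audio' | 'videos';

interface ProviderLike<C extends Capability> {
  capabilities: ReadonlyArray<C>;
}

// The client type only exposes the namespaces the provider declared.
type ClientFor<C extends Capability> = {
  [K in C]: { enabled: true };
};

function makeClient<C extends Capability>(p: ProviderLike<C>): ClientFor<C> {
  const client = {} as ClientFor<C>;
  for (const c of p.capabilities) {
    client[c] = { enabled: true };
  }
  return client;
}

const textOnly = makeClient({ capabilities: ['text'] as const });
console.log(textOnly.text.enabled); // true
// textOnly.images — compile-time error: property does not exist on ClientFor<'text'>
```

Because `ClientFor` is computed from the provider's capability list, the error surfaces in the editor, before any request is made.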


Text

The client.text namespace handles all language model interactions.

Code
import { Tekimax, OpenAIProvider } from 'tekimax-omat';

const client = new Tekimax({
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
});

// Standard chat
const response = await client.text.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Summarize this grant report' }],
});
console.log(response.message.content);

// Streaming
const stream = client.text.chat.completions.createStream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a program description' }],
});
for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}

// Embeddings
const embedding = await client.text.embed({
  model: 'text-embedding-3-small',
  input: 'Workforce development for returning citizens',
});
console.log(embedding.data[0].embedding); // number[]
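Since `client.text.embed` returns plain `number[]` vectors, standard similarity math applies directly. As a usage illustration (this helper is not part of the SDK), cosine similarity between two embeddings:

```typescript
// Cosine similarity between two embedding vectors.
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

This is the typical building block for semantic search over embedded documents: embed the query, then rank stored vectors by similarity.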

Images

Image Generation

Supported by: OpenAI (DALL-E 3), OpenRouter (model-dependent)

Code
const result = await client.images.generate({
  prompt: 'A diverse group of young adults collaborating on a coding project, warm lighting',
  model: 'dall-e-3',
  size: '1024x1024',
});
console.log(result.data[0].url);

Image Analysis (Vision)

Analyze images with multi-modal models. The SDK normalizes the format — OpenAI uses image_url content parts, Anthropic uses image source blocks, Gemini uses inlineData — but you always call the same method.

Supported by: OpenAI (GPT-4o), Anthropic (Claude 3.x), Gemini, OpenRouter

Code
const analysis = await client.images.analyze({
  model: 'gpt-4o',
  image: 'https://example.org/student-drawing.png', // URL or base64 data URL
  prompt: 'Describe what this student has drawn. Note any labels or text visible.',
});
console.log(analysis.content);
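To make the normalization concrete, here is a rough sketch of the kind of per-provider translation involved. The function names and exact payload shapes are assumptions for illustration, not the SDK's internals; the Gemini path assumes the input is a base64 data URL:

```typescript
// Illustrative only — not the SDK's actual adapter code.
type ImageInput = string; // URL or base64 data URL

// OpenAI: image_url content parts inside a user message
function toOpenAI(image: ImageInput, prompt: string) {
  return {
    role: 'user',
    content: [
      { type: 'text', text: prompt },
      { type: 'image_url', image_url: { url: image } },
    ],
  };
}

// Gemini: inlineData parts carrying a MIME type and base64 payload
// (assumes `image` is a data URL like 'data:image/png;base64,...')
function toGemini(image: ImageInput, prompt: string) {
  const [meta, data] = image.split(',');
  const mimeType = meta.replace('data:', '').replace(';base64', '');
  return {
    parts: [{ text: prompt }, { inlineData: { mimeType, data } }],
  };
}
```

Anthropic's image source blocks follow the same pattern with their own field names. The value of `client.images.analyze` is that callers never write any of this.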

OMAT use case: The AssessmentPipeline uses vision analysis to process student drawings and handwriting submissions — the image is described into text before rubric scoring. See the Assessment Guide.


Audio

Text-to-Speech

Supported by: OpenAI

Code
import fs from 'node:fs';

const audio = await client.audio.speak({
  model: 'tts-1',
  input: 'Welcome to the program. Let\'s get started with your intake assessment.',
  voice: 'nova', // alloy | echo | fable | onyx | nova | shimmer
});

// Returns ArrayBuffer — write to file or pipe to an audio player
await fs.promises.writeFile('welcome.mp3', Buffer.from(audio));

Transcription (Speech-to-Text)

Supported by: OpenAI (Whisper)

Code
import fs from 'node:fs';

const transcription = await client.audio.transcribe({
  file: fs.readFileSync('participant-response.mp3'),
  model: 'whisper-1',
  language: 'es',                  // Optional: ISO 639-1 language code
  response_format: 'verbose_json', // Includes timestamps and segments
});

console.log(transcription.text);
console.log(transcription.segments); // [{ start, end, text }, ...]
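The `segments` array from `verbose_json` is convenient for downstream formatting. As an illustration (this helper is not part of the SDK), rendering segments as SRT captions:

```typescript
// Shape of each entry in transcription.segments (verbose_json).
interface Segment {
  start: number; // seconds
  end: number;   // seconds
  text: string;
}

// Format seconds as an SRT timestamp: HH:MM:SS,mmm
function toTimestamp(sec: number): string {
  const totalMs = Math.round(sec * 1000);
  const ms = totalMs % 1000;
  const totalSec = Math.floor(totalMs / 1000);
  const h = Math.floor(totalSec / 3600);
  const m = Math.floor((totalSec % 3600) / 60);
  const s = totalSec % 60;
  const pad = (n: number, w = 2) => String(n).padStart(w, '0');
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms, 3)}`;
}

// Render numbered SRT blocks, one per segment.
function segmentsToSrt(segments: Segment[]): string {
  return segments
    .map(
      (seg, i) =>
        `${i + 1}\n${toTimestamp(seg.start)} --> ${toTimestamp(seg.end)}\n${seg.text.trim()}`,
    )
    .join('\n\n');
}
```

Useful for producing captions from participant recordings without another dependency.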

OMAT use case: The AssessmentPipeline transcribes speech responses automatically before rubric scoring. Set modality: 'speech' on your StudentResponse and the pipeline handles the rest. See the Assessment Guide.


Video

Video Analysis

Supported by: Gemini (native multi-modal video understanding)

Code
import { Tekimax, GeminiProvider } from 'tekimax-omat';

const client = new Tekimax({
  provider: new GeminiProvider({ apiKey: process.env.GOOGLE_API_KEY! }),
});

const analysis = await client.videos.analyze({
  video: 'https://cdn.example.org/session-recording.mp4',
  model: 'gemini-1.5-flash', // 1M context window — preferred for video
  prompt: 'Describe the key moments in this training session recording.',
});
console.log(analysis.content);

Note: Video generation is not currently implemented. The client.videos.generate() method will throw a capability error at runtime. This capability is tracked for a future release.
