
@synet/ai


Zero-dependency, secure, multi-provider AI unit following Unit Architecture principles. Supports OpenAI, Anthropic, Claude, Gemini, and Deepseek.

ai, openai, anthropic, claude, gemini, deepseek, mistral, unit-architecture, consciousness-transfer, function-calling, multi-provider, zero-dependency, production-ready, synthetism


AI Operator Unit

     \    _ _|       |   |        _)  |   
    _ \     |        |   |  __ \   |  __| 
   ___ \    |        |   |  |   |  |  |   
 _/    _\ ___|      \___/  _|  _| _| \__| 

version: 1.0.8
last_update: 28.08.25 
unit_arch: 1.1.1                           

Universal AI provider interface with built-in function calling support, following (⊚) Unit Architecture

In Package

  • Zero Dependencies - No external packages, pure TypeScript
  • Universal Interface - One IAI interface for custom providers
  • Production Ready - Error handling, retries, connection validation
  • Secure & Auditable - Minimal attack surface, transparent code
  • Function Calling First - Built-in tool support across all providers

Supported Providers

Provider | Status | Models | Function Calling | Cost
OpenAI | ✅ Production | gpt-5, gpt-5-mini | Parallel | $$$
OpenRouter | ✅ Production | 200+ models | Parallel | $
DeepSeek | ✅ Production | deepseek-chat | Parallel | $
Gemini | ✅ Production | gemini-1.5-flash | Parallel | $$
Grok | ✅ Production | grok-3-mini | Parallel | $
Mistral | ✅ Production | mistral-medium-latest | Parallel | $$
Claude | ✅ Production | claude-4-sonnet | Sequential* | $$$

*Claude uses sequential function calling built for agentic flows; use it with the @synet/agent Switch agent.

Quick Start

import { AI } from '@synet/ai';

// Choose your provider (uncomment the one you use)
const ai = AI.openai({ apiKey: 'sk-...', model: 'gpt-4o-mini' });
// const ai = AI.openrouter({ apiKey: 'sk-or-...', model: 'openai/gpt-oss-20b' });
// const ai = AI.deepseek({ apiKey: 'sk-...', model: 'deepseek-chat' });
// const ai = AI.gemini({ apiKey: 'AIza...', model: 'gemini-1.5-flash' });
// const ai = AI.grok({ apiKey: 'xai-...', model: 'grok-3-mini' });
// const ai = AI.mistral({ apiKey: 'sk-...', model: 'mistral-medium-latest' });
// const ai = AI.claude({ apiKey: 'sk-ant...', model: 'claude-3-7-sonnet-20250219' }); // Limited function calling

// Universal interface works identically
const response = await ai.ask('What is 2+2?');
console.log(response.content); // "2+2 equals 4"

IAI Interface

All providers implement the same universal interface:

interface IAI {
  ask(prompt: string, options?: AskOptions): Promise<AIResponse>;
  chat(messages: ChatMessage[], options?: ChatOptions): Promise<AIResponse>;
  tools(toolDefinitions: ToolDefinition[], request: ToolsRequest): Promise<AIResponse>;
  validateConnection(): Promise<boolean>;
}

Core Methods

// Simple text generation
const response = await ai.ask('Explain quantum computing');

// Conversational chat
const messages = [
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: 'Hi there!' },
  { role: 'user', content: 'How are you?' }
];
const response = await ai.chat(messages);

// Direct function calling
const tools = [/* tool definitions */];
const response = await ai.tools(tools, { prompt: 'Get weather for London' });

// Connection validation
const isConnected = await ai.validateConnection();

Provider Performance Ranking

Based on comprehensive function calling tests:

Rank | Provider | Speed | Quality | Cost | Best For
1 | DeepSeek | Slow | Excellent | Cheapest | Best Overall Value
2 | OpenRouter | Good | Good | Very Cheap | Budget + Free Model Variety
3 | OpenAI | Fast | Good | Expensive | Speed Critical Apps
4 | Mistral | Good | Excellent | Moderate | European Privacy
5 | Gemini | Good | Excellent | Moderate | Google Ecosystem
6 | Grok | Slow | Excellent | Very Cheap | Budget + Quality
7 | Claude | Slow | Incomplete* | Expensive | Text-only Tasks

*Claude supports only sequential tool calling; it is best used with @synet/agent.

Event System - AI call and tool monitoring

Monitor and debug AI operations in real-time with comprehensive event tracking and observability.

Overview

The AI unit emits events for all operations, providing complete visibility into:

  • Tool Execution - Which tools are called, duration, arguments, results/errors
  • Conversations - Ask/chat operations with tool schema debugging
  • Performance - Operation timing and success/failure patterns

Event Types

import { AIToolEvent, AIAskEvent, AIChatEvent } from '@synet/ai';

// Tool execution events
interface AIToolEvent {
  type: 'tool.success' | 'tool.error';
  timestamp: Date;
  provider: string;
  tool: ToolCall;      // Complete tool context
  duration: number;    // Execution time in ms
  result?: unknown;    // Tool result (success only)
  error?: string;      // Error message (error only)
}

// Conversation events
interface AIAskEvent {
  type: 'ask';
  timestamp: Date;
  provider: string;
  prompt: string;      // Truncated for privacy
  tools: ToolDefinition[]; // Available tool schemas
}

interface AIChatEvent {
  type: 'chat';
  timestamp: Date;
  provider: string;
  messageCount: number;
  tools: ToolDefinition[]; // Available tool schemas
}

Usage

import { AI } from '@synet/ai';

// Enable events for debugging/monitoring
const ai = AI.create({
  type: 'openai',
  options: { apiKey: 'sk-...', model: 'gpt-4o-mini' },
  emitEvents: true  // Enable events
});

// Listen to all AI operations
ai.on('tool.success', (event: AIToolEvent) => {
  console.log(`✅ Tool ${event.tool.function.name} completed in ${event.duration}ms`);
  console.log('Arguments:', event.tool.function.arguments);
  console.log('Result:', event.result);
});

ai.on('tool.error', (event: AIToolEvent) => {
  console.log(`❌ Tool ${event.tool.function.name} failed after ${event.duration}ms`);
  console.log('Error:', event.error);
});

ai.on('ask', (event: AIAskEvent) => {
  console.log(`🤖 Ask operation with ${event.tools.length} available tools`);
});

ai.on('chat', (event: AIChatEvent) => {
  console.log(`💬 Chat with ${event.messageCount} messages, ${event.tools.length} tools`);
});

// Use AI normally - events emit automatically
const response = await ai.call('Get weather for London', { useTools: true });

Performance Control

// Debug mode - Full consciousness monitoring
const debugAI = AI.create({
  type: 'openai',
  options: { apiKey: 'sk-...' },
  emitEvents: true   // Debug mode
});

// Production mode - Zero event overhead
const prodAI = AI.create({
  type: 'openai', 
  options: { apiKey: 'sk-...' },
  emitEvents: false  // Production mode (default)
});

Real-Time Debugging Example

// Monitor AI worker delegation patterns
ai.on('tool.success', (event) => {
  const { tool, duration, result } = event;

  // Track performance patterns
  if (duration > 5000) {
    console.warn(`Slow tool: ${tool.function.name} (${duration}ms)`);
  }

  // Debug tool arguments
  console.log(`Tool Schema Debug:`, {
    name: tool.function.name,
    args: JSON.parse(tool.function.arguments),
    result: typeof result
  });
});

// Track conversation patterns
ai.on('ask', (event) => {
  console.log(`Available capabilities: ${event.tools.map(t => t.function.name).join(', ')}`);
});

// Full consciousness monitoring
console.log('AI monitoring active...');
await ai.call('Create comprehensive weather report for London, Paris, Tokyo', {
  useTools: true
});

Benefits

  • Debug faster, ship faster - See exact arguments, schemas, and execution patterns
  • Optimize usage and performance - Track slow operations, bottlenecks, and errors
  • AI Behavior Analysis - Understand how AI uses learned capabilities
  • Production Monitoring - Real-time visibility into AI worker delegation
  • Zero Overhead - Emit events only when necessary

API Reference

AIOperator Unit

The core AI unit that implements Unit Architecture with universal provider interface.

Constructor

import { AIOperator } from '@synet/ai';

// Create AI unit with specific provider
const ai = AIOperator.create({
  type: 'openai',
  options: { 
    apiKey: 'sk-...', 
    model: 'gpt-4o-mini' 
  },
  emitEvents: false // Optional: Enable event monitoring (default: false)
});

// Enable consciousness monitoring for debugging
const debugAI = AIOperator.create({
  type: 'openai',
  options: { apiKey: 'sk-...', model: 'gpt-4o-mini' },
  emitEvents: true // 🔥 Full AI consciousness monitoring
});

Core Methods

ask(prompt, options?)

Simple AI query with optional tools.

async ask(
  prompt: string, 
  options?: AskOptions & { tools?: ToolDefinition[] }
): Promise<AIResponse>

Example:

const response = await ai.ask('What is the capital of France?');
console.log(response.content); // "The capital of France is Paris."

// With tools
const response = await ai.ask('Get weather for London', {
  tools: [weatherToolDefinition]
});

chat(messages, options?)

Conversational AI with message history.

async chat(
  messages: ChatMessage[], 
  options?: ChatOptions & { tools?: ToolDefinition[] }
): Promise<AIResponse>

Example:

const messages = [
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: 'Hi there!' },
  { role: 'user', content: 'How are you?' }
];
const response = await ai.chat(messages);

call(prompt, options?) 🔥

Most powerful method - AI with automatic learned tool execution.

async call(
  prompt: string, 
  options?: CallOptions
): Promise<AIResponse>

Example:

// Learn capabilities from weather unit
ai.learn([weather.teach()]);

// AI automatically uses weather tools when needed
const response = await ai.call('Create weather report for London, Paris, Tokyo', {
  useTools: true
});

chatWithTools(messages, options?) 🔥

Chat with automatic tool execution using learned capabilities.

async chatWithTools(
  messages: ChatMessage[], 
  options?: CallOptions
): Promise<AIResponse>

Example:

const messages = [
  { role: 'user', content: 'I need weather data for my trip planning' }
];
const response = await ai.chatWithTools(messages);
// AI automatically executes weather tools and provides comprehensive response

tools(toolDefinitions, request)

Direct function calling with specific tool definitions.

async tools(
  toolDefinitions: ToolDefinition[], 
  request: ToolsRequest
): Promise<AIResponse>
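
Example (a sketch; the weather tool below is illustrative and assumes an OpenAI-style tool schema, the request shape follows the Quick Start above):

const weatherTool = {
  type: 'function',
  function: {
    name: 'getCurrentWeather',
    description: 'Get current weather for a city',
    parameters: {
      type: 'object',
      properties: { city: { type: 'string' } },
      required: ['city']
    }
  }
};

const response = await ai.tools([weatherTool], { prompt: 'Get weather for London' });
console.log(response.toolCalls); // tool calls the model asked to execute, if any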

validateConnection()

Test provider connection and authentication.

async validateConnection(): Promise<boolean>

Unit Architecture Methods

learn(contracts)

Learn capabilities from other units.

learn(contracts: TeachingContract[]): void

Example:

import { WeatherUnit } from '@synet/weather';
import { EmailUnit } from '@synet/email';

const weather = WeatherUnit.create({ apiKey: 'weather-key' });
const email = EmailUnit.create({ smtp: { /* config */ } });

// AI learns weather and email capabilities
ai.learn([weather.teach(), email.teach()]);

// Now AI can use weather.getCurrentWeather and email.send automatically

teach()

Share AI capabilities with other units.

teach(): TeachingContract
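
Example (a minimal sketch; someOtherUnit is a placeholder for any Unit Architecture unit):

const contract = ai.teach();
someOtherUnit.learn([contract]); // the other unit can now delegate to this AI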

can(capability)

Check if AI has specific capability.

can(capability: string): boolean

Example:

if (ai.can('weather.getCurrentWeather')) {
  console.log('AI can get weather data');
}

Event Methods

on(event, listener)

Listen to AI operation events for debugging and monitoring.

on(event: string, listener: (event: Event) => void): void

Example:

// Monitor tool execution
ai.on('tool.success', (event: AIToolEvent) => {
  console.log(`✅ ${event.tool.function.name} (${event.duration}ms)`);
});

ai.on('tool.error', (event: AIToolEvent) => {
  console.log(`❌ ${event.tool.function.name} failed: ${event.error}`);
});

// Monitor conversations  
ai.on('ask', (event: AIAskEvent) => {
  console.log(`🤖 Ask with ${event.tools.length} tools available`);
});

ai.on('chat', (event: AIChatEvent) => {
  console.log(`💬 Chat with ${event.messageCount} messages`);
});

off(event, listener?)

Remove event listeners.

off(event: string, listener?: (event: Event) => void): void

emit(event, data)

Manually emit events (primarily for internal use).

emit(event: string, data: Event): void

Utility Methods

getProvider()

Get current provider type.

getProvider(): AIProviderType

getConfig()

Get provider configuration.

getConfig(): Record<string, unknown>
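
Example (illustrative output; the exact config shape depends on the provider):

console.log(ai.getProvider()); // e.g. 'openai'
console.log(ai.getConfig());   // e.g. { apiKey: 'sk-...', model: 'gpt-4o-mini' }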

withProvider(config)

Create a new AI unit instance with a different provider configuration.

withProvider<T extends AIProviderType>(config: AIConfig<T>): AIOperator

Example:

const openaiAI = ai.withProvider({
  type: 'openai',
  options: { apiKey: 'sk-...', model: 'gpt-4o' }
});

Types

AIResponse

interface AIResponse {
  content: string;
  usage?: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
  toolCalls?: ToolCall[];
  metadata?: Record<string, unknown>;
}
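
For instance, a response can be inspected for token usage and tool calls (a sketch; the optional fields may be absent for some providers):

const response = await ai.ask('Summarize Unit Architecture in one sentence.');
console.log(response.content);
console.log(response.usage?.total_tokens);    // token accounting, if the provider returns it
console.log(response.toolCalls?.length ?? 0); // tool calls requested, if any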

ChatMessage

interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
  metadata?: Record<string, unknown>;
}
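
A system message can steer the conversation, for example (a minimal sketch):

const messages: ChatMessage[] = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'Explain function calling in one sentence.' }
];
const reply = await ai.chat(messages);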

Configuration Types

AIConfig

interface AIConfig<T extends AIProviderType = AIProviderType> {
  type: T;
  options: T extends "openai" ? OpenAIConfig : 
          T extends "claude" ? OpenAIConfig :
          T extends "deepseek" ? OpenAIConfig :
          // ... other provider configs
}

OpenAIConfig (used by multiple providers)

interface OpenAIConfig {
  apiKey: string;
  model: string;
  baseURL?: string;
  maxTokens?: number;
  temperature?: number;
}
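
The optional baseURL is useful when targeting an OpenAI-compatible endpoint (a sketch; the URL below is a placeholder, values are illustrative):

const ai = AI.create({
  type: 'openai',
  options: {
    apiKey: 'sk-...',
    model: 'gpt-4o-mini',
    baseURL: 'https://my-gateway.example.com/v1', // placeholder OpenAI-compatible endpoint
    maxTokens: 1024,
    temperature: 0.2
  }
});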

Options Types

CallOptions

interface CallOptions {
  useTools?: boolean;
  maxToolCalls?: number;
  tools?: ToolDefinition[];
  systemPrompt?: string;
  metadata?: Record<string, unknown>;
}
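
For example, call() can be bounded and steered with these options (a sketch; values are illustrative):

const response = await ai.call('Summarize the weather for London', {
  useTools: true,
  maxToolCalls: 3,                          // cap automatic tool executions
  systemPrompt: 'Answer in two sentences.'
});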

AskOptions

interface AskOptions {
  systemPrompt?: string;
  maxTokens?: number;
  temperature?: number;
  metadata?: Record<string, unknown>;
}
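
For example (illustrative values):

const response = await ai.ask('Explain Unit Architecture', {
  systemPrompt: 'You are a terse technical writer.',
  maxTokens: 200,
  temperature: 0.3
});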

Unit Architecture Integration

@synet/ai integrates seamlessly with Unit Architecture for advanced capability composition:

import { AI } from '@synet/ai';
import { WeatherUnit } from '@synet/weather';

// Create AI and weather units
const ai = AI.deepseek({ apiKey: 'sk-...' });
const weather = WeatherUnit.create({ defaultUnits: 'metric' });

// AI learns weather capabilities
ai.learn([weather.teach()]);

// AI can now use weather tools automatically
const response = await ai.call('Get weather for London, Paris, Tokyo and compare', {
  useTools: true
});

console.log(response.content);
// Generates comprehensive weather report using learned capabilities

Function Calling Behavior

Parallel Providers (Recommended):

  • OpenAI, DeepSeek, Gemini, Grok, Mistral, and OpenRouter execute multiple function calls simultaneously
  • Complete reports in single request
  • Efficient and predictable

Sequential Provider (Not Recommended for Function Calling):

  • Claude executes one function call at a time
  • Often stops mid-task asking for user confirmation
  • Slower and incomplete results
  • Better suited for text-only applications
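
One way to observe the difference is to count the tool calls a provider returns for a multi-part request (a rough sketch; weatherToolDefinition is a placeholder for one of your own tool definitions):

const res = await ai.tools([weatherToolDefinition], {
  prompt: 'Get weather for London, Paris and Tokyo'
});
// Parallel providers typically return all three calls in one response;
// Claude tends to return one call at a time.
console.log(res.toolCalls?.length);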

Installation

npm install @synet/ai
# or
pnpm add @synet/ai

API Keys

Create your API keys:

Examples

See the /demo folder for comprehensive examples including:

  • Basic provider usage
  • Function calling with weather tools
  • Performance comparisons
  • Error handling patterns

Running demos with tools calling

1. Create credentials for each provider in the /private folder

private/openai.json

{
    "name": "synet",
    "apiKey": "open-ai-api"
}

2. Open Weather API

Create private/openweather.json

{
    "apiKey": "82d319ab...."
}
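
A demo script might then load these files with Node's fs (a minimal sketch assuming the layout above; the scripts in /demo may differ):

import { readFileSync } from 'node:fs';
import { AI } from '@synet/ai';

const { apiKey } = JSON.parse(readFileSync('private/openai.json', 'utf-8'));
const ai = AI.openai({ apiKey, model: 'gpt-4o-mini' });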

3. Run demos

tsx demo:weather   # Test this first
tsx demo:openai
tsx demo:deepseek
tsx demo:gemini
tsx demo:grok
tsx demo:mistral
tsx demo:openrouter

4. Create tools with (⊚) Unit Architecture

npm i @synet/unit

Documentation: https://github.com/synthetism/unit

Tools

Available tools that can teach capabilities to AI Operator:

Utilities

  • @synet/email - Send emails via SMTP, AWS SES, Resend
  • @synet/weather - Weather data and forecasts from multiple providers

Storage

  • @synet/fs - Single API for any filesystem, node, memory + cloud providers
  • @synet/vault - Secure, type-safe storage of secrets in any IFileSystem compatible filesystem.

Network Tools

  • @synet/http - HTTP requests and API integration
  • @synet/network - Resilient http requests with retry, rate-limiter and proxy

Decentralized Identity Tools

  • @synet/identity - Create decentralized identity
  • @synet/keys - Cryptographic key generation and signer
  • @synet/credential - Verifiable Credential creation and signing
  • @synet/vp - Verifiable Presentation issuance and verification
  • @synet/did - Decentralised ID from keys

System Tools

  • @synet/queue - Job queue and task management
  • @synet/cache - Multi-backend caching system
  • @synet/kv - Multi-backend Key Value storage system
  • @synet/realtime - Multi-backend Realtime Channels and Events

Scrapers

  • @synet/scraper - Scrape any page (request dev access)
  • @synet/formatter - Multi-format (request dev access)

Security Tools

  • @synet/hasher - Cryptographic hashing operations
  • @synet/crypto - Encryption and security operations

Data Tools

  • @synet/encoder - Data encoding and transformation
  • @synet/validator - Data validation and schema checking
  • @synet/logger - Structured, multi-provider logging and monitoring, remote events emitter.

Usage Example:

import { AI } from '@synet/ai';
import { EmailUnit } from '@synet/email';
import { WeatherUnit } from '@synet/weather';

const ai = AI.openai({ apiKey: 'sk-...' });
const email = EmailUnit.create({ /* config */ });
const weather = WeatherUnit.create({ /* config */ });

// AI learns capabilities from tools
ai.learn([email.teach(), weather.teach()]);

// AI can now send emails and get weather data
const result = await ai.call("Send weather report for London to user@example.com", {
  useTools: true
});

Create Your Own Tools:

Follow Unit Architecture to create custom AI tools:

npm install @synet/unit

See TOOLS.md for complete tool catalog.

License

MIT


Changelog

All notable changes to the @synet/ai package will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[1.0.8] - 2025-08-28

Fixed

  • Corrected events imports
  • Pass emitEvents with the config to enable events.

Example:

import { AIOperator, type AIToolEvent } from "@synet/ai";

const ai = AIOperator.create({ /* params */ });

ai.on<AIToolEvent>('tool.*', (event) => {
  if (!event.error) {
    console.log(`🟢 Tool Success: ${event.type} tool ${event.tool.function.name}`);
    console.log(' Arguments:', event.tool.function.arguments);
    console.log(' Result:', event.result);
  } else {
    console.error(`🔴 Tool Error: ${event.type} tool ${event.tool.function.name} failed`, event.error);
    console.log(' Arguments:', event.tool.function.arguments);
  }
});

Added

  • New unit architecture 1.1.1, extended tool schemas
  • Streamlined response schema
  • Empty schemas are now possible
  • Better typed emit (no more type casting for events)

[1.0.7] - 2025-08-23

Added

  • New unit architecture 1.1.1, extended tool schemas
  • Streamlined response schema
  • Empty schemas are now possible
  • Better typed emit (no more type casting for events)

Improved

  • Clean up tool calling
  • Better error handling

[1.0.6] - 2025-08-16

Added

  • Event System - AI processes Monitoring: Complete real-time visibility into AI operations
    • AIToolEvent - Tool execution monitoring with duration, arguments, results/errors
    • AIAskEvent - Conversation tracking with tool schema debugging
    • AIChatEvent - Chat operation monitoring with tool availability
    • Performance Control: emitEvents flag for conscious choice over unconscious consumption
    • Smith Architecture: Zero overhead when events disabled, full consciousness when enabled
    • Export Event Interfaces: AIToolEvent, AIAskEvent, AIChatEvent for TypeScript users
    • Real-time Debugging: Monitor AI worker delegation patterns and performance bottlenecks
    • Tool Schema Debugging: Complete visibility into tool definitions and execution context

Enhanced

  • AIOperator: Added comprehensive event emission throughout ask(), chat(), executeToolCalls()
  • Event Architecture: Built on Unit@1.0.9 EventEmitter with provider-agnostic Event interface
  • Documentation: Complete event system documentation with usage examples and performance guidance

Technical

  • Event Emission Control: Events only emit when emitEvents: true configured
  • Tool Context Preservation: Full ToolCall context including function name, arguments, and timing
  • Error Boundary Separation: Tool success/error events with complete debugging information
  • Conversation Visibility: Ask/chat events with tool schema debugging for capability analysis

[1.0.5] - 2025-08-12

Added

  • GPT-5 support, default temperature is 1, no max_tokens defaults (gpt-5 doesn't work otherwise)

[1.0.4] - 2025-08-12

Added

  • chatWithTools - same as chat, but with tools calling

[1.0.3] - 2025-08-10

Major Bug Fix - AI Function Calling Parameter Mapping

This release resolves a critical bug that prevented AI units from properly executing learned capabilities from other units, specifically filesystem and external API operations.

Fixed

  • Critical Parameter Mapping Bug: Fixed incorrect parameter passing between AI tool execution and Unit capabilities

    • Issue: AI was spreading object arguments instead of passing them as expected objects
    • Before: this.execute(capability, ...Object.values(args)) → readFile('path')
    • After: this.execute(capability, args) → readFile({ path: 'path' })
    • Impact: Enables proper Unit Architecture consciousness collaboration between AI and other units
  • JSON Argument Parsing: Enhanced argument handling to support both string and object formats from different AI providers

    • Handles OpenAI's JSON string arguments: JSON.parse(toolCall.function.arguments)
    • Handles Mistral's pre-parsed object arguments: Direct object handling
    • Backward compatible with all supported AI providers

Improved

  • Enhanced Debugging: Added comprehensive parameter logging for AI tool execution

    • Real-time visibility into argument parsing and capability execution
    • Detailed error reporting with unit identity and resolution guidance
    • Tool call tracking and success/failure monitoring
  • Unit Architecture Integration: Strengthened AI-Unit consciousness collaboration

    • Verified compatibility with AsyncFileSystem Unit (filesystem operations)
    • Confirmed weather API integration through WeatherUnit teaching contracts
    • Validated multi-capability learning and autonomous execution

Verified

  • Real-World Integration Testing: All demos confirmed working
    • ✅ OpenAI GPT-4o-mini with function calling
    • ✅ Weather API integration through Unit consciousness transfer
    • ✅ AI-safe filesystem operations with homePath resolution
    • ✅ Multi-unit capability learning (13+ capabilities from 2 units)
    • ✅ Complex AI workflows: read file → analyze → generate report

Technical Details

The Bug

// BROKEN (v1.0.2 and earlier):
const args = Object.values(toolCall.function.arguments); // ['vault/weather.md']
const result = await this.execute(capabilityName, ...args); // readFile('vault/weather.md')

// Unit capability expected: readFile({ path: string })
// But received: readFile('vault/weather.md')
// Result: args[0].path = 'vault/weather.md'.path = undefined → CRASH

The Fix

// FIXED (v1.0.3+):
const parsedArgs = typeof toolCall.function.arguments === 'string' 
  ? JSON.parse(toolCall.function.arguments) 
  : toolCall.function.arguments; // { path: 'vault/weather.md' }

const result = await this.execute(capabilityName, parsedArgs); // readFile({ path: 'vault/weather.md' })

// Unit capability receives: { path: 'vault/weather.md' }
// Result: args[0].path = 'vault/weather.md' → SUCCESS ✅

Breaking Changes

None - this is a bug fix release that maintains full backward compatibility.

Migration Guide

No migration required. Existing code will work unchanged and benefit from the improved parameter handling.

Contributors

  • Discovery: Identified through systematic debugging of AI filesystem integration
  • Root Cause Analysis: Parameter mapping inconsistency between AI execution layer and Unit capabilities
  • Resolution: Enhanced argument parsing and capability parameter passing
  • Verification: Comprehensive testing across OpenAI, Weather API, and Filesystem operations

[1.0.2] - 2025-08-09

Added

  • Enhanced AI provider support with improved error handling
  • Comprehensive demo collection for all supported providers
  • Unit Architecture consciousness transfer capabilities

Fixed

  • Provider-specific compatibility issues
  • Tool definition schema consistency

[1.0.1] - 2025-08-08

Added

  • Multi-provider AI support (OpenAI, Anthropic, Gemini, Deepseek, Mistral)
  • Unit Architecture teach/learn paradigm implementation
  • Zero-dependency core with secure provider abstractions

Security

  • Provider API key validation and secure handling
  • Input sanitization and output validation

[1.0.0] - 2025-08-07

Added

  • Initial release of @synet/ai package
  • Core AI unit implementation following Unit Architecture principles
  • Support for major AI providers through unified interface
  • Teaching/learning capability system for Unit consciousness collaboration