LLM Memory SDK

Let your LLM calls use improved user memory to increase personalization and performance.


Key Features Now Available to Developers

🧠 Memory Enhancement: Use user memories and data from your platform, and from ALL other platforms a user uses, for enhanced LLM performance and personalization. Save token costs by sending only the necessary, specific memories and user data with each call.

🔗 Unified API: A single interface for OpenAI, Anthropic Claude, and Google Gemini.

Usage Example

import { OnairosClient } from 'onairos';

const onairos = new OnairosClient({
  openaiApiKey: process.env.OPENAI_API_KEY,
  anthropicApiKey: process.env.ANTHROPIC_API_KEY, // Optional
  googleApiKey: process.env.GOOGLE_API_KEY, // Optional
  pineconeApiKey: process.env.PINECONE_API_KEY,
  pineconeEnvironment: process.env.PINECONE_ENVIRONMENT,
  jwtSecret: process.env.JWT_SECRET
});

// Initialize the SDK
await onairos.initialize();
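
Once the client is initialized, each end user needs a session token before you can make memory-enhanced calls on their behalf. A minimal sketch of issuing one per request, reusing the onairos client created above (the Express app and query-based user lookup are illustrative assumptions, not part of the SDK):

import express from 'express';

const app = express();

// Illustrative route: mint a session token for the authenticated user
app.get('/session', (req, res) => {
  // In a real app, derive the user id from your own auth layer
  const userId = req.query.userId;
  const sessionToken = onairos.generateSessionToken(userId);
  res.json({ sessionToken });
});

app.listen(3000);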

🔄 Migration from OpenAI

Before (OpenAI)

import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'What is artificial intelligence?' }
  ]
});

After (OpenAI through Onairos)

const userId = 'user-123';
const sessionToken = onairos.generateSessionToken(userId);

const response = await onairos.completions({
  model: 'gpt-4', // or 'claude-3-sonnet-20240229' or 'gemini-pro'
  messages: [
    { role: 'user', content: 'What is artificial intelligence?' }
  ],
  userId,
  sessionToken
});
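
Apart from the extra userId and sessionToken fields, the request shape is unchanged, and the response keeps the OpenAI-style structure (see the Basic Completion example below), so code that reads the reply does not need to change:

console.log(response.choices[0].message.content);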

🎯 Usage Examples

Basic Completion

const response = await onairos.completions({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'Hello, how are you?' }
  ],
  userId: 'user-123',
  sessionToken: sessionToken,
  options: {
    temperature: 0.7,
    max_tokens: 150
  }
});

console.log(response.choices[0].message.content);
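
Because completions is awaited, provider, network, or auth failures surface as ordinary promise rejections, so calls can be guarded with standard try/catch (the error handling below is illustrative, not SDK-specific behaviour):

try {
  const response = await onairos.completions({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello, how are you?' }],
    userId: 'user-123',
    sessionToken
  });
  console.log(response.choices[0].message.content);
} catch (err) {
  // Handle the failure in whatever way fits your app (retry, fallback, surface to the user)
  console.error('Completion failed:', err);
}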

Multi-Provider Usage

// OpenAI GPT-4
const gptResponse = await onairos.completions({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
  userId, sessionToken
});

// Anthropic Claude
const claudeResponse = await onairos.completions({
  model: 'claude-3-sonnet-20240229',
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
  userId, sessionToken
});
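
The same call shape also covers Google Gemini; a sketch using the 'gemini-pro' model identifier mentioned above:

// Google Gemini
const geminiResponse = await onairos.completions({
  model: 'gemini-pro',
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
  userId, sessionToken
});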