Human API - Add the right context/memories to any LLM
Let your LLM calls draw on rich user memory to improve response quality and save tokens (and money)
What is Human API?
Quick Example
// Standard OpenAI call
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{
    role: 'user',
    content: 'What should I work out today?'
  }]
});
// Returns: Generic workout advice

// Just change these two lines in your existing code:
const onairos_client = new OpenAI({
  apiKey: 'your_app_api_key_here', // Your developer key for the application
  baseURL: 'https://developer.onairos.uk/v1' // Our endpoint
});

// Human API call with memory
const response = await onairos_client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{
    role: 'user',
    content: 'Based on {onairos_memory}, what should I work out today?'
  }]
});
// Returns: "Since you did legs yesterday and prefer morning cardio..."

Quick Setup
1. Get Your API Keys
1.5 Key descriptions:
2. Update Your Code
3. Add Memory to Prompts
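The three steps above can be sketched end to end without the OpenAI SDK, using plain fetch (Node 18+) against the endpoint shown in the Quick Example. The env-var name ONAIROS_API_KEY and the helper names are assumptions, not part of any published API; the {onairos_memory} placeholder is the one from the example above.

```javascript
// Step 3: reference {onairos_memory} so the service can inject the user's memory.
// (Hypothetical helper; the placeholder string comes from the Quick Example.)
function withMemory(prompt) {
  return `Based on {onairos_memory}, ${prompt}`;
}

// Steps 1–2: send the request to the Human API endpoint with your developer key.
// ONAIROS_API_KEY is a hypothetical env-var name — use wherever your app stores the key.
async function chatWithMemory(prompt) {
  const res = await fetch('https://developer.onairos.uk/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.ONAIROS_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: withMemory(prompt) }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

If you already use the OpenAI SDK, the two-line client change from the Quick Example achieves the same thing without hand-rolling the request.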
Getting Started
1. Get Your API Key
2. Update Your OpenAI Client
3. Add Memory to Your Prompts
4. Start Getting Personalized Responses
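Step 3 above asks you to add memory to your prompts. A small hypothetical helper (not part of any SDK) can guarantee the {onairos_memory} placeholder from the Quick Example is present before a prompt is sent, without duplicating it in prompts that already reference it:

```javascript
// Hypothetical helper: ensures a prompt references the {onairos_memory}
// placeholder shown in the Quick Example exactly once.
function ensureMemoryPlaceholder(content) {
  // Leave prompts that already mention the placeholder untouched.
  if (content.includes('{onairos_memory}')) return content;
  // Otherwise prepend it, using the phrasing from the Quick Example.
  return `Based on {onairos_memory}, ${content}`;
}
```

Run every user-facing prompt through a guard like this and step 4 follows automatically: each request carries the memory reference the service needs to personalize its response.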
Real Example: Fitness App Integration
Dating App Example
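This section has no body in this export. A minimal sketch of how the same {onairos_memory} pattern from the fitness example might look in a dating app — the helper name and prompt text are invented for illustration:

```javascript
// Hypothetical dating-app prompt builder reusing the {onairos_memory} pattern
// from the Quick Example; the service substitutes the user's stored context.
function datingMessages(question) {
  return [{
    role: 'user',
    content: `Based on {onairos_memory}, ${question}`,
  }];
}

// e.g. pass these to onairos_client.chat.completions.create({ model: 'gpt-4o', messages })
const messages = datingMessages('what should I say in my first message to someone who loves hiking?');
```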