
Using the Inference API

Gain insight into how your users think, feel, act, and react

The Inference API provides a machine learning model that generates predictions from the data you provide. This page explains how to format your input for the API and how to interpret the results it returns.

Input Format

Send a POST request to the API endpoint with a JSON payload containing a set of entries for prediction. Each entry should include the following information:

  • text: The text to generate a prediction for (String) - required

  • category: The category to which the content belongs (String) - required

  • img_url: The URL of an image associated with the content (String) - optional

Example JSON body for the POST request:


  "Input": {
    "input1": {
      "text": "Example text input 1",
      "category": "Example Category 1",
      "img_url": "http://example.com/image1.jpg"
    },
    "input2": {
      "text": "Example text input 2",
      "category": "Example Category 2",
      "img_url": "http://example.com/image2.jpg"
    },
    "input3": {
      "text": "Example text input 3",
      "category": "Example Category 3",
      "img_url": "http://example.com/image3.jpg"
    },
  }
    // Additional entries can be added here
  

You can then call the Inference API with the input payload created above.
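A minimal sketch of building this payload in Dart, assuming the JSON structure above; the variable name inputData is illustrative and is reused by the request code below:

final Map<String, dynamic> inputData = {
  "Input": {
    "input1": {
      "text": "Example text input 1",
      "category": "Example Category 1",
      "img_url": "http://example.com/image1.jpg", // optional field
    },
    "input2": {
      "text": "Example text input 2",
      "category": "Example Category 2", // img_url omitted: it is optional
    },
  },
};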

Remember to include the access token in the Authorization header of your API request.

import 'dart:convert';
import 'package:http/http.dart' as http;

Future<void> onResolved(
    String apiUrl, String accessToken, Map<String, dynamic> inputData) async {
  try {
    final response = await http.post(
      Uri.parse(apiUrl),
      headers: {
        // If required, add further headers for the request
        "Authorization": accessToken,
        "Content-Type": "application/json",
      },
      body: jsonEncode(inputData),
    );

    if (response.statusCode == 200) {
      final Map<String, dynamic> data = jsonDecode(response.body);
      // process Onairos Data
    }
  } catch (e) {
    // Handle network or JSON decoding errors here
  }
}
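A hypothetical call tying the pieces together; the endpoint URL below is a placeholder rather than the real Onairos endpoint, and accessToken is the token obtained in the Receiving API step:

await onResolved(
  "https://example.com/inference", // placeholder; use the URL from the Receiving API step
  accessToken,
  inputData,
);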
