@embedapi/core

1.0.9 • Public • Published

EmbedAPIClient

🔥 ONE API KEY TO RULE THEM ALL! Access ANY AI model instantly through our game-changing unified API. Build AI apps in minutes, not months! The ultimate all-in-one AI agent solution you've been waiting for! 🚀

Visit embedapi.com to get your API key and start building!

Installation

Using npm:

npm install @embedapi/core

Using yarn:

yarn add @embedapi/core

Using pnpm:

pnpm add @embedapi/core

Initialization

const EmbedAPIClient = require('@embedapi/core');

// Regular API client
const client = new EmbedAPIClient('your-api-key');

// Agent mode client
const agentClient = new EmbedAPIClient('your-agent-id', { isAgent: true });

// Debug mode client
const debugClient = new EmbedAPIClient('your-api-key', { debug: true });

// Agent and debug mode client
const debugAgentClient = new EmbedAPIClient('your-agent-id', {
    isAgent: true,
    debug: true
});

Constructor Parameters

  • apiKey (string): Your API key for regular mode, or agent ID for agent mode
  • options (object, optional): Configuration options
    • isAgent (boolean, optional): Set to true to use agent mode. Defaults to false
    • debug (boolean, optional): Set to true to enable debug logging. Defaults to false

Methods

1. generate({ service, model, messages, ...options })

Generates text using the specified AI service and model.

Parameters

  • service (string): The name of the AI service (e.g., 'openai')
  • model (string): The model to use (e.g., 'gpt-4o')
  • messages (array): An array of message objects containing conversation history
  • maxTokens (number, optional): Maximum number of tokens to generate
  • temperature (number, optional): Sampling temperature
  • topP (number, optional): Top-p sampling parameter
  • frequencyPenalty (number, optional): Frequency penalty parameter
  • presencePenalty (number, optional): Presence penalty parameter
  • stopSequences (array, optional): Stop sequences for controlling response generation
  • tools (array, optional): Array of function definitions for tool use
  • toolChoice (string|object, optional): Tool selection preferences
  • enabledTools (array, optional): List of enabled tool names
  • userId (string, optional): Optional user identifier for agent mode

Usage Example

// Regular mode
const response = await client.generate({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentResponse = await agentClient.generate({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
});
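
The tools and toolChoice options take an array of function definitions. The exact schema isn't documented here, so this sketch assumes the OpenAI-style function-calling format; the get_weather tool and its parameters are hypothetical, not part of the package:

```javascript
// Hypothetical tool definition, assuming the OpenAI function-calling format
const tools = [{
    type: 'function',
    function: {
        name: 'get_weather',
        description: 'Get the current weather for a city',
        parameters: {
            type: 'object',
            properties: {
                city: { type: 'string', description: 'City name' }
            },
            required: ['city']
        }
    }
}];

// The payload that would be passed to generate()
const request = {
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Weather in Paris?' }],
    tools,
    toolChoice: 'auto'
};

// const response = await client.generate(request);
```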

2. stream({ service, model, messages, ...options })

Streams text generation using the specified AI service and model.

Parameters

Same as generate(), plus:

  • streamOptions (object, optional): Stream-specific configuration options

Response Format

The stream emits Server-Sent Events (SSE) with two types of messages:

  1. Content Chunks:
{
    "content": "Generated text chunk",
    "role": "assistant"
}
  2. Final Statistics:
{
    "type": "done",
    "tokenUsage": 17,
    "cost": 0.000612
}
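
The two message types can be told apart by the presence of the type field. A small helper, sketched from the two formats shown above:

```javascript
// Parse one SSE data line into a tagged result.
// Returns { kind: 'stats', ... } for the final message,
// { kind: 'content', ... } for a content chunk, or null for non-data lines.
function parseStreamMessage(line) {
    if (!line.startsWith('data: ')) return null; // not an SSE data line
    const data = JSON.parse(line.slice('data: '.length));
    if (data.type === 'done') {
        return { kind: 'stats', tokenUsage: data.tokenUsage, cost: data.cost };
    }
    return { kind: 'content', content: data.content, role: data.role };
}

const chunk = parseStreamMessage('data: {"content":"Hi","role":"assistant"}');
const stats = parseStreamMessage('data: {"type":"done","tokenUsage":17,"cost":0.000612}');
```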

Usage Example

// Regular mode
const streamResponse = await client.stream({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentStreamResponse = await agentClient.stream({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
});

// Process the stream
const reader = streamResponse.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // SSE events can be split across chunks, so buffer partial lines
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the last (possibly incomplete) line

    for (const line of lines) {
        if (line.startsWith('data: ')) {
            const data = JSON.parse(line.slice(6));
            if (data.type === 'done') {
                console.log('Stream stats:', {
                    tokenUsage: data.tokenUsage,
                    cost: data.cost
                });
            } else {
                console.log('Content:', data.content);
            }
        }
    }
}

3. listModels()

Lists all available models.

const models = await client.listModels();
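
The shape of the listModels() response isn't documented here. Assuming it resolves to an array of descriptors with service and model fields (an assumption, not a documented contract), you could filter by provider like this:

```javascript
// Return the model names offered by one service.
// Assumes a hypothetical response shape: [{ service, model }, ...]
function modelsForService(models, service) {
    return models
        .filter((m) => m.service === service)
        .map((m) => m.model);
}

// Sample data mirroring the assumed shape:
const sample = [
    { service: 'openai', model: 'gpt-4o' },
    { service: 'anthropic', model: 'claude-3-5-sonnet' },
    { service: 'openai', model: 'gpt-4o-mini' }
];
console.log(modelsForService(sample, 'openai')); // → ['gpt-4o', 'gpt-4o-mini']
```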

4. testAPIConnection()

Tests the connection to the API.

const isConnected = await client.testAPIConnection();

Error Handling

All methods throw errors if the API request fails:

try {
    const response = await client.generate({
        service: 'openai',
        model: 'gpt-4o',
        messages: [{ role: 'user', content: 'Hello' }]
    });
} catch (error) {
    console.error('Error:', error.message);
}
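
Transient failures such as rate limits and network blips are often worth retrying with backoff. A minimal sketch; the retry policy and delay values are illustrative, not part of the client:

```javascript
// Exponential backoff delay in milliseconds: 500, 1000, 2000, ...
function backoffMs(attempt) {
    return 500 * 2 ** attempt;
}

// Retry an async operation up to maxRetries extra times on failure.
async function withRetry(operation, maxRetries = 3) {
    let lastError;
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
        try {
            return await operation();
        } catch (error) {
            lastError = error;
            if (attempt < maxRetries) {
                await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
            }
        }
    }
    throw lastError;
}

// Usage:
// const response = await withRetry(() => client.generate({ ... }));
```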

Authentication

The client supports two authentication modes:

  1. Regular Mode (default)

    • Uses API key in request headers
    • Initialize with: new EmbedAPIClient('your-api-key')
  2. Agent Mode

    • Uses agent ID in request body
    • Initialize with: new EmbedAPIClient('your-agent-id', { isAgent: true })
    • Optional userId parameter available for request tracking
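
A common pattern is to pick the mode from environment variables at startup. The variable names EMBEDAPI_KEY and EMBEDAPI_AGENT_ID below are hypothetical; the package does not read them itself:

```javascript
// Build constructor arguments from the environment.
// Prefers agent mode when an agent ID is set; falls back to a regular API key.
function clientConfigFromEnv(env) {
    if (env.EMBEDAPI_AGENT_ID) {
        return { key: env.EMBEDAPI_AGENT_ID, options: { isAgent: true } };
    }
    if (env.EMBEDAPI_KEY) {
        return { key: env.EMBEDAPI_KEY, options: {} };
    }
    throw new Error('Set EMBEDAPI_KEY or EMBEDAPI_AGENT_ID');
}

const cfg = clientConfigFromEnv({ EMBEDAPI_AGENT_ID: 'agent-123' });
// const client = new EmbedAPIClient(cfg.key, cfg.options);
```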

License

MIT
