🔥 ONE API KEY TO RULE THEM ALL! Access ANY AI model instantly through our game-changing unified API. Build AI apps in minutes, not months! The ultimate all-in-one AI agent solution you've been waiting for! 🚀
Visit embedapi.com to get your API key and start building!
Using npm:
npm install @embedapi/core
Using yarn:
yarn add @embedapi/core
Using pnpm:
pnpm add @embedapi/core
const EmbedAPIClient = require('@embedapi/core');

// Regular API client
const client = new EmbedAPIClient('your-api-key');

// Agent mode client
const agentClient = new EmbedAPIClient('your-agent-id', { isAgent: true });

// Debug mode client
const debugClient = new EmbedAPIClient('your-api-key', { debug: true });

// Agent and debug mode client
const debugAgentClient = new EmbedAPIClient('your-agent-id', {
  isAgent: true,
  debug: true
});
Constructor parameters:

- apiKey (string): Your API key for regular mode, or your agent ID for agent mode
- options (object, optional): Configuration options
  - isAgent (boolean, optional): Set to true to use agent mode. Defaults to false
  - debug (boolean, optional): Set to true to enable debug logging. Defaults to false
The generate() method generates text using the specified AI service and model.

Parameters:

- service (string): The name of the AI service (e.g., 'openai')
- model (string): The model to use (e.g., 'gpt-4o')
- messages (array): An array of message objects containing the conversation history
- maxTokens (number, optional): Maximum number of tokens to generate
- temperature (number, optional): Sampling temperature
- topP (number, optional): Top-p sampling parameter
- frequencyPenalty (number, optional): Frequency penalty parameter
- presencePenalty (number, optional): Presence penalty parameter
- stopSequences (array, optional): Stop sequences for controlling response generation
- tools (array, optional): Array of function definitions for tool use
- toolChoice (string|object, optional): Tool selection preferences
- enabledTools (array, optional): List of enabled tool names
- userId (string, optional): User identifier for agent mode
// Regular mode
const response = await client.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentResponse = await agentClient.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});
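The optional parameters from the list above go in the same options object. A minimal sketch (values are illustrative only; the response shape is not shown here):

// Generation with optional sampling controls
const tuned = await client.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a concise assistant.' },
    { role: 'user', content: 'Summarize Server-Sent Events in one sentence.' }
  ],
  maxTokens: 200,          // cap the length of the completion
  temperature: 0.2,        // lower values make output more deterministic
  topP: 0.9,               // nucleus sampling cutoff
  stopSequences: ['\n\n']  // stop generation at a blank line
});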
The stream() method streams text generation using the specified AI service and model. It accepts the same parameters as generate(), plus:

- streamOptions (object, optional): Stream-specific configuration options
The stream emits Server-Sent Events (SSE) with two types of messages:

- Content chunks:

  {
    "content": "Generated text chunk",
    "role": "assistant"
  }

- Final statistics:

  {
    "type": "done",
    "tokenUsage": 17,
    "cost": 0.000612
  }
// Regular mode
const streamResponse = await client.stream({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentStreamResponse = await agentClient.stream({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});
// Process the stream
const reader = streamResponse.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // stream: true keeps multi-byte characters intact across chunk boundaries
  const chunk = decoder.decode(value, { stream: true });
  const lines = chunk.split('\n');

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = JSON.parse(line.slice(6));
      if (data.type === 'done') {
        // Final statistics message
        console.log('Stream stats:', {
          tokenUsage: data.tokenUsage,
          cost: data.cost
        });
      } else {
        // Content chunk message
        console.log('Content:', data.content);
      }
    }
  }
}
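The same parsing logic can be wrapped in a small helper that accumulates the streamed text and returns the final statistics. A sketch built on the loop above (it assumes each SSE data line arrives whole within a chunk, as in that loop):

// Collect a full completion from a stream() response
async function collectStream(streamResponse) {
  const reader = streamResponse.body.getReader();
  const decoder = new TextDecoder();
  let text = '';
  let stats = null;

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    for (const line of decoder.decode(value, { stream: true }).split('\n')) {
      if (!line.startsWith('data: ')) continue;
      const data = JSON.parse(line.slice(6));
      if (data.type === 'done') {
        stats = { tokenUsage: data.tokenUsage, cost: data.cost };
      } else {
        text += data.content;
      }
    }
  }

  return { text, stats };
}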
The listModels() method lists all available models.
const models = await client.listModels();
The testAPIConnection() method tests the connection to the API.
const isConnected = await client.testAPIConnection();
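For example, a simple startup check might combine these two calls (a sketch only; it assumes testAPIConnection() resolves to a truthy/falsy value and logs the models list as-is, since its exact shape is not documented here):

// Verify connectivity before issuing generation requests
const isConnected = await client.testAPIConnection();
if (!isConnected) {
  throw new Error('EmbedAPI connection test failed');
}

const models = await client.listModels();
console.log('Available models:', models);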
All methods throw errors if the API request fails:
try {
  const response = await client.generate({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (error) {
  console.error('Error:', error.message);
}
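If you want basic resilience, a small retry wrapper can be layered on top. This is a sketch only; it assumes failures surface as thrown errors and does not inspect provider-specific error codes:

// Retry a generate() call a few times with exponential backoff
async function generateWithRetry(params, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await client.generate(params);
    } catch (error) {
      if (i === attempts - 1) throw error; // out of retries, rethrow
      await new Promise((resolve) => setTimeout(resolve, 2 ** i * 1000));
    }
  }
}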
The client supports two authentication modes:
- Regular Mode (default)
  - Uses an API key in the request headers
  - Initialize with: new EmbedAPIClient('your-api-key')
- Agent Mode
  - Uses an agent ID in the request body
  - Initialize with: new EmbedAPIClient('your-agent-id', { isAgent: true })
  - An optional userId parameter is available for request tracking
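In agent mode, the userId value is simply passed along with the generate() call. A minimal sketch (the 'user-123' identifier is a placeholder):

// Agent mode request tagged with a user identifier for tracking
const agentClient = new EmbedAPIClient('your-agent-id', { isAgent: true });

const reply = await agentClient.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
  userId: 'user-123'
});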
License: MIT