# 🚅 LiteLLM.js

JavaScript implementation of LiteLLM.
## Usage

```shell
npm install litellm
```
```js
import { completion } from 'litellm';

process.env['OPENAI_API_KEY'] = 'your-openai-key';

const response = await completion({
  model: 'gpt-3.5-turbo',
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
});

// or stream the results
const stream = await completion({
  model: 'gpt-3.5-turbo',
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
  stream: true,
});

for await (const part of stream) {
  process.stdout.write(part.choices[0]?.delta?.content || '');
}
```
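When streaming, the delta chunks can be accumulated into the full response text. A minimal sketch of a generic accumulator (the helper and the mock stream below are illustrative, not part of the library; the helper works with any async iterable yielding OpenAI-style chunks like the stream above):

```javascript
// Collect streamed delta chunks into a single string.
// Expects chunks shaped like { choices: [{ delta: { content } }] }.
async function collectStream(stream) {
  let text = '';
  for await (const part of stream) {
    text += part.choices[0]?.delta?.content || '';
  }
  return text;
}

// Demo with a mock stream, so no API call is needed:
async function* mockStream() {
  for (const c of ['Hel', 'lo!']) {
    yield { choices: [{ delta: { content: c } }] };
  }
}

collectStream(mockStream()).then((text) => console.log(text)); // prints 'Hello!'
```

In real use you would pass the stream returned by `completion({ ..., stream: true })` instead of the mock.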
## Features

We aim to support all the features that the LiteLLM Python package supports.

- Standardised completions
- Standardised embeddings
- Standardised input params 🚧 - List is here
- Caching ❌
- Proxy ❌
## Supported Providers

| Provider | Completion | Streaming | Embedding |
| --- | --- | --- | --- |
| openai | ✅ | ✅ | ✅ |
| cohere | ✅ | ✅ | ❌ |
| anthropic | ✅ | ✅ | ❌ |
| ollama | ✅ | ✅ | ✅ |
| ai21 | ✅ | ✅ | ❌ |
| replicate | ✅ | ✅ | ❌ |
| deepinfra | ✅ | ✅ | ❌ |
| mistral | ✅ | ✅ | ✅ |
| huggingface | ❌ | ❌ | ❌ |
| together_ai | ❌ | ❌ | ❌ |
| openrouter | ❌ | ❌ | ❌ |
| vertex_ai | ❌ | ❌ | ❌ |
| palm | ❌ | ❌ | ❌ |
| baseten | ❌ | ❌ | ❌ |
| azure | ❌ | ❌ | ❌ |
| sagemaker | ❌ | ❌ | ❌ |
| bedrock | ❌ | ❌ | ❌ |
| vllm | ❌ | ❌ | ❌ |
| nlp_cloud | ❌ | ❌ | ❌ |
| aleph alpha | ❌ | ❌ | ❌ |
| petals | ❌ | ❌ | ❌ |
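Following the LiteLLM convention, the provider is inferred from the model name, so switching providers is just a matter of passing a different model string to `completion`. A hypothetical sketch of what such prefix-based routing might look like (the function and mapping below are illustrative, not the library's actual routing table):

```javascript
// Illustrative prefix-based provider routing, in the spirit of
// LiteLLM's model-name convention. Not the library's internals.
function resolveProvider(model) {
  if (model.startsWith('claude')) return 'anthropic';
  if (model.startsWith('command')) return 'cohere';
  if (model.startsWith('gpt-')) return 'openai';
  return 'openai'; // assumed default
}

console.log(resolveProvider('gpt-3.5-turbo')); // prints 'openai'
console.log(resolveProvider('claude-2'));      // prints 'anthropic'
```

In practice you would simply call `completion({ model: 'claude-2', ... })` with the matching API key set in the environment (e.g. `ANTHROPIC_API_KEY`, assuming the library follows the usual naming convention).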
## Development

### Clone the repo

```shell
git clone https://github.com/zya/litellmjs.git
```

### Install dependencies

```shell
npm install
```

### Run unit tests

```shell
npm t
```

### Run E2E tests

First copy the example env file:

```shell
cp .example.env .env
```

Then fill in the variables with your API keys so the E2E tests can run:

```
OPENAI_API_KEY=<Your OpenAI API key>
....
```

Then run the tests:

```shell
npm run test:e2e
```