
LlmUnify

LlmUnify is a TypeScript library designed to simplify and standardize interactions with multiple Large Language Model (LLM) providers. By offering a unified interface, it lets you switch between providers or models without modifying your code. The library abstracts the details of invoking each LLM, supports streaming responses, and can be configured via environment variables or method arguments.

Supported Providers

  • Ollama (via direct API calls)
  • IBM WatsonX (via @ibm-cloud/watsonx-ai and ibm-cloud-sdk-core)
  • AWS Bedrock (via @aws-sdk/client-bedrock-runtime)

Future versions will include support for additional providers.

Installation

To install the library from npm, run:

npm install llm-unify

Quickstart

Configuration and Authentication

LlmUnify retrieves provider-specific credentials and configuration from environment variables, with the option to override them using method arguments.

Example .env configuration:

LLM_UNIFY_OLLAMA_HOST=your_ollama_host
LLM_UNIFY_WATSONX_HOST=your_watsonx_endpoint
LLM_UNIFY_WATSONX_API_KEY=your_watsonx_apikey
LLM_UNIFY_WATSONX_PROJECT_ID=your_watsonx_projectid
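
If you prefer not to keep a .env file, the same variables can be set programmatically before the first call. A minimal sketch, assuming a default local Ollama install (adjust the URL to your setup):

// Equivalent to the .env entry above; set before LlmUnify is first invoked
process.env.LLM_UNIFY_OLLAMA_HOST = 'http://localhost:11434'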

Minimal Example

import { LlmOptions, LlmUnify } from 'llm-unify'
import * as dotenv from 'dotenv'

// Load .env variables, where LLM_UNIFY_OLLAMA_HOST is configured
dotenv.config()

async function generate() {
    // Define options for text generation
    const options = new LlmOptions({
        temperature: 0.7,
        prompt: "Write a motivational poem:",
    })

    // Generate a response, specifying provider and model as "provider:model".
    // To call WatsonX instead, change this string to "watsonx" plus a valid
    // WatsonX model name; no other code changes are needed.
    const result = await LlmUnify.generate(
        "ollama:llama3.1",
        options,
    )
    console.log(result.generated_text)
}

generate()
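
Because the provider and model are encoded in a single string, switching providers is a one-line change. A sketch, assuming the WatsonX variables from the .env example are set; "ibm/granite-13b-chat-v2" is a placeholder, so substitute a model ID available in your WatsonX project:

// Same call as above, now routed to WatsonX instead of Ollama
const watsonxResult = await LlmUnify.generate(
    "watsonx:ibm/granite-13b-chat-v2",
    new LlmOptions({ temperature: 0.7, prompt: "Write a motivational poem:" }),
)
console.log(watsonxResult.generated_text)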

Reusing Connectors

For repeated calls to the same provider, you can use a reusable connector:

import { LlmOptions, LlmUnify } from 'llm-unify'
import * as dotenv from 'dotenv'

// Load .env variables, where LLM_UNIFY_OLLAMA_HOST is configured
dotenv.config()

async function generateStream() {
    // Create a connector for a specific provider; change "ollama" to
    // "watsonx" to call WatsonX models instead
    const connector = LlmUnify.getConnector("ollama")

    // Define options for text generation
    const options = new LlmOptions({ prompt: "List three ways to stay productive:" })

    // Generate a response in streaming mode, logging each chunk as it arrives;
    // replace "llama3.1" with a model name valid for your chosen provider
    for await (const response of connector.generateStream("llama3.1", options)) {
        console.log(response.generated_text)
    }
}

generateStream()
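
Since the connector is created once, follow-up calls skip provider setup entirely. A short sketch streaming several prompts through the same connector, using only the calls shown above:

import { LlmOptions, LlmUnify } from 'llm-unify'
import * as dotenv from 'dotenv'

dotenv.config()

async function generateBatch() {
    const connector = LlmUnify.getConnector("ollama")

    const prompts = [
        "Summarize the benefits of unit testing in one sentence:",
        "Suggest a name for a hiking club:",
    ]
    for (const prompt of prompts) {
        // Each iteration reuses the same connector and model
        for await (const response of connector.generateStream("llama3.1", new LlmOptions({ prompt }))) {
            console.log(response.generated_text)
        }
    }
}

generateBatch()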
