
Pulse AI Utils

A powerful TypeScript library for AI-powered applications with multi-provider LLM support. Provides unified interfaces for OpenAI, Gemini, Claude, and 100+ models via OpenRouter.

✨ Features

  • 🤖 Multi-Provider Support: OpenAI, Gemini, Claude, and 100+ models
  • 🔄 Unified Interface: Same API for all providers
  • 🔑 BYOK Support: Bring Your Own Key for provider-specific APIs
  • 📊 Structured Data: Built-in Zod schema validation
  • 🌐 Web Search: AI-powered web queries with caching
  • 🎯 Type-Safe: Full TypeScript support with proper types

Installation

npm install pulse-ai-utils

🚀 Quick Start

import { OpenAIHelper, OpenRouter } from 'pulse-ai-utils';

// Auto-loads API keys from your .env file
const openai = new OpenAIHelper();  // Uses OPENAI_API_KEY from .env

// Auto-loads keys for different providers
const gemini = OpenRouter.forGemini();  // Uses OPENROUTER_API_KEY + GEMINI_API_KEY from .env
const claude = OpenRouter.forClaude();  // Uses OPENROUTER_API_KEY + CLAUDE_API_KEY from .env

// Use remote config for dynamic model selection (recommended)
const smartOpenai = await OpenAIHelper.createWithRemoteConfig();
const smartGemini = await OpenRouter.forGeminiWithRemoteConfig();
const webOptimized = await OpenRouter.createForWebFetching();

// Or pass keys explicitly if needed
const customOpenai = new OpenAIHelper('your-openai-key');
const customGemini = OpenRouter.forGemini('openrouter-key', 'gemini-key');

Environment Configuration

The library automatically reads API keys from the .env file in your project root. Simply create one:

# .env file in your project root
OPENAI_API_KEY=your-openai-key
OPENROUTER_API_KEY=your-openrouter-key
GEMINI_API_KEY=your-gemini-key
CLAUDE_API_KEY=your-claude-key

There's no need to call dotenv.config() yourself; the library handles this automatically.

Remote Configuration (Firebase Remote Config)

The library also supports dynamic model selection via Firebase Remote Config. This allows you to change models without code deployments:

# Firebase Remote Config Parameters (optional)
pulse-ai-util-openai-model=gpt-4o-mini                    # OpenAI model selection
pulse-ai-util-openrouter-model=google/gemini-2.0-flash-exp  # OpenRouter model
pulse-ai-util-gemini-model=google/gemini-2.0-flash-exp      # Gemini-specific model
pulse-ai-web-openrouter-model=claude-3-5-sonnet-20241022    # Optimized for web fetching

These remote config values take precedence over environment variables, so models can be changed at runtime without a redeploy.
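
For long-running processes, you can refresh the active model periodically so Remote Config changes take effect without a restart. A minimal sketch using the createWithRemoteConfig() and updateModelFromRemoteConfig() methods shown later in this README; the 15-minute interval is an arbitrary choice for this example, not a library default:

import { OpenRouter } from 'pulse-ai-utils';

// Resolve the initial model from Firebase Remote Config
const router = await OpenRouter.createWithRemoteConfig();

// Re-read remote config periodically so model changes roll out without a redeploy
// (the 15-minute interval is an arbitrary choice)
setInterval(() => {
  router.updateModelFromRemoteConfig().catch((err) => {
    console.error('Failed to refresh model from remote config:', err);
  });
}, 15 * 60 * 1000);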

Required Environment Variables

At least one of these is required:

OPENAI_API_KEY=your-openai-key                    # For direct OpenAI access
OPENROUTER_API_KEY=your-openrouter-key            # For multi-provider access via OpenRouter

Optional Environment Variables

# Provider-specific keys for BYOK (Bring Your Own Key)
GEMINI_API_KEY=your-gemini-key                    # Google AI Studio key
CLAUDE_API_KEY=your-claude-key                    # Anthropic Claude key

# Model Selection (optional - smart defaults provided)
OPENROUTER_MODEL=google/gemini-2.0-flash-exp      # Default OpenRouter model
GEMINI_MODEL=google/gemini-2.0-flash-exp          # Default Gemini model

# Optional Configuration
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1  # Custom OpenRouter URL
LLM_CACHE_DB_ID=llmCache                          # Firestore cache collection

Usage

LLMQueryHandler

Fetch arbitrary structured data using web search.

import express from 'express';
import { LLMQueryHandler } from 'pulse-ai-utils';

const app = express();
app.use(express.json());

const queryHandler = new LLMQueryHandler('your-api-key');

// Mount the handler on an Express route
app.post('/llm-query', (req, res) => queryHandler.query(req, res));

🤖 LLM Providers

OpenAI Helper - Direct OpenAI API

import { OpenAIHelper } from 'pulse-ai-utils';

// Auto-loads from OPENAI_API_KEY env var, or pass explicitly
const openai = new OpenAIHelper(undefined, undefined, 'gpt-4o-mini');
// Or with explicit key: new OpenAIHelper('your-api-key', undefined, 'gpt-4o-mini');

// 🌟 Recommended: Use remote config for dynamic model selection
const smartOpenai = await OpenAIHelper.createWithRemoteConfig();

// Update model from remote config for long-running processes
await openai.updateModelFromRemoteConfig();

// Fetch structured data from the web  
const result = await openai.fetchStructuredDataFromWeb({
  prompt: 'Find upcoming tech events in San Francisco',
  zodSchema: yourZodSchema,
  userLocation: { 
    type: 'approximate',
    country: 'US', 
    region: 'CA',
    city: 'San Francisco'
  },
  locationGranularity: 'city',
});

// Get available OpenAI models
const models = await openai.getAvailableModels();
// Returns: ['gpt-4o-mini', 'gpt-4', 'gpt-3.5-turbo', ...]
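
The zodSchema passed above describes the shape of the data you expect back. As a minimal sketch, one possible schema for the events prompt could look like the following; the field names are purely illustrative, not a structure the library requires:

import { z } from 'zod';

// Illustrative shape for the "upcoming tech events" query above
const yourZodSchema = z.object({
  events: z.array(
    z.object({
      name: z.string(),
      date: z.string(),              // e.g. an ISO date string
      venue: z.string(),
      url: z.string().url().optional(),
    })
  ),
});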

OpenRouter - Universal Multi-Provider Access

Access 100+ models from multiple providers through a unified interface:

import { OpenRouter } from 'pulse-ai-utils';

// Auto-loads from environment variables (.env file)
const router = new OpenRouter();

// Factory methods auto-load from .env - no keys needed!
const gemini = OpenRouter.forGemini();  // Uses OPENROUTER_API_KEY + GEMINI_API_KEY
const claude = OpenRouter.forClaude();  // Uses OPENROUTER_API_KEY + CLAUDE_API_KEY  
const gpt = OpenRouter.forGPT();        // Uses OPENROUTER_API_KEY

// 🌟 Recommended: Use remote config for dynamic model selection
const smartRouter = await OpenRouter.createWithRemoteConfig();
const smartGemini = await OpenRouter.forGeminiWithRemoteConfig();
const webOptimized = await OpenRouter.createForWebFetching(); // Uses pulse-ai-web-openrouter-model

// Update model from remote config for long-running processes
await router.updateModelFromRemoteConfig();
await router.updateModelFromRemoteConfig(true); // Use web-optimized model

// Or pass keys explicitly if needed
const customGemini = OpenRouter.forGemini('openrouter-key', 'gemini-key');
const customClaude = OpenRouter.forClaude('openrouter-key', 'claude-key');

// Get available models with provider info
const models = await router.getAvailableModels();
// Returns: [
//   { id: 'google/gemini-2.0-flash-exp', name: 'Gemini 2.0 Flash', provider: 'Google' },
//   { id: 'anthropic/claude-3.5-sonnet', name: 'Claude 3.5 Sonnet', provider: 'Anthropic' },
//   ...
// ]
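
You can also pick a model at runtime from the list returned by getAvailableModels() and pass it explicitly. A small sketch, assuming (as in the Model Selection Guide below) that the third argument of forGemini() is the model id:

import { OpenRouter } from 'pulse-ai-utils';

const router = new OpenRouter();

// Find a Google-hosted model from the live model list
const available = await router.getAvailableModels();
const googleModel = available.find((m) => m.provider === 'Google');

// Pass the model id explicitly, or fall back to env-configured defaults
const runtimeGemini = googleModel
  ? OpenRouter.forGemini(undefined, undefined, googleModel.id)
  : OpenRouter.forGemini();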

Model Selection Guide

// 🌟 Best: Remote config with dynamic model selection (recommended)
const smartOpenai = await OpenAIHelper.createWithRemoteConfig();
const smartGemini = await OpenRouter.forGeminiWithRemoteConfig();
const webOptimized = await OpenRouter.createForWebFetching(); // Special web-optimized model

// ✅ Good: Environment variables (auto-loads from .env)
const openai = new OpenAIHelper();
const gemini = OpenRouter.forGemini();
const claude = OpenRouter.forClaude();

// ✅ Fallback: Explicit configuration
const customGemini = OpenRouter.forGemini(undefined, undefined, 'google/gemini-2.0-flash-exp');
const customClaude = OpenRouter.forClaude(undefined, undefined, 'anthropic/claude-3.5-sonnet');

Remote Config Priority Order

  1. Firebase Remote Config (highest priority) - pulse-ai-util-*-model
  2. Environment Variables (.env file) - OPENAI_MODEL, OPENROUTER_MODEL, etc.
  3. Default Values (fallback) - gpt-4o-mini, google/gemini-2.0-flash-exp, etc.
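
The same order can be pictured as a simple fallback chain. This is only an illustration of the precedence described above, not the library's internal code; the example values mirror the parameters and variables listed earlier:

// Illustrative only - not the library's actual implementation
function resolveModel(
  remoteConfigValue: string | undefined,  // e.g. pulse-ai-util-openai-model
  envValue: string | undefined,           // e.g. process.env.OPENAI_MODEL
  defaultValue: string                    // e.g. 'gpt-4o-mini'
): string {
  return remoteConfigValue ?? envValue ?? defaultValue;
}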

Utility Functions

import { 
  getSchemaByCategory, 
  sanitizeId, 
  zodToJsonSchema 
} from 'pulse-ai-utils';

// Get schema for a category
const schema = getSchemaByCategory('events');

// Sanitize an ID
const cleanId = sanitizeId('https://example.com/path/');
// Result: 'example.com-path'

// Convert Zod schema to JSON schema
const jsonSchema = zodToJsonSchema(myZodSchema);
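
These utilities can be combined with the LLM helpers above. A brief sketch, assuming that an 'events' category exists in your version of the package and that getSchemaByCategory() returns a Zod schema compatible with the zodSchema option:

import { OpenAIHelper, getSchemaByCategory } from 'pulse-ai-utils';

// Assumes an 'events' category is bundled with the package
const eventSchema = getSchemaByCategory('events');

const openai = new OpenAIHelper(); // reads OPENAI_API_KEY from .env
const result = await openai.fetchStructuredDataFromWeb({
  prompt: 'Find upcoming tech events in San Francisco',
  zodSchema: eventSchema,
});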

License

0BSD

Versioning and Publishing

To release a new version of this package:

  1. Open package.json in this directory and update the version field to the desired version (for example, "3.4.1").
  2. In your terminal, ensure you're in this directory:
    cd lib
  3. Build and publish to npm:
    npm run build
    npm publish

Your new version will be published under the latest tag on npm.
