eLLM Studio Chat Stream Package

Welcome to the eLLM Studio Chat Stream package! 🎉 This package enables streaming chat functionality with your AI assistant in eLLM Studio via WebSocket and GraphQL. It's designed for both frontend and backend implementations.

🚀 Features

  • Real-time Streaming: Receive the assistant's reply as a stream of events instead of waiting for the full response.
  • AI Assistant Integration: Seamlessly connects with the AI models deployed within your organization.
  • Customizable: Pass custom prompts or previous messages to enhance conversations.
  • Error Handling: Catch and manage errors using callbacks.
  • Image Upload: Upload images to the server.
  • Image Deletion: Delete previously uploaded images from the server.

📦 Installation

npm i @e-llm-studio/streaming-response

🛠️ Streaming Usage

Here’s how you can use the startChatStream function to set up the AI chat stream:

import { startChatStream } from '@e-llm-studio/streaming-response';
import { v4 as uuid } from 'uuid'; // used below to generate a unique requestId

const params = {
    WEBSOCKET_URL: 'wss://your-ellm-deployment/graphql', // Required: WebSocket URL for your deployment
    organizationName: 'TechOrg',                         // Required: Organization name where the assistant is created
    chatAssistantName: 'MyAssistant',                    // Required: Assistant name
    selectedAIModel: 'gpt-4',                            // Required: AI model selected for the assistant
    replyMessage: '',                                    // Optional: Pass the previous response from AI.
    userName: 'John Doe',                                // Required: Username of the person using the assistant
    userEmailId: 'john@example.com',                     // Required: User's email
    userId: 'user-123',                                  // Required: Unique identifier for the user
    query: 'What is the weather like today?',            // Required: The user's question or prompt
    requestId: `requestId-${uuid()}`,                    // Required: Unique ID for the request
    customPrompt: 'Personalized prompt here',            // Optional: Add custom context to your prompt
    enableForBackend: false,                             // Optional: Set to true if you're using it on the backend (Node.js)
    images: { 'image1': 'ImageName1.jpg', 'image2': 'ImageName2.jpg' }, // Optional: Pass image data if relevant.
    isDocumentRetrieval: true,                            // Optional: Set to false to bypass document retrieval.
    onStreamEvent: (data) => console.log('Stream event:', data),  // Required: Callback for handling stream data
    onStreamEnd: (data) => console.log('Stream ended:', data),    // Optional: Callback for when the stream ends
    onError: (error) => console.error('Stream error:', error),    // Optional: Callback for handling errors
};

startChatStream(params);

🔑 Parameters

Required Parameters:

  • WEBSOCKET_URL: WebSocket URL of your eLLM deployment. Example: wss://dev-egpt.techo.camp/graphql.
  • organizationName: Name of the organization where the assistant is created.
  • chatAssistantName: Name of the assistant you're interacting with.
  • selectedAIModel: The AI model used (e.g., GPT-4).
  • userName: The name of the user interacting with the assistant.
  • userEmailId: Email ID of the user.
  • userId: Unique user ID.
  • query: The question or prompt you want to send to the AI.
  • requestId: Unique request ID, e.g., requestId-${uuid()}.
  • onStreamEvent: Callback function to capture incoming stream events.

Optional Parameters:

  • replyMessage: If you want to include a previous response with the new query, pass it here. Leave empty for normal chat scenarios.
  • customPrompt: Use this to add additional context to the prompt sent to the AI.
  • images: Image file names to attach to the query, as shown in the usage example above (e.g., { image1: 'ImageName1.jpg' }).
  • isDocumentRetrieval: Whether to run document retrieval for the query; set to false to bypass it.
  • enableForBackend: Set to true if you're using this package in a backend environment (e.g., Node.js). Defaults to false, which is suitable for frontend use (e.g., React/Next.js).
  • onStreamEnd: Callback for when the stream ends. Useful for handling final events or cleanup.
  • onError: Callback for capturing any errors during the stream.
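
As a concrete illustration, here is a minimal sketch of collecting a complete reply on a Node.js backend. It assumes each onStreamEvent call delivers a text chunk of the reply; the exact payload shape depends on your deployment, so adapt the callback accordingly.

import { startChatStream } from '@e-llm-studio/streaming-response';
import { v4 as uuid } from 'uuid';

let answer = '';

startChatStream({
    WEBSOCKET_URL: 'wss://your-ellm-deployment/graphql',
    organizationName: 'TechOrg',
    chatAssistantName: 'MyAssistant',
    selectedAIModel: 'gpt-4',
    userName: 'John Doe',
    userEmailId: 'john@example.com',
    userId: 'user-123',
    query: 'Summarize our latest release notes.',
    requestId: `requestId-${uuid()}`,
    enableForBackend: true,                                 // running in Node.js
    onStreamEvent: (data) => { answer += data; },           // assumption: data is a text chunk
    onStreamEnd: () => console.log('Full reply:', answer),
    onError: (error) => console.error('Stream failed:', error),
});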

🛠️ Image Upload

You can upload an image to the server using the uploadImage function:

import { uploadImage } from '@e-llm-studio/streaming-response';

const baseURL = 'https://your-ellm-deployment'; // Base URL of your eLLM deployment
const imageFile = document.querySelector('input[type="file"]').files[0];

uploadImage(baseURL, imageFile)
  .then((message) => console.log(message))
  .catch((error) => console.error(error));
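
Once an image has been uploaded, its file name can be referenced through the images parameter of startChatStream. A brief sketch, reusing the params object from the streaming example above (the key name image1 follows that example; the exact expected shape may differ in your deployment):

startChatStream({
    ...params,
    requestId: `requestId-${uuid()}`,              // fresh unique ID for this request
    query: 'What does the attached image show?',
    images: { image1: imageFile.name },            // refer to the uploaded file by name
});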

🛠️ Image Deletion

To delete an image from the server, use the deleteImage function:

import { deleteImage } from '@e-llm-studio/streaming-response';

deleteImage('User Name', 'image.jpg')
  .then((message) => console.log(message))
  .catch((error) => console.error(error));

👥 Community & Support

For any questions or issues, feel free to reach out via our GitHub repository or join our community chat! We’re here to help. 😊
