Welcome to the eLLM Studio Chat Stream package! 🎉 This package enables streaming chat functionality with your AI assistant in eLLM Studio via WebSocket and GraphQL. It's designed for both frontend and backend implementations.
- Real-time Streaming: Receive messages in a streaming fashion.
- AI Assistant Integration: Seamlessly connects with the AI models deployed within your organization.
- Customizable: Pass custom prompts or previous messages to enhance conversations.
- Error Handling: Catch and manage errors using callbacks.
- Image Upload: Upload images to the server.
- Image Deletion: Delete previously uploaded images from the server.
```bash
npm i @e-llm-studio/streaming-response
```
Here’s how you can use the startChatStream function to set up the AI chat stream:
```javascript
import { startChatStream } from '@e-llm-studio/streaming-response';
import { v4 as uuid } from 'uuid'; // used below to generate unique IDs

const params = {
  WEBSOCKET_URL: 'wss://your-ellm-deployment/graphql', // Required: WebSocket URL for your deployment
  organizationName: 'TechOrg', // Required: Organization name where the assistant is created
  chatAssistantName: 'MyAssistant', // Required: Assistant name
  selectedAIModel: 'gpt-4', // Required: AI model selected for the assistant
  replyMessage: '', // Optional: Pass the previous response from the AI
  userName: 'John Doe', // Required: Name of the person using the assistant
  userEmailId: 'john@example.com', // Required: User's email
  userId: 'user-123', // Required: Unique identifier for the user
  query: 'What is the weather like today?', // Required: The user's question or prompt
  requestId: `requestId-${uuid()}`, // Required: Unique ID for the request
  customPrompt: 'Personalized prompt here', // Optional: Add custom context to your prompt
  enableForBackend: false, // Optional: Set to true if you're using it on the backend (Node.js)
  images: { image1: 'ImageName1.jpg', image2: 'ImageName2.jpg' }, // Optional: Pass image data if relevant
  guestSessionId: `organizationName-chatAssistantName-${uuid()}`, // Optional: Custom unique ID for storing chat history
  chatPreviewId: `${uuid()}`, // Optional: Custom unique ID for tracking chat sessions
  isDocumentRetrieval: true, // Optional: Set to false to bypass document retrieval
  onStreamEvent: (data) => console.log('Stream event:', data), // Required: Callback for handling stream data
  onStreamEnd: (data) => console.log('Stream ended:', data), // Optional: Callback for when the stream ends
  onError: (error) => console.error('Stream error:', error), // Optional: Callback for handling errors
  customMetaData: ['test1', 'test2'], // Optional: Custom metadata (array or object) for tracking or context
};

startChatStream(params);
```
- WEBSOCKET_URL: WebSocket URL of your eLLM deployment, e.g. `wss://dev-egpt.techo.camp/graphql`.
- organizationName: Name of the organization where the assistant is created.
- chatAssistantName: Name of the assistant you're interacting with.
- selectedAIModel: The AI model used (e.g., GPT-4).
- userName: The name of the user interacting with the assistant.
- userEmailId: Email ID of the user.
- userId: Unique user ID.
- query: The question or prompt you want to send to the AI.
- requestId: Unique request ID, e.g. `requestId-${uuid()}`.
- onStreamEvent: Callback function to capture incoming stream events.
- replyMessage: If you want to include a previous response with the new query, pass it here. Leave empty for normal chat scenarios.
- customPrompt: Use this to add additional context to the prompt sent to the AI.
- images: Pass image data if relevant.
- isDocumentRetrieval: Set to false to bypass document retrieval.
- Annotations & Citations: Dynamically display annotations with tooltip-based citations.
- enableForBackend: Set to true if you're using this package on the backend (e.g., Node.js). Defaults to false, which suits frontend use (e.g., React/Next.js).
- onStreamEnd: Callback for when the stream ends. Useful for handling final events or cleanup.
- onError: Callback for capturing any errors during the stream.
- guestSessionId: Custom unique ID for storing chat history.
- chatPreviewId: Custom unique ID for tracking chat sessions.
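The `onStreamEvent` and `onStreamEnd` callbacks can be combined to assemble streamed chunks into a complete reply. The sketch below shows one way to do this; `createAccumulator` is a hypothetical helper, and the `{ message }` payload shape is an assumption — inspect the actual event objects your deployment emits.

```javascript
// Sketch: collect streamed chunks into a single string.
// Assumes each stream event carries a `message` field with the next chunk.
function createAccumulator() {
  let fullText = '';
  return {
    onStreamEvent: (data) => {
      fullText += data.message; // append each incoming chunk
    },
    onStreamEnd: () => fullText, // return the assembled reply
  };
}

// Usage: wire the accumulator's callbacks into the startChatStream params.
const acc = createAccumulator();
acc.onStreamEvent({ message: 'Hello, ' });
acc.onStreamEvent({ message: 'world!' });
console.log(acc.onStreamEnd()); // → "Hello, world!"
```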
You can upload an image to the server using the uploadImage function:
```javascript
import { uploadImage } from '@e-llm-studio/streaming-response';

const baseURL = 'https://your-ellm-deployment'; // Base URL of your eLLM deployment
const imageFile = document.querySelector('input[type="file"]').files[0];

uploadImage(baseURL, imageFile)
  .then((message) => console.log(message))
  .catch((error) => console.error(error));
```
To delete an image from the server, use the deleteImage function:
```javascript
import { deleteImage } from '@e-llm-studio/streaming-response';

deleteImage('User Name', 'image.jpg')
  .then((message) => console.log(message))
  .catch((error) => console.error(error));
```
The Consultative Mode allows you to toggle a specialized conversation style with the AI assistant. This mode ensures the AI follows a specific set of guidelines to better understand user requirements, ask clarifying questions, and provide tailored responses.
Default Prompt: If no custom prompt is provided, the following default prompt is used:
"You are responsible for understanding the user's needs by making sure that you gather enough relevant information before responding to achieve the goal described above. To achieve this, you will always ask clarifying follow-up questions, making sure to ask only one question at a time to maintain a focused and organized conversation. The follow-up questions should be short and crisp within 2 lines only. You should always follow a CARE framework which is about Curious:- being curious and interested to know about user needs, Acknowledge the responses given by the user and Respond with Empathy. Do not ask too many unnecessary questions."
Example:
```javascript
import { ChatbotService } from '@e-llm-studio/streaming-response';

const chatbotService = new ChatbotService(
  'https://your-api-url.com', // API base URL
  'username', // Your username
  'password' // Your password
);

const toggleConsultativeMode = async () => {
  try {
    const chatBotId = 'chatbot-id-123'; // Chatbot ID
    const assistantName = 'MyAssistant'; // Assistant name
    const organizationName = 'TechOrg'; // Organization name
    const currentMode = false; // Current consultative mode state
    const isFetch = false; // Whether to fetch the current state or toggle it
    const customPrompt = 'Custom consultative mode prompt here'; // Optional

    const isModeEnabled = await chatbotService.toggleConsultativeMode(
      chatBotId,
      customPrompt,
      assistantName,
      organizationName,
      isFetch,
      currentMode
    );

    console.log('Consultative Mode:', isModeEnabled ? 'Enabled' : 'Disabled');
  } catch (error) {
    console.error('Error toggling consultative mode:', error);
  }
};

toggleConsultativeMode();
```
For any questions or issues, feel free to reach out via our GitHub repository or join our community chat! We’re here to help. 😊