- ⚡️ Plug-and-play web chatbot UI, embeddable to any web page
- 🧠 Knowledge base (RAG) integration (optional)
- 📱 FinClip/WeChat Mini-Program support
- 🌐 Out-of-the-box support for HTTP, Server-Sent Events (SSE), and WebSocket protocols
- 🔌 Easy integration with existing systems and APIs
- 📡 Event streaming with NATS (optional)
- 🛡️ Built-in session management and extensible middleware support
- 🚀 Scalable, production-ready architecture
- 📝 Simple configuration and deployment
- ...and more!
- **Edit the `.agent.env` file**
  - Create a file named `.agent.env` in your project directory.
  - Add your LLM API key and other relevant parameters, for example:

    ```bash
    LLM_API_KEY=your-openai-or-compatible-key
    LLM_PROVIDER_URL=https://your-llm-provider-endpoint
    LLM_MODEL=your-model-name
    AGENT_HOST=0.0.0.0
    AGENT_HTTP_PORT=5678
    AGENT_STREAM_PORT=5679
    AGENT_ENABLE_STREAMING=true
    ```

- **Run the agent**

  ```bash
  bunx @finogeek/cxagent
  ```

- **Embed the chatbot into your web page**
  - Add the following script tag to any web page where you want to provide chat service:

    ```html
    <script src="http://localhost:5173/finclip-chat-embed.iife.js" data-finclip-chat data-api-url="http://localhost:5678" data-streaming-url="http://localhost:5679"></script>
    ```

  - The chatbot widget will appear on your site, ready to interact with users.

  **Where to get the `finclip-chat` script:**
  - The embeddable script and inspector UI are only available when you run the agent with the `--inspect` option:

    ```bash
    bunx @finogeek/cxagent --inspect
    ```

    By default, this serves the web UI and script at `http://localhost:5173`.
  - If you run the agent without `--inspect` (just `bunx @finogeek/cxagent`), the inspector UI and embeddable script will not be available.
  - For production or public deployment, you can self-host the script from your own domain or static file server:
    - Download the script after starting the agent with `--inspect`:

      ```bash
      curl -O http://localhost:5173/finclip-chat-embed.iife.js
      ```

    - Upload this file to your web server or static hosting (e.g., AWS S3, Vercel, Netlify, your own Nginx/Apache server).
    - Update the `<script src="...">` in your web page to point to your hosted URL, for example:

      ```html
      <script src="https://your-domain.com/path/to/finclip-chat-embed.iife.js" data-finclip-chat data-api-url="https://your-api-domain.com" data-streaming-url="https://your-stream-domain.com"></script>
      ```

  - (If a CDN is available, use the provided CDN link here.)
- **Clone the Starter Kit**

  ```bash
  git clone https://github.com/Geeksfino/finclip-agent-starterkit.git
  cd finclip-agent-starterkit
  bun install
  ```
- **Follow the detailed instructions in the Starter Kit README**
  - The starter kit provides ready-to-use configuration, knowledge base integration, and more advanced features for production deployment.
- **Use your own knowledge base**
  - Integrate your documents or data as a knowledge source for the agent. See the starter kit or documentation for details.
- **Configure a monitoring service**
  - Enable NATS event streaming to monitor agent-user conversations for analytics, compliance, or quality assurance.
  - See the `conf/nats_conversation_handler.yml` example for setup.
With these simple steps, you can quickly deploy a powerful conversational agent on your website, with options for customization and enterprise-grade features.
FinClip-Agent is a super lightweight, MCP-capable agent that requires zero installation - just one command to get it up and running locally. Built on Bun/TypeScript with an actor model for message-driven architecture, it provides an embeddable UI frontend that can be integrated into any website with a single line of code. The agent supports query expansion through configurable pre-processors to enhance user queries before submitting to any OpenAI-compatible LLM. Its "programmable via markdown" approach makes it ideal for customer engagement on websites. FinClip-Agent is technically sophisticated yet remarkably simple to deploy, making it the perfect solution for businesses seeking intelligent, conversational interfaces for their web presence.
- Overview
- Features
- Simple Steps To Embed A Chatbot To Your Website
- Advanced Configuration
- Project Structure
- Development
- License
Looking for better control over customer support content? Check out finclip-agent-starterkit - a ready-to-use solution for integrating custom knowledge bases with your agent. This helps improve Q&A quality and prevents hallucinations by grounding responses in your own documentation and data.
First make sure the following prerequisites are met:
- Bun runtime (v1.0.0 or higher)
- API key for an OpenAI-compatible language model provider
- Create a new directory for your agent
- **Important**: Create a `.agent.env` file in the directory with your API key and configuration:

  ```bash
  # LLM Configuration - REQUIRED
  LLM_API_KEY=your_api_key_here
  LLM_PROVIDER_URL=https://your-openai-compatible-provider-url
  LLM_MODEL=your-model-name
  LLM_STREAM_MODE=true   # or false

  # Agent Server Configuration - change ports if necessary
  AGENT_HOST=localhost
  AGENT_HTTP_PORT=5678
  AGENT_STREAM_PORT=5679
  AGENT_ENABLE_STREAMING=true
  ```
Note: The `.agent.env` file is required for CxAgent to function properly. Without it, the agent will not be able to connect to an LLM provider.
- Give your agent a 'brain' by creating a `brain.md` file in the project directory. Below is an example:
```markdown
---
name: "Chatbot"
role: >-
  Friendly conversation partner
goal: >-
  Engage in natural, open-ended dialogue about any topic the user chooses
capabilities: >-
  - Fluid conversational skills across all subjects
  - Basic emotional understanding and empathetic responses
  - Light humor when appropriate
  - Adaptive communication style

  guidelines:
  1. Initiate and maintain natural dialogue flow
  2. Show interest in user's messages through engaged responses
  3. Avoid technical/domain-specific discussions unless prompted
  4. Never decline any conversation topic
  5. Keep responses concise (1-3 sentences typically)
  6. Use casual but grammatically correct language

  Example interaction:
  User: "The weather's terrible today"
  ChatAgent: "Oh I know! This rain just won't quit. Perfect day for staying in with a book though - what are you up to today?"
---
```
IMPORTANT: The 'brain.md' file must strictly conform to YAML frontmatter format with only the following top-level fields allowed: 'name', 'role', 'goal', and 'capabilities'. Any other structure or additional top-level fields will cause parsing errors and prevent the agent from starting correctly. Guidelines and examples should be included within the 'capabilities' field as shown above. For more details on customizing an agent, see https://github.com/Geeksfino/actgent.git. While the agent can be very powerful with instructions, tools, and schemas, simple chatbots can use the default settings.
No installation required. Just run:

```bash
bunx @finogeek/cxagent --inspect
# or specify a custom port for the inspector UI
bunx @finogeek/cxagent --inspect --inspect-port 3000
```
That's it. This should start a chatbot that does casual chats as a companion. You can point your browser to `http://localhost:5173` to inspect it and chat with it. To embed it into your own website, follow the 'Embedding Instructions' section on the page.
The agent can also be run without the inspector UI:

```bash
bunx @finogeek/cxagent
```

You can then chat with it using the console at the command line.
Important: When running CxAgent with `bunx`, the tool will first look for the `.agent.env` and `brain.md` files in your current working directory. If these files exist, they will be used instead of the default ones bundled with the package. This allows you to customize the agent's behavior without modifying the package itself.
The `.agent.env` file containing a valid LLM API key is required for the agent to function properly. Make sure to create this file in the directory where you'll be running the command.
- `.agent.env` - Environment variables for API keys and settings
- `brain.md` - Agent instructions and capabilities (optional; a default is used if not present)
- `conf/` directory - Configuration files for various components (not required)
When running CxAgent, it will look for configuration files in the following order:
- User-supplied files in the current working directory
- Default files bundled with the package
This allows you to customize the agent's behavior without modifying the package itself.
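The lookup order above amounts to "current working directory first, packaged defaults second". A small illustrative sketch of that rule (hypothetical helper, not CxAgent's actual code):

```typescript
import { existsSync } from "fs";
import { join } from "path";

// Resolve a config file the way the lookup order describes:
// 1. a user-supplied file in the current working directory wins,
// 2. otherwise fall back to the default bundled with the package.
function resolveConfigPath(filename: string, cwd: string, packagedDefaultsDir: string): string {
  const userPath = join(cwd, filename);
  if (existsSync(userPath)) return userPath;
  return join(packagedDefaultsDir, filename);
}
```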
- **Never commit sensitive information**: Ensure `.agent.env` is listed in your `.gitignore` file
- **Use the example template**: Copy `.agent.env.example` to `.agent.env` and add your own API keys
- **Keep API keys private**: Never share your `.agent.env` file containing real API keys
- **Rotate compromised keys**: If you accidentally expose your API keys, rotate them immediately
- Install the package locally:

  ```bash
  bun add @finogeek/cxagent
  ```

- Create a simple HTML file (e.g., `chat.html`):
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>CxAgent Chat</title>
</head>
<body>
  <h1>CxAgent Chat</h1>
  <script
    src="./node_modules/@finogeek/cxagent/web/dist/finclip-chat-embed.iife.js"
    data-finclip-chat
    data-api-url="http://localhost:5678"
    data-streaming-url="http://localhost:5679"
    data-suggestions="How can you help me today?,What topics can we discuss?"
    data-suggestions-label="Try asking me something 👋"
    data-button-label="Let's chat">
  </script>
</body>
</html>
```
- Start the agent server in a terminal:

  ```bash
  bunx @finogeek/cxagent
  ```
- Open the HTML file in a browser to see the chatbot in action.
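For illustration, a comma-separated attribute such as `data-suggestions` in the HTML above could be parsed like this (hypothetical helper, not the widget's actual code):

```typescript
// Turn "a, b, c" from a data-* attribute into a clean string array.
function parseSuggestions(attr: string | null): string[] {
  if (!attr) return [];
  return attr
    .split(",")
    .map((s) => s.trim())       // drop surrounding whitespace
    .filter((s) => s.length > 0); // drop empty entries
}
```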
CxAgent includes a powerful Inspector UI that helps you visualize and manage your agent's configuration.
Start the Inspector UI with the following command:

```bash
bunx @finogeek/cxagent --inspect
```

By default, the Inspector UI runs on port 5173. You can specify a different port:

```bash
bunx @finogeek/cxagent --inspect --inspect-port 3000
```
The Inspector UI provides:
- Visual representation of your agent's configuration
- Status indicators for brain.md, .agent.env, and other components
- Sample configuration files for missing components
- A floating chat widget to interact with your agent
You can control the verbosity of the Inspector's console output by setting the log level:

```bash
bunx @finogeek/cxagent --inspect --log-level info
```
Available log levels (from most to least verbose):
- `debug` - Show all debug messages, useful for troubleshooting
- `info` - Show informational messages (default)
- `warn` - Show only warnings and errors
- `error` - Show only errors
- `none` - Suppress all output
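The ordering above boils down to a simple severity comparison; a hedged sketch of that rule (hypothetical helper, not CxAgent's actual logger):

```typescript
// Message severities in ascending order; "none" is a configuration-only value.
const LEVELS = ["debug", "info", "warn", "error"] as const;
type MessageLevel = (typeof LEVELS)[number];
type ConfiguredLevel = MessageLevel | "none";

// A message is emitted when its level is at least as severe as the configured one.
function shouldLog(configured: ConfiguredLevel, message: MessageLevel): boolean {
  if (configured === "none") return false; // suppress everything
  return LEVELS.indexOf(message) >= LEVELS.indexOf(configured);
}
```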
Create a `.agent.env` file in your project directory with the following options:

```bash
# LLM Configuration - REQUIRED
LLM_API_KEY=your_api_key_here
LLM_PROVIDER_URL=https://your-openai-compatible-provider-url
LLM_MODEL=your-model-name
LLM_STREAM_MODE=true   # or false

# Agent Server Configuration - change ports if necessary
AGENT_HOST=localhost
AGENT_HTTP_PORT=5678
AGENT_STREAM_PORT=5679
AGENT_ENABLE_STREAMING=true
```
The `brain.md` file defines your agent's personality, capabilities, and behavior. It uses a YAML frontmatter format with strict requirements:
Format Requirements:
- Must use valid YAML syntax
- Only four top-level fields are allowed: `name`, `role`, `goal`, and `capabilities`
- Any additional top-level fields will cause parsing errors
- Guidelines, examples, and other content should be nested within the `capabilities` field
- The file must start and end with `---` to properly define the YAML frontmatter
```markdown
---
name: "Chatbot"
role: >-
  Friendly conversation partner
goal: >-
  Engage in natural, open-ended dialogue about any topic the user chooses
capabilities: >-
  - Fluid conversational skills across all subjects
  - Basic emotional understanding and empathetic responses
  - Light humor when appropriate
  - Adaptive communication style

  guidelines:
  1. Initiate and maintain natural dialogue flow
  2. Show interest in user's messages through engaged responses
  3. Avoid technical/domain-specific discussions unless prompted
  4. Never decline any conversation topic
  5. Keep responses concise (1-3 sentences typically)
  6. Use casual but grammatically correct language
---
```
For more advanced brain configurations, see the actgent documentation.
CxAgent looks for configuration files in the following order:
- User-supplied files in the current working directory
- Default files bundled with the package
Required configuration files:
- `.agent.env` - Environment variables for API keys and settings
- `brain.md` - Agent instructions and capabilities (optional; a default is used if not present)
- `conf/` directory - Configuration files for various components
If you want to pre-process user queries before sending them to the LLM, you can use an MCP tool to do so. For example, you can perform query expansion, or run a similarity search against an embeddings database to retrieve relevant context and send it along with the query, giving the LLM better context and therefore higher-quality responses. To do this, you need to create an MCP server that handles the pre-processing tasks.
- Create a `conf` folder under the current project directory
- Create a `preproc-mcp.json` file in the `conf` folder
```json
{
  "mcpServers": {
    "some-rag-server": {
      "command": "/path/to/kb-mcp-server",
      "args": [
        "/path/to/some-knowledgebase/some-data",
        "--some-param",
        "some-value"
      ],
      "cwd": "/path/to/working/directory"
    }
  }
}
```
It is recommended to use the MCP server at https://github.com/Geeksfino/kb-mcp-server for knowledge base integration. This server:

- Loads knowledge base data
- Handles embeddings
- Provides causal boost for better context understanding
- Performs similarity search and graph search to obtain relevant context

For details on how to create a knowledge base to use, please refer to https://github.com/Geeksfino/kb-mcp-server.
Note: query pre-processing is not required for basic usage. It is only needed if you want to enrich user queries before sending them to the LLM.
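For orientation, the pre-processing contract described in this section (query in, enriched prompt out) could look like the following; this is an illustrative TypeScript sketch with hypothetical names, not CxAgent's actual API (the real pre-processor is an external MCP server):

```typescript
// Anything that takes a user query and returns processed context can act
// as a pre-processor in this pipeline.
interface PreProcessor {
  process(query: string): Promise<string>;
}

// Example: naive "retrieval" expansion that appends knowledge-base hits
// to the original query before it reaches the LLM.
function expandWithContext(retrieve: (q: string) => Promise<string[]>): PreProcessor {
  return {
    async process(query: string): Promise<string> {
      const context = await retrieve(query);
      if (context.length === 0) return query; // nothing found: pass the query through
      return `${query}\n\nRelevant context:\n${context.join("\n")}`;
    },
  };
}
```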
CxAgent is itself an MCP host that can make use of MCP servers. This is different from the pre-processing described above: query pre-processing is an optional step before the query reaches the LLM. We happen to use an MCP server for it, but in principle any program will do, as long as it takes a user query and returns processed context.
The MCP servers described here are invoked by LLM tool calls, i.e., after the LLM responds with a request to use a tool. To configure them, create a `conf` folder under the current project directory (if not yet created) and a `mcp_config.json` file in the `conf` folder. This file typically looks like this:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/name/Desktop",
        "/Users/name/Downloads"
      ]
    }
  }
}
```
For more information, see the MCP specification. Any MCP server that works with Claude Desktop should work here as well.
FinClip-Agent supports conversation monitoring through a NATS-based conversation handler. This feature allows you to capture, buffer, and publish conversation messages to a NATS server for monitoring, compliance, or analytics purposes.
To enable conversation monitoring, create a `conf/nats_conversation_handler.yml` file with your configuration:

```yaml
# NATS Conversation Handler Configuration

# Whether the handler is enabled
enabled: true

# NATS server connection
nats:
  # NATS server URL
  url: nats://localhost:4222
  # Base subject for publishing conversation segments
  subject: conversation.segments

# Buffering configuration
buffer:
  # Minimum number of messages to buffer before publishing
  min_messages: 2
  # Maximum idle time in milliseconds before forcing publish
  max_idle_time: 60000

# Logging configuration
logging:
  # Log level (debug, info, warn, error)
  level: info
  # Whether to log published segments
  log_published: true
```
The conversation handler:

- Buffers conversation messages by session ID
- Publishes messages to NATS when either:
  - The buffer reaches the configured minimum message count
  - The buffer has been idle for the configured timeout period
- Formats messages with metadata and content for monitoring
A test script is included to help verify the conversation monitoring functionality:
```bash
# Start a NATS server (using Docker)
docker run -p 4222:4222 -p 8222:8222 --name nats-server -d nats:latest

# Run the NATS subscriber test script
bun run tests/nats-subscriber.ts

# In another terminal, run the agent
bun run index.ts
```
The test script will display all conversation segments published to NATS, allowing you to verify that the monitoring system is working correctly.
- Compliance Monitoring: Capture conversations for regulatory compliance
- Quality Assurance: Monitor agent responses for quality control
- Analytics: Analyze conversation patterns and user interactions
- Distributed Systems: Share conversation data between multiple services
Clone the repository and install dependencies:
```bash
git clone https://github.com/Geeksfino/cxagent.git
cd cxagent
bun install

# Or install globally with bun
bun install -g @finogeek/cxagent
```
- Copy the example environment file and customize it:

  ```bash
  cp .agent.env.example .agent.env
  ```

- Edit `.agent.env` with your language model API key and other configuration options.
Note again: When running CxAgent, it will look for the `.agent.env` file and configuration files in your current working directory. Make sure to create these files in the directory where you'll be running the command.
Required configuration files:
- `.agent.env` - Environment variables for API keys and settings
- `brain.md` - Agent instructions and capabilities (optional; a default is used if not present)
- `conf/` directory - Configuration files for various components
Run the CLI for direct interaction with the agent:
```bash
# cd into project folder

# Development mode
bun run dev

# Production mode
bun run build
bun run start

# Run with UI for visualizing agent configuration
bun index.ts --ui
# or specify a custom port
bun index.ts --ui --ui-port 3000
```
This will start:

- API Server on port 5678 (handles session creation and message processing)
- Streaming Server on port 5679 (handles real-time streaming of AI responses)
- UI Server (when using the `--ui` option) on port 5173 by default (configurable with `--ui-port`)
The `--inspect` option starts a web server that displays your agent's configuration in a user-friendly interface. This is useful for:
- Visualizing and inspecting your `brain.md` content
- Understanding your agent's configuration
- Testing interactions with your agent
When running via `bunx`, you can also use the UI mode:

```bash
bunx @finogeek/cxagent --inspect
# or with custom port
bunx @finogeek/cxagent --inspect --inspect-port 3000
```
The UI will use the `brain.md` file from your current working directory if available. If no `brain.md` is found, it will display a default template.
Note: ports are configurable via the `.agent.env` file or command-line options.
The web frontend chat widget is a separate Vite application that you can run and embed in your own web application.
```bash
cd web
bun install
bun run dev
```
This will start the Vite development server at `http://localhost:5173`. Point your browser there to see the chat widget, or access the examples at `http://localhost:5173/examples/`.
The chat widget includes advanced features such as:
- Message deduplication to prevent duplicate responses from streaming servers
- Configurable backend URLs for API and streaming servers
- CORS handling for cross-origin requests with proper credentials management
- Error handling for network and server issues
- Standalone mode for full-page chat experience
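The deduplication mentioned above can be as simple as filtering on a stable message ID; an illustrative sketch (hypothetical helper, not the widget's actual implementation):

```typescript
// Drop duplicate deliveries (e.g., a streaming server re-sending a chunk)
// by remembering which message IDs have already been seen.
function dedupeById<T extends { id: string }>(incoming: T[]): T[] {
  const seen = new Set<string>();
  return incoming.filter((m) => {
    if (seen.has(m.id)) return false;
    seen.add(m.id);
    return true;
  });
}
```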
When you run `bunx @finogeek/cxagent`, it starts the agent with the CLI interface and also starts the backend servers (API on port 5678 and streaming on port 5679), but it doesn't automatically serve the web frontend.
For first-time users: When using `bunx @finogeek/cxagent`, the chat widget script (`finclip-chat.iife.js`) is included in the npm package but not automatically accessible. You'll need to:

- Install the package locally first: `bun add @finogeek/cxagent`
- Then reference the script from `node_modules/@finogeek/cxagent/web/dist/finclip-chat.iife.js`
Here are options to access the chat frontend:
After installing the package locally with `bun add @finogeek/cxagent`, you can use the pre-built `embed-demo.html` file:

```bash
# Copy the embed demo to a convenient location
cp node_modules/@finogeek/cxagent/web/dist/embed-demo.html ~/Desktop/

# Then open it in your browser
open ~/Desktop/embed-demo.html
```
After installing the package locally with `bun add @finogeek/cxagent`, create a new HTML file with the following content:
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>CxAgent Chat Test</title>
</head>
<body>
  <h1>CxAgent Chat Test</h1>
  <script
    src="./node_modules/@finogeek/cxagent/web/dist/finclip-chat.iife.js"
    data-finclip-chat
    data-api-url="http://localhost:5678"
    data-streaming-url="http://localhost:5679"
    data-theme="light"
    data-position="right"
  ></script>
</body>
</html>
```
```bash
# Navigate to the web/dist directory
cd web/dist

# Use Bun's serve capability
bun --serve .

# Then open http://localhost:3000 in your browser
```
```bash
# Build backend only
bun run build

# Build frontend only
bun run build:web

# Build everything (backend and frontend)
bun run build:all
```
Before publishing, make sure to build the project:
```bash
# Clean and rebuild everything
bun run rebuild:all

# Preview what will be published
bun pack --dry-run

# Publish to npm registry
bun publish
```
- `CxAgent.ts` - Main agent implementation
- `index.ts` - Main entry point for both CLI and server modes
- `KnowledgePreProcessor.ts` - Knowledge base integration
- `brain.md` - Agent instructions and capabilities
- `web/` - Frontend chat widget implementation
  - `examples/` - Example HTML files demonstrating different integration scenarios
  - `src/` - Source code for the React components and hooks
  - `dist/` - Built files (generated after build)
The chat widget is included in the npm package in two versions:

- `finclip-chat-embed.iife.js` - Recommended for production use; includes logging for easier debugging
- `finclip-chat.iife.js` - Alternative version with minimal logging, suitable for development
The widget can be used in multiple ways:
- **Build the widget:**

  ```bash
  bun run build:web
  ```

- **Upload** the `web/dist/finclip-chat-embed.iife.js` file and the `web/dist/assets/` directory to your preferred CDN

- **Add to your HTML:**

  ```html
  <script src="https://your-cdn.com/finclip-chat-embed.iife.js" data-finclip-chat></script>
  ```
If you've installed the package via npm/bun:

```html
<script src="./node_modules/@finogeek/cxagent/web/dist/finclip-chat-embed.iife.js" data-finclip-chat></script>
```
If running your own cxagent server, you can configure it to serve the widget files:

- In your server code, add routes to serve the widget files
- Reference it in your HTML:

  ```html
  <script src="http://your-server:5678/finclip-chat-embed.iife.js" data-finclip-chat></script>
  ```
The widget can be configured with data attributes:

```html
<script
  src="https://your-domain.com/finclip-chat-embed.iife.js"
  data-finclip-chat
  data-api-url="https://your-api-server:5678"
  data-streaming-url="wss://your-streaming-server:5679"
  data-theme="light"
  data-position="right"
></script>
```
This will add a floating chat button to your website with default settings.
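Conceptually, the embed script reads these `data-*` attributes and falls back to the localhost defaults; an illustrative sketch of that mapping (hypothetical helper, not the actual embed code):

```typescript
interface WidgetConfig {
  apiUrl: string;
  streamingUrl: string;
  theme: string;
  position: string;
}

// Map data-* attributes to widget options; defaults mirror the documented
// localhost ports (5678 for the API server, 5679 for streaming).
function readWidgetConfig(attrs: Record<string, string | undefined>): WidgetConfig {
  return {
    apiUrl: attrs["data-api-url"] ?? "http://localhost:5678",
    streamingUrl: attrs["data-streaming-url"] ?? "http://localhost:5679",
    theme: attrs["data-theme"] ?? "light",
    position: attrs["data-position"] ?? "right",
  };
}
```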
For advanced configuration, initialize the widget manually:

```html
<script src="https://your-domain.com/finclip-chat-embed.iife.js"></script>
<script>
  window.initFinClipChat({
    buttonLabel: "Chat with Support",
    initialOpen: false,
    suggestions: ["How do I get started?", "What are your pricing plans?"],
    suggestionsLabel: "Frequently Asked Questions",
    apiUrl: "https://api.your-domain.com",
    streamingUrl: "https://streaming.your-domain.com",
    headerTitle: "Welcome to FinClip! 🚀",
    headerSubtitle: "What would you like to know about our platform?"
  });
</script>
```
| Option | Type | Default | Description |
|---|---|---|---|
| `buttonLabel` | string | `"Chat with us"` | Text displayed on the chat button |
| `initialOpen` | boolean | `false` | Whether the chat window should be open by default |
| `suggestions` | string[] | `[]` | List of suggested prompts for users |
| `suggestionsLabel` | string | `"Try these prompts ✨"` | Label for the suggestions section |
| `apiUrl` | string | `"http://localhost:5678"` | URL of the API server |
| `streamingUrl` | string | `"http://localhost:5679"` | URL of the streaming server |
| `headerTitle` | string | `"Happy to Help! 👋"` | Title text displayed in the chat window header |
| `headerSubtitle` | string | `"How can I assist you today?"` | Subtitle text displayed in the chat window header |
For cross-domain embedding, ensure your backend servers have proper CORS configuration:

```javascript
// Example CORS headers for production
const headers = {
  "Access-Control-Allow-Origin": origin, // Use the request origin instead of "*"
  "Access-Control-Allow-Credentials": "true",
  "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type, Authorization"
};
```
The chat widget has been configured to handle CORS properly:
- For localhost development: `withCredentials` is set to `false` to avoid CORS issues
- For production: `withCredentials` is set to `true` when the API URL is not localhost
- All components (ChatApp, FloatingChatWidget, DemoPage) pass `apiUrl` and `streamingUrl` parameters
- Error handling for CORS-related issues is implemented
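The localhost rule above reduces to a hostname check on the configured API URL; an illustrative sketch (hypothetical helper name, not the widget's actual code):

```typescript
// Decide whether to send credentials with cross-origin requests:
// omit them for local development, include them for production hosts.
function shouldSendCredentials(apiUrl: string): boolean {
  const host = new URL(apiUrl).hostname;
  return host !== "localhost" && host !== "127.0.0.1";
}
```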
When testing cross-domain scenarios, use the `/examples/cross-domain-test.html` file to verify your CORS configuration.
You can access the following URLs for testing different integration approaches:
- **React Component Integration:** `http://localhost:5173`
  - This loads the main `index.html` in the root directory
  - Renders the `DemoPage.tsx` component that uses the React component integration approach
  - Demonstrates how to use the `FloatingChatWidget` component in a React application

- **Examples Directory:** `http://localhost:5173/examples/`
  - Contains all standalone examples for different integration scenarios
  - Access through the examples index page at `http://localhost:5173/examples/index.html`

- **Standalone Widget:** `http://localhost:5173/examples/standalone-widget.html`
  - Demonstrates the chat widget in standalone mode without the floating button interface
  - Useful for testing the full-page chat experience

- **Embed Script Integration:** `http://localhost:5173/examples/embed-demo.html`
  - Uses the development embed script (`embed-dev.ts`), which is specifically built for development
  - Demonstrates how to embed the chat widget on any website using a script tag

- **Production Embed Demo:** `http://localhost:5173/examples/embed-demo-prod.html`
  - Uses the production build of the embed script
  - Demonstrates how the widget behaves in a production environment

- **Cross-Domain Testing:** `http://localhost:5173/examples/cross-domain-test.html`
  - Simulates a production environment where the chat widget and backend are on different domains
  - Useful for testing CORS configurations
To build the chat widget for production:
```bash
cd /path/to/cxagent/web
bun run build
```
This will generate optimized files in the `dist` directory. For production deployment, the `dist-production` directory is created with all necessary files for embedding the chat widget on any website.
The project includes a simple Bun-based HTTP server (`serve.ts`) for serving the built files, but it's not mandatory. You can use any web server of your choice:

```bash
cd /path/to/cxagent/web
bun serve.ts 3001 ./dist
```
- **Python HTTP Server:**

  ```bash
  cd /path/to/cxagent/web/dist
  python -m http.server 3001
  ```

- **Node.js/npm HTTP Server:**

  ```bash
  npm install -g http-server
  cd /path/to/cxagent/web/dist
  http-server -p 3001 --cors
  ```

- **Nginx:** Configure a virtual host to serve the static files from the `dist` directory

- **Bun's built-in serve:**

  ```bash
  cd /path/to/cxagent/web/dist
  bun --serve
  ```
When using an alternative server for cross-domain testing, ensure proper CORS headers are set (see the CORS configuration section above).
The production build generates the following key files:
- `finclip-chat-embed.iife.js`: The production-ready embed script optimized for deployment
- `style.css`: The stylesheet for the chat widget
- `assets/`: Directory containing icons, fonts, and other static assets
- `sample-embed.html`: A sample HTML file demonstrating how to embed the widget
There are three main embedding script files in the project, which work together to provide a seamless embedding experience for different environments:
- **embed.ts** (`src/embed.ts`):
  - Serves as the base implementation and foundation for the other embed scripts
  - Provides core functionality for creating the widget container and rendering the React component
  - Is not used directly by end users, but is built into `finclip-chat.iife.js` during the build process
  - Implements the basic `window.initFinClipChat()` function that all variants share
- **embed-dev.ts** (`src/embed-dev.ts`):
  - Extends the base functionality with development-specific features
  - Includes additional logging to help with debugging
  - Has special handling for API URLs with default localhost values
  - Is used by the examples/embed-demo.html page during development
  - Implements CORS handling with conditional credentials based on whether the API URL is localhost
- **embed-production.ts** (`src/embed-production.ts`):
  - The production-optimized version that's built into `finclip-chat-embed.iife.js`
  - Includes enhanced URL extraction capabilities for production environments
  - Is used by examples/cross-domain-test.html for testing cross-domain scenarios
  - Handles CORS properly for cross-domain usage in production environments
Note for Developers: You don't need to interact with these source files directly. When implementing the chat widget:

- For development: The build system automatically uses the development version
- For production: Use the pre-built `finclip-chat-embed.iife.js` file from the distribution
All embedding scripts handle CORS configuration and API URL extraction internally, making them simple to use regardless of your hosting environment.
For production deployment, host the built files on a CDN and reference them in your HTML:

```html
<script src="https://cdn.your-domain.com/finclip-chat-embed.iife.js" data-finclip-chat data-api-url="https://your-api-server.com" data-streaming-url="https://your-streaming-server.com"></script>
```
The project includes a `cross-domain-test.html` file that simulates a production environment where the chat widget and backend servers are on different domains. This is useful for testing CORS configurations.
To test cross-domain scenarios:
- Build the production files:

  ```bash
  cd /path/to/cxagent/web
  bun run build
  ```

- Serve the production files from a different port (e.g., 3001):

  ```bash
  cd /path/to/cxagent/web/dist-production
  npx http-server -p 3001 --cors
  ```

- Open the cross-domain test file in your browser:

  ```
  file:///path/to/cxagent/web/cross-domain-test.html
  ```

  Or serve it from another port:

  ```bash
  cd /path/to/cxagent/web
  npx http-server -p 3002 --cors
  # Then access http://localhost:3002/cross-domain-test.html
  ```
This setup tests:
- The chat widget script loaded from port 3001
- The backend API running on port 5678
- The streaming server running on port 5679
Ensure your backend servers have proper CORS headers configured as mentioned in the CORS section.
If you're using CxAgent via `bunx` instead of cloning the repository, the package includes pre-built scripts for embedding the chat widget. When you install CxAgent using `bunx @finogeek/cxagent`, the following files are included in the package:
- `finclip-chat-embed.iife.js`: The production-ready embed script optimized for deployment
- `style.css`: The required CSS styles for the chat widget
- `implementation-guide.html`: A sample HTML file showing how to implement the widget
To set up the embedding frontend UI when using bunx
:
-
Start the CXAgent backend:
bunx @finogeek/cxagent
-
Find the embedding files in the npm package directory:
# Locate the package directory npm list -g @finogeek/cxagent # Or if installed locally ls node_modules/@finogeek/cxagent/web/dist
-
Copy the embedding files from the package to your website:
# If installed globally cp -r /path/to/global/node_modules/@finogeek/cxagent/web/dist/* your-website-directory/ # If installed locally cp -r node_modules/@finogeek/cxagent/web/dist/* your-website-directory/
-
Reference the files in your HTML:
<link rel="stylesheet" href="/style.css"> <script src="/finclip-chat-embed.iife.js" data-finclip-chat data-api-url="http://localhost:5678" data-streaming-url="http://localhost:5679"></script>