This example demonstrates how to build a group chat workflow with the AIGNE Framework. It supports both one-shot and interactive chat modes, along with customizable model settings and pipeline input/output.
```mermaid
flowchart LR

manager(Group Manager)
user(User)
writer(Writer)
editor(Editor)
illustrator(Illustrator)

user --1 publish instruction--> manager
manager ==2 request to speak==> writer
manager --4 request to speak--> illustrator
writer -.3 group message.-> manager
writer -..-> editor
writer -..-> illustrator
writer -..-> user

classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;

class manager inputOutput
class user processing
class writer processing
class editor processing
class illustrator processing
```
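In code, each role in the diagram becomes an agent wired to the others through message topics. The sketch below is a rough illustration, assuming this example uses the pub/sub style seen elsewhere in AIGNE (`subscribeTopic`/`publishTopic` on `AIAgent.from`, `aigne.publish`/`aigne.subscribe`). It shows only a writer-editor chain; the manager's speaker-selection logic is omitted, and the actual wiring lives in this example's source.

```typescript
import { AIAgent, AIGNE } from "@aigne/core";
import { OpenAIChatModel } from "@aigne/openai";

// Two of the roles from the diagram, wired through message topics.
// The writer reacts to the user's instruction; the editor reacts to drafts.
const writer = AIAgent.from({
  name: "writer",
  instructions: "You write short stories based on the user's instructions.",
  subscribeTopic: "instruction",
  publishTopic: "draft",
});

const editor = AIAgent.from({
  name: "editor",
  instructions: "You review and polish the writer's drafts.",
  subscribeTopic: "draft",
  publishTopic: "final",
});

const aigne = new AIGNE({
  model: new OpenAIChatModel({ apiKey: process.env.OPENAI_API_KEY }),
  agents: [writer, editor],
});

// Step 1 in the diagram: the user's publish instruction enters the system.
aigne.publish("instruction", "Write a short story about space exploration");

// Wait for the edited result to come out the other end.
const { message } = await aigne.subscribe("final");
console.log(message);
```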
Prerequisites:

- Node.js (>=20.0) and npm installed on your machine
- An OpenAI API key for interacting with OpenAI's services
- Optional dependencies (if running the example from source code): pnpm for package management
Run the example directly with npx:

```bash
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY # Set your OpenAI API key

# Run in one-shot mode (default)
npx -y @aigne/example-workflow-group-chat

# Run in interactive chat mode
npx -y @aigne/example-workflow-group-chat --chat

# Use pipeline input
echo "Write a short story about space exploration" | npx -y @aigne/example-workflow-group-chat
```
To run from source instead, clone the repository and install dependencies:

```bash
git clone https://github.com/AIGNE-io/aigne-framework
cd aigne-framework/examples/workflow-group-chat
pnpm install
```
Set up your OpenAI API key in the `.env.local` file:

```bash
OPENAI_API_KEY="" # Set your OpenAI API key here
```
You can use different AI models by setting the `MODEL` environment variable along with the corresponding API key. The framework supports multiple providers:

- OpenAI: `MODEL="openai:gpt-4.1"` with `OPENAI_API_KEY`
- Anthropic: `MODEL="anthropic:claude-3-7-sonnet-latest"` with `ANTHROPIC_API_KEY`
- Google Gemini: `MODEL="gemini:gemini-2.0-flash"` with `GEMINI_API_KEY`
- AWS Bedrock: `MODEL="bedrock:us.amazon.nova-premier-v1:0"` with AWS credentials
- DeepSeek: `MODEL="deepseek:deepseek-chat"` with `DEEPSEEK_API_KEY`
- OpenRouter: `MODEL="openrouter:openai/gpt-4o"` with `OPEN_ROUTER_API_KEY`
- xAI: `MODEL="xai:grok-2-latest"` with `XAI_API_KEY`
- Ollama: `MODEL="ollama:llama3.2"` with `OLLAMA_DEFAULT_BASE_URL`
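Note that the `provider[:model]` value splits on the first colon only, since some model IDs (like the Bedrock one above) contain colons themselves. A minimal TypeScript sketch of that parsing rule; `parseModelSpec` is a hypothetical helper for illustration, not part of the framework:

```typescript
// Illustrative only: split a MODEL value like "openai:gpt-4o-mini" into its
// provider and optional model name. Only the first colon separates provider
// from model, because model IDs may contain colons of their own.
function parseModelSpec(spec: string): { provider: string; model?: string } {
  const index = spec.indexOf(":");
  if (index === -1) return { provider: spec }; // e.g. "openai" uses the provider's default model
  return { provider: spec.slice(0, index), model: spec.slice(index + 1) };
}

console.log(parseModelSpec(process.env.MODEL ?? "openai"));
// { provider: "openai" }
console.log(parseModelSpec("bedrock:us.amazon.nova-premier-v1:0"));
// { provider: "bedrock", model: "us.amazon.nova-premier-v1:0" }
```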
For detailed configuration examples, please refer to the `.env.local.example` file in this directory.
Then run the example:

```bash
pnpm start # Run in one-shot mode (default)

# Run in interactive chat mode
pnpm start -- --chat

# Use pipeline input
echo "Write a short story about space exploration" | pnpm start
```
The example supports the following command-line parameters:
| Parameter | Description | Default |
| --- | --- | --- |
| `--chat` | Run in interactive chat mode | Disabled (one-shot mode) |
| `--model <provider[:model]>` | AI model to use in the format `provider[:model]`, where the model name is optional. Examples: `openai` or `openai:gpt-4o-mini` | `openai` |
| `--temperature <value>` | Temperature for model generation | Provider default |
| `--top-p <value>` | Top-p sampling value | Provider default |
| `--presence-penalty <value>` | Presence penalty value | Provider default |
| `--frequency-penalty <value>` | Frequency penalty value | Provider default |
| `--log-level <level>` | Set logging level (ERROR, WARN, INFO, DEBUG, TRACE) | `INFO` |
| `--input`, `-i <input>` | Specify input directly | None |
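For reference, this flag surface can be modeled with Node's built-in `util.parseArgs`. The sketch below is illustrative only; the example itself may use a different CLI library, and the numeric flags arrive as strings that would still need `Number(...)` conversion:

```typescript
// Illustrative sketch of the CLI surface using Node's built-in parser
// (node:util parseArgs, available in Node >= 18.3).
import { parseArgs } from "node:util";

const { values } = parseArgs({
  options: {
    chat: { type: "boolean", default: false },
    model: { type: "string", default: "openai" },
    temperature: { type: "string" }, // numeric flags arrive as strings
    "top-p": { type: "string" },
    "presence-penalty": { type: "string" },
    "frequency-penalty": { type: "string" },
    "log-level": { type: "string", default: "INFO" },
    input: { type: "string", short: "i" }, // --input or -i
  },
});

console.log(values);
// e.g. { chat: true, model: "openai:gpt-4o-mini", "log-level": "INFO" }
```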
Usage examples:

```bash
# Run in chat mode (interactive)
pnpm start -- --chat

# Set logging level
pnpm start -- --log-level DEBUG

# Use pipeline input
echo "Write a short story about space exploration" | pnpm start
```
This project is licensed under the MIT License.