@aigne/example-workflow-handoff

Workflow Handoff Demo

This example demonstrates how to use the AIGNE Framework to build a handoff workflow. It supports both one-shot and interactive chat modes, along with customizable model settings and pipeline input/output. The workflow is illustrated by the diagram below:

flowchart LR

in(In)
out(Out)
agentA(Agent A)
agentB(Agent B)

in --> agentA --transfer to b--> agentB --> out

classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;

class in inputOutput
class out inputOutput
class agentA processing
class agentB processing

The handoff between the user and the two agents proceeds as follows:

sequenceDiagram

participant User
participant A as Agent A
participant B as Agent B

User ->> A: transfer to agent b
A ->> B: transfer to agent b
B ->> User: What do you need, friend?
loop
  User ->> B: It's a beautiful day
  B ->> User: Sunshine warms the earth,<br />Gentle breeze whispers softly,<br />Nature sings with joy.
end

Prerequisites

  • Node.js (>=20.0) and npm installed on your machine
  • An OpenAI API key for interacting with OpenAI's services
  • Optional dependencies (if running the example from source code):
    • Bun for running unit tests & examples
    • Pnpm for package management

Quick Start (No Installation Required)

export OPENAI_API_KEY=YOUR_OPENAI_API_KEY # Set your OpenAI API key

# Run in one-shot mode (default)
npx -y @aigne/example-workflow-handoff

# Run in interactive chat mode
npx -y @aigne/example-workflow-handoff --chat

# Use pipeline input
echo "transfer to agent b" | npx -y @aigne/example-workflow-handoff

Installation

Clone the Repository

git clone https://github.com/AIGNE-io/aigne-framework

Install Dependencies

cd aigne-framework/examples/workflow-handoff

pnpm install

Set Up Environment Variables

Set up your OpenAI API key in the .env.local file:

OPENAI_API_KEY="" # Set your OpenAI API key here
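If you prefer a template, you can copy the .env.local.example file mentioned below and fill in your key:

cp .env.local.example .env.local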

Using Different Models

You can use different AI models by setting the MODEL environment variable along with the corresponding API key. The framework supports multiple providers:

  • OpenAI: MODEL="openai:gpt-4.1" with OPENAI_API_KEY
  • Anthropic: MODEL="anthropic:claude-3-7-sonnet-latest" with ANTHROPIC_API_KEY
  • Google Gemini: MODEL="gemini:gemini-2.0-flash" with GEMINI_API_KEY
  • AWS Bedrock: MODEL="bedrock:us.amazon.nova-premier-v1:0" with AWS credentials
  • DeepSeek: MODEL="deepseek:deepseek-chat" with DEEPSEEK_API_KEY
  • OpenRouter: MODEL="openrouter:openai/gpt-4o" with OPEN_ROUTER_API_KEY
  • xAI: MODEL="xai:grok-2-latest" with XAI_API_KEY
  • Ollama: MODEL="ollama:llama3.2" with OLLAMA_DEFAULT_BASE_URL

For detailed configuration examples, please refer to the .env.local.example file in this directory.
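For example, to run the example with Anthropic instead of OpenAI (assuming you have an Anthropic key; the model name here is just the one listed above), override MODEL when starting:

export ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY
MODEL="anthropic:claude-3-7-sonnet-latest" pnpm start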

Run the Example

pnpm start # Run in one-shot mode (default)

# Run in interactive chat mode
pnpm start -- --chat

# Use pipeline input
echo "transfer to agent b" | pnpm start

Run Options

The example supports the following command-line parameters:

  • --chat: run in interactive chat mode (default: disabled, one-shot mode)
  • --model <provider[:model]>: AI model to use in the format 'provider[:model]', where the model part is optional, e.g. 'openai' or 'openai:gpt-4o-mini' (default: openai)
  • --temperature <value>: temperature for model generation (default: provider default)
  • --top-p <value>: top-p sampling value (default: provider default)
  • --presence-penalty <value>: presence penalty value (default: provider default)
  • --frequency-penalty <value>: frequency penalty value (default: provider default)
  • --log-level <level>: logging level, one of ERROR, WARN, INFO, DEBUG, TRACE (default: INFO)
  • --input, -i <input>: specify input directly (default: none)

Examples

# Run in chat mode (interactive)
pnpm start -- --chat

# Set logging level
pnpm start -- --log-level DEBUG

# Use pipeline input
echo "transfer to agent b" | pnpm start

Example

The following example demonstrates how to build a handoff workflow:

import { AIAgent, AIGNE } from "@aigne/core";
import { OpenAIChatModel } from "@aigne/core/models/openai-chat-model.js";

const { OPENAI_API_KEY } = process.env;

const model = new OpenAIChatModel({
  apiKey: OPENAI_API_KEY,
});

// Returning an agent from a skill hands the conversation off to that agent
function transfer_to_b() {
  return agentB;
}

const agentA = AIAgent.from({
  name: "AgentA",
  instructions: "You are a helpful agent.",
  outputKey: "A",
  skills: [transfer_to_b], // Agent A can call this skill to trigger the handoff
});

const agentB = AIAgent.from({
  name: "AgentB",
  instructions: "Only speak in Haikus.",
  outputKey: "B",
});

const aigne = new AIGNE({ model });

// Invoking AIGNE with just an agent returns a user agent for multi-turn conversation
const userAgent = aigne.invoke(agentA);

const result1 = await userAgent.invoke("transfer to agent b");
console.log(result1);
// Output:
// {
//   B: "Transfer now complete,  \nAgent B is here to help.  \nWhat do you need, friend?",
// }

const result2 = await userAgent.invoke("It's a beautiful day");
console.log(result2);
// Output:
// {
//   B: "Sunshine warms the earth,  \nGentle breeze whispers softly,  \nNature sings with joy.  ",
// }
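
Note how the handoff is wired: transfer_to_b is registered as a skill on Agent A, and because it returns agentB, AIGNE transfers the session to Agent B once the model calls that skill. Every subsequent message from the user agent then goes straight to Agent B, which is why both results carry the B output key.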

License

This project is licensed under the MIT License.
