A TypeScript client library for interacting with the Vercel Queue Service API, designed for seamless integration with Vercel deployments.
- Automatic Queue Triggering: Vercel automatically triggers your API routes when messages are ready
- Next.js Integration: Built-in support for Next.js API routes and Server Actions
- Generic Payload Support: Send and receive any type of data with type safety
- Pub/Sub Pattern: Topic-based messaging with consumer groups
- Type Safety: Full TypeScript support with generic types
- Streaming Support: Handle large payloads efficiently
- Customizable Serialization: Use built-in transports (JSON, Buffer, Stream) or create your own
```bash
npm install @vercel/queue
```
For local development, you'll need to pull your Vercel environment variables:
```bash
# Install Vercel CLI if you haven't already
npm i -g vercel

# Pull environment variables from your Vercel project
vc env pull
```
Update your `tsconfig.json` to use `"bundler"` module resolution for proper package export resolution:
```jsonc
{
  "compilerOptions": {
    "moduleResolution": "bundler"
    // ... other options
  }
}
```
The `send` function can be used anywhere in your codebase to publish messages to a queue:
```typescript
import { send } from "@vercel/queue";

// Send a message to a topic
await send("my-topic", {
  message: "Hello world",
});

// With additional options
await send(
  "my-topic",
  {
    message: "Hello world",
  },
  {
    idempotencyKey: "unique-key", // Optional: prevent duplicate messages
    retentionSeconds: 3600, // Optional: override retention time (defaults to 24 hours)
  },
);
```
Example usage in an API route:
```typescript
// app/api/send-message/route.ts
import { send } from "@vercel/queue";

export async function POST(request: Request) {
  const body = await request.json();

  const { messageId } = await send("my-topic", {
    message: body.message,
  });

  return Response.json({ messageId });
}
```
Messages are consumed using API routes that Vercel automatically triggers when messages are available. The recommended approach is to handle multiple topics and consumers in a single API route to keep your `vercel.json` configuration simple:
```typescript
// app/api/queue/route.ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback({
  // Single topic with one consumer
  "my-topic": {
    "my-consumer": async (message, metadata) => {
      // metadata includes: { messageId, deliveryCount, createdAt }
      console.log("Processing message:", message);

      // If this throws an error, the message will be automatically retried
      await processMessage(message);
    },
  },

  // Multiple consumers for different purposes
  "order-events": {
    fulfillment: async (order, metadata) => {
      // By default, errors will trigger automatic retries
      // But you can control retry timing if needed:
      if (!isSystemReady()) {
        // Override default retry with a 5 minute delay
        return { timeoutSeconds: 300 };
      }
      await processOrder(order);
    },
    analytics: async (order, metadata) => {
      try {
        await trackOrder(order);
      } catch (error) {
        // Optional: Custom exponential backoff instead of default retry timing
        const timeoutSeconds = Math.pow(2, metadata.deliveryCount) * 60;
        return { timeoutSeconds };
      }
    },
  },
});
```
While you can split handlers into separate routes if needed (e.g., for code organization or deployment flexibility), consolidating them in one route is recommended for simpler configuration.
Configure which topics and consumers your API route handles:
```jsonc
{
  "functions": {
    "app/api/queue/route.ts": {
      "experimentalTriggers": [
        {
          "type": "queue/v1beta",
          "topic": "my-topic",
          "consumer": "my-consumer",
          "maxAttempts": 3, // Optional: Maximum number of delivery attempts (default: 3)
          "retryAfterSeconds": 60, // Optional: Delay between retries (default: 60)
          "initialDelaySeconds": 0 // Optional: Initial delay before first delivery (default: 0)
        },
        {
          "type": "queue/v1beta",
          "topic": "order-events",
          "consumer": "fulfillment"
        },
        {
          "type": "queue/v1beta",
          "topic": "order-events",
          "consumer": "analytics",
          "maxAttempts": 5, // Retry up to 5 times
          "retryAfterSeconds": 300 // Wait 5 minutes between retries
        }
      ]
    }
  }
}
```
- Topics: Named message channels that can have multiple consumer groups
- Consumer Groups: Named groups of consumers that process messages in parallel
  - Different consumer groups for the same topic each get a copy of every message (see the sketch after this list)
  - Multiple consumers in the same group share/split messages for load balancing
- Automatic Triggering: Vercel triggers your API routes when messages are available
- Message Processing: Your API routes receive message metadata via headers
- Configuration: The `vercel.json` file tells Vercel which routes handle which topics/consumers
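As a concrete illustration of the fan-out behavior, here is a minimal sketch using the `handleCallback` API from above (the `invoices` topic and the `email`/`audit` consumer group names are hypothetical):

```typescript
// app/api/queue/route.ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback({
  // Hypothetical topic with two consumer groups
  invoices: {
    // Each consumer group receives its own copy of every "invoices" message
    email: async (invoice) => {
      console.log("email copy:", invoice);
    },
    audit: async (invoice) => {
      console.log("audit copy:", invoice);
    },
  },
});
```

A single `send("invoices", ...)` call is delivered once to `email` and once to `audit`, while multiple instances within either group split that group's messages between them.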
The queue client supports customizable serialization through the `Transport` interface:

- `JsonTransport` (Default): For structured data that fits in memory
- `BufferTransport`: For binary data that fits in memory
- `StreamTransport`: For large files and memory-efficient processing
Example:
```typescript
import { send, JsonTransport } from "@vercel/queue";

// JsonTransport is the default
await send("json-topic", { data: "example" });

// Explicit transport configuration
await send(
  "json-topic",
  { data: "example" },
  { transport: new JsonTransport() },
);
```
| Use Case | Recommended Transport | Memory Usage | Performance |
| --- | --- | --- | --- |
| Small JSON objects | `JsonTransport` | Low | High |
| Binary files < 100MB | `BufferTransport` | Medium | High |
| Large files > 100MB | `StreamTransport` | Very Low | Medium |
| Real-time streams | `StreamTransport` | Very Low | High |
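For a non-default transport, the call shape is the same as above. A sketch for binary payloads, assuming `BufferTransport` is constructed with no arguments like `JsonTransport` and carries `Buffer` payloads:

```typescript
import { send, BufferTransport } from "@vercel/queue";

// Hypothetical example: publish raw bytes (here, the start of a PNG header)
const payload = Buffer.from("89504e47", "hex");

// Assumption: BufferTransport takes no constructor arguments
await send("image-topic", payload, {
  transport: new BufferTransport(),
});
```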
The queue client provides specific error types:
- `QueueEmptyError`: No messages available (204)
- `MessageLockedError`: Message temporarily locked (423)
- `MessageNotFoundError`: Message doesn't exist (404)
- `MessageNotAvailableError`: Message exists but unavailable (409)
- `MessageCorruptedError`: Message data corrupted
- `BadRequestError`: Invalid parameters (400)
- `UnauthorizedError`: Authentication failure (401)
- `ForbiddenError`: Access denied (403)
- `InternalServerError`: Server errors (500+)
Example error handling:
```typescript
import {
  BadRequestError,
  ForbiddenError,
  InternalServerError,
  UnauthorizedError,
  send,
} from "@vercel/queue";

try {
  await send("my-topic", payload);
} catch (error) {
  if (error instanceof UnauthorizedError) {
    console.log("Invalid token - refresh authentication");
  } else if (error instanceof ForbiddenError) {
    console.log("Environment mismatch - check configuration");
  } else if (error instanceof BadRequestError) {
    console.log("Invalid parameters:", error.message);
  } else if (error instanceof InternalServerError) {
    console.log("Server error - retry with backoff");
  }
}
```
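To act on the "retry with backoff" case above, one minimal sketch (the `sendWithRetry` helper is illustrative, not part of the library):

```typescript
import { send, InternalServerError } from "@vercel/queue";

// Hypothetical helper: retry transient server errors with exponential backoff
async function sendWithRetry(topic: string, payload: unknown, maxAttempts = 3) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await send(topic, payload);
    } catch (error) {
      if (!(error instanceof InternalServerError) || attempt >= maxAttempts) {
        throw error; // non-retryable error, or out of attempts
      }
      // Wait 1s, 2s, 4s, ... between attempts
      await new Promise((resolve) =>
        setTimeout(resolve, 1000 * 2 ** (attempt - 1)),
      );
    }
  }
}
```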
Note: The `receive` function is not intended for use in Vercel deployments. It's designed for the Vercel Sandbox environment or alternative server setups where you need direct control over message processing.
```typescript
// Process next available message
await receive<T>(topicName, consumerGroup, handler);

// Process a specific message by ID
await receive<T>(topicName, consumerGroup, handler, {
  messageId: "message-id",
});

// Available options (interface name is illustrative; it shows the options shape)
interface ReceiveOptions<T> {
  messageId?: string; // Process a specific message by ID
  skipPayload?: boolean; // Skip payload download (requires messageId)
  transport?: Transport<T>; // Custom transport (defaults to JsonTransport)
  visibilityTimeoutSeconds?: number; // Message visibility timeout
  refreshInterval?: number; // Refresh interval for long-running operations
}

// Handler function signature
type MessageHandler<T = unknown> = (
  message: T,
  metadata: MessageMetadata,
) => Promise<MessageHandlerResult> | MessageHandlerResult;

// Handler result types
type MessageHandlerResult = void | MessageTimeoutResult;

interface MessageTimeoutResult {
  timeoutSeconds: number; // Seconds before the message becomes available again
}
```
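Putting it together, a minimal sketch of draining a queue from a Sandbox-style worker. The `jobs` topic, `worker` group, and `Job` type are hypothetical, and we assume `receive` throws `QueueEmptyError` when no message is available, per the error list above:

```typescript
import { receive, QueueEmptyError } from "@vercel/queue";

// Hypothetical payload type for illustration
interface Job {
  url: string;
}

// Drain all currently available messages, one at a time
while (true) {
  try {
    await receive<Job>("jobs", "worker", async (job, metadata) => {
      console.log(`Delivery #${metadata.deliveryCount}:`, job.url);
      await fetch(job.url); // Throwing here makes the message eligible for retry
    });
  } catch (error) {
    // Assumption: receive signals an empty queue by throwing QueueEmptyError
    if (error instanceof QueueEmptyError) break;
    throw error;
  }
}
```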
- Message Throughput: Each topic can handle up to 1,000 messages per second
- Payload Size: Maximum payload size is 4.5MB (this limit will be increased soon)
- Number of Topics: No limit on the number of topics you can create
If you need more than 1,000 messages per second, you can create multiple topics (e.g., user-specific or shard-based topics) and handle them with a single consumer by using wildcards in your `vercel.json`:
```json
{
  "functions": {
    "app/api/queue/route.ts": {
      "experimentalTriggers": [
        {
          "type": "queue/v1beta",
          "topic": "user-*",
          "consumer": "processor"
        }
      ]
    }
  }
}
```
This allows you to:

- Create topics like `user-1`, `user-2`, etc.
- Process messages from all user topics with a single handler
- Give each topic its own 1,000 messages per second quota
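A minimal sketch of the producer side of this pattern (the shard count and `sendUserEvent` helper are illustrative; the topic names just need to match the `user-*` wildcard above):

```typescript
import { send } from "@vercel/queue";

// Spread load across N topics that all match the "user-*" wildcard
const SHARDS = 16; // illustrative shard count

async function sendUserEvent(userId: number, event: unknown) {
  const shard = userId % SHARDS;
  await send(`user-${shard}`, event);
}

await sendUserEvent(42, { type: "signup" });
```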
MIT