Use the AI Projects client library (in preview) to:
- Enumerate connections in your Azure AI Foundry project and get connection properties. For example, get the inference endpoint URL and credentials associated with your Azure OpenAI connection.
- Develop Agents using the Azure AI Agent Service, leveraging an extensive ecosystem of models, tools, and capabilities from OpenAI, Microsoft, and other LLM providers. The Azure AI Agent Service enables the building of Agents for a wide range of generative AI use cases. The package is currently in private preview.
- Enable OpenTelemetry tracing.
Samples | Package (npm) | API reference documentation
- Getting started
- Key concepts
- Examples
- Troubleshooting
- Contributing
- LTS versions of Node.js
- An Azure subscription.
- A project in Azure AI Foundry.
- The project connection string. It can be found on your Azure AI Foundry project overview page, under "Project details". The examples below assume the environment variable `AZURE_AI_PROJECTS_CONNECTION_STRING` is set to this value.
- Entra ID is needed to authenticate the client. Your application needs an object that implements the `TokenCredential` interface. Code samples here use `DefaultAzureCredential`. To get that working, you will need:
  - The `Contributor` role. Roles can be assigned via the "Access Control (IAM)" tab of your Azure AI Project resource in the Azure portal.
  - Azure CLI installed.
  - To be logged into your Azure account by running `az login`.
  - Note that if you have multiple Azure subscriptions, the subscription that contains your Azure AI Project resource must be your default subscription. Run `az account list --output table` to list all your subscriptions and see which one is the default. Run `az account set --subscription "Your Subscription ID or Name"` to change your default subscription.
Install the package:

```bash
npm install @azure/ai-projects @azure/identity
```
The class factory method `fromConnectionString` is used to construct the client. To construct a client:
```ts
import { AIProjectsClient } from "@azure/ai-projects";
import { DefaultAzureCredential } from "@azure/identity";

const connectionString = process.env.AZURE_AI_PROJECTS_CONNECTION_STRING ?? "<connectionString>";
const client = AIProjectsClient.fromConnectionString(connectionString, new DefaultAzureCredential());
```
Your Azure AI Foundry project has a "Management center". When you enter it, you will see a tab named "Connected resources" under your project. The `.connections` operations on the client allow you to enumerate the connections and get connection properties. Connection properties include the resource URL and authentication credentials, among other things.
Below are code examples of the connection operations. Full samples can be found under the "connections" folder in the package samples.
To list the properties of all the connections in the Azure AI Foundry project:
```ts
const connections = await client.connections.listConnections();
for (const connection of connections) {
  console.log(connection);
}
```
To list the properties of connections of a certain type (here Azure OpenAI):
```ts
const connections = await client.connections.listConnections({ category: "AzureOpenAI" });
for (const connection of connections) {
  console.log(connection);
}
```
To get the connection properties of a connection named `connectionName`:

```ts
const connection = await client.connections.getConnection("connectionName");
console.log(connection);
```
To get the connection properties with its authentication credentials:
```ts
const connection = await client.connections.getConnectionWithSecrets("connectionName");
console.log(connection);
```
Agents in the Azure AI Projects client library are designed to facilitate various interactions and operations within your AI projects. They serve as the core components that manage and execute tasks, leveraging different tools and resources to achieve specific goals. The following steps outline the typical sequence for interacting with Agents. See the "agents" folder in the package samples for additional Agent samples.
Agents are actively being developed. A sign-up form for private preview is coming soon.
Here is an example of how to create an Agent:
```ts
const agent = await client.agents.createAgent("gpt-4o", {
  name: "my-agent",
  instructions: "You are a helpful assistant",
});
```
To allow Agents to access your resources or custom functions, you need tools. You can pass tools to `createAgent` through the `tools` and `toolResources` arguments.

You can use `ToolSet` to do this:
```ts
import { ToolSet } from "@azure/ai-projects";

const toolSet = new ToolSet();
toolSet.addFileSearchTool([vectorStore.id]);
toolSet.addCodeInterpreterTool([codeInterpreterFile.id]);

const agent = await client.agents.createAgent("gpt-4o", {
  name: "my-agent",
  instructions: "You are a helpful agent",
  tools: toolSet.toolDefinitions,
  toolResources: toolSet.toolResources,
});
console.log(`Created agent, agent ID: ${agent.id}`);
```
For an Agent to perform file search, we first need to upload a file, create a vector store, and associate the file with the vector store. Here is an example:
```ts
import { ToolUtility } from "@azure/ai-projects";

const localFileStream = fs.createReadStream(filePath);
const file = await client.agents.uploadFile(localFileStream, "assistants", {
  fileName: "sample_file_for_upload.txt",
});
console.log(`Uploaded file, ID: ${file.id}`);

const vectorStore = await client.agents.createVectorStore({
  fileIds: [file.id],
  name: "my_vector_store",
});
console.log(`Created vector store, ID: ${vectorStore.id}`);

const fileSearchTool = ToolUtility.createFileSearchTool([vectorStore.id]);
const agent = await client.agents.createAgent("gpt-4o", {
  name: "SDK Test Agent - Retrieval",
  instructions: "You are a helpful agent that can help fetch data from files you know about.",
  tools: [fileSearchTool.definition],
  toolResources: fileSearchTool.resources,
});
console.log(`Created agent, agent ID: ${agent.id}`);
```
Here is an example of uploading a file and letting an Agent use it with the code interpreter:
```ts
import { ToolUtility } from "@azure/ai-projects";

const localFileStream = fs.createReadStream(filePath);
const localFile = await client.agents.uploadFile(localFileStream, "assistants", {
  fileName: "localFile",
});
console.log(`Uploaded local file, file ID: ${localFile.id}`);

const codeInterpreterTool = ToolUtility.createCodeInterpreterTool([localFile.id]);

// Notice that CodeInterpreter must be enabled in the agent creation;
// otherwise the agent will not be able to see the file attachment.
const agent = await client.agents.createAgent("gpt-4o-mini", {
  name: "my-agent",
  instructions: "You are a helpful agent",
  tools: [codeInterpreterTool.definition],
  toolResources: codeInterpreterTool.resources,
});
console.log(`Created agent, agent ID: ${agent.id}`);
```
To enable your Agent to perform searches through the Bing search API, use `ToolUtility.createConnectionTool()` along with a connection.
Here is an example:
```ts
import { ToolUtility, connectionToolType } from "@azure/ai-projects";

const bingConnection = await client.connections.getConnection(
  process.env.BING_CONNECTION_NAME ?? "<connection-name>",
);
const connectionId = bingConnection.id;

const bingTool = ToolUtility.createConnectionTool(connectionToolType.BingGrounding, [connectionId]);
const agent = await client.agents.createAgent("gpt-4o", {
  name: "my-agent",
  instructions: "You are a helpful agent",
  tools: [bingTool.definition],
});
console.log(`Created agent, agent ID: ${agent.id}`);
```
Azure AI Search is an enterprise search system for high-performance applications. It integrates with Azure OpenAI Service and Azure Machine Learning, offering advanced search technologies like vector search and full-text search. It is ideal for knowledge base insights, information discovery, and automation.

Here is an example integrating Azure AI Search:
```ts
import { ToolUtility } from "@azure/ai-projects";

const connectionName =
  process.env.AZURE_AI_SEARCH_CONNECTION_NAME ?? "<AzureAISearchConnectionName>";
const connection = await client.connections.getConnection(connectionName);

const azureAISearchTool = ToolUtility.createAzureAISearchTool(connection.id, connection.name);
const agent = await client.agents.createAgent("gpt-4-0125-preview", {
  name: "my-agent",
  instructions: "You are a helpful agent",
  tools: [azureAISearchTool.definition],
  toolResources: azureAISearchTool.resources,
});
console.log(`Created agent, agent ID: ${agent.id}`);
```
You can enhance your Agents by defining callback functions as function tools. These can be provided to `createAgent` through the combination of the `tools` and `toolResources` arguments. Only the function definitions and descriptions are provided to `createAgent`, without the implementations. The `Run` or event handler of `stream` will raise a `requires_action` status based on the function definitions. Your code must handle this status and call the appropriate functions.
Here is an example:
```ts
import {
  FunctionToolDefinition,
  ToolUtility,
  RequiredToolCallOutput,
  FunctionToolDefinitionOutput,
  ToolOutput,
} from "@azure/ai-projects";

class FunctionToolExecutor {
  private functionTools: {
    func: Function;
    definition: FunctionToolDefinition;
  }[];

  constructor() {
    this.functionTools = [
      {
        func: this.getUserFavoriteCity,
        ...ToolUtility.createFunctionTool({
          name: "getUserFavoriteCity",
          description: "Gets the user's favorite city.",
          parameters: {},
        }),
      },
      {
        func: this.getCityNickname,
        ...ToolUtility.createFunctionTool({
          name: "getCityNickname",
          description: "Gets the nickname of a city, e.g. 'LA' for 'Los Angeles, CA'.",
          parameters: {
            type: "object",
            properties: {
              location: { type: "string", description: "The city and state, e.g. Seattle, Wa" },
            },
          },
        }),
      },
      {
        func: this.getWeather,
        ...ToolUtility.createFunctionTool({
          name: "getWeather",
          description: "Gets the weather for a location.",
          parameters: {
            type: "object",
            properties: {
              location: { type: "string", description: "The city and state, e.g. Seattle, Wa" },
              unit: { type: "string", enum: ["c", "f"] },
            },
          },
        }),
      },
    ];
  }

  private getUserFavoriteCity(): {} {
    return { location: "Seattle, WA" };
  }

  private getCityNickname(location: string): {} {
    return { nickname: "The Emerald City" };
  }

  private getWeather(location: string, unit: string): {} {
    return { weather: unit === "f" ? "72f" : "22c" };
  }

  public invokeTool(
    toolCall: RequiredToolCallOutput & FunctionToolDefinitionOutput,
  ): ToolOutput | undefined {
    console.log(`Function tool call - ${toolCall.function.name}`);
    const args: any[] = [];
    if (toolCall.function.parameters) {
      try {
        const params = JSON.parse(toolCall.function.parameters);
        for (const key in params) {
          if (Object.prototype.hasOwnProperty.call(params, key)) {
            args.push(params[key]);
          }
        }
      } catch (error) {
        console.error(`Failed to parse parameters: ${toolCall.function.parameters}`, error);
        return undefined;
      }
    }
    const result = this.functionTools
      .find((tool) => tool.definition.function.name === toolCall.function.name)
      ?.func(...args);
    return result
      ? {
          toolCallId: toolCall.id,
          output: JSON.stringify(result),
        }
      : undefined;
  }

  public getFunctionDefinitions(): FunctionToolDefinition[] {
    return this.functionTools.map((tool) => tool.definition);
  }
}

const functionToolExecutor = new FunctionToolExecutor();
const functionTools = functionToolExecutor.getFunctionDefinitions();
const agent = await client.agents.createAgent("gpt-4o", {
  name: "my-agent",
  instructions:
    "You are a weather bot. Use the provided functions to help answer questions. Customize your responses to the user's preferences as much as possible and use friendly nicknames for cities whenever possible.",
  tools: functionTools,
});
console.log(`Created agent, agent ID: ${agent.id}`);
```
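The executor above only registers the function definitions; when a run enters the `requires_action` state, your code must invoke the matching functions and submit their outputs back to the service. Here is a minimal sketch of that loop. The `submitToolOutputsToRun` method and the `requiredAction` shape follow the package samples; treat them as assumptions if your SDK version differs.

```typescript
// Pure helper: run each tool call through an invoker and keep the defined outputs.
function collectToolOutputs<TCall, TOutput>(
  toolCalls: TCall[],
  invoke: (call: TCall) => TOutput | undefined,
): TOutput[] {
  return toolCalls.map(invoke).filter((o): o is TOutput => o !== undefined);
}

// Sketch: drive a run to completion, answering requires_action with tool outputs.
// The client is typed loosely to keep this self-contained; in practice it is the
// AIProjectsClient constructed earlier.
async function runWithFunctionTools(
  client: any,
  threadId: string,
  agentId: string,
  executor: { invokeTool(call: any): any },
): Promise<any> {
  let run = await client.agents.createRun(threadId, agentId);
  while (
    run.status === "queued" ||
    run.status === "in_progress" ||
    run.status === "requires_action"
  ) {
    if (run.status === "requires_action" && run.requiredAction?.submitToolOutputs) {
      // Execute each requested function and send the results back.
      const toolCalls = run.requiredAction.submitToolOutputs.toolCalls;
      const toolOutputs = collectToolOutputs(toolCalls, (call) => executor.invokeTool(call));
      run = await client.agents.submitToolOutputsToRun(threadId, run.id, toolOutputs);
    } else {
      // Wait for a second before polling again.
      await new Promise((resolve) => setTimeout(resolve, 1000));
      run = await client.agents.getRun(threadId, run.id);
    }
  }
  return run;
}
```

Calls that fail to produce an output (for example, unparseable parameters) are simply skipped, mirroring the `undefined` return of `invokeTool` above.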
OpenAPI specifications describe REST operations against a specific endpoint. The Agents SDK can read an OpenAPI spec, create a function from it, and call that function against the REST endpoint without additional client-side execution. Here is an example of creating an OpenAPI tool (using anonymous authentication):
```ts
import { ToolUtility } from "@azure/ai-projects";

// Read in the OpenApi spec
const filePath = "./data/weatherOpenApi.json";
const openApiSpec = JSON.parse(fs.readFileSync(filePath, "utf-8"));

// Define the OpenApi function
const openApiFunction = {
  name: "getWeather",
  spec: openApiSpec,
  description: "Retrieve weather information for a location",
  auth: {
    type: "anonymous",
  },
  default_params: ["format"], // optional
};

// Create the OpenApi tool
const openApiTool = ToolUtility.createOpenApiTool(openApiFunction);

// Create an agent with the OpenApi tool
const agent = await client.agents.createAgent("gpt-4o-mini", {
  name: "myAgent",
  instructions: "You are a helpful agent",
  tools: [openApiTool.definition],
});
console.log(`Created agent, agent ID: ${agent.id}`);
```
To enable your Agent to answer queries using Fabric data, use `ToolUtility.createFabricTool()` along with a connection to the Fabric resource.
Here is an example:
```ts
import { ToolUtility } from "@azure/ai-projects";

const fabricConnection = await client.connections.getConnection(
  process.env["FABRIC_CONNECTION_NAME"] || "<connection-name>",
);
const connectionId = fabricConnection.id;

// Initialize the Microsoft Fabric tool with the connection id
const fabricTool = ToolUtility.createFabricTool(connectionId);

// Create an agent with the Microsoft Fabric tool
const agent = await client.agents.createAgent("gpt-4o", {
  name: "my-agent",
  instructions: "You are a helpful agent",
  tools: [fabricTool.definition],
});
console.log(`Created agent, agent ID: ${agent.id}`);
```
For each session or conversation, a thread is required. Here is an example:
```ts
const thread = await client.agents.createThread();
```
In some scenarios, you might need to assign specific resources to individual threads. To achieve this, you provide the `toolResources` argument to `createThread`. In the following example, you create a vector store and upload a file, enable an Agent for file search using the `tools` argument, and then associate the file with the thread using the `toolResources` argument.
```ts
import { ToolUtility } from "@azure/ai-projects";

const localFileStream = fs.createReadStream(filePath);
const file = await client.agents.uploadFile(localFileStream, "assistants", {
  fileName: "sample_file_for_upload.csv",
});
console.log(`Uploaded file, ID: ${file.id}`);

const vectorStore = await client.agents.createVectorStore({
  fileIds: [file.id],
});
console.log(`Created vector store, ID: ${vectorStore.id}`);

const fileSearchTool = ToolUtility.createFileSearchTool([vectorStore.id]);
const agent = await client.agents.createAgent("gpt-4o", {
  name: "myAgent",
  instructions: "You are a helpful agent that can help fetch data from files you know about.",
  tools: [fileSearchTool.definition],
});
console.log(`Created agent, agent ID: ${agent.id}`);

// Create thread with file resources.
// If the agent has multiple threads, only this thread can search this file.
const thread = await client.agents.createThread({ toolResources: fileSearchTool.resources });
```
To list all threads attached to a given agent, use the `listThreads` API:
```ts
const threads = await client.agents.listThreads();
console.log(`Threads for agent ${agent.id}:`);
for (const t of threads.data) {
  console.log(`Thread ID: ${t.id}`);
  console.log(`Created at: ${t.createdAt}`);
  console.log(`Metadata: ${t.metadata}`);
  console.log(`----`);
}
```
To create a message for the assistant to process, you pass `user` as `role` and a question as `content`:
```ts
const message = await client.agents.createMessage(thread.id, {
  role: "user",
  content: "hello, world!",
});
console.log(`Created message, message ID: ${message.id}`);
```
To attach a file to a message for content searching, you use `ToolUtility.createFileSearchTool()` and the `attachments` argument:
```ts
import { ToolUtility } from "@azure/ai-projects";

const fileSearchTool = ToolUtility.createFileSearchTool();
const message = await client.agents.createMessage(thread.id, {
  role: "user",
  content: "What feature does Smart Eyewear offer?",
  attachments: [
    {
      fileId: file.id,
      tools: [fileSearchTool.definition],
    },
  ],
});
```
To attach a file to a message for data analysis, you use `ToolUtility.createCodeInterpreterTool()` and the `attachments` argument.
Here is an example:
```ts
import { ToolUtility } from "@azure/ai-projects";

// Notice that CodeInterpreter must be enabled in the agent creation;
// otherwise the agent will not be able to see the file attachment for code interpretation.
const codeInterpreterTool = ToolUtility.createCodeInterpreterTool();
const agent = await client.agents.createAgent("gpt-4-1106-preview", {
  name: "my-assistant",
  instructions: "You are a helpful assistant",
  tools: [codeInterpreterTool.definition],
});
console.log(`Created agent, agent ID: ${agent.id}`);

const thread = await client.agents.createThread();
console.log(`Created thread, thread ID: ${thread.id}`);

const message = await client.agents.createMessage(thread.id, {
  role: "user",
  content:
    "Could you please create a bar chart in the TRANSPORTATION sector for the operating profit from the uploaded csv file and provide the file to me?",
  attachments: [
    {
      fileId: file.id,
      tools: [codeInterpreterTool.definition],
    },
  ],
});
console.log(`Created message, message ID: ${message.id}`);
```
You can send messages to Azure agents with image inputs in the following ways:
- Using an image stored as an uploaded file
- Using a public image accessible via URL
- Using a base64-encoded image string

The following examples demonstrate each method:
```ts
// Upload the local image file
const fileStream = fs.createReadStream(imagePath);
const imageFile = await client.agents.uploadFile(fileStream, "assistants", {
  fileName: "image_file.png",
});
console.log(`Uploaded file, file ID: ${imageFile.id}`);

// Create a message with both text and image content
console.log("Creating message with image content...");
const inputMessage = "Hello, what is in the image?";
const content = [
  {
    type: "text",
    text: inputMessage,
  },
  {
    type: "image_file",
    image_file: {
      file_id: imageFile.id,
      detail: "high",
    },
  },
];
const message = await client.agents.createMessage(thread.id, {
  role: "user",
  content: content,
});
console.log(`Created message, message ID: ${message.id}`);
```
```ts
// Specify the public image URL
const imageUrl =
  "https://github.com/Azure/azure-sdk-for-js/blob/0aa88ceb18d865726d423f73b8393134e783aea6/sdk/ai/ai-projects/data/image_file.png?raw=true";

// Create content directly referencing the image URL
const inputMessage = "Hello, what is in the image?";
const content = [
  {
    type: "text",
    text: inputMessage,
  },
  {
    type: "image_url",
    image_url: {
      url: imageUrl,
      detail: "high",
    },
  },
];
const message = await client.agents.createMessage(thread.id, {
  role: "user",
  content: content,
});
console.log(`Created message, message ID: ${message.id}`);
```
```ts
function imageToBase64DataUrl(imagePath: string, mimeType: string): string {
  try {
    // Read the image file as binary
    const imageBuffer = fs.readFileSync(imagePath);
    // Convert to base64
    const base64Data = imageBuffer.toString("base64");
    // Format as a data URL
    return `data:${mimeType};base64,${base64Data}`;
  } catch (error) {
    console.error(`Error reading image file at ${imagePath}:`, error);
    throw error;
  }
}

// Convert your image file to base64 format
const imageDataUrl = imageToBase64DataUrl(filePath, "image/png");

// Create a message with both text and image content
const inputMessage = "Hello, what is in the image?";
const content = [
  {
    type: "text",
    text: inputMessage,
  },
  {
    type: "image_url",
    image_url: {
      url: imageDataUrl,
      detail: "high",
    },
  },
];
const message = await client.agents.createMessage(thread.id, {
  role: "user",
  content: content,
});
console.log(`Created message, message ID: ${message.id}`);
```
Here is an example of `createRun`, polling until the run is completed:
```ts
let run = await client.agents.createRun(thread.id, agent.id);

// Poll the run as long as the run status is queued or in progress
while (
  run.status === "queued" ||
  run.status === "in_progress" ||
  run.status === "requires_action"
) {
  // Wait for a second
  await new Promise((resolve) => setTimeout(resolve, 1000));
  run = await client.agents.getRun(thread.id, run.id);
}
```
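The polling loop can be factored into a small reusable helper. This is a sketch: the client parameter is typed loosely to keep it self-contained, and only the `getRun` method shown above is assumed.

```typescript
// Statuses that mean the run has not yet reached a terminal state.
const PENDING_STATUSES = new Set(["queued", "in_progress", "requires_action"]);

function isPending(status: string): boolean {
  return PENDING_STATUSES.has(status);
}

// Poll a run at a fixed interval until it reaches a terminal state
// (completed, failed, cancelled, or expired) and return the final run.
async function waitForRun(
  client: any, // AIProjectsClient; typed loosely for this sketch
  threadId: string,
  runId: string,
  pollIntervalMs = 1000,
): Promise<any> {
  let run = await client.agents.getRun(threadId, runId);
  while (isPending(run.status)) {
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
    run = await client.agents.getRun(threadId, runId);
  }
  return run;
}
```

Note that a run stuck in `requires_action` will never leave that state on its own; combine this helper with tool-output submission when your agent uses function tools.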
To create the thread and the run in a single call, use the `createThreadAndRun` method.
Here is an example:
```ts
const run = await client.agents.createThreadAndRun(agent.id, {
  thread: {
    messages: [
      {
        role: "user",
        content: "hello, world!",
      },
    ],
  },
});
```
With streaming, no polling is needed.
Here is an example:
```ts
const streamEventMessages = await client.agents.createRun(thread.id, agent.id).stream();
```
Event handling can be done as follows:
```ts
import {
  RunStreamEvent,
  ThreadRunOutput,
  MessageStreamEvent,
  MessageDeltaChunk,
  MessageDeltaTextContent,
  ErrorEvent,
  DoneEvent,
} from "@azure/ai-projects";

const streamEventMessages = await client.agents.createRun(thread.id, agent.id).stream();

for await (const eventMessage of streamEventMessages) {
  switch (eventMessage.event) {
    case RunStreamEvent.ThreadRunCreated:
      console.log(`ThreadRun status: ${(eventMessage.data as ThreadRunOutput).status}`);
      break;
    case MessageStreamEvent.ThreadMessageDelta:
      {
        const messageDelta = eventMessage.data as MessageDeltaChunk;
        messageDelta.delta.content.forEach((contentPart) => {
          if (contentPart.type === "text") {
            const textContent = contentPart as MessageDeltaTextContent;
            const textValue = textContent.text?.value || "No text";
            console.log(`Text delta received: ${textValue}`);
          }
        });
      }
      break;
    case RunStreamEvent.ThreadRunCompleted:
      console.log("Thread Run Completed");
      break;
    case ErrorEvent.Error:
      console.log(`An error occurred. Data: ${eventMessage.data}`);
      break;
    case DoneEvent.Done:
      console.log("Stream completed.");
      break;
  }
}
```
To retrieve messages from agents, use the following example:
```ts
import { MessageContentOutput, isOutputOfType, MessageTextContentOutput } from "@azure/ai-projects";

const messages = await client.agents.listMessages(thread.id);
while (messages.hasMore) {
  const nextMessages = await client.agents.listMessages(thread.id, {
    after: messages.lastId,
  });
  messages.data = messages.data.concat(nextMessages.data);
  messages.hasMore = nextMessages.hasMore;
  messages.lastId = nextMessages.lastId;
}

// The messages arrive in reverse chronological order;
// iterate them in reverse and output only the text contents.
for (const dataPoint of messages.data.reverse()) {
  const lastMessageContent: MessageContentOutput = dataPoint.content[dataPoint.content.length - 1];
  if (isOutputOfType<MessageTextContentOutput>(lastMessageContent, "text")) {
    console.log(
      `${dataPoint.role}: ${(lastMessageContent as MessageTextContentOutput).text.value}`,
    );
  }
}
```
Files uploaded by Agents cannot be retrieved back. If your use case needs to access the file content uploaded by the Agents, you are advised to keep an additional copy accessible by your application. However, files generated by Agents are retrievable via `getFileContent`.

Here is an example of retrieving file IDs from messages:
```ts
import {
  isOutputOfType,
  MessageTextContentOutput,
  MessageImageFileContentOutput,
} from "@azure/ai-projects";

const messages = await client.agents.listMessages(thread.id);

// Get the most recent message from the assistant
const assistantMessage = messages.data.find((msg) => msg.role === "assistant");
if (assistantMessage) {
  const textContent = assistantMessage.content.find((content) =>
    isOutputOfType<MessageTextContentOutput>(content, "text"),
  ) as MessageTextContentOutput;
  if (textContent) {
    console.log(`Last message: ${textContent.text.value}`);
  }
}

const imageFile = (messages.data[0].content[0] as MessageImageFileContentOutput).imageFile;
const imageFileName = (await client.agents.getFile(imageFile.fileId)).filename;
const fileContent = await (
  await client.agents.getFileContent(imageFile.fileId).asNodeStream()
).body;
if (fileContent) {
  const chunks: Buffer[] = [];
  for await (const chunk of fileContent) {
    chunks.push(Buffer.from(chunk));
  }
  const buffer = Buffer.concat(chunks);
  fs.writeFileSync(imageFileName, buffer);
  console.log(`Saved image file to: ${imageFileName}`);
} else {
  console.error("Failed to retrieve file content: fileContent is undefined");
}
```
To remove resources after completing tasks, use the following functions:
```ts
await client.agents.deleteVectorStore(vectorStore.id);
console.log(`Deleted vector store, vector store ID: ${vectorStore.id}`);

await client.agents.deleteFile(file.id);
console.log(`Deleted file, file ID: ${file.id}`);

await client.agents.deleteAgent(agent.id);
console.log(`Deleted agent, agent ID: ${agent.id}`);
```
You can add an Application Insights Azure resource to your Azure AI Foundry project; see the Tracing tab in your studio. If one is enabled, you can get the Application Insights connection string, configure your Agents, and observe the full execution path through Azure Monitor. Typically, you will want to start tracing before you create an Agent.
Make sure to install OpenTelemetry and the Azure SDK tracing plugin via:

```bash
npm install @opentelemetry/api \
  @opentelemetry/instrumentation \
  @opentelemetry/sdk-trace-node \
  @azure/opentelemetry-instrumentation-azure-sdk \
  @azure/monitor-opentelemetry-exporter
```
You will also need an exporter to send telemetry to your observability backend. You can print traces to the console or use a local viewer such as Aspire Dashboard.
To connect to Aspire Dashboard or another OpenTelemetry-compatible backend, install the OTLP exporter:

```bash
npm install @opentelemetry/exporter-trace-otlp-proto \
  @opentelemetry/exporter-metrics-otlp-proto
```
Here is a code sample to be included above `createAgent`:
```ts
import {
  NodeTracerProvider,
  SimpleSpanProcessor,
  ConsoleSpanExporter,
} from "@opentelemetry/sdk-trace-node";
import { trace } from "@opentelemetry/api";
import { AzureMonitorTraceExporter } from "@azure/monitor-opentelemetry-exporter";

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();

const tracer = trace.getTracer("Agents Sample", "1.0.0");

let appInsightsConnectionString =
  process.env.APP_INSIGHTS_CONNECTION_STRING ?? "<appInsightsConnectionString>";
if (appInsightsConnectionString === "<appInsightsConnectionString>") {
  appInsightsConnectionString = await client.telemetry.getConnectionString();
}
if (appInsightsConnectionString) {
  const exporter = new AzureMonitorTraceExporter({
    connectionString: appInsightsConnectionString,
  });
  provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
}

await tracer.startActiveSpan("main", async (span) => {
  client.telemetry.updateSettings({ enableContentRecording: true });
  // ...
  span.end();
});
```
Client methods that make service calls raise a `RestError` for a non-success HTTP status code response from the service. The exception's `code` holds the HTTP response status code, and its `message` contains a detailed description that may be helpful in diagnosing the issue:
```ts
import { RestError } from "@azure/core-rest-pipeline";

try {
  const result = await client.connections.listConnections();
} catch (e) {
  if (e instanceof RestError) {
    console.log(`Status code: ${e.code}`);
    console.log(e.message);
  } else {
    console.error(e);
  }
}
```
For example, when you provide wrong credentials:
```text
Status code: 401 (Unauthorized)
Operation returned an invalid status 'Unauthorized'
```
To report issues with the client library, or to request additional features, please open a GitHub issue.

Have a look at the package samples folder, which contains fully runnable code.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.