Configure nodes¶
This reference covers the configuration options for every node type available in the workflow builder. Nodes are listed in the order they appear in the Nodes panel.
For a conceptual overview of how nodes work, how to connect them, and how to apply guardrails, see Working with nodes.
Note
To open a node's configuration panel, click the node on the canvas to reveal its toolbar, then click the Settings icon. After making changes, click Save changes to apply them.
LLM nodes¶
LLM nodes call hosted language models to generate, transform, or reason over text. They're typically the final step in a workflow but can also be used at intermediate steps.
Shared configuration¶
All LLM nodes share the following settings.
| Field | Required | Description |
|---|---|---|
| Node name | | A human-readable label for this node instance (for example, Clinical Research Summarizer). Use a name that reflects the node's role, not just its type. |
| Model name / Deployment name | ✓ | The model the node should use. How this is specified varies by node — see the node-specific sections below. |
| Temperature | | Controls how deterministic or varied the model's responses are. Lower values (0.0–0.3) produce more predictable outputs; higher values (0.7+) allow more variation. Leave at the default if you're unsure. |
| Max tokens | ✓ | The maximum number of tokens the model can generate in a single response. |
| API key | ✓ | Authentication credentials for the model service. API keys can only be updated by the person who entered them. |
| Context prompt | | A fixed instruction prepended to every request sent to the model. Use this to define the node's role, set boundaries, or specify output format. |
Example context prompt
```
You are a helpful assistant. Use only the provided context to answer.
If the answer is not in the context, say "I don't know."
```
Tips for effective context prompts
Context prompts strongly influence how nodes behave. They’re especially important for LLM nodes, where small changes can lead to very different outputs. In general:
- Be explicit about the node's role and tone.
- Set clear rules ("Use only the provided context" or "Avoid speculation").
- Add fallback behavior ("Say 'I don't know' if unsure").
- Keep prompts concise—long prompts can cause truncation when combined with lengthy user inputs.
Ports
- Prompt (input): The main content the model should process. Typically connected from the Start node, a retriever node, or an Agent node.
- Context (optional input): Dynamic context passed alongside the fixed context prompt.
- Output: The model's generated response, which can be connected to downstream nodes or to the End node.
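Conceptually, each request an LLM node sends combines the fixed context prompt, any dynamic content on the Context port, and the runtime Prompt. The sketch below is illustrative only; the builder's actual internal request format is not documented here, and the field names are assumptions.

```python
def build_request(context_prompt, prompt, dynamic_context=None,
                  temperature=0.2, max_tokens=512):
    """Illustrative sketch of how an LLM node's inputs combine per
    request; the builder's real internal format is not documented here."""
    system = context_prompt
    if dynamic_context:
        # Content arriving on the optional Context port is passed
        # alongside the fixed context prompt.
        system += "\n\nContext:\n" + dynamic_context
    return {
        "system": system,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

req = build_request(
    context_prompt="Use only the provided context to answer.",
    prompt="What is the recommended dosage?",
    dynamic_context="Dosage: 10 mg daily.",
)
```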
Node-specific configurations¶
The following sections cover settings specific to each model node.
Use this node to call Claude models hosted by Anthropic.
- API URL: The endpoint for the Anthropic API. This is typically `https://api.anthropic.com` unless your organization routes requests through a proxy.
- Model name: The Anthropic model to use (for example, `claude-3-7-sonnet-latest`). Select from the available options in the dropdown.
All other settings follow the shared configuration above.
Use this node to call OpenAI models deployed through Azure OpenAI.
- Azure endpoint URL: The endpoint for your Azure OpenAI resource (for example, `https://<resource-name>.openai.azure.com/`).
- API version: The Azure OpenAI API version to use.
- Deployment name: The name of your Azure OpenAI deployment. This is the deployment name defined in your Azure OpenAI resource — it may or may not match the underlying model name.
Note
Azure allows deployments to have arbitrary names. For this reason, the deployment field is free-form rather than a predefined list.
All other settings follow the shared configuration above.
Use this node to call OpenAI-hosted models such as GPT-4 or GPT-3.5. This is one of the most commonly used model nodes for RAG-style workflows.
- Model name: The OpenAI model to use (for example, `gpt-4` or `gpt-3.5-turbo`). Select from the available options in the dropdown.
All other settings follow the shared configuration above.
Use this node to call private or self-hosted models via vLLM. This is useful when running models inside controlled infrastructure or when you need high-throughput inference without relying on external APIs.
- API URL: The full endpoint URL for your vLLM instance. This is typically provided or approved by your organization.
- Model name: The identifier for the model served by your vLLM instance.
All other settings follow the shared configuration above.
Data connector nodes¶
Data connector nodes retrieve structured or unstructured data from external systems and pass it to downstream nodes—most commonly LLMs or agents—to ground responses in trusted data sources. Data connectors do not generate text themselves.
Azure AI Search nodes¶
The Azure AI Search Retriever, HyDE Retriever, Multi-Query Retriever, and Smart Retriever all connect to Azure AI Search as their data source. HyDE, Multi-Query, and Smart Retriever are retrieval strategy nodes: they extend the Azure AI Search Retriever with different approaches for improving retrieval quality, but share the same underlying connection requirements.
Choosing a retriever
If you're unsure which retriever to use, start with the Azure AI Search Retriever. Switch to one of the strategy nodes when you need to improve retrieval quality for vague queries, low-recall results, or complex question types.
Queries an Azure AI Search index and returns relevant results for use in downstream workflow steps. Supports both keyword-based and embedding-based retrieval.
Typical use: Retrieve relevant documents or records from an indexed data source to provide grounding context for an LLM or Agent node.
Configuration
| Field | Required | Description |
|---|---|---|
| Node name | | A descriptive label for this node instance (for example, Search Customer Records). |
| API key | ✓ | The API key used to authenticate to the Azure AI Search service. |
| API version | | The Azure AI Search API version to use. |
| Index name | ✓ | The name of the Azure AI Search index to query. |
| Search service name | ✓ | The name of the Azure AI Search service that hosts the index. |
| Top K results | | Maximum number of results to return. Smaller values (e.g., 5) return more focused results; larger values (e.g., 20) improve recall but may include more noise. |
| Record filter | | An OData filter expression to narrow results based on indexed fields or metadata. |
| Content field name | ✓ | The field in the index that contains the main text content to return. |
| LLM API key | | API key for the LLM service used for embedding-based retrieval. Required if using vector search. |
| LLM endpoint | | Endpoint for the LLM service used for embeddings. |
| LLM API version | | API version for the LLM service used for embeddings. |
| Embedding model | | The embedding model used to generate or compare vector embeddings. |
| Content vector field | | The name of the vector field in the index used for vector-based retrieval. |
Ports
- Query (input): The search query, supplied at runtime from an upstream node.
- Results (output): Retrieved content, passed to downstream nodes such as an LLM or Agent.
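The Record filter field takes a standard OData filter expression. As a sketch (the field names below are hypothetical and must exist as filterable fields in your own index):

```python
# Hypothetical Record filter value in OData syntax; 'category' and
# 'year' are invented field names for illustration.
record_filter = "category eq 'cardiology' and year ge 2020"
```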
Improves retrieval quality by first generating a hypothetical answer to the query using an LLM, then using that generated answer—rather than the original query—to search the index. This approach works well when user queries are phrased differently from the way content is indexed.
Typical use: Use instead of the Azure AI Search Retriever when direct keyword or semantic matching produces poor results for vague, indirect, or differently worded queries.
Note
Requires an LLM connection to generate the hypothetical answer used for retrieval.
Configuration
Shares the following fields with the Azure AI Search Retriever: Node name, Search API key, Search API version, Index name, Search service name, Top K results, Content field name, LLM API key, LLM endpoint, LLM API version, Embedding model, Content vector field.
The following fields are specific to this node:
| Field | Required | Description |
|---|---|---|
| LLM model | ✓ | The model used to generate the hypothetical answer. |
| HyDE generation prompt | ✓ | The prompt used to instruct the LLM to generate a hypothetical answer from the input query. This generated answer is then used as the basis for retrieval. |
Ports
- Query (input): The original user query, supplied at runtime from an upstream node.
- Results (output): Retrieved content, passed to downstream nodes such as an LLM or Agent.
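The HyDE flow above can be sketched with stubs standing in for the configured LLM and the search index. This is a minimal illustration of the strategy, not the node's implementation.

```python
def hyde_retrieve(query, generate, search, top_k=5):
    """HyDE sketch: retrieval is driven by a generated hypothetical
    answer rather than the original query."""
    hypothetical = generate(
        f"Write a short passage that would answer: {query}"
    )
    return search(hypothetical)[:top_k]

# Stubs standing in for the configured LLM and the search index.
def fake_llm(prompt):
    return "Aspirin reduces fever and relieves mild pain."

def fake_index(text):
    docs = ["Aspirin reduces fever.", "Ibuprofen dosage guidelines."]
    words = text.lower().split()
    # Naive keyword match in place of a real index query.
    return [d for d in docs if any(w in d.lower() for w in words)]

results = hyde_retrieve("what does aspirin do?", fake_llm, fake_index)
```

Note that the stub index matches the hypothetical answer's wording, not the original question's, which is the point of the strategy.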
Improves retrieval coverage by generating multiple variations of the original query using an LLM, retrieving results for each variation, and returning a consolidated, ranked set of results. This approach captures relevant content that a single query phrasing might miss.
Typical use: Use instead of the Azure AI Search Retriever when a single query formulation may not fully capture the ways relevant information is expressed in the indexed content.
Note
Requires an LLM connection to generate the alternative query variants used for retrieval.
Configuration
Shares the following fields with the Azure AI Search Retriever: Node name, Search API key, Search API version, Index name, Search service name, Content field name, LLM API key, LLM endpoint, LLM API version, Embedding model, Content vector field.
The following fields are specific to this node:
| Field | Required | Description |
|---|---|---|
| LLM model | ✓ | The model used to generate query variations. |
| Number of query variants | | The number of alternative query formulations to generate from the original query. |
| Top K results | | The number of results to retrieve for each generated query variation. |
| Final Top K results | | The maximum number of results returned after combining and ranking results across all query variations. |
Ports
- Query (input): The original user query, supplied at runtime from an upstream node.
- Results (output): The combined and ranked results, passed to downstream nodes such as an LLM or Agent.
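The generate-retrieve-consolidate loop can be sketched as follows. The ranking here (by how many variants surfaced each document) is an assumption for illustration; the node's actual ranking method is not documented in this section.

```python
def multi_query_retrieve(query, generate_variants, search,
                         top_k=5, final_top_k=5):
    """Multi-Query sketch: retrieve per variant, then merge, dedupe,
    and rank. Ranking by variant-hit count is an assumption."""
    variants = [query] + generate_variants(query)
    hits = {}
    for v in variants:
        for doc in search(v)[:top_k]:          # Top K per variant
            hits[doc] = hits.get(doc, 0) + 1
    ranked = sorted(hits, key=hits.get, reverse=True)
    return ranked[:final_top_k]                # Final Top K cap

# Stubs standing in for the configured LLM and the search index.
def fake_variants(query):
    return [f"{query} (variant {i})" for i in range(3)]

def fake_search(query):
    return ["doc B", "doc A"] if "variant" in query else ["doc B", "doc C"]

results = multi_query_retrieve("claims policy", fake_variants,
                               fake_search, final_top_k=2)
```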
Uses an LLM to help interpret the query and adaptively select retrieval strategies, returning a broader and better-matched set of results than a single fixed approach. If no LLM is configured, the node falls back to simple keyword retrieval.
Typical use: Use when you want a more adaptive retriever that can handle a wider range of query styles and retrieval scenarios without manually choosing between retrieval strategies.
Note
An LLM connection is optional. Without one, the node performs simple keyword retrieval only.
Configuration
Shares the following fields with the Azure AI Search Retriever: Node name, Search API key, Search API version, Index name, Search service name, Content field name, LLM API key, LLM endpoint, LLM API version, Embedding model, Content vector field.
The following fields are specific to this node:
| Field | Required | Description |
|---|---|---|
| LLM model | | The model used by the retriever for query interpretation and re-ranking. |
| Enable LLM re-ranking | | When enabled, uses the LLM to re-rank retrieved results before returning them downstream. |
| Min Top K results | | The minimum number of results the retriever should return. |
| Max Top K results | | The maximum number of results the retriever should return. |
Ports
- Query (input): The search query, supplied at runtime from an upstream node.
- Results (output): Retrieved content, passed to downstream nodes such as an LLM or Agent.
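The fallback behavior can be sketched as below. The query-rewriting step is a simplified stand-in; the real node's strategy selection is more involved.

```python
def smart_retrieve(query, search_keyword, llm=None):
    """Smart Retriever sketch: with no LLM connected, fall back to
    plain keyword retrieval; with one, interpret the query first."""
    if llm is None:
        return search_keyword(query)   # keyword-only fallback
    rewritten = llm(f"Rewrite as a concise search query: {query}")
    return search_keyword(rewritten)

# Stub index for illustration.
def fake_search(query):
    return [f"hit for: {query}"]

no_llm = smart_retrieve("how do I file a claim?", fake_search)
with_llm = smart_retrieve("how do I file a claim?", fake_search,
                          llm=lambda p: "file claim process")
```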
Neo4j-Cypher¶
Runs Cypher queries against a Neo4j graph database and returns the results to your workflow. Useful when your workflow needs to analyze relationships and connections between entities that are difficult to model in relational databases.
Typical use: Query graph-structured data—such as relationships between users, accounts, or transactions—and provide the results to an Agent node for reasoning or downstream processing.
Note
You must connect an LLM node to the Model input port for this connector to run.
Configuration
| Field | Required | Description |
|---|---|---|
| Node name | | A descriptive label for this node instance (for example, Customer Relationship Graph). |
| URL | ✓ | The URL of the Neo4j server (for example, http://your-neo4j-host:7474). |
| Username | ✓ | The Neo4j username used for authentication. |
| Password | ✓ | The password for the specified Neo4j user. |
Ports
- Query (input): The query to run, supplied at runtime from an upstream node.
- Model (input): An LLM node must be connected here. Required for the connector to run.
- Results (output): Query results, passed to downstream nodes.
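For reference, the Query port accepts standard Cypher. A hypothetical example of the kind of relationship query an upstream node might supply (the labels, relationship types, and properties below are invented for illustration):

```python
# Hypothetical Cypher the Query port might receive at runtime; use
# the labels and relationship types defined in your own graph.
cypher = """
MATCH (u:User {id: $user_id})-[:OWNS]->(a:Account)-[:SENT]->(t:Transaction)
WHERE t.amount > $threshold
RETURN a.number AS account, t.amount AS amount
ORDER BY t.amount DESC
"""
params = {"user_id": "u-123", "threshold": 10000}
```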
PostgreSQL Connector¶
Queries a PostgreSQL database and returns structured results as JSON.
Typical use: Fetch transactional or relational data—such as claims, orders, or user profiles—for use by an Agent node.
Note
PostgreSQL is currently available in tool mode only. Its inputs and outputs are accessible exclusively through an Agent node. See Working with agents for details.
Configuration
| Field | Required | Description |
|---|---|---|
| Node name | | A descriptive label for this node instance (for example, Claims Database Lookup). |
| Host | ✓ | The hostname or IP address of the PostgreSQL server. |
| Port | | The PostgreSQL port to connect to. Default is 5432. |
| Database | ✓ | The name of the database to query. |
| User | ✓ | The database user the connector should authenticate as. |
| Password | ✓ | The password for the specified database user. |
| SSL mode | | The SSL requirement for the connection (for example, require). |
| Connect timeout (sec) | | Maximum time to wait when establishing a connection before failing. |
| Max pool size | | The maximum number of connections maintained in the connection pool. |
| Default row limit | | The default maximum number of rows returned per query, unless overridden by the query itself. |
| Tool description | ✓ | A short description of what this connector does in your workflow (for example, "Look up claims and policy details from the claims database"). Used by the Agent when deciding whether to invoke this tool. |
Agent¶
The Agent node performs goal-directed reasoning. It uses a language model to interpret the current input, evaluate context, and decide how to proceed, including dynamically invoking other nodes registered as tools.
Unlike standard nodes, which perform a single fixed operation, an Agent can make decisions at runtime based on its goal, prompt, and available tools.
Agent nodes support advanced behaviors including tool mode, dynamic tool invocation, and coordination with MCP-based tools. These are covered in detail in Working with agents.
Configuration
| Field | Required | Description |
|---|---|---|
| Node name | | A descriptive label for this node in your workflow. |
| Agent goal | ✓ | A concise description of what the agent is trying to accomplish. This guides the model's reasoning and decision-making during execution. |
The Agent's underlying model and tool behavior are configured through connected model and tool nodes, not directly in this panel.
Ports
- Prompt (input): The input text the Agent should reason over. Typically connected from the Start node, a retriever, or another processing step.
- Output: The Agent's final response, passed to downstream nodes or to the End node.
- Model (input): The LLM node the Agent uses for reasoning.
- Tools (input): One or more nodes configured in tool mode that the Agent can invoke during execution.
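The decide-then-invoke behavior described above can be caricatured as a small loop. This is a minimal sketch only; the real Agent node's reasoning loop is considerably more sophisticated, and the stub "model" here just names a registered tool or says when to stop.

```python
def run_agent(goal, prompt, model, tools, max_steps=5):
    """Minimal sketch of agent-style tool use: the model picks a
    tool (or 'FINISH'), the agent invokes it and feeds the result
    back into the transcript for the next decision."""
    transcript = [f"Goal: {goal}", f"Input: {prompt}"]
    for _ in range(max_steps):
        decision = model("\n".join(transcript))
        if decision == "FINISH":
            break
        result = tools[decision](prompt)   # invoke the chosen tool
        transcript.append(f"{decision} -> {result}")
    return transcript[-1]

# Stubs: the model calls one tool, then finishes.
decisions = iter(["lookup", "FINISH"])
answer = run_agent(
    goal="Answer questions about claims",
    prompt="claim 42",
    model=lambda _: next(decisions),
    tools={"lookup": lambda q: f"records for {q}"},
)
```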
Utility nodes¶
Utility nodes perform supporting operations within a workflow — calling external tools, or protecting and restoring sensitive data.
MCP Tool¶
Configures an external tool that can be invoked by an Agent node using the Model Context Protocol (MCP). This is a configuration-only node: it has no input or output ports, cannot be connected to other nodes directly, and performs no computation on its own. It must be paired with an Agent node that has tool mode enabled.
Typical use: Allow an Agent to dynamically call external APIs or services during execution, where the Agent decides when and how to invoke the tool based on its reasoning.
Configuration
| Field | Required | Description |
|---|---|---|
| Node name | | A custom label for this tool instance (for example, Clinical Trial Lookup Tool). |
| OpenAPI spec URL | | A URL pointing to an OpenAPI specification describing the tool's interface. |
| OpenAPI spec | | Upload an OpenAPI JSON file instead of providing a URL. |
| OpenAPI base URL | ✓ | The base URL of the API server that implements the tool. |
| Startup timeout (ms) | | How long OPAQUE waits for the tool service to become available. |
| Call timeout (ms) | | Maximum time allowed for a single tool invocation. |
| Tool description | ✓ | A natural-language description of what the tool does. Used by the Agent when deciding whether to invoke it. |
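For orientation, a minimal OpenAPI 3.0 document has roughly the shape below. The title, path, and server URL are placeholders, not values from this product.

```json
{
  "openapi": "3.0.0",
  "info": { "title": "Clinical Trial Lookup", "version": "1.0.0" },
  "servers": [{ "url": "https://api.example.com" }],
  "paths": {
    "/trials/{id}": {
      "get": {
        "operationId": "getTrial",
        "parameters": [
          { "name": "id", "in": "path", "required": true,
            "schema": { "type": "string" } }
        ],
        "responses": { "200": { "description": "Trial record" } }
      }
    }
  }
}
```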
OPAQUE Redact¶
Detects and replaces sensitive text using configurable regular expressions before data is passed to an LLM or external service. Outputs both the redacted text and an encrypted map of the redactions, which can be used later by the OPAQUE Unredact node to restore the original values.
Place this node before any LLM or external service node that should not receive sensitive data in plain text.
Known limitation
When chaining a RAG-based Agent to the OPAQUE Redact node, the workflow may fail due to a mismatch in expected input field names. This issue is known and will be addressed in a future release.
Configuration
| Field | Required | Description |
|---|---|---|
| Node name | | A custom label for this node instance (for example, PII Redactor). |
| Base URL | | The URL of the redaction service. Leave blank to use the default OPAQUE endpoint. |
| Regexes | ✓ | A JSON array defining which patterns to redact and their categories. See example below. |
Regex example
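A sketch of the expected shape: a JSON array pairing each pattern with a category. The exact key names the node accepts are assumptions here, so check them against your deployment.

```json
[
  { "regex": "\\b\\d{3}-\\d{2}-\\d{4}\\b", "category": "SSN" },
  { "regex": "[\\w.+-]+@[\\w-]+\\.[\\w.]+", "category": "EMAIL" }
]
```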
Ports
- Text to redact (input): The plain text to be scanned for matches.
- Redacted (output): The text with sensitive values replaced.
- Redactions (output): An encrypted map linking each redacted span to its original value. Pass this to a downstream OPAQUE Unredact node.
OPAQUE Unredact¶
Restores sensitive text that was previously masked by an upstream OPAQUE Redact node. Accepts the redacted text and the encrypted redaction map as inputs, then reconstructs the original text securely within the enclave.
Place this node after any LLM or processing node that received redacted input, when the final output needs to include the original sensitive values.
Configuration
| Field | Required | Description |
|---|---|---|
| Node name | | A custom label for this node instance (for example, Restore Sensitive Fields). |
| Base URL | | The URL of the unredaction service. Leave blank to use the default OPAQUE endpoint. |
Ports
- Redacted text (input): The redacted version of the original content.
- Redactions (input): The encrypted map produced by the upstream OPAQUE Redact node.
- Text (output): The final output with original values restored.
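Together the two nodes form a round trip: sensitive spans are replaced with placeholders before the LLM sees them, then restored afterward. A local sketch of that round trip, without the encryption the OPAQUE nodes apply to the redaction map:

```python
import re

def redact(text, patterns):
    """Replace each regex match with a category placeholder and keep
    a map from placeholder to original value. The real node encrypts
    this map; it is kept in plain form here for illustration."""
    redactions = {}
    for category, regex in patterns.items():
        def _sub(match, category=category):
            key = f"[{category}_{len(redactions)}]"
            redactions[key] = match.group(0)
            return key
        text = re.sub(regex, _sub, text)
    return text, redactions

def unredact(text, redactions):
    """Restore each placeholder to its original value."""
    for key, original in redactions.items():
        text = text.replace(key, original)
    return text

patterns = {"SSN": r"\b\d{3}-\d{2}-\d{4}\b"}
masked, rmap = redact("SSN is 123-45-6789.", patterns)
restored = unredact(masked, rmap)
```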
For guidance on how to combine and connect nodes in a workflow, see Working with nodes.