Working with nodes

Nodes are the building blocks of every agentic workflow. Each node performs a specific function—such as retrieving data, invoking a language model, coordinating tools, or applying utilities—and passes its output to the next step in the workflow.

This guide explains how to work with nodes in detail, including how to:

  • Add and configure nodes on the canvas.
  • Define prompts, inputs, and context.
  • Apply guardrails to enforce safety and compliance.
  • Connect inputs and outputs between nodes.

Whether you’re building a simple single-model workflow or a multi-step retrieval-augmented or agent-driven pipeline, every workflow in OPAQUE is composed of nodes like these.

If you’re new to building workflows, start with Get started, which walks through the full workflow lifecycle from draft to launch. This page is intended as a deeper, node-focused reference you can return to as you design, extend, and refine workflows.

What is a node?

A node represents a single execution step within an agentic workflow. Each node takes structured input, performs a well-defined operation, and produces structured output that can be passed to the next node in the workflow.

Nodes are intentionally atomic: they do one thing and do it predictably. This makes workflows easier to build, inspect, review, and reason about during approval.

In OPAQUE, nodes can perform different kinds of work, such as:

  • Calling a language model to generate or transform text
  • Retrieving data from an external system or index
  • Coordinating tool calls or multi-step reasoning (Agent nodes)
  • Applying utilities such as redaction or unredaction

Nodes do not execute in isolation. They always run as part of a workflow, in the order defined by the connections between them, starting at the Start node and ending at the End node.

Node categories and types

Nodes are grouped by category based on the kind of capability they provide. Each category serves a distinct role in how workflows process input, reason over data, and produce output.

You’ll find these categories in the Nodes panel on the left side of the workflow builder.

The Nodes panel shows the default agent types, plus any preconfigured data connectors or LLMs shared with your workspace.

  • LLM nodes invoke large language models to generate, transform, or reason over text. These nodes are typically responsible for producing the final response returned by a workflow, but they can also be used at intermediate steps. LLM nodes are configured with prompts, parameters, and (optionally) guardrails that control how the model behaves during execution. OPAQUE currently provides the following LLM nodes:

    • OpenAI Service: Call OpenAI-hosted models such as GPT-4 or GPT-3.5.
    • Anthropic Service: Call Claude models hosted by Anthropic.
    • vLLM Service: Run private or self-hosted models using vLLM for high-throughput inference.
  • Data connector nodes retrieve structured or unstructured data from approved external systems and make that data available to downstream nodes—most commonly LLMs or agents. They are typically used to ground model responses in trusted data sources.

    Data connectors do not generate text themselves. Instead, they fetch relevant data based on an input query and pass the results forward in a structured form.

    OPAQUE currently supports the following data connector nodes:

    • Azure AI Retriever: Retrieve relevant documents from content indexed in Azure AI Search.
    • PostgreSQL Connector: Query structured data from a PostgreSQL database using SQL.
  • The Agent node performs goal-directed reasoning. Instead of executing a single fixed operation, this node can decide which steps to take at runtime, including invoking other nodes configured in tool mode. The Agent node is central to more advanced agentic workflows and is covered in detail in Working with agents.

  • Utility nodes perform supporting operations that modify, validate, or route data within a workflow. Utilities are often used alongside models and agents to enforce safety, compliance, or integration requirements. Available utility nodes include:

    • MCP API Tool: Call external tools or services that implement the Model Context Protocol (MCP).
    • OPAQUE Redact: Remove or mask sensitive values before data leaves a trusted environment.
    • OPAQUE Unredact: Restore redacted values when data returns to a trusted environment.

Note

If your admin has shared preconfigured integrations with your workspace, you’ll also see additional data connectors and LLMs listed. Preconfigured integrations are marked with an icon. For more details, see Using integrations.

You don’t need to use every category in a single workflow. Some workflows consist of a single model node, while others combine retrievers, agents, tools, and utilities into multi-step pipelines.

Connecting nodes

Connections define the execution order of your workflow and how data flows between steps. During execution, data moves from Start to End by following the connections between nodes.

Each node exposes:

  • Input ports (on the left), which receive data
  • Output ports (on the right), which emit data

To connect nodes, drag from an output port of an upstream node and drop it onto an input port of a downstream node. The resulting connection defines both execution order and data dependency.

Once connected, the workflow executes sequentially along these paths, passing outputs from one node into the next.

Note

You don’t need to connect every node to every other node—only define the paths required for the data and execution flow you want.

Combining nodes

You can combine nodes of the same type or mix different types to create workflows that range from simple pipelines to more flexible, agentic systems.

The following table shows common patterns and when to use them.

If you want to... Use this approach
Pull documents or results from a search index. A single data connector
Ask questions, generate summaries, or reason over text. A single LLM
Retrieve context and reason over it. Data connector + LLM in the same workflow
Use different data sources for different tasks. Multiple data connectors
Chain multiple reasoning steps or compare outputs. Multiple LLMs
Protect or restore sensitive data before and after model processing. OPAQUE Redact → processing node → OPAQUE Unredact

Examples

  • Retrieve HR and Finance documents from two indexes, then summarize them with an LLM
  • Use OpenAI for generation and vLLM for classification in the same workflow
  • Apply different retrievers with different filters (for example, by region or department)

These patterns help you think in terms of capabilities rather than individual nodes—what data you need, what processing is required, and where results should flow next.

Standard mode versus tool mode

Some nodes in OPAQUE can operate in either standard mode or tool mode. The mode you choose determines how and when the node executes within a workflow.

In standard mode, a node is part of the primary workflow graph. It executes as data flows from Start to End, following the connections you define on the canvas.

In this mode:

  • Nodes are connected directly using input and output ports.
  • Execution follows a deterministic path from Start to End.
  • Inputs and outputs are explicitly wired between nodes.
  • The node runs exactly once per workflow invocation, in sequence.

Standard mode is typically used for linear or branching workflows such as retrieval-augmented generation (RAG), data preprocessing, or fixed multi-step pipelines.

In tool mode, a node is not executed as part of the main Start-to-End sequence. Instead, it is registered as a tool that can be invoked dynamically by an Agent node during execution.

When a node is enabled in tool mode:

  • It does not participate in the primary execution path.
  • Its input and output ports disappear, and a new handle appears at the top center of the node, allowing it to be connected to the Agent node.
  • It does not need to be connected between Start and End.
  • It is called only if and when an Agent decides to use it.

Tool mode enables agentic behavior, where the Agent reasons about which tools to call and when, rather than following a fixed execution path.

Which nodes support tool mode

The following table provides an overview of nodes and the modes they support.

Node Standard mode Tool mode
Anthropic
OpenAI
vLLM
Azure AI Retriever
PostgreSQL
Agent
MCP API Tool
OPAQUE Redact
OPAQUE Unredact

When enabled in tool mode, nodes cannot be placed directly in the Start-to-End flow and are accessible exclusively through an Agent during execution.

Configure node behavior

After adding a node to the canvas, you configure how it behaves during execution. Configuration determines what the node does, what inputs it expects, and what constraints apply when it runs.

To configure a node, click it on the canvas to open its toolbar. Every node provides the same three controls:

  • Settings (⚙): Define the node’s core behavior.
  • Guardrails: Apply policy or safety constraints.
  • Delete: Remove the node from the canvas.

Changes made in Settings or Guardrails are not applied until you click Save changes.

Anthropic, OpenAI, vLLM

All three LLM nodes behave similarly in a workflow and share the same conceptual role: they take text input (often enriched with retrieved context) and produce a model-generated output.

Typical use

  • Generate answers, summaries, or classifications from input text
  • Reason over retrieved or structured context
  • Transform or reformat content between workflow steps
  • Act as a tool callable by an Agent in tool mode

Configuration fields

LLM nodes largely share the same settings; configure the following fields:

  • Node name: A human-readable label for this node in the workflow (for example, Clinical Research Summarizer).
  • Model name: Select the specific model to use, such as:
    • gpt-4 or gpt-3.5 (OpenAI)
    • claude-3-7-sonnet-latest (Anthropic)
    • llama-2-13b (vLLM)
  • Temperature (optional): Controls how deterministic or creative the response is.
    • Lower values (0.0–0.3) produce more predictable outputs.
    • Higher values (0.7+) allow more variation and creativity.
  • API URL (Anthropic and vLLM only): The endpoint URL for the LLM service. This is typically provided or approved by your organization.
  • API key: Authentication credentials for the service (for example, an OpenAI or Anthropic API key).
  • Context prompt: A fixed instruction prepended to every request sent to the model.

    Example:

    You are a helpful assistant. Summarize the provided input into three clear bullet points.
    

Tips on writing effective prompts

Context prompts strongly influence how nodes behave. They’re especially important for LLM nodes, where small changes can lead to very different outputs. In general:

  • Be explicit about the node’s role and tone.
  • Set clear rules ("Use only the provided context," "Avoid speculation").
  • Use formatting like numbered instructions or caps for emphasis.
  • Add fallback behavior ("Say 'I don’t know' if unsure").
  • Keep prompts short enough to avoid truncation, especially with long user inputs.
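
Putting these tips together, a context prompt for a summarization node might look like the following (illustrative only; adjust the role, rules, and fallback to your use case):

    You are an HR policy assistant. Summarize the provided input for employees.
    1. Use ONLY the provided context. Avoid speculation.
    2. Respond with exactly three numbered bullet points.
    3. If the context does not contain the answer, say "I don’t know."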

LLM nodes expose the following ports:

  • Prompt (input): The main content the model should process. This often comes from:
    • The Start node
    • A retriever node
    • An Agent node invoking the model as a tool
  • Context (optional input): Dynamic context passed alongside the fixed context prompt.
  • Output: The generated text, which can be connected to downstream nodes or to the End node.

Anthropic Service: Use this node to call Claude models hosted by Anthropic. Configuration fields mirror those of other LLM nodes, with model selection limited to Anthropic-supported models.

The configuration panel for the Anthropic Service node.

OpenAI Service: Use this node to call OpenAI-hosted models such as GPT-4 or GPT-3.5. This is one of the most commonly used model nodes for RAG-style workflows.

The configuration panel for the OpenAI Service node.

vLLM Service: Use this node to call private or self-hosted models via vLLM. This is useful when running models inside controlled infrastructure or when you want high-throughput inference without relying on external SaaS APIs.

The configuration panel for the vLLM Service node.

Azure AI Retriever

The Azure AI Retriever node queries an Azure AI Search (formerly Cognitive Search) index and returns ranked documents for use in downstream steps. It is most commonly used in retrieval-augmented generation (RAG) workflows.

Typical use

  • Accept a natural-language query
  • Retrieve the most relevant documents or passages
  • Pass retrieved context to an LLM for response generation

This node is often placed directly upstream of an LLM node.

Configuration

Configure the following fields:

  • Node name: A descriptive label for this node (e.g., Search Customer Records).
  • API key: The authentication key for your Azure Search index.
  • API version (optional): Defaults to the latest supported version; override only if you need compatibility with an earlier release.
  • Index name: The name of the Azure AI Search index to query (for example, hr-rag-index).
  • Search service name: The name of the Azure AI Search service that hosts the index.
  • TopK results (optional): The number of top-ranked results the retriever should return. A smaller value (e.g., 5) returns fewer, more focused results; a larger value (e.g., 20) may improve recall but can include more noise.
  • Record Filter (optional): An OData filter expression used to narrow results based on indexed metadata.

The Query input is not set in this panel. Instead, it is provided through the node’s input port. Connect an upstream output (such as Start.output) to the Query port to supply the search text at runtime. The retriever returns ranked results through its output port, which can then be connected to downstream nodes such as LLMs or Agents.
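
For example, the Record Filter field accepts standard OData filter syntax against filterable fields in your index. A minimal sketch, assuming the index defines department and region fields:

    department eq 'HR' and region eq 'EMEA'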

The configuration panel for the Azure AI Retriever node.

PostgreSQL

The PostgreSQL Connector node queries a PostgreSQL database and returns structured query results as JSON.

Typical use

  • Query transactional or relational data (for example, claims, orders, user profiles)
  • Fetch structured records for downstream steps (such as an LLM or Agent node)
  • Combine database results with retrieved documents or model reasoning

This connector is useful when your workflow needs access to authoritative system-of-record data.
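
The exact JSON shape depends on your query and connector settings, but conceptually the connector returns rows as structured data. A rough, illustrative sketch (table and column names here are hypothetical):

    [
      { "claim_id": 1042, "status": "approved", "amount": 1250.00 },
      { "claim_id": 1043, "status": "pending", "amount": 890.50 }
    ]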

Note

PostgreSQL is currently available in tool mode only, meaning its inputs and outputs are accessible exclusively to the Agent node.

Configuration

Configure the following fields:

  • Node name: A descriptive label for this node (for example, Claims Database Lookup).
  • Host (required): The hostname or IP address of the PostgreSQL server your workflow should connect to.
  • Port: The PostgreSQL port to connect to. The default is 5432.
  • Database (required): The name of the database to query.
  • User (required): The database user the connector should authenticate as.
  • Password (required): The password for the specified database user.
  • SSL mode: The SSL requirement used when connecting to the database (for example, require).
  • Connect timeout (sec): Maximum time (in seconds) to wait when establishing a database connection before failing.
  • Max pool size: The maximum number of connections the connector maintains in its connection pool.
  • Default row limit: The default maximum number of rows returned per query (unless overridden by the query itself).
  • Tool description (required): A short description of what this connector is used for in your workflow (for example, “Look up claims and policy details from the claims database”).

The configuration panel for the PostgreSQL Connector node.

Agent

The Agent node represents a reasoning step in a workflow. It uses a language model to interpret the current input, evaluate context, and decide how to proceed. When tool mode is enabled, an Agent can dynamically invoke other nodes that are registered as tools.

Unlike standard nodes, which perform a single, fixed operation, an Agent can make decisions at runtime based on its goal, prompt, and available tools.

Configuration fields

Configure the following fields:

  • Node name: A descriptive label for this node in your workflow.
  • Agent goal (required): A concise description of what the agent is trying to accomplish. This goal guides the model’s reasoning and decision-making during execution.
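
    For example, an Agent goal might read (illustrative; adapt to your workflow):

    Answer employee questions about insurance claims. When a claim ID is mentioned, look up the record in the claims database and summarize the result in plain language.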

The Agent’s underlying model selection and tool behavior are configured through connected model and tool nodes, not directly in this panel.

Agent nodes support advanced behaviors such as tool mode, dynamic tool invocation, and coordination with MCP-based tools. These behaviors are covered in detail in Working with agents.

The configuration panel for the Agent node.

Ports

  • Prompt: Input text the Agent should reason over. This is typically connected from the Start node, a retriever, or another processing step.
  • Output: The Agent’s final response, which can be passed to downstream nodes or to End.

MCP API Tool

The MCP API Tool node configures an external tool that can be invoked by an Agent node running in tool mode using the Model Context Protocol (MCP).

This node defines how OPAQUE connects to an MCP server (JSON-RPC 2.0 over standard input/output), but it does not execute as part of the workflow graph itself.
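
For illustration, MCP traffic consists of ordinary JSON-RPC 2.0 requests and responses. A minimal sketch of a tool invocation an Agent might issue (the tool name and arguments are hypothetical and depend on the OpenAPI spec you register):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "lookup_clinical_trial",
        "arguments": { "condition": "asthma", "phase": "3" }
      }
    }

The MCP server replies with a matching result message containing the tool’s output, which the Agent then folds back into its reasoning.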

The MCP API Tool is a configuration-only node:

  • It has no input or output ports.
  • It cannot be connected to other nodes.
  • It is only usable by an Agent node in tool mode.

On its own, this node performs no computation.

Typical use

Use the MCP API Tool when you want an agentic workflow to:

  • Dynamically call external APIs or services
  • Interact with structured systems through MCP
  • Allow the Agent to decide when and how to invoke tools during execution

The MCP API Tool must be paired with an Agent node that has tool mode enabled.

Configuration fields

Configure the following fields:

  • Node name: A custom label for this tool instance (for example, Clinical Trial Lookup Tool).
  • OpenAPI spec URL (optional): A URL pointing to an OpenAPI specification that describes the tool’s interface.
  • OpenAPI spec (optional): Upload an OpenAPI JSON file instead of using a URL.
  • OpenAPI base URL: The base URL of the API server that implements the tool.
  • Startup timeout (ms): How long OPAQUE waits for the tool service to become available.
  • Call timeout (ms): Maximum time allowed for a single tool invocation.
  • Tool description: A natural-language description of what the tool does. This description is used by the Agent when deciding whether to invoke the tool.
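
Because the Agent relies on the tool description when deciding whether to invoke the tool, make it specific. For example, a description for the Clinical Trial Lookup Tool above might read (illustrative only):

    Looks up active clinical trials by condition, phase, and location, and returns matching trial IDs with short summaries. Use this tool whenever the user asks about available or ongoing trials.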

The configuration panel for the MCP API Tool node.

Execution model

The MCP API Tool is not connected in the workflow graph.

Instead:

  • The tool is registered with the workflow.
  • The Agent node, when running in tool mode, can select and invoke the tool during execution.
  • Tool calls are driven by the Agent’s reasoning and prompt configuration, not by fixed graph connections.

For details on enabling tool mode, structuring prompts, and designing agent-driven tool use, see Working with agents.

OPAQUE Redact and Unredact

OPAQUE provides two complementary agents for handling sensitive text within a workflow: OPAQUE Redact and OPAQUE Unredact. Together, they let you mask sensitive information before it leaves a trusted node and later restore it securely when authorized.

OPAQUE Redact

The OPAQUE Redact agent detects and replaces sensitive text using configurable regular expressions (Regex). It takes in plain text, applies the specified redaction patterns, and outputs both the redacted text and an encrypted mapping of redactions.

A Redact node can follow any other node that produces text output. It’s often placed before an LLM or external service to prevent sensitive information from leaving the enclave unmasked.

Configure the following fields:

  • Node name: Custom name for this instance (for example, PII Redactor).
  • Base URL: The URL of the redaction service. Leave blank to use the default OPAQUE endpoint.
  • Regexes: JSON array defining which patterns to redact and their categories. For example,

    Example Regex

    [
      { "regex_pattern": "\\b\\d{5}(?:-\\d{4})?\\b", "category": "Zip Code" }
    ]
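
You can list multiple patterns in the same array. A hedged sketch adding two more common patterns (illustrative; tune the expressions to your own data):

    [
      { "regex_pattern": "\\b\\d{5}(?:-\\d{4})?\\b", "category": "Zip Code" },
      { "regex_pattern": "\\b\\d{3}-\\d{2}-\\d{4}\\b", "category": "SSN" },
      { "regex_pattern": "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}", "category": "Email" }
    ]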
    

The configuration panel for the OPAQUE Redact agent.

Ports

  • Text to redact: Input text to be scanned for matches.
  • Redacted: Output containing the redacted text.
  • Redactions: Output containing an encrypted map linking each redacted span to its original value. This map is used later by the Unredact service.

Known limitation

When chaining a RAG-based Agent to the OPAQUE Redact node, the workflow may fail due to a mismatch in expected input field names. This can prevent RAG outputs from being passed correctly into the Redact service. This issue is known and will be addressed in a future release.

OPAQUE Unredact

The OPAQUE Unredact agent restores sensitive text that was previously redacted by an upstream Redact node. It accepts both the redacted text and the corresponding encrypted redaction map as inputs, then reconstructs the original text securely within the enclave.

An Unredact node typically appears after an LLM or other processing service to restore sensitive data into the final output. It must receive both the Redacted text and Redactions outputs from a preceding Redact node.

Configure the following fields:

  • Node name: Custom name for this instance (for example, Restore Sensitive Fields).
  • Base URL: The URL of the unredaction service. Leave blank to use the default OPAQUE endpoint.

The configuration panel for the OPAQUE Unredact agent.

Ports

  • Redacted text: The redacted version of the original content.
  • Redactions: The encrypted map created by the Redact service.
  • Text: The final output with redactions restored.

Add guardrails for safety

Guardrails let you enforce safety, policy, or formatting constraints for each node. All node types expose the same Guardrails panel, so once you learn how guardrails work, you can apply them consistently across your workflow.

OPAQUE guardrails are powered by NeMo Guardrails. You’ll define:

  • Configuration (YAML): Includes the model your rails use.
  • Input rails (Colang): Logic that runs before the node executes.
  • Output rails (Colang): Logic that runs after the node executes.

Note

In this release, rails block execution if they return anything other than the original text or an empty string. This behavior may expand in future releases.

To enable guardrails:

  1. Select a node and click the guardrails icon.

    To use guardrails, enable them on your agents.

    The guardrails panel is the same for every node. You can configure YAML once, and add Colang rules for input and output as needed.

  2. Toggle Enable guardrails.

  3. Enter your config in one or more of the following sections:

    • Configuration: NeMo YAML config, including the LLM used by the rails.
    • Input rails: Colang code to validate or transform inputs.
    • Output rails: Colang code to validate or transform outputs.

You can chain multiple nodes with guardrails. Each node runs its rails independently as the workflow progresses.

Example: Configuration (YAML)

config:
  models:
    - type: main
      engine: openai
      model: gpt-4
      parameters:
        api_key: ${OPENAI_API_KEY}
colang_version: "2.x"

Note

Guardrails are currently not supported with:

  • Anthropic models. There is no workaround at this time.
  • GPT-5 (model limitation). Use GPT-4 when rails are required.

Example: Input rails (Colang) — block PII

import core
import llm

flow main
  activate llm continuation

flow input rails $input_text
  $contains_pii = await check user utterance $input_text
  if $contains_pii
    bot say "Input blocked: PII detected."
    abort
  bot say $input_text
  abort

flow check user utterance $input_text -> $contains_pii
  $contains_pii = ... "Return True if the text contains PII, else False."
  return $contains_pii

Example: Output rails (Colang) — require JSON

import core
import llm

flow main
  activate llm continuation

flow output rails $model_output
  $is_json = ... "Return True if $model_output is valid JSON, else False."
  if not $is_json
    bot say "Output must be valid JSON."
    abort
  bot say $model_output
  abort

Note

Colang syntax and flow control are defined by NeMo Guardrails. The examples above show simple patterns, but for more advanced use cases, refer to the NeMo documentation.

Best practices

  • Use a dedicated guardrails model (e.g., GPT-4) separate from your task LLM. Guardrails are not supported on GPT-5.
  • Start simple: add either input or output rails first, then expand.
  • Keep rails concise—long prompts slow the workflow.
  • Log blocked inputs/outputs in production for auditing.

Troubleshooting

Issue Resolution
Rails never fire Make sure your Colang defines a flow main and that code is placed in the correct box (Input or Output).
Everything is blocked Remember: returning anything other than the original or an empty string aborts execution. Echo the original text to allow it to pass.
Timeouts Lower the LLM’s temperature or increase the SDK timeout.

Notes

Guardrails apply per agent. You can combine them across the full workflow so every step meets your safety and compliance needs.