OPAQUE 2.6.0

January 6, 2026

OPAQUE 2.6 brings new ways to verify trust, debug test runs, and connect AI agents to real tools and data—making it easier to build secure, production-grade workflows.

New features and enhancements

  • Attestation reporting in the UI: You can now view attestation evidence right in the UI. It’s an easy way to confirm that workloads, infrastructure, and runtime components are running in trusted, verified environments.
  • Workflow trace logs for testing: Trying to figure out what happened during a test run? You can now see detailed trace logs—including steps, timings, inputs/outputs, and any errors—so it’s easier to debug and improve workflows.
  • Tool mode vs standard mode: Choose how your AI responds: go with standard mode for quick answers, or tool mode if you want the AI to verify responses using approved tools and data before replying.
  • New tool integrations via MCP: Want your agents to interact with real data and systems? You can now plug in new tools using the Model Context Protocol (MCP) to power production-ready workflows. A minimal example of an MCP tool server follows this list.
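
To give a flavor of what such a tool looks like, here is a minimal sketch of a tool server built with the official MCP Python SDK (the `mcp` package). The server name, the get_stock_level tool, and the inventory data are illustrative assumptions, and the steps for connecting the server to an OPAQUE agent are product-specific and not shown here.

```python
# Minimal MCP tool server sketch using the official `mcp` Python SDK (FastMCP).
# The server name, tool, and data below are illustrative only; how OPAQUE
# registers this server as an agent tool is configured in the product itself.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-tools")

# Toy data standing in for a real database or internal API.
INVENTORY = {"SKU-001": 42, "SKU-002": 0}

@mcp.tool()
def get_stock_level(sku: str) -> int:
    """Return the current stock level for a SKU, or 0 if unknown."""
    return INVENTORY.get(sku, 0)

if __name__ == "__main__":
    # Serve the tool over stdio so an MCP-capable agent can call it.
    mcp.run()
```

Once an MCP-capable agent is pointed at a server like this, it can call get_stock_level as one of its approved tools during a workflow.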

Known issues

  • Attestation report header may cause errors under high load. Enabling the header can trigger a race condition when three or more concurrent requests attempt to generate reports, leading to failures.
    Suggested workaround: Enable the header only once per workflow session, not per request. A server-side fix is under consideration.
  • Incorrect attested workflow count. In Trust → Attestation, the displayed number of attested workflows may be inaccurate when multiple workspaces share the same name.
    Suggested workaround: None. This will be fixed in a future release.
  • RAG output fails to connect to the Redact Service in workflows. When chaining a RAG Agent to the OPAQUE Redact Service, the workflow fails because of a mismatch in expected input field names.
    Suggested workaround: None. This will be fixed in a future release.
  • Session may hang after returning to the app. If you close the app and return after about 10 minutes, the app may get stuck loading.
    Suggested workaround: Refresh your browser window.
  • Guardrails are not supported when using GPT-5 (a model limitation).
    Suggested workaround: Use GPT-4 as the engine when guardrails are required.
  • Guardrails are not supported when using Anthropic models.
    Suggested workaround: None.
  • Workflows may time out when running with high temperature settings.
    Suggested workaround: Lower the temperature value, or increase the timeout setting in the SDK (see the sketch after this list).
  • The Context input port on the OpenAI agent is not currently functional.
    Suggested workaround: None. This will be fixed in a future release.
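
For the timeout workaround above, the exact knobs depend on your SDK version. The sketch below is illustrative only: names such as opaque_sdk, Client, run_workflow, temperature, and timeout_seconds are assumptions rather than the documented OPAQUE SDK API, so check your SDK reference for the real parameter names.

```python
# Hypothetical sketch of the workaround for workflow timeouts at high temperature.
# `opaque_sdk`, `Client`, `run_workflow`, `temperature`, and `timeout_seconds`
# are illustrative names, not the documented OPAQUE SDK API.

from opaque_sdk import Client  # assumed package and import path

client = Client(api_key="...")  # credentials elided

result = client.run_workflow(
    workflow_id="my-workflow",
    temperature=0.2,       # lower temperature to keep generations short and bounded
    timeout_seconds=300,   # or raise the timeout if a high temperature is required
)
print(result.output)
```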