Deployed resources

This document outlines the resources provisioned in your Azure subscription when you deploy Opaque through the Azure Marketplace offering.

Deployment overview

Upon initiating deployment from the Marketplace, a provisioning virtual machine is created to automate setup. This VM pulls Opaque’s deployment artifacts — including Terraform modules and Helm charts — from a secured artifact registry. It then orchestrates the deployment of all required infrastructure and application components to support secure, scalable AI and data processing workflows.

The following diagram provides an overview of the process and deployed resources.

Diagram of the process and deployed resources when deploying Opaque through Azure Marketplace

Kubernetes clusters: Client and data planes

Two managed AKS clusters are deployed with confidential compute capabilities:

  • Client plane AKS cluster
    • Hosts the user-facing interface, REST API, and encryption/decryption services.
    • Runs on AMD SEV-SNP enabled node pools to ensure runtime memory encryption and hardware-backed integrity.
  • Data plane AKS cluster
    • Executes data processing and service workflows.
    • Also uses AMD SEV-SNP nodes for confidential workloads.

Core Azure services deployed

To support runtime operations and state persistence, the deployment includes:

  • Azure Cache for Redis
    • Used for low-latency access and coordination between services.
    • Deployed with private networking, including:
      • A Private DNS Zone
      • A Private Endpoint associated with the same VNet(s) as the AKS clusters.
  • Azure Blob Storage
    • Stores both Terraform state and application data artifacts.
    • Provisioned with private access controls.

Client plane services

The Kubernetes-based client plane runs multiple services for user interaction and encryption:

  • Frontend service
    • Serves a React-based UI from an NGINX pod.
    • Exposed internally via Kubernetes Service on port 8080.
    • Integrates with DNS, TLS, and ingress as needed.
  • REST API
    • Processes authenticated requests from browsers and external clients.
    • Facilitates programmatic access to the platform.
  • Encryption/Decryption Service (EDS)
    • Manages encryption of uploaded data and decryption during download.
    • Communicates with the REST API over internal DNS.

Opaque data plane

The Kubernetes-based data plane consists of Opaque-authored components and supporting orchestration services that together execute analytics jobs and manage service requests. These orchestration services include Argo Workflows (for job orchestration) and Spark Operator (for batch data processing).

Data plane composition

The data plane is deployed as a composite workload using the dataplane chart, which defines the following core Opaque-authored components:

  • job-operator: A Kubernetes controller that renders a Deployment and a CustomResourceDefinition (CRD) called JobRun. It processes Spark-based analytics jobs.
  • servicehost: Renders a Deployment and Service. It handles service requests such as data ingestion and redaction.
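For illustration, a JobRun custom resource might look like the sketch below. Note that the API group (`opaque.example/v1`) and every field under `spec` are assumptions for this example; the actual schema is defined by the CRD that the job-operator chart renders.

```python
# Illustrative sketch of a JobRun custom resource, expressed as the
# Python dict a Kubernetes client would submit. The group/version
# "opaque.example/v1" and the spec fields are hypothetical -- the real
# JobRun schema is defined by the job-operator's CRD.
job_run = {
    "apiVersion": "opaque.example/v1",
    "kind": "JobRun",
    "metadata": {"name": "analytics-job-42", "namespace": "dataplane"},
    "spec": {
        # Hypothetical fields: a reference to the Spark job definition
        # and the number of executors to request.
        "sparkApplication": "monthly-aggregation",
        "executorCount": 2,
    },
}

def validate_job_run(resource: dict) -> bool:
    """Minimal structural check a controller might perform on admission."""
    return (
        resource.get("kind") == "JobRun"
        and "name" in resource.get("metadata", {})
        and resource.get("spec", {}).get("executorCount", 0) > 0
    )

print(validate_job_run(job_run))  # True
```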

Job operator

The job operator listens on Azure Service Bus for incoming job requests. When a request is received, it orchestrates the necessary Kubernetes resources to run the job through to completion.
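The receive-and-orchestrate loop described above can be sketched as follows. In the real deployment the operator consumes from Azure Service Bus (for example, via the azure-servicebus SDK); here a stdlib queue stands in so the control flow is runnable, and the message fields and function names are illustrative rather than Opaque's actual API.

```python
import json
from queue import Queue, Empty

def run_operator(bus: Queue, create_job_run) -> list:
    """Drain pending job requests and create a JobRun for each.

    `bus` stands in for a Service Bus queue receiver; `create_job_run`
    stands in for the Kubernetes API call that materializes the job.
    """
    created = []
    while True:
        try:
            message = bus.get_nowait()
        except Empty:
            break  # no more pending requests
        request = json.loads(message)
        # Orchestrate the Kubernetes resources needed to run this job.
        created.append(create_job_run(request["jobId"]))
    return created

bus = Queue()
bus.put(json.dumps({"jobId": "job-1"}))
bus.put(json.dumps({"jobId": "job-2"}))
print(run_operator(bus, create_job_run=lambda job_id: f"jobrun/{job_id}"))
# ['jobrun/job-1', 'jobrun/job-2']
```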

Detailed execution behavior for both components is described later in this document.

Service host

The service host also listens on Service Bus but handles service-side operations without additional orchestration. It runs as a long-running deployment that processes tasks such as data ingestion, redaction, and un-redaction.

Each service host pod includes workload-specific containers as well as a set of injected components that collectively form Opaque’s attested TLS (aTLS) mesh. These components ensure all service communication is mutually attested, encrypted, and policy-enforced.

Argo Workflows

Opaque uses Argo Workflows to orchestrate multi-step analytics jobs. It operates in the same Kubernetes namespace as the data plane and monitors Workflow CRDs scoped to that namespace.

Spark Operator

For batch data processing, Opaque uses Spark Operator to run distributed Spark jobs as Kubernetes-native workloads. Each job spawns a Driver pod and one or more Executor pods, which are equipped with injected components that collectively enforce confidential communication and workload integrity through Opaque’s attested TLS (aTLS) mesh.

The Driver pod includes two additional components that support control plane coordination:

  • Job verifier (init container): Ensures the validity of each job run request before execution, based on instructions received from the control plane.
  • Job heartbeat (sidecar): Sends periodic heartbeats to the control plane. If these heartbeats stop, the control plane marks the job as failed due to timeout.
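The timeout logic the control plane applies to those heartbeats can be sketched as below. The 30-second timeout is an assumed value for illustration, not Opaque's actual setting, and timestamps are modeled as plain floats.

```python
# Sketch of the control plane's heartbeat timeout check described
# above. The timeout value is assumed for illustration only.
HEARTBEAT_TIMEOUT_SECONDS = 30.0  # assumed, not Opaque's actual setting

def job_status(last_heartbeat: float, now: float) -> str:
    """Mark a job failed once heartbeats have been silent too long."""
    if now - last_heartbeat > HEARTBEAT_TIMEOUT_SECONDS:
        return "failed (heartbeat timeout)"
    return "running"

print(job_status(last_heartbeat=100.0, now=120.0))  # running
print(job_status(last_heartbeat=100.0, now=140.0))  # failed (heartbeat timeout)
```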

aTLS mesh

To protect all data throughout its lifecycle, Opaque uses an attested TLS (aTLS) service mesh to secure communication between all deployed components. This mesh ensures that no service communicates with another unless both parties have cryptographically verified each other's identity and runtime integrity through remote attestation. This mechanism, known as mutual aTLS (maTLS), enforces:

  • Strong cryptographic verification before any connection is established.
  • End-to-end encryption for all traffic between services.
  • Hardware-rooted trust boundaries at the network layer.

All necessary components to enforce this mesh—such as proxies, policy engines, and certificate managers—are automatically injected into workloads that handle confidential data. These protections are applied transparently and do not require user configuration. (A more detailed overview of the aTLS mesh design is available upon request.)
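The mutual-admission rule at the heart of maTLS can be sketched as follows. This is a deliberately simplified model: evidence is a dict compared against a placeholder measurement, whereas real aTLS binds hardware attestation reports (e.g. from AMD SEV-SNP) into the TLS handshake. All names here are illustrative.

```python
# Simplified sketch of the maTLS admission rule: a connection is
# allowed only if BOTH peers present attestation evidence matching
# the expected measurement. Real aTLS verifies hardware-signed
# quotes; this placeholder string stands in for a known-good hash.
EXPECTED_MEASUREMENT = "abc123"  # placeholder, for illustration

def attest(evidence: dict) -> bool:
    """Verify one peer's evidence against the expected measurement."""
    return evidence.get("measurement") == EXPECTED_MEASUREMENT

def allow_connection(client_evidence: dict, server_evidence: dict) -> bool:
    # Mutual aTLS: both sides must pass attestation, not just one.
    return attest(client_evidence) and attest(server_evidence)

good = {"measurement": "abc123"}
bad = {"measurement": "tampered"}
print(allow_connection(good, good))  # True
print(allow_connection(good, bad))   # False
```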

Control plane integration

Opaque’s control plane, hosted in a separate Azure subscription, integrates securely via:

  • Azure Service Bus, which transports job coordination signals and heartbeats.
  • Private Link, which establishes a secure, VNet-scoped connection to the control plane.

Deploying Opaque through Azure Marketplace sets up a full Confidential AI stack — from secure infrastructure to application services — without requiring any manual orchestration. All components are provisioned and connected using automated infrastructure-as-code workflows triggered by a purpose-built VM.

Next step

Deploy Opaque via Azure Marketplace.