Enclave Deployment

Before deploying, ensure your EKS cluster has:

  1. Nitro Enclave-capable nodes — use instance types with Nitro Enclave support (e.g., m5.xlarge, c5.xlarge, r5.xlarge). Enable Nitro Enclaves in the launch template (EnclaveOptions.Enabled: true).

  2. Nitro Enclaves Allocator — configure /etc/nitro_enclaves/allocator.yaml on each node to pre-allocate CPUs and memory for enclaves. These resources are taken offline from the host at boot.

    ```yaml
    # /etc/nitro_enclaves/allocator.yaml
    cpu_count: 4
    memory_mib: 2048
    ```
  3. Device plugin DaemonSet — install the Nitro Enclaves Kubernetes device plugin. Enable CPU advertisement so the scheduler can properly account for enclave CPU allocation:

    ```sh
    kubectl apply -f https://raw.githubusercontent.com/aws/aws-nitro-enclaves-k8s-device-plugin/main/aws-nitro-enclaves-k8s-ds.yaml
    ```

    Set ENCLAVE_CPU_ADVERTISEMENT=true in the DaemonSet env to advertise offline CPUs as a schedulable resource (aws.ec2.nitro/nitro_enclaves_cpus). This prevents K8s from over-scheduling enclave pods on a node.

  4. Node labels:

    ```sh
    kubectl label node <node-name> aws-nitro-enclaves-k8s-dp=enabled
    ```
  5. Hugepages — configure on each node:

    ```sh
    echo 'vm.nr_hugepages=1024' | sudo tee /etc/sysctl.d/99-hugepages.conf
    sudo sysctl -p /etc/sysctl.d/99-hugepages.conf
    ```
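The CPU-advertisement setting from step 3 can be applied and verified without re-deploying the manifest. A sketch — the DaemonSet name and namespace are assumptions taken from the upstream manifest and may differ in your cluster:

```sh
# Enable CPU advertisement on the device plugin
# (DaemonSet name and namespace are assumptions; check `kubectl get ds -A`)
kubectl set env daemonset/aws-nitro-enclaves-k8s-daemonset -n kube-system \
  ENCLAVE_CPU_ADVERTISEMENT=true

# Confirm the node now advertises enclave device resources
# in its capacity/allocatable lists
kubectl describe node <node-name> | grep nitro
```

If the `aws.ec2.nitro/nitro_enclaves_cpus` resource does not appear, check the device plugin pod logs on that node.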

If using Cluster Autoscaler, add these tags to the enclave-capable ASG so CA knows what resources the ASG provides:

```
k8s.io/cluster-autoscaler/node-template/resources/aws.ec2.nitro/nitro_enclaves = "4"
k8s.io/cluster-autoscaler/node-template/resources/aws.ec2.nitro/nitro_enclaves_cpus = "4"
```

With Karpenter, use a NodePool constrained to enclave-capable instance types — Karpenter reads device resources from existing nodes automatically.
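A Karpenter constraint like the one described above might look like the following sketch (NodePool and EC2NodeClass names are illustrative; the schema shown is Karpenter v1):

```yaml
# Illustrative NodePool pinned to enclave-capable instance types
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: enclave-nodes          # name is an assumption
spec:
  template:
    spec:
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["m5.xlarge", "c5.xlarge", "r5.xlarge"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default          # must reference an EC2NodeClass whose nodes
                               # boot with Nitro Enclaves enabled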

The chart uses a single image repository with a -nitro tag suffix for enclave mode:

| Tag | Contents | Where It Runs |
| --- | --- | --- |
| `v1.0.0` | Signer binary (scratch base) | Standard mode — EC2, ECS, K8s |
| `v1.0.0-nitro` | Pod controller + vsock proxies + EIF baked in | Parent EC2 instance, manages the enclave |

The -nitro image contains the EIF (Enclave Image File) baked in — no S3 download at boot.
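Because the EIF ships inside the image, its attestation measurements (PCRs) can be inspected before deploying. A sketch — it assumes the image bundles `nitro-cli` and stores the EIF at `/enclave.eif`, and the image repository path shown is also an assumption:

```sh
# Inspect the baked-in EIF's PCR measurements without starting an enclave
# (image path, entrypoint availability, and EIF location are all assumptions)
docker run --rm --entrypoint nitro-cli \
  ghcr.io/unforeseen-consequences/containment-chamber:v1.0.0-nitro \
  describe-eif --eif-path /enclave.eif
```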

The standard Containment Chamber chart supports enclave mode via enclave.enabled: true:

```sh
helm install containment-chamber \
  oci://ghcr.io/unforeseen-consequences/charts/containment-chamber \
  --set enclave.enabled=true \
  --set enclave.aws.region=us-east-1 \
  --set enclave.cpuCount=2 \
  --set enclave.memoryMib=512 \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME \
  --set-json 'config={"network":"mainnet","key_sources":{"dynamodb":{"table":"containment-keys","enclave":{"enabled":true,"vsock_port":9000,"metrics_vsock_port":3000}}}}'
```
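The same settings can be kept in a values file instead of `--set` strings; this sketch mirrors the flags above (the filename is arbitrary):

```yaml
# values-enclave.yaml — equivalent of the --set flags above
enclave:
  enabled: true
  aws:
    region: us-east-1
  cpuCount: 2
  memoryMib: 512
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME
config:
  network: mainnet
  key_sources:
    dynamodb:
      table: containment-keys
      enclave:
        enabled: true
        vsock_port: 9000
        metrics_vsock_port: 3000
```

Then install with `helm install containment-chamber oci://ghcr.io/unforeseen-consequences/charts/containment-chamber -f values-enclave.yaml`.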

When enclave.enabled=true, the chart automatically:

  • Uses the -nitro image tag suffix
  • Requests aws.ec2.nitro/nitro_enclaves: "1" device resource
  • Requests aws.ec2.nitro/nitro_enclaves_cpus matching enclave.cpuCount
  • Allocates hugepages for the enclave memory allocator
  • Sets enclave environment variables (CPU count, memory, vsock ports, egress ports)
  • Adds a preStop hook to terminate the enclave on pod shutdown

Enclave pods use three separate resource types in K8s:

| Resource | What it represents | Pod value |
| --- | --- | --- |
| `aws.ec2.nitro/nitro_enclaves` | Enclave slots (max 4 per node) | Always `"1"` |
| `aws.ec2.nitro/nitro_enclaves_cpus` | CPUs dedicated to the enclave | `enclave.cpuCount` (e.g., `"2"`) |
| `resources.requests.cpu` | Host-side CPUs for proxy processes | Small (e.g., `"250m"`) |
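Put together, the container `resources` block rendered by the chart looks roughly like this sketch (values assume `enclave.cpuCount: 2` and `enclave.memoryMib: 512`; the hugepages page size is an assumption):

```yaml
resources:
  requests:
    cpu: 250m                                # host-side proxy processes
    aws.ec2.nitro/nitro_enclaves: "1"        # one enclave slot
    aws.ec2.nitro/nitro_enclaves_cpus: "2"   # matches enclave.cpuCount
    hugepages-2Mi: 512Mi                     # backs enclave.memoryMib (page size is an assumption)
  limits:
    # K8s requires extended resources and hugepages to have limits equal to requests
    aws.ec2.nitro/nitro_enclaves: "1"
    aws.ec2.nitro/nitro_enclaves_cpus: "2"
    hugepages-2Mi: 512Mi
```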

Enclave pods scale like any other K8s workload:

  • Multiple replicas — set replicaCount > 1 for HA. Replicas can run on the same node (up to 4 enclave slots per node) or across different nodes.
  • HPA — works normally. New pods trigger Cluster Autoscaler if no node has a free enclave slot.
  • RollingUpdate — works when there’s a free enclave slot on any node. The new pod starts on an available slot while the old one drains.
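An HPA targeting the chart's Deployment works the same as for any other workload; a minimal sketch (the Deployment name is assumed to match the release name):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: containment-chamber
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: containment-chamber   # assumed release/Deployment name
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Each new replica consumes one enclave slot, so `maxReplicas` is effectively bounded by free slots unless Cluster Autoscaler or Karpenter can add enclave-capable nodes.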
To verify the deployment:

```sh
# Check the pod is running
kubectl get pods -l app.kubernetes.io/name=containment-chamber

# Check the enclave is running inside the pod
kubectl exec -it <pod-name> -- nitro-cli describe-enclaves

# Test the signing API
kubectl port-forward svc/containment-chamber 9000:9000
containment-chamber operator status \
  --auth-token "$AUTH_TOKEN" \
  --signer-url http://localhost:9000
```