# Enclave Deployment

## Prerequisites

Before deploying, ensure your EKS cluster has:
- **Nitro Enclave-capable nodes** — use instance types with Nitro Enclave support (e.g., `m5.xlarge`, `c5.xlarge`, `r5.xlarge`). Enable Nitro Enclaves in the launch template (`EnclaveOptions.Enabled: true`).
- **Nitro Enclaves Allocator** — configure `/etc/nitro_enclaves/allocator.yaml` on each node to pre-allocate CPUs and memory for enclaves. These resources are taken offline from the host at boot.

  ```yaml
  # /etc/nitro_enclaves/allocator.yaml
  cpu_count: 4
  memory_mib: 2048
  ```
- **Device plugin DaemonSet** — install the Nitro Enclaves Kubernetes device plugin. Enable CPU advertisement so the scheduler can properly account for enclave CPU allocation:

  ```bash
  kubectl apply -f https://raw.githubusercontent.com/aws/aws-nitro-enclaves-k8s-device-plugin/main/aws-nitro-enclaves-k8s-ds.yaml
  ```

  Set `ENCLAVE_CPU_ADVERTISEMENT=true` in the DaemonSet env to advertise offline CPUs as a schedulable resource (`aws.ec2.nitro/nitro_enclaves_cpus`). This prevents K8s from over-scheduling enclave pods on a node.
- **Node labels**:

  ```bash
  kubectl label node <node-name> aws-nitro-enclaves-k8s-dp=enabled
  ```
- **Hugepages** — configure on each node:

  ```bash
  echo 'vm.nr_hugepages=1024' | sudo tee /etc/sysctl.d/99-hugepages.conf
  sudo sysctl -p /etc/sysctl.d/99-hugepages.conf
  ```
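The `vm.nr_hugepages` value follows from the allocator's `memory_mib`: assuming the default 2 MiB hugepage size on x86_64, the 2048 MiB allocation above needs 1024 pages. A quick sanity check of that sizing arithmetic:

```shell
# hugepages needed = enclave memory (MiB) / hugepage size (MiB)
memory_mib=2048        # from allocator.yaml above
hugepage_size_mib=2    # default hugepage size; check /proc/meminfo (Hugepagesize)
echo $(( memory_mib / hugepage_size_mib ))   # prints 1024
```

If your nodes use 1 GiB hugepages, divide by 1024 instead and adjust `vm.nr_hugepages` accordingly.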
## Cluster Autoscaler

If using Cluster Autoscaler, add these tags to the enclave-capable ASG so CA knows what resources the ASG provides:

```
k8s.io/cluster-autoscaler/node-template/resources/aws.ec2.nitro/nitro_enclaves = "4"
k8s.io/cluster-autoscaler/node-template/resources/aws.ec2.nitro/nitro_enclaves_cpus = "4"
```

With Karpenter, use a NodePool constrained to enclave-capable instance types — Karpenter reads device resources from existing nodes automatically.
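Such a Karpenter constraint might look like the following sketch (assumes the Karpenter v1 `NodePool` API and an existing `EC2NodeClass`; the `enclave` names and the instance-type list are illustrative):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: enclave                      # illustrative name
spec:
  template:
    spec:
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["m5.xlarge", "c5.xlarge", "r5.xlarge"]  # enclave-capable types
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: enclave                # assumed EC2NodeClass
```

The `EC2NodeClass` (or its user data) still needs to enable `EnclaveOptions`, configure the allocator, and set the node label from the prerequisites.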
## Images

The chart uses a single image repository with a `-nitro` tag suffix for enclave mode:

| Tag | Contents | Where It Runs |
|---|---|---|
| `v1.0.0` | Signer binary (scratch base) | Standard mode — EC2, ECS, K8s |
| `v1.0.0-nitro` | Pod controller + vsock proxies + EIF baked in | Parent EC2 instance, manages the enclave |

The `-nitro` image contains the EIF (Enclave Image File) baked in — no S3 download at boot.
## Helm Chart

The standard Containment Chamber chart supports enclave mode via `enclave.enabled: true`:

```bash
helm install containment-chamber \
  oci://ghcr.io/unforeseen-consequences/charts/containment-chamber \
  --set enclave.enabled=true \
  --set enclave.aws.region=us-east-1 \
  --set enclave.cpuCount=2 \
  --set enclave.memoryMib=512 \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME \
  --set-json 'config={"network":"mainnet","key_sources":{"dynamodb":{"table":"containment-keys","enclave":{"enabled":true,"vsock_port":9000,"metrics_vsock_port":3000}}}}'
```

When `enclave.enabled=true`, the chart automatically:
- Uses the `-nitro` image tag suffix
- Requests the `aws.ec2.nitro/nitro_enclaves: "1"` device resource
- Requests `aws.ec2.nitro/nitro_enclaves_cpus` matching `enclave.cpuCount`
- Allocates hugepages for the enclave memory allocator
- Sets enclave environment variables (CPU count, memory, vsock ports, egress ports)
- Adds a `preStop` hook to terminate the enclave on pod shutdown
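The `--set` flags above can equivalently live in a values file (a sketch that mirrors the install command; `ACCOUNT_ID` and `ROLE_NAME` remain placeholders):

```yaml
# values.yaml — equivalent to the --set flags in the install command
enclave:
  enabled: true
  aws:
    region: us-east-1
  cpuCount: 2
  memoryMib: 512
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME
config:
  network: mainnet
  key_sources:
    dynamodb:
      table: containment-keys
      enclave:
        enabled: true
        vsock_port: 9000
        metrics_vsock_port: 3000
```

Install with `helm install containment-chamber oci://ghcr.io/unforeseen-consequences/charts/containment-chamber -f values.yaml`.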
## Resource Accounting

The enclave uses three separate resource types in K8s:

| Resource | What it represents | Pod value |
|---|---|---|
| `aws.ec2.nitro/nitro_enclaves` | Enclave slots (max 4 per node) | Always `"1"` |
| `aws.ec2.nitro/nitro_enclaves_cpus` | CPUs dedicated to the enclave | `enclave.cpuCount` (e.g., `"2"`) |
| `resources.requests.cpu` | Host-side CPUs for proxy processes | Small (e.g., `"250m"`) |
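Taken together, the rendered container resources might look like this sketch (for `enclave.cpuCount: 2` and `enclave.memoryMib: 512`; Kubernetes requires extended-resource and hugepages limits to equal their requests, so both appear):

```yaml
resources:
  requests:
    cpu: 250m                                # host-side proxy processes
    aws.ec2.nitro/nitro_enclaves: "1"        # one enclave slot
    aws.ec2.nitro/nitro_enclaves_cpus: "2"   # enclave.cpuCount
    hugepages-2Mi: 512Mi                     # enclave.memoryMib
  limits:
    aws.ec2.nitro/nitro_enclaves: "1"
    aws.ec2.nitro/nitro_enclaves_cpus: "2"
    hugepages-2Mi: 512Mi
```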
## Scaling

Enclave pods scale like any other K8s workload:

- **Multiple replicas** — set `replicaCount > 1` for HA. Replicas can run on the same node (up to 4 enclave slots per node) or across different nodes.
- **HPA** — works normally. New pods trigger Cluster Autoscaler if no node has a free enclave slot.
- **RollingUpdate** — works when there’s a free enclave slot on any node. The new pod starts on an available slot while the old one drains.
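An HPA for this workload is ordinary `autoscaling/v2` (a sketch; the Deployment name and thresholds are illustrative). Note that a CPU-utilization metric tracks the host-side proxy request (`250m`), not the enclave's dedicated CPUs:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: containment-chamber
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: containment-chamber   # assumed release/deployment name
  minReplicas: 2
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```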
## Verifying the Deployment

```bash
# Check the pod is running
kubectl get pods -l app.kubernetes.io/name=containment-chamber

# Check the enclave is running inside the pod
kubectl exec -it <pod-name> -- nitro-cli describe-enclaves

# Test the signing API
kubectl port-forward svc/containment-chamber 9000:9000
containment-chamber operator status \
  --auth-token "$AUTH_TOKEN" \
  --signer-url http://localhost:9000
```