# Kubernetes
## Prerequisites

- Helm 3+ installed
- Access to a Kubernetes cluster (1.24+)
- A PostgreSQL instance for slashing protection (recommended for multi-instance deployments)
- Validator keystores available as files or Kubernetes Secrets
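When keystores are provided as a Kubernetes Secret, a minimal manifest might look like the following sketch; the Secret name, namespace, and file name are illustrative:

```yaml
# Illustrative Secret bundling encrypted validator keystore files.
# Create it in the same namespace as the chart release.
apiVersion: v1
kind: Secret
metadata:
  name: validator-keystores
  namespace: validators
type: Opaque
stringData:
  keystore-0.json: |
    {"version": 4, "pubkey": "..."}
```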
## Installation

Add your configuration overrides to a `my-values.yaml` file, then install:

```bash
helm install containment-chamber oci://ghcr.io/unforeseen-consequences/charts/containment-chamber \
  --namespace validators \
  --create-namespace \
  -f my-values.yaml
```
## Essential Configuration

### Image Settings
```yaml
image:
  repository: ghcr.io/unforeseen-consequences/containment-chamber
  pullPolicy: IfNotPresent
  # Defaults to the chart appVersion if empty
  tag: ""
```
### Server and Metrics

The chart generates a config file from these values:
```yaml
config:
  server:
    listen_address: "0.0.0.0"
    listen_port: 9000
  metrics:
    listen_address: "0.0.0.0"
    listen_port: 9001
  network: mainnet
  key_sources:
    filesystem:
      paths: []
```
### Environment Variables

Use `env` to inject secrets such as the antislashing database URL, signing auth tokens, and any other sensitive values:
```yaml
env:
  - name: CONTAINMENT_ANTISLASHING__URL
    valueFrom:
      secretKeyRef:
        name: cc-secrets
        key: database-url
  - name: CONTAINMENT_ANTISLASHING__BACKEND
    value: "postgres"
```
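The Secret referenced above might be created from a manifest like this sketch; the connection string is a placeholder to replace with your own:

```yaml
# Illustrative Secret providing the antislashing database URL.
apiVersion: v1
kind: Secret
metadata:
  name: cc-secrets
  namespace: validators
type: Opaque
stringData:
  database-url: "postgres://containment:CHANGE_ME@postgres.database.svc.cluster.local:5432/antislashing"
```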
### Volume Mounts for Keystores

Mount your validator keystores into the container using `extraVolumes` and `extraVolumeMounts`:
```yaml
config:
  key_sources:
    filesystem:
      paths:
        - /keystores

extraVolumeMounts:
  - name: keystores
    mountPath: /keystores
    readOnly: true

extraVolumes:
  - name: keystores
    secret:
      secretName: validator-keystores
```
### Resources

```yaml
resources:
  requests:
    cpu: 100m
    memory: 64Mi
  limits:
    memory: 256Mi
```
### ServiceMonitor

If you run the Prometheus Operator, enable the ServiceMonitor to scrape metrics automatically:
```yaml
serviceMonitor:
  enabled: true
  additionalLabels:
    release: prometheus
  namespace: ""
  namespaceSelector: {}
  scrapeInterval: 60s
  targetLabels: []
  metricRelabelings: []
```
### PodDisruptionBudget

Configure a PDB to control voluntary disruptions during node drains and cluster upgrades:
```yaml
pdb:
  minAvailable: 1
  # Or use maxUnavailable instead:
  # maxUnavailable: 50%
```
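A PDB only provides protection when more than one replica is running. Assuming the chart follows the common `replicaCount` convention, a two-replica deployment pairs naturally with `minAvailable: 1`:

```yaml
# replicaCount is assumed here to be the chart's standard replica knob.
replicaCount: 2

pdb:
  # Always keep at least one signer available during drains.
  minAvailable: 1
```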
### Security Context

The chart ships with secure defaults. The container runs as a non-root user with a read-only filesystem.

Pod-level (`podSecurityContext`):

```yaml
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 65534
```

Container-level (`securityContext`):

```yaml
securityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
```

These defaults satisfy the Pod Security Standards restricted profile. You should not need to change them.
## Health Probes

All probes target the `/upcheck` endpoint on the HTTP port:

```yaml
startupProbe:
  httpGet:
    path: /upcheck
    port: http
  failureThreshold: 10
  periodSeconds: 5

livenessProbe:
  httpGet:
    path: /upcheck
    port: http

readinessProbe:
  httpGet:
    path: /upcheck
    port: http
```

The startup probe allows up to 50 seconds (10 × 5s) for initial key loading, which matters when loading thousands of encrypted keystores. Once the startup probe succeeds, the liveness and readiness probes take over with default Kubernetes intervals.
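If key loading takes longer than that window, for example with a very large keystore set, the probe budget can be widened; the numbers here are illustrative:

```yaml
startupProbe:
  httpGet:
    path: /upcheck
    port: http
  # 60 failures x 5s period = up to 5 minutes before the pod is restarted.
  failureThreshold: 60
  periodSeconds: 5
```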
## Network Policies

The chart includes network policy support to restrict traffic:

```yaml
netpolicies:
  ingress:
    # Only allow traffic from pods in these namespaces
    allowedNamespaces:
      - consensus-layer

  egress:
    # Allow DNS resolution
    allowDns:
      enabled: true
      cidr: "0.0.0.0/0"

    # Restrict outbound traffic
    restrictEgress:
      enabled: true
      allowedEgressNamespaces:
        - database
      allowedEgressCIDRs:
        - 10.0.0.0/8
```
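If ingress is restricted and the ServiceMonitor is enabled, the namespace Prometheus runs in will likely need to be allowed as well. Assuming it runs in a `monitoring` namespace:

```yaml
netpolicies:
  ingress:
    allowedNamespaces:
      - consensus-layer
      # Assumed namespace where Prometheus runs; adjust to your setup.
      - monitoring
```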
## Additional Options

### Deployment Strategy
```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%

terminationGracePeriodSeconds: 30
```
### Topology Spread Constraints

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: containment-chamber
```
### Extra Kubernetes Objects

Deploy additional resources (ConfigMaps, Secrets, etc.) alongside the chart:

```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: extra-config
    data:
      key: value
```
## Upgrading

```bash
helm upgrade containment-chamber ./k8s/charts/containment-chamber \
  --namespace validators \
  -f my-values.yaml
```

Check the rollout status:

```bash
kubectl -n validators rollout status deployment/containment-chamber
```
## IRSA Setup

When running on EKS, use IAM Roles for Service Accounts (IRSA) to grant the pod access to DynamoDB and KMS without static credentials.

1. Create an IAM role with the required permissions (see AWS IAM Permissions).
2. Annotate the ServiceAccount in your values file:

   ```yaml
   serviceAccount:
     annotations:
       eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/containment-chamber-role
   ```

3. Attach a trust policy to the IAM role:

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/oidc.eks.REGION.amazonaws.com/id/CLUSTER_ID"
         },
         "Action": "sts:AssumeRoleWithWebIdentity",
         "Condition": {
           "StringEquals": {
             "oidc.eks.REGION.amazonaws.com/id/CLUSTER_ID:sub": "system:serviceaccount:NAMESPACE:containment-chamber"
           }
         }
       }
     ]
   }
   ```
## DynamoDB Key Source

To use DynamoDB as the key source, pass the configuration through the `config:` block and set AWS credentials via IRSA (see above):

```yaml
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/containment-chamber-role

config:
  key_sources:
    dynamodb:
      table: containment-keys
      status_filter:
        - active
      key_refresh_interval_minutes: 60
      keygen:
        enabled: true
      max_items_per_request: 100
      writable:
        enabled: true
      request_timeout_seconds: 600
```

See DynamoDB + KMS Key Management for the full setup guide.
## DynamoDB Anti-Slashing

To use DynamoDB as the anti-slashing backend:

```yaml
config:
  antislashing:
    backend: dynamodb
    table: containment-slashing-protection
```

See AWS IAM Permissions for the required DynamoDB permissions.
## Scaling for Large Deployments

Resource recommendations by validator count:
| Validators | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|
| < 1,000 | 100m | 500m | 128Mi | 256Mi |
| 1,000–10,000 | 500m | 2000m | 256Mi | 512Mi |
| 10,000–100,000 | 1000m | 4000m | 512Mi | 1Gi |
| 100,000+ | 2000m | 8000m | 1Gi | 2Gi |
Example values for 100K+ validators:
```yaml
resources:
  requests:
    cpu: "2000m"
    memory: "1Gi"
  limits:
    cpu: "8000m"
    memory: "2Gi"
```

The default concurrency limits are tuned for large deployments:

- `max_concurrent_signing_jobs`: 500 (configurable via `--max-concurrent-signing-jobs`)
- `signing_queue_buffer_size`: 10,000 (configurable via `--signing-queue-buffer-size`)

For PostgreSQL anti-slashing at scale, increase the pool size:

```yaml
config:
  antislashing:
    backend: postgres
    pool_size: 32
```