
Security Model

Containment Chamber is designed for zero-trust environments where validator private keys must never leave the signer process. All secrets — BLS private keys, authentication tokens, and the DynamoDB master key — are zeroized on drop. The process disables core dumps at startup and locks memory pages to prevent key material from reaching disk.

This page describes what the signer trusts, what it protects against, and where its protections end.

The signer trusts:

  • The host operating system. The signer runs as a userspace process. A compromised kernel or root-level attacker can read process memory directly. OS-level isolation (containers, VMs, dedicated hosts) is your first line of defense.
  • The configured KMS service. When using the DynamoDB key source, the master key is split via Shamir secret sharing and each share is encrypted by a different AWS KMS key. The signer trusts that KMS correctly encrypts and decrypts shares.
  • The configured anti-slashing backend. The signer trusts that its anti-slashing database (PostgreSQL, SQLite, or DynamoDB) returns accurate historical data. A corrupted or tampered database could allow double-signing.
  • Operators to keep unseal passphrases secret. When the seal/unseal feature is enabled, operators hold passphrases that protect Shamir shares of the master key. The signer trusts that those passphrases are not disclosed to unauthorized parties.

When the DynamoDB key source is configured, the signer starts in a sealed state. The master key does not exist in memory until enough operators submit their unseal shares via the API. Once the threshold is met, the master key is reconstructed, verified, and held in memory for the lifetime of the process.

Sealing the signer zeroizes the master key from memory. Even if the host is compromised after sealing, the attacker cannot recover the master key from the running process. The key material only exists in memory during the Unsealed or AwaitingRotation states.
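The lifecycle above can be sketched as a small state machine. This is a hypothetical illustration, not the signer's actual types: the Sealed/Unsealed gating and the share threshold come from this page, while the names and the placeholder reconstruction step are assumptions.

```rust
/// Illustrative sketch of the sealed/unsealed lifecycle. Only the gating
/// behavior (no signing until the share threshold is met; sealing discards
/// the key) reflects the documented design.
struct Signer {
    threshold: usize,
    pending_shares: Vec<Vec<u8>>,
    master_key: Option<[u8; 32]>, // Some(..) only while unsealed
}

impl Signer {
    fn new(threshold: usize) -> Self {
        Signer { threshold, pending_shares: Vec::new(), master_key: None }
    }

    /// An operator submits one unseal share; once the threshold is met,
    /// the master key is reconstructed and held in memory.
    fn submit_share(&mut self, share: Vec<u8>) {
        if self.master_key.is_some() {
            return; // already unsealed
        }
        self.pending_shares.push(share);
        if self.pending_shares.len() >= self.threshold {
            // Placeholder for Shamir reconstruction + HMAC-SHA256 verification.
            self.master_key = Some([0u8; 32]);
            self.pending_shares.clear();
        }
    }

    /// Signing is gated on being unsealed.
    fn can_sign(&self) -> bool {
        self.master_key.is_some()
    }

    /// Sealing discards the master key (zeroized in the real signer).
    fn seal(&mut self) {
        self.master_key = None;
        self.pending_shares.clear();
    }
}
```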

For the full state machine, init ceremony, and operational procedures, see Seal & Unseal.

The signer does not trust:

  • Validator clients. All signing requests are authenticated via HMAC-hashed tokens when auth policies are configured. Each token is scoped to specific keys, operations, and API routes. No client gets implicit access.
  • The network. Private keys exist only in process memory. They are never serialized to logs, HTTP responses, or config files. Only public keys appear in API responses and log output.
  • Its own crash dumps. On Linux, the process calls prctl::set_dumpable(false) at startup to prevent core dumps from leaking key material. It also calls mlockall() to prevent memory pages containing keys from being swapped to disk.

Double-signing and slashing. Every signing request passes through an anti-slashing backend before the BLS signature is computed. The check-before-sign pattern is enforced in code: there is no path to a signature without passing the slashing check. Three production backends are available: PostgreSQL (recommended; multi-instance with advisory locks), SQLite (single-instance with exclusive locking), and DynamoDB (cloud-native with conditional writes).
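The check-before-sign pattern can be expressed as an API where the only path to a signature runs through the slashing check. The sketch below is hypothetical (an in-memory watermark stand-in for the real backends, with invented names like `sign_block`); it shows the shape of the guarantee, not the signer's actual code.

```rust
use std::collections::HashMap;

enum SlashingVerdict {
    Safe,
    Slashable(String),
}

trait SlashingProtection {
    /// Check-and-record: returns Safe only if signing this slot does not
    /// conflict with previously recorded history.
    fn check_and_record(&mut self, pubkey: &str, slot: u64) -> SlashingVerdict;
}

/// In-memory stand-in for the PostgreSQL/SQLite/DynamoDB backends:
/// tracks the highest slot signed per key (a lower-bound watermark).
struct InMemoryProtection {
    watermarks: HashMap<String, u64>,
}

impl InMemoryProtection {
    fn new() -> Self {
        InMemoryProtection { watermarks: HashMap::new() }
    }
}

impl SlashingProtection for InMemoryProtection {
    fn check_and_record(&mut self, pubkey: &str, slot: u64) -> SlashingVerdict {
        let w = self.watermarks.entry(pubkey.to_string()).or_insert(0);
        if slot <= *w {
            return SlashingVerdict::Slashable(format!("slot {slot} <= watermark {w}"));
        }
        *w = slot;
        SlashingVerdict::Safe
    }
}

/// Signing takes the slashing guard as a required argument, so no call
/// site can reach a signature without passing the check.
fn sign_block(
    protection: &mut dyn SlashingProtection,
    pubkey: &str,
    slot: u64,
) -> Result<Vec<u8>, String> {
    match protection.check_and_record(pubkey, slot) {
        SlashingVerdict::Safe => Ok(vec![0u8; 96]), // placeholder for the BLS signature
        SlashingVerdict::Slashable(why) => Err(why),
    }
}
```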

Unauthorized signing. Auth policies provide per-token control over which public keys can be signed with, which signing operations are allowed, and which API scopes are accessible. Tokens are validated using HMAC-SHA256 hashes — the plaintext token is never stored in memory after startup. A random 32-byte HMAC secret is generated per process lifetime.

Key theft via memory dumps. All secret key material is wrapped in Zeroizing<T> from the zeroize crate, which overwrites memory on drop. The process disables core dumps via prctl::set_dumpable(false) and locks memory pages via mlockall() to prevent swap. The #![deny(unsafe_code)] attribute is set globally — there are no unsafe blocks in application code.

Key theft via logs. Private keys are never logged. The AuthPolicy Debug implementation redacts the HMAC secret. Token values are replaced with truncated hashes in audit logs. Only public keys and key ARNs appear in log output.

Timing attacks on token validation. Token comparison uses subtle::ConstantTimeEq (ct_eq()), which executes in constant time regardless of how many bytes match. This prevents attackers from inferring valid token prefixes by measuring response times.
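The idea behind constant-time comparison can be sketched in a few lines: XOR every byte pair and OR the results together, so the running time does not depend on where the first mismatch occurs. This is an illustration of what `subtle::ConstantTimeEq` provides, not a replacement for it; the crate also defends against compiler optimizations that could reintroduce branches.

```rust
/// Constant-time byte comparison: examines every byte regardless of where
/// the first mismatch occurs, unlike the short-circuiting `==`.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        // Lengths are public here: both sides are fixed-size HMAC-SHA256 hashes.
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // accumulate differences without branching
    }
    diff == 0
}
```

Because tokens are compared as HMAC-SHA256 hashes, both inputs are fixed 32-byte values, so even the length check leaks nothing about the secret.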

Denial of service via request flooding. Signing routes are protected by Tower middleware layers: LoadShedLayer rejects requests when the service is saturated, ConcurrencyLimitLayer caps concurrent signing operations, and TimeoutLayer enforces per-request deadlines. The health endpoint (/upcheck) bypasses all backpressure layers so monitoring always gets a response.

Single point of failure for the master key. When using the DynamoDB key source, the 32-byte master key is split into N Shamir shares with an M-of-N threshold. Each share is encrypted by a different AWS KMS key, potentially in different AWS accounts. No single KMS key compromise reveals the master key. The master key is verified via HMAC-SHA256 after reconstruction and is never persisted in plaintext.
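For intuition about secret splitting, here is the simplest special case: N-of-N XOR splitting, where every share is required. Real Shamir sharing interpolates a random polynomial so any M of N shares suffice; this sketch also takes caller-supplied bytes where production code would use a CSPRNG, and it omits the KMS encryption of each share.

```rust
/// N-of-N XOR secret splitting: the degenerate case of Shamir sharing in
/// which all shares are required to reconstruct.
fn split_xor(secret: &[u8; 32], random_shares: Vec<[u8; 32]>) -> Vec<[u8; 32]> {
    // The final share is secret XOR all random shares, so XORing every
    // share together recovers the secret.
    let mut last = *secret;
    for share in &random_shares {
        for i in 0..32 {
            last[i] ^= share[i];
        }
    }
    let mut shares = random_shares;
    shares.push(last);
    shares
}

fn reconstruct_xor(shares: &[[u8; 32]]) -> [u8; 32] {
    let mut secret = [0u8; 32];
    for share in shares {
        for i in 0..32 {
            secret[i] ^= share[i];
        }
    }
    secret
}
```

Any strict subset of the shares is indistinguishable from random, which is the property that lets each share be entrusted to a different KMS key without any single key compromise revealing the master key.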

Where protections end. If an attacker has root access to the host, they can read process memory directly, attach a debugger, or replace the binary. Core dump protection and memory zeroization raise the bar but cannot prevent a root-level attacker from extracting keys from a running process. Use dedicated hosts, hardware security modules, or confidential computing for defense in depth.

Compromised KMS keys. If all M-of-N KMS keys used for Shamir share encryption are compromised, the attacker can reconstruct the master key and decrypt all validator private keys stored in DynamoDB. Distribute KMS keys across separate AWS accounts with independent access controls to limit blast radius.

Logic bugs in Ethereum consensus. The signer computes signing roots and domains according to the Ethereum consensus specification. It validates the genesis validators root to prevent cross-network replay. However, it relies on the correctness of the consensus spec implementation (via Lighthouse types) and the fork choice provided by the validator client.

Compromised anti-slashing database. If an attacker can modify the anti-slashing database (truncate history, lower watermarks), the signer may produce slashable signatures. Protect database access with network isolation, authentication, and backups.

Protection Summary

| Protection | Mechanism | Configuration |
| --- | --- | --- |
| Memory zeroization | `Zeroizing<T>` on all secret key bytes, tokens, and master key | Always on |
| Core dump prevention | `prctl::set_dumpable(false)` at startup (Linux) | Always on |
| Memory locking | `mlockall(MCL_CURRENT \| MCL_FUTURE)` (Linux) | Always on (requires `CAP_IPC_LOCK`) |
| Unsafe code prohibition | `#![deny(unsafe_code)]` globally | Always on |
| Token hashing | HMAC-SHA256 with per-process random secret | When auth policies configured |
| Constant-time token comparison | `subtle::ConstantTimeEq` | When auth policies configured |
| Anti-slashing checks | Check-before-sign with PostgreSQL, SQLite, or DynamoDB | Requires backend configuration |
| Request backpressure | `LoadShedLayer` + `ConcurrencyLimitLayer` + `TimeoutLayer` | Always on (configurable limits) |
| Network GVR validation | Rejects signing requests with wrong genesis validators root | Always on |
| Master key splitting | Shamir M-of-N across multiple KMS keys | When DynamoDB key source configured |
| Master key integrity | HMAC-SHA256 verification after Shamir reconstruction | When DynamoDB key source configured |
| Seal/unseal state machine | Master key zeroized when sealed; signing gated on Unsealed/AwaitingRotation | When DynamoDB key source configured |
| Graceful shutdown | 25s connection drain with SIGTERM handling | Always on |

For deployment-specific hardening — network exposure, file permissions, systemd sandboxing, and Docker security — see the Production Hardening guide.