A Kubernetes Secret is not actually secret.
That’s a hard sentence to sit with, especially if you’ve been dutifully creating Secret objects and patting yourself on the back for not hardcoding credentials in your ConfigMap. The problem runs deeper than most teams realize, and it doesn’t get fixed by following the basic Kubernetes documentation. This post is about what actually works, at different scales, with honest tradeoffs for each approach.
A quick note on experience level: I’ve worked with Sealed Secrets and External Secrets Operator in production contexts and can speak to those from hands-on experience. HashiCorp Vault I’ve used in lab and staging environments. Where I’m drawing primarily on research rather than production war stories, I’ll say so.
The Problem With Default Kubernetes Secrets
Are Kubernetes Secrets Secure by Default? (The Short Answer)
No, Kubernetes Secrets are not secure by default because they are only Base64-encoded, not encrypted. Anyone with API access or access to the underlying etcd storage can easily decode them. To secure secrets, you must enable encryption at rest in etcd and use external management tools like HashiCorp Vault, Sealed Secrets, or AWS Secrets Manager.
Base64 is an encoding scheme, not an encryption scheme. It’s reversible by anyone with access to the encoded string. No key required.
# This is how "secret" your Kubernetes Secret actually is
echo "c3VwZXJzZWNyZXRwYXNzd29yZA==" | base64 --decode
# Output: supersecretpassword
The actual data lives in etcd, Kubernetes’ backing key-value store. By default on many cluster configurations, etcd stores data at rest in plain text. If an attacker compromises your etcd storage, they have everything: every secret in every namespace, in readable form.
What makes this particularly dangerous is the blast radius. Kubernetes Secrets can contain database passwords, API keys, TLS certificates, OAuth tokens, and service account credentials. A single etcd breach can expose your entire infrastructure.
Three Common Failure Modes
Committed secrets in Git. Even if the cluster is secure, secrets often leak at the source. A developer commits a .env file or a values.yaml with credentials. That file is now in version history forever, even if deleted later. Tools like git-secrets or trufflehog can detect this, but prevention is better than detection.
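Prevention can be as simple as a pre-commit check. Here is a deliberately naive sketch in shell; real scanners like git-secrets and trufflehog cover far more patterns and add entropy analysis, and the file path here is purely illustrative:

```shell
# Naive secret scan: flag lines that look like credentials.
# This only illustrates the idea; use git-secrets or trufflehog for real.
scan_for_secrets() {
  grep -nE 'AKIA[0-9A-Z]{16}|BEGIN (RSA |EC )?PRIVATE KEY|(password|secret|token)[[:space:]]*[:=]' "$1"
}

# Demo against a values file that should never be committed
printf 'db_host: localhost\npassword: hunter2\n' > /tmp/values-demo.yaml
scan_for_secrets /tmp/values-demo.yaml
# Output: 2:password: hunter2
```

Wire something like this into a pre-commit hook and the leak is caught before it ever reaches version history.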
Overly permissive RBAC. Kubernetes RBAC controls who can read secrets, but many teams configure RBAC too broadly. If a service account can get secrets in a namespace, and that pod gets compromised, the attacker can now read those secrets directly via the Kubernetes API.
# Check who can read secrets in your cluster
kubectl auth can-i get secrets --as=system:serviceaccount:default:my-service-account -n production
Unencrypted etcd. Plenty of production clusters, including at major companies, run etcd without encryption at rest. The Kubernetes documentation covers enabling it, but it’s an optional step that many teams skip.
Fixing the Basics: Encryption at Rest and Tight RBAC
Before reaching for external tooling, make sure the fundamentals are solid.
Enabling etcd Encryption at Rest
Kubernetes supports encrypting Secret data in etcd via an EncryptionConfiguration file passed to the API server. This requires changes to the API server configuration, so it’s easier in managed Kubernetes (EKS, GKE, and AKS all support encryption at rest natively) than in self-hosted clusters.
# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
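The key itself can be generated with standard tooling; aescbc expects 32 random bytes, Base64-encoded:

```shell
# Generate a 32-byte random key and Base64-encode it for the
# EncryptionConfiguration secret field
head -c 32 /dev/urandom | base64
```

Store the key material somewhere safer than the node’s filesystem if you can; with access to the config file, the encryption is only as good as the file permissions around it.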
Pass this config to the API server with --encryption-provider-config. After enabling it, you need to force-rewrite all existing secrets to encrypt them:
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
This is table stakes. It doesn’t make secrets management easy, but it means etcd data at rest is actually encrypted.
RBAC: Least Privilege for Secrets
The principle of least privilege applies directly to secrets access. Workloads should only be able to read the specific secrets they need, not all secrets in a namespace.
# Give a service account access to one specific secret
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: db-credentials-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-credentials"] # Specific secret, not wildcard
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: db-credentials-reader-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: api-service
    namespace: production
roleRef:
  kind: Role
  name: db-credentials-reader
  apiGroup: rbac.authorization.k8s.io
Naming specific secret resources instead of using wildcards is a small change with a significant security benefit.
Three Approaches to Real Secrets Management
With the basics covered, let’s talk about the tooling that actually makes secrets management sustainable.
Option 1: Sealed Secrets (Bitnami)
Sealed Secrets solves a specific problem cleanly: how do you store encrypted secrets safely in Git?
The setup involves two components. A controller runs in your cluster and holds a private key. A CLI tool called kubeseal uses the corresponding public key to encrypt secrets. The resulting SealedSecret object can be safely committed to Git. Only your cluster’s controller can decrypt it.
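The underlying model is ordinary asymmetric encryption. A conceptual sketch with openssl makes the trust boundary concrete; note this is not kubeseal’s actual wire format (Sealed Secrets uses a hybrid scheme, an AES session key wrapped with RSA-OAEP), just the core idea that sealing requires only the public key:

```shell
# Simulate the controller's key pair (conceptual only; kubeseal's real
# format wraps an AES session key with RSA-OAEP)
openssl genrsa -out /tmp/controller.key 2048 2>/dev/null
openssl rsa -in /tmp/controller.key -pubout -out /tmp/controller.pub 2>/dev/null

# Anyone holding the public key can "seal"...
printf 'supersecretpassword' |
  openssl pkeyutl -encrypt -pubin -inkey /tmp/controller.pub -out /tmp/sealed.bin

# ...but only the private key holder (the in-cluster controller) can unseal
openssl pkeyutl -decrypt -inkey /tmp/controller.key -in /tmp/sealed.bin
```

The sealed blob is safe to publish; the private key never leaves the cluster.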
# Install the controller (version 0.27.x as of early 2026)
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets sealed-secrets/sealed-secrets \
  --namespace kube-system \
  --version 2.16.2

# Install the kubeseal CLI
brew install kubeseal
Creating a sealed secret looks like this:
# Create a regular secret manifest (don't apply it)
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=supersecretpassword \
  --dry-run=client \
  -o yaml | \
kubeseal \
  --controller-name=sealed-secrets \
  --controller-namespace=kube-system \
  --format yaml > sealed-db-credentials.yaml
The resulting sealed-db-credentials.yaml is safe to commit. It’s encrypted with your cluster’s public key. No one reading it can recover the original values without access to the cluster controller’s private key.
When Sealed Secrets makes sense:
- Teams running GitOps workflows (ArgoCD, Flux) where everything lives in Git
- Smaller teams without existing cloud secret store infrastructure
- When you want encrypted secrets in version control without operational overhead
The honest tradeoffs:
Sealed secrets are cluster-specific. If you have multiple clusters, each cluster has its own key pair. A secret sealed for cluster A cannot be unsealed by cluster B without re-sealing. This creates operational friction in multi-cluster environments.
Key rotation is also something to plan for deliberately. Sealed Secrets supports key rotation, but it requires re-sealing all existing secrets after rotation. The controller keeps old keys to decrypt previously sealed secrets, but new sealing uses the current key. Track this process or it becomes a mess.
Option 2: External Secrets Operator (ESO)
If your organization already uses a cloud-native secret store (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, or HashiCorp Vault), External Secrets Operator (ESO) bridges those stores into native Kubernetes objects.
The model is clean: your secrets live in the external store as the source of truth. ESO syncs them into Kubernetes Secret objects on a configurable schedule. Your workloads consume standard Kubernetes Secrets, so nothing in your application changes.
# Install ESO (version 0.10.x as of early 2026)
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  --namespace external-secrets \
  --create-namespace \
  --version 0.10.7
Configure ESO to connect to AWS Secrets Manager:
# SecretStore defines the connection to the external store
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: aws-credentials
            key: access-key-id
          secretAccessKeySecretRef:
            name: aws-credentials
            key: secret-access-key
Then define what to sync:
# ExternalSecret pulls values from the store and creates a Kubernetes Secret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: db-credentials # Name of the Kubernetes Secret to create
    creationPolicy: Owner
  data:
    - secretKey: username # Key in the Kubernetes Secret
      remoteRef:
        key: production/db/credentials # Key in AWS Secrets Manager
        property: username
    - secretKey: password
      remoteRef:
        key: production/db/credentials
        property: password
ESO will create and maintain the Kubernetes Secret, refreshing it on the interval you define. When you rotate a secret in AWS Secrets Manager, ESO picks it up automatically on the next refresh cycle.
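Because the target is a plain Kubernetes Secret, workloads reference it the usual way. A container spec fragment, with names matching the example above:

```yaml
# Container env pulling from the ESO-managed Secret
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password
```

Nothing in the application knows or cares that AWS Secrets Manager is the source of truth.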
When ESO makes sense:
- Teams already invested in cloud-native secret stores
- Multi-cluster environments where secrets need to be consistent across clusters
- Organizations with compliance requirements for centralized secret management
- When secret rotation needs to happen at the source and propagate automatically
The honest tradeoffs:
You now have an external dependency. If your cloud secret store is unavailable, ESO can’t refresh secrets. Existing Kubernetes Secrets remain functional, but rotation events won’t propagate. Plan for this in your runbooks.
The example above uses static credentials for clarity. Production AWS setups should use IRSA (IAM Roles for Service Accounts) or EKS Pod Identity instead, which is more involved to configure but avoids the ironic situation of storing AWS credentials in a Kubernetes Secret just to reach your secrets store.
Option 3: HashiCorp Vault with Kubernetes Auth
Vault is the enterprise-grade choice. It’s purpose-built for secrets management, supports dynamic secrets (generating database credentials on-demand that expire), has sophisticated audit logging, and integrates with virtually every secret consumer you might have.
The Kubernetes auth method is particularly elegant. Instead of storing credentials to authenticate to Vault, pods authenticate using their Kubernetes service account JWT token. Vault validates the token with the Kubernetes API and returns a Vault token with appropriate policies.
# Enable the Kubernetes auth method in Vault
vault auth enable kubernetes

# Configure it with your cluster's details
vault write auth/kubernetes/config \
  token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  kubernetes_host="https://kubernetes.default.svc" \
  kubernetes_ca_cert="$(cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt)"

# Create a policy
vault policy write db-read - <<EOF
path "secret/data/production/db" {
  capabilities = ["read"]
}
EOF

# Bind the policy to a Kubernetes service account
vault write auth/kubernetes/role/api-service \
  bound_service_account_names=api-service \
  bound_service_account_namespaces=production \
  policies=db-read \
  ttl=1h
To inject secrets into pods, you have two options. The Vault Agent Injector uses annotations to inject a sidecar that fetches and renders secrets to the pod’s filesystem. The Secrets Store CSI Driver (covered below) provides an alternative approach.
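With the injector, the consuming pod opts in via annotations on its pod template. A minimal sketch, with the role and secret path matching the Vault commands above:

```yaml
# Pod template annotations for the Vault Agent Injector
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "api-service"
    # Renders the secret to /vault/secrets/db inside the pod
    vault.hashicorp.com/agent-inject-secret-db: "secret/data/production/db"
```

The injected sidecar authenticates with the pod’s service account token, fetches the secret, and keeps the rendered file up to date as leases renew.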
When Vault makes sense:
- Large teams or enterprises with compliance requirements (SOC 2, PCI-DSS, HIPAA)
- Multi-cloud or hybrid environments needing a single secrets authority
- When dynamic secrets are valuable (database credentials, cloud credentials that expire)
- Organizations that can absorb the operational overhead
The honest tradeoffs:
Vault is operationally significant. Running Vault in production means operating a highly available, highly sensitive service. It needs its own storage backend (Consul or integrated Raft), careful upgrade procedures, regular backup testing, and on-call capability. The documentation is excellent, but the learning curve is real.
I’ve run Vault in staging, not production at scale. Teams that run it successfully in production tend to have dedicated platform engineering resources. Trying to run Vault as a side project in a small team is a recipe for a 2 AM incident.
Option 4: Kubernetes Secrets Store CSI Driver
Worth mentioning as a complement to the above: the Secrets Store CSI Driver provides a standardized way to mount secrets from external stores directly into pod filesystems, without requiring the Kubernetes Secret object to exist at all.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: db-credentials-aws
  namespace: production
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "production/db/credentials"
        objectType: "secretsmanager"
        jmesPath:
          - path: "username"
            objectAlias: "db-username"
          - path: "password"
            objectAlias: "db-password"
Pods mount the secret via a CSI volume, and the values appear as files in the container filesystem. This avoids the Kubernetes Secret object entirely, which means the secret never lands in etcd. For high-compliance environments, this matters.
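A pod consumes this by declaring a CSI volume that references the SecretProviderClass. A sketch, where the container image is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-service
  namespace: production
spec:
  containers:
    - name: app
      image: my-app:latest # placeholder image
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: db-credentials-aws
```

At runtime the app reads /mnt/secrets/db-username and /mnt/secrets/db-password as ordinary files.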
Choosing the Right Approach
Here’s a practical decision framework:
Start here: Regardless of which approach you choose, encrypt etcd at rest and tighten RBAC first. These are free wins that require no additional tooling.
Team of 1-5, GitOps workflow, single cluster: Sealed Secrets. Low operational overhead, secrets safely in Git, integrates cleanly with ArgoCD or Flux.
Team of any size, existing cloud secret store (AWS/GCP/Azure): External Secrets Operator. You’re already paying for the secret store and managing secrets there. ESO bridges it into Kubernetes cleanly.
Large team, multi-cluster, compliance requirements, dedicated platform engineering: HashiCorp Vault. The operational overhead is justified at scale, and the features are unmatched.
High compliance, want to avoid secrets in etcd entirely: Secrets Store CSI Driver, paired with your cloud secret store or Vault as the backend.
Don’t over-engineer for your current scale. A five-person startup running Vault without platform engineering resources will spend more time keeping Vault running than building the actual product. Sealed Secrets or ESO will get you 90% of the security benefit at 10% of the operational cost.
Rotation and Audit Practices
The tool choice matters less than the operational practices. I’ve seen organizations with sophisticated secrets tooling that never rotate credentials and have no idea who accessed what.
Secret rotation is where most teams fall down. ESO makes rotation easier because rotating in the external store propagates automatically. Sealed Secrets requires deliberate effort: update the external store, re-seal, commit. Vault with dynamic secrets handles rotation by design. Whatever your tooling, document a rotation runbook and test it before you need it urgently.
Audit logging is non-negotiable for production. Kubernetes provides audit logging at the API server level. Configure it to capture secret access:
# audit-policy.yaml excerpt
rules:
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
At the Metadata level, you capture who accessed which secret and when, without logging the secret values themselves. This is usually sufficient for compliance and incident response. For regulated environments, you may need Request or RequestResponse level logging, which does capture values and requires careful handling.
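Audit events are JSON lines, so even plain text tools can answer “who touched which secret?” during an incident. A sketch against sample events (the log path and exact event shape vary with your audit backend, and these two records are fabricated for illustration):

```shell
# Two sample audit events: one secret access, one unrelated pod list
cat > /tmp/audit-sample.log <<'EOF'
{"kind":"Event","verb":"get","user":{"username":"system:serviceaccount:production:api-service"},"objectRef":{"resource":"secrets","namespace":"production","name":"db-credentials"}}
{"kind":"Event","verb":"list","user":{"username":"admin"},"objectRef":{"resource":"pods","namespace":"production"}}
EOF

# Filter down to secret access only
grep '"resource":"secrets"' /tmp/audit-sample.log
```

In practice you’d ship these events to a log pipeline and alert on anomalous readers, but the point stands: Metadata-level events carry the who, what, and when without the secret values.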
What least privilege actually means in practice: It’s not just about RBAC roles. It means periodically auditing which service accounts have access to which secrets and removing access that’s no longer needed. Teams accumulate permissions over time; they rarely clean them up. Add a quarterly review to your operations calendar.
A few practices worth making habitual:
- Never store secrets in ConfigMaps. ConfigMaps aren’t covered by Secret encryption at rest unless you explicitly add them to the EncryptionConfiguration, and nothing in Kubernetes treats their contents as sensitive.
- Rotate credentials after any team member departure or suspected compromise.
- Never log secret values, even in debug statements. This sounds obvious until you’re debugging a production issue at 3 AM.
- Test your secret rotation process in staging before relying on it in production.
The Right Level of Complexity
Kubernetes secrets management has a spectrum from “default, which isn’t great” to “HashiCorp Vault, which is excellent but demanding.” Most teams should be operating somewhere in the middle: etcd encryption enabled, RBAC tightened, and either Sealed Secrets or ESO in place.
The goal isn’t to implement the most sophisticated solution. It’s to close the gap between what Kubernetes provides by default and what your threat model actually requires. For most teams, that gap is closable without running a dedicated secrets platform.
Start with the basics, pick the external tool that matches your existing infrastructure, and invest the time you save from good tooling into solid rotation and audit practices. That combination beats a fancy secrets platform with poor operational hygiene every time.
What’s your current setup for Kubernetes secrets? Have you hit the limits of Sealed Secrets at scale, or found a particularly clean ESO configuration? I’m always interested in how teams are solving this in practice. Find me on X or LinkedIn.