Picture this: you’re the on-call engineer, sifting through CSI driver logs after a scare, only to find live service account tokens dumped right there, ripe for exploitation. Kubernetes v1.35 changes that—for CSI driver maintainers and cluster ops folks everywhere—by sliding those tokens into the secure secrets field where they belong.
It’s not flashy. But damn, it’s the kind of architectural tweak that prevents quiet disasters.
Why CSI Tokens Were a Ticking Log Bomb
Tokens landed in volume_context. Worked fine—until it didn’t. CSI specs scream that volume_context ain’t for secrets; it’s public-ish, and protosanitizer tools skip sanitizing it. Boom: logs spew tokens.
“This happened with CVE-2023-2878 in the Secrets Store CSI Driver and CVE-2024-3744 in the Azure File CSI Driver.”
Real breaches. Real CVEs. And every driver scrambling with custom sanitizers? Messy, inconsistent.
Kubernetes couldn’t just flip the switch—too many drivers expect the old spot. So, v1.35 drops a beta opt-in: serviceAccountTokenInSecrets: true in your CSIDriver spec.
Set it false (default)? Tokens stay in volume_context, business as usual. Flip to true? Straight to secrets in NodePublishVolumeRequest. Clean. Spec-compliant. No logs leaking.
How Does This Opt-In Actually Work?
Grab your CSIDriver YAML. Add serviceAccountTokenInSecrets alongside tokenRequests in the spec:
```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example-csi-driver
spec:
  tokenRequests:
    - audience: "example.com"
      expirationSeconds: 3600
  serviceAccountTokenInSecrets: true
```
Kubelet and apiserver ship with the CSIServiceAccountTokenSecrets feature gate on by default at beta, because the default behavior preserves the status quo.
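If you want to pin the gate explicitly during a staged rollout (or switch it off while debugging), it behaves like any other feature gate. A minimal sketch, assuming you manage the kubelet via a KubeletConfiguration file (the file path varies by distro):

```yaml
# KubeletConfiguration fragment. featureGates is the standard override
# mechanism; at beta the gate is already on, so this only matters if you
# need to force a non-default value.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIServiceAccountTokenSecrets: true
```

On the API server side, the equivalent is the --feature-gates=CSIServiceAccountTokenSecrets=true flag.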
Driver code? Simple fallback:
```go
import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

const serviceAccountTokenKey = "csi.storage.k8s.io/serviceAccount.tokens"

// getServiceAccountTokens prefers the secrets field (the v1.35 opt-in path)
// and falls back to volume_context for clusters that haven't opted in yet.
func getServiceAccountTokens(req *csi.NodePublishVolumeRequest) (string, error) {
	if tokens, ok := req.Secrets[serviceAccountTokenKey]; ok {
		return tokens, nil
	}
	if tokens, ok := req.VolumeContext[serviceAccountTokenKey]; ok {
		return tokens, nil
	}
	return "", fmt.Errorf("service account tokens not found")
}
```
Ship that fallback now—even pre-v1.35 clusters love it. Backward compatible gold.
Rollout? Prep driver with fallback (do it yesterday). Upgrade cluster. Edit CSIDriver to opt-in. Volumes republish cleanly; tokens shift sans drama.
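The opt-in edit itself is a one-liner; a sketch, assuming your CSIDriver object is named example-csi-driver (swap in your own):

```shell
# Opt the driver into secrets-based token delivery. A merge patch leaves
# the rest of the spec intact. "example-csi-driver" is a placeholder.
kubectl patch csidriver example-csi-driver --type=merge \
  -p '{"spec":{"serviceAccountTokenInSecrets":true}}'

# Confirm the field took:
kubectl get csidriver example-csi-driver \
  -o jsonpath='{.spec.serviceAccountTokenInSecrets}'
```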
Here’s my take, the one you won’t find in release notes: this echoes the 2019 bound service account token pivot. Back then, Kubernetes ditched static, long-lived tokens for request-bound ones, curbing credential-theft risk. Now? Same playbook for CSI plumbing. It’s not hype; it’s Kubernetes’ secret sauce: iterative security that scales without forklift upgrades. Bold call: within a year of v1.35 shipping, 80% of production CSI drivers flip this on, gutting a class of token-exposure CVEs.
But wait—corporate spin? Nah, this is pure SIG Storage pragmatism. No vaporware promises, just code that works.
Why Does Kubernetes v1.35 Matter for CSI Driver Maintainers?
You’re maintaining GCE PD, EBS, or some niche filer? This slashes your toil. No more “did we sanitize enough?” paranoia. Specs align; protosanitizer handles secrets natively.
Clusters on 1.35+? Zero behavior shift unless you opt-in. Safe harbor.
And for end-users—your StatefulSets, PVCs hum along. Workload identity tokens flow securely to CSI for cloud auth. No token rot in etcd or logs.
Skeptical? Test it. Spin a minikube, deploy a patched driver, watch tokens vanish from context, land in secrets. gRPC logs? Sanitized bliss.
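You can also sanity-check the fallback logic before touching a cluster. A minimal, self-contained sketch; publishRequest and tokensFrom here are stand-ins for the real csi.NodePublishVolumeRequest and your driver’s helper, just to show the lookup order:

```go
package main

import "fmt"

const serviceAccountTokenKey = "csi.storage.k8s.io/serviceAccount.tokens"

// publishRequest is a stand-in for csi.NodePublishVolumeRequest: only the
// two maps the fallback logic cares about.
type publishRequest struct {
	Secrets       map[string]string
	VolumeContext map[string]string
}

// tokensFrom mirrors the driver-side fallback: check secrets first, then
// volume_context, then fail.
func tokensFrom(req publishRequest) (string, error) {
	if t, ok := req.Secrets[serviceAccountTokenKey]; ok {
		return t, nil
	}
	if t, ok := req.VolumeContext[serviceAccountTokenKey]; ok {
		return t, nil
	}
	return "", fmt.Errorf("service account tokens not found")
}

func main() {
	// v1.35 cluster with the opt-in set: the token arrives in Secrets.
	optIn := publishRequest{Secrets: map[string]string{serviceAccountTokenKey: "secrets-path"}}
	// Older cluster (or no opt-in): the token still rides in VolumeContext.
	legacy := publishRequest{VolumeContext: map[string]string{serviceAccountTokenKey: "context-path"}}

	for _, req := range []publishRequest{optIn, legacy} {
		t, err := tokensFrom(req)
		fmt.Println(t, err)
	}
}
```

Same helper, both delivery paths; that symmetry is why shipping the fallback early is safe.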
Look, Kubernetes ships 1.35 amid AI hype storms, but this? Quiet heroism in storage identity. The how is opt-in elegance; the why is battle-tested from CVEs. Architectural shift: CSI’s treating identity like the crown jewels it is.
Drivers lag? Vendor pressure mounts—expect patches in CSI operator waves soon. (Secrets Store already stung; they’ll hustle.)
Will Kubernetes 1.35 Break My CSI Setup?
Short answer: nope. Defaults hold the line. Fallback logic future-proofs.
But if you’re building a driver from scratch, embrace secrets from day one. volume_context was always the hack.
Deeper why: CSI spec’s secrets field sat empty for years, begging this. TokenRequests (1.21 era) jammed it into context as a stopgap. v1.35 closes the loop.
Ops folks, audit your drivers post-upgrade: `kubectl get csidrivers -o yaml | grep serviceAccountTokenInSecrets`. Nothing back? You’re still on the old path; go poke prod.
This isn’t revolution. It’s evolution that spares you log-driven nightmares.
Frequently Asked Questions
What changed in Kubernetes v1.35 for CSI service account tokens?
CSI drivers can opt in to receiving tokens via the secrets field instead of volume_context, fixing the log-exposure problem behind CVEs like CVE-2023-2878.
How do I enable service account token secrets in my CSI driver?
Add serviceAccountTokenInSecrets: true to your CSIDriver spec alongside tokenRequests, after shipping fallback code in your driver.
Does Kubernetes 1.35 break existing CSI drivers?
No—defaults to old behavior. Feature gate’s on, but opt-in only. Add fallback for safety.