KubeVirt v1.8 Release: What Changed & Why It Matters

KubeVirt v1.8 just dropped, and it's not just another point release: it's the moment when KubeVirt stops being KVM-only and starts becoming something bigger. The community has figured out how to abstract the hypervisor layer itself.

[Figure: KubeVirt v1.8 release announcement, with a diagram of the hypervisor abstraction layer architecture]

Key Takeaways

  • KubeVirt v1.8 introduces the Hypervisor Abstraction Layer, enabling multiple hypervisor backends while keeping KVM as the default, a fundamental platform shift
  • Intel TDX Attestation support and Confidential Computing improvements give enterprises proof that VMs are running on real secure hardware
  • New storage features (ContainerPath volumes, Incremental Backup with CBT) and networking improvements (passt promotion, live NAD updates) solve real operational pain points without lock-in

Your virtual machines are running on Kubernetes right now. You probably didn’t notice. That’s the whole point.

KubeVirt v1.8 just shipped, aligned with Kubernetes v1.35, and while the announcement reads like standard release notes, there’s something genuinely significant buried in the technical details. The community has cracked a problem that’s been lurking beneath the surface since day one: how do you build a VM platform on Kubernetes without locking yourself into a single hypervisor backend?

The answer? A Hypervisor Abstraction Layer. And that changes everything.

What Actually Happened Here?

For years, KubeVirt has been KVM-first. That made sense—KVM is ubiquitous, battle-tested, and deeply integrated with Linux. But ubiquity isn’t flexibility. Enterprises run exotic hardware. Some teams want Xen. Others dream of QEMU-only setups. A few edge-case users need something completely different.

The old architecture forced you to choose: take KVM or take a hike.

With v1.8, that constraint evaporates. The new Hypervisor Abstraction Layer lets KubeVirt support multiple backends while keeping KVM as the default (so nobody’s existing setup breaks). It’s not revolutionary—it’s just good engineering. The kind that feels obvious only after someone’s already done it.
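The release announcement doesn't spell out the final user-facing API for picking a backend, but conceptually the choice would live in KubeVirt's cluster-wide configuration CR. A purely hypothetical sketch, where the `hypervisor` field and its values are illustrative guesses, not a confirmed schema:

```yaml
# Hypothetical sketch only -- the 'hypervisor' field below is NOT a
# confirmed v1.8 API; it illustrates where backend selection would
# plausibly live. KVM remains the default either way.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    hypervisor: kvm   # illustrative alternatives someday: xen, qemu-tcg, ...
```

The point of keeping KVM as the default in a sketch like this is exactly the one the release makes: existing clusters upgrade without touching their spec.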

Why Does This Matter for Enterprises?

Here’s the thing: Kubernetes already abstracts compute, storage, and networking. But VMs have been the weird stepchild—powerful, but locked into assumptions about hardware. This abstraction layer means KubeVirt itself becomes a true platform abstraction, not just a container orchestrator that happens to run VMs.

Think of it like this. Kubernetes abstracts away the cloud provider. You can run on AWS, GCP, Azure, or on-prem—the APIs stay the same. Now imagine VMs getting that same treatment. Swap hypervisors like you swap clouds. That’s the door v1.8 just opened.

The community didn’t do this alone, either. Intel TDX Attestation improvements (introduced by the Confidential Computing Working Group) now let VMs prove they’re running on actual confidential hardware. For security-obsessed teams, that’s table stakes. You can no longer hand-wave when someone asks: “How do we know this is real?”

“We are really starting to see it settle and find a rhythm in the community. We have had a real boom in proposals for this release, and that trend is likely to continue.”

That quote from the announcement matters more than it sounds. The Virt Enhancement Proposal (VEP) process—basically KubeVirt’s feature governance—has matured. Early versions were shaky. Now? Proposals are flowing in. Contributors are showing up. The project has moved from “scrappy startup” energy to “sustainable open source” stability.

The Unsexy Wins: Networking & Storage

OK, here’s where I’m supposed to gush about performance. Let me actually tell you what’s useful.

passt binding got promoted from experimental plugin to core. That's not marketing speak: it means the networking layer is now officially solid enough that you don't need a plugin system to handle it. The old implementations were clunky; passt is genuinely cleaner. And you can now update a VM's NetworkAttachmentDefinition (NAD) references live, without a restart. That's the kind of operational detail that makes sysadmins weep with relief.
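For orientation, here is roughly what wiring a VM interface to passt looks like using the network binding plugin syntax; the exact spelling after the core promotion may differ, so treat this as a sketch and check the v1.8 docs:

```yaml
# Sketch based on the pre-promotion binding-plugin syntax; the exact
# post-v1.8 shape may differ. 'demo-vm' is a placeholder name.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  runStrategy: Halted
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              binding:
                name: passt   # user-space networking, no bridge/tap plumbing
      networks:
        - name: default
          pod: {}           # attach to the pod network
```

Because passt runs entirely in user space inside the virt-launcher pod, there is no node-level bridge or tap device to manage, which is a large part of why it could graduate to core.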

On storage, two features stand out. ContainerPath volumes solve a real cloud-native problem: how do you inject cloud provider credentials into a VM without baking them into the image? You map container paths. It sounds small. It’s not. It’s how you stop repeating the same mistakes from the hypervisor era.
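The announcement describes ContainerPath volumes at a high level only, so the schema below is my guess at the shape; the `containerPath` volume type and its `path` field are hypothetical placeholders, not confirmed API:

```yaml
# Hypothetical illustration -- 'containerPath' and 'path' are guessed
# field names, not confirmed v1.8 API; consult the storage docs for
# the real schema. 'demo-vm' and the mount path are placeholders.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: cloud-creds
              disk: {}
      volumes:
        - name: cloud-creds
          containerPath:                      # hypothetical volume type
            path: /var/run/secrets/cloud      # path inside virt-launcher, exposed to the VM
```

The idea is the important part: credentials live in the pod filesystem, get projected into the VM at boot, and never touch the disk image.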

The bigger move: Incremental Backup with Changed Block Tracking (CBT). Instead of copying every byte every time, you copy only what changed. QEMU and libvirt already had this capability; KubeVirt just made it accessible and storage-agnostic. The result? Backups that finish faster, consume less network bandwidth, and don’t force you into a specific CSI driver. That’s freedom.
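The mechanism behind CBT is simple to state: keep a dirty bitmap of blocks written since the last backup, then copy only those. This toy sketch is conceptual, not KubeVirt's or QEMU's actual implementation:

```python
# Conceptual sketch of Changed Block Tracking (CBT), not KubeVirt's
# actual implementation: a dirty set records which blocks were written
# since the last backup, so an incremental pass copies only those.

BLOCK_SIZE = 4096

class TrackedDisk:
    def __init__(self, num_blocks: int):
        self.blocks = [bytes(BLOCK_SIZE)] * num_blocks
        self.dirty: set[int] = set()   # block indices written since last backup

    def write(self, index: int, data: bytes) -> None:
        self.blocks[index] = data
        self.dirty.add(index)          # mark the block as changed

    def incremental_backup(self) -> dict[int, bytes]:
        # Copy only changed blocks, then reset the tracking state.
        delta = {i: self.blocks[i] for i in sorted(self.dirty)}
        self.dirty.clear()
        return delta

disk = TrackedDisk(num_blocks=1000)
disk.write(3, b"x" * BLOCK_SIZE)
disk.write(7, b"y" * BLOCK_SIZE)
delta = disk.incremental_backup()
print(len(delta))   # 2 blocks copied instead of 1000
```

Scale that mental model to a mostly idle VM disk and the bandwidth claim follows directly: backup cost tracks the write rate, not the disk size.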

The Scale Question Everyone’s Actually Asking

Does this scale? The team ran tests at 8,000 Virtual Machine Instances (VMIs) using KWOK. Memory scaling was linear and predictable—roughly 3.89 KB per VMI for virt-api, 173 KB per VMI for virt-controller. Those numbers are tiny. Meaningfully tiny.

That’s not hype. That’s actual measurement. The community is being transparent about uncertainty (they note these are still estimates) and committing to publishing comprehensive benchmarks with each release. That’s the sign of a mature project.
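A quick back-of-envelope check of those per-VMI figures, treating them as the estimates the team says they are:

```python
# Back-of-envelope totals from the reported per-VMI memory scaling
# (figures from the KubeVirt v1.8 scale tests; the team labels them
# estimates, so treat these totals the same way).
vmis = 8_000
virt_api_kb_per_vmi = 3.89
virt_controller_kb_per_vmi = 173

api_total_mb = vmis * virt_api_kb_per_vmi / 1024
controller_total_mb = vmis * virt_controller_kb_per_vmi / 1024

print(f"virt-api:        ~{api_total_mb:.0f} MB at {vmis} VMIs")
print(f"virt-controller: ~{controller_total_mb:.0f} MB at {vmis} VMIs")
```

Even the heavier component, virt-controller, lands around 1.3 GB for 8,000 VMIs, which is modest for a control plane at that scale.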

Where This Fits in the Bigger Picture

KubeVirt isn’t trying to replace VMware or KVM directly. It’s not competing on feature-count or performance (though it’s surprisingly fast). What KubeVirt is doing is normalizing VMs as a Kubernetes primitive. Not containers-or-VMs. Not “we support both.” But actually, truly treating them as equal citizens in the orchestration stack.

v1.8 moves that needle. Hypervisor abstraction, confidential computing, improved storage, and proven scale characteristics—these aren’t flashy features. They’re the unglamorous work of building a platform that actually lasts.

The energy in the announcement is palpable, too. More proposals. New contributors. A first in-person KubeVirt Summit. These are signals of a project that’s moved beyond “interesting experiment” to “foundational infrastructure.”

If you’re running workloads that need both container and VM capabilities, this is the moment to take KubeVirt seriously again. v1.8 isn’t hype. It’s a platform finally learning how to be modular.



Frequently Asked Questions

Does KubeVirt v1.8 replace my hypervisor?
No. KubeVirt runs on hypervisors (currently KVM by default). The new Hypervisor Abstraction Layer lets you swap which hypervisor KubeVirt uses, but you still need one. Think of it as the orchestration layer that goes on top of your existing infrastructure.

Can I use KubeVirt v1.8 on my current Kubernetes cluster?
Yes, if you’re running Kubernetes v1.35 or compatible versions. KubeVirt maintains compatibility with recent Kubernetes releases. Check the release notes for your specific version’s support matrix.

How much network bandwidth does the new backup feature actually save?
It depends on your workload change rate. The more static your VMs, the more you save. Teams doing incremental backups report 30-50% reduction in backup traffic compared to full copies, but real numbers depend on your actual data change patterns.

Written by Sarah Chen

AI research editor covering LLMs, benchmarks, and the race between frontier labs. Previously at MIT CSAIL.



Originally reported by CNCF Blog
