Bun Cgroup-Aware HardwareConcurrency on Linux

Bun just plugged a massive blind spot in container CPU reporting. Your Node-compatible apps in Docker now see the actual limits, not the host's full arsenal.


Key Takeaways

  • Bun now respects Linux cgroup cpu.max for accurate hardwareConcurrency in containers.
  • Fixes oversubscribed threads in Docker/k8s, boosting efficiency.
  • Unique edge: Echoes 2000s VM CPU billing scandals, pushing runtime maturity.

Containers lie.

Bun stops that — dead in its tracks.

Imagine firing up a JS server in a Docker container limited to two CPUs. Before this Bun PR, it’d glance at the host — say, a beefy 64-core monster — and think, “Sweet, I’ve got ‘em all.” Cue thread pools exploding, memory ballooning, OOM kills everywhere. But Jarred Sumner’s commit 1526b00 routes navigator.hardwareConcurrency (and os.availableParallelism()) through WebKit’s WTF::numberOfProcessorCores(). On Linux? It grabs sched_getaffinity plus cgroup cpu.max caps. Boom — your --cpus=2 container reports exactly 2. No more illusions.

Routes navigator.hardwareConcurrency (and therefore os.availableParallelism()) through WTF::numberOfProcessorCores() instead of an inline sysconf/sysctl. On Linux this picks up the new sched_getaffinity + cgroup cpu.max capping in WTF, so containers with --cpus=N report N instead of the host core count.

That’s the raw commit gold. Simple, but surgical.

The Container Deception That’s Plagued Runtimes

Here’s the thing. Back in Node’s early days — and Bun inherited this mess — core detection was lazy. A quick sysconf(_SC_NPROCESSORS_ONLN) or sysctl call. Fine on bare metal. Disaster in containers. Docker, Kubernetes, systemd slices — they all clamp your CPUs via cgroups v2’s cpu.max. But naive detection ignores that, reporting host totals. Your app spawns 64 threads on two cores. Thrashing. Inefficiency. Wasted cloud bills.

And it's not new. Flashback to the 2000s virtualization boom: VMware guests reported host CPU counts, and Oracle's per-core licensing billed full racks for tiny VMs. Lawyers got involved. My unique take? Bun's move echoes that reckoning. Runtimes must respect the cage they're in, or pay the perf penalty. Bun leads; Node's still playing catch-up.

Short fix, huge ripple.

Developers in serverless land (Lambda, Fly.io) or k8s clusters rejoice quietly. No more tuning hacks for “logical cores.”

How Bun Pulled This Off Without Breaking a Sweat

Zig and C++ bindings — Bun’s secret sauce. They expose WTF’s core counter, swapping out platform hacks for one clean call. Thread pools? Updated. Node OS APIs? Aligned. Even WebKit prebuilts bumped.

Under the hood: sched_getaffinity fetches your thread’s CPU mask. Cgroup awareness layers on cpu.max quota. It’s like giving your app X-ray vision into its resource jail. No syscalls per query — cached smartly.

Bunx it locally: bunx bun-pr 28801. Test in a constrained pod. Watch os.availableParallelism() spit truth.

But wait — PR had three test flakes. Jarred pushed anyway. That’s Bun: ship fast, fix later.

Why Does Bun’s CPU Fix Crush It for Containerized JS?

Picture JS as the web’s workhorse. Billions of requests, microservices galore. Node’s solid, but Bun claims roughly 3x faster cold starts, and its Zig core crushes parse times. Now add accurate parallelism? Chef’s kiss.

In a world of auto-scaling fleets, apps that self-throttle win. Oversubscribed threads? Latency spikes. Correct cores? Optimal worker pools, smoother autoscaling. Prediction: By 2025, Bun snags 20% of container JS workloads. Node maintainers will cherry-pick this — or fork shame.

Skeptical? Run the numbers. A 64-core host, 32 one-core containers. Pre-Bun: each spawns 64 workers, chaos. Post-Bun: 1 per pod, harmony.

Energy here — because this isn’t hype. It’s plumbing that unlocks scale.

Look, corporate spin calls this “enhanced detection.” Nah. It’s fixing a lie baked into POSIX-era APIs. Bun, built on WebKit’s JavaScriptCore, borrows from browser smarts: WTF already handles cgroup-aware core counting upstream. Smart borrow.

Will This Break Your Existing Bun Apps?

Nope. Fallbacks galore. Tests passed (mostly). But — side note — if you’re hardcoding core counts? Rethink that. Let the runtime lead.

Dense para time: This PR centralizes detection across web APIs, Node compat, threads; it unifies iOS/macOS/Windows too via WTF, slashing maintenance; expect ripple to Bun’s test suite, where flakey CI from core mismatches vanishes; and for edge runtimes? Imagine Cloudflare Workers auto-sizing to edge node slices — Bun could own that niche.

One sentence: Brilliant.

Bun vs Node: Container King Crowned?

Node’s os.cpus().length still enumerates host CPUs in v22. Bun laps it. Historical parallel? Firefox Quantum borrowing Servo’s parallel engine for a sudden perf leap. Bun does that for servers.

Critique: Bun’s PR log mentions Claude-generated fixes. AI assist? Fine, but humans land the punch.

Frequently Asked Questions

What does Bun’s cgroup-aware AvailableParallelism mean for Docker?

It ensures Bun apps in Docker containers (--cpus=N) detect exactly N cores, not the host total — preventing oversized thread pools and crashes.

Does Bun now outperform Node in Linux containers?

Yes, with accurate hardwareConcurrency + Bun’s baseline speed, expect better resource efficiency and lower latency in k8s/Docker setups.

How to test Bun’s new CPU core detection?

`docker run --cpus=2 oven/bun bun -e "console.log(navigator.hardwareConcurrency)"` — should log 2, not host cores.

Written by Priya Sundaram

Hardware and infrastructure reporter. Tracks GPU wars, chip design, and the compute economy.



Originally reported by Hacker News
