Fly.io to On-Prem Kubernetes Journey

Everything deploys flawlessly on localhost—until the internet bites. One developer's path from Fly.io's cheap thrills to a free on-premise Kubernetes cluster reveals the hidden costs of managed platforms.

Fly.io's Free Ride Ends: One Dev's Leap to Zero-Cost On-Prem Kubernetes — theAIcatchup

Key Takeaways

  • Fly.io excels for stateless dev but chokes on stateful services without operators.
  • Managed Kubernetes like Linode's costs $46+/month; on-prem k3s drops to zero with borrowed hardware.
  • Full control trumps convenience for solos—k3s enables enterprise features on hobby rigs.

The cursor blinked accusingly on my terminal at midnight, Fly.io’s logs screaming about another ScyllaDB hiccup.

On-premise Kubernetes. That’s where this story lands — after a whirlwind through serverless dreams and managed clusters that promised the moon but delivered mostly headaches for anything with state.

Look, containers changed everything back in Docker’s early days. You package your app, its deps, runtime — poof, it runs anywhere. Efficient, kernel-sharing magic versus bloated VMs. But scale hits, and suddenly you’re herding cats across machines. Crashes? Updates? Routing? Chaos.

Kubernetes rides in like the orchestra conductor we never knew we needed.

Pods. Deployments. Services. Ingress. You declare your desires — three replicas, always up, port 80 exposed — and it makes it so. Pods die? Respawned. Nodes fail? Rescheduled. Desired state over manual drudgery.
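That declare-and-reconcile loop fits in a few lines of YAML. A minimal sketch, assuming a working cluster and `kubectl`; the names (`web`, the `nginx` image) are illustrative, not from the original post:

```shell
# Declare desired state: three replicas, always up, port 80 exposed.
# Kubernetes reconciles toward it — kill a pod and it respawns.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired state: three replicas
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: nginx:1.27
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: {app: web}
  ports:
  - port: 80                  # exposed on port 80
EOF
```

Delete a pod afterwards (`kubectl delete pod -l app=web --wait=false`) and watch the controller bring the count back to three.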

Our hero here kicked off with Fly.io for the backend. Dead simple. Push code, deploys fly (pun intended), stayed under $5/month in dev. Frontend? Vercel, GitLab CI magic, still rocking it today. Stateless bliss.

But stateful services — ScyllaDB, NATS — wrecked the party.

Here’s the rub: proper stateful ops demand Kubernetes operators. Custom controllers for bootstrapping clusters, repairs, scaling, backups. Scylla’s got one. NATS too. Fly.io? No control plane access. You’re manually wrestling lifecycles, fixing topology changes by hand. More time babysitting the platform than building product.

“On platforms like this, running Kubernetes operators isn’t possible because you don’t have access to a Kubernetes control plane. As a result, lifecycle management for stateful systems must be handled manually.”

The author’s words from the trenches nail it. Time sink.

Why Jump to Linode’s Managed Kubernetes?

Credit lured ‘em in: Linode’s $100 signup freebie. Three worker nodes — 1 CPU, 2GB RAM, 50GB each — plus load balancer. Control plane? Free. Bill: $36 nodes + $10 LB = $46/month.

Terraform scripted it all. IaC gold: versioned, reproducible. Cluster up, operators unleashed. Scylla hummed, NATS scaled. Proper Kubernetes at last.
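The Terraform side is compact. A hedged sketch assuming the `linode/linode` provider; the label, region, and version are placeholders, while `g6-standard-1` is Linode's 1 CPU / 2GB node class that lines up with the $12-per-node math:

```hcl
resource "linode_lke_cluster" "test" {
  label       = "test-cluster"
  k8s_version = "1.28"
  region      = "eu-central"

  pool {
    type  = "g6-standard-1"   # 1 CPU, 2GB RAM, 50GB — 3 x $12 = $36/mo
    count = 3
  }
}
```

`terraform apply`, fetch the kubeconfig, and the managed control plane is someone else's pager.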

Credit burned out. $46 for a test project? Ouch. Eyes turned inward.

An old IOI teacher tossed over three VMs: 8 cores, 8GB RAM, 50GB storage each. Free company infra. Game on.

k3s entered stage right — lightweight Kubernetes for edge, on-prem, and low-resource rigs. Full features, slashed overhead. No etcd bloat; SQLite by default. Boots on VMs like butter.

Full stack deployed: Scylla cluster, NATS, Postgres, Redis, backend services. Frontend stays Vercel. Zero dollars. Tradeoff? Ops sweat. No managed plane — infra breaks, you wrench.

But control. Pure, unadulterated control.

Is On-Premise k3s the Anti-Cloud Rebellion We Need?

Here’s my hot take, one the original tale glosses over: this echoes the pre-AWS ’90s server farms. Companies bolted racks in closets, ran their own NFS, Apache — total sovereignty, zero vendor lock. Then AWS seduced with elasticity, and we forgot the joy of owning the metal.

Today? Cloud bills balloon for stateful beasts. Operators shine on your turf, no platform handcuffs. k3s proves Kubernetes isn’t just for hyperscalers; it’s democratized for garages.

Yet — and it’s a big yet — you’re the pager god now. Node down? Datastore corruption? CNI snafus? Dive in. That teacher’s VMs? Gold ‘til the hardware flakes.

Short-term win: free ride. Long-term? Scale demands more iron, redundancy. But for indie devs, side hustles? Revolution.

The how: k3s install’s a curl | sh dream. Server on VM1, agents on VM2/3. Traefik ingress baked in. Longhorn for storage if you crave persistence beyond local disks.
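The curl-pipe-sh flow, sketched for one server and two agents per the k3s quick-start docs. Replace `SERVER_IP` with VM1's address; the token path is where k3s writes it by default:

```shell
# On VM1 (server): installs k3s and starts the control plane.
curl -sfL https://get.k3s.io | sh -
sudo cat /var/lib/rancher/k3s/server/node-token   # copy this join token

# On VM2 and VM3 (agents): join the cluster as workers.
curl -sfL https://get.k3s.io | \
  K3S_URL="https://SERVER_IP:6443" K3S_TOKEN="<token>" sh -
```

A minute later, `sudo k3s kubectl get nodes` on the server should list all three.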

Networking? Flannel or Cilium — pick your poison. Cert-manager for TLS. Helm charts for operators. Scylla Operator via YAML: ScyllaCluster manifests crank out rack-aware rings.

NATS? JetStream ops handle persistence. All declarative. GitOps with Flux or ArgoCD? Chef’s kiss.
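The operator installs above reduce to a handful of Helm commands. A hedged sketch: the chart repo URLs are the upstream defaults at time of writing, the flag names (notably the JetStream toggle) vary by chart version, so check each project's docs:

```shell
# cert-manager first — the Scylla Operator's admission webhooks want TLS certs.
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true

# Scylla Operator and NATS from their upstream chart repos.
helm repo add scylla https://scylla-operator-charts.storage.googleapis.com/stable
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm repo update

helm install scylla-operator scylla/scylla-operator \
  --namespace scylla-operator --create-namespace
helm install nats nats/nats \
  --namespace nats --create-namespace \
  --set config.jetstream.enabled=true   # flag name depends on chart version
```

From there, GitOps means committing these as HelmRelease (Flux) or Application (ArgoCD) manifests instead of running them by hand.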

Cost math seals it. Linode $46. Fly.io? Dev cheap, prod spikes with state. On-prem: $0 + time. Time’s cheap when building.

Skeptical? Cloud PR spins ‘serverless forever.’ Nah. Stateful reality bites back. This shift — localhost to edge — it’s architectural rebellion against abstraction layers that hide too much.

Prediction: as AI models chew RAM, on-prem k3s clusters sprout in basements. Colocation deals boom. Cloud? For bursty crap only.

One glitch: backups. Velero to S3 — wait, on-prem S3? MinIO. Full circle.
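Velero treats MinIO as S3-compatible storage, per Velero's own MinIO example. A sketch with illustrative bucket and namespace names and a plugin version you should pin to whatever is current:

```shell
# Point Velero at an in-cluster MinIO instead of real S3.
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket velero-backups \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --backup-location-config \
    region=minio,s3ForcePathStyle="true",s3Url=http://minio.minio.svc:9000
```

`--use-volume-snapshots=false` because on-prem disks usually lack a snapshot API; Velero's file-system backup covers the volumes instead.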

Monitoring? Prometheus stack. Grafana dashboards. You’re in deep now.
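The monitoring stack is the usual one-liner, assuming Helm and the community chart repo:

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Grafana ships inside the release; port-forward to peek at dashboards.
kubectl -n monitoring port-forward svc/monitoring-grafana 3000:80
```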

Worth it? For control freaks, yes. The original post teases a k3s deep-dive — bet it’ll spill setup beans.

But here’s the why underneath: platforms optimize for their graph, not yours. Stateful? They falter. Own the cluster, own your fate.

What About Security and Scale?

On-prem whispers freedom, shouts responsibility.

RBAC tight. Network policies block east-west chatter. Falco for runtime threats.
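"Block east-west chatter" starts with a default-deny NetworkPolicy, then explicit allows layered on top. A sketch for one hypothetical `prod` namespace:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod
spec:
  podSelector: {}                  # selects every pod in the namespace
  policyTypes: [Ingress, Egress]   # no rules listed = deny both directions
EOF
```

One catch that ties back to the CNI choice above: enforcement needs a CNI that implements NetworkPolicy — Cilium does, stock Flannel does not.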

Scale? Add VMs. k3s autoscales nodes via cluster-autoscaler hacks. Not EKS smooth, but doable.

Vercel frontend proxies ingress — hybrid heaven.

Critique time: Fly.io’s great for prototypes. Don’t hate. But a stateful pivot demands real Kubernetes.



Frequently Asked Questions

What is k3s and why use it for on-premise Kubernetes?

k3s is a certified, lightweight Kubernetes distro perfect for on-prem or edge — strips etcd overhead, runs on modest hardware like 8GB VMs, yet full-featured for operators.

How much does on-premise Kubernetes cost vs Fly.io?

Zero if you’ve got hardware access; Fly.io free for light dev but stateful services balloon costs and ops pain without full K8s.

Can I run ScyllaDB operator on Fly.io?

No — no control plane access means manual stateful management; proper operators need your own cluster like k3s.

Written by Sarah Chen

AI research editor covering LLMs, benchmarks, and the race between frontier labs. Previously at MIT CSAIL.



Originally reported by Dev.to
