Everyone’s been chanting ‘cloud forever’ like it’s the tech gospel. Azure, AWS, endless scaling dreams — until the spreadsheets hit. Your CTO stares at those egress fees, latency gripes from Nordic winters, and suddenly: on-prem’s back, baby. But not the old crusty way. No. A Docker-and-Ansible on-prem stack flips the script, turning weekend warriors into infra gods.
Picture this: four VMs humming like a well-oiled spaceship engine — .NET APIs, Angular fronts, Postgres breathing easy, monitoring that actually works. Zero cloud tethers. One ansible-playbook command, and boom — reproducible magic. I did it for a client fleeing Azure. Changed everything.
What Was Everyone Expecting — And Why This Shatters It?
Cloud hype promised utopia. Infinite resources! Pay-as-you-go bliss! But reality? Bill shock. Data sovereignty snarls in Europe. Downtime when the datacenter coughs. Folks figured on-prem meant hiring a graybeard for months of YAML hell.
Wrong.
Docker containers — those portable wonder-boxes — plus Ansible’s automation wizardry make it trivial. It’s like giving kids lightsabers: powerful, precise, and fun. No more ‘it works on my machine’ excuses. This stack? Anyone with git pull can rebuild it. That’s the shift — from fragile pets to immortal cattle, but yours to own.
Here’s the gem from the blueprint that hooked me:
> The question is not whether Docker and Ansible are the right tools. They are, for 90% of Nordic SMBs running steady-state workloads.
Spot on. Steady-state? Think e-commerce backends, CRMs — not TikTok-scale chaos.
The Four-VM Fortress: Why Not One Giant Beast?
One server sounds simple. Tempting, right? Load it up, call it done.
But here’s the rub — or rather, the crash. Resource fights. One leaky Java heap starves your DB. Debugging? Nightmare alley at 3 AM.
Split it. Four VMs. Isolation like moats around castles.
| VM | Role | What Runs Here |
|---|---|---|
| DEV | Development | App containers (dev config), dev database |
| TEST | Staging / QA | App containers (staging config), test DB, test runners |
| PROD | Production | App containers (prod config), prod DB, Nginx + SSL |
| TOOLS | Shared tooling | Harbor registry, SonarQube, Grafana + Loki, CI/CD, Vaultwarden |
Deliberate. DEV implodes? Prod yawns. Tools glitch? Prod untouched. Ansible clones the OS base across all — same packages, SSH locks, firewalld shields. Boring? Yes. Bulletproof? Absolutely.
Hardware whisper: Rocky Linux 9 (CentOS heir, 10-year life). 4-8 cores, 16-32GB RAM per VM. €20k for four Dell PowerEdges — laughs at cloud bills for years. 1Gbps switch, VLANs for internal chatter. Nginx on PROD slurps external hits, spits SSL sweetness.
Ansible’s Secret Sauce: No More Midnight YAML Hunts
Ever scrambled for that one playbook at 2 AM? Ansible layout fixes it.
```
infrastructure/
├── ansible.cfg
├── inventory/
│   ├── hosts.yml
│   └── group_vars/
│       ├── all.yml
│       ├── dev.yml
│       etc.
├── playbooks/
│   ├── site.yml   # The nuke button
│   └── …
└── roles/
    ├── base/
    etc.
```
`hosts.yml`? Dead simple. Groups for dev, test, prod, tools. IP-mapped bliss.
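A minimal sketch of what that inventory could look like — group names match the blueprint, but the hostnames and IPs here are purely illustrative:

```yaml
# inventory/hosts.yml: illustrative layout; swap in your own VLAN addresses
all:
  children:
    dev:
      hosts:
        dev01:
          ansible_host: 10.0.10.11
    test:
      hosts:
        test01:
          ansible_host: 10.0.10.12
    prod:
      hosts:
        prod01:
          ansible_host: 10.0.10.13
    tools:
      hosts:
        tools01:
          ansible_host: 10.0.10.14
```

One file, four groups, four moats. Target `prod` alone with `--limit prod` when you need surgical runs.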
Base role — the unsung hero. Timezone to Copenhagen. Vim, htop, git. SSH? Passwords banned, root locked out. Firewalld up. Handlers restart sshd smooth as butter.
```yaml
- name: Harden SSH - disable password auth
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: "^#?PasswordAuthentication"
    line: "PasswordAuthentication no"
  notify: restart sshd
```
Boring wins.
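That `notify: restart sshd` assumes a matching handler in the base role — a sketch of what it might look like:

```yaml
# roles/base/handlers/main.yml: handler name must match the notify string exactly
- name: restart sshd
  ansible.builtin.service:
    name: sshd
    state: restarted
```

Handlers fire once at the end of the play, no matter how many tasks notify them. That’s the smooth-as-butter part.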
Docker role? Repo add, dnf install CE + Compose plugin. Deploy user into docker group. Start ‘er up.
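Those steps translate roughly into tasks like these — a sketch for Rocky 9, where the `deploy` username is an assumption, not gospel:

```yaml
# roles/docker/tasks/main.yml: sketch using Docker's upstream CE repo for EL9
- name: Add Docker CE repository
  ansible.builtin.yum_repository:
    name: docker-ce
    description: Docker CE Stable
    baseurl: https://download.docker.com/linux/centos/9/$basearch/stable
    gpgcheck: true
    gpgkey: https://download.docker.com/linux/centos/gpg

- name: Install Docker CE and the Compose plugin
  ansible.builtin.dnf:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-compose-plugin
    state: present

- name: Add deploy user to the docker group  # username assumed; match your group_vars
  ansible.builtin.user:
    name: deploy
    groups: docker
    append: true

- name: Enable and start Docker
  ansible.builtin.service:
    name: docker
    state: started
    enabled: true
```

Note `append: true` — without it, Ansible would replace the deploy user’s existing groups instead of adding to them.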
Roll It Out: From Zero to Hero in Hours
site.yml chains ‘em: common.yml (base OS), docker.yml, monitoring.yml (Grafana/Loki magic), registry.yml (Harbor hoists containers), app-deploy.yml (your .NET/Angular payload).
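That chain is just ordered `import_playbook` calls — roughly this, ordering matters because apps need Docker and Docker needs the base OS:

```yaml
# playbooks/site.yml: the nuke button; order is base OS -> Docker -> platform -> apps
- import_playbook: common.yml
- import_playbook: docker.yml
- import_playbook: monitoring.yml
- import_playbook: registry.yml
- import_playbook: app-deploy.yml
```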
Run `ansible-playbook -i inventory/hosts.yml playbooks/site.yml`. Watch it fly. First time? Tweak `group_vars` for secrets (Vault later). Reproducible as a factory line.
But wait — my twist, the one nobody’s shouting. This isn’t just cost-cut; it’s the prequel to AI’s local renaissance. Remember PCs crushing mainframes? Data stayed in-house, cheap, controllable. Today? GDPR hammers, edge AI hungers for low-latency juice. SMBs won’t rent Nvidia from hyperscalers; they’ll stack this Docker fortress, plug in H100s via Kubernetes next. On-prem’s not retreat — it’s the launchpad. Cloud? Yesterday’s mainframe.
Nginx role seals it: reverse proxy, SSL certs (Let’s Encrypt play nice). Monitoring? Loki logs rivers, Grafana dashboards glow. Harbor? Your private Docker Hub, no quay.io taxes.
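A hedged sketch of the Nginx role’s core tasks — the template filename and config path here are assumptions, not the blueprint’s exact names:

```yaml
# roles/nginx/tasks/main.yml: sketch; template src/dest names are illustrative
- name: Install Nginx and Certbot
  ansible.builtin.dnf:
    name:
      - nginx
      - certbot
      - python3-certbot-nginx
    state: present

- name: Deploy reverse-proxy config for the app containers
  ansible.builtin.template:
    src: reverse-proxy.conf.j2   # hypothetical template name
    dest: /etc/nginx/conf.d/app.conf
  notify: reload nginx

- name: Enable and start Nginx
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
```

The `python3-certbot-nginx` plugin lets Certbot rewrite the server blocks itself when it provisions Let’s Encrypt certs.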
Scale? Add VMs to inventory. Ansible doesn’t care. Nordic SMBs — steady loads like invoicing apps — thrive here. Hyperscale? Sure, migrate later. But 90%? This.
Pitfalls? SELinux on Rocky bites newbies — audit logs guide you. Firewalld zones: trust internal VLAN, public to Nginx only. Deploy user — non-root, key-only SSH.
Why Does This Docker Ansible On-Prem Stack Matter Now?
Cloud fatigue. Sovereignty mandates. AI’s data gravity pulling home. This weekend build? Your hedge. Wonder at it: tools from 2014 (Docker) + 2012 (Ansible) outsmart 2024 hype machines. Energy like warp drive — simple, sovereign, yours.
Teams love it. Sysadmin with RHEL chops? Rocky shines. Ubuntu fans? Swap painless. No lock-in, ever.
How Much Does On-Prem Hardware Cost for Docker Stack?
€5k per server. Four? €20k. Amortize over 5 years — peanuts vs. cloud. Power? 500W rack, green enough for Nordic grants.
Is Docker Ansible Better Than Kubernetes for SMBs?
K8s? Overkill for steady-state. Docker Compose + Ansible = 10% complexity, 100% control. K8s when you hit 50 devs.
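To make “10% complexity” concrete — a steady-state SMB stack in Compose can be this small. Image paths and the registry hostname are illustrative placeholders:

```yaml
# docker-compose.yml: illustrative steady-state stack; harbor.internal is a placeholder
services:
  api:
    image: harbor.internal/myapp/api:1.0.0   # hypothetical Harbor path
    restart: unless-stopped
    environment:
      ConnectionStrings__Default: "Host=db;Database=app;Username=app;Password=${DB_PASSWORD}"
    depends_on:
      - db
  web:
    image: harbor.internal/myapp/web:1.0.0   # Angular front, served static
    restart: unless-stopped
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Ansible templates this per environment (dev/test/prod configs from `group_vars`) and runs `docker compose up -d`. That’s the whole orchestration layer.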
Frequently Asked Questions
How do I set up Docker and Ansible for on-prem stack?
Grab the layout: inventory, playbooks, roles. Rocky 9 VMs, run `ansible-playbook site.yml`. Tweak `group_vars`. Done in a weekend.
Docker Ansible on-prem vs cloud costs?
Cloud: €10k+/year scaling. On-prem: €4k/year power/maintenance post-capex. Wins long-term.
Can I run AI models on this Docker on-prem setup?
Yes — add Nvidia Docker runtime role. Local LLMs via Ollama containers. Edge ready.
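A sketch of what that could look like as a Compose service — this assumes the nvidia-container-toolkit is already installed on the host, and uses Compose’s GPU device-reservation syntax:

```yaml
# docker-compose.yml fragment: Ollama with GPU access; needs nvidia-container-toolkit on the host
services:
  ollama:
    image: ollama/ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama   # model cache survives container restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
volumes:
  ollama:
```

Then `docker compose exec ollama ollama pull llama3` and your LLM never leaves the rack. Sovereignty, all the way down.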