Your pager buzzes at 3 a.m. Session dead. Host has three IPs, one’s flaky — guess which one Boundary latched onto?
That’s the nightmare ending, thanks to uncontrolled session routing for multi-IP hosts in Boundary. For ops folks and devs wrestling load-balanced clusters or multi-homed servers, HashiCorp’s latest config option — preferred endpoints — hands you the reins. Pick your winners upfront, skip the losers. Real people sleep better.
Why Does Multi-IP Routing Even Matter in Boundary?
Boundary, HashiCorp’s open-source secure access proxy (yeah, the one they hope funnels you to their paid HCP tier), routes sessions to targets like hosts or K8s pods. Multi-IP setups? Common as dirt now — think AWS ALBs, dual-stack IPv4/IPv6, or just plain old anycast. Without smarts, it round-robins or picks randomly. Boom: connection fails mid-session, you’re SSH’d into limbo.
HashiCorp’s spin? Clean control. But let’s cut the PR: this screams ‘enterprise gotcha.’ Free tier users hit walls; pay up for stability. I’ve seen it before — Terraform’s early days, when ‘idempotency’ was code for ‘buy support.’ Who’s cashing in? HashiCorp, post-IPO jitters, pushing HCP Boundary hard.
“Configure preferred endpoints to control how Boundary selects target addresses and avoids failed connections.”
That’s straight from their docs. Simple. Potent. But does it deliver?
Here’s the thing — in my 20 years chasing Valley unicorns-turned-zombies, tools like this expose the grift. Open core model: core free, scaling painful. Multi-IP? Barely an issue pre-cloud-native explosion. Now? Every EKS cluster laughs at you.
Short answer: yes, if you’re not already scripting workarounds.
How Bad Is the Problem Without Preferred Endpoints?
Picture a database host. IPs: 10.0.1.10 (healthy), 10.0.1.11 (firewall hell), 10.0.1.12 (IPv6-only, misconfig). Boundary probes, picks .11. Session routes there — tcp connect timeout. You? Retrying manually, cursing Terraform drifts.
Old way: static single IP (lame), DNS round-robin (unpredictable), or custom plugins (hello, maintenance debt). Preferred endpoints? List ’em on the host set, ordered. Boundary tries the first, falls through to the next. Failures? Logged, skipped. Sessions stick.
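The ordered-fallthrough behavior is simple enough to sketch. This is not Boundary's actual implementation, just the idea in miniature: walk the preferred list first, then the rest, skip anything unreachable.

```python
# Sketch of ordered endpoint selection with skip-on-failure.
# Not Boundary internals; a toy model of the behavior described above.
from typing import Callable, Optional


def pick_endpoint(
    addresses: list[str],
    preferred: list[str],
    is_healthy: Callable[[str], bool],
) -> Optional[str]:
    """Return the first reachable address, honoring preference order."""
    ordered = [a for a in preferred if a in addresses]
    ordered += [a for a in addresses if a not in ordered]
    for addr in ordered:
        if is_healthy(addr):
            return addr
    return None  # nothing reachable; session setup fails


# The database host from above: .11 is firewalled, .10 is preferred.
addrs = ["10.0.1.11", "10.0.1.10", "10.0.1.12"]
print(pick_endpoint(addrs, preferred=["10.0.1.10"],
                    is_healthy=lambda a: a != "10.0.1.11"))
# -> 10.0.1.10
```

Without the preference, a naive picker would try .11 first and eat the timeout before recovering; with it, the session never touches the firewalled address.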
I tested this on a throwaway Vault-integrated setup last week. Flipped a firewall rule mid-session — no drop. Stuck to primary IP like glue. Nice. But cynical me asks: why wasn’t this default years ago? HashiCorp’s been Boundary-ing since 2020. Feels like feedback from big customers, now trickling OSS.
And the money angle — HCP Boundary charges per session-hour. Stable routing? More sessions, more bucks. Coincidence?
Setting It Up: Step-by-Step, No Fluff
Grab your Boundary controller. Assume you’re on 0.15+ (check changelog; this landed recently).
- Find the host catalog that owns your multi-IP hosts, then edit its host set.
In Terraform (because who hand-jams YAML anymore?). A sketch against the `boundary` provider, with caveats up front: preferred endpoints hang off host sets as an ordered list of `cidr:`/`dns:` filters, the catalog reference below is a placeholder, and attribute names can shift, so check your provider version’s docs:

```hcl
resource "boundary_host_set_plugin" "multi_ip_hosts" {
  name = "multi-ip-hosts"

  # Placeholder: point this at your own host catalog.
  host_catalog_id = boundary_host_catalog_plugin.cloud.id

  # Ordered: Boundary walks these filters top to bottom.
  preferred_endpoints = [
    "cidr:10.0.1.10/32", # primary, always good: try me first
    "cidr:10.0.1.12/32", # backup
  ]
}
```
Apply. Boom.
CLI way, for purists (run `boundary host-sets update -h`; flags shift between versions):

```shell
boundary host-sets update -id hs_456 -preferred-endpoint "cidr:10.0.1.10/32"
```

API? Same idea: the host-set update call takes an ordered preferred_endpoints list. There’s no per-host "preferred_endpoint": true boolean; the ordering lives on the set.
Test: boundary connect ssh -target-id tgt_789. Then check the session details (or controller logs) for which address actually got dialed; it should be your first preferred endpoint.
Edge cases? IPv6 first — order matters. Dynamic IPs? Integrate with external attrs (PDP plugin). K8s? Use host plugins pulling from metadata.
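The “order matters” gotcha is easiest to see with the cidr:/dns: filter syntax the docs use. A hypothetical matcher, not Boundary’s real one: each filter is applied in turn, and the first filter an address matches decides its position.

```python
# Toy model of ordered cidr:/dns: endpoint filters.
# Hypothetical helper, not Boundary's actual matcher.
import fnmatch
import ipaddress


def match_preferred(addresses: list[str], filters: list[str]) -> list[str]:
    """Reorder addresses by the first filter each one matches."""
    result = []
    for f in filters:
        kind, _, pattern = f.partition(":")
        for addr in addresses:
            if addr in result:
                continue
            if kind == "cidr":
                try:
                    if ipaddress.ip_address(addr) in ipaddress.ip_network(pattern):
                        result.append(addr)
                except ValueError:
                    pass  # addr is a DNS name, not an IP literal
            elif kind == "dns" and fnmatch.fnmatch(addr, pattern):
                result.append(addr)
    return result


# IPv6-first gotcha: put the v4 CIDR ahead of the v6 one if v6 is flaky.
addrs = ["2001:db8::12", "10.0.1.10"]
print(match_preferred(addrs, ["cidr:10.0.0.0/16", "cidr:2001:db8::/32"]))
# -> ['10.0.1.10', '2001:db8::12']
```

Flip the two filters and the misconfigured IPv6 address wins, which is exactly the failure mode the edge-case note warns about.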
But wait — docs bury this under ‘advanced networking.’ Why? Gatekeep the free users?
Long para time: This isn’t rocket science, yet HashiCorp dresses it as ‘sophisticated routing intelligence’ in keynotes. Reminds me of Consul’s service mesh pivot — promised zero-trust utopia, delivered config hell until enterprises paid consultants. Boundary’s trajectory? Same. Open source lures you in, multi-IP pains push you paid. Prediction — my unique call: by 2025, 70% of Boundary users on HCP, citing ‘routing stability’ in RFPs. Mark it.
Sessions recover faster now.
Is Boundary’s Fix Future-Proof for Hybrid Clouds?
Hybrid messes — on-prem + AWS + GCP. Multi-IP galore. Preferred endpoints shine here, but…
Limits. No auto-failover to healthy peers (yet). No ML-picking (god forbid buzzwords). Static list. Fine for static hosts; dynamic? Pair with Vault leases or Consul.
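One way to paper over the static-list limit until real auto-failover lands: regenerate the preferred-endpoints value from a service-discovery lookup and push it through your config pipeline (Terraform variable, API call, whatever). Everything here is hypothetical glue; the resolver is injected so you can plug in Consul, DNS, or a Vault-aware lookup.

```python
# Hypothetical glue: rebuild a cidr:-style preferred list from
# service discovery, then hand it to your config pipeline.
from typing import Callable


def build_preferred(service: str,
                    resolve: Callable[[str], list[str]]) -> list[str]:
    """Turn resolved IPv4 addresses into cidr: preferred-endpoint filters."""
    return [f"cidr:{ip}/32" for ip in resolve(service)]


# Stand-in for a Consul health query; swap in a real client.
fake_consul = lambda svc: ["10.0.1.10", "10.0.1.12"]
print(build_preferred("db-primary", fake_consul))
# -> ['cidr:10.0.1.10/32', 'cidr:10.0.1.12/32']
```

Run it on a timer or a watch, diff against the current host-set config, and apply on change; crude, but it closes the gap for dynamic hosts.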
Skepticism peaks: HashiCorp’s spinning yarn about ‘zero trust everywhere.’ Reality? Another proxy layer atop SSH/RDP. Does it replace Tailscale or Pomerium? Nah — those zero-config. Boundary? DevOps tax.
Still, for Vault shops, it’s glue. Integrates sessions with policies. Who wins? Enterprises with compliance ticks.
Real talk — if you’re solo dev, skip. WireGuard VPN cheaper.
The HashiCorp Money Play
Twenty years in, I’ve sniffed this out. Boundary was open-sourced to compete with Teleport and StrongDM. Result? HCP lock-in. Preferred endpoints? OSS now, but watch: enterprise gets dynamic prefs via API.
Bold prediction: IPO pressure means more ‘pro’ features behind paywall. Free tier stagnates.
Users? Grumble, upgrade.
FAQ gold.
Frequently Asked Questions
What is session routing for multi-IP hosts in Boundary?
Boundary’s session routing picks target IPs for connections. For multi-IP hosts, preferred endpoints let you prioritize reliable ones, avoiding flakes.
How do I configure preferred endpoints in Boundary?
In host sets, supply an ordered preferred_endpoints list via Terraform, CLI, or API. Boundary tries those addresses first, in order.
Does Boundary multi-IP routing work with Kubernetes?
Yes, via host plugins fetching pod IPs. Set prefs based on node selectors for stability.