Pager screaming at 3 a.m. — classic Silicon Valley rite, but this time it’s not a kernel panic, just another half-baked ALB setup crumbling under prod traffic.
Look, AWS ALB HTTPS host header routing isn’t some shiny new toy from re:Invent. It’s the unglamorous workhorse you’ve ignored while chasing Kubernetes dreams. I’ve wired these beasts for a dozen startups over 20 years, and let me tell you: most devs still slap up one ALB per service, torching cash like it’s 2012.
The original blueprint nails it — and here’s the quote that should be tattooed on every engineer’s forearm:
ALB is billed per hour, not per service. So if you've got 5 services and each one runs its own ALB, you're paying 5x over.
Brutal truth. One ALB. Multiple services. Subdomains like app1.yourdomain.com zipping to service A, app2 to B. All behind a single hourly bill.
But.
Why chase this now? Because your NestJS on ECS Fargate deserves better than HTTP-only roulette. And no, we’re not leaning on ACM’s free SSL handouts — that’s for hobbyists. Buy your cert from Domainesia or wherever, import it raw. Production demands control.
Why One ALB Rules Them All (And Saves Your Wallet)
Picture this sprawl: five microservices, each with its own load balancer. You’re not scaling; you’re subsidizing AWS’s yacht fund. Consolidate. Listener rules on port 443 become your traffic cop — host headers dictating the flow.
First, import that SSL. Grab your .crt, .key, chain from the ZIP. AWS Console. Certificate Manager. Import, not request. Paste ‘em in: body, private key (stash it safe, idiot-proof rule), chain. Boom — Issued status. No validation dance.
Private key leaks? Career over. Don’t commit it. Ever.
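Prefer scripting the import over console paste-work? The ACM client in AWS SDK v3 handles it. A minimal sketch, assuming the three files from your ZIP sit next to the script (the file names here are mine, not gospel):

import { readFileSync } from 'node:fs';
import { ACMClient, ImportCertificateCommand } from '@aws-sdk/client-acm';

const acm = new ACMClient({ region: 'ap-southeast-1' });

// Same three pieces you'd paste into the console
const { CertificateArn } = await acm.send(new ImportCertificateCommand({
  Certificate: readFileSync('yourdomain.crt'),            // cert body
  PrivateKey: readFileSync('yourdomain.key'),             // read from disk, never from git
  CertificateChain: readFileSync('yourdomain-chain.crt'), // CA chain
}));
console.log(CertificateArn); // this ARN goes on the HTTPS listener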
Target groups next. IP type only — Fargate’s awsvpc quirk. Port 50011 for service A, health check at /api/v1/health hitting 200 or die trying. NestJS snippet? Dead simple:
import { Controller, Get } from '@nestjs/common';

@Controller('api/v1')
export class HealthController {
  @Get('health') // ALB probes /api/v1/health and wants a 200
  health() { return { status: 'ok' }; }
}
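If you'd rather script the target group too, it's one ELBv2 call. A sketch; the VPC ID is a placeholder:

import { ElasticLoadBalancingV2Client, CreateTargetGroupCommand } from '@aws-sdk/client-elastic-load-balancing-v2';

const elbv2 = new ElasticLoadBalancingV2Client({ region: 'ap-southeast-1' });

await elbv2.send(new CreateTargetGroupCommand({
  Name: 'service-a-tg',
  Protocol: 'HTTP',
  Port: 50011,                      // container port for service A
  VpcId: 'vpc-xxxxxxxx',            // placeholder: your VPC
  TargetType: 'ip',                 // mandatory for Fargate awsvpc tasks
  HealthCheckPath: '/api/v1/health',
  Matcher: { HttpCode: '200' },     // anything else and the task gets recycled
}));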
ALB creation: Internet-facing, two public subnets (AZ spread, duh), security group wide-open on 443. Default action? 404 fixed response till rules kick in.
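Scripted, that setup looks roughly like this; subnets, security group, and cert ARN are placeholders:

import {
  ElasticLoadBalancingV2Client,
  CreateLoadBalancerCommand,
  CreateListenerCommand,
} from '@aws-sdk/client-elastic-load-balancing-v2';

const elbv2 = new ElasticLoadBalancingV2Client({ region: 'ap-southeast-1' });

const { LoadBalancers } = await elbv2.send(new CreateLoadBalancerCommand({
  Name: 'shared-alb',
  Scheme: 'internet-facing',
  Type: 'application',
  Subnets: ['subnet-aaaa', 'subnet-bbbb'], // two public subnets, two AZs
  SecurityGroups: ['sg-xxxxxxxx'],         // inbound 443 open
}));

await elbv2.send(new CreateListenerCommand({
  LoadBalancerArn: LoadBalancers![0].LoadBalancerArn,
  Protocol: 'HTTPS',
  Port: 443,
  Certificates: [{ CertificateArn: 'arn:aws:acm:...' }], // the imported cert
  DefaultActions: [{
    Type: 'fixed-response', // 404 for anything no host rule claims
    FixedResponseConfig: { StatusCode: '404', ContentType: 'text/plain', MessageBody: 'Not found' },
  }],
}));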
This mirrors the thrift of 2000s Apache virtual hosts on clunky Dell racks: same multi-tenant smarts, but serverless. Unique twist I've seen burn teams: past roughly 10 rules you start paying for rule evaluations in LCUs, and past 20 services the rule list turns into a maintenance swamp with the 100-rules-per-listener quota looming. Predict this: migrate to Gateway API or NLB hybrids by 2026, or regret it.
Importing Custom SSL: Ditch ACM’s Free Lunch
ACM’s gratis certs? Fine for Route53 slaves. But your Domainesia domain? Custom SSL it is. Three files. Console paste-fest. Tags optional. Done.
Status Pending? You fat-fingered ‘request’ instead of import. Fix it.
I’ve watched engineers waste days here, convinced AWS was “broken.” Nope. User error, every time.
How Do You Actually Route by Host Header?
Listener 443. Edit rules. Add 'em priority-style: 1 for app1.domain.com forwarding to its target group, 2 for app2, etc. Priorities are unique per listener; the lowest number evaluates first, and the first match wins.
Default rule catches strays — 404, not chaos.
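In SDK form each rule is one CreateRuleCommand: host-header condition in, forward action out. The ARNs below are placeholders:

import { ElasticLoadBalancingV2Client, CreateRuleCommand } from '@aws-sdk/client-elastic-load-balancing-v2';

const elbv2 = new ElasticLoadBalancingV2Client({ region: 'ap-southeast-1' });

await elbv2.send(new CreateRuleCommand({
  ListenerArn: 'arn:aws:elasticloadbalancing:...', // the 443 listener
  Priority: 1,                                     // lowest evaluates first
  Conditions: [{
    Field: 'host-header',
    HostHeaderConfig: { Values: ['app1.yourdomain.com'] },
  }],
  Actions: [{ Type: 'forward', TargetGroupArn: 'arn:aws:elasticloadbalancing:...' }],
}));
// Repeat with Priority: 2 and app2.yourdomain.com for service B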
Test it. Curl app1.domain.com. Service A responds. app2? B. Magic? Nah, just ALB doing basics right.
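DNS not propagated yet? Hit the ALB directly and fake the host. A Node sketch, reusing the example ALB DNS name and the health path from above:

import { request } from 'node:https';

const albDns = 'nama-alb-123.ap-southeast-1.elb.amazonaws.com';

const req = request({
  host: albDns,
  path: '/api/v1/health',
  servername: 'app1.yourdomain.com',        // SNI, so the cert check passes
  headers: { Host: 'app1.yourdomain.com' }, // what the listener rule matches on
}, (res) => console.log('app1 ->', res.statusCode)); // expect 200 from service A
req.end();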
Pitfall city: failing health checks send tasks into an endless kill-and-restart loop. Nail that /health endpoint first.
And security groups: inbound 443 from anywhere, with WAF layered on top for the actual filtering. (Pro tip: layer in WAF day one; cheap insurance against script kiddies.)
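Attaching a web ACL to the ALB is a single WAFv2 call. A sketch, assuming the ACL (with AWS managed rule groups) already exists; both ARNs are placeholders:

import { WAFV2Client, AssociateWebACLCommand } from '@aws-sdk/client-wafv2';

const waf = new WAFV2Client({ region: 'ap-southeast-1' });

await waf.send(new AssociateWebACLCommand({
  WebACLArn: 'arn:aws:wafv2:...:regional/webacl/...', // your ACL
  ResourceArn: 'arn:aws:elasticloadbalancing:...',    // the shared ALB
}));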
CNAME in Google Cloud DNS: Why Skip Route53?
Route53’s the AWS kool-aid. Expensive. Locked-in. Google Cloud DNS? Neutral ground. ALB spits a DNS name like nama-alb-123.ap-southeast-1.elb.amazonaws.com. CNAME app1 to it. TTL 300. Test.
Full domain auto-assembles. No glue records BS.
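This is scriptable too if you're on @google-cloud/dns. A sketch, assuming the managed zone already exists; zone and project names are placeholders:

import { DNS } from '@google-cloud/dns';

const dns = new DNS({ projectId: 'my-project' });
const zone = dns.zone('yourdomain-zone');

// CNAME name and data both want the trailing dot
const cname = zone.record('cname', {
  name: 'app1.yourdomain.com.',
  data: 'nama-alb-123.ap-southeast-1.elb.amazonaws.com.',
  ttl: 300,
});
await zone.addRecords(cname);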
Cost? Pennies. Freedom? Priceless. I’ve yanked clients off Route53 lock-in this way — bills drop 30%, migraines too.
Subnets matter. Public ones, dual AZ. Miss this, ALB ghosts.
Real-World Gotchas That’ll Ruin Your Night
Health checks. Again. 200 or bust.
Rule order surprises? Priorities are unique per listener; lowest evaluates first, first match wins.
Fargate tasks draining endlessly? Check logs — usually port mismatch.
Scaling? ALB autoscales connections, not rules. Cap services at 15-20 before rethinking.
WAF integration? Bolt it on. OWASP ruleset. Blocks 80% junk traffic out the gate.
One overlooked gem: ALB access logs to S3. Turn ‘em on. Athena queries reveal who’s hammering what. Gold for debugging.
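Flipping access logs on is one ModifyLoadBalancerAttributes call. A sketch; the bucket is a placeholder and needs the ALB log-delivery bucket policy:

import { ElasticLoadBalancingV2Client, ModifyLoadBalancerAttributesCommand } from '@aws-sdk/client-elastic-load-balancing-v2';

const elbv2 = new ElasticLoadBalancingV2Client({ region: 'ap-southeast-1' });

await elbv2.send(new ModifyLoadBalancerAttributesCommand({
  LoadBalancerArn: 'arn:aws:elasticloadbalancing:...',
  Attributes: [
    { Key: 'access_logs.s3.enabled', Value: 'true' },
    { Key: 'access_logs.s3.bucket', Value: 'my-alb-logs' }, // must allow ELB log delivery
  ],
}));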
Will This Scale to 100 Services?
Short answer: no. Rules cap at 100 per listener (default quota; raisable, but don't bet the architecture on it). Beyond? NLB fronting ALBs, or API Gateway. But for 5-10 NestJS apps? Perfect. Cost: ~$25/month steady-state.
Compare to GCLB or Azure? Similar, but AWS Fargate’s ECS glue shines here.
Skeptical take: AWS pushes App Runner now, but it's ALB under the hood, pricier, with less control. Stick to primitives.
Production war story — client with 8 services, separate ALBs: $400/month. Consolidated: $80. Same perf. Exec loved it. Devs? Learned host headers cold.
Frequently Asked Questions
How do I import custom SSL to AWS ACM for ALB?
Grab cert body (.crt), private key (.key), chain. Console > ACM > Import. Paste, tag, done. Status: Issued.
What’s AWS ALB host header routing and how to set it up?
Rules on HTTPS listener: Condition Host header = app1.domain.com, Action Forward to target group. Priority low to high.
Can I use Google Cloud DNS CNAME with AWS ALB?
Yes. Point subdomain CNAME to ALB’s DNS name (e.g., alb-123.elb.region.amazonaws.com). TTL 300. Instant.