Go Goroutines & WaitGroup: Parallelism Basics

Ten API calls. Ten seconds flat, one by one. Goroutines? One second. But forget WaitGroup, and your program's a ghost town.

Go's Goroutines: Slash 10 API Calls from 10 Seconds to 1—Without the Crash — theAIcatchup

Key Takeaways

  • Goroutines turn sequential slogs into parallel speed—10s to 1s easy.
  • sync.WaitGroup is essential: Add before, defer Done, Wait after.
  • Fix loop closures by passing values—avoid the infamous '55555' bug.

10 HTTP requests to some flaky API. Each takes a second. Run ‘em sequential? Ten seconds of your life wasted. Fire up goroutines? Boom—one second, the slowest call’s time.

That’s Go’s concurrency hook. Simple. Seductive. And a trap for the unwary.

Go doesn’t mess around like Java’s thread hell or Node’s callback spaghetti. Nah, it hands you goroutines on a platter. Lightweight beasts—kilobytes, not megabytes. Thousands without your machine choking.

But here’s the kick: main() exits, goroutines die. Unceremoniously. No goodbye notes.

Goroutines: Magic or Just Fast Switching?

Think browser tabs. YouTube loads video, Twitter scrolls—feels parallel. CPU juggles like a circus act. Goroutines? Same vibe, Go runtime’s the ringmaster, not your OS.

Launch ‘em easy: slap ‘go’ before a function call.

go sayHello("Alice")
go sayHello("Bob")

Main() zips on. No waiting. Ditch the sleeps—brittle hacks that crumble under load.

Without sync? Output: “Program done.” Goroutines? Ghosts. Main quits, kills ‘em all.

Why Does Your For-Loop Scream ‘55555’?

Newbies trip here. Hard.

for i := 0; i < 5; i++ {
    go func() {
        fmt.Println(i) // captures the shared loop variable, not a copy
    }()
}

All print 5 on Go versions before 1.22. Loop ends, i=5. Goroutines wake up late to the party.

Fix: pass the value.

go func(i int) { fmt.Println(i) }(i)

Or use a local var per iteration. Don’t share the loop’s i: every closure sees the same variable, mutated by the loop. Classic. Infuriating. Avoidable. (Go 1.22 made loop variables per-iteration, fixing this by default, but pass-the-value is still the habit to build for older toolchains.)

Fundamental rule: when main() returns, the Go program terminates immediately — regardless of how many goroutines are still running. They all get killed at once, without warning, without finishing their work.

That’s from the original piece. Spot on. Brutal truth.

Is sync.WaitGroup Your Kitchen Chef?

Picture a head chef: “Chop onions. Sauce. Plates.” Waits for nods, not watches.

WaitGroup’s that nod system.

  • wg.Add(n): Expect n tasks.

  • wg.Done(): Task complete. (Defer it so it fires even on early returns and panics.)

  • wg.Wait(): Block till zero.

var wg sync.WaitGroup
wg.Add(2)
go sayHello("Alice", &wg)
go sayHello("Bob", &wg)
wg.Wait()
fmt.Println("All done")

Scales to loops. Got a list of URLs? Add(1) before each launch:

for _, url := range urls {
    wg.Add(1)
    go func(u string) {
        defer wg.Done()
        download(u)
    }(url)
}

Crucial: Add outside the goroutine. Inside? Race: Wait() can see a zero counter and return before the Add() registers, so main exits with work unfinished.

Most common Go concurrency sin. “Sometimes works.” Till it doesn’t. Under load? Kaboom.

And defer wg.Done()? Lifeline. Goroutine returns early? Still signals complete. No eternal wait. One caveat: an unrecovered panic in a goroutine crashes the whole process, defer or not. Pair it with recover() if you actually need to survive the panic.

Why Does This Matter for Go Developers?

Go sold concurrency as easy mode. Threads? Heavy. Callbacks? Nightmare. Goroutines + channels = bliss, they said.

Reality? Pitfalls galore. That loop bug? Burned me in prod once—five workers, all ID 5, data duped everywhere. Laughable in hindsight. Costly then.

Unique angle: Go predates Node’s async mess by years, yet JavaScript devs flock to it thinking ‘easy parallelism.’ Nope. Go nailed lightweight multiplexing first, rooted in Hoare’s CSP rather than Erlang’s actors. Prediction: Go’s generics (shipped in 1.18, no Go 2 required) will spawn reusable goroutine-pool helpers, making worker patterns trivial. No more copy-paste Add(1) loops.

But hype aside—Go’s PR spins goroutines as zero-cost. Lies. Context switches ain’t free. Ten thousand? GC pauses spike. Profile first, fanboy later.

Corporate spin? Golang team loves benchmarks: 10 calls, 1s. Fine for toys. Real world? APIs flake, networks lag. WaitGroup alone? Won’t cut error handling (Part 3 teases that). Or channel backpressure (Part 2).

Skeptical take: Great start. Don’t drink the Kool-Aid whole. Test under storm.

Real power? API fanout. Scrape 100 endpoints? Sequential hourglass. Goroutines? Minutes. Add semaphore (next parts) for rate limits—don’t DDoS yourself.

Browser analogy holds—til it doesn’t. Tabs crash tabs sometimes. Goroutines leak memory? Whole app balloons.

Will Goroutines Kill OS Threads?

Not quite. Go multiplexes ‘em onto threads. Smart. But millions? Runtime buckles. Use for I/O bound, not CPU hogs—GOMAXPROCS caps true parallelism.

Bold call: In 2025, every backend kid knows goroutines cold. Or gets laughed out of interviews. Rust’s async? Cuter syntax, same footguns. Go wins on simplicity.

Historical parallel: C’s pthreads scarred a generation. Java’s synchronized? Verbose jail. Go? Liberation. But freedom demands discipline—WaitGroup your handcuffs.

Dive deeper? Parts 2-3: channels, pools, errors. This? Your concurrency 101. Master it, or stay sequential schlub.

Short version: Goroutines accelerate. WaitGroup synchronizes. Ignore either? Regret.



Frequently Asked Questions

What are goroutines in Go?

Lightweight functions running concurrently, managed by Go runtime—not OS threads. Launch with ‘go’, cheap as chips.

How does sync.WaitGroup work in Go?

Add(n) for tasks, Done() per finish, Wait() blocks till zero. Defer Done() always.

Common Go goroutine mistakes?

Forgetting WaitGroup (program exits early), loop variable capture (all print last value), Add inside goroutine (races).

James Kowalski
Written by

Investigative tech reporter focused on AI ethics, regulation, and societal impact.



Originally reported by dev.to
