Simple Go Message Bus: Consumer Tutorial

Episode 3 finally ties this DIY Go message bus together. Producers shout, brokers hoard, consumers grab. But don't quit your Kafka job yet.

Go Message Bus Finally Works: But Is It Enough? — theAIcatchup

Key Takeaways

  • Simple Go message bus connects producer-broker-consumer with minimal goroutines and mutexes for thread safety.
  • Pull-only model dumps the entire queue per subscribe—easy but limited: no persistence, no multi-consumer support.
  • Teaches core concepts better than bloated docs; historical parallel to simple kernels beating hype machines.

Message bus complete. Barely.

I’ve chased enough Silicon Valley unicorns to know when something’s real gold or just shiny pyrite. This simple message bus tutorial—episode three of who-knows-how-many—finally snaps the pieces together: producer, broker, consumer. No Kafka bloat. No RabbitMQ wizards. Just raw Go sockets and mutexes keeping it all from crumbling.

But here’s the kicker: in 20 years covering this circus, I’ve seen a thousand ‘simple’ systems pitched as the next big thing. Remember ZeroMQ? Hyped to the moon, then quietly sidelined by everyone chasing scale. This? It’s a toy. A brilliant, educational toy that cuts through the marketing fog. And yeah, that’s worth something in a world drowning in vendor lock-in.

Look, the original post nails the basics without fluff. Producer fires messages to a broker on one port. Broker stashes ‘em in topic queues. Now consumers dial a separate port—9991 by default—and slurp everything queued.

2024/06/10 11:00:01 [orders] first order
2024/06/10 11:00:01 [orders] second order
2024/06/10 11:00:01 [orders] third order

That output? Pure joy for three terminals and a make command. Producer blasts “first order” into the void. Broker listens on the producer port (9990 by default, if prior episodes hold). Consumer subscribes, gets the dump. Boom.

Why Build Your Own Damn Message Bus?

Goroutines. Mutexes. That’s the secret sauce here. ProducerListen and ConsumerListen can’t both hog the main thread: both block on Accept, so whichever ran second would never run at all. So, one goroutine for producers, main thread blocks on consumers. Slap a sync.Mutex around broker.topics, lock it in HandleNewMessage and ConsumerListen. Data races? Vanquished.

The consumer’s Subscribe? A mirror of the producer. TCP-dial the broker’s consumer host:port, write the topic name, scan lines back. No persistence. No acks. Messages evaporate on broker restart. It’s pull-only, one-shot per connect. Fire and forget.

And the broker’s ConsumerListen loop: accept, scan the topic, dequeue frenzy if the topic exists, slam conn.Close(). Brutal efficiency. No lingering sockets. No backpressure. Just yeet the queue contents.

But cynicism alert: who profits? Not you, building this weekend project. Bigcos rake it in on managed Kafka—Confluent’s laughing to the bank with your egress fees. This scratches the itch for grokking internals, nothing more.

Short para. Brutal truth.

Does This Go Message Bus Actually Scale?

Scale? Ha. Single broker. In-memory queues. Restart? Poof, messages gone. Multiple consumers? Not yet—each pulls the whole queue, leaving zilch for siblings. Push model? Absent. Persistence? Dream on.

Yet that’s the genius. Tutorials like this explain the black box. Kafka docs? 500 pages of Jepsen tests and ZooKeeper arcana. Here, 100 lines of Go expose the bones: sockets for IPC, maps for topics, queues per topic. Mutex for goroutine peace.

Unique angle—and one the post skips: this echoes the Minix microkernel wars of the ’90s. Tanenbaum built simple, provable systems to dunk on monolithic Linux. Result? Torvalds iterated faster. Your toy bus? Same vibe. Forces you to ponder: what if RabbitMQ had started this transparent?

Producer CLI flags: -host, -port (9990?), -topic, -message. Consumer: -host (127.0.0.1), -cport (9991), -topic. Broker config swells with ConsumerHost/Port. make build, three terminals, orders topic demo. Works first try.

Deeper dive: broker.topics is a map[string]*Topic. Topic has a Queue (ring buffer? The Dequeue implementation comes from prior episodes). Message decoding? Handled upstream.

Cynical prediction: series continues—persistent queues via boltdb, maybe fanout for multi-consumers. But mark my words, productionizing this means rediscovering every distributed systems nightmare: partitioning, replication, exactly-once.

One sentence wonder.

Bigger picture. Devs waste years on SaaS queues before grokking tradeoffs. This series? Therapy. Builds intuition cheaper than any Udemy course. Skip it, you’re Kafka’s pawn—blind to why your throughput tanks at 10k msg/s.

What’s Missing—and Why It Stings

Push vs pull. Pull wins for simplicity—no polling hell—but real consumers linger, streaming indefinitely. Here? Connect, grab all, die. Fine for batch, useless for realtime.

Multi-consumers: clone queue on subscribe? Or pub-sub with offsets? Broker restarts wipe data—file persistence or Redis backend next?

PR spin check: post claims “short and simple, something I understand end to end.” Spot on. No hype. Rare in blogosphere.

Goroutines minimal—promised, delivered. Only for the dual listeners. A mutex wherever topics are touched. Safe.



Frequently Asked Questions

What does a simple message bus in Go do?

It routes messages from producers to consumers via a broker, using TCP sockets and in-memory queues—no external deps.

How to build a message bus consumer in Go?

Dial broker’s consumer port, send topic, scan lines from bufio.Scanner—dequeues all queued messages in one go.

Is a DIY Go message bus production ready?

Nope—lacks persistence, multi-consumer support, and scaling. Great for learning, not for prime time.

Written by James Kowalski

Investigative tech reporter focused on AI ethics, regulation, and societal impact.



Originally reported by dev.to
