
Message Queues - Interactive Guide

Master message queues with an interactive simulator. Understand how Kafka, RabbitMQ, and SQS enable async processing, decouple services, and build resilient distributed systems.

Decouple your services, handle traffic spikes, and build resilient systems. This guide covers Message Queues, the backbone of async communication used by Netflix, Uber, and virtually every modern distributed system.

How to use: Toggle between Visual to understand the architecture and Code to implement it. Try the simulator to see message flow in action.

The Problem

User places an order. Your monolith must: validate payment, update inventory, send confirmation email, notify warehouse, update analytics. If email service is slow, the user waits. If warehouse API is down, the entire order fails.

🔗
Tight Coupling
Service A calls Service B directly. B goes down, A fails. Change B's API, A breaks.
⏳
Sync Blocking
User waits while the server sends an email, generates a PDF, and calls 5 external APIs sequentially.
📈
Traffic Spikes
Black Friday hits. 10x traffic. Database overwhelmed. Requests time out. Revenue lost.

The Solution: Message Queues

Producer sends a message to a queue and moves on. Consumer processes at its own pace. Services are decoupled: they don't know (or care) about each other's existence.

Message Queue Simulator

[Interactive simulator: a Producer enqueues messages into orders-queue and a Consumer dequeues them at an adjustable processing delay (800ms default), with live counters for Produced, In Queue, Consumed, and Throughput.]

How It Works

1. Producer creates a message.
2. Producer sends the message to the Broker.
3. Broker persists it to the Queue.
4. If a consumer is available, the message is delivered and processed; the consumer sends an ACK on success.
5. If not, the message waits in the queue until a consumer is ready.
Key Insight: The queue acts as a buffer. Producers can send at any rate. Consumers process at their own pace. No direct dependency.

Producer (Node.js + RabbitMQ)

const amqp = require('amqplib');

async function sendOrder(order) {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  const queue = 'orders';

  // Ensure queue exists (durable = survives broker restart)
  await channel.assertQueue(queue, { durable: true });

  // Send message (persistent = survives broker restart)
  channel.sendToQueue(
    queue,
    Buffer.from(JSON.stringify(order)),
    { persistent: true }
  );

  console.log(`Order ${order.id} queued`);
  await channel.close();
  await conn.close();
}

// Usage: Fire and forget!
sendOrder({ id: 'ORD-123', items: [...], total: 99.99 });

Consumer with Acknowledgments

async function startConsumer() {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  const queue = 'orders';

  await channel.assertQueue(queue, { durable: true });

  // Process one message at a time (backpressure)
  channel.prefetch(1);

  channel.consume(queue, async (msg) => {
    const order = JSON.parse(msg.content.toString());
    try {
      await processOrder(order);
      // Success: acknowledge message (removes from queue)
      channel.ack(msg);
    } catch (err) {
      // Failure: reject & requeue (or send to DLQ)
      channel.nack(msg, false, true); // requeue=true
    }
  });
}

startConsumer();
Critical: Always acknowledge after successful processing. If consumer crashes before ACK, message is redelivered. Make your consumer idempotent!

Kafka Producer (High Throughput)

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'order-service',
  brokers: ['kafka1:9092', 'kafka2:9092']
});

const producer = kafka.producer();

async function sendOrderEvent(order) {
  await producer.connect();
  await producer.send({
    topic: 'orders',
    messages: [{
      key: order.userId, // Partition by user (ordering)
      value: JSON.stringify(order),
      headers: { 'event-type': 'ORDER_CREATED' }
    }]
  });
}

// Kafka: Millions of messages/sec, ordered within partition

AWS SQS (Serverless)

const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');

const sqs = new SQSClient({ region: 'us-east-1' });

async function queueOrder(order) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.ORDER_QUEUE_URL,
    MessageBody: JSON.stringify(order),
    MessageGroupId: order.userId,    // FIFO queue ordering
    MessageDeduplicationId: order.id // Prevent duplicates
  }));
}

// SQS: Fully managed, auto-scaling, pay-per-message

When to Use

  • Async task processing (emails, reports)
  • Microservices communication
  • Traffic spike buffering
  • Event-driven architectures
  • Background job processing

When to Skip

  • Need immediate response (use sync API)
  • Simple monolith (adds complexity)
  • Strict request-response needed
  • Low volume, low complexity

Delivery Guarantees

Choose your tradeoff: speed vs. reliability vs. complexity.

At-Most-Once
Fire and forget. Message may be lost, never duplicated.
+ Fastest, simplest
- May lose messages
Use: Metrics, logs
At-Least-Once
Message is redelivered until acknowledged. Never lost, may be duplicated.
+ No message loss
- Possible duplicates
Use: Most workloads
Exactly-Once
Perfect delivery. Complex to implement.
+ No duplicates
- Expensive, complex
Use: Financial txns
Pro Tip: Most systems use At-Least-Once with idempotent consumers. It's the best balance of reliability and simplicity. See our Idempotency Guide.

Queue Types & Patterns

📬
Point-to-Point
One producer, one consumer. Task queue pattern. Each message processed by exactly one consumer.
📢
Pub/Sub (Fan-out)
One producer, many consumers. Each subscriber gets a copy. Great for event broadcasting.
⚡
Priority Queue
High-priority messages jump the line. VIP orders processed before regular ones.
💀
Dead Letter Queue
Failed messages go here after N retries. Debug, fix, replay. Never lose a message.

Picking the Right Queue

| Feature      | RabbitMQ    | Kafka           | AWS SQS       | Redis Streams |
|--------------|-------------|-----------------|---------------|---------------|
| Best For     | Task queues | Event streaming | Serverless    | Lightweight   |
| Throughput   | ~50K/s      | Millions/s      | ~3K/s (FIFO)  | ~100K/s       |
| Ordering     | Per queue   | Per partition   | FIFO optional | Per stream    |
| Ops Overhead | Medium      | High            | None          | Low           |

Common Pitfalls

🔄
Poison Messages
Bad message causes consumer to crash. Redelivered. Crash again. Infinite loop.
Fix: Max retry count → Dead Letter Queue
📊
Queue Backlog
Producers faster than consumers. Queue grows unbounded. Memory exhausted.
Fix: Scale consumers, set queue limits, backpressure
⏱️
Message Expiry
Old messages still being processed hours later when they're no longer relevant.
Fix: Set TTL, check timestamp in consumer
🔀
Ordering Issues
Multiple consumers process messages out of order. State becomes inconsistent.
Fix: Partition by entity ID, single consumer per partition

Real-World Scale

LinkedIn: 7 trillion messages/day on Kafka. Netflix: 700 billion events/day. Uber: Kafka handles all trip events globally.
