Decouple your services, handle traffic spikes, and build resilient systems. This guide covers Message Queues: the backbone of async communication used by Netflix, Uber, and every modern distributed system.
The Problem
User places an order. Your monolith must: validate payment, update inventory, send confirmation email, notify warehouse, update analytics. If email service is slow, the user waits. If warehouse API is down, the entire order fails.
The Solution: Message Queues
Producer sends a message to a queue and moves on. Consumer processes at its own pace. Services are decoupled: they don't know (or care) about each other's existence.
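The decoupling described above can be sketched with a toy in-memory queue. This is illustrative only: the `MessageQueue` class is invented here, not a library API, and a real broker adds durability, networking, and acknowledgments.

```javascript
// Toy in-memory queue: producer enqueues and moves on; the consumer
// processes whenever it attaches, at its own pace.
class MessageQueue {
  constructor() {
    this.messages = [];
    this.consumer = null;
  }
  // Producer side: enqueue and return immediately (fire and forget)
  send(msg) {
    this.messages.push(msg);
    this.drain();
  }
  // Consumer side: register a handler; backlog is delivered on attach
  consume(handler) {
    this.consumer = handler;
    this.drain();
  }
  drain() {
    while (this.consumer && this.messages.length > 0) {
      this.consumer(this.messages.shift());
    }
  }
}

const queue = new MessageQueue();
queue.send({ id: 'ORD-1' });            // producer moves on; nothing processed yet
const processed = [];
queue.consume((order) => processed.push(order.id)); // consumer catches up on backlog
queue.send({ id: 'ORD-2' });
console.log(processed); // ['ORD-1', 'ORD-2']
```

Note that neither side calls the other directly: the producer never waits on the consumer, which is the whole point.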
How It Works
Producer (Node.js + RabbitMQ)

```javascript
const amqp = require('amqplib');

async function sendOrder(order) {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  const queue = 'orders';

  // Ensure queue exists (durable = survives broker restart)
  await channel.assertQueue(queue, { durable: true });

  // Send message (persistent = survives broker restart)
  channel.sendToQueue(
    queue,
    Buffer.from(JSON.stringify(order)),
    { persistent: true }
  );

  console.log(`Order ${order.id} queued`);
  await channel.close();
  await conn.close();
}

// Usage: Fire and forget!
sendOrder({ id: 'ORD-123', items: [...], total: 99.99 });
```

Consumer with Acknowledgments
```javascript
async function startConsumer() {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  const queue = 'orders';
  await channel.assertQueue(queue, { durable: true });

  // Process one message at a time (backpressure)
  channel.prefetch(1);

  channel.consume(queue, async (msg) => {
    const order = JSON.parse(msg.content.toString());
    try {
      await processOrder(order);
      // Success: acknowledge message (removes from queue)
      channel.ack(msg);
    } catch (err) {
      // Failure: reject & requeue (or send to DLQ)
      channel.nack(msg, false, true); // requeue=true
    }
  });
}

startConsumer();
```

Kafka Producer (High Throughput)
```javascript
const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'order-service',
  brokers: ['kafka1:9092', 'kafka2:9092']
});

const producer = kafka.producer();

async function sendOrderEvent(order) {
  await producer.connect();
  await producer.send({
    topic: 'orders',
    messages: [{
      key: order.userId, // Partition by user (ordering)
      value: JSON.stringify(order),
      headers: {
        'event-type': 'ORDER_CREATED'
      }
    }]
  });
}

// Kafka: Millions of messages/sec, ordered within partition
```

AWS SQS (Serverless)
```javascript
const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');

const sqs = new SQSClient({ region: 'us-east-1' });

async function queueOrder(order) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.ORDER_QUEUE_URL,
    MessageBody: JSON.stringify(order),
    MessageGroupId: order.userId,      // FIFO queues only: ordering per group
    MessageDeduplicationId: order.id   // FIFO queues only: prevent duplicates
  }));
}

// SQS: Fully managed, auto-scaling, pay-per-message
```

When to Use
- Async task processing (emails, reports)
- Microservices communication
- Traffic spike buffering
- Event-driven architectures
- Background job processing
When to Skip
- Need immediate response (use sync API)
- Simple monolith (adds complexity)
- Strict request-response needed
- Low volume, low complexity
Delivery Guarantees
Choose your tradeoff: at-most-once is fastest but can drop messages, at-least-once is reliable but can deliver duplicates, and exactly-once is the most complex (and usually the slowest) to achieve.
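At-least-once, the most common choice, means the same message can arrive twice, so consumers must be idempotent. Here is a minimal sketch of deduplication by message ID; the names (`seen`, `handleOrder`) are illustrative, and in production the seen-set would live in Redis or a database, not process memory.

```javascript
// Idempotent consumer sketch: a redelivered message becomes a no-op.
const seen = new Set();
const results = [];

function handleOrder(msg) {
  if (seen.has(msg.id)) return false; // duplicate: skip (but still ack it)
  seen.add(msg.id);
  results.push(msg.id); // stand-in for real side effects (DB write, email)
  return true;
}

handleOrder({ id: 'ORD-123' }); // processed
handleOrder({ id: 'ORD-123' }); // redelivered duplicate: skipped
console.log(results); // ['ORD-123']
```

The duplicate is still acknowledged; the point is that re-processing it has no additional effect.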
Queue Types & Patterns
Picking the Right Queue
| Feature | RabbitMQ | Kafka | AWS SQS | Redis Streams |
|---|---|---|---|---|
| Best For | Task queues | Event streaming | Serverless | Lightweight |
| Throughput | ~50K/s | Millions/s | Nearly unlimited (standard), ~3K/s (FIFO) | ~100K/s |
| Ordering | Per queue | Per partition | FIFO optional | Per stream |
| Ops Overhead | Medium | High | None | Low |
Common Pitfalls
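One pitfall follows directly from the consumer example above: `nack(msg, false, true)` requeues forever if a message always fails (a "poison message"), burning CPU in an infinite redelivery loop. A common fix is to cap retries and divert to a dead-letter queue (DLQ). The sketch below models that policy in plain JavaScript; `MAX_RETRIES`, `deliver`, and `deadLetters` are invented names, not a broker API.

```javascript
// Poison-message handling sketch: retry a bounded number of times,
// then park the message in a dead-letter store for inspection.
const MAX_RETRIES = 3;
const deadLetters = [];

function deliver(msg, process) {
  const attempts = (msg.attempts || 0) + 1;
  try {
    process(msg);
    return 'acked';
  } catch (err) {
    if (attempts >= MAX_RETRIES) {
      deadLetters.push(msg); // stop looping; a human (or tool) triages later
      return 'dead-lettered';
    }
    return deliver({ ...msg, attempts }, process); // requeue and retry
  }
}

const outcome = deliver({ id: 'ORD-999' }, () => { throw new Error('bad payload'); });
console.log(outcome, deadLetters.length); // 'dead-lettered' 1
```

RabbitMQ can enforce this natively via a dead-letter exchange, and SQS via a redrive policy; the logic is the same either way.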
Real-World Scale
LinkedIn: 7 trillion messages/day on Kafka. Netflix: 700 billion events/day. Uber: Kafka handles all trip events globally.