Module 14 Lesson 4: Project: Multi-Service Architecture

The microservices puzzle: learn how to orchestrate a complex system with a gateway, separate auth and order services, and a shared message queue.

In this project, we move beyond "Web + DB." We are building a high-traffic architecture that uses an API Gateway, a Background Worker, and a Redis message queue.

1. The Components

  • Gateway: Nginx (routes traffic to the right service).
  • Auth Service: Node.js (handles logins).
  • Order Service: Python (handles business logic).
  • Worker: Python (processes heavy tasks like email or PDFs).
  • Broker: Redis (passes messages between services).

2. The Microservice Compose File

```yaml
services:
  gateway:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # routing rules (sketched below)
    ports:
      - "80:80"   # the only port exposed to the host
    depends_on:
      - auth-api
      - order-api

  auth-api:
    build: ./auth
    environment:
      - REDIS_HOST=queue   # Compose's internal DNS resolves service names

  order-api:
    build: ./order
    environment:
      - REDIS_HOST=queue

  worker:
    build: ./worker
    environment:
      - REDIS_HOST=queue

  queue:
    image: redis:alpine
```
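
The Compose file mounts an nginx.conf that the lesson doesn't show. Here is a minimal sketch of what it could look like; the URL prefixes (/auth/, /orders/) and the internal ports (3000 for the Node service, 8000 for the Python service) are assumptions you should match to your own Dockerfiles.

```nginx
# nginx.conf -- a hypothetical sketch of the gateway's routing rules.
events {}

http {
    server {
        listen 80;

        # Compose's internal DNS resolves service names to container IPs.
        location /auth/ {
            proxy_pass http://auth-api:3000/;   # assumed Node.js port
        }

        location /orders/ {
            proxy_pass http://order-api:8000/;  # assumed Python port
        }
    }
}
```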

Visualizing the Process

```mermaid
graph TD
    Client[Client] --> Gateway[Nginx Gateway]
    Gateway --> Auth[Auth API]
    Gateway --> Order[Order API]
    Auth -->|UserLoggedIn| Queue[(Redis Broker)]
    Queue -->|subscribed| Order
    Queue -->|tasks| Worker[Background Worker]
```
3. Communication Patterns

In this architecture, services don't talk directly to each other; direct calls are brittle, because a slow or crashed service drags its callers down with it. Instead, they communicate through the broker using the Pub/Sub pattern:

  1. The Auth API finishes a login and publishes a "UserLoggedIn" message to Redis.
  2. The Order API is subscribed to that channel. It sees the message and prepares a personalized menu.
  3. The benefit: the publisher never needs to know who is listening, so services can be added, removed, or redeployed independently. One caveat: plain Redis Pub/Sub is fire-and-forget. If the Order API is down when a message is published, that message is lost; the "catch up when it comes back" behavior requires a durable structure such as a Redis list (LPUSH/BRPOP) or Redis Streams.
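
Here is a minimal sketch of both sides using the redis-py client. Both are shown in Python for brevity (the lesson's Auth service is Node.js, but the pattern is identical); the file names, the "events" channel, and the message payload are illustrative, while REDIS_HOST comes from the Compose file above.

```python
# publisher.py -- the Auth API side of the pattern (illustrative).
import json
import os

import redis

r = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"))

# publish() is fire-and-forget: only subscribers connected right now see it.
r.publish("events", json.dumps({"type": "UserLoggedIn", "user_id": 42}))
```

```python
# subscriber.py -- the Order API side of the pattern (illustrative).
import json
import os

import redis

r = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"))

pubsub = r.pubsub()
pubsub.subscribe("events")

# listen() blocks, yielding each message published to the channel.
for message in pubsub.listen():
    if message["type"] == "message":
        event = json.loads(message["data"])
        print(f"Received event: {event['type']}")
```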

4. Key Takeaways

  1. Shared infrastructure: every service reaches the same broker simply by using its Compose hostname (queue); no IP addresses to manage.
  2. Isolation: each service runs in its own container with its own filesystem and process space, so a compromise of the auth-api does not automatically expose the order-api.
  3. Scaling individual parts (review Module 6): you can scale only the worker service, e.g. docker compose up --scale worker=10, when there is a backlog of emails to send, without wasting money scaling the auth-api. The queue-backed worker sketched below makes this safe, because all instances share one task list.
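
To get the durable, catch-up delivery that plain Pub/Sub lacks, the worker can consume a Redis list instead of a channel. A minimal sketch, again assuming redis-py; the "tasks" list name and the job payload are invented for illustration:

```python
# worker.py -- a durable queue consumer (illustrative). Items pushed with
# LPUSH wait in the "tasks" list until some worker pops them, so jobs
# survive a worker restart and are shared across scaled-out instances.
import json
import os

import redis

r = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"))

while True:
    # BRPOP blocks until an item is available, then returns (list_name, payload).
    _, payload = r.brpop("tasks")
    task = json.loads(payload)
    print(f"Processing: {task['type']}")  # e.g. send an email, render a PDF
```

Any service enqueues work with r.lpush("tasks", json.dumps({"type": "send_email"})) and moves on; whichever worker instance is free picks the job up.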

Exercise: The Message Loop

  1. Set up a simple two-service Compose file with a publisher and a subscriber (use Python for both).
  2. Add a redis container.
  3. Use the redis-py library to send a "hello" message from one to the other.
  4. Kill the subscriber container, then send 5 more messages.
  5. Restart the subscriber. Did it receive the 5 messages? (The answer depends on whether you used Pub/Sub or a list.)
  6. How would you use docker-compose logs -f to watch both services at once?

Summary

This is how much of "Big Tech" operates. By decoupling your services with Docker and a message broker like Redis, you build a system that scales horizontally, tolerates the failure of individual services, and can be updated one piece at a time.

Next Lesson: Lightning fast: Containerizing a Static Site with Nginx.
