
Module 14 Lesson 4: Project: Multi-Service Architecture
The microservices puzzle. Learn how to orchestrate a complex system with a Gateway, separate Services for Auth and Data, and a shared Message Queue.
In this project, we move beyond "Web + DB." We are building a high-traffic architecture that uses an API Gateway, a Background Worker, and a Redis message queue.
1. The Components
- Gateway: Nginx (routes traffic to the right service).
- Auth Service: Node.js (handles logins).
- Order Service: Python (handles business logic).
- Worker: Python (processes heavy tasks like email or PDFs).
- Broker: Redis (passes messages between services).
2. The Microservice Compose File
```yaml
services:
  gateway:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"

  auth-api:
    build: ./auth
    environment:
      - REDIS_HOST=queue

  order-api:
    build: ./order
    environment:
      - REDIS_HOST=queue

  worker:
    build: ./worker
    environment:
      - REDIS_HOST=queue

  queue:
    image: redis:alpine
```
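The Compose file mounts a custom `nginx.conf` into the gateway. A minimal sketch of what that file might contain, assuming the Auth service listens on port 3000 and the Order service on port 8000 (both ports are hypothetical; use whatever your services actually bind):

```nginx
events {}

http {
    # Upstream names resolve via Docker's internal DNS
    upstream auth   { server auth-api:3000; }
    upstream orders { server order-api:8000; }

    server {
        listen 80;

        # Route by URL prefix to the right service
        location /auth/   { proxy_pass http://auth/; }
        location /orders/ { proxy_pass http://orders/; }
    }
}
```

Because all services sit on the same Compose network, Nginx can reach `auth-api` and `order-api` by service name with no IP addresses hard-coded.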
Visualizing the Process
```mermaid
graph TD
    Start[Input] --> Process[Processing]
    Process --> Decision{Check}
    Decision -->|Success| End[Complete]
    Decision -->|Retry| Process
```
3. Communication Patterns
In this architecture, services don't call each other directly (direct coupling is brittle). Instead, they communicate through the broker using the Pub/Sub pattern:
- The Auth API finishes a login and publishes a "UserLoggedIn" message to Redis.
- The Order API is subscribed to that channel. It sees the message and prepares a personalized menu.
- The Benefit: the publisher never needs to know who is listening, so services can be added or removed freely. One caveat: plain Redis Pub/Sub is fire-and-forget, so a subscriber that is down misses messages. If you need messages to survive downtime so a service can "catch up" on the work it missed when it restarts, use a Redis list as a work queue (LPUSH/BRPOP) or Redis Streams instead.
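The flow above can be sketched with the `redis-py` library. The channel name `user-events` and the event fields are assumptions for illustration; the hostname `queue` matches the service name in the Compose file:

```python
import json

CHANNEL = "user-events"  # hypothetical channel name for this sketch


def make_login_event(user_id: str) -> str:
    """Serialize a UserLoggedIn event as JSON."""
    return json.dumps({"type": "UserLoggedIn", "user_id": user_id})


def run_publisher() -> None:
    """Auth API side: announce that a user logged in."""
    import redis  # redis-py; imported here so the helper above has no broker dependency
    client = redis.Redis(host="queue")  # hostname = Compose service name
    client.publish(CHANNEL, make_login_event("alice"))


def run_subscriber() -> None:
    """Order API side: react to each UserLoggedIn event."""
    import redis
    client = redis.Redis(host="queue")
    pubsub = client.pubsub()
    pubsub.subscribe(CHANNEL)
    for message in pubsub.listen():
        if message["type"] != "message":
            continue  # skip subscribe confirmations
        event = json.loads(message["data"])
        if event["type"] == "UserLoggedIn":
            print(f"Preparing menu for user {event['user_id']}")
```

Note that `run_subscriber` blocks forever waiting for messages, which is exactly the behavior you want inside a long-running container.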
4. Key Takeaways
- Shared Infrastructure: all services reach the same `queue` service simply by using its hostname.
- Isolation: if the `auth-api` is hacked, the `order-api` is safe on its separate IP.
- Scaling individual parts (review Module 6): you can scale ONLY the `worker` service to 10 instances if you have a lot of emails to send, without wasting money scaling the `auth-api`.
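Scaling just one service from the takeaways above is a single command; the service name `worker` comes from the Compose file:

```shell
# Start the stack with 10 worker replicas; every other service stays at 1
docker compose up -d --scale worker=10
```

This only works cleanly for services that don't publish a fixed host port, which is why the workers sit behind the queue rather than behind the gateway.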
Exercise: The Message Loop
- Set up a simple 2-service Compose file with a `publisher` and a `subscriber` (use Python for both).
- Add a `redis` container.
- Use the `redis-py` library to send a "hello" message from one to the other.
- Kill the `subscriber` container. Send 5 more messages.
- Restart the `subscriber`. Did it receive the 5 messages?
- How would you use `docker-compose logs -f` to watch both services at once?
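A sketch of the Compose file for this exercise, assuming your publisher and subscriber code lives in `./publisher` and `./subscriber` (hypothetical paths):

```yaml
services:
  publisher:
    build: ./publisher
    environment:
      - REDIS_HOST=redis

  subscriber:
    build: ./subscriber
    environment:
      - REDIS_HOST=redis

  redis:
    image: redis:alpine
```

With this layout, `docker-compose logs -f publisher subscriber` tails both services in one terminal, interleaved with timestamps.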
Summary
This is how "Big Tech" works. By decoupling your services using Docker and a message broker like Redis, you create a system that scales horizontally, tolerates individual failures, and is easy to update one piece at a time.
Next Lesson: Lightning fast: Containerizing a Static Site with Nginx.