
Module 14 Lesson 1: Dockerizing a Node.js Full-Stack App
From code to container. A step-by-step guide to containerizing a modern Node.js application with a React frontend, Express backend, and MongoDB database.
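A minimal sketch of what such a Dockerfile can look like, assuming a multi-stage build with the React client in `client/` and the Express server in `server/` (the paths, port, and entry file are assumptions for illustration):

```dockerfile
# Stage 1: build the React client
FROM node:20-alpine AS client-build
WORKDIR /app/client
COPY client/package*.json ./
RUN npm ci
COPY client/ ./
RUN npm run build

# Stage 2: run the Express server, serving the built client as static files
FROM node:20-alpine
WORKDIR /app
COPY server/package*.json ./
RUN npm ci --omit=dev
COPY server/ ./
COPY --from=client-build /app/client/build ./public
EXPOSE 3000
CMD ["node", "index.js"]
```

The multi-stage split keeps the client's build toolchain out of the final image, so only the production server and static assets ship.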

Hands-on with YAML. Learn the structure of a Compose file and build a working stack with a web server and a database in minutes.
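The Compose file structure this lesson covers can be sketched as a two-service stack; the images and credentials below are placeholders, not a recommendation:

```yaml
services:
  web:
    image: nginx:alpine          # stand-in web server
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example # demo value only; use secrets in production
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

`docker compose up` brings both containers up on a shared network where `web` can reach the database by the hostname `db`.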
The capstone of the basics. Build a complete Voting App architecture with a Python worker, a Redis queue, a Postgres DB, and a Node.js results page.

Your first automated build. Follow a step-by-step guide to creating, committing, and running your very first GitLab CI/CD pipeline.
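A first pipeline of this kind usually fits in a few lines of `.gitlab-ci.yml`; the stage names, image, and npm scripts below are assumptions for a Node.js project:

```yaml
stages:
  - build
  - test

build-job:
  stage: build
  image: node:20-alpine
  script:
    - npm ci
    - npm run build

test-job:
  stage: test
  image: node:20-alpine
  script:
    - npm test
```

Committing this file to the repository root is enough for GitLab to detect it and run the pipeline on the next push.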
Hands-on: Design the mission statement and tool-set for your first Bedrock Agent.
Hands-on: Write a Lambda function and a schema for an agent that can track package deliveries.
Hands-on: Trace the execution of a multi-step agent and identify reasoning bottlenecks.
Hands-on: Design the architecture for a multi-agent system that handles a complex business workflow.
Hands-on: Design a human-in-the-loop workflow for a high-value financial transaction agent.
Hands-on: Create a CloudWatch dashboard that tracks your agent's success rate and token spend.
Hands-on: Design a secure architecture that involves IAM, Secrets Manager, and Guardrails.
Hands-on: Design a deployment strategy for a mission-critical AI agent.
Hands-on: Identify a business workflow that requires the control and state management of AgentCore.
Hands-on: Design a multi-node AgentCore graph that includes AI reasoning and data validation.
Hands-on: Verify your AWS configuration and enable your first foundation models.
Hands-on: Design an AgentCore workflow that uses a critic node to verify RAG output.
Hands-on: Write a script that compares the outputs of two different models using the same prompt.
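A comparison script of this kind can be built on the Bedrock Runtime `Converse` API via boto3. This is a sketch under assumptions: the model IDs in the usage comment are examples that vary by region and account access, and `compare_models` needs valid AWS credentials:

```python
def build_converse_request(model_id, prompt):
    """Build the keyword arguments for the bedrock-runtime Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def compare_models(model_ids, prompt):
    """Send the same prompt to each model and collect the replies."""
    import boto3  # imported here so the pure helper above stays testable offline
    client = boto3.client("bedrock-runtime")
    results = {}
    for model_id in model_ids:
        response = client.converse(**build_converse_request(model_id, prompt))
        results[model_id] = response["output"]["message"]["content"][0]["text"]
    return results

# Example (requires AWS credentials and Bedrock model access; IDs are illustrative):
#   compare_models(
#       ["anthropic.claude-3-haiku-20240307-v1:0", "amazon.titan-text-express-v1"],
#       "Summarize the CAP theorem in one sentence.",
#   )
```

Keeping the request builder separate from the network call makes the prompt and inference settings easy to inspect and unit-test.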
Hands-on: Apply robust prompt engineering to reduce hallucinations and maximize cost-efficiency.
Hands-on: Build a streaming CLI chat application that handles tokens as they arrive.
Hands-on: Build a functional REST API with FastAPI that exposes multiple Bedrock models.
Hands-on: Design your first Knowledge Base and select your chunking strategy.
Hands-on: Build a Python script that answers questions about your S3 documents and prints the citations.
Hands-on: Implement a Bedrock Guardrail and verify your grounding instructions.
Reviewing professional prompting techniques and completing a structured output project.
Reviewing non-text GenAI and practicing image generation and code refactoring.
Reviewing the AI tech stack and building a simple RAG pipeline and local AI setup.
Hands-on: Combine tools, memory, and reasoning into a single autonomous Research Agent.
Hands-on: Build an Information Extraction agent that converts raw text into a clean Python object.
Hands-on: Build a Cost-Monitoring Callback that calculates and prints the price of every AI request.
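The core of such a callback is simple arithmetic over token counts. This framework-free sketch shows the idea (in LangChain the same logic would live in a callback handler fired after each LLM call); the per-1K-token prices are illustrative placeholders, not real rates:

```python
# Illustrative placeholder prices per 1K tokens, not real vendor rates.
PRICES_PER_1K = {
    "example-model": {"input": 0.003, "output": 0.015},
}

def estimate_cost(model, input_tokens, output_tokens, prices=PRICES_PER_1K):
    """Return the estimated USD cost of a single request."""
    rate = prices[model]
    return (input_tokens / 1000) * rate["input"] + (output_tokens / 1000) * rate["output"]

class CostMonitor:
    """Accumulates spend across requests; call on_request after each LLM call."""
    def __init__(self):
        self.total = 0.0

    def on_request(self, model, input_tokens, output_tokens):
        cost = estimate_cost(model, input_tokens, output_tokens)
        self.total += cost
        print(f"{model}: ${cost:.6f} (running total ${self.total:.6f})")
        return cost
```

Because the monitor only needs token counts, it works with any provider that reports usage metadata in its responses.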
Hands-on: Set up LangSmith and trace a complex multi-tool agentic decision.
Hands-on: Create a production-ready FastAPI endpoint with caching and retry logic.
Hands-on: Finalize your deployment package and learn about cloud hosting options for LangChain apps.
Hands-on: Build a reusable prompt library and implement a few-shot dynamic classifier.
Hands-on: Build a Parallel-Sequential Research Chain that writes, reviews, and translates a report.
Hands-on: Build a pipeline that loads a multi-page PDF and splits it into optimized chunks.
Hands-on: Build a local knowledge base using ChromaDB and perform semantic queries.
Hands-on: Finalize your first production-ready RAG system over your own local documents.
Hands-on: Build a persistent chatbot that remembers your name across different CLI sessions.
Hands-on: Build a toolbox for an agent that can multiply numbers and search Wikipedia.
Hands-on: Compare a chatbot vs an agent workflow and identify agent-worthy problems.
Hands-on: Design a configuration-driven agent with a global PII-filtering policy.
Hands-on: Use a decision matrix to select the right framework for three real-world business scenarios.
Hands-on: Build a self-correcting agent loop that uses Pydantic to validate outputs.
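The loop pattern behind this lesson can be sketched without any framework. Here `generate` stands for any LLM call returning text and `validate` for any check that raises `ValueError` on bad output (a Pydantic `model_validate` call fits that slot); the retry prompt wording is an assumption:

```python
import json

def self_correcting_loop(generate, validate, prompt, max_retries=3):
    """Ask the model for JSON, validate it, and feed errors back on failure."""
    for attempt in range(max_retries):
        raw = generate(prompt)
        try:
            data = json.loads(raw)
            validate(data)
            return data
        except (json.JSONDecodeError, ValueError) as err:
            # Append the error so the model can correct itself on the next try
            prompt = f"{prompt}\n\nYour last answer was invalid ({err}). Return corrected JSON only."
    raise RuntimeError("model never produced valid output")
```

The key design choice is feeding the concrete validation error back into the prompt, which gives the model something specific to fix instead of a blind retry.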
Hands-on: Secure an agent against jailbreaking and implement a PII redaction layer.
Hands-on: Build a state-machine governed agent that handles a login and payment flow.
Hands-on: Build a simple RAG agent that retrieves context from a local Vector DB before answering.
Hands-on: Design a dashboard concept for a multi-agent research crew that shows planning and status.
Hands-on: Build a manual, single-step agent loop from scratch in Python.
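A single manual step of such a loop fits in a few lines: ask the model, parse a tool call from its reply, and execute it. The `CALL name(args)` / `FINAL answer` reply format below is a toy protocol assumed for the sketch, not a standard:

```python
import re

TOOLS = {
    "multiply": lambda a, b: a * b,
}

def run_single_step(llm, question):
    """One manual agent step: ask the model, parse a tool call, execute it."""
    reply = llm(question)
    match = re.match(r"CALL (\w+)\(([\d\s,.-]*)\)", reply)
    if match:
        name, raw_args = match.groups()
        args = [float(x) for x in raw_args.split(",")]
        return f"Tool {name} returned {TOOLS[name](*args)}"
    # No tool call: treat the reply as the final answer
    return reply.removeprefix("FINAL ").strip()
```

Real frameworks replace the regex with structured tool-calling, but the control flow (model proposes, code executes) is the same.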
Hands-on: Design a planner-executor flow for a multi-step research task.
Hands-on: Build a tool-using LangChain agent and observe the reasoning traces in real-time.
Hands-on: Convert a simple LangChain AgentExecutor into a controlled LangGraph workflow.
Hands-on: Build a complex research agent with routing, loop guards, and a validation gate.
Hands-on: Build a two-agent research crew that identifies tech trends and writes a report.
Hands-on: Build an event-driven agent pipeline that detects toxicity and triggers a response.
Prepare your machine for Ollama. A hands-on guide to checking your hardware and selecting your first model.
Hands-on: The complete RAG project. Index a folder of text files and build a bot that can answer questions about them.
Hands-on: Secure your environment. Final checks for a professional, compliant local AI setup.
Hands-on: Deployment with Docker Compose. Building a multi-container stack with Ollama and a Web UI.
Hands-on: Deploying to a remote server. Final operational checks before going live.
Hands-on session: Pulling your first model and having a high-speed conversation with a local AI.
Put your knowledge to the test. Compare Llama, Mistral, and Gemma on speed, humor, and logic.
Hands-on: Benchmarking your machine. Compare quantization levels and measure memory usage in real-time.
Hands-on: Creating a specialized AI persona from scratch. Move beyond the default registry.
Hands-on: The full workflow from Hugging Face download to Ollama creation.
Hands-on: Creating a fully functional, streaming terminal chatbot using Python and Ollama.
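The streaming side of such a chatbot builds on Ollama's `/api/chat` endpoint, which returns newline-delimited JSON chunks. This sketch assumes a locally running Ollama server on the default port; the token-extraction helper is pure and works offline:

```python
import json

def extract_token(ndjson_line):
    """Pull the text token out of one streamed Ollama /api/chat line."""
    chunk = json.loads(ndjson_line)
    if chunk.get("done"):
        return ""
    return chunk.get("message", {}).get("content", "")

def stream_chat(model, messages, host="http://localhost:11434"):
    """Print a chat reply token by token; requires a running Ollama server."""
    import requests  # third-party; kept local so extract_token is testable offline
    with requests.post(
        f"{host}/api/chat",
        json={"model": model, "messages": messages, "stream": True},
        stream=True,
    ) as resp:
        for line in resp.iter_lines():
            if line:
                print(extract_token(line), end="", flush=True)
    print()

# Example (needs `ollama serve` running and the model pulled):
#   stream_chat("llama3.2", [{"role": "user", "content": "Hello!"}])
```

Printing with `end=""` and `flush=True` is what makes the terminal output appear token by token rather than all at once.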
Hands-on: Combine system prompts, JSON mode, and negative constraints to build a production-ready data extractor.