Module 4 Exercises: Deployment Operations

Master the lifecycle of a production app. Perform rolling updates, debug resource limits, and practice the emergency rollback.

In Module 4, we focused on the "How" of deployments. You learned about declarative YAML, imperative debugging, and how to protect your nodes from hungry containers. These exercises will put those skills to use in a simulated production environment.


Exercise 1: Deploy a Multi-Container Pod

Create a YAML for a Pod that has two containers:

  1. Main Container: nginx
  2. Sidecar Container: busybox that runs the command while true; do echo "$(date) - All systems healthy" >> /var/log/health.log; sleep 10; done.
  3. Shared Volume: Use an emptyDir volume to allow the sidecar to write to a file and Nginx to serve it.
  4. Verification: Use kubectl exec to see the content of the log file from the Nginx container.
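A minimal sketch of such a manifest is below. The Pod and volume names (web-logger, health-log) are illustrative, and mounting the shared volume at /usr/share/nginx/html is one way to let Nginx serve the sidecar's file (note this hides the default index.html):

```yaml
# Two-container Pod sharing an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-logger
spec:
  volumes:
    - name: health-log
      emptyDir: {}            # lives as long as the Pod does
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: health-log
          mountPath: /usr/share/nginx/html   # health.log becomes servable at /health.log
    - name: sidecar
      image: busybox
      command: ["/bin/sh", "-c"]
      args:
        - while true; do echo "$(date) - All systems healthy" >> /usr/share/nginx/html/health.log; sleep 10; done
      volumeMounts:
        - name: health-log
          mountPath: /usr/share/nginx/html
```

To verify, read the file through the Nginx container: kubectl exec web-logger -c web -- cat /usr/share/nginx/html/health.log.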

Exercise 2: The High-Stakes Rolling Update

Create a Deployment with 3 replicas.

  1. Initial Image: nginx:1.14.2
  2. Update Strategy: Set maxSurge: 1 and maxUnavailable: 0.
  3. The Update: Update the image to nginx:1.16.1.
  4. Monitoring: Use kubectl rollout status to watch the transition.
  5. Audit: Run kubectl get pods. How many pods were running at the peak of the update?
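A sketch of the Deployment for this exercise (the name "web" and its labels are placeholders):

```yaml
# 3-replica Deployment with a conservative rolling-update strategy:
# add at most 1 extra pod, never drop below 3 ready pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
```

Trigger the update with kubectl set image deployment/web nginx=nginx:1.16.1, then watch it with kubectl rollout status deployment/web.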

Exercise 3: Recovering from a Bad Release

  1. The Error: Update your deployment to use a non-existent image (e.g., nginx:this-version-does-not-exist).
  2. The Impact: Wait 1 minute. Run kubectl get pods. Why are the old pods still running?
  3. The Recovery: Use the kubectl rollout undo command to cancel the failed update.
  4. Verification: Run kubectl rollout history to verify you are back to the stable version.
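Assuming the Deployment from Exercise 2 is named "web", the break-and-recover sequence looks like this:

```
# Break the deployment (the image tag is deliberately bogus)
kubectl set image deployment/web nginx=nginx:this-version-does-not-exist

# The new pod sits in ImagePullBackOff / ErrImagePull;
# the 3 old pods keep serving traffic
kubectl get pods

# Roll back to the previous ReplicaSet
kubectl rollout undo deployment/web

# Confirm which revision is now current
kubectl rollout history deployment/web
```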

Exercise 4: Resource Limit Debugging

  1. Creation: Create a pod with a memory limit of 64Mi.
  2. Stress Test: Run a command inside the pod (or use a heavy image) that consumes 100Mi of RAM.
  3. Observation: Run kubectl describe pod. What status reason do you see? (Hint: Look for "OOMKilled").
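One way to build this test pod is sketched below. The polinux/stress image and its flags are an assumption; any process that allocates ~100Mi inside a 64Mi limit will produce the same result:

```yaml
# Pod guaranteed to exceed its 64Mi memory limit and be OOMKilled.
apiVersion: v1
kind: Pod
metadata:
  name: memory-hog
spec:
  containers:
    - name: stress
      image: polinux/stress          # assumption: a small image bundling the stress tool
      resources:
        limits:
          memory: 64Mi               # the ceiling the kernel will enforce
      command: ["stress"]
      args: ["--vm", "1", "--vm-bytes", "100M", "--vm-hang", "1"]
```

Then inspect the container's last state with kubectl describe pod memory-hog and look for the termination reason.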

Solutions (Self-Check)

Exercise 1 Strategy:

You need a Volume defined in spec.volumes and volumeMounts defined in both containers[0] and containers[1] pointing to the same volume name.

Exercise 2 Solution:

At the peak, you should have seen 4 pods running. Because maxSurge was 1, K8s created 1 new pod before killing any of the 3 old ones.

Exercise 3 Hint:

The old pods stayed running because your maxUnavailable was set to 0. K8s refused to kill a stable pod until a new pod passed its health checks. Since the new pod couldn't pull its image, it never became "Ready," so the rollout paused.

Exercise 4 Solution:

You will see Reason: OOMKilled. This is your signal that your Limits were set too low for the application's actual workload.


Summary of Module 4

You are now a Kubernetes Operator.

  • You can manage infrastructure at scale using Declarative YAML.
  • You are a ninja with the kubectl CLI.
  • You can perform Rolling Updates with zero downtime.
  • You are prepared for disasters with Rollbacks.
  • You can protect your cluster with Resource Requests and Limits.

In Module 5: Networking in Kubernetes, we will move beyond simple Services and master the "External Gateway": Ingress Controllers.
