Caching Strategies: Redis and Memory

The fastest response is the one you don't have to calculate. Learn how to implement caching to drastically reduce database load and latency.

The most expensive operation in an API is often fetching data from a database or calling a third-party AI service. If 1,000 users ask the same question, why calculate the answer 1,000 times?

In this lesson, we learn how to use caching to return results in under 5 ms.


1. What is Caching?

Caching means storing a copy of a result in a fast location (RAM) so that future requests can skip the work.

The "Price" of Data:

  • Database Query: 50ms - 200ms.
  • Redis Cache: 1ms - 5ms.
  • In-Memory Cache: 0.1ms.
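
In its simplest form, a cache is just a lookup table in front of an expensive function. Here is a minimal sketch of the idea; slow_square is a hypothetical stand-in for a database query or API call:

import time

_cache = {}

def slow_square(n: int) -> int:
    time.sleep(0.1)  # simulate 100 ms of "hard work"
    return n * n

def cached_square(n: int) -> int:
    if n in _cache:           # hit: skip the work
        return _cache[n]
    result = slow_square(n)   # miss: do the work once...
    _cache[n] = result        # ...and remember it for next time
    return result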

2. Choosing Your Cache: Redis vs. In-Memory

In-Memory (Local):

  • Pros: Instant speed.
  • Cons: If you have 3 servers, they all have different caches. If you restart the server, the cache is wiped.
  • Tool: functools.lru_cache or cachetools (see the sketch after this list).
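
For simple cases, the standard library gives you an in-memory cache in one line. A minimal sketch; fetch_profile is a hypothetical expensive lookup:

from functools import lru_cache

@lru_cache(maxsize=1024)  # keeps the 1,024 most recently used results in RAM
def fetch_profile(user_id: int) -> dict:
    # Hypothetical slow lookup; runs at most once per distinct user_id.
    return {"id": user_id, "name": f"user-{user_id}"}

Note that lru_cache has no expiry; if you need per-entry TTLs locally, cachetools.TTLCache provides them.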

Redis (Distributed):

  • Pros: Every server shares the same cache. Data survives application restarts. Extremely powerful features (TTLs, Lists, Sets; see the sketch after this list).
  • Cons: Slight network overhead (though still incredibly fast).
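
A quick sketch of a few of those features in redis-py, assuming a local Redis on the default port:

import redis

r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

r.setex("session:42", 1800, "active")   # string value with a 30-minute TTL
r.lpush("recent_searches", "caching")   # push onto a list
r.sadd("online_users", "alice", "bob")  # add members to a set
print(r.ttl("session:42"))              # seconds remaining before expiry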

3. Implementing Redis with FastAPI

We usually use the redis-py library.

import json

import redis
from fastapi import FastAPI

app = FastAPI()
r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

@app.get("/user/{id}")
def get_user(id: int):
    # Plain def (not async): FastAPI runs it in a threadpool, so the
    # blocking Redis client can't stall the event loop.

    # 1. Check if it's in the cache
    cached_user = r.get(f"user:{id}")
    if cached_user:
        return json.loads(cached_user)

    # 2. If not, do the hard work (DB)
    user = db.fetch_from_slow_database(id)  # placeholder for your real DB call

    # 3. Save it for next time (with a 1-hour expiry)
    r.setex(f"user:{id}", 3600, user.json())  # Pydantic v1 .json(); on v2 use .model_dump_json()

    return user
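
If you'd rather keep the endpoint async, redis-py ships an asyncio client (redis.asyncio). A sketch of the same hit/miss logic, reusing app, json, and the placeholder db from above:

import redis.asyncio as aioredis

ar = aioredis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

@app.get("/user-async/{id}")
async def get_user_async(id: int):
    cached_user = await ar.get(f"user:{id}")  # non-blocking cache check
    if cached_user:
        return json.loads(cached_user)
    user = db.fetch_from_slow_database(id)    # placeholder DB call
    await ar.setex(f"user:{id}", 3600, user.json())
    return user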

4. Cache Invalidation: The "Hard" Part

The hardest part of caching is knowing when to delete entries. If a user updates their password, you must delete the cached version of that user, or the stale version, old password and all, will keep being served for up to an hour.

Simple Rule: Cache every GET request. Invalidate the cache on every POST, PUT, or DELETE request for that resource, as sketched below.
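
A minimal sketch of that rule, reusing the Redis client from above; db.update_user and the UserUpdate model are hypothetical:

from pydantic import BaseModel

class UserUpdate(BaseModel):
    name: str

@app.put("/user/{id}")
def update_user(id: int, payload: UserUpdate):
    user = db.update_user(id, payload)  # hypothetical write path
    r.delete(f"user:{id}")              # invalidate: the next GET repopulates the cache
    return user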


Visualizing the Cache Flow

graph TD
    A["User Request"] --> B{"Is it in Cache?"}
    B -- "HIT (found)" --> C["Return from Redis (2ms)"]
    B -- "MISS (not found)" --> D["Query Database (150ms)"]
    D --> E["Save to Redis"]
    E --> F["Return Response"]

Summary

  • Caching: Reduces load and improves speed.
  • Redis: The industry standard for shared, persistent caching.
  • TTL: Always set an expiry (Time To Live) so your data doesn't stay stale forever.
  • Invalidation: Don't forget to clear the cache when data changes!

In the next lesson, we wrap up Module 15 with Load Testing and Stress Testing.


Exercise: The Expiry Logic

You are building an Employee Directory.

  1. Employee names change once a year.
  2. Employee "In/Out" status changes 10 times a day.

How would you set the TTL (Time to Live) differently for these two pieces of data?
