
30 - The Cache Aside Pattern (Optimise your caching approach!)
Caching speeds up reads by storing frequently requested data in a faster store closer to the application, avoiding repeated round-trips to the underlying data store. The cache-aside pattern is specifically for caches that do not natively implement read-through or write-through — the application itself is responsible for populating and invalidating the cache.
The flow is: (1) attempt a cache read; (2) on a miss, read from the data store; (3) write the result into the cache before returning it to the caller, so the next request for the same data is served from cache. Over time, hot data naturally rises into the cache without needing to pre-load everything upfront.
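The three steps above can be sketched in a few lines. This is a minimal illustration, not production code: `SimpleTTLCache` is a hypothetical in-memory stand-in for a real cache such as Azure Cache for Redis, and `load_from_store` stands in for whatever data-store query the application would run on a miss.

```python
import time
from typing import Any, Callable, Dict, Optional, Tuple


class SimpleTTLCache:
    """In-memory stand-in for a distributed cache, with per-entry TTL."""

    def __init__(self) -> None:
        self._store: Dict[str, Tuple[Any, float]] = {}

    def get(self, key: str) -> Optional[Any]:
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily drop expired entries
            return None
        return value

    def set(self, key: str, value: Any, ttl_seconds: float) -> None:
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def delete(self, key: str) -> None:
        self._store.pop(key, None)


def get_with_cache_aside(
    cache: SimpleTTLCache,
    key: str,
    load_from_store: Callable[[str], Any],
    ttl_seconds: float = 60.0,
) -> Any:
    value = cache.get(key)                      # (1) attempt a cache read
    if value is None:                           # (2) miss: query the data store
        value = load_from_store(key)
        cache.set(key, value, ttl_seconds)      # (3) populate before returning
    return value
```

The second call for the same key within the TTL is served entirely from the cache, which is how hot data rises into the cache without any pre-loading.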
Key implementation considerations include:
- Expiration policy — set a TTL appropriate to how frequently the data changes; too short defeats the purpose, too long serves stale data.
- Eviction policy — when Azure Cache for Redis reaches capacity, configure eviction behaviour so the hottest data stays in cache.
- Consistency on write — the simplest approach is to invalidate (remove) the cache entry on write; the next read will repopulate it from the source of truth.
- Distributed cache — when multiple application instances share a single Redis cache, an invalidation performed by instance A is immediately visible to instance B; per-instance in-memory caches do not have this property and should only be used for truly static or infrequently changing data.
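The consistency-on-write approach from the list above can be sketched as follows. Plain dicts stand in for the cache and the underlying data store, and the function name is illustrative; the important detail is the ordering and that the write path removes the entry rather than updating it.

```python
from typing import Any, Dict


def update_with_invalidation(
    cache: Dict[str, Any], store: Dict[str, Any], key: str, new_value: Any
) -> None:
    store[key] = new_value   # write the source of truth first
    cache.pop(key, None)     # then invalidate; do NOT write the new value here
    # The next cache-aside read misses and repopulates from the store,
    # so the cache never holds a value newer than the source of truth.
```

Invalidating instead of updating avoids a race where two concurrent writers leave the cache holding the older of the two values.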
Use this pattern when cache-miss population must be dynamic and unpredictable. If data changes on a predictable schedule (e.g., a product catalogue refreshed monthly), consider pre-warming the cache at application startup instead, which avoids the first-request penalty entirely.
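For the predictable-schedule case, a pre-warming step at startup might look like the sketch below. The names are hypothetical: `load_catalogue` stands in for a bulk query against the data store, and a plain dict stands in for the cache.

```python
from typing import Any, Callable, Dict


def prewarm_cache(
    cache: Dict[str, Any], load_catalogue: Callable[[], Dict[str, Any]]
) -> int:
    """Bulk-load a predictable dataset at startup so the first request
    never pays the cache-miss penalty. Returns the number of entries loaded."""
    for key, value in load_catalogue().items():
        cache[key] = value
    return len(cache)
```

With a monthly catalogue refresh, this would run once at startup and again on the refresh schedule, rather than on demand per key.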
Related Content

27 - The Compute Resource Consolidation Pattern (Optimise for Cost!)
Are you running dedicated compute for every tenant, microservice, or application instance — and paying for it? The Compute Resource Consolidation pattern shows you how to consolidate tasks onto shared infrastructure, such as a single AKS cluster with namespace isolation or an Azure SQL elastic pool, to reduce costs and management overhead. This episode explores the key trade-offs: blast radius containment, noisy neighbour contention, scalability profiles, and multi-tenancy strategies. Part of the "Architecting for the Cloud, One Pattern at a Time" series.
26 - The Pub Sub, Priority Queue and Pipes and Filters Patterns
Chris Reddington and Will Eastbury cover three closely related messaging patterns in one packed episode. They start with the Publish-Subscribe (Pub/Sub) pattern — arguably the most transformative shift in enterprise messaging — where a single producer broadcasts to multiple isolated subscribers via Azure Service Bus topics or Azure Event Grid. Real-world use cases include insurance aggregators, credit check pipelines, and bank account sign-up workflows. From there they move to the Priority Queue pattern, which ensures high-priority messages are processed before lower-priority ones even when consumers are under load. Finally, the Pipes and Filters pattern decomposes complex message processing into a chain of discrete, reusable transformation steps — reducing complexity and enabling independent scaling of each stage. The episode also connects these patterns back to earlier topics like Competing Consumers and Queue-Based Load Leveling, and flags related patterns including Choreography and Compensating Transactions.
CGN1 - Cloud Gaming Notes Episode 1 - Hosting a Game Server
Ever thought about what it takes to host a game in the Cloud? Well, this is the series for you! On the first Wednesday of every month, we explore Cloud Concepts that impact your journey to a connected multiplayer gaming experience! In this first session, we'll play some Minecraft and talk through the concept of a hosted game server.