
Microservices have communication issues, especially when they fail
Communication between entities has been a long-standing topic in software engineering; IPC, message brokers, and queues are only a few of the main actors in this drama. Building distributed microservices that communicate reliably — and fail gracefully — is one of the hardest engineering challenges teams face today.
In this episode, Chris is joined by Francesco, a software engineer based in Dublin building a payment gateway, to explore practical microservices communication patterns. Francesco shares lessons from handling real-world distributed transactions in production, including why naive REST-over-HTTP calls break down at scale and under failure conditions.
A central focus is the Saga pattern — a way to manage multi-step business transactions across microservices without relying on distributed locks or two-phase commits. The episode covers orchestration-based Sagas using AWS Step Functions, how to design compensating transactions for rollback scenarios, and when to accept eventual consistency over strong consistency.
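The core idea of an orchestration-based Saga can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical payment flow; the function and step names are not from the episode, and a real orchestrator (such as AWS Step Functions) would persist state and handle retries:

```python
def run_saga(steps):
    """Run (action, compensation) pairs in order. If any action fails,
    run the compensations of already-completed steps in reverse order,
    instead of relying on distributed locks or two-phase commit."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            # Roll back via compensating transactions, newest first.
            for comp in reversed(completed):
                comp()
            return False
    return True
```

Each compensation is a normal business operation (refund a charge, release an inventory hold) rather than a database rollback, which is why the system is eventually consistent rather than strongly consistent during recovery.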
The conversation also covers practical observability strategies for distributed systems — including distributed tracing and surfacing failures across service boundaries. Francesco and Chris discuss fire-and-forget messaging patterns, event buses (EventBridge), WebSocket use cases, and DynamoDB triggers for event-driven workflows.
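One building block behind distributed tracing is propagating a correlation ID across service boundaries so a failure can be followed end to end. A minimal sketch, assuming plain HTTP-style header dicts; the `X-Correlation-Id` header name is a common convention, not something prescribed in the episode:

```python
import uuid

CORRELATION_HEADER = "X-Correlation-Id"

def ensure_correlation_id(incoming_headers):
    """Reuse the caller's correlation ID if present, else mint a new one."""
    return incoming_headers.get(CORRELATION_HEADER) or str(uuid.uuid4())

def outgoing_headers(incoming_headers):
    """Headers for downstream calls, carrying the same correlation ID
    so log entries from every service in the chain can be joined up."""
    return {CORRELATION_HEADER: ensure_correlation_id(incoming_headers)}
```

Tracing systems layer spans and timing on top of this, but the ID that survives every hop is what lets you surface failures across service boundaries.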
Throughout, both speakers return to the importance of pragmatism: apply Occam’s Razor to your architecture — the simplest solution that solves the problem is usually the right one.
Related Content

21 - The Queue Based Load Levelling and Competing Consumers Pattern
Do you have an application with specific scalability and continuity-of-service requirements? What happens when traffic spikes dramatically — think a major concert or FIFA World Cup ticket sale crashing a site? In this Architecting for the Cloud episode, Chris and Will Eastbury walk through three closely related patterns: Queue-Based Load Levelling, Competing Consumers, and the Asynchronous Request-Reply pattern. They explore how message queues act as shock absorbers for traffic spikes, how competing consumers enable elastic horizontal scaling, and how async request-reply lets you retrofit these patterns into existing architectures with minimal disruption. Key trade-offs covered include queue depth limits, Azure Service Bus configuration, distributed tracing with Application Insights, and when the added complexity genuinely justifies reaching for these patterns.
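The two patterns combine naturally: the queue absorbs the spike, and competing consumers drain it in parallel. A toy in-process sketch, using Python's standard `queue` and `threading` modules as stand-ins for a real broker like Azure Service Bus; the job payloads and worker count are illustrative:

```python
import queue
import threading

def run_workers(jobs, worker_count=3):
    q = queue.Queue()        # the queue levels the load from a burst of jobs
    results = []
    lock = threading.Lock()

    def worker():
        # Competing consumers: every worker pulls from the same queue.
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return
            with lock:
                results.append(job * 2)  # stand-in for real processing
            q.task_done()

    for job in jobs:
        q.put(job)
    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Scaling out is then just raising `worker_count` (or, with a real broker, adding consumer instances) without touching the producer.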
26 - The Pub Sub, Priority Queue and Pipes and Filter Patterns
Chris Reddington and Will Eastbury cover three closely related messaging patterns in one packed episode. They start with the Publish-Subscribe (Pub/Sub) pattern — arguably the most transformative shift in enterprise messaging — where a single producer broadcasts to multiple isolated subscribers via Azure Service Bus topics or Azure Event Grid. Real-world use cases include insurance aggregators, credit check pipelines, and bank account sign-up workflows. From there they move to the Priority Queue pattern, which ensures high-priority messages are processed before lower-priority ones even when consumers are under load. Finally, the Pipes and Filters pattern decomposes complex message processing into a chain of discrete, reusable transformation steps — reducing complexity and enabling independent scaling of each stage. The episode also connects these patterns back to earlier topics like Competing Consumers and Queue-Based Load Leveling, and flags related patterns including Choreography and Compensating Transactions.
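The essence of Pub/Sub — one producer broadcasting to multiple isolated subscribers — fits in a few lines. A minimal in-memory sketch standing in for an Azure Service Bus topic or Event Grid; the class and method names are illustrative:

```python
class Topic:
    """Toy topic: each published message is delivered to every subscriber."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, message):
        # Subscribers are isolated: a failure in one handler
        # does not stop delivery to the others.
        for handler in self._subscribers:
            try:
                handler(message)
            except Exception:
                pass
```

A real broker adds durable subscriptions, retries, and dead-lettering, but the decoupling is the same: the producer never knows who, or how many, are listening.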

24 - Health Endpoint Monitoring Pattern (Monitor your service and its dependencies!)
Stop waiting for users to tell you something is broken. The Health Endpoint Monitoring pattern gives your services a dedicated health-check endpoint that aggregates the status of all dependent components—databases, APIs, storage—into a single observable response. This episode covers the pattern in detail, including design considerations around caching, security, denial-of-service exposure, and integration with Azure Monitor and Application Insights.
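The aggregation at the heart of the pattern can be sketched simply. This assumes each dependency exposes a zero-argument check callable; the dependency names and status strings are illustrative, not from the episode:

```python
def health(checks):
    """Aggregate per-dependency checks into one health payload.

    checks: mapping of dependency name -> callable returning True if healthy.
    A check that raises is treated as unhealthy rather than crashing the endpoint.
    """
    results = {}
    for name, check in checks.items():
        try:
            results[name] = "healthy" if check() else "unhealthy"
        except Exception:
            results[name] = "unhealthy"
    overall = "healthy" if all(v == "healthy" for v in results.values()) else "unhealthy"
    return {"status": overall, "dependencies": results}
```

An HTTP handler would serve this payload (typically with a 200 or 503 status code) for a monitor like Azure Monitor to poll — which is also where the episode's caveats about caching and denial-of-service exposure come in, since each probe may fan out to every dependency.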