
I can use any Azure Compute Service to solve any problem? (Azure Mythbusters)
Azure offers a broad spectrum of compute services, each optimised for different workload characteristics — picking the wrong one results in the wrong scaling envelope, the wrong cost model, and unnecessary management overhead.
The episode steps through the main options:
- App Service — fully managed multi-tenant hosting for web apps and APIs; drag a slider to scale.
- App Service Environment (ASE) — single-tenant App Service deployed inside your own virtual network, required for compliance scenarios such as PCI DSS where the public front-end of standard App Service is not acceptable.
- Container Instances — simple, fast container execution for single containers or small groups without the orchestration overhead of Kubernetes; ideal for short-lived tasks.
- Azure Kubernetes Service (AKS) — managed Kubernetes with the platform handling control-plane upgrades; you still configure node pools, networking, and auto-scale rules, and must understand Kubernetes concepts to use it effectively.
- Service Fabric — Microsoft’s native microservices platform (underpinning many Azure services) that natively supports stateful services by co-locating data with code, eliminating the network hop to an external state store — critical for ultra-low-latency workloads.
- Azure Batch — managed job scheduler for large-scale HPC and Monte Carlo-style workloads across thousands of VMs; think big scheduled jobs, not microservices.
- Azure Functions — event-driven serverless execution; consumption plan abstracts all infrastructure but you still choose between consumption and dedicated plans, and must design for stateless execution (or use Durable Functions for state management).
- Logic Apps — low-code graphical workflow designer billed per step execution; great for rapid prototyping and integration scenarios.
- Virtual Machines — maximum flexibility and compatibility for lift-and-shift or workloads that cannot run on PaaS; requires manual scaling or custom automation.
- VM Scale Sets — treats a fleet of VMs as a single object with auto-scale rules; workloads must be stateless since any VM can be removed at any time.
The key questions across all options are whether your workload is stateful or stateless, how sensitive it is to latency, what compliance constraints apply, and how predictable or spiky the load profile is.
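Those questions can be sketched as a simple decision helper. This is an illustrative mapping of the episode's criteria to candidate services, not an official Microsoft decision tree; the profile keys and service short-lists are assumptions drawn from the bullets above.

```python
def candidate_compute(workload: dict) -> list:
    """Suggest Azure compute candidates for a workload profile.

    `workload` holds boolean flags named after the questions the episode
    raises. Simplified sketch: real selection weighs many more factors.
    """
    candidates = []
    if workload.get("needs_os_control"):
        # Lift-and-shift or anything that cannot run on PaaS.
        candidates += ["Virtual Machines", "VM Scale Sets"]
    if workload.get("stateful_low_latency"):
        # Service Fabric co-locates state with code, avoiding the
        # network hop to an external state store.
        candidates.append("Service Fabric")
    if workload.get("compliance_vnet_only"):
        # e.g. PCI DSS scenarios where multi-tenant App Service won't do.
        candidates.append("App Service Environment")
    elif workload.get("web_or_api"):
        candidates.append("App Service")
    if workload.get("event_driven_spiky"):
        candidates.append("Azure Functions (consumption plan)")
    if workload.get("batch_hpc"):
        candidates.append("Azure Batch")
    if workload.get("containerised"):
        # Orchestration needs decide between AKS and Container Instances.
        candidates.append("AKS" if workload.get("needs_orchestration")
                          else "Container Instances")
    return candidates
```

For example, a containerised, event-driven workload with no orchestration needs would surface Azure Functions and Container Instances as the candidates to compare.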
Related Content

Azure Myth 6: Cloud is expensive - Azure MythBuster
Cloud is not inherently more expensive than on-premises once you account for hardware depreciation, power, cooling, and network costs — but it requires designing cost into your architecture from the start. This Azure Mythbusters episode examines fixed versus variable cost envelopes, auto-scaling strategies for spiky workloads like Black Friday traffic, the IaaS/PaaS/serverless cost spectrum, and cost as an implicit sixth pillar alongside the five pillars of software quality.
Deploying a multi-region Serverless API Layer (Part 1)
In my spare time, I work on a pet project called Theatreers. The aim is a microservice-based platform focused on Theatre / Musical Theatre (bringing a few of my passion areas together). I've recently re-architected the project to align with a multi-region serverless technology stack.

There are no clear architecture patterns for the Cloud? (Azure Mythbusters)
Cloud design patterns are abundant and well-documented on the Azure Architecture Center — from established patterns like cache-aside and materialized view to cloud-native ones like circuit breaker and health endpoint monitoring. This Azure Mythbusters episode tours the full pattern catalogue and deep-dives four key patterns: cache-aside, circuit breaker (open/half-open/closed states), health endpoint monitoring, and materialized view in CQRS/event sourcing scenarios.
