• Edge latency vs regional ecosystem depth: Edge runtimes win on user-perceived latency and request-path compute, but come with tighter execution constraints and different state/networking patterns. Regional clouds offer broad integrations and event ecosystems, but add latency and can hide cold-start and concurrency cliffs until scale.
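The latency tradeoff above can be sketched with a back-of-the-envelope budget. All numbers below are illustrative assumptions, not measurements: the point is that edge compute only wins when the request can be served without a round trip back to a regional origin.

```python
# Hypothetical round-trip figures to illustrate the latency budget (assumptions, not benchmarks).
EDGE_RTT_MS = 15       # user -> nearest edge PoP (assumed)
REGION_RTT_MS = 90     # user -> a single regional deployment (assumed)
COMPUTE_MS = 10        # request-path compute, same on either platform (assumed)
ORIGIN_FETCH_MS = 80   # edge -> regional origin, when state lives in-region (assumed)

# Edge serves the request entirely from the PoP.
edge_pure = EDGE_RTT_MS + COMPUTE_MS
# Edge must call back to a regional origin for state.
edge_with_origin = EDGE_RTT_MS + COMPUTE_MS + ORIGIN_FETCH_MS
# Regional deployment handles everything in-region.
regional = REGION_RTT_MS + COMPUTE_MS

print(edge_pure, edge_with_origin, regional)  # 25 105 100
```

Under these assumed numbers, a pure edge path is roughly 4x faster than regional, but an edge function that fetches state from a regional origin is slightly slower than just running in-region, which is why state placement dominates the decision.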
• Cold starts, concurrency, and execution ceilings: Most serverless failures aren't outages; they're invisible limits: cold start latency, throttling, timeouts, and memory/CPU coupling. A platform that looks fine in dev can degrade under bursts or long-tail traffic when concurrency and warm capacity assumptions break.
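A quick way to see why cold starts hide in dev but surface in the tail: even a 2% cold-start rate leaves median latency untouched while dominating p99. The latencies below are assumed round numbers, and the every-50th-request cold placement is a deterministic stand-in for a 2% rate.

```python
# Hypothetical latencies: a sketch of how a small cold-start rate dominates tail latency.
WARM_MS = 30           # assumed warm-invocation latency
COLD_PENALTY_MS = 800  # assumed cold-start initialization cost
                       # (2% of invocations hit a cold instance in this sketch)

def percentile(samples, p):
    """Nearest-rank percentile over a list of samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(p / 100 * len(ordered)))
    return ordered[idx]

# 1000 invocations; every 50th one is cold (deterministic stand-in for 2%).
latencies = [WARM_MS + (COLD_PENALTY_MS if i % 50 == 0 else 0)
             for i in range(1000)]

p50 = percentile(latencies, 50)
p99 = percentile(latencies, 99)
print(p50, p99)  # 30 830 -> p50 stays warm; p99 absorbs the full cold-start penalty
```

This is why synthetic load tests with steady concurrency look clean: they keep instances warm. Bursty or long-tail traffic keeps landing on cold capacity, and the penalty shows up only in the high percentiles.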
• Pricing physics and cost cliffs: Serverless pricing is often "pay per use," but cost cliffs appear with sustained traffic, chatty APIs, and egress-heavy workloads. Requests, duration, memory allocation, and networking/egress can dominate costs. The winning choice depends on workload shape and how predictable usage is.
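The cost-cliff claim above reduces to simple arithmetic: cost scales with requests × duration × memory, so a spiky workload is nearly free while a sustained chatty API crosses the break-even line against an always-on instance. The rates below are hypothetical figures loosely shaped like typical FaaS pricing, not any vendor's actual price sheet, and egress is omitted for simplicity.

```python
# Hypothetical per-unit rates (assumptions, not a real price sheet; egress excluded).
PER_MILLION_REQUESTS = 0.20  # $ per 1M requests (assumed)
PER_GB_SECOND = 0.0000167    # $ per GB-second of execution (assumed)
FLAT_VM_MONTHLY = 35.0       # $ for an always-on small instance (assumed)

def faas_monthly_cost(requests, avg_duration_s, memory_gb):
    """Monthly cost = request charge + (requests * duration * memory) compute charge."""
    request_cost = requests / 1_000_000 * PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PER_GB_SECOND
    return request_cost + compute_cost

# Spiky workload: 2M requests/month, 100 ms each at 128 MB -> under a dollar.
spiky = faas_monthly_cost(2_000_000, 0.1, 0.125)
# Sustained chatty API: 300M requests/month, 100 ms each at 512 MB -> the cliff.
sustained = faas_monthly_cost(300_000_000, 0.1, 0.5)

print(round(spiky, 2), round(sustained, 2), sustained > FLAT_VM_MONTHLY)
```

Under these assumed rates the spiky workload costs well under a dollar a month, while the sustained API lands around $310, roughly 9x the flat-rate instance. That sensitivity to requests, duration, and memory is the "workload shape" dependence the bullet describes.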