Cloudflare Workers
This page is a decision brief, not a review. It explains when Cloudflare Workers tends to fit, where it usually struggles, and how costs behave as your needs change. This page covers Cloudflare Workers in isolation; side-by-side comparisons live on separate pages.
What this product actually is
An edge compute runtime for running code close to users. It is optimized for low latency and global distribution, in exchange for tighter runtime constraints than general-purpose compute.
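To make the runtime model concrete, here is a minimal sketch of a Worker in the module syntax. The export default { fetch } entry point is the documented handler shape; the response body is purely illustrative.

```ts
// Minimal module-syntax Worker: one fetch handler, invoked at the edge
// for every request routed to this Worker.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`Hello from the edge (path: ${url.pathname})`);
  },
};
```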
Pricing behavior (not a price list)
These points describe when users typically pay more, what actions trigger upgrades, and the mechanics of how costs escalate.
Actions that trigger upgrades
- Need to move logic closer to users to reduce latency
- Need global distribution without multi-region operations
- Need a fallback path to general-purpose compute for workloads that don’t fit edge runtime constraints
When costs usually spike
- Platform constraints shape architecture; designing around them can pull in additional billable services
- Stateful services often require complementary storage products (KV, Durable Objects, queues), each billed separately
- Data access patterns (latency back to origin) often decide whether edge execution makes sense at all
- Validate limits against your workload early; discovering them late forces refactors
Plans and variants (structural only)
Grouped by type to show structure, not to rank or recommend specific SKUs.
Plans
- Requests + CPU time (usage-based): pricing is driven by request volume and execution time; a cost sketch follows this list.
- State/storage add-ons (separate billing): KV, Durable Objects, queues, and other storage primitives have their own pricing.
- Limits/quotas (plan for bursts): validate per-request constraints and quotas against your workload.
- Official pricing: https://developers.cloudflare.com/workers/platform/pricing/
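As a way to reason about usage-based billing, the sketch below composes a monthly estimate from request volume and CPU time. Every rate constant is a hypothetical placeholder, not a real Cloudflare price; pull current numbers from the official pricing page above.

```ts
// Hedged sketch of usage-based billing mechanics. The rates below are
// PLACEHOLDERS, not real Cloudflare prices; substitute values from the
// official pricing page before using this for real estimates.
interface UsageRates {
  perMillionRequests: number; // USD per 1M billable requests (placeholder)
  perMillionCpuMs: number;    // USD per 1M billable CPU-ms (placeholder)
  includedRequests: number;   // requests covered by the base plan
  includedCpuMs: number;      // CPU-ms covered by the base plan
}

function estimateMonthlyCost(
  requests: number,
  avgCpuMsPerRequest: number,
  rates: UsageRates,
): number {
  const billableRequests = Math.max(0, requests - rates.includedRequests);
  const billableCpuMs = Math.max(
    0,
    requests * avgCpuMsPerRequest - rates.includedCpuMs,
  );
  return (
    (billableRequests / 1_000_000) * rates.perMillionRequests +
    (billableCpuMs / 1_000_000) * rates.perMillionCpuMs
  );
}

// Example: 50M requests/month averaging 5 ms CPU each, with made-up rates.
const estimate = estimateMonthlyCost(50_000_000, 5, {
  perMillionRequests: 0.3,      // placeholder
  perMillionCpuMs: 0.02,        // placeholder
  includedRequests: 10_000_000, // placeholder
  includedCpuMs: 30_000_000,    // placeholder
});
console.log(`Estimated monthly cost: $${estimate.toFixed(2)}`);
```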
Costs & limitations
Common limits
- Runtime constraints compared to general-purpose compute
- State/storage model requires careful design
- Not a drop-in replacement for VM/container workloads
- Long-running/background tasks and heavier compute patterns may not fit
- You must design around how state and data access work in edge runtimes (see the KV sketch after this list)
- Some application architectures are simply better served by general-purpose compute
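As an example of edge-native state design, the sketch below uses Workers KV as a read-heavy cache. The KVNamespace get/put calls are the documented API; the CACHE binding name and the TTL are our assumptions.

```ts
// Edge-state sketch using Workers KV as a read-heavy cache. The CACHE
// binding name is an assumption you would declare in wrangler.toml.
interface Env {
  CACHE: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname;
    // KV is eventually consistent and read-optimized: good for config
    // and cache data, a poor fit for strongly consistent state.
    const cached = await env.CACHE.get(key);
    if (cached !== null) {
      return new Response(cached, { headers: { "x-cache": "hit" } });
    }
    const value = `generated for ${key} at ${Date.now()}`;
    // Illustrative 5-minute TTL; KV writes propagate globally, but not
    // instantly, so readers in other regions may briefly see stale data.
    await env.CACHE.put(key, value, { expirationTtl: 300 });
    return new Response(value, { headers: { "x-cache": "miss" } });
  },
};
```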
What breaks first
- Workloads that require long-running processes or heavy compute beyond the runtime constraints (a sketch of the background-work boundary follows this list)
- Stateful architectures that assume local persistent storage or long-lived connections
- Data access latency if your state remains centralized far from users
- Debugging/observability needs if you treat edge runtimes like standard servers
- Vendor/runtime constraints that force architectural changes later
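One concrete version of that boundary: the documented ctx.waitUntil call lets a handler finish short async work after responding, but it does not turn a Worker into a long-running process. The log-ingest endpoint below is hypothetical.

```ts
// Background-work boundary sketch: ctx.waitUntil extends the request's
// lifetime for short, bounded async work (logging, cache writes), but
// multi-minute jobs still exceed the runtime's execution limits.
export default {
  async fetch(
    request: Request,
    env: unknown,
    ctx: ExecutionContext,
  ): Promise<Response> {
    const response = new Response("ok");
    // Hypothetical log-ingest endpoint; the POST completes after the
    // response has already been returned to the client.
    ctx.waitUntil(
      fetch("https://logs.example.com/ingest", {
        method: "POST",
        body: JSON.stringify({ path: new URL(request.url).pathname }),
      }),
    );
    return response;
  },
};
```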
Fit assessment
Good fit if…
- Latency-sensitive edge APIs and request processing
- Teams wanting global distribution without managing regions
- Request-time logic and middleware patterns that benefit from running close to users (a middleware sketch follows this list)
- Architectures that can cleanly separate edge logic from stateful backends
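A hedged sketch of the middleware pattern: inspect a request at the edge, tag it, forward it to an origin, and adjust the response on the way back. The origin URL and header names are our inventions; new Request(url, request) and new Response(body, response) are the standard rewrite idioms.

```ts
// Middleware sketch: rewrite the request toward an assumed origin, add
// an edge header, then reconstruct the response to tag it on the way out.
const ORIGIN_URL = "https://origin.example.com"; // hypothetical origin

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const originRequest = new Request(
      `${ORIGIN_URL}${url.pathname}${url.search}`,
      request, // copies method, headers, and body
    );
    originRequest.headers.set("x-edge-middleware", "1"); // illustrative
    const originResponse = await fetch(originRequest);
    // Responses from fetch have immutable headers; reconstructing makes
    // them mutable so the edge can annotate the reply.
    const response = new Response(originResponse.body, originResponse);
    response.headers.set("x-served-via", "edge-middleware");
    return response;
  },
};
```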
Poor fit if…
- You need full OS-level control and standard VM/container patterns
- Your workload requires heavy compute or complex stateful services
Trade-offs
Every design choice has a cost. Here are the explicit trade-offs:
- Edge distribution → runtime constraints
- Low latency → different state/storage patterns
- Less region ops → more dependency on runtime limits and platform model
- Great for request-time logic → not suitable for every compute workload
Common alternatives people evaluate next
These are common “next shortlists” — same tier, step-down, step-sideways, or step-up — with a quick reason why.
- Fly.io — Step-sideways / app platform. Compared when teams want broader app-platform flexibility for services that don’t fit edge runtime constraints.
- Render — Step-sideways / managed PaaS. Considered when the workload is a standard web service and teams prefer managed-PaaS simplicity over edge constraints.
- AWS EC2 — Step-up / general-purpose compute. Shortlisted when runtime constraints or state-model requirements make edge execution a poor architectural fit.
Sources & verification
Pricing and behavioral information comes from public documentation and structured research. When information is incomplete or volatile, we prefer to say so rather than guess.