Cloudflare Workers Setup
Cloudflare Workers give you a full serverless platform with zero cold starts, running at 300+ edge locations worldwide. But building a production system on Workers means more than deploying a single script — you need KV for caching, Durable Objects for stateful coordination, R2 for object storage, Queues for async processing, and a CI/CD pipeline that handles staging environments and gradual rollouts. We set up the entire Workers ecosystem for your product.
Need this done for your project?
We implement, you ship. Async, documented, done in days.
Project Architecture & Bindings
We structure your Workers project as a monorepo with shared libraries, typed bindings, and environment-specific configuration. Each Worker gets its own wrangler.toml with explicit bindings to KV namespaces, R2 buckets, Durable Objects, and service bindings for inter-worker communication.
// Type-safe environment bindings
interface Env {
  // KV Namespaces
  SESSIONS: KVNamespace;
  CACHE: KVNamespace;

  // R2 Buckets
  UPLOADS: R2Bucket;

  // Durable Objects
  RATE_LIMITER: DurableObjectNamespace;
  ROOM: DurableObjectNamespace;

  // Queues
  EMAIL_QUEUE: Queue<EmailMessage>;
  WEBHOOK_QUEUE: Queue<WebhookPayload>;

  // Service Bindings
  AUTH_SERVICE: Fetcher;

  // Secrets (set via wrangler secret put)
  DATABASE_URL: string;
  STRIPE_SECRET_KEY: string;

  // Variables
  ENVIRONMENT: string;
}

We use wrangler types to auto-generate TypeScript definitions for all bindings, ensuring compile-time safety. Shared utilities — request parsing, error handling, CORS headers — live in a shared package imported by all Workers.
Durable Objects for Stateful Logic
Durable Objects provide strongly consistent, single-threaded state at the edge. We implement them for use cases like rate limiting, WebSocket rooms, shopping carts, and collaborative editing.
export class RateLimiter implements DurableObject {
  private state: DurableObjectState;
  private requests: Map<string, number[]> = new Map();

  constructor(state: DurableObjectState, env: Env) {
    this.state = state;
    // Restore state from storage on wake
    this.state.blockConcurrencyWhile(async () => {
      const stored = await this.state.storage.get<Map<string, number[]>>('requests');
      if (stored) this.requests = stored;
    });
  }

  async fetch(request: Request): Promise<Response> {
    const ip = request.headers.get('CF-Connecting-IP') || 'unknown';
    const now = Date.now();
    const window = 60_000; // 1 minute
    const limit = 100;
    const timestamps = (this.requests.get(ip) || []).filter(t => t > now - window);
    if (timestamps.length >= limit) {
      return new Response('Rate limit exceeded', { status: 429 });
    }
    timestamps.push(now);
    this.requests.set(ip, timestamps);
    await this.state.storage.put('requests', this.requests);
    return new Response('OK', {
      headers: { 'X-RateLimit-Remaining': String(limit - timestamps.length) }
    });
  }
}

Durable Objects are evicted from memory when idle, so you pay only for active usage. We configure alarm handlers for periodic cleanup and implement the WebSocket Hibernation API for real-time features that scale to thousands of concurrent connections per object.
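The periodic cleanup an alarm handler performs mostly reduces to pruning stale timestamps from the rate-limit map. A sketch of that logic as a pure, testable function (pruneWindows is our name, not part of the Workers API); inside the Durable Object, alarm() would call it, persist the result to storage, and re-arm the alarm:

```typescript
// Drop timestamps older than the sliding window and delete keys that
// become empty, so storage does not grow without bound.
function pruneWindows(
  requests: Map<string, number[]>,
  now: number,
  windowMs: number,
): Map<string, number[]> {
  const pruned = new Map<string, number[]>();
  for (const [key, timestamps] of requests) {
    const fresh = timestamps.filter((t) => t > now - windowMs);
    if (fresh.length > 0) pruned.set(key, fresh); // keep only active clients
  }
  return pruned;
}

// Example: one stale client, one active client, 60-second window
const now = 100_000;
const state = new Map<string, number[]>([
  ["1.2.3.4", [now - 120_000]],          // all requests outside the window
  ["5.6.7.8", [now - 1_000, now - 500]], // recent requests
]);
const result = pruneWindows(state, now, 60_000);
console.log(result.size);                    // 1
console.log(result.get("5.6.7.8")?.length);  // 2
```

Keeping the pruning logic pure means it can be unit-tested in Node without a Workers runtime, while the alarm handler stays a thin wrapper around storage reads and writes.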
Queues & Async Processing
Cloudflare Queues let you decouple request handling from background processing. We configure producer Workers that enqueue messages and consumer Workers that process batches with retry logic.
// Producer — enqueue from API handler
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const body = await request.json<{ to: string; template: string }>();
    await env.EMAIL_QUEUE.send({
      to: body.to,
      template: body.template,
      enqueuedAt: new Date().toISOString(),
    });
    return Response.json({ status: 'queued' }, { status: 202 });
  }
};

// Consumer — process email batch
export default {
  async queue(batch: MessageBatch<EmailMessage>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      try {
        await sendEmail(message.body, env);
        message.ack();
      } catch (err) {
        if (message.attempts < 3) {
          message.retry({ delaySeconds: Math.pow(2, message.attempts) * 10 });
        } else {
          // Dead letter — log and ack to prevent infinite retry
          console.error(`DLQ: Failed email to ${message.body.to}`, err);
          message.ack();
        }
      }
    }
  }
};

We configure batch size, batch timeout, and max retries based on your processing characteristics. Dead letter logging ensures no message is silently lost.
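Those consumer settings live in the consumer Worker's wrangler.toml. A sketch with illustrative values and queue names (tune batch size and retries for your workload):

```toml
[[queues.consumers]]
queue = "email-queue"
max_batch_size = 10      # messages delivered per queue() invocation
max_batch_timeout = 5    # seconds to wait while filling a batch
max_retries = 3
dead_letter_queue = "email-dlq"
```

Configuring a dead_letter_queue lets the platform capture exhausted messages for later inspection, as an alternative to the log-and-ack pattern in the handler above.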
CI/CD & Deployment Pipeline
We set up a GitHub Actions pipeline that deploys your Workers with staging preview, integration tests, and gradual production rollout.
name: Deploy Workers
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npm run typecheck
      - run: npm run test # Vitest + miniflare
  deploy-staging:
    needs: test
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci && npm run build
      - run: npx wrangler deploy --env staging
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CF_API_TOKEN }}
  deploy-production:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci && npm run build
      - run: npx wrangler deploy --env production
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CF_API_TOKEN }}

Local development uses Miniflare for a full Workers runtime simulation including KV, R2, Durable Objects, and Queues. Integration tests run against Miniflare before any deployment. Secrets are managed via wrangler secret put and never stored in source control.
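Day to day, the same setup is driven by a handful of wrangler commands, shown here as a reference (the secret name is an example taken from the Env interface above, and --env values follow the pipeline's staging/production split):

```
npx wrangler dev                      # local dev server on the Workers runtime
npx wrangler types                    # regenerate the Env interface from bindings
npx wrangler secret put STRIPE_SECRET_KEY --env production
npx wrangler deploy --env staging     # manual deploy, same command CI runs
```

Running wrangler types after any wrangler.toml change keeps the generated Env interface in sync with the actual bindings.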
Ready to get started?
Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.