Serverless Migration
Migrating from a monolith or container-based architecture to serverless is not a weekend project. It requires careful decomposition, incremental traffic shifting, and comprehensive rollback plans. We execute serverless migrations using the strangler fig pattern — progressively routing traffic from your existing application to new serverless functions until the monolith can be decommissioned. No big bang. No downtime. No risk.
Need this done for your project?
We implement, you ship. Async, documented, done in days.
Migration Assessment & Planning
We start by analyzing your existing application to identify which components are good candidates for serverless and which should remain as-is.
# Migration suitability matrix
#
# Component           | Serverless Fit | Reason
# ─────────────────────────────────────────────────────────────
# REST API endpoints  | HIGH           | Stateless, request/response
# Background jobs     | HIGH           | Event-driven, sporadic
# File processing     | HIGH           | S3 trigger + Lambda
# WebSocket server    | MEDIUM         | API GW WebSocket works but complex
# GraphQL API         | MEDIUM         | AppSync or Lambda, watch cold starts
# Long-running tasks  | MEDIUM         | Step Functions for >15 min
# ML model serving    | MEDIUM         | Lambda containers up to 10 GB
# Real-time streaming | LOW            | Kinesis + Lambda adds latency
# Stateful workers    | LOW            | Need persistent connections
# Database server     | NOT SUITABLE   | Use a managed service instead
# Priority order:
# 1. Background jobs (lowest risk, highest ROI)
# 2. API endpoints (strangler fig, one route at a time)
# 3. File processing (simple event trigger)
# 4. Scheduled tasks (EventBridge rules replace cron)

We document the migration plan with a phased timeline, a risk assessment per phase, and a rollback procedure for each component. Background jobs go first because they are the lowest risk — if a Lambda-based job fails, the existing job can be re-enabled in seconds.
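The "re-enabled in seconds" rollback for background jobs can be as simple as a kill-switch dispatcher. A minimal sketch, assuming the legacy worker stays deployed during the migration (all names here are hypothetical, and the flag source — env var, SSM parameter — is up to you):

```typescript
// Kill-switch dispatcher: routes a background job to the new Lambda-backed
// path, falling back to the legacy worker when the flag is off or the new
// path throws. The legacy job stays deployed for the whole migration window.
type JobRunner = (payload: unknown) => Promise<string>;

class JobDispatcher {
  constructor(
    private lambdaRunner: JobRunner,  // new serverless implementation
    private legacyRunner: JobRunner,  // existing monolith job
    private useLambda: () => boolean, // feature flag, e.g. from env or SSM
  ) {}

  async run(payload: unknown): Promise<string> {
    if (!this.useLambda()) return this.legacyRunner(payload);
    try {
      return await this.lambdaRunner(payload);
    } catch (err) {
      // Automatic fallback: flipping the flag off is the manual equivalent
      console.error("Lambda job failed, falling back to legacy:", err);
      return this.legacyRunner(payload);
    }
  }
}
```

Flipping `useLambda` to `false` is the rollback — no redeploy, no traffic drain.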
Strangler Fig Pattern Implementation
The strangler fig pattern routes traffic incrementally from the monolith to new serverless functions using a reverse proxy or API Gateway.
# API Gateway as the strangler fig proxy
# Routes are migrated one at a time from monolith → Lambda
resource "aws_apigatewayv2_api" "main" {
  name          = "api-${var.env}"
  protocol_type = "HTTP"
}

# Migrated route → Lambda
resource "aws_apigatewayv2_route" "get_orders" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "GET /api/orders"
  target    = "integrations/${aws_apigatewayv2_integration.get_orders_lambda.id}"
}

resource "aws_apigatewayv2_integration" "get_orders_lambda" {
  api_id                 = aws_apigatewayv2_api.main.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.get_orders.invoke_arn
  payload_format_version = "2.0"
}

# Not-yet-migrated routes → Monolith (catch-all)
resource "aws_apigatewayv2_route" "default" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "$default"
  target    = "integrations/${aws_apigatewayv2_integration.monolith.id}"
}

resource "aws_apigatewayv2_integration" "monolith" {
  api_id           = aws_apigatewayv2_api.main.id
  integration_type = "HTTP_PROXY"
  # For $default routes, API Gateway appends the full request path to the
  # integration URI, so no {proxy} variable is needed (or valid) here.
  integration_uri    = "http://${var.monolith_alb_dns}:3000"
  integration_method = "ANY"
}

API Gateway routes specific endpoints to Lambda functions while forwarding everything else to the monolith via the HTTP proxy integration. As you migrate each endpoint, you add a new Lambda route and the monolith handles less traffic. Eventually, the $default route catches nothing and the monolith can be decommissioned.
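On the Lambda side, each migrated route is a small handler. A minimal sketch of what the `GET /api/orders` function might look like (the data access is stubbed — the real function would query the new data store):

```typescript
// Minimal handler for a migrated route, shaped for API Gateway HTTP APIs
// with payload format 2.0: the response is a plain object carrying
// statusCode, headers, and a JSON string body.
interface HttpResult {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

export const handler = async (event: {
  rawPath: string;
  queryStringParameters?: Record<string, string>;
}): Promise<HttpResult> => {
  const limit = Number(event.queryStringParameters?.limit ?? "20");
  // Stub: in the real function this reads from the new data store
  const orders = [{ id: "ord_1", status: "shipped" }].slice(0, limit);
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ orders }),
  };
};
```

Because the route contract (path, query, JSON shape) is unchanged, clients cannot tell whether the monolith or the Lambda served the response — which is exactly what makes per-route migration safe.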
Data Layer Migration
The data layer is the hardest part of any migration. We implement dual-write patterns and change data capture to keep databases synchronized during the transition.
// Dual-write adapter — writes to both old and new databases
// Used during migration period only
class DualWriteOrderRepository {
  constructor(
    private legacy: PostgresOrderRepo,     // Old database
    private serverless: DynamoOrderRepo,   // New database
  ) {}

  async createOrder(order: Order): Promise<Order> {
    // Write to new database first (source of truth going forward)
    const result = await this.serverless.createOrder(order);
    // Write to legacy database (best-effort, for backward compatibility)
    try {
      await this.legacy.createOrder(order);
    } catch (err) {
      // Log but don't fail — new DB is source of truth
      console.error('Legacy write failed:', err);
      await this.publishToDeadLetter('legacy-write-failed', order);
    }
    return result;
  }

  async getOrder(id: string): Promise<Order | null> {
    // Read from new database
    const order = await this.serverless.getOrder(id);
    if (order) return order;
    // Fallback to legacy during migration
    return this.legacy.getOrder(id);
  }
}

// Phase 1: Read legacy, write both
// Phase 2: Read new, write both (current)
// Phase 3: Read new, write new only
// Phase 4: Decommission legacy database

The dual-write adapter progresses through four phases. We instrument each phase with metrics showing read/write ratios per database. When 100% of reads are served from the new database with zero fallback hits, the legacy database can be safely decommissioned.
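The four phases can be made explicit as a small policy table that the adapter consults, so a cutover is a config change rather than a code change. A sketch under that assumption (names hypothetical; the phase value would come from an environment variable or parameter store):

```typescript
// Phase policy driving the dual-write adapter: which database serves reads,
// and which databases receive writes, at each migration phase.
type MigrationPhase = 1 | 2 | 3 | 4;

interface DataPolicy {
  readFrom: "legacy" | "new";
  writeTo: Array<"legacy" | "new">;
}

function policyForPhase(phase: MigrationPhase): DataPolicy {
  switch (phase) {
    case 1: return { readFrom: "legacy", writeTo: ["legacy", "new"] }; // backfill
    case 2: return { readFrom: "new", writeTo: ["legacy", "new"] };    // verify
    case 3: return { readFrom: "new", writeTo: ["new"] };              // cut over
    case 4: return { readFrom: "new", writeTo: ["new"] };              // legacy gone
  }
}
```

Moving between phases 2 and 3 in either direction is then a one-line rollback path if the verification metrics regress.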
Traffic Shifting & Rollback
We shift traffic gradually from the monolith to serverless using weighted routing, with automated rollback on error threshold breach.
# Route 53 weighted routing for gradual migration
resource "aws_route53_record" "api_monolith" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "api.yourdomain.com"
  type    = "A"

  alias {
    name                   = aws_lb.monolith.dns_name
    zone_id                = aws_lb.monolith.zone_id
    evaluate_target_health = true
  }

  set_identifier = "monolith"

  weighted_routing_policy {
    weight = 90 # 90% to monolith initially
  }
}

resource "aws_route53_record" "api_serverless" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "api.yourdomain.com"
  type    = "A"

  alias {
    name                   = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].target_domain_name
    zone_id                = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].hosted_zone_id
    evaluate_target_health = true
  }

  set_identifier = "serverless"

  weighted_routing_policy {
    weight = 10 # 10% to serverless initially
  }

  health_check_id = aws_route53_health_check.serverless.id
}

# Progression: 10% → 25% → 50% → 75% → 100%
# Each stage runs for 48 hours with monitoring
# Rollback: set serverless weight to 0, monolith to 100

Health checks on the serverless endpoint automatically route traffic back to the monolith if the serverless API becomes unhealthy. We progress through the weight stages over two weeks, with 48 hours of monitoring at each stage. Rollback is a single Terraform variable change that sets the serverless weight to 0. The entire migration is documented in a runbook with clear go/no-go criteria for each stage.
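The go/no-go decision at each stage can itself be expressed as a pure function, which makes the runbook criteria testable. A minimal sketch, with illustrative stages and an assumed 1% error-rate threshold (in practice the inputs would come from CloudWatch metrics over the 48-hour window):

```typescript
// Go/no-go logic for each traffic-shift stage: promote to the next weight
// when the error rate stayed under the threshold, otherwise roll back to 0.
const STAGES = [10, 25, 50, 75, 100]; // serverless weight per stage

function nextServerlessWeight(
  current: number,
  errorRate: number,      // errors / requests over the monitoring window
  maxErrorRate = 0.01,    // assumed 1% threshold
): number {
  if (errorRate > maxErrorRate) return 0; // rollback: all traffic to monolith
  const i = STAGES.indexOf(current);
  if (i === -1 || i === STAGES.length - 1) return current; // done or unknown
  return STAGES[i + 1];
}
```

The returned weight feeds the Terraform variable from the records above, so promotion and rollback go through the same single-variable change.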
Why Anubiz Engineering
Ready to get started?
Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.