Serverless & Edge Computing

Serverless CI/CD Pipeline

Deploying serverless applications requires a different CI/CD approach than traditional containers. A single deployment can touch dozens of functions, API routes, event rules, and IAM permissions at once; one misconfiguration can break your entire backend. We build CI/CD pipelines for serverless applications using SAM, CDK, or Terraform, with automated testing against local emulators, canary deployments, and instant rollback capabilities.

Need this done for your project?

We implement, you ship. Async, documented, done in days.

Start a Brief

Pipeline Architecture

Our serverless CI/CD pipelines follow a clear progression: lint and typecheck, unit test, build and bundle, integration test against local emulator, deploy to staging, run smoke tests, deploy to production with canary.

name: Serverless CI/CD
on:
  push:
    branches: [main, 'release/**']
  pull_request:
    branches: [main]

env:
  NODE_ENV: test
  AWS_REGION: us-east-1

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npm run typecheck
      - run: npm run lint
      - run: npm run test:unit  # Vitest, no AWS calls

  integration:
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: docker compose -f docker-compose.test.yml up -d --wait  # Start LocalStack
      - run: npm run test:integration  # Against LocalStack/SAM local
        env:
          AWS_ENDPOINT: http://localhost:4566
          AWS_ACCESS_KEY_ID: test        # LocalStack accepts dummy credentials
          AWS_SECRET_ACCESS_KEY: test
          DYNAMODB_TABLE: test-table

  deploy-staging:
    needs: integration
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: staging
    permissions:
      id-token: write   # Required to mint the OIDC token for AWS federation
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci && npm run build
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_STAGING_ROLE }}
          aws-region: us-east-1
      - run: npx cdk deploy --all --require-approval never
        env: { STAGE: staging }

We use GitHub OIDC federation for AWS authentication — no long-lived credentials stored in secrets. Environment protection rules require approval for production deployments.
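
The trust policy behind that federation is what scopes the deploy role to a single repository and branch. A minimal sketch of the policy document as a TypeScript builder — the account ID and repo in the test below are placeholders, and `githubOidcTrustPolicy` is our own helper name, not an AWS API:

```typescript
// Builds the IAM trust policy that lets GitHub's OIDC provider assume a
// deploy role, restricted to one repo and branch. Values are placeholders.
function githubOidcTrustPolicy(accountId: string, repo: string, branch: string) {
  return {
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Principal: {
          Federated: `arn:aws:iam::${accountId}:oidc-provider/token.actions.githubusercontent.com`,
        },
        Action: 'sts:AssumeRoleWithWebIdentity',
        Condition: {
          StringEquals: {
            // Tokens minted for other repos or branches fail these checks
            'token.actions.githubusercontent.com:aud': 'sts.amazonaws.com',
            'token.actions.githubusercontent.com:sub': `repo:${repo}:ref:refs/heads/${branch}`,
          },
        },
      },
    ],
  };
}
```

Attach this as the role's assume-role policy; the `AWS_STAGING_ROLE` secret in the workflow then only needs to hold the role ARN.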

Local Testing with Emulators

Integration tests run against LocalStack or SAM Local, giving you confidence that Lambda functions, DynamoDB tables, SQS queues, and EventBridge rules work correctly before any cloud deployment.

# docker-compose.test.yml
services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      SERVICES: lambda,dynamodb,sqs,sns,events,s3
      DEFAULT_REGION: us-east-1
      LAMBDA_RUNTIME_ENVIRONMENT_TIMEOUT: 30
    volumes:
      - "./localstack-init:/etc/localstack/init/ready.d"

#!/bin/bash
# localstack-init/setup.sh: pre-seeds LocalStack with test resources
awslocal dynamodb create-table \
  --table-name orders \
  --attribute-definitions AttributeName=PK,AttributeType=S AttributeName=SK,AttributeType=S \
  --key-schema AttributeName=PK,KeyType=HASH AttributeName=SK,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST

awslocal sqs create-queue --queue-name order-processing
awslocal events create-event-bus --name app-events

// Integration test — exercises the full event flow against LocalStack
import { describe, it, expect } from 'vitest';
import { DynamoDBClient, GetItemCommand } from '@aws-sdk/client-dynamodb';

// LocalStack accepts any dummy credentials
const dynamo = new DynamoDBClient({
  endpoint: 'http://localhost:4566',
  region: 'us-east-1',
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' },
});

describe('Order Creation Flow', () => {
  it('writes order to DynamoDB and publishes event', async () => {
    const response = await fetch('http://localhost:3000/orders', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ customerId: 'cust_123', items: [{ sku: 'A1', qty: 2 }] }),
    });
    expect(response.status).toBe(201);

    const { orderId } = await response.json();
    const item = await dynamo.send(new GetItemCommand({
      TableName: 'orders',
      Key: { PK: { S: `ORDER#${orderId}` }, SK: { S: 'META' } },
    }));
    expect(item.Item).toBeDefined();
    expect(item.Item?.status?.S).toBe('created');
  });
});

LocalStack runs in Docker with pre-seeded resources. Tests execute in under 30 seconds, giving fast feedback without AWS costs.
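
If tests occasionally race the emulator's startup, a small readiness gate helps. This sketch — our own helper, assuming Node 18+ for global `fetch` — polls LocalStack's health endpoint before the suite runs:

```typescript
// Polls LocalStack's health endpoint until it responds, or throws after
// the timeout. Call from a Vitest globalSetup or beforeAll hook.
async function waitForLocalStack(
  url = 'http://localhost:4566/_localstack/health',
  timeoutMs = 30_000,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url);
      if (res.ok) return; // Emulator is up
    } catch {
      // Connection refused: container still starting
    }
    await new Promise((r) => setTimeout(r, 500));
  }
  throw new Error(`LocalStack did not become healthy within ${timeoutMs}ms`);
}
```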

Canary Deployments & Traffic Shifting

Production deployments use Lambda alias-based traffic shifting to gradually route traffic to the new version while monitoring for errors.

# AWS SAM canary deployment
Resources:
  OrderHandler:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs20.x
      AutoPublishAlias: live
      DeploymentPreference:
        Type: Canary10Percent5Minutes
        Alarms:
          - !Ref OrderHandlerErrorAlarm
          - !Ref OrderHandlerLatencyAlarm
        Hooks:
          PreTraffic: !Ref PreTrafficHook
          PostTraffic: !Ref PostTrafficHook

  PreTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: hooks/pre-traffic.handler
      Runtime: nodejs20.x
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: codedeploy:PutLifecycleEventHookExecutionStatus
              Resource: '*'

  OrderHandlerErrorAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      MetricName: Errors
      Namespace: AWS/Lambda
      Statistic: Sum
      Period: 60
      EvaluationPeriods: 1
      Threshold: 5
      ComparisonOperator: GreaterThanThreshold
      Dimensions:
        - Name: FunctionName
          Value: !Ref OrderHandler

The Canary10Percent5Minutes strategy routes 10% of traffic to the new version for 5 minutes. If error or latency alarms trigger, CodeDeploy automatically rolls back to the previous version. Pre-traffic hooks run validation against the new version before any real traffic is shifted.
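
Under the hood, CodeDeploy drives this with Lambda's weighted alias routing: the alias keeps pointing at the old version while a fractional weight sends the canary share to the new one. A sketch of that routing payload — the `AdditionalVersionWeights` shape the `UpdateAlias` API accepts; the helper name is our own:

```typescript
// Builds the routing config for a weighted Lambda alias: `percent` of
// invocations go to `newVersion`, the rest stay on the alias's main version.
function canaryRoutingConfig(newVersion: string, percent: number) {
  if (percent <= 0 || percent >= 100) {
    throw new Error('Canary weight must be strictly between 0 and 100');
  }
  return { AdditionalVersionWeights: { [newVersion]: percent / 100 } };
}
```

Passing this to `UpdateAlias` (or `aws lambda update-alias --routing-config`) reproduces the 10% split by hand, which can be useful when debugging a stuck deployment.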

Rollback & Disaster Recovery

Instant rollback is critical for serverless applications where a broken function can affect all users immediately.

# Manual rollback — shift alias back to previous version
aws lambda update-alias \
  --function-name order-handler \
  --name live \
  --function-version 42  # Previous known-good version

#!/bin/bash
# Automated rollback script
set -euo pipefail
FUNCTION=${1:?Usage: rollback.sh <function-name>}
ALIAS="live"

# Get the version the alias currently points to
CURRENT=$(aws lambda get-alias \
  --function-name "$FUNCTION" --name "$ALIAS" \
  --query 'FunctionVersion' --output text)

# Assumes version numbers are contiguous; adjust if old versions are pruned
PREVIOUS=$((CURRENT - 1))

echo "Rolling back $FUNCTION from v$CURRENT to v$PREVIOUS"
aws lambda update-alias \
  --function-name "$FUNCTION" \
  --name "$ALIAS" \
  --function-version "$PREVIOUS"

echo "Rollback complete. Verifying..."
aws lambda invoke \
  --function-name "$FUNCTION:$ALIAS" \
  --cli-binary-format raw-in-base64-out \
  --payload '{"healthcheck": true}' \
  /dev/stdout

Rollback is instantaneous because Lambda retains previous function versions. We keep the last 10 versions per function and document the rollback procedure in your runbook. For infrastructure-level rollbacks (DynamoDB table changes, API Gateway route modifications), check out the previous known-good commit and run terraform apply; Terraform's state comparison then reverts the infrastructure changes in a single plan.
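
One caveat on the script above: `CURRENT - 1` assumes contiguous version numbers, which stops holding once old versions are pruned. A more defensive sketch — `previousVersion` is our own helper, fed from `ListVersionsByFunction` output — picks the newest published version strictly below the current one:

```typescript
// Given the version numbers from ListVersionsByFunction and the alias's
// current target, returns the most recent earlier version, or null if
// none exists. We never roll an alias back to "$LATEST".
function previousVersion(versions: string[], current: string): string | null {
  const candidates = versions
    .filter((v) => v !== '$LATEST')
    .map(Number)
    .filter((n) => n < Number(current))
    .sort((a, b) => b - a);
  return candidates.length > 0 ? String(candidates[0]) : null;
}
```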

Why Anubiz Engineering

100% async — no calls, no meetings
Delivered in days, not weeks
Full documentation included
Production-grade from day one
Security-first approach
Post-delivery support included

Ready to get started?

Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.