Serverless & Edge Computing

Serverless Microservices Architecture

Microservices on serverless infrastructure give you independent scaling, per-function billing, and zero idle cost, but only if service boundaries and communication patterns are designed correctly. We architect and implement serverless microservices using EventBridge for inter-service communication, DynamoDB for per-service data stores, and an independent CI/CD pipeline per service, so your team can deploy ten times a day without coordination overhead.

Need this done for your project?

We implement, you ship. Async, documented, done in days.

Start a Brief

Service Decomposition Strategy

We decompose your application into services aligned with business capabilities, not technical layers. Each service owns its data, exposes events, and can be deployed independently.

# Service catalog — typical SaaS decomposition
services/
  identity/           # Auth, user profiles, API keys
    functions/
      signup.ts
      login.ts
      verify-email.ts
    events:
      - USER_CREATED
      - USER_VERIFIED
    data: DynamoDB (users table)

  billing/            # Subscriptions, invoices, payments
    functions/
      create-subscription.ts
      process-webhook.ts
      generate-invoice.ts
    events:
      - SUBSCRIPTION_CREATED
      - PAYMENT_RECEIVED
      - PAYMENT_FAILED
    data: DynamoDB (subscriptions table)

  notifications/      # Email, push, in-app
    functions/
      send-email.ts
      send-push.ts
    consumes:
      - USER_CREATED → welcome email
      - PAYMENT_FAILED → retry notification
    data: SQS queues (buffered delivery)

Each service has a clear interface contract: the events it publishes and the API routes it owns. We document these contracts in an AsyncAPI specification that serves as the source of truth for inter-service communication.
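That same contract can be mirrored in code. As an illustration (event names taken from the catalog above; field names are hypothetical), a shared TypeScript module can type each event's payload so producers and consumers agree at compile time:

```typescript
// Hypothetical shared event-contract types mirroring the AsyncAPI spec.
// Each event records its EventBridge source, detail-type, and a typed payload.
type UserCreated = {
  source: "identity";
  detailType: "USER_CREATED";
  detail: { userId: string; email: string };
};

type PaymentFailed = {
  source: "billing";
  detailType: "PAYMENT_FAILED";
  detail: { subscriptionId: string; amountCents: number };
};

type DomainEvent = UserCreated | PaymentFailed;

// Narrow an event by its detail-type so consumers get a typed payload.
function isPaymentFailed(e: DomainEvent): e is PaymentFailed {
  return e.detailType === "PAYMENT_FAILED";
}
```

Generating these types from the AsyncAPI document (rather than maintaining them by hand) keeps the spec and the code from drifting apart.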

EventBridge Communication Layer

Services communicate asynchronously and exclusively through Amazon EventBridge. Direct Lambda-to-Lambda invocation is forbidden: it creates tight coupling and cascading failures.
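On the publisher side, a small helper can assemble the entry shape that the EventBridge PutEvents API expects. This is a sketch (helper and field values hypothetical) that only builds the entry; the actual send would go through the AWS SDK's EventBridge client:

```typescript
// Sketch: build one PutEvents entry for the central bus.
// EventBridge expects Detail as a JSON-encoded string.
interface PutEventsEntry {
  EventBusName: string;
  Source: string;
  DetailType: string;
  Detail: string;
}

function buildEntry(
  busName: string,
  source: string,
  detailType: string,
  detail: Record<string, unknown>,
): PutEventsEntry {
  return {
    EventBusName: busName,
    Source: source,
    DetailType: detailType,
    Detail: JSON.stringify(detail),
  };
}
```

Keeping entry construction in one place means the `source` and `detail-type` strings the Terraform rules match on come from a single definition rather than being retyped in every function.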

resource "aws_cloudwatch_event_bus" "main" {
  name = "${var.project}-${var.env}"
}

# Billing service publishes PAYMENT_RECEIVED
resource "aws_cloudwatch_event_rule" "payment_received" {
  name           = "payment-received"
  event_bus_name = aws_cloudwatch_event_bus.main.name
  event_pattern = jsonencode({
    source      = ["billing"]
    detail-type = ["PAYMENT_RECEIVED"]
  })
}

# Notifications service consumes it
resource "aws_cloudwatch_event_target" "send_receipt" {
  rule           = aws_cloudwatch_event_rule.payment_received.name
  event_bus_name = aws_cloudwatch_event_bus.main.name
  arn            = aws_lambda_function.send_receipt_email.arn

  dead_letter_config {
    arn = aws_sqs_queue.dlq.arn
  }

  retry_policy {
    maximum_retry_attempts       = 3
    maximum_event_age_in_seconds = 3600
  }
}

# EventBridge needs explicit permission to invoke the Lambda target
resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.send_receipt_email.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.payment_received.arn
}

Every event rule has a dead-letter queue. Failed events are retried with exponential backoff, and permanently failed events land in a DLQ with CloudWatch alarms. We build a central event monitoring dashboard showing event volume, failure rates, and processing latency per service.
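EventBridge handles its own retry schedule; the backoff below is for the re-drive path, where a worker replays permanently failed events out of the DLQ. A minimal sketch, assuming a 1-second base delay that doubles per attempt and caps at a configurable maximum:

```typescript
// Sketch: capped exponential backoff for re-driving DLQ messages.
// Assumptions: base delay 1s, doubling per attempt, capped at maxDelaySeconds.
function backoffSeconds(attempt: number, maxDelaySeconds = 900): number {
  const delay = Math.pow(2, attempt); // attempt 0 → 1s, 1 → 2s, 2 → 4s, ...
  return Math.min(delay, maxDelaySeconds);
}
```

In practice you would add jitter so a burst of failed events does not re-drive in lockstep.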

Per-Service Data Isolation

Each microservice owns its DynamoDB table, modeled with single-table design patterns. No service queries another service's table; cross-service data access happens through published events or the owning service's public API.

// Single-table design for billing service
// Access patterns:
// 1. Get subscription by ID → PK=SUB#id, SK=META
// 2. List subscriptions by customer → PK=CUST#id, SK=SUB#created
// 3. Get invoice by ID → PK=INV#id, SK=META
// 4. List invoices by subscription → GSI1PK=SUB#id, GSI1SK=INV#date

const items = {
  subscription: {
    PK: `SUB#${subId}`,
    SK: 'META',
    GSI1PK: `CUST#${customerId}`,
    GSI1SK: `SUB#${createdAt}`,
    plan: 'pro',
    status: 'active',
    mrr: 4900,
  },
  invoice: {
    PK: `INV#${invoiceId}`,
    SK: 'META',
    GSI1PK: `SUB#${subId}`,
    GSI1SK: `INV#${invoiceDate}`,
    amount: 4900,
    status: 'paid',
    paidAt: '2025-01-15T10:30:00Z',
  },
};
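To keep those key formats consistent across every function in the service, we centralize key construction in small helpers. A sketch following the PK/SK scheme shown above (helper names are ours, not a library API):

```typescript
// Hypothetical key-builder helpers for the billing table's access patterns.
// One definition of each key format, reused by every Lambda in the service.
const keys = {
  // Pattern 1: get subscription by ID
  subscription: (subId: string) => ({ PK: `SUB#${subId}`, SK: "META" }),
  // Pattern 3: get invoice by ID
  invoice: (invoiceId: string) => ({ PK: `INV#${invoiceId}`, SK: "META" }),
  // Pattern 4: list invoices by subscription via GSI1, sorted by date
  invoicesBySubscription: (subId: string) => ({ GSI1PK: `SUB#${subId}` }),
};
```

If a key format ever changes, it changes in exactly one place, and a typo in a prefix can no longer silently create unreachable items.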

We document every access pattern before writing a single line of code. The table design, GSI projections, and capacity settings are all derived from the access pattern matrix. Point-in-time recovery and DynamoDB Streams are enabled on every table for audit trails and event sourcing.

Independent Deployment Pipelines

Each service has its own CI/CD pipeline. A change to the billing service deploys only billing functions — the identity and notifications services are untouched.

# .github/workflows/billing-service.yml
name: Deploy Billing Service
on:
  push:
    branches: [main]
    paths:
      - 'services/billing/**'
      - 'shared/lib/**'

jobs:
  deploy:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: services/billing
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npm test
      - run: npm run build
      - uses: hashicorp/setup-terraform@v3
      - run: |
          cd infrastructure
          terraform init -backend-config=envs/prod.hcl
          terraform apply -auto-approve -var-file=envs/prod.tfvars
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_KEY }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET }}

Path-based triggers ensure only the affected service pipeline runs. Shared libraries trigger all dependent service pipelines. We configure Terraform state per-service in separate S3 prefixes so state locking never blocks parallel deployments. Average deployment time: under 90 seconds per service.
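The path-filter logic the workflows rely on can be expressed as a pure function, which is also useful for local tooling that answers "what will this commit deploy?". A sketch, assuming the repository layout shown in the service catalog (the service list is illustrative):

```typescript
// Sketch: map a commit's changed file paths to the service pipelines it
// should trigger, mirroring the workflow's `paths:` filters. A change under
// shared/lib/ fans out to every service.
const ALL_SERVICES = ["identity", "billing", "notifications"];

function affectedServices(changedPaths: string[]): string[] {
  const hit = new Set<string>();
  for (const p of changedPaths) {
    if (p.startsWith("shared/lib/")) return [...ALL_SERVICES];
    const m = p.match(/^services\/([^/]+)\//);
    if (m) hit.add(m[1]);
  }
  return [...hit];
}
```

Running this in a pre-push hook gives developers the same answer the CI triggers will compute, before the commit ever reaches GitHub.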

Why Anubiz Engineering

100% async — no calls, no meetings
Delivered in days, not weeks
Full documentation included
Production-grade from day one
Security-first approach
Post-delivery support included

Ready to get started?

Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.