Serverless & Edge Computing

Lambda Layers & Shared Dependencies

Lambda Layers let you share libraries, custom runtimes, and utility code across multiple functions without duplicating them in every deployment package. But managing Layers in production requires versioning strategies, automated publishing pipelines, cross-account sharing policies, and size optimization to stay under the 250 MB unzipped limit. We set up a complete Layer management system that keeps your functions lean and your shared code consistent.

Need this done for your project?

We implement, you ship. Async, documented, done in days.

Start a Brief

Layer Architecture & Design

We organize your Layers by purpose: shared dependencies, common utilities, and custom runtimes. Each Layer has its own versioning lifecycle independent of your function deployments.

# Layer structure
layers/
  common-deps/
    nodejs/
      package.json      # Shared npm packages
      package-lock.json
    build.sh            # Build + zip script
  
  shared-utils/
    nodejs/
      node_modules/
        @myapp/
          logger.js     # Shared logging utility
          auth.js       # JWT verification
          db.js         # DynamoDB client wrapper
          errors.js     # Standard error classes
    build.sh
  
  ffmpeg-runtime/
    bin/
      ffmpeg            # Binary for video processing
    build.sh

# Terraform Layer resources
resource "aws_lambda_layer_version" "common_deps" {
  layer_name          = "${var.project}-common-deps"
  filename            = data.archive_file.common_deps.output_path
  source_code_hash    = data.archive_file.common_deps.output_base64sha256
  compatible_runtimes = ["nodejs20.x"]
  compatible_architectures = ["arm64", "x86_64"]
  description         = "Shared npm dependencies v${var.layer_version}"
}

resource "aws_lambda_layer_version" "shared_utils" {
  layer_name          = "${var.project}-shared-utils"
  filename            = data.archive_file.shared_utils.output_path
  source_code_hash    = data.archive_file.shared_utils.output_base64sha256
  compatible_runtimes = ["nodejs20.x"]
}

# Attach Layers to functions
resource "aws_lambda_function" "api_handler" {
  layers = [
    aws_lambda_layer_version.common_deps.arn,
    aws_lambda_layer_version.shared_utils.arn,
  ]
  # Function code only contains handler logic — under 500KB
}

With Layers, your function deployment package drops from 15 MB to under 500 KB. Deployments are 10x faster because you only upload the changed handler code, not the entire dependency tree.
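At runtime, Lambda extracts every attached Layer into `/opt`, and for Node.js runtimes `/opt/nodejs/node_modules` is already on the module resolution path, so Layer code resolves with a plain `require()`. A minimal sketch of that path mapping (the `@myapp/*` module names come from the tree above and are illustrative):

```javascript
// A layer zip entry like "nodejs/node_modules/@myapp/logger.js" lands at
// "/opt/nodejs/node_modules/@myapp/logger.js" in the execution environment.
function layerRuntimePath(zipEntry) {
  return `/opt/${zipEntry}`;
}

// Inside the handler, shared utilities from the layer then resolve like any
// installed package (hypothetical module names):
//   const { log } = require('@myapp/logger');
//   const { verifyJwt } = require('@myapp/auth');
```

This is why the Layer zip must nest everything under `nodejs/` — code zipped at the top level would land directly in `/opt` and fall outside the Node.js module path.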

Automated Layer Publishing Pipeline

We build a CI/CD pipeline that automatically publishes new Layer versions when dependencies change and updates all consuming functions.

name: Publish Lambda Layers
on:
  push:
    branches: [main]
    paths:
      - 'layers/**'
      - 'shared-utils/**'

jobs:
  publish-layers:
    runs-on: ubuntu-latest
    outputs:
      common-deps-arn: ${{ steps.publish.outputs.common-deps-arn }}
      shared-utils-arn: ${{ steps.publish.outputs.shared-utils-arn }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }

      - name: Build common-deps layer
        run: |
          cd layers/common-deps/nodejs
          npm ci --omit=dev --cpu=arm64 --os=linux
          cd ..
          zip -r common-deps.zip nodejs/

      - name: Build shared-utils layer
        run: |
          cd layers/shared-utils
          npm run build  # Compile TypeScript
          zip -r shared-utils.zip nodejs/

      - name: Publish layers
        id: publish
        run: |
          DEPS_ARN=$(aws lambda publish-layer-version \
            --layer-name myapp-common-deps \
            --zip-file fileb://layers/common-deps/common-deps.zip \
            --compatible-runtimes nodejs20.x \
            --compatible-architectures arm64 \
            --query LayerVersionArn --output text)
          echo "common-deps-arn=$DEPS_ARN" >> "$GITHUB_OUTPUT"

          UTILS_ARN=$(aws lambda publish-layer-version \
            --layer-name myapp-shared-utils \
            --zip-file fileb://layers/shared-utils/shared-utils.zip \
            --compatible-runtimes nodejs20.x \
            --query LayerVersionArn --output text)
          echo "shared-utils-arn=$UTILS_ARN" >> "$GITHUB_OUTPUT"

  update-functions:
    needs: publish-layers
    runs-on: ubuntu-latest
    steps:
      - name: Update all functions to new layer version
        run: |
          FUNCTIONS=$(aws lambda list-functions \
            --query 'Functions[?Layers[?contains(Arn, `myapp-common-deps`)]].FunctionName' \
            --output text)
          for fn in $FUNCTIONS; do
            aws lambda update-function-configuration \
              --function-name $fn \
              --layers ${{ needs.publish-layers.outputs.common-deps-arn }}
          done

The pipeline only triggers when Layer source files change. After publishing, it automatically updates all functions that use the Layer to the new version. We keep the previous 5 Layer versions for rollback capability.
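The retention rule above can be sketched as a small helper: given the version numbers returned by `ListLayerVersions`, keep the newest five and return the rest for deletion (the input shape and `keep` default are assumptions for illustration):

```javascript
// Return the layer versions to delete, keeping the newest `keep` versions.
// Layer version numbers are monotonically increasing integers, so sorting
// descending puts the newest first.
function versionsToPrune(versionNumbers, keep = 5) {
  return [...versionNumbers]
    .sort((a, b) => b - a) // newest first
    .slice(keep);          // everything past the first `keep` is prunable
}
```

Each returned version would then be removed with `aws lambda delete-layer-version --layer-name ... --version-number N`. Note that deleting a version does not detach it from functions already configured with it; those keep working until redeployed.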

Size Optimization Techniques

Lambda enforces a 250 MB unzipped size limit on the function package and all attached Layers combined. We apply aggressive optimization to maximize what fits.

# Size audit script
#!/bin/bash
echo "Layer size analysis:"
for layer in layers/*/; do
  UNZIPPED=$(du -sh "$layer" | cut -f1)
  ZIPPED=$(stat -c%s "$layer"/*.zip 2>/dev/null | numfmt --to=iec)
  echo "  $layer: $UNZIPPED unzipped, $ZIPPED zipped"
done

# Common optimizations:

# 1. Remove TypeScript source maps and declarations
find nodejs/node_modules -name '*.d.ts' -delete
find nodejs/node_modules -name '*.map' -delete
find nodejs/node_modules -name '*.ts' ! -name '*.d.ts' -delete

# 2. Remove documentation and tests from node_modules
find nodejs/node_modules -type d -name 'test' -prune -exec rm -rf {} +
find nodejs/node_modules -type d -name 'docs' -prune -exec rm -rf {} +
find nodejs/node_modules -name 'README*' -delete
find nodejs/node_modules -name 'CHANGELOG*' -delete
find nodejs/node_modules -name 'LICENSE*' -delete

# 3. Install production dependencies only, skipping lifecycle scripts
npm ci --omit=dev --ignore-scripts

# 4. For AWS SDK v3 — only install needed clients
# Instead of: @aws-sdk/client-* (all clients = 100MB+)
# Install: @aws-sdk/client-dynamodb @aws-sdk/client-s3 (5MB each)

# Before optimization: 180 MB
# After optimization:   45 MB

We automate these optimizations in the Layer build script. A typical shared dependencies Layer goes from 180 MB to under 50 MB after removing type definitions, documentation, tests, and unused SDK clients. For binary Layers (FFmpeg, ImageMagick), we use statically compiled binaries stripped of debug symbols.
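A pre-publish guard in the build script keeps this budget honest: sum the unzipped sizes of the function package and every Layer it will attach, and fail the build before the 250 MB combined limit is hit at deploy time. A minimal sketch (the function name and return shape are our own):

```javascript
// Combined unzipped limit for a function package plus all attached layers.
const LAYER_LIMIT_BYTES = 250 * 1024 * 1024;

// Given unzipped sizes in bytes, report the total and whether it fits.
function withinLayerLimit(unzippedSizesBytes) {
  const total = unzippedSizesBytes.reduce((sum, n) => sum + n, 0);
  return { total, ok: total <= LAYER_LIMIT_BYTES };
}
```

Running this in CI against the `du` output from the size audit script turns a confusing deploy-time `CodeStorageExceededException`-style failure into an immediate, attributable build error.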

Cross-Account & Cross-Region Sharing

For organizations with multiple AWS accounts, we configure Layer sharing policies that allow specific accounts to use your Layers.

# Share Layer with specific accounts
resource "aws_lambda_layer_version_permission" "cross_account" {
  for_each = toset(var.shared_account_ids)
  
  layer_name     = aws_lambda_layer_version.shared_utils.layer_name
  version_number = aws_lambda_layer_version.shared_utils.version
  principal      = each.value
  action         = "lambda:GetLayerVersion"
  statement_id   = "share-with-${each.value}"
}

# For public Layers (open source tools)
resource "aws_lambda_layer_version_permission" "public" {
  layer_name     = aws_lambda_layer_version.ffmpeg.layer_name
  version_number = aws_lambda_layer_version.ffmpeg.version
  principal      = "*"
  action         = "lambda:GetLayerVersion"
  statement_id   = "public-access"
}

# Cross-region replication
resource "null_resource" "replicate_layer" {
  for_each = toset(var.additional_regions)

  provisioner "local-exec" {
    command = <<-EOT
      # Download layer from primary region
      aws lambda get-layer-version-by-arn \
        --arn ${aws_lambda_layer_version.common_deps.arn} \
        --query 'Content.Location' --output text | \
        xargs curl -o /tmp/layer.zip
      
      # Publish to additional region
      aws lambda publish-layer-version \
        --layer-name ${var.project}-common-deps \
        --zip-file fileb:///tmp/layer.zip \
        --compatible-runtimes nodejs20.x \
        --region ${each.value}
    EOT
  }
}

Cross-account sharing uses resource-based policies so consuming accounts reference the Layer ARN directly. For multi-region deployments, we replicate Layers to all regions where your functions run. The replication pipeline runs automatically when a new Layer version is published in the primary region.
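Because consuming accounts reference the Layer by its full version ARN, it helps to construct that ARN deterministically rather than copy-pasting it. A sketch of the ARN format (the argument values below are placeholders, not real accounts):

```javascript
// Lambda layer version ARNs follow a fixed format:
//   arn:aws:lambda:<region>:<account-id>:layer:<layer-name>:<version>
function layerVersionArn(region, accountId, layerName, version) {
  return `arn:aws:lambda:${region}:${accountId}:layer:${layerName}:${version}`;
}
```

The region segment is why cross-region replication is needed at all: a function can only attach Layers whose ARN region matches its own, so the same Layer name must be republished per region.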

Why Anubiz Engineering

100% async — no calls, no meetings
Delivered in days, not weeks
Full documentation included
Production-grade from day one
Security-first approach
Post-delivery support included

Ready to get started?

Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.