AWS Lambda with Docker Containers
AWS Lambda container image support lets you package functions as Docker images up to 10 GB, enabling use cases that are impractical with ZIP deployments (capped at 250 MB unzipped): machine learning model inference, custom runtimes, large native dependencies, and consistent local-to-cloud development workflows. We build and deploy Lambda container images with optimized multi-stage Dockerfiles, ECR lifecycle policies, and CI/CD pipelines that push, scan, and deploy in under 5 minutes.
Multi-Stage Dockerfile for Lambda
We build Lambda container images using multi-stage Dockerfiles that minimize image size and maximize layer caching. The base image uses AWS-provided runtime images for compatibility with the Lambda execution environment.
# Dockerfile for Node.js Lambda with native dependencies
FROM public.ecr.aws/lambda/nodejs:20 AS base
# Build stage — install and compile
FROM base AS builder
WORKDIR /app
COPY package*.json ./
# Install all deps (build tooling lives in devDependencies), build, then prune
RUN npm ci
COPY src/ ./src/
RUN npm run build
RUN npm prune --omit=dev
# Production stage — minimal image
FROM base AS production
COPY --from=builder /app/node_modules ${LAMBDA_TASK_ROOT}/node_modules
COPY --from=builder /app/dist ${LAMBDA_TASK_ROOT}/
CMD ["index.handler"]
# For Python with ML dependencies
FROM public.ecr.aws/lambda/python:3.12 AS ml-lambda
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
COPY app/ ${LAMBDA_TASK_ROOT}/app/
COPY models/ ${LAMBDA_TASK_ROOT}/models/
CMD ["app.handler.handler"]

We use .dockerignore to exclude tests, documentation, and development dependencies. Layer ordering puts rarely-changing dependencies first for maximum cache reuse. A typical Node.js Lambda image is under 200 MB; a Python ML image with PyTorch is under 3 GB.
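A .dockerignore along these lines keeps tests, docs, and tooling out of the build context (entries illustrative for the layout above):

```text
# .dockerignore — keep the build context small and the cache stable
node_modules
dist
tests/
**/*.test.js
*.md
.git
.github/
```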
ECR Repository & Lifecycle Management
Container images are stored in Amazon ECR with vulnerability scanning, lifecycle policies, and cross-region replication for multi-region deployments.
resource "aws_ecr_repository" "lambda_functions" {
  for_each = toset(["order-handler", "payment-processor", "ml-inference"])

  name                 = "${var.project}/${each.key}"
  image_tag_mutability = "IMMUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  encryption_configuration {
    encryption_type = "AES256"
  }
}

resource "aws_ecr_lifecycle_policy" "cleanup" {
  for_each   = aws_ecr_repository.lambda_functions
  repository = each.value.name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Keep last 10 images"
      selection = {
        tagStatus   = "any"
        countType   = "imageCountMoreThan"
        countNumber = 10
      }
      action = { type = "expire" }
    }]
  })
}

Immutable tags prevent accidental overwrites. Lifecycle policies keep the last 10 images per repository and expire untagged images after 1 day. ECR scan results are checked in the CI pipeline — deployment fails if critical vulnerabilities are detected.
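The untagged-image expiry mentioned above is not in the policy shown; it could be added as a higher-priority rule, with the "keep last 10" rule renumbered to priority 2. A sketch:

```hcl
# Sketch: expire untagged images after 1 day (runs before the count-based rule)
{
  rulePriority = 1
  description  = "Expire untagged images after 1 day"
  selection = {
    tagStatus   = "untagged"
    countType   = "sinceImagePushed"
    countUnit   = "days"
    countNumber = 1
  }
  action = { type = "expire" }
}
```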
Lambda Function Configuration
Container-based Lambda functions require specific configuration for image URI, architecture, and ephemeral storage.
resource "aws_lambda_function" "ml_inference" {
  function_name = "ml-inference-${var.env}"
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.lambda_functions["ml-inference"].repository_url}:${var.image_tag}"
  role          = aws_iam_role.lambda_exec.arn

  architectures = ["arm64"] # Graviton — 20% cheaper, 20% faster
  memory_size   = 3008      # ML models need more memory
  timeout       = 60

  ephemeral_storage {
    size = 2048 # 2 GB /tmp for model loading
  }

  environment {
    variables = {
      MODEL_PATH  = "/var/task/models"
      LOG_LEVEL   = "INFO"
      ENVIRONMENT = var.env
    }
  }

  image_config {
    command = ["app.handler.handler"] # Override CMD
  }
}

We use ARM64 (Graviton2) architecture wherever possible — it is 20% cheaper and often 20% faster than x86. Ephemeral storage is increased to 2–10 GB for functions that need to download or extract large files. The image_config block lets us override the CMD from the Dockerfile for environment-specific entry points.
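On the handler side, the standard pattern is to load the model once per container so warm invocations reuse it. A minimal sketch, assuming the MODEL_PATH variable from the configuration above; the dictionary stands in for a real framework load such as torch.load:

```python
import json
import os

# Matches the MODEL_PATH environment variable set in the function config;
# defaults to the baked-in path inside the image
MODEL_PATH = os.environ.get("MODEL_PATH", "/var/task/models")

_model = None  # module-level cache, shared across warm invocations


def _load_model():
    """Load the model once per container; later invocations reuse it."""
    global _model
    if _model is None:
        # Placeholder for a real load, e.g. torch.load(f"{MODEL_PATH}/model.pt")
        _model = {"path": MODEL_PATH, "loaded": True}
    return _model


def handler(event, context):
    model = _load_model()
    return {
        "statusCode": 200,
        "body": json.dumps({"model_path": model["path"]}),
    }
```

Because the load happens outside the handler body, only the first (cold) invocation pays the model-loading cost.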
CI/CD Pipeline for Container Lambda
The deployment pipeline builds the Docker image, pushes to ECR, scans for vulnerabilities, and updates the Lambda function — all in under 5 minutes.
name: Deploy Container Lambda

on:
  push:
    branches: [main]
    paths: ['functions/ml-inference/**']

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE }}
          aws-region: us-east-1

      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr

      - uses: docker/setup-qemu-action@v3 # emulation for arm64 builds on x86 runners

      - name: Build and push
        env:
          REGISTRY: ${{ steps.ecr.outputs.registry }}
          TAG: ${{ github.sha }}
        run: |
          # Pull the previous image so --cache-from can reuse its layers
          docker pull "$REGISTRY/ml-inference:latest" || true
          docker build \
            --platform linux/arm64 \
            --cache-from "$REGISTRY/ml-inference:latest" \
            -t "$REGISTRY/ml-inference:$TAG" \
            -t "$REGISTRY/ml-inference:latest" \
            -f functions/ml-inference/Dockerfile .
          docker push --all-tags "$REGISTRY/ml-inference"

      - name: Check scan results
        run: |
          aws ecr wait image-scan-complete \
            --repository-name ml-inference \
            --image-id imageTag=${{ github.sha }}
          CRITICAL=$(aws ecr describe-image-scan-findings \
            --repository-name ml-inference \
            --image-id imageTag=${{ github.sha }} \
            --query 'imageScanFindings.findingSeverityCounts.CRITICAL' \
            --output text)
          if [ "$CRITICAL" != "None" ] && [ "$CRITICAL" != "0" ]; then
            echo "Critical vulnerabilities found!"
            exit 1
          fi

      - name: Update Lambda
        run: |
          aws lambda update-function-code \
            --function-name ml-inference-prod \
            --image-uri ${{ steps.ecr.outputs.registry }}/ml-inference:${{ github.sha }}

We use OIDC federation for GitHub Actions instead of long-lived AWS credentials. Docker layer caching via --cache-from keeps build times under 2 minutes. The vulnerability scan gate prevents deploying images with known critical CVEs.
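Note that update-function-code returns before the new image is actually active. If a later step depends on the deployment being live, a follow-up step can block until the update completes — a sketch using the standard CLI waiter:

```yaml
      - name: Wait for Lambda update
        run: |
          # Blocks until LastUpdateStatus is Successful (or fails the job)
          aws lambda wait function-updated \
            --function-name ml-inference-prod
```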
Why Anubiz Engineering
Ready to get started?
Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.