AWS Lambda Setup
AWS Lambda lets you run code without provisioning servers, but a production-grade Lambda deployment involves far more than uploading a ZIP file. You need proper IAM execution roles, VPC configuration for private resource access, environment-based configuration management, structured logging, and deployment automation. We set up your Lambda functions with battle-tested patterns so your serverless backend is production-ready from day one.
Need this done for your project?
We implement, you ship. Async, documented, done in days.
Function Architecture & Packaging
We structure your Lambda functions using a monorepo-friendly layout with shared layers for common dependencies. Each function gets its own deployment package with tree-shaken dependencies to minimize cold start times.
A typical project structure looks like this:
infrastructure/
  terraform/
    modules/
      lambda/
        main.tf          # Function resource + IAM
        variables.tf     # Runtime, memory, timeout
        api-gateway.tf   # HTTP trigger config
        outputs.tf
functions/
  src/
    handlers/
      create-order.ts
      process-payment.ts
    shared/
      db-client.ts
      validator.ts
  layers/
    common-deps/
      nodejs/
        package.json     # Shared node_modules layer
We use esbuild or webpack to bundle each handler into a single-file artifact under 5 MB, keeping cold starts under 200ms for Node.js runtimes. Lambda Layers hold shared dependencies like AWS SDK extensions, database drivers, or utility libraries that change less frequently than your application code.
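As a minimal sketch of how a bundled artifact and a shared dependency layer wire together in Terraform — the function name, artifact paths, and memory/timeout values here are illustrative assumptions, not values from this article:

```hcl
# Shared dependencies published as a layer; the zip must contain a
# top-level nodejs/ directory (path is an assumed build output).
resource "aws_lambda_layer_version" "common_deps" {
  layer_name          = "common-deps-${var.env}"
  filename            = "${path.module}/../../functions/layers/common-deps.zip"
  compatible_runtimes = ["nodejs20.x"]
}

# One function per bundled handler artifact.
resource "aws_lambda_function" "create_order" {
  function_name    = "create-order-${var.env}"
  filename         = "${path.module}/../../dist/handlers/create-order.zip"
  source_code_hash = filebase64sha256("${path.module}/../../dist/handlers/create-order.zip")
  handler          = "create-order.handler"
  runtime          = "nodejs20.x"
  memory_size      = 256
  timeout          = 10
  role             = aws_iam_role.order_handler.arn # role from the IAM section below
  layers           = [aws_lambda_layer_version.common_deps.arn]
}
```

Because `source_code_hash` is derived from the zip, Terraform redeploys the function only when the bundled artifact actually changes.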
IAM & Security Configuration
Every Lambda function gets a dedicated IAM execution role scoped to the exact resources it needs. We never use wildcard permissions or share roles across functions with different access patterns.
resource "aws_iam_role" "order_handler" {
  name               = "lambda-order-handler-${var.env}"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume.json
}

resource "aws_iam_role_policy" "order_handler" {
  role = aws_iam_role.order_handler.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["dynamodb:PutItem", "dynamodb:GetItem"]
        Resource = aws_dynamodb_table.orders.arn
      },
      {
        Effect   = "Allow"
        Action   = ["sqs:SendMessage"]
        Resource = aws_sqs_queue.payment_queue.arn
      }
    ]
  })
}

If your functions need access to RDS or ElastiCache inside a VPC, we configure VPC-attached Lambdas with private subnet placement, security groups, and a NAT Gateway for outbound internet access. We also set up AWS Secrets Manager integration for database credentials with automatic rotation.
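The `data.aws_iam_policy_document.lambda_assume` that the role above references is the standard Lambda trust policy; a minimal sketch (only the data source name comes from the snippet above):

```hcl
# Trust policy letting the Lambda service assume the execution role.
data "aws_iam_policy_document" "lambda_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}
```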
- Per-function execution roles with least-privilege policies
- VPC configuration for private resource access
- Secrets Manager integration for credentials
- Environment variable encryption with KMS customer-managed keys
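The VPC and secrets wiring above can be sketched in Terraform like this — the function, subnet, security group, and secret names are illustrative assumptions:

```hcl
resource "aws_lambda_function" "report_generator" {
  function_name = "report-generator-${var.env}"
  filename      = "${path.module}/../../dist/handlers/report-generator.zip"
  handler       = "report-generator.handler"
  runtime       = "nodejs20.x"
  role          = aws_iam_role.report_generator.arn

  # Attach the function to private subnets; outbound internet traffic
  # leaves through the NAT Gateway those subnets route to.
  vpc_config {
    subnet_ids         = aws_subnet.private[*].id
    security_group_ids = [aws_security_group.lambda.id]
  }

  # Pass the secret's ARN, never the credentials themselves; the handler
  # fetches the current value from Secrets Manager at runtime, so
  # rotation needs no redeploy.
  environment {
    variables = {
      DB_SECRET_ARN = aws_secretsmanager_secret.db_credentials.arn
    }
  }
}
```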
API Gateway Integration
We wire your Lambda functions to API Gateway (HTTP API or REST API depending on your requirements) with proper route configuration, request validation, CORS headers, and custom domain mapping.
resource "aws_apigatewayv2_api" "main" {
  name          = "${var.project}-api-${var.env}"
  protocol_type = "HTTP"

  cors_configuration {
    allow_origins = var.allowed_origins
    allow_methods = ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
    allow_headers = ["Content-Type", "Authorization"]
    max_age       = 3600
  }
}

resource "aws_apigatewayv2_stage" "main" {
  api_id      = aws_apigatewayv2_api.main.id
  name        = var.env
  auto_deploy = true

  access_log_settings {
    destination_arn = aws_cloudwatch_log_group.api.arn
    format = jsonencode({
      requestId      = "$context.requestId"
      ip             = "$context.identity.sourceIp"
      method         = "$context.httpMethod"
      path           = "$context.path"
      status         = "$context.status"
      latency        = "$context.responseLatency"
      integrationErr = "$context.integrationErrorMessage"
    })
  }
}

Rate limiting, throttling, and usage plans are configured per-route. We set up custom domain names with ACM certificates and Route 53 alias records so your API lives at api.yourdomain.com instead of an auto-generated AWS URL.
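A sketch of the remaining wiring — the Lambda integration, a route, the invoke permission, and the custom domain. The API and stage names match the snippet above; the function name, certificate, `var.root_domain`, and `var.hosted_zone_id` are illustrative assumptions:

```hcl
# Proxy the route straight to the Lambda function.
resource "aws_apigatewayv2_integration" "create_order" {
  api_id                 = aws_apigatewayv2_api.main.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.create_order.invoke_arn
  payload_format_version = "2.0"
}

resource "aws_apigatewayv2_route" "create_order" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "POST /orders"
  target    = "integrations/${aws_apigatewayv2_integration.create_order.id}"
}

# Without this resource-based permission, API Gateway gets 403s
# from Lambda even though the route is configured.
resource "aws_lambda_permission" "apigw" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.create_order.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.main.execution_arn}/*/*"
}

# Custom domain with an ACM certificate, mapped to the stage.
resource "aws_apigatewayv2_domain_name" "api" {
  domain_name = "api.${var.root_domain}"

  domain_name_configuration {
    certificate_arn = aws_acm_certificate.api.arn
    endpoint_type   = "REGIONAL"
    security_policy = "TLS_1_2"
  }
}

resource "aws_apigatewayv2_api_mapping" "api" {
  api_id      = aws_apigatewayv2_api.main.id
  domain_name = aws_apigatewayv2_domain_name.api.id
  stage       = aws_apigatewayv2_stage.main.id
}

# Route 53 alias pointing the custom domain at the API Gateway endpoint.
resource "aws_route53_record" "api" {
  zone_id = var.hosted_zone_id
  name    = aws_apigatewayv2_domain_name.api.domain_name
  type    = "A"

  alias {
    name                   = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].target_domain_name
    zone_id                = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].hosted_zone_id
    evaluate_target_health = false
  }
}
```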
Observability & Deployment Pipeline
Every function ships with structured JSON logging via a lightweight middleware layer, X-Ray tracing for distributed request tracking, and CloudWatch alarms for error rates and duration spikes.
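The per-function tracing and log-retention additions can be sketched as follows; the attribute names are Terraform's real ones, while the resource names are illustrative and the function arguments are abbreviated:

```hcl
resource "aws_lambda_function" "order_handler" {
  # ... function_name, filename, handler, runtime, role as shown earlier ...

  # Send request segments to X-Ray for distributed tracing.
  tracing_config {
    mode = "Active"
  }
}

# Create the log group explicitly so retention is pinned instead of
# defaulting to never-expire.
resource "aws_cloudwatch_log_group" "order_handler" {
  name              = "/aws/lambda/order-handler-${var.env}"
  retention_in_days = 30
}
```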
# GitHub Actions deployment example
deploy-lambda:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 20
    - run: npm ci && npm run build
    - run: |
        cd dist/handlers
        for handler in */; do
          cd "$handler"
          zip -r "../../${handler%/}.zip" .
          cd ..
        done
    - uses: hashicorp/setup-terraform@v3
    - run: terraform init && terraform apply -auto-approve
      working-directory: infrastructure/terraform
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_KEY }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET }}

We configure CloudWatch dashboards showing invocation counts, error rates, duration percentiles (p50/p95/p99), and concurrent executions. Alarms trigger SNS notifications when error rates exceed thresholds. The full deployment pipeline runs in under 3 minutes for most projects.
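An error-rate alarm wired to SNS might look like this in Terraform — the threshold, period, and names are illustrative assumptions, tuned per project:

```hcl
resource "aws_sns_topic" "alerts" {
  name = "lambda-alerts-${var.env}"
}

# Fire when a function logs more than 5 errors per minute for
# 5 consecutive minutes.
resource "aws_cloudwatch_metric_alarm" "order_handler_errors" {
  alarm_name          = "order-handler-errors-${var.env}"
  namespace           = "AWS/Lambda"
  metric_name         = "Errors"
  dimensions          = { FunctionName = "create-order-${var.env}" }
  statistic           = "Sum"
  period              = 60
  evaluation_periods  = 5
  threshold           = 5
  comparison_operator = "GreaterThanThreshold"
  treat_missing_data  = "notBreaching" # no invocations is not an outage
  alarm_actions       = [aws_sns_topic.alerts.arn]
}
```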
Ready to get started?
Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.