Serverless & Edge Computing

Google Cloud Functions Setup

Google Cloud Functions 2nd generation runs on Cloud Run under the hood, giving you longer timeouts, larger instances, traffic splitting, and concurrency control. We set up your Cloud Functions with Terraform, configure Eventarc triggers for Pub/Sub, Cloud Storage, and Firestore events, integrate Secret Manager for credentials, and build Cloud Build pipelines for automated deployment with canary rollout.

Need this done for your project?

We implement, you ship. Async, documented, done in days.

Start a Brief

Function Deployment with Terraform

We deploy Cloud Functions 2nd gen using Terraform with proper IAM bindings, VPC connector for private resource access, and environment-specific configuration.

resource "google_cloudfunctions2_function" "order_handler" {
  name     = "order-handler-${var.env}"
  location = var.region
  
  build_config {
    runtime     = "nodejs20"
    entry_point = "handleOrder"
    source {
      storage_source {
        bucket = google_storage_bucket.functions_source.name
        object = google_storage_bucket_object.order_handler.name
      }
    }
  }
  
  service_config {
    max_instance_count               = 100
    min_instance_count               = 1  # Prevent cold starts
    available_memory                 = "512Mi"
    timeout_seconds                  = 60
    max_instance_request_concurrency = 10
    
    environment_variables = {
      PROJECT_ID  = var.project_id
      ENVIRONMENT = var.env
    }
    
    secret_environment_variables {
      key        = "DATABASE_URL"
      project_id = var.project_id
      secret     = google_secret_manager_secret.db_url.secret_id
      version    = "latest"
    }
    
    vpc_connector = google_vpc_access_connector.main.id
    vpc_connector_egress_settings = "PRIVATE_RANGES_ONLY"
    
    service_account_email = google_service_account.function_sa.email
  }
}

2nd gen functions handle multiple concurrent requests per instance (up to 1,000, matching Cloud Run; we configure 10 above), reducing the number of instances needed and cutting costs. We set min_instance_count = 1 for latency-sensitive functions so a warm instance is always available, avoiding cold starts under steady traffic.
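Concurrency changes how you write the handler: with max_instance_request_concurrency = 10, module-scope state is shared by requests running at the same time. A minimal sketch of the handleOrder entry point referenced in the Terraform above — the validation logic and in-memory idempotency cache are illustrative assumptions, not part of the config:

```javascript
// Sketch of the handleOrder entry point from the Terraform config above.
// With max_instance_request_concurrency = 10, one instance serves up to
// ten requests at once, so anything at module scope (clients, pools,
// caches) is shared across concurrent requests and must be safe to reuse.

const ordersSeen = new Map(); // instance-local cache, shared by concurrent requests

function validateOrder(body) {
  // Reject malformed payloads before doing any work.
  if (!body || typeof body.orderId !== 'string' || !Array.isArray(body.items)) {
    return { ok: false, error: 'orderId and items are required' };
  }
  return { ok: true };
}

// Express-style (req, res) signature used by Node.js HTTP Cloud Functions.
function handleOrder(req, res) {
  const check = validateOrder(req.body);
  if (!check.ok) {
    res.status(400).json({ error: check.error });
    return;
  }
  // Idempotency guard: HTTP retries can redeliver the same order.
  if (ordersSeen.has(req.body.orderId)) {
    res.status(200).json({ status: 'duplicate' });
    return;
  }
  ordersSeen.set(req.body.orderId, Date.now());
  res.status(202).json({ status: 'accepted', orderId: req.body.orderId });
}

module.exports = { handleOrder, validateOrder };
```

Note the cache is per-instance only; for real idempotency across instances you would back it with Firestore or Redis.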

Eventarc Triggers

Eventarc connects Cloud Functions to over 90 Google Cloud event sources. We configure triggers for Pub/Sub, Cloud Storage, Firestore, and custom events.

# Pub/Sub trigger — process incoming messages
resource "google_cloudfunctions2_function" "process_order" {
  name     = "process-order-${var.env}"
  location = var.region
  
  build_config {
    runtime     = "nodejs20"
    entry_point = "processOrder"
    source {
      storage_source {
        bucket = google_storage_bucket.functions_source.name
        object = google_storage_bucket_object.process_order.name
      }
    }
  }
  
  service_config {
    available_memory = "256Mi"
    timeout_seconds  = 120
    service_account_email = google_service_account.function_sa.email
  }
  
  event_trigger {
    trigger_region = var.region
    event_type     = "google.cloud.pubsub.topic.v1.messagePublished"
    pubsub_topic   = google_pubsub_topic.orders.id
    retry_policy   = "RETRY_POLICY_RETRY"
  }
}
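On the code side, Eventarc delivers the Pub/Sub message as a CloudEvent whose payload is base64-encoded. A minimal sketch of the processOrder handler named above — the decode helper and the transient-vs-permanent error split are illustrative assumptions:

```javascript
// Sketch of the processOrder handler for the Pub/Sub trigger above.
// Eventarc wraps the message in a CloudEvent; the original Pub/Sub
// payload sits base64-encoded at cloudEvent.data.message.data.

function decodePubSubMessage(cloudEvent) {
  const message = cloudEvent.data && cloudEvent.data.message;
  if (!message || !message.data) {
    throw new Error('missing Pub/Sub message payload');
  }
  return JSON.parse(Buffer.from(message.data, 'base64').toString('utf8'));
}

async function processOrder(cloudEvent) {
  const order = decodePubSubMessage(cloudEvent);
  // With retry_policy = "RETRY_POLICY_RETRY", throwing nacks the event
  // and Eventarc redelivers it — so only throw on transient errors.
  if (!order.orderId) {
    // Permanent failure: log and return so the bad message is acked
    // rather than retried forever (pair this with a dead-letter topic).
    console.error('dropping malformed order', order);
    return;
  }
  console.log(`processing order ${order.orderId}`);
}

module.exports = { processOrder, decodePubSubMessage };
```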

# Cloud Storage trigger — process uploaded files
resource "google_cloudfunctions2_function" "process_upload" {
  # ... build_config omitted for brevity
  
  event_trigger {
    trigger_region        = var.region
    event_type            = "google.cloud.storage.object.v1.finalized"
    retry_policy          = "RETRY_POLICY_RETRY"
    service_account_email = google_service_account.trigger_sa.email
    
    event_filters {
      attribute = "bucket"
      value     = google_storage_bucket.uploads.name
    }
  }
}

Each trigger type has specific IAM requirements. We create dedicated service accounts for triggers with minimal permissions. Retry policies are configured per-trigger with dead-letter topics for events that fail after all retries.
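For the storage trigger, the CloudEvent payload carries the object metadata directly rather than a wrapped message. A minimal sketch of a processUpload handler matching the trigger above — the helper name and field handling are illustrative assumptions:

```javascript
// Sketch of a processUpload handler for the Cloud Storage trigger above.
// For google.cloud.storage.object.v1.finalized events, cloudEvent.data
// contains the object metadata (bucket, name, contentType, size, ...).

function describeUpload(cloudEvent) {
  const { bucket, name, contentType } = cloudEvent.data || {};
  if (!bucket || !name) {
    throw new Error('event is missing bucket/name object metadata');
  }
  return {
    path: `gs://${bucket}/${name}`,
    contentType: contentType || 'application/octet-stream',
  };
}

async function processUpload(cloudEvent) {
  const upload = describeUpload(cloudEvent);
  console.log(`processing ${upload.path} (${upload.contentType})`);
}

module.exports = { processUpload, describeUpload };
```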

IAM & Secret Manager Integration

We create dedicated service accounts for each function with least-privilege IAM bindings. Secrets are managed via Secret Manager with automatic injection.

resource "google_service_account" "function_sa" {
  account_id   = "fn-order-handler-${var.env}"
  display_name = "Order Handler Function SA"
}

# Grant specific permissions
resource "google_project_iam_member" "function_firestore" {
  project = var.project_id
  role    = "roles/datastore.user"
  member  = "serviceAccount:${google_service_account.function_sa.email}"
}

resource "google_project_iam_member" "function_pubsub" {
  project = var.project_id
  role    = "roles/pubsub.publisher"
  member  = "serviceAccount:${google_service_account.function_sa.email}"
}

# Secret Manager — create and grant access
resource "google_secret_manager_secret" "db_url" {
  secret_id = "database-url-${var.env}"
  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_iam_member" "function_access" {
  secret_id = google_secret_manager_secret.db_url.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${google_service_account.function_sa.email}"
}

We never use the default compute service account. Each function gets its own SA so you can audit exactly which function accesses which resources. Secret versions are pinned in production; in development we use latest for convenience.
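At runtime, a secret injected via secret_environment_variables arrives as an ordinary environment variable. A small sketch of reading it at module load so a missing IAM binding fails fast at cold start — the requireEnv helper is an illustrative assumption:

```javascript
// Sketch: the DATABASE_URL secret configured in Terraform above is
// injected as a plain environment variable. Reading it once at module
// load surfaces a missing Secret Manager binding at cold start instead
// of mid-request.

function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`required environment variable ${name} is not set`);
  }
  return value;
}

module.exports = { requireEnv };
```

Usage at the top of the function module: `const dbUrl = requireEnv('DATABASE_URL');`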

Cloud Build CI/CD Pipeline

We configure Cloud Build pipelines with traffic splitting for safe production deployments.

# cloudbuild.yaml
steps:
  - name: node:20
    entrypoint: npm
    args: ['ci']

  - name: node:20
    entrypoint: npm
    args: ['run', 'test']

  - name: node:20
    entrypoint: npm
    args: ['run', 'build']

  # Pin traffic to the currently serving revision so the deploy itself
  # does not shift 100% of traffic to the new revision
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: bash
    args:
      - -c
      - |
        REV=$$(gcloud run services describe order-handler-prod \
          --region=us-central1 --format='value(status.latestReadyRevisionName)')
        gcloud run services update-traffic order-handler-prod \
          --region=us-central1 --to-revisions="$$REV=100"

  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - functions
      - deploy
      - order-handler-prod
      - --gen2
      - --region=us-central1
      - --runtime=nodejs20
      - --entry-point=handleOrder
      - --source=./dist
      - --trigger-http
      - --no-allow-unauthenticated

  # Traffic splitting — 10% to new revision
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - run
      - services
      - update-traffic
      - order-handler-prod
      - --region=us-central1
      - --to-revisions=LATEST=10

options:
  logging: CLOUD_LOGGING_ONLY

Since 2nd gen Cloud Functions are Cloud Run services under the hood, we use Cloud Run traffic splitting for canary deployments. Note that a plain deploy routes all traffic to the new revision immediately, so traffic must first be pinned to the currently serving revision before deploying. The pipeline then routes 10% of traffic to the new revision, monitors error rates for 10 minutes, and promotes to 100%. Cloud Build triggers fire on push to the main branch with path-based filtering per function.
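The promotion gate itself is a simple decision over observed traffic. An illustrative sketch, not our actual monitoring integration — the 1% error threshold, minimum sample size, and the idea of pulling counts from Cloud Monitoring are all assumptions:

```javascript
// Illustrative canary gate: after routing 10% of traffic to the new
// revision, decide whether to promote or roll back from observed
// request and error counts (e.g. pulled from Cloud Monitoring).
// The 1% threshold and 100-request minimum are assumed defaults.

function canaryDecision({ requests, errors }, maxErrorRate = 0.01, minRequests = 100) {
  if (requests < minRequests) {
    return 'wait'; // not enough traffic yet to judge the revision
  }
  return errors / requests <= maxErrorRate ? 'promote' : 'rollback';
}

module.exports = { canaryDecision };
```

A "promote" result maps to `gcloud run services update-traffic ... --to-revisions=LATEST=100`; a "rollback" maps to setting the previous revision back to 100%.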

Why Anubiz Engineering

100% async — no calls, no meetings
Delivered in days, not weeks
Full documentation included
Production-grade from day one
Security-first approach
Post-delivery support included

Ready to get started?

Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.