Docker Image Optimization — Shrink Your Images from Gigabytes to Megabytes
Your Docker image is 1.2 GB. It takes 3 minutes to push, 2 minutes to pull, and contains a full operating system, build tools, and test dependencies that have no business running in production. We optimize your Dockerfiles with multi-stage builds, minimal base images, layer caching strategies, and security scanning, producing images under 100 MB — faster to build, faster to deploy, and with a smaller attack surface.
Need this done for your project?
We implement, you ship. Async, documented, done in days.
Why Image Size Matters
Docker image size directly impacts three things: build time (larger images take longer to build and push to the registry), deployment time (each node pulls the image before starting the container — a 1 GB image on 10 nodes means 10 GB of transfer), and security surface (every package in the image is a potential vulnerability — fewer packages means fewer CVEs to track and patch).
The most common bloat sources: using node:latest instead of node:20-alpine (900 MB vs 130 MB base image), installing dev dependencies in the production image, copying the entire source directory instead of only build artifacts, and accumulating layers from package manager caches that are not cleaned up.
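A common example of the cache problem: deleting the package cache in a later instruction does not help, because the earlier layer still contains the files. A sketch of the fix for a Debian-based image (base image and package are illustrative):

```dockerfile
# Bad: the cache is baked into the first layer; the later rm changes nothing
#   RUN apt-get update && apt-get install -y curl
#   RUN rm -rf /var/lib/apt/lists/*

# Good: install and clean up in a single RUN, so no layer ever contains the cache
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```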
A properly optimized Docker image for a Node.js application should be 50-150 MB. For a Go application, 10-30 MB. For a Python application, 80-200 MB. If your images are significantly larger, there are easy wins available.
Our Optimization Techniques
Multi-Stage Builds: The most impactful optimization. Stage 1 installs all dependencies (including dev), runs the build (TypeScript compilation, asset bundling), and produces artifacts. Stage 2 starts from a clean base image, copies only the build artifacts and production dependencies, and sets up the runtime. Build tools, source code, and dev dependencies never make it into the final image.
For Node.js with Next.js standalone output:
```dockerfile
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine AS runner
WORKDIR /app
RUN addgroup -S app && adduser -S app -G app
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
USER app
EXPOSE 3000
CMD ["node", "server.js"]
```

Base Image Selection: We choose the smallest appropriate base image. alpine variants are 5-10 MB instead of 100+ MB for Debian-based images. For applications that do not need a full OS, distroless images provide just the language runtime with no shell, no package manager, and minimal attack surface. For Go applications, scratch (empty image) works since Go produces static binaries.
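For a Go service, the same multi-stage pattern can target scratch. A sketch, assuming a static binary and a standard project layout (the module path, binary name, and port are placeholders):

```dockerfile
# Build stage: compile a fully static binary
FROM golang:1.22-alpine AS builder
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app ./cmd/server

# Production stage: empty base image — only the binary and CA certificates
FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
```

Copying the CA bundle is the one detail scratch images commonly miss: without it, outbound TLS calls from the binary fail.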
Layer Caching: We order Dockerfile instructions so that frequently changing layers (source code) come after infrequently changing layers (dependency installation). COPY package*.json before COPY . . ensures that dependencies are only reinstalled when package.json changes. In CI, we configure BuildKit cache mounts for package manager caches so repeated builds reuse downloaded packages.
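The cache-mount technique can be sketched as follows (requires BuildKit; the syntax directive enables the `--mount` flag, and the cache target is npm's default cache directory):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine AS builder
WORKDIR /app
# Dependency layer: only invalidated when package*.json changes
COPY package*.json ./
# npm's download cache persists across builds but is not baked into the layer
RUN --mount=type=cache,target=/root/.npm npm ci
# Source layer: changes frequently, so it comes last
COPY . .
RUN npm run build
```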
Security Scanning: We integrate Trivy or Grype into the build pipeline to scan images for known vulnerabilities. Critical and high-severity CVEs block the build. We configure a base image update schedule so your images stay patched against newly discovered vulnerabilities.
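As a sketch, a CI step that blocks on serious findings might invoke Trivy like this (the image tag is a placeholder):

```shell
# Scan the freshly built image; a non-zero exit on HIGH/CRITICAL CVEs
# fails the CI step and therefore the build
trivy image \
  --severity HIGH,CRITICAL \
  --exit-code 1 \
  --ignore-unfixed \
  myapp:latest
```

`--ignore-unfixed` skips CVEs with no available patch, so the gate only blocks on vulnerabilities you can actually act on.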
What You Get
Optimized Docker images and build process:
- Multi-stage Dockerfiles — build dependencies excluded from production images
- Minimal base images — alpine, distroless, or scratch as appropriate
- Layer caching — optimized instruction ordering with BuildKit cache mounts
- Non-root user — containers run as non-root for security
- Security scanning — Trivy/Grype in CI with severity-based blocking
- Image size report — before/after comparison with layer-by-layer analysis
- Build speed improvement — cached builds completing in seconds instead of minutes
- Registry configuration — ECR/GCR/Docker Hub with lifecycle policies to clean old images
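As one example of a lifecycle policy, ECR accepts a JSON rule set; a sketch that expires untagged images beyond the ten most recent (the description text is illustrative):

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images, keeping the 10 most recent",
      "selection": {
        "tagStatus": "untagged",
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": { "type": "expire" }
    }
  ]
}
```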
Why Anubiz Engineering
Ready to get started?
Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.