Technical Senior Level

How do you optimise Docker build times in CI? Discuss BuildKit cache mounts, registry-based caching, and cache invalidation strategies.

Quick Tip

Show CI-specific optimisation: "I use BuildKit with --cache-from pointing to the registry so CI runners pull cached layers. Cache mounts persist the npm/pip cache across builds. Dependency changes rebuild one layer, not everything."
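The approach in the tip can be sketched as a single buildx invocation. This is a minimal sketch, assuming a BuildKit-enabled runner; the registry path registry.example.com/app and the buildcache tag are placeholder names:

```shell
# Pull layer cache from the registry, build, push the image,
# and export the updated cache back to the registry.
# mode=max caches all layers, including intermediate stages.
docker buildx build \
  --cache-from type=registry,ref=registry.example.com/app:buildcache \
  --cache-to type=registry,ref=registry.example.com/app:buildcache,mode=max \
  -t registry.example.com/app:latest \
  --push .
```

With this, a fresh CI runner pulls cached layers instead of rebuilding them, so only layers whose inputs changed are executed.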

What good answers include

BuildKit features: --mount=type=cache for persistent package-manager caches (pip, npm, apt) that survive across builds, --cache-from and --cache-to for importing and exporting layer caches from a registry, and parallel execution of independent build stages.

Registry-based caching: push the build cache to a registry so ephemeral CI runners with no local state can still reuse layers.

Cache invalidation: COPY dependency manifests before the rest of the source so dependency layers survive code-only changes, use .dockerignore to keep irrelevant files from invalidating COPY layers, and use build arguments carefully, since each distinct ARG value invalidates the cache for every layer that consumes it.

Strong candidates discuss: the difference between the inline and registry cache backends (inline embeds cache metadata in the image itself; a registry cache is stored separately and supports mode=max), using the GitHub Actions cache backend with BuildKit, cache-key strategies for monorepos, and the time/storage trade-off of caching large layers.
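The invalidation ordering and cache mounts above can be combined in one Dockerfile. A minimal sketch for a Node.js build; the same pattern applies to pip or apt by changing the mount target:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-slim
WORKDIR /app

# Copy dependency manifests first: this layer is only invalidated
# when dependencies change, not on every source edit.
COPY package.json package-lock.json ./

# Persist npm's download cache across builds. The install layer is
# still rebuilt when the manifests change, but packages are served
# from the cache mount instead of being downloaded again.
RUN --mount=type=cache,target=/root/.npm npm ci

# Source changes invalidate only the layers from here down.
COPY . .
RUN npm run build
```

Note that a cache mount is not part of the image layer cache: it is a persistent directory on the builder, so it speeds up rebuilds even when the RUN layer itself is invalidated.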

What interviewers are looking for

Senior build optimisation question. Candidates whose CI builds take 10 minutes because they rebuild everything from scratch do not understand Docker caching. Those who use BuildKit cache mounts and registry-based caching have fast, efficient pipelines.
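In a GitHub Actions pipeline, the registry cache can be replaced with BuildKit's GitHub Actions cache backend. A hedged sketch using the docker/build-push-action step (the image tag is a placeholder, and a docker/setup-buildx-action step is assumed earlier in the workflow):

```yaml
- name: Build and push with GHA cache
  uses: docker/build-push-action@v6
  with:
    push: true
    tags: ghcr.io/example/app:latest
    # Import and export BuildKit cache via the GitHub Actions cache
    # service; mode=max also caches intermediate stages.
    cache-from: type=gha
    cache-to: type=gha,mode=max
```

The trade-off versus a registry cache is that the GHA cache has size and eviction limits, so very large layers may be better served by a registry backend.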
