Containers make cloud migration seamless
Joe Zeng, software engineer at Statsig, on using Docker
Container portability turns infrastructure changes from a rewrite into a scheduling problem. At Statsig, the app is packaged once with its runtime and dependencies, then Kubernetes places that same image on whatever nodes are available, whether that means a different VM family inside AKS or a different managed Kubernetes environment. That matters most for event-heavy products, where moving APIs and workers without rebuilding machines cuts downtime, migration work, and ops risk.
-
A VM-based workflow usually ties the app to a specific machine image, package setup, and OS configuration. A container image bundles that setup ahead of time, so the deploy unit is the same across environments. Kubernetes then handles placement, restart, and scaling of those images across nodes.
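A minimal sketch of that bundling, assuming a Node.js service; the base image, filenames, and entrypoint here are illustrative, not Statsig's actual setup:

```dockerfile
# Pin the runtime version so every environment runs the same stack.
FROM node:20-slim

WORKDIR /app

# Install dependencies from the lockfile before copying source,
# so the dependency layer is cached across rebuilds.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Copy the application code into the image.
COPY . .

# The same image now runs unchanged on any node Kubernetes picks.
CMD ["node", "server.js"]
```

Everything a VM workflow would configure per machine (runtime version, packages, app files) is baked into the image at build time, which is what makes the deploy unit identical across environments.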
-
This is especially useful when a team changes machine types. AKS node pools are built from sets of VMs, and workloads are scheduled onto those nodes. If the app already runs as containers, shifting from one node pool to another is mostly an infrastructure operation, not an application porting project.
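As a sketch of what that infrastructure operation can look like on AKS, assuming a cluster `myCluster` in resource group `myRG` with an old pool named `oldpool` (all names and the VM size are hypothetical):

```shell
# Add a node pool backed by a different VM family.
az aks nodepool add \
  --resource-group myRG \
  --cluster-name myCluster \
  --name newpool \
  --node-count 3 \
  --node-vm-size Standard_D8s_v5

# Cordon and drain the old pool's nodes (AKS labels nodes with
# agentpool=<pool name>) so Kubernetes reschedules the same container
# images onto the new pool. No application rebuild is involved.
kubectl cordon -l agentpool=oldpool
kubectl drain -l agentpool=oldpool --ignore-daemonsets --delete-emptydir-data

# Remove the old pool once workloads have moved.
az aks nodepool delete \
  --resource-group myRG \
  --cluster-name myCluster \
  --name oldpool
```

The app's image never changes; only where the scheduler is allowed to place it does.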
-
The broader market adopted Docker for exactly this reason. Containers became the default delivery format for microservices because they separated application code from the underlying server, and Kubernetes won orchestration by making that format portable across cloud vendors instead of locking teams into one provider-specific runtime.
Going forward, portability becomes more valuable as teams mix clouds, specialized compute, and tighter cost controls. The companies that standardize on container images plus Kubernetes gain the freedom to chase cheaper capacity, new hardware, or multi-cloud resilience without changing how developers build and ship software.