Since the end of March, Kubernetes has had full, production-level support for running Windows Server nodes in clusters. With Kubernetes version 1.14, running Windows-based containers has moved from beta to stable, meaning Kubernetes can now natively schedule Windows containers.
Windows support was gradually baked into Kubernetes over the course of three years and a number of releases; before that, Kubernetes worker nodes ran only on Linux and, by extension, only Linux containers.
Currently, the control plane components (such as the API server and scheduler) still run only on Linux-based nodes. Worker nodes, however, can consist of any mix of Windows and Linux, ranging from 100% Windows to 100% Linux, including mixed worker node scenarios.
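In a mixed cluster, workloads are steered to the right operating system with ordinary Kubernetes scheduling constraints. A minimal sketch, assuming a cluster with at least one Windows worker node (the deployment name is hypothetical; the kubernetes.io/os label is set automatically by the kubelet on each node):

```yaml
# Hypothetical Deployment that pins its pods to Windows worker nodes
# via a nodeSelector on the standard kubernetes.io/os node label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-example        # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iis-example
  template:
    metadata:
      labels:
        app: iis-example
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # only schedule onto Windows nodes
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis
        ports:
        - containerPort: 80
```

The same mechanism works in reverse (`kubernetes.io/os: linux`), which is how Linux and Windows workloads coexist in one cluster without interfering with each other's scheduling.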
What production-ready Kubernetes-on-Windows means
Adding Windows-based worker support was not a simple task. Most of the work went into enhancements to Windows Server container networking via Container Network Interface (CNI) plugins. Without this support, many of the additional networking features, like overlay networking and advanced routing, were not possible. This meant working with a number of network providers to ensure their CNI plugins also work on Windows, an effort that is ongoing.
Work is never completely done, and there are some limitations and future roadmap items to consider. It's important to realize that Kubernetes was originally built for Linux systems, and Windows has specific behaviors of its own to take into account. Some Linux features do not translate well to Windows (yet); others require a new Windows-specific implementation. The longer-term goal is to offer consistency in management, not to make Windows interchangeable with Linux.
Why Kubernetes on Windows is important
Just as VMware vSphere was once the interface to virtual machine management, Kubernetes is quickly becoming the de facto interface for container workloads. From on-prem vendors to the public cloud players, everyone has a Kubernetes play. This allows developers to interface with containers using a common, abstracted API regardless of where the containers run.
This enables portability of containers not only across underlying infrastructures, but also across Windows and Linux, increasing the reach of Kubernetes. Many applications are built on the Microsoft stack, and having to stand up a completely separate infrastructure, including a different container orchestrator, did not make sense; at least not at scale. For Windows operators and developers, this now means they can use the same tools and processes to manage their containers, taking full advantage of the vast knowledge and experience around the Kubernetes ecosystem.
Containers needed for cloud-native microservices
Being able to run containers at scale is a key requirement to run a microservices architecture. Without it, there’s no suitable delivery mechanism to deploy and run many services in a complex microservices landscape. Kubernetes unlocks many of the advantages of a cloud-native microservice approach, and adding Windows container support extends this approach into the Microsoft ecosystem.
These advantages are crucial to building modern cloud-native applications: quick scaling in response to peak loads, smoother release management and upgrades, and the ability to manage dependencies when multiple teams build applications on the same platform.