How the managed Kubernetes services on the major clouds stack up, and how well they integrate with the clouds that host them

First came containers, then came Kubernetes. The world needed relief from the tedium and complexity of deploying, managing, and scaling containerized applications, and Kubernetes answered the call. One factor that has propelled Kubernetes forward is that Kubernetes clusters can run on-premises or in the cloud, or even span the two, making container apps portable across environments.

Thus it's no surprise that all three of the major public cloud providers offer managed Kubernetes services, where all you need to do is bring your containers and let the cloud do the rest of the work. But just as each cloud has its own roster of exclusive features, each cloud's Kubernetes service has its own peculiarities. In this article we'll look at how the hosted Kubernetes offerings of Amazon, Google, and Microsoft stack up, especially in terms of their support for the storage and other services of the underlying cloud by way of the Kubernetes plug-in ecosystem.

Amazon Elastic Kubernetes Service

AWS support for containers began in late 2014 with the Amazon Elastic Container Service, which allowed you to run Docker containers on AWS EC2 instances. Today ECS allows you to deploy Docker containers to either EC2 or AWS Fargate, a serverless option.

Amazon Elastic Kubernetes Service, or EKS, brought Kubernetes support to AWS a few years later. Before EKS, which has been generally available since June 2018, the only way to run a Kubernetes cluster in AWS was to spin up a slew of EC2 instances and configure the cluster manually. With EKS, launching a Kubernetes cluster is much easier: You deploy the worker nodes in EC2 and simply point them at the managed control plane, which AWS runs for you.

Amazon lets you run Kubernetes worker nodes on different kinds of EC2 instances. For instance, a low-demand background job could be scheduled on EC2 Spot Instances as a cost-saving measure.
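As an illustrative sketch, such a job can be steered onto Spot-backed nodes by labeling the Spot node group when you create it and adding a matching nodeSelector to the job spec. The label name and container image here are our own placeholders, not EKS defaults:

```yaml
# Hypothetical Job spec: schedules a background job onto EC2 Spot-backed
# worker nodes, assuming those nodes were labeled "lifecycle: spot" when
# the node group was created (the label is our own naming convention).
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report
spec:
  template:
    spec:
      nodeSelector:
        lifecycle: spot          # matches only the Spot-backed nodes
      restartPolicy: OnFailure   # tolerate retries if a Spot node is reclaimed
      containers:
      - name: report
        image: example.com/reports:latest   # placeholder image
```

Because Spot Instances can be reclaimed with little notice, this pattern suits work that can be retried, not workloads that must run uninterrupted.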
By contrast, a job that needs nodes to be continuously available over a long period of time could be scheduled on EC2 Reserved Instances.

Amazon EKS offers an upgrade mechanism for new versions of Kubernetes. When you trigger an upgrade, it attempts to relaunch the cluster using the new version, and if anything fails it reverts to the last known-good configuration. Add-ons have to be upgraded manually, though, and Kubernetes versions more than four releases old aren't supported. Using AWS Key Management Service to store Kubernetes secrets isn't directly supported, but a workaround can be cobbled together.

Kubernetes offers many official plug-ins and add-ons for working with Amazon infrastructure. These include a Key Management Service provider for data encryption, an AWS IAM authenticator, and Container Storage Interface drivers for Elastic Block Storage, Elastic File System, and Amazon FSx for Lustre. Note that the CSI drivers are still considered preliminary and not ready for production use.

Amazon also has its own roster of Kubernetes controllers for integration with AWS. Note that many of these tools are still in alpha and not recommended for production use:

- The AWS Service Operator, which allows you to create AWS resources using kubectl and Kubernetes CRDs (custom resource definitions).
- The Kubernetes Custom Metrics Adapter, which allows AWS CloudWatch metrics to be used to scale Kubernetes deployments.
- A Kubernetes Ingress controller for managing Amazon API gateways.
- A plug-in to manage EKS clusters across Amazon accounts.
- A plug-in to use Kubernetes CRDs to work with AWS VPNs.
- A plug-in to allow Kubernetes nodes to automatically place pods in an optional secondary subnet.
- A plug-in that allows AWS Network Load Balancers to be created and managed as Kubernetes custom resources.

Microsoft Azure Kubernetes Service

Azure support for containers began with the Azure Container Service, or ACS, which was introduced in 2015.
Azure Container Service supported basic Docker containers, Kubernetes, and Mesosphere DC/OS. In late 2017, Microsoft rolled out Azure Kubernetes Service (AKS) as a replacement for ACS. Azure Container Service is now deprecated and scheduled to be disabled in 2020. While Kubernetes users have a superior option in Azure Kubernetes Service, those who want to use Docker without Kubernetes are being directed to Docker's own Docker on Azure offering, in either its community or enterprise edition.

Azure Kubernetes Service allows you to run both Linux and Windows containers. However, AKS support for Windows Server containers is still patchy, if only because container support on Windows is itself still a work in progress. For instance, AKS lets you natively schedule jobs in Azure Container Instances (hypervisor-level isolated containers), but only if they're running Linux. For Windows containers, you need to use the Virtual Kubelet provider, which Microsoft describes as "an experimental open source project and should be used as such."

Upgrading to a new version of Kubernetes can be done from the Azure command line. You can also schedule upgrades to individual node pools, as a way to make rolling upgrades less disruptive. Node pools can also be used to define groups of VMs with greater or lesser CPU or memory capacity, since by default all the nodes in a cluster must be the same machine type.

Azure's Kubernetes support includes running node pools on GPUs, though this is restricted to Linux nodes. You also need to perform some manual heavy lifting on the nodes in question, like installing the Nvidia device plug-in. But most GPU-based Kubernetes jobs, like TensorFlow machine learning, can be set up and scheduled on Azure.

Kubernetes plug-ins provide Container Storage Interface support for Azure Files and for Azure Disk Storage. Unfortunately, both of these plug-ins are still in the alpha stage, so it's unwise to rely on them for production work.
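For a sense of what adopting the Azure Disk CSI driver looks like, here is a sketch of a StorageClass that hands provisioning to it. The provisioner name follows the driver project's convention; since the plug-in is alpha, parameter names and defaults may change between releases:

```yaml
# Illustrative StorageClass backed by the (alpha) Azure Disk CSI driver.
# Field values are a sketch, not production-tested settings.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-csi
provisioner: disk.csi.azure.com   # the Azure Disk CSI driver
parameters:
  skuName: Premium_LRS            # Azure managed-disk SKU to provision
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # provision in the pod's zone
```

A PersistentVolumeClaim referencing this class would then get a dynamically provisioned managed disk, exactly as with the in-tree Azure Disk volume plug-in, just routed through the CSI interface instead.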
Kubernetes also offers an official plug-in for supporting the Cluster API on Microsoft Azure. It's also possible to use a plug-in to manage Kubernetes secrets from Azure Key Vault, although this tool doesn't support automatic key rotation.

Google Kubernetes Engine

Google Kubernetes Engine, or GKE, hooks into many Google Cloud features. Access management and identity are handled through the existing Google accounts and permissions infrastructure. Stackdriver logging and monitoring, used for apps elsewhere in Google Cloud, provide insights into apps in the Kubernetes cluster as well. All Kubernetes functions can be controlled from the Google Cloud Console or a command line.

By default, new Kubernetes clusters run the most recent stable Kubernetes version. Cluster masters are automatically upgraded to the newest version of Kubernetes by default, but you can elect to disable this and perform manually initiated upgrades if you want.

All nodes in a cluster must run the same Google Compute Engine machine type. The default machine type offers one virtual CPU and 3.75 GB of memory, but you can change that when you create the cluster. Kubernetes instances run Google's own Container-Optimized OS, which is derived from the Chromium OS project. Like RancherOS, CoreOS, and other container-centric operating systems, Container-Optimized OS includes only the components needed to run and oversee containers, minimizing the storage and memory footprint, complexity, and attack surface.

Like AWS and Azure, Google provides its own container registry, but also its own cloud container build service for those who need to custom-build containers as part of their Kubernetes deployments. Kubernetes secrets can be encrypted using keys held in Google Cloud Key Management Service.

Google Kubernetes Engine has long supported GPU-powered nodes. However, you do need to manually provision the Nvidia GPU device drivers for each node.
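Once a GPU node pool exists and the drivers are in place, asking for a GPU is just a resource request on the container. A minimal sketch (the container image is a placeholder):

```yaml
# Illustrative pod spec requesting one Nvidia GPU on GKE. This schedules
# only onto nodes that actually expose the nvidia.com/gpu resource, i.e.
# GPU nodes with the device drivers installed; the image is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job
spec:
  containers:
  - name: trainer
    image: example.com/tensorflow-trainer:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1    # request a single GPU device
```

The scheduler treats nvidia.com/gpu like any other extended resource, so pods without the request land on ordinary nodes and GPU capacity isn't wasted on non-GPU work.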
Two plug-ins provide Container Storage Interface functionality for GKE, one for Google Compute Engine Persistent Disk and one for Google Cloud Filestore. Neither is officially supported by Google, and neither should be used in production work just yet.

Other Kubernetes projects for Google Cloud Platform include a Terraform module to create Kubernetes clusters on GCE and a Kubernetes Key Management Service plug-in for Google Cloud Key Management Service. Both are functional, but both should still be considered early-stage.