Azure’s new “container as a service” lets you rapidly create and launch containerized applications, including from Kubernetes, without any overhead and with an easily scriptable set of commands.

Azure is rapidly turning into a container-driven public cloud, with strategic investments in tools and hires. It’s also moving fast, launching new container-focused products and services on a regular basis. At first, Azure was catching up with Amazon Web Services’ features, but the release of the new Azure rapid-deployment container service, which acts as a bridge between platform as a service and infrastructure as a service, leapfrogs Amazon.

Introducing container as a service

Perhaps best thought of as a new class of cloud platform (call it “container as a service”), Azure Container Instances (ACI) lets you rapidly create and launch containerized applications, without any overhead and with an easily scriptable set of commands. Designed to work both on its own and with tools like Kubernetes, ACI adds container-management commands to Azure, coupling them with a billing model based on per-second usage, with no need to create, deploy, and pay for container hosts.

However, the billing model is complex, with three elements making up the charge. First, there’s a flat fee of $0.0025 per request for creating a container instance. Then, once the container is set up, you’re billed both for memory, at $0.0000125 per gigabyte per second, and for cores used, at $0.0000125 per core per second. As a worked example using those published rates, a single-core container with 1.5GB of memory that runs for an hour costs $0.0025 + (3,600 × 1 × $0.0000125) + (3,600 × 1.5 × $0.0000125), or about 11.5 cents. You’ll need to keep an eye on what you’re using and for how long, especially if you’re using ACI to handle scaling for a large application.

Using ACI to deploy containers

Setting up your first ACI container is easy enough, because ACI uses the Azure command line. The current version can run only Linux containers, though Windows container support should follow soon. Just as when working with Azure Container Service, you’ll find yourself using the Azure Cloud Shell or a remote Azure command-line instance to build and manage containers, starting by creating a resource group for your ACI containers.

The command-line container commands are easy enough to use, and you can use them to define the containers you want to run on ACI. Deploying a container from an existing repository is quick, because there’s no need to build and start up an underlying host VM; ACI simply assigns an existing VM to your container, so you’ll be up and running in seconds. The request used to deploy a container can also define the number of cores it uses and the memory it needs, and there are commands that return details of the container’s state in JSON form.
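To make that concrete, here’s a minimal sketch of the workflow using the Azure CLI. The resource group, container name, and image are hypothetical examples, and exact flag names may vary across Azure CLI versions:

# Create a resource group to hold your ACI containers (names here are examples)
az group create --name myAciGroup --location westus

# Deploy a container from an existing repository image; there is no host VM to
# build, and the request also sets the cores and memory the container gets
az container create --resource-group myAciGroup --name myaciapp \
  --image nginx --cpu 1 --memory 1.5 --ip-address public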
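Once the container is up, the same hypothetical names can be used to query its state, returned as JSON for scripts to parse, and to tear it down when you’re finished, which stops the per-second charges:

# Return the container's details, including its current state, as JSON
az container show --resource-group myAciGroup --name myaciapp --output json

# Delete the container to stop per-second billing; --yes skips the confirmation prompt
az container delete --resource-group myAciGroup --name myaciapp --yes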
In practice, you’ll be writing scripts to deploy containers on ACI, driving it from a datacenter OS or as part of a build process in a continuous-integration platform. The JSON state data configures any networking services or connections between public and private services. There’s a lot of scope for automation here, and because the Azure CLI is based on Bash, it’s particularly well suited to scripting. Microsoft has promised a PowerShell release of Cloud Shell and the Azure CLI, and it will be interesting to see how PowerShell works with ACI.

Once deployed, your container is up and running with either a public IP address or a private, internal address. Public addresses can host public-facing services, while private addresses work well for internal applications or for services that support your public interfaces.

Much of Microsoft’s initial documentation for ACI focuses on it as a host for web services, deploying high-density web servers as hosts for scalable web applications. You’ll also find useful guidance on deploying Node.js applications.

Using ACI at scale with Kubernetes

Although ACI makes a lot of sense as a tool for quickly deploying containers for test and development, it’s also useful in production, especially with Kubernetes. Sadly, one of its more useful features is still experimental: the ability to surface a group of containers as a Kubernetes pod.

The ACI Connector for Kubernetes is an open source project, hosted on GitHub, that allows Kubernetes to deploy containers on ACI. Working with the ACI Connector is just like working with any kubelet, with a command line for hosting and managing pods of containers. The connector registers ACI as an unlimited-capacity node; once it’s registered, all you need is the node name to launch pods in your ACI account, creating and destroying containers with familiar Kubernetes commands, as the sketch below shows.
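Here’s a minimal sketch of that flow, assuming the connector has been installed and has registered itself under the node name aci-connector; the pod name and image are hypothetical, and the actual node name depends on how the connector is deployed:

# The ACI Connector should appear alongside your cluster's regular nodes
kubectl get nodes

# Launch a pod on ACI by scheduling it onto the connector's node by name
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: aci-demo
spec:
  nodeName: aci-connector
  containers:
  - name: web
    image: nginx
EOF

Deleting the pod with kubectl delete pod aci-demo then removes the matching ACI container, in line with the connector’s create-and-destroy model.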
You don’t need to run Kubernetes in Azure to use this feature; all you need is a public IP address for the ACI Kubernetes APIs. That should make it possible to use ACI with any Kubernetes master, running on any service.

It will be interesting to see how Microsoft works with the Kubernetes community to bring ACI to a production-ready state. It’s useful enough that I can see people adopting it quickly, even though it’s not yet recommended for production loads. Still, I expect most people to use ACI with Kubernetes running in Azure rather than on-premises or across clouds. The cost advantages of ACI’s per-second billing make a lot of sense when you’re using Kubernetes to burst-scale applications and services. The open source ACI Connector should also help drive the development of connectors for Mesos and for Docker Swarm.

Azure’s container future

Microsoft’s commitment to containers on Azure, via ACS and ACI, makes a lot of sense, especially if you consider it part of a migration away from expensive, resource-hungry IaaS models to something much more in line with Azure’s roots as a platform. Containers running in a PaaS are a logical step once you’re using Kubernetes or something similar to manage your containers. With Azure handling container scaling and deployment, there’s little point in spending time building and managing complex virtual infrastructures yourself. Instead, your focus shifts to understanding how to construct and manage resource groups, and to working with message-driven frameworks like Azure’s Service Fabric to link containers into an application. Building code that works in this new environment might be more complex at first, but using containers as your main deployment mode can simplify updates as well as scaling.

Up to now, Microsoft has been a fast follower on the container path, but with its recent acquisition of Deis and now its membership in the Cloud Native Computing Foundation, it looks set to shift gears and start pushing the development of containers, not just using them.