Simon Bisson
Contributor

Microsoft adds a new Linux: CBL-Mariner

analysis
Nov 10, 2020 | 6 mins
Cloud Computing | Microsoft Azure | Software Development

Azure’s container infrastructure Linux host gets a public outing on GitHub.


Think of Microsoft and Linux, and you’re likely to think of its work building an optimized Linux kernel for the Windows Subsystem for Linux (WSL). Pushed out through Windows Update, this Microsoft-maintained kernel supports all the WSL 2 Linux distributions, including Ubuntu and SUSE.

But WSL2’s kernel isn’t Microsoft’s only Linux offering. We’ve looked at some of the others here in the past, including the secure Linux for Azure Sphere. Others include the SONiC networking distribution designed for use with Open Compute Project hardware and used by many public clouds and major online services, and the hosts for Azure ONE (Open Network Emulator) used to validate new networking implementations for Azure.

Microsoft’s Linux Systems Group

With an ever-growing number of Microsoft Linux kernels and distributions, there’s now an official Linux Systems Group that handles much of the company’s Linux work. This includes an Azure-tuned kernel available as patches for many common Linux distributions, optimizing them for use with Microsoft’s Hyper-V hypervisor, and a set of tools to help deliver policy-based enforcement of system integrity, making distributions more secure and helping manage updates and patches across large estates of Linux servers and virtual machines.

The team recently released a new Linux distribution: CBL-Mariner. Although the release is public, much of its use isn’t, as it is part of the Azure infrastructure, used for its edge network services and as part of its cloud infrastructure. The result is a low-overhead, tightly focused distribution that’s less about what’s in it, and much more about what runs on it.

Introducing CBL-Mariner: Microsoft’s Linux container host

Investing in a lightweight Linux such as CBL-Mariner makes a lot of sense, considering Microsoft’s investments in container-based technologies. Cloud economics require hosts to use as few resources as possible, allowing services such as Azure to achieve high utilization. At the same time, Kubernetes hosts need as little overhead as possible, allowing as many pods per node as possible and letting new nodes launch as quickly as feasible.

The same is true of edge hardware, especially the next generation of edge nodes intended for use with 5G networks. Here, as in the public cloud, workloads are what matters most, shifting them and their data closer to users. Microsoft uses its growing estate of edge hardware as part of the Azure Content Delivery Network outside its main Azure data centers, caching content from Azure web apps and from hosted video and file servers, with the aim of reducing latency where possible. The Azure CDN is a key component of its Jamstack-based Azure Static Web Apps service, hosting pages and JavaScript published from GitHub.

In the past, CoreOS Container Linux was the preferred host for Linux containers, but Red Hat deprecated it after acquiring CoreOS, and it is no longer supported; anyone using it has had to find an alternative. Microsoft offers Flatcar Container Linux, a CoreOS fork, to Azure users as part of a partnership with its developers at Kinvolk, but having its own distribution for its own services ensures that Microsoft can update and manage its host and container instances on its own schedule. Development in public means anyone can make and use their own builds, or contribute new features and optimizations, for example adding support for new networking features.

Running CBL-Mariner and containers

Out of the box, CBL-Mariner has only the basic packages needed to support and run containers, taking a similar approach to CoreOS. At heart, Linux containers are isolated user spaces; keeping shared resources to a minimum reduces the security exposure of the host OS by ensuring that application containers can’t take dependencies on it. If you’re using CBL-Mariner with your own containers, test any public Docker images before deploying, as they may not contain the appropriate packages. You may need to put your own base images in place as part of your application dockerfiles.
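One way to act on that advice is a quick smoke test before deployment, confirming the image bundles the userland your application expects rather than assuming the host provides it. The image name and the binary being checked below are placeholders:

```shell
# Smoke-test a public image before deploying it to a minimal container host.
# Replace the image name and binary with your own; this simply checks that
# the required tool ships inside the image's own filesystem.
docker run --rm registry.example.com/myapp:latest /bin/sh -c \
  'command -v python3 >/dev/null && echo "python3 bundled" || echo "python3 missing"'
```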

CBL-Mariner uses familiar Linux tools to add packages and manage security updates, offering updates either as RPM packages or as complete images that can be deployed as needed. Using RPM allows you to add your own packages to a base CBL-Mariner image to support additional features and services as needed.
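In practice this looks much like any other RPM-based distribution. CBL-Mariner ships the lightweight, dnf-compatible tdnf package manager, so day-to-day package work is a sketch like the following (nginx is just an example package; availability depends on the configured repositories):

```shell
# Routine package management on a CBL-Mariner host using tdnf,
# the distribution's dnf-compatible package manager.
sudo tdnf update            # pull available updates as signed RPMs
sudo tdnf install nginx     # add an extra package (example name)
```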

Getting started with CBL-Mariner can be as simple as firing up an Azure service. But if you want hands-on experience or want to contribute to the project, all the source code is on GitHub, along with instructions for building your own installations. Prerequisites for a build on Ubuntu 18.04 include Go, the QEMU (Quick EMUlator) utilities, and RPM.
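On Ubuntu 18.04 the prerequisite install is a one-liner along these lines; the package list follows the project’s build documentation at the time of writing and may have changed in later branches:

```shell
# Illustrative prerequisite install for building CBL-Mariner on Ubuntu 18.04.
# Package names are taken from the project's build docs and may drift.
sudo apt-get update
sudo apt-get -y install make tar wget curl rpm qemu-utils golang-go genisoimage
```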

Build your own installation using the GitHub repository

You have several different options for building from the source. Start by checking out the source from GitHub, making a local clone of the project repository. Various branches are available, but for a first build you should choose the current stable branch. From here you can build the Go tools for the project before downloading the sources.
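That first pass looks roughly like this; the branch name is illustrative, so check the repository for the current stable branch before building:

```shell
# Clone the project and switch to a stable branch before building.
# "1.0-stable" is a placeholder; check the repo for the current branch name.
git clone https://github.com/microsoft/CBL-Mariner.git
cd CBL-Mariner/toolkit
git checkout 1.0-stable
# Build the project's Go tooling ahead of downloading sources
sudo make go-tools REBUILD_TOOLS=y
```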

For quick builds you have two options, both of which use prebuilt packages and assemble a distribution from them. The first, for bare-metal installs, creates an ISO file ready to install. The second, for using CBL-Mariner as a container host, builds a ready-to-use VHDX virtual hard disk for use with Hyper-V. An alternative option builds a container image that can be used as a source for your Mariner-based dockerfiles, giving you everything you need to build and run compatible containers with your applications.
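Both quick builds are make targets run from the toolkit directory; the target names and image config paths below follow the project’s documentation and should be treated as illustrative:

```shell
# Quick builds assemble images from prebuilt packages (REBUILD_PACKAGES=n).
# Run from the CBL-Mariner toolkit directory; config paths are illustrative.

# Bare-metal installer ISO:
sudo make iso REBUILD_TOOLS=y REBUILD_PACKAGES=n CONFIG_FILE=./imageconfigs/full.json

# VHDX container-host image for Hyper-V:
sudo make image REBUILD_TOOLS=y REBUILD_PACKAGES=n CONFIG_FILE=./imageconfigs/core-efi.json
```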

If you prefer to build from source, the option is available, although builds will be considerably slower than using precompiled packages. However, this will allow you to target alternative CPUs, for example building a version that works with the new generation of ARM-based edge hardware similar to that being used for AWS’s Graviton instances. You can bootstrap the entire build toolchain to ensure that you have control over the whole build process. The full build process can even be used to build supported packages, with the core files listed in a JSON configuration file.
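A full from-source build bootstraps the toolchain before building packages and images; the flags below are a sketch based on the project’s make targets, not a definitive recipe:

```shell
# Full from-source build: bootstrap the toolchain, then rebuild every
# package before assembling an image. Much slower than the quick builds.
sudo make toolchain REBUILD_TOOLCHAIN=y REBUILD_PACKAGES=y
sudo make image REBUILD_TOOLCHAIN=y REBUILD_PACKAGES=y CONFIG_FILE=./imageconfigs/core-efi.json
```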

Once built, you can start to configure CBL-Mariner’s features. Out of the box, these include an iptables-based firewall, support for signed updates, and a hardened kernel. Optional features can be set up at the same time, with tools to improve process isolation and encrypt local storage: important features for a container host in a multitenant environment where you need to protect local data.
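Because the firewall is standard iptables, extending the default policy uses the familiar tooling; the rule below is illustrative, and the chain names and persistence mechanism depend on the policy your build ships with:

```shell
# Illustrative firewall tweak on a CBL-Mariner host: allow inbound HTTPS.
# Default chain names and how rules persist across reboots depend on the
# shipped policy, so verify against your own build.
sudo iptables -A INPUT -p tcp --dport 443 -m state --state NEW -j ACCEPT
sudo iptables -L INPUT -n    # confirm the rule was added
```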

The result is an effective replacement for CoreOS, and one I’d like to see made available to Azure users as well as to Microsoft’s own teams. CBL-Mariner may not have the maturity of other container-focused Linuxes, but it’s certainly got enough support behind it to make it a credible tool for use in hybrid cloud and edge network architectures, where you’re running code on your own edge servers and in Microsoft’s cloud. If Microsoft doesn’t make it an option, at least you can build it yourself.


Author of InfoWorld's Enterprise Microsoft blog, Simon Bisson prefers to think of “career” as a verb rather than a noun, having worked in academic and telecoms research, as well as having been the CTO of a startup, running the technical side of UK Online (the first national ISP with content as well as connections), before moving into consultancy and technology strategy. He’s built plenty of large-scale web applications, designed architectures for multi-terabyte online image stores, implemented B2B information hubs, and come up with next generation mobile network architectures and knowledge management solutions. In between doing all that, he’s been a freelance journalist since the early days of the web and writes about everything from enterprise architecture down to gadgets.
