Simon Bisson
Contributor

What you need to know about Docker in Windows

Feb 01, 2017 | 6 mins

Microsoft has made application containers available in Windows itself, not only in Windows 10's Linux subsystem

I spent the end of last week at Monki Gras, a London developer conference focused on the craft of software development. It’s a fascinating event, and this year focused on how to package software.

Not surprisingly, many of the speakers talked about the role of containers in devops and continuous delivery. But there was a general misconception about Windows’ support for containers, which was usually characterized as support for Docker running in Linux VMs.

That’s not true: Windows has its own container technologies, building on Docker but giving it a uniquely Microsoft spin. That’s probably the source of the confusion, with Windows 10 adding support for a Linux subsystem and Microsoft adding Docker tools to Windows Server 2016 around the same time. Both are part of Microsoft’s approach to cloud-native application development, which is a key element of its Azure platform going forward.

Microsoft’s commitment to containers, one of the more important cross-industry developments of the last few years, shouldn’t be surprising. Perhaps best thought of as a way of encapsulating an entire userland of processes and namespaces to isolate it from other instances running on the same server, containers have rapidly become a key component of devops and continuous-integration implementations. Microsoft has been a quick adopter of these approaches internally, and as always, its tools reflect how Redmond uses software and how it builds applications.

Understanding containers

By separating the services an application uses from the services an OS needs, modern containers have become a powerful tool for packaging and deploying applications on servers. Containers offer portability among development, on-premises datacenters, and private, hybrid, and public clouds. Applications wrapped in a container are independent of the host OS, and they can run on any similar container host without changes.

Wrapping an application in a container means that the application is easy to deploy alongside all the appropriate configuration files and dependencies: If a container runs on a development machine or passes all your integration tests, then it’ll run on a server without any changes. You can change out a container for a new version without affecting the underlying OS, and you can move a container from server to server without affecting your code. It’s the logical endpoint of a devops model, allowing you to deploy infrastructure and applications separately — and manage them separately.

Originally a mainframe technology, containers (or at least similar forms of namespace and process isolation) could be found in many Unix OSes, including Linux and Solaris.

Inside Windows containers

Now, with the release of Windows Server 2016, Windows has its own container technology. It’s based around the popular open source Docker container service, but it adds support for the PowerShell command line and for additional isolation through Hyper-V Containers, which pair well with the thin, container-focused Nano Server.

Docker remains at the heart of Microsoft’s container strategy. Its tools, like Swarm and Machine, are widely used, and its Docker Datacenter product can manage both Windows and Linux containers. You can even use Docker’s client from the Bash shell that’s part of Windows 10, installing it in the Windows Subsystem for Linux. That approach does require you to juggle certificates, so you may prefer to use Docker’s Windows app as a development and basic management tool for both your Windows and Linux containers.
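If you take the WSL route, the Docker client finds a remote engine through its standard environment variables. A minimal sketch, with a hypothetical host name and certificate path (substitute your own):

```shell
# Hypothetical host name and cert directory -- substitute your own.
# The Docker client reads these environment variables to locate a remote engine.
export DOCKER_HOST=tcp://winserver.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/certs   # expects ca.pem, cert.pem, key.pem
docker version    # reports both the local client and the remote Windows engine
```

Those three certificate files are what the “juggling” above refers to: they must match the TLS certificates configured on the Windows Docker engine.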

Windows containers are, like many Windows Server features, a role that can be installed either via the familiar Windows Features dialog or via PowerShell. Taking the PowerShell route makes the most sense, because there’s a OneGet PowerShell module that installs both the Windows containers feature and Docker, with only one reboot needed to get started. (You’ll also need to enable Hyper-V virtualization if you want to use Hyper-V containers.)
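The OneGet route looks something like the following sketch, assuming an elevated PowerShell session on Windows Server 2016 and Microsoft’s DockerMsftProvider package from the PowerShell Gallery (check the current module name before relying on it):

```powershell
# Elevated PowerShell on Windows Server 2016.
# DockerMsftProvider is the OneGet/PackageManagement provider Microsoft publishes.
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force   # the single reboot needed before the docker service is usable
```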

There’s a surprising amount of enthusiasm for Windows containers from both developers and ops teams; Microsoft has reported more than 1 million downloads of the base Windows images from Docker’s Hub container library since Windows Server 2016 went into general availability.

Building and deploying containers on Windows

Containers aren’t only a server tool; the Professional and Enterprise editions of the Windows 10 Anniversary Update also support containers. You’ll need to enable them from the Windows Features dialog, but once they’re enabled you can install and manage Windows containers on a development PC using PowerShell. Because Windows 10 supports only Hyper-V containers, you’ll need to install Hyper-V as well.
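Enabling both features from PowerShell might look like this sketch, assuming an elevated session on a supported Windows 10 build:

```powershell
# Elevated PowerShell on Windows 10 Pro or Enterprise (Anniversary Update or later).
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
Restart-Computer   # both features need a reboot before use
```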

Once Windows containers have been enabled, you’ll need to download and install the Docker Engine and Docker client, then pull the base images you’ll configure for your application.
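Pulling Microsoft’s two base images and smoke-testing a Hyper-V container could look something like this (the image names are those Microsoft published on Docker Hub at Windows Server 2016’s launch; check current tags before using them):

```shell
# Base images as published on Docker Hub at Windows Server 2016's release.
docker pull microsoft/nanoserver
docker pull microsoft/windowsservercore

# Windows 10 runs only Hyper-V containers, so isolation is set explicitly here;
# on Windows Server 2016, process isolation is the default.
docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c echo hello
```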

Microsoft’s suggested base image for new-build Windows containers is Nano Server, its low-footprint cloud-focused server implementation. Nano Server makes a lot of sense as a container base: It’s small and fast, with no UI, so it’s quick to deploy and relatively secure.

One important note: Although you can use it to host runtimes like Node.js, Nano Server is intended to host .NET Core applications, including ASP.NET Core, so you won’t get all the .NET features you’re used to. There’s enough of a difference from the familiar Windows Server that it’s perhaps best to think of Nano Server-hosted Windows containers as a tool for new applications rather than as a host for existing code.
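As a rough illustration, a Dockerfile for a hypothetical ASP.NET Core app on a Nano Server-based image might look like this (the image tag, paths, and app name are assumptions for the sketch, not fixed values):

```dockerfile
# Sketch: a hypothetical ASP.NET Core app on a Nano Server-based image.
# The microsoft/dotnet repository offered nanoserver-based tags at the time;
# check Docker Hub for the current tag.
FROM microsoft/dotnet:nanoserver
WORKDIR /app
COPY ./publish .
# Nano Server hosts .NET Core, not the full .NET Framework.
ENTRYPOINT ["dotnet", "myapp.dll"]
```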

Those differences explain why many businesses are using Windows Server Core as a base image. Although it’s larger and takes longer to deploy than Nano Server, Windows Server Core supports current Windows SDKs and a full .NET implementation. That makes it much easier to move existing code quickly to Server Core, giving you the option to, as Taylor Brown, lead program manager for Windows Server and Hyper-V Containers, puts it, “lift and shift” from existing servers to containers, so they’re deployable wherever you want. Once the application is in a container, developers can decompose it further; for example, moving API connectors to their own Nano Server-based containers to simplify application maintenance.
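A hedged sketch of what such a lift-and-shift Dockerfile could look like for an existing full-framework web app (the Windows feature names are real server roles; the paths and site contents are placeholders):

```dockerfile
# Sketch: an existing full-framework ASP.NET site moved onto Server Core.
FROM microsoft/windowsservercore
# Install IIS and ASP.NET 4.5 inside the image, just as on a physical server.
RUN powershell -Command "Install-WindowsFeature Web-Server, Web-Asp-Net45"
# Placeholder path: copy the existing site's files into IIS's default root.
COPY ./site C:/inetpub/wwwroot
EXPOSE 80
```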

Container support is also being built into Microsoft’s development tools, with Windows containers now a deployment target for Visual Studio 2017. You can build and deliver applications as containers, ready for test. Making containers a mouse click away is an important step.

With Azure soon to support nested virtualization, the ability to add more isolation in the public cloud will help regulated industries justify a move both to containers and to the cloud.


Author of InfoWorld's Enterprise Microsoft blog, Simon Bisson prefers to think of “career” as a verb rather than a noun, having worked in academic and telecoms research, as well as having been the CTO of a startup, running the technical side of UK Online (the first national ISP with content as well as connections), before moving into consultancy and technology strategy. He’s built plenty of large-scale web applications, designed architectures for multi-terabyte online image stores, implemented B2B information hubs, and come up with next generation mobile network architectures and knowledge management solutions. In between doing all that, he’s been a freelance journalist since the early days of the web and writes about everything from enterprise architecture down to gadgets.
