Simon Bisson
Contributor

How to choose the right Azure cloud VMs

how-to
Dec 12, 2017 · 6 mins
Cloud Computing · IaaS · Machine Learning

As Microsoft’s cloud gets more complex, building infrastructure as a service gets easier


Back when Microsoft first launched Azure’s virtual machines, there were only a handful of default server sizes you could use. The question you had to ask yourself then was a simple one: Is there a server that can support my workload? But now there’s an ever-growing list of different server sizes and server types, all targeted at different use cases. That’s changed the question. Now you must ask: Which one is the right one for me?

In the early days of the public cloud, the key factor was economies of scale. The first two or three generations used the same hardware across entire datacenters, giving massive price advantages but at the same time limiting the capabilities of the servers used to host infrastructure and platform as a service. The rise of the Open Compute Project, and its adoption by the main cloud vendors, changed things by giving those clouds common hardware standards that could support a wider range of functions without significantly adding cost.

Today’s cloud: A variety of real servers and virtual machines

The latest generation of OCP hardware is even more flexible. Microsoft’s Project Olympus chassis, the basis of its new generation of Azure datacenters, is a prime example, building on its x86 heritage to support adding extra processing via GPUs or FPGAs. With GPU technology at the heart of many machine learning algorithms, and FPGAs providing accelerated networking as well as supporting dedicated machine learning for services like Bing, there’s now a lot more flexibility, both in CPU capabilities and in how those servers support cloud services.

Currently, Azure offers 36 separate VM types, spread across six different use cases. That’s a lot of VM options, and not all of them are available in every region. You need to think carefully about your workloads before you pick an option, because choosing the wrong type could make your application more expensive to run. All 36 VM types are available with both Windows and Linux support, so you’ve got a choice of operating systems for your code, making it easier to lift and shift existing applications or to provide endpoints that fit into your development tool chain.
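
Because region coverage varies, it’s worth checking what a given region actually offers before you commit to a design. Here’s a minimal sketch, assuming the Azure CLI (az) is installed and you’re already logged in; the region names are just examples:

```python
import json
import subprocess

def vm_sizes(location: str) -> list:
    """Ask a region which VM sizes it offers, via `az vm list-sizes`."""
    out = subprocess.run(
        ["az", "vm", "list-sizes", "--location", location, "--output", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

# A size that exists in one region may be missing from another.
west = {s["name"] for s in vm_sizes("westeurope")}
east = {s["name"] for s in vm_sizes("eastus")}
print(sorted(west - east))  # sizes offered in West Europe but not in East US
```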

Choosing the right Azure VM

The six use cases Microsoft suggests are:

  • general-purpose
  • compute-optimized
  • memory-optimized
  • storage-optimized
  • GPU
  • high-performance compute
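
As a rough, unofficial guide, those use cases map onto Azure’s VM series prefixes roughly as follows. The mapping below is an assumption based on the series names in use at the time of writing, not an official Microsoft list:

```python
# Rough, unofficial mapping from the six use cases to Azure VM series prefixes
# (late-2017 naming); treat it as a starting point for a shortlist, not a rulebook.
USE_CASE_TO_SERIES = {
    "general-purpose":          ["A", "B", "D"],     # balanced CPU-to-memory ratio
    "compute-optimized":        ["F"],               # high CPU-to-memory ratio
    "memory-optimized":         ["E", "G", "M"],     # in-memory databases, analytics
    "storage-optimized":        ["Ls"],              # high disk throughput and IOPS
    "GPU":                      ["NC", "NV", "ND"],  # CUDA/OpenCL compute, visualization
    "high-performance compute": ["H"],               # fast CPUs, fast interconnects
}

def candidate_series(use_case: str) -> list:
    """Return the VM series prefixes worth short-listing for a given use case."""
    return USE_CASE_TO_SERIES.get(use_case, [])

print(candidate_series("memory-optimized"))  # ['E', 'G', 'M']
```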

Once you’ve decided on your workload and the VM type you want to use, you can tune it by picking the number of virtual CPUs, the amount of available memory, and the size of your local storage. Other options add data disks and support for more network connections, giving you higher bandwidth.
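
Reusing the vm_sizes helper sketched earlier, narrowing that list down to sizes that meet a particular vCPU, memory, and data-disk requirement is straightforward. The JSON field names here (numberOfCores, memoryInMb, maxDataDiskCount) are assumed to match the CLI’s output and may vary between CLI versions:

```python
def shortlist(sizes, min_vcpus, min_memory_gb, min_data_disks=0):
    """Filter `az vm list-sizes` output down to sizes that meet the spec."""
    return sorted(
        s["name"] for s in sizes
        if s["numberOfCores"] >= min_vcpus
        and s["memoryInMb"] >= min_memory_gb * 1024
        and s["maxDataDiskCount"] >= min_data_disks
    )

# Example: at least 4 vCPUs, 16 GB of RAM, and room for 8 data disks.
print(shortlist(vm_sizes("westeurope"),
                min_vcpus=4, min_memory_gb=16, min_data_disks=8))
```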

To simplify comparisons, Microsoft has normalized the compute performance of its VMs against a common baseline (the Azure Compute Unit), publishing a chart of relative performance scores to help you choose the right VM for your application.

General-purpose Azure VMs

General-purpose VMs are your everyday server, much like you’d specify when buying an off-the-shelf box from HPE or Dell. They’re not specialized in any way and so work well as hosts for development workloads, as well as for servers handling the UI layer of a modern application. Because they can be low-cost, you can roll them out as needed—and throw them away just as easily.
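
That disposability is easy to see from the command line. The sketch below assumes the Azure CLI and an existing login; the resource group, VM name, image alias, and the low-cost Standard_B1s size are all placeholder choices, not recommendations:

```python
import subprocess

def az(*args):
    """Run an Azure CLI command; assumes `az login` has already been done."""
    subprocess.run(["az", *args], check=True)

# Stand up a small, inexpensive general-purpose VM for a short-lived task...
az("group", "create", "--name", "devtest-rg", "--location", "westeurope")
az("vm", "create",
   "--resource-group", "devtest-rg",
   "--name", "devtest-vm",
   "--image", "UbuntuLTS",       # image alias; Windows images work the same way
   "--size", "Standard_B1s",     # one low-cost, burstable general-purpose size
   "--generate-ssh-keys")

# ...and throw the whole thing away, VM and disks included, when you're done.
az("group", "delete", "--name", "devtest-rg", "--yes", "--no-wait")
```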

The hardware Azure uses for these VMs comes from several generations of datacenter hardware. You’ll still need to pick and choose the VM type you want, because they do have different characteristics. Some, like the A-series, are designed so you won’t see much performance difference between them no matter what the underlying hardware might be, because the Azure VMs they host are throttled. Others, like the D-series, have higher performance, with access to different generations of server hardware.

You can run any workload on a general-purpose VM, but you won’t get the best performance, especially if you’re supporting large numbers of users. With only a few users, though, they’re an excellent fit: say, a development and test team building and testing code on a low-cost virtual server before transferring it to a more specialized host in production.

Special-purpose Azure VMs

Azure’s specialized VMs focus on specific issues that affect key enterprise workloads. Some offer increased compute, ready for dynamic web content, for application servers, and for offline batch processing. Others add memory, for when you’re working with in-memory databases and for analytics, where having as much data in memory as possible is key. Other servers add storage bandwidth, for when you need a lot of I/O and a lot of disk. Microsoft recently deployed a new generation of storage VMs that run on AMD hardware rather than Intel, a big change to its purchasing strategy.

Other options support newer workloads, with GPU-based instances that offer Nvidia GPUs. Two versions support both visualization workloads and GPU-based computation using CUDA and OpenCL. GPU-compute instances like this support working with data-parallel code, as well as building your own neural networks for machine learning.

As an alternative to GPU-based programming, there are also VMs for high-performance computing problems, running on fast processors with fast network interfaces. These are the cutting edge of the cloud, offering the same scientific computing capabilities that used to require significant hardware investments. If you’re working with computational engineering tools, using techniques like finite element analysis or computational fluid dynamics, these are the images for you and your code.

VMs aren’t just for infrastructure as a service

Many workloads don’t need dedicated servers; if you design your code to be stateless, you should be looking to work with Azure’s hosted containers, especially now that there’s support for massive scalability with Azure Container Instances, and for Kubernetes-managed applications and services with Azure Container Service, now known as AKS.
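
For comparison, standing up a stateless container on Azure Container Instances, or a small AKS cluster, looks like this from the CLI (reusing the small az() helper from the earlier sketch). The names and sizes are placeholders, and the exact flags may differ between CLI versions, so treat this as a sketch rather than a recipe:

```python
# A single stateless container on Azure Container Instances...
az("container", "create",
   "--resource-group", "containers-rg",
   "--name", "hello-aci",
   "--image", "nginx",
   "--cpu", "1", "--memory", "1.5")

# ...or a small AKS cluster. Note that you still choose the VM size that
# backs the cluster nodes, which is where the sections above still apply.
az("aks", "create",
   "--resource-group", "containers-rg",
   "--name", "demo-aks",
   "--node-count", "2",
   "--node-vm-size", "Standard_D2s_v3",
   "--generate-ssh-keys")
```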

Even so, it’s still worth understanding the capabilities of the underlying VMs used to host your containers, as that can determine both the number of VMs you’re using and the speed with which new containers deploy.

Whatever you do, you should keep an eye on VM utilization in your Azure Portal. If a VM is showing regular 100-percent utilization, it’s a sign you’ve chosen something too small for your workload. Similarly, if your utilization is low, you’re likely to have chosen a VM that’s overpowered for your workload. Redeploying applications and services to new, more suitable VMs is relatively simple and, if your application architecture is fault-tolerant, will need little or no downtime.
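
The portal charts are the easiest way to watch utilization, but you can pull the same signal from the CLI and act on it. The following is a rough sketch: the resource ID, thresholds, and target size are placeholder assumptions, and the JSON shape assumed here follows the CLI’s documented output but may vary between versions:

```python
import json
import subprocess

def cpu_samples(vm_resource_id):
    """Fetch recent 'Percentage CPU' datapoints for a VM via `az monitor metrics list`."""
    out = subprocess.run(
        ["az", "monitor", "metrics", "list",
         "--resource", vm_resource_id,
         "--metric", "Percentage CPU",
         "--output", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    data = json.loads(out)
    return [point["average"]
            for metric in data["value"]
            for series in metric["timeseries"]
            for point in series["data"]
            if point.get("average") is not None]

vm_id = ("/subscriptions/<subscription-id>/resourceGroups/app-rg/providers/"
         "Microsoft.Compute/virtualMachines/app-vm")   # placeholder resource ID
samples = cpu_samples(vm_id)
average = sum(samples) / len(samples) if samples else 0.0

if average > 90:    # persistently pegged: the size is too small for the workload
    subprocess.run(["az", "vm", "resize", "--resource-group", "app-rg",
                    "--name", "app-vm", "--size", "Standard_D4s_v3"], check=True)
elif average < 10:  # barely used: you're paying for headroom you don't need
    print("Consider resizing down or consolidating this workload.")
```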

One server does not fit all, and having options makes a lot of sense for Azure. Hardware platforms like Project Olympus give cloud services many more options than just using white-labeled x86 servers, while still giving them the cost advantages that come with scale.

Simon Bisson
Contributor

Author of InfoWorld's Enterprise Microsoft blog, Simon Bisson prefers to think of “career” as a verb rather than a noun, having worked in academic and telecoms research, as well as having been the CTO of a startup, running the technical side of UK Online (the first national ISP with content as well as connections), before moving into consultancy and technology strategy. He’s built plenty of large-scale web applications, designed architectures for multi-terabyte online image stores, implemented B2B information hubs, and come up with next generation mobile network architectures and knowledge management solutions. In between doing all that, he’s been a freelance journalist since the early days of the web and writes about everything from enterprise architecture down to gadgets.
