Simon Bisson
Contributor

Accelerating cloud native development in Microsoft Azure

Analysis
Dec 07, 2023 • 7 mins
Cloud Computing, Microsoft Azure, Software Development

From GPU support to reference implementations, the latest updates to Azure Container Apps combine Microsoft’s commitment to developer productivity with its latest AI development tools.


One big advantage of developing cloud native applications is that you can often leave all the tedious infrastructure work to someone else. Why build and manage a server when all you need is a simple function or a basic service?

That’s the rationale behind the various implementations of serverless computing you find hosted on the major cloud providers. AWS’s Lambda may be the best known, but Azure has many serverless options of its own, including the various Azure App Services, Azure Functions, and the newer Azure Container Apps.

Azure Container Apps might be the most interesting, as it offers a more flexible approach to delivering larger, scalable applications and services.

A simpler container platform

A simpler alternative to Azure Kubernetes Service designed for smaller deployments, Azure Container Apps is a platform for running containerized applications that handles scaling for you. All you need to do is ensure that the output of your build process is an x64 Linux container, deploy it to Azure Container Apps, and you’re ready to go.
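Deployment itself is a single CLI call. Here’s a minimal sketch, assuming an existing resource group and Container Apps environment; all the names are illustrative placeholders:

```bash
# Deploy an x64 Linux container image to Azure Container Apps.
# Assumes the resource group and environment already exist;
# "hello-api", "my-rg", and "my-aca-env" are placeholder names.
az containerapp create \
  --name hello-api \
  --resource-group my-rg \
  --environment my-aca-env \
  --image myregistry.azurecr.io/hello-api:1.0 \
  --target-port 8080 \
  --ingress external
```

The JSON the command returns includes the app’s public fully qualified domain name, ready to take requests.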

Because there’s no required base image, you’re free to use the new chiseled .NET containers for .NET-based services, keeping the container that hosts your code as small as possible for rapid reloads. You can even take advantage of other distroless approaches, giving you a choice of hosts for your code.
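For a .NET service, that can be as simple as a multi-stage Docker build that publishes onto one of Microsoft’s chiseled runtime images. A minimal sketch, where the project and DLL names are hypothetical but the image tags are Microsoft’s published .NET 8 images:

```dockerfile
# Build stage: use the full SDK image to restore and publish.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: the chiseled ASP.NET image carries no shell or
# package manager, keeping the final container small.
FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy-chiseled
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "HelloApi.dll"]
```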

Unlike other Kubernetes platforms, Azure Container Apps behaves much like Azure Functions, scaling down to zero when services are no longer needed. However, only the application containers are paused. The Microsoft-run Kubernetes infrastructure continues to run, making it much faster to reload a paused container than restarting a virtual machine. Azure Container Apps is also much cheaper than running an AKS instance for a simple service.
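Scale-to-zero is a deployment setting rather than something you code. A minimal sketch using the CLI’s HTTP scale rule options, again with placeholder names and illustrative thresholds:

```bash
# Let the app scale between zero and ten replicas, adding a replica
# for roughly every 50 concurrent HTTP requests.
az containerapp update \
  --name hello-api \
  --resource-group my-rg \
  --min-replicas 0 \
  --max-replicas 10 \
  --scale-rule-name http-scaling \
  --scale-rule-type http \
  --scale-rule-http-concurrency 50
```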

GPU instances for container apps

Microsoft announced a series of updates for Azure Container Apps at its recent Ignite 2023 event, with a focus on using the platform for working with machine learning applications. Microsoft also introduced tools to deliver best practices in microservices design and to improve developer productivity.

Using Azure Container Apps to host service elements of a large-scale distributed application makes sense. Because compute-intensive services can scale to zero when not needed, and expand to meet spikes in demand, you don’t have to lock into expensive infrastructure contracts. That’s especially important when you’re planning on using GPU-equipped tenants for inferencing.

Among the big news for Azure Container Apps at Ignite was support for GPU instances, via a new dedicated workload profile. GPU profiles need more memory than standard Azure Container Apps profiles, as they can support training as well as inferencing. By using Azure Container Apps for training, you can run a regular batch process that refines models based on real-world data, tuning them to support, say, different lighting conditions, new product lines, or additional vocabulary in the case of a chatbot.

GPU-enabled Azure Container Apps hosts are high end, using up to four Nvidia A100 GPUs, with options for 24, 48, and 96 vCPUs and up to 880GB of memory. You’re likely to use the high-end options for training and the low-end options for inferencing. Usefully, you can constrain resource usage for each app in a workload profile, with some capacity reserved by the runtime that hosts your containers.
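Workload profiles are attached to a Container Apps environment from the CLI. A sketch of adding a GPU profile follows, with the caveat that the profile type string is an assumption; `az containerapp env workload-profile list-supported` shows what your region actually offers:

```bash
# Add a dedicated GPU workload profile to an existing environment.
# "NC96-A100" is an assumed profile type name; check the
# list-supported output for the real values in your region.
az containerapp env workload-profile add \
  --name my-aca-env \
  --resource-group my-rg \
  --workload-profile-name gpu-training \
  --workload-profile-type NC96-A100 \
  --min-nodes 0 \
  --max-nodes 1
```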

Currently these host VMs are limited to two regions, West US and North Europe. However, as Microsoft rolls out new hardware and upgrades its data centers, expect to see support in additional regions. It will be interesting to see whether that new hardware includes Microsoft’s own dedicated AI processors, also announced at Ignite.

Adding data services to your containers

Building AI apps requires much more than a GPU or an NPU; you also need data, often in non-standard formats such as vector embeddings. Azure Container Apps can now run add-on services alongside your code, including common vector databases such as Milvus, Qdrant, and Weaviate. These services are intended for use during development, without incurring the costs of an Azure managed service or your own production instances. When used with Azure Container Apps, add-on services are billed as used, so if your app and its associated services scale to zero, you will be billed only for storage.

Adding a service during development allows it to run inside the same Azure Container Apps environment as your code, scaling to zero when not needed, using environment variables to manage the connection. Other service options include Kafka, MariaDB, Postgres, and Redis, all of which can be switched to Azure-managed options when you move your containers to production. Data is stored in persistent volumes, so it can be shared with new containers as they scale.

Like most Azure Container Apps features, add-on services can be managed from the Azure CLI. Simply create a service from the list of available options, then give it a name and attach it to your environment. You can then bind it to an application, ready for use. This process adds a set of environment variables that can be used by your containers to manage their connection to your development service. This approach allows you to swap in the connection details of an Azure managed service when you move to production.
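In practice that looks something like the sketch below, which creates a development Redis add-on and binds it to an app. These commands were in preview at the time of writing (Microsoft has been renaming the `service` command group to `add-on` in newer CLI releases), so exact names may shift:

```bash
# Create a development-grade Redis add-on in the environment.
az containerapp service redis create \
  --name my-redis \
  --resource-group my-rg \
  --environment my-aca-env

# Bind it to an app; the binding injects connection details
# (host, port, password) as environment variables.
az containerapp update \
  --name hello-api \
  --resource-group my-rg \
  --bind my-redis
```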

Baking in best practices for distributed apps

Providing a simple platform for running containerized applications brings its own challenges, not least of which is educating prospective users in the fundamentals of distributed application development. Having effective architecture patterns and practices helps developers be more productive. And as we’ve seen with the launch of tools like Radius and .NET 8, developer productivity is at the top of Microsoft’s agenda.

One option for developers building on Azure Container Apps is to use Dapr, Microsoft’s Distributed Application Runtime, as a way of encapsulating best practices. For example, Dapr allows you to add fault tolerance to your container apps, wrapping policies in a component that handles failed requests, managing timeouts and retries.

These Dapr capabilities help manage scaling. While additional application containers are being launched, Dapr will retry user requests until new instances are ready and able to take their share of the load. You don’t have to write code to do this. Rather, you configure Dapr using Bicep, with declarative statements for timeouts, retries, and backoffs.
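A minimal Bicep sketch of such a policy, attached to an existing Dapr state store component, is shown below. It follows the 2023-08-01-preview resource schema that was current at Ignite, so property names may change as the feature matures; all resource names are illustrative:

```bicep
resource env 'Microsoft.App/managedEnvironments@2023-08-01-preview' existing = {
  name: 'my-aca-env' // existing Container Apps environment
}

resource stateStore 'Microsoft.App/managedEnvironments/daprComponents@2023-08-01-preview' existing = {
  parent: env
  name: 'statestore' // existing Dapr component
}

// Resiliency for outbound calls from apps to the component: time out
// slow calls and retry failures with exponential backoff.
resource resiliency 'Microsoft.App/managedEnvironments/daprComponents/resiliencyPolicies@2023-08-01-preview' = {
  parent: stateStore
  name: 'statestore-resiliency'
  properties: {
    outboundPolicy: {
      timeoutPolicy: {
        responseTimeoutInSeconds: 15
      }
      httpRetryPolicy: {
        maxRetries: 5
        retryBackOff: {
          initialDelayInMilliseconds: 1000
          maxIntervalInMilliseconds: 10000
        }
      }
    }
  }
}
```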

Prepare your container apps for landing

Microsoft has bundled its guidance, reference architectures, and sample code for building Azure Container Apps into a GitHub repo it calls the Azure Container Apps landing zone accelerator. It’s an important resource, with guidance for handling access control, managing Azure networking, monitoring running services, and providing frameworks for security, compliance, and architectural governance.

Usefully, the reference implementations are designed for Azure, so in addition to application code and container definitions, they include ready-to-run infrastructure as code, allowing you to stand up reference instances quickly or use that code to define your own distributed application infrastructure.
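Getting started is just a matter of cloning the repo and picking a scenario; each one documents its own deployment entry point:

```bash
# Fetch the landing zone accelerator and browse its scenarios.
git clone https://github.com/Azure/aca-landing-zone-accelerator.git
cd aca-landing-zone-accelerator
# Each reference implementation ships with its own infrastructure-as-code
# and a README describing how to deploy it; start there.
```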

It’s interesting to see the convergence of Microsoft’s current development strategies in this latest release of Azure Container Apps. By decoupling development from the underlying platform engineering, Microsoft is providing a way to go from idea to working microservices (even AI-driven intelligent microservices) as quickly as possible, with all the benefits of cloud orchestration and without jeopardizing security.

What’s more, using Azure Container Apps means not having to wrestle with the complexity of standing up an entire Kubernetes infrastructure for what might be only a handful of services.


Author of InfoWorld's Enterprise Microsoft blog, Simon Bisson prefers to think of “career” as a verb rather than a noun, having worked in academic and telecoms research, as well as having been the CTO of a startup, running the technical side of UK Online (the first national ISP with content as well as connections), before moving into consultancy and technology strategy. He’s built plenty of large-scale web applications, designed architectures for multi-terabyte online image stores, implemented B2B information hubs, and come up with next generation mobile network architectures and knowledge management solutions. In between doing all that, he’s been a freelance journalist since the early days of the web and writes about everything from enterprise architecture down to gadgets.
