Simon Bisson
Contributor

How Azure Functions is evolving

Analysis
Jul 11, 2024 • 7 mins
Azure Functions | Cloud Computing | Microsoft Azure

Microsoft has delivered major updates to its serverless compute service to align it more closely with development trends including Kubernetes and generative AI.


Just a handful of years ago we couldn’t stop talking about serverless computing. Then it seemed to disappear, overtaken first by Kubernetes and then by generative AI. But of course, like most technologies, it’s still there, delivering the on-demand compute that’s part of most modern cloud-native design.

Microsoft’s Azure Functions has continued to evolve, supporting updated APIs and languages and becoming part of Azure’s recommended patterns and practices, underpinning everything from AI to the Internet of Things. What remains key to Azure Functions is its ability to deliver on-demand capacity on a “pay for what you use” basis. With serverless tools there’s no need to provision infrastructure; instead, you write stateless code that responds to events.
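As a minimal sketch of that model, here’s what a stateless, event-driven function looks like in C# on the isolated worker runtime (the names are illustrative):

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class HelloFunction
{
    // Runs only when an HTTP request arrives; there's no server to provision,
    // and no state is kept between invocations.
    [Function("Hello")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Hello from a serverless function");
        return response;
    }
}
```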

This event-driven model is key to the importance of serverless computing, as it allows you to process and use data as it arrives. As a result, you can consider serverless a key component of modern message-based applications, taking advantage of publish-and-subscribe technologies like Azure Event Grid and the open standards-based CloudEvents messaging format.
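A sketch of that pattern, assuming the Microsoft.Azure.Functions.Worker.Extensions.EventGrid package, which can bind Event Grid deliveries directly to the CloudEvents envelope:

```csharp
using Azure.Messaging;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class OrderEvents
{
    private readonly ILogger<OrderEvents> _logger;

    public OrderEvents(ILogger<OrderEvents> logger) => _logger = logger;

    // The function wakes only when Event Grid delivers an event; binding to
    // CloudEvent uses the open CloudEvents schema rather than a proprietary one.
    [Function("OnOrderCreated")]
    public void Run([EventGridTrigger] CloudEvent cloudEvent)
    {
        _logger.LogInformation("Received {Type} from {Source}",
            cloudEvent.Type, cloudEvent.Source);
    }
}
```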

Build 2024 saw Microsoft deliver a set of major updates to Azure Functions to align it more closely with current development trends, including the ability to run in managed Kubernetes environments and to support on-demand AI applications built around Azure OpenAI.

Running Azure Functions containers in Azure Container Apps

Microsoft’s Azure Container Apps is a managed Kubernetes environment, where all you’re concerned with is your application containers and basic scaling rules. There’s no need to work with the underlying Kubernetes platform, as much of its functionality has been automated. If Azure Functions is serverless, Azure Container Apps (ACA) is perhaps best thought of as platformless.

One key Azure Functions feature is its portability, with the option of building Functions apps and a basic runtime in containers for deployment and testing. It’s a useful option, and Microsoft used Build to announce that ACA now supports Functions containers as part of a mix of container-hosted services. With support for event-driven scaling, ACA is an ideal host for your Functions, launching containers on demand.
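Deploying works much as it does for any other Container App; here’s a sketch using the az CLI, assuming an existing Container Apps environment and a pushed Functions image (all names are placeholders, and the flags reflect current CLI syntax, which may change):

```bash
# Create a Functions app hosted in an existing Container Apps environment.
az functionapp create \
  --name my-func-app \
  --resource-group my-rg \
  --storage-account mystorageacct \
  --environment my-aca-environment \
  --image myregistry.azurecr.io/my-func-app:latest \
  --functions-version 4
```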

The container hosts the same runtime that’s used elsewhere in Azure, so you can continue to use your existing toolchain with the same integrations as a standard Azure Function. That includes the same bindings, with ACA handling scaling for you using Kubernetes Event-Driven Autoscaling (KEDA). One important difference: instead of being billed as an Azure Function, your ACA-hosted Functions are billed as part of your ACA plan, using either the serverless Consumption model or the more complex Dedicated plan. If you’re using a Dedicated plan, your Functions can access GPU resources, which can help accelerate local AI models.

The containers that host Functions are Linux-based, with a different base image for each language option. The Azure Functions Core Tools will create the correct image for you when you use the --docker option while creating a new Functions project. You’ll need to keep this image up to date to pick up the latest security patches; new base images are released regularly, so expect to rebuild and redeploy on a roughly monthly cadence.
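With the Core Tools installed, scaffolding and building the container looks like this (project and registry names are illustrative):

```bash
# Generate a Functions project plus a Dockerfile with the right base image.
func init MyContainerFunctions --worker-runtime dotnet-isolated --docker
cd MyContainerFunctions
func new --name Hello --template "HTTP trigger"

# Rebuild against the latest base image and push on a regular cadence.
docker build -t myregistry.azurecr.io/my-func-app:v1 .
docker push myregistry.azurecr.io/my-func-app:v1
```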

Using Azure Functions with Azure OpenAI

Build 2024 was very much led by Microsoft’s AI investments. Azure Functions may not have its own Copilot (yet), but it’s now part of Microsoft’s AI programming toolchain, with a set of APIs that lets you use Functions to drive generative AI operations, as well as to respond to triggers from Azure OpenAI.

Triggering Functions from Azure OpenAI goes some way toward delivering a serverless agent. Prompts call known functions, and function outputs are themselves used as prompts that can deliver natural language outputs. Other tools let you update and read from vector search indexes, allowing you to use Functions to maintain your local semantic store.

Microsoft is previewing a Functions extension that ships for all the supported languages. Depending on your choice of development tools, start by installing the extension package into your Functions project. For example, if you’re using C#, you install the NuGet package from the .NET CLI. You’ll also need packages for the vector database you’re using, such as Azure AI Search or Cosmos DB’s new DiskANN-based vector search.
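For C#, that currently looks something like this (preview package names at the time of writing, so check NuGet for the current versions):

```bash
# The core OpenAI bindings for the isolated worker model (preview).
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.OpenAI --prerelease

# Plus a connector for your vector store, for example Azure AI Search.
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.OpenAI.AzureAISearch --prerelease
```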

One useful option is the ability to build a function that acts as an assistant in an Azure OpenAI workflow, using it as a skill in a Prompt Flow operation. When a chatbot detects a request that should be handled by your function, based on the function’s description, it passes the data to it as a string. Your function can then process that string, for example, adding it to another application’s database.
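A sketch of such a skill using the preview extension’s C# bindings; the trigger’s description string is what the model uses to decide when to call the function, and the skill name and logic here are hypothetical:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Extensions.OpenAI.Assistants;
using Microsoft.Extensions.Logging;

public class TodoSkills
{
    private readonly ILogger<TodoSkills> _logger;

    public TodoSkills(ILogger<TodoSkills> logger) => _logger = logger;

    // The assistant passes the matched request content as a plain string.
    [Function(nameof(AddTodo))]
    public void AddTodo(
        [AssistantSkillTrigger("Create a new todo task")] string taskDescription)
    {
        _logger.LogInformation("Adding todo: {Task}", taskDescription);
        // Here you'd persist the task, such as to another application's database.
    }
}
```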

If you prefer, you can use a function to create a chatbot (or at least a generative AI interaction) based on an external trigger. This allows you to dynamically generate prompts, using a function to gatekeep your AI, providing filters and avoiding known prompt injection attacks. A simple interaction sends a request to an Azure OpenAI endpoint and gets a response that can be forwarded to another function.
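A minimal sketch of that flow with the preview extension’s TextCompletionInput binding: an HTTP request triggers the function, the templated prompt goes to Azure OpenAI, and your code can filter the response before it goes anywhere else (route and prompt are illustrative):

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Extensions.OpenAI.TextCompletion;
using Microsoft.Azure.Functions.Worker.Http;

public class WhoIs
{
    // The binding fills {name} from the route, sends the prompt to the
    // configured Azure OpenAI deployment, and hands back the completion.
    [Function(nameof(WhoIs))]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "whois/{name}")] HttpRequestData req,
        [TextCompletionInput("Who is {name}?")] TextCompletionResponse response)
    {
        // Validate or filter here to gatekeep the model's output.
        return response.Content;
    }
}
```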

There’s a lot you can do with serverless techniques and Azure OpenAI. Azure Functions work well as plug-ins and skills for tools like Prompt Flow or Semantic Kernel, letting you go beyond basic chatbots and use Azure OpenAI as the basis of an AI-powered agent, with no direct interaction between the large language model and your users. It’s worth thinking about how you can use the combined capabilities of an LLM and Azure Functions, whether as a retrieval-augmented generation (RAG)-grounded application or as a natural language interface for more complex applications.

Flex Consumption billing supports new application models

Alongside new application development features, Azure Functions gets a new billing option. The Flex Consumption plan is an extension of the existing Consumption plan, adding more flexibility around the underlying infrastructure. Instances are built on top of Linux servers, with support for familiar Functions programming environments, including C# .NET 8 code and Node.js.

Although much of the value in serverless computing is the ability to ignore infrastructure and simply run with available resources, there’s still a need for some fixed options that control how your function deploys and what resources it can use. These include using private networking, choosing the memory size for your host instances, and supporting different scale-out models.

You can think of this plan as a premium option, one that adds features for large implementations that need rapid responses. Scaling is still event-driven, but you can now have up to 1,000 instances rather than a maximum of 200. At the same time, new instances deploy quickly, and you can keep a set number of instances ready to run at all times. This approach reduces application latency: as soon as your current always-ready instances take on work, replacements spin up behind them. Larger memory instances also get more network bandwidth and more CPU.
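These limits are set per app; here’s a sketch of the relevant az CLI calls, assuming the Flex Consumption syntax documented at preview (names are placeholders):

```bash
# Raise the scale-out ceiling for the app (Flex Consumption allows up to 1,000).
az functionapp scale config set \
  --name my-flex-app --resource-group my-rg \
  --maximum-instance-count 1000

# Keep two instances warm for HTTP triggers to cut cold-start latency.
az functionapp scale config always-ready set \
  --name my-flex-app --resource-group my-rg \
  --settings http=2
```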

As well as the default 2048MB memory option, Flex Consumption plans allow larger 4096MB instances. This can help with applications that need more memory or compute, for example, running vector searches for event-driven RAG applications. Support for private virtual networks is another important option for enterprise serverless computing, as it ensures that links to other Azure services and to storage remain inside your own secure network.
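Creating a Flex Consumption app with the larger instance size is another az CLI call; again a sketch, with the flags as documented at preview and placeholder names:

```bash
az functionapp create \
  --name my-flex-app \
  --resource-group my-rg \
  --storage-account mystorageacct \
  --flexconsumption-location eastus \
  --runtime dotnet-isolated --runtime-version 8.0 \
  --instance-memory 4096   # default is 2048
```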

Serverless technologies such as Azure Functions remain an important tool for building modern cloud-based applications. They provide a framework for rapid scaling and help keep costs to a minimum, as you’re billed only while your code runs. Because they respond to events and messages, they’re a useful companion to Kubernetes and modern AI platforms, providing an effective way to rapidly add new functions and services to your Azure applications.


Author of InfoWorld's Enterprise Microsoft blog, Simon Bisson prefers to think of “career” as a verb rather than a noun, having worked in academic and telecoms research, as well as having been the CTO of a startup, running the technical side of UK Online (the first national ISP with content as well as connections), before moving into consultancy and technology strategy. He’s built plenty of large-scale web applications, designed architectures for multi-terabyte online image stores, implemented B2B information hubs, and come up with next generation mobile network architectures and knowledge management solutions. In between doing all that, he’s been a freelance journalist since the early days of the web and writes about everything from enterprise architecture down to gadgets.
