Simon Bisson
Contributor

Using Redis Enterprise in Azure

analysis
Jun 09, 2020 · 7 mins
Cloud Storage | Microsoft Azure | Software Development

Microsoft and RedisLabs are collaborating to bring advanced Redis features to Azure, adding new in-memory database tools to Azure’s cache service


NoSQL storage comes in many types. Some are document databases, others store key/value pairs, and all support many different types of indexing and querying. There are disk-based systems and ones designed to work in memory. Some handle large amounts of data efficiently; others focus on delivering speed. With so many different products, it’s sometimes hard to pick one.

One of the more popular in-memory systems is Redis, the Remote Dictionary Server. The open source Redis server is sponsored by RedisLabs, which sells a set of commercial enterprise options on top of it. Microsoft has offered its own implementation of open source Redis on Azure for some time now, where it’s mainly used as a high-performance cache. However, it recently announced a partnership with RedisLabs, bringing a fully managed Redis Enterprise stack to Microsoft’s cloud.

Adding Redis Enterprise to Azure

The new service is perhaps best thought of as adding two new tiers to the existing Basic, Standard, and Premium services: Enterprise and Enterprise SSD. Microsoft’s Redis implementation has been focused on delivering a high-performance cache for your data in large cloud-native applications, where the cache helps manage messages for event-driven code or session state when you’re building containerized or serverless systems.

Caches aren’t only for managing incoming data. Modern apps can use them as a way to preload content that’s regularly accessed by users. You can preload Azure’s Redis with your common assets, such as headers and logos, which don’t change that often. By hosting them in memory they can be delivered much more quickly, rather than pulling them from disk every time a page is loaded.
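
As a sketch of what that looks like in practice, here’s how you might preload assets with the open source redis-py client in Python. The cache hostname, access key, and file names below are hypothetical placeholders, not values from the service:

import redis

# Connect over TLS; the "rediss://" scheme turns on encryption.
r = redis.from_url("rediss://:<access-key>@mycache.redis.cache.windows.net:6380")

# Push rarely changing assets into memory once, with a long expiry, so
# page loads read them from the cache rather than from disk.
for name in ("logo.png", "header.html"):
    with open(f"static/{name}", "rb") as f:
        r.set(f"assets:{name}", f.read(), ex=86400)  # expire after 24 hours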

Using Redis is all about performance. Putting your cache data in an in-memory system can reduce application latency significantly, especially when you’re building and running distributed applications at scale. Content in Redis stores can be replicated between Azure regions, reducing the risk of users in one region having to access content stored half the world away.

Starting with Azure Cache for Redis

Microsoft’s open source implementation, Azure Cache for Redis, comes in Basic, Standard, and Premium tiers, with a maximum size of 1.2TB for Premium databases. Basic is a relatively simple single-node implementation, with no SLA but a choice of memory sizes. Standard gives you more reliability by implementing a two-node system and adding an SLA. If you need better performance and lower latency, the Premium option uses a different grade of Azure hardware, giving higher throughput than Standard for what would otherwise be the same configuration.

It’s easy enough to set up a Redis cache in Azure. Start with a DNS name, then add the cache to a resource group and choose a location. This sets up the underlying virtual machines and launches your cache; once Azure reports it as running, you can use it in your code. The credentials needed to connect to Redis are in your Azure Portal, with access keys and connection strings. The portal shows the address of your instance plus the port your code needs to connect on. By default, this will be via SSL.
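
In Python, for instance, a minimal connection sketch with the redis-py client looks like the following, substituting the address and access key shown in your portal (both values here are placeholders):

import redis

cache = redis.Redis(
    host="mycache.redis.cache.windows.net",  # the address shown in the portal
    port=6380,                               # the SSL port; plain 6379 is off by default
    password="<primary-access-key>",         # from the portal's access keys
    ssl=True,
)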

There are various NuGet packages for using Redis with your .NET applications, with calls for getting and setting items in the Redis cache, as well as for checking that your application is connected to Redis. All you need to do is set your cache connection string and then use that to create a cache object from your Redis database. If you’re using Visual Studio you can work with Redis using familiar .NET database tools such as the Entity Framework.
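
Those packages are .NET-specific, but the calls map onto other Redis clients in much the same way. A brief Python sketch with redis-py shows the shape of the API (the connection string is a placeholder):

import redis

cache = redis.from_url("rediss://:<access-key>@mycache.redis.cache.windows.net:6380")

cache.ping()                    # raises an exception if Redis is unreachable
cache.set("message", "hello")   # write an item into the cache
value = cache.get("message")    # read it back (returned as bytes)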

Redis-based applications are easy to implement using MVC (model-view-controller) patterns, using controllers to write serialized data into the cache and to read it when necessary. Microsoft recommends using JSON formats to write and read data, with returned JSON data easy to format and display using common JavaScript and .NET libraries.
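
A hedged sketch of that controller pattern in Python, with the standard json module for serialization and hypothetical key names, might look like this:

import json
import redis

cache = redis.from_url("rediss://:<access-key>@mycache.redis.cache.windows.net:6380")

def save_item(item_id: str, model: dict) -> None:
    # The controller serializes the model to JSON on the way in...
    cache.set(f"item:{item_id}", json.dumps(model), ex=1800)

def load_item(item_id: str) -> dict | None:
    # ...and deserializes it on the way out for the view to display.
    raw = cache.get(f"item:{item_id}")
    return json.loads(raw) if raw is not None else None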

Azure Cache for Redis is more than a database and a set of APIs, as it contains a complete set of management tools, including monitoring. These can help you scale your Redis instance as necessary. You can only scale up tiers, moving from Basic to Standard to Premium.

Any size changes are a separate operation, and you can change size up or down within the same tier (with the proviso that you can’t scale down to the smallest Standard size offering). If you want to go down a tier, create a new Redis instance, then copy any data or structures to the new database before deleting the older version. If you need to automate scaling, you can use PowerShell, the Azure CLI, or, in code, the Azure Management Libraries, as in the sketch below.
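
For illustration, here is what automated scaling might look like from Python with the azure-mgmt-redis management library. Exact method names vary by SDK version (begin_update is the long-running form in recent releases), so treat this as an assumption-laden outline rather than a recipe; the resource group, cache name, and subscription ID are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.redis import RedisManagementClient
from azure.mgmt.redis.models import RedisUpdateParameters, Sku

client = RedisManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Resize a cache within the Standard tier (here, to a C2 instance).
poller = client.redis.begin_update(
    "my-resource-group",
    "mycache",
    RedisUpdateParameters(sku=Sku(name="Standard", family="C", capacity=2)),
)
poller.result()  # block until the resize completes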

Scaling up to Redis Enterprise’s in-memory database features

Azure’s Redis implementation is good, but it’s not the whole story. It’s based on the open source Redis, so it doesn’t have all the features of the commercial Redis Enterprise. That’s why Microsoft and RedisLabs collaborated to deliver two additional tiers, managed by Microsoft and supported by both companies, with full integration in the Azure Portal. Enterprise, the base tier, uses standard Azure storage, whereas the Enterprise SSD tier adds support for flash storage for faster access to data that’s not available in memory.

Currently in a private preview, the new service adds support for key Redis Enterprise modules, allowing you to use the service for a lot more than purely cached data. That’s an important distinction, as a fast, in-memory database is a key part of an at-scale, event-driven system, especially one that relies on time-series data, which the RedisTimeSeries module handles. Other supported modules include RedisBloom, which adds probabilistic data filtering, and RediSearch, which improves indexing and allows you to use full-text search on your data.
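
Once a module is enabled on an Enterprise-tier cache, its commands are reachable through a standard client. As a sketch, RedisBloom’s documented BF.* commands can be sent through redis-py’s generic command interface; the endpoint and port here are placeholders, and the module must be enabled on the instance:

import redis

r = redis.from_url("rediss://:<access-key>@myenterprise.redis.cache.windows.net:10000")

r.execute_command("BF.RESERVE", "seen-users", 0.01, 100000)      # 1% error rate, 100k capacity
r.execute_command("BF.ADD", "seen-users", "user:42")
seen = r.execute_command("BF.EXISTS", "seen-users", "user:42")   # 1 = probably seen, 0 = definitely not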

Additional features will be added as the service moves from private preview to general availability (currently scheduled for the end of 2020). These will allow you to use active-active replication between geographic regions and hybrid deployments that work between private and Azure-hosted Redis instances. There’s no need to have a dedicated connection between on-premises and Azure Redis; active-active replication will work over a VPN.

The new Redis Enterprise implementation looks like the existing Azure Cache for Redis inside the portal, and you will be able to scale up from existing instances or start from scratch. If you’re looking for better performance, scaling up is an option, but you’ll probably want to create a whole new instance if you’re using any of the new database features. You can enable them as part of the creation process, from the portal or via an Azure Resource Manager template. Although much of your management and monitoring will be from inside the Azure Portal, you have the option to use Redis’ own management tools to help tune and optimize your data.

The combination of Azure’s Redis implementation and RedisLabs’ Redis Enterprise is an interesting one that shows how a vendor with a premium offering built on an open source foundation can coexist with hyperscale clouds. Azure is able to offer a service based on the open source platform, while more complex implementations can use RedisLabs’ tools. This route gives RedisLabs access to a new revenue stream without having to change its licensing model to one that shuts out cloud providers.

With a simple path from Azure’s Redis-based cache service to Redis Enterprise, and with no change in management tools or billing relationships, it’s also transparent to end-users. They get access to new tiers and new features without having to change the way they work.

Simon Bisson
Contributor

Author of InfoWorld's Enterprise Microsoft blog, Simon Bisson prefers to think of “career” as a verb rather than a noun, having worked in academic and telecoms research, as well as having been the CTO of a startup, running the technical side of UK Online (the first national ISP with content as well as connections), before moving into consultancy and technology strategy. He’s built plenty of large-scale web applications, designed architectures for multi-terabyte online image stores, implemented B2B information hubs, and come up with next generation mobile network architectures and knowledge management solutions. In between doing all that, he’s been a freelance journalist since the early days of the web and writes about everything from enterprise architecture down to gadgets.
