Cache your Task objects to improve performance by preventing unnecessary, expensive operations from being executed repeatedly.

Caching is a state management strategy that has long been used in applications to store relatively stable, frequently accessed data for faster access. It boosts your application's performance primarily because serving data from the cache saves the resources your system would otherwise spend retrieving it from the underlying data store. Caching can also ensure that your application gets the data it needs even when the data store is unavailable.

So far so good. We have all used data caching in our applications to improve performance. But we can cache Task objects as well!

Task caching

Task caching is an approach in which, instead of caching the results of a Task's execution, you cache the Task instances themselves. In doing so, you avoid the overhead of starting the expensive operation anew each time a fresh Task instance would otherwise be created.

Now, what if the Task fails? Note that we should not cache failed or faulted Tasks; in other words, negative caching should not be considered.

Let's now dig into some code and understand how this can be implemented.

Implementing Task caching

The following method attempts to get data from the cache, or add it, based on the key passed as a parameter.

public static async Task<object> GetOrAddDataInCache(string key)
{
    if (key == null) return null;

    object result;
    if (!cache.TryGetValue(key, out result))
    {
        result = await SomeMethodAsync();
        cache.TryAdd(key, result);
    }
    return result;
}

The cache collection being used here is:

static ConcurrentDictionary<string, object> cache = new ConcurrentDictionary<string, object>();

In this example, we have cached the data, not the Task instance. The following method illustrates how Task caching can be achieved. Note that it is not marked async; it hands back the cached Task itself.

public static Task<object> GetOrAddTaskInCache(string key)
{
    if (key == null) return Task.FromResult<object>(null);

    Task<object> result;
    if (!cache.TryGetValue(key, out result))
    {
        result = SomeMethodAsync();
        cache.TryAdd(key, result);
    }
    return result;
}

Note how the Task object is added to the cache if it's not already there. If the Task instance is available in the cache, it is returned; otherwise, a call is made to the asynchronous method and the resulting Task instance is inserted into the cache. And here's the cache object for your reference.

static ConcurrentDictionary<string, Task<object>> cache = new ConcurrentDictionary<string, Task<object>>();

Note that we are using a ConcurrentDictionary to store the cached data. You can also take advantage of other cache stores if you want to. The asynchronous method SomeMethodAsync can perform some long-running operation; I leave it to my readers to change it to suit their needs. Here's a simple version of the method for your reference.

public static async Task<object> SomeMethodAsync()
{
    // Write your code here for some long-running operation
    await Task.Delay(100); // Simulates a delay of 100 milliseconds
    return "Hello World!";
}

OK, but should I always be caching Task objects? Absolutely not! Usually, the overhead of the state machine allocation involved in creating a Task instance is negligible, but Task caching pays off when your application performs relatively expensive operations often. This was just a simple implementation to illustrate how Task caching works; we didn't consider many things here.
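To make the benefit concrete, here is a minimal, self-contained sketch of how the cached Task might be consumed. The TaskCacheDemo class and its Main method are assumptions added for illustration; the cache field and the two methods simply mirror the listings above. Because both awaits observe the same cached Task, SomeMethodAsync executes only once for the key.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative sketch only: the class name, Main method, and console output
// are assumptions; the cache and methods mirror the listings shown above.
public static class TaskCacheDemo
{
    static readonly ConcurrentDictionary<string, Task<object>> cache =
        new ConcurrentDictionary<string, Task<object>>();

    public static Task<object> GetOrAddTaskInCache(string key)
    {
        if (key == null) return Task.FromResult<object>(null);

        Task<object> result;
        if (!cache.TryGetValue(key, out result))
        {
            result = SomeMethodAsync();   // starts the expensive operation once
            cache.TryAdd(key, result);    // caches the Task, not its result
        }
        return result;
    }

    static async Task<object> SomeMethodAsync()
    {
        Console.WriteLine("SomeMethodAsync executed");
        await Task.Delay(100); // simulates a long-running operation
        return "Hello World!";
    }

    public static async Task Main()
    {
        // Both awaits observe the same cached Task, so SomeMethodAsync
        // runs only once for the key "greeting".
        object first = await GetOrAddTaskInCache("greeting");
        object second = await GetOrAddTaskInCache("greeting");
        Console.WriteLine($"{first} / {second}");
    }
}

If you run this, the "SomeMethodAsync executed" line should appear only once, even though the value is awaited twice.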
You can refer to this nice post to read a more elaborate implementation of Task caching.
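As one example of what a more elaborate implementation might add, the sketch below evicts faulted or canceled Tasks on the next lookup so that a transient failure is not served from the cache indefinitely. The FaultAwareTaskCache class, the GetOrAddTaskEvictingFaults helper, and its valueFactory parameter are hypothetical names introduced here for illustration, not part of the original article.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical sketch: checks on each lookup whether the cached Task has
// faulted or been canceled and, if so, evicts it and starts a fresh one.
public static class FaultAwareTaskCache
{
    static readonly ConcurrentDictionary<string, Task<object>> cache =
        new ConcurrentDictionary<string, Task<object>>();

    public static Task<object> GetOrAddTaskEvictingFaults(string key, Func<Task<object>> valueFactory)
    {
        if (key == null) return Task.FromResult<object>(null);

        // Reuse the cached Task if one exists; otherwise start the operation and cache it.
        Task<object> task = cache.GetOrAdd(key, _ => valueFactory());

        // Avoid negative caching: drop a failed Task and start a new one
        // for this caller and the callers that follow.
        if (task.IsFaulted || task.IsCanceled)
        {
            Task<object> ignored;
            cache.TryRemove(key, out ignored);
            task = cache.GetOrAdd(key, _ => valueFactory());
        }
        return task;
    }
}

As a side benefit, using GetOrAdd instead of the TryGetValue/TryAdd pair also narrows the window in which two concurrent callers could each start the expensive operation for the same key.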