Spending on Kubernetes and container-related infrastructure has soared in the past two years. Follow these steps to eliminate the guesswork and waste.

What is the cost of being late to market? What is the impact of being unable to respond to unforeseen changes in the business environment like those experienced in 2020? Loss of customers, loss of market share, and loss of reputation, to name a few. To innovate at the pace today’s market demands, companies are turning to cloud-native computing as the engine of that innovation.

Cloud-native technologies have transformed the software development landscape in recent years, fueled by enterprises seeking greater flexibility, scalability, and efficiency. Such is the scale of this transition that the CNCF recently termed it the “new normal.” A report last year titled Leveraging the Benefits of Cloud Native Technologies found that 77% of enterprises were already using cloud-native technologies across all or some of their applications; the CNCF’s own survey put the figure at 79%. Recent reports have also shown that this adoption has driven dramatic growth in business spending on Kubernetes and container-related infrastructure over the past two years.

But it would be a mistake for businesses to read these reports as a cue to put off or slow the adoption of cloud-native methodologies. The cost of fueling innovation is far less than the cost of being left behind, stymied by legacy infrastructure and slow delivery processes. The keys, as in every disruptive technology transition, are first understanding the drivers of the costs and then maturing the processes and governance structures to manage those costs and maximize business ROI.

With the specter of growing costs looming especially large given the current economic uncertainty, businesses must understand the drivers behind these cost increases and the strategies available to manage them. From poor architecture and inefficient resource usage to unanticipated business demands, the reasons for rising costs are myriad. For example, self-service access to Kubernetes environments may result in development teams spinning up an excessive number of Kubernetes clusters. Companies also often find it difficult to estimate Kubernetes costs before rolling out apps to production. More broadly, companies frequently lack a way to track unoptimized Kubernetes configurations, resulting in higher ongoing infrastructure consumption.

At a more granular, practical level, what steps can organizations take to manage their Kubernetes costs and align them with the business value of new, innovative applications?

Step 1. Understand where your Kubernetes costs are coming from

Leverage open-source Kubernetes observability tools to implement Kubernetes cost management and take the guesswork out of Kubernetes spending. An example is the CNCF sandbox project OpenCost, which gives teams visibility into current and historical Kubernetes spend and resource allocation.

Step 2. Understand the costs of your Kubernetes service options

Perform cost modeling in UAT (user acceptance testing) environments by passing test data through them to predict rough Kubernetes costs. This data can also provide insights into expensive microservices that may be spinning up too many pods and overusing the compute resources that drive up costs. These microservices should be rearchitected or reconfigured.
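To make this kind of cost check concrete, here is a minimal sketch, in Python, of how a team might pull per-namespace spend from an OpenCost deployment (the tool mentioned in Step 1) and rank the biggest consumers. It assumes OpenCost has been port-forwarded locally; the endpoint path, query parameters, and response field names are assumptions for illustration and should be verified against the OpenCost API documentation for the version you run.

```python
# A rough sketch, not an official OpenCost client. It assumes the OpenCost
# service has been port-forwarded locally, for example:
#   kubectl -n opencost port-forward service/opencost 9003:9003
# and that the API exposes an /allocation endpoint with a "window" parameter
# and per-namespace cost fields such as "totalCost". Verify the endpoint and
# field names against the OpenCost docs for your version.
import requests

OPENCOST_URL = "http://localhost:9003/allocation"  # assumed local port-forward


def namespace_costs(window: str = "7d") -> dict[str, float]:
    """Return total cost per namespace over the given window (assumed fields)."""
    resp = requests.get(
        OPENCOST_URL,
        params={"window": window, "aggregate": "namespace"},
        timeout=30,
    )
    resp.raise_for_status()
    totals: dict[str, float] = {}
    # The response is assumed to hold a list of allocation sets under "data",
    # each keyed by namespace name.
    for allocation_set in resp.json().get("data", []):
        for name, alloc in (allocation_set or {}).items():
            totals[name] = totals.get(name, 0.0) + alloc.get("totalCost", 0.0)
    return totals


if __name__ == "__main__":
    for ns, cost in sorted(namespace_costs().items(), key=lambda kv: -kv[1]):
        print(f"{ns:30s} ${cost:,.2f}")
```

A report like this, run against a UAT environment loaded with representative test data, is one way to produce the rough, pre-production cost estimates described above and to spot the microservices worth rearchitecting.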
Step 3. Don’t lock yourself into one Kubernetes provider

Adopt a hybrid-cloud, multi-cloud approach for your Kubernetes workloads so that your Kubernetes costs are not locked into one provider or architecture. This gives you the flexibility to place workloads either on a cloud provider or on private, on-prem infrastructure, whichever offers the most cost-effective solution that meets operational requirements.

Step 4. Consider lightweight Kubernetes distributions for your workloads

Consume less. For instance, K3s, a CNCF sandbox project, delivers a lightweight yet powerful certified Kubernetes distribution that uses significantly fewer resources than standard Kubernetes.

Step 5. Define best practices and governance processes

Incorporate some level of governance into self-service access to Kubernetes clusters. Many companies are turning to a centralized “platform engineering” team to provide consistent services and pre-configured best practices to development teams, resulting in higher productivity and lower overall costs.
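As a small, hedged illustration of what such governance can look like in practice, the sketch below uses the official Kubernetes Python client to apply a default ResourceQuota to every namespace carrying a hypothetical team=dev label. The label selector, quota name, and limit values are assumptions for illustration, not recommended settings; a platform engineering team would tune them to its own workloads and typically enforce them through admission control or a GitOps pipeline rather than an ad hoc script.

```python
# A minimal sketch of one governance guardrail: a platform team applying a
# default ResourceQuota to every team namespace so self-service usage cannot
# grow unbounded. Uses the official Kubernetes Python client; the label
# selector ("team=dev") and quota values are assumptions for illustration.
from kubernetes import client, config

DEFAULT_QUOTA = {
    "requests.cpu": "20",
    "requests.memory": "64Gi",
    "limits.cpu": "40",
    "limits.memory": "128Gi",
    "pods": "100",
}


def apply_default_quotas(label_selector: str = "team=dev") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    core = client.CoreV1Api()
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="default-team-quota"),
        spec=client.V1ResourceQuotaSpec(hard=DEFAULT_QUOTA),
    )
    # Apply the quota to each labeled namespace that does not already have one.
    for ns in core.list_namespace(label_selector=label_selector).items:
        name = ns.metadata.name
        existing = {
            q.metadata.name
            for q in core.list_namespaced_resource_quota(name).items
        }
        if "default-team-quota" not in existing:
            core.create_namespaced_resource_quota(name, quota)
            print(f"Applied default quota to namespace {name}")


if __name__ == "__main__":
    apply_default_quotas()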
Ultimately, when making the transition to cloud-native computing, leaders must recognize that costs will rise in new areas. However, in my view, cloud-native solutions delivered on Kubernetes and containers offer the most cost-effective means of delivering new, innovative services to the market today. The alternatives, VMs and bare metal, are both less efficient at delivering the scale and agility that new business services demand. Costs won’t be prohibitive to those with reasonable expectations and implementation plans. In preparation, organizations must define best-practice policies and governance processes for delivering cloud-native workloads on Kubernetes.

Digital transformation is not without its challenges. Balancing costs joins integration issues, scalability, and security on the list of hurdles facing enterprises. In this light, cost issues are better seen as a factor to be managed than as an existential threat. With customers demanding new, more effective services, innovation will continue to take priority. And, as I said at the beginning, these costs pale in comparison to the consequences of failing to respond to the market.

Brent Schroeder is the global chief technology officer and head of the office of the CTO at SUSE.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.