Granular visibility can help enterprises keep cloud costs in check. Follow these best practices when using monitoring methods to control Kubernetes-related spending.

It can be a little too easy to let Kubernetes-related cloud costs slip out of control—and for many enterprises, that’s exactly what’s happening. Programmatic resource provisioning and access to high-cost resources like GPUs are just two of the factors that will balloon budgets without a conscious effort to temper expenses. And as enterprises continue to scale their use of Kubernetes, every small bug and cost inefficiency scales in lockstep.

The answer lies in visibility and ownership. Enterprises need to see where and how they are spending, with enough granularity to enact change when needed, and they need to cultivate a culture of cost responsibility and accountability that touches engineering and finance teams alike. In many cases, the mere act of making engineering teams aware of their Kubernetes spending has a substantial influence on how efficiently they spend. More mindful Kubernetes utilization also leads to more streamlined, productive, and secure environments, in addition to cost savings.

Enterprises have four methods for Kubernetes cost monitoring at their disposal, each best suited to particular use cases:

Limited cost monitoring. Under this method, a centralized team (often finance or devops) is responsible for receiving the monthly Kubernetes bill and then addressing unnecessary costs and any contributing issues. Organizations with small applications engineering teams and less advanced environments are the best fit for this method; those with larger, multi-tenant environments need a more robust approach.

Showbacks. The showback method introduces detailed cost breakdowns of Kubernetes and cloud spending for each team across the organization.
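At its core, a showback is a per-team aggregation of cost data. A minimal sketch, assuming hypothetical per-pod cost records that already carry a team label (all names and figures here are illustrative, not from any particular billing tool):

```python
from collections import defaultdict

# Hypothetical per-pod cost records; in practice these would come from a
# cost-monitoring tool or a cloud billing export. All names are illustrative.
pod_costs = [
    {"team": "payments", "namespace": "payments-prod", "cost": 412.50},
    {"team": "payments", "namespace": "payments-dev", "cost": 88.25},
    {"team": "search", "namespace": "search-prod", "cost": 655.00},
]

def showback_report(records):
    """Aggregate Kubernetes costs per team so each team sees its own spend."""
    totals = defaultdict(float)
    for record in records:
        totals[record["team"]] += record["cost"]
    return dict(totals)

print(showback_report(pod_costs))  # {'payments': 500.75, 'search': 655.0}
```

The same grouping logic extends naturally to namespaces or clusters as the allocation key.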
Each team is given this accurate cost data so it can better understand and more proactively manage its spending responsibilities. Showbacks are ideal for organizations with three or more applications engineering teams and 20-plus engineers.

Chargebacks. Chargebacks are showbacks with teeth: here, teams must pay from their own budgets to cover the Kubernetes and cloud costs they create. This method is best suited to the same larger organizations as showbacks. For a chargeback approach to succeed, though, enterprises must commit to the culture of chargebacks and agree that controlling these costs is a crucial shared goal they are capable of achieving.

Limit-set cost monitoring. This approach requires teams to pay from their budgets if and when their resource costs exceed set spending limits or, in some cases, to pay from their budgets for selected resources only. As with chargebacks, the company culture must be on board for this method to thrive.

Whatever method an organization uses, Kubernetes cost controls will fail if their implementation is too abrupt, perceived as unfair, or poorly managed. To gain the trust, cooperation, and organization-wide buy-in you’ll need for your Kubernetes cost controls to succeed, follow these five best practices.

Build up to a chargeback strategy, rather than trying to impose one overnight. Teams often get sticker shock at their first spending reviews and need time to get a handle on why certain costs are occurring and how to change practices to reduce them. Putting them on the hook for the bill immediately—before they have time to deliberate and draw up careful spending reduction plans—will only lead to panic, poor decisions, and heaping resentment from team leaders. Starting with limited cost monitoring or showbacks lets teams ease into cost responsibility and provides fair warning of the bills to come.

Make cost allocations fair and transparent. Teams need total trust in the cost metrics they’re held responsible for.
However, without careful curation, costs in Kubernetes’ distributed system aren’t so cut-and-dried. To build buy-in, use transparent cost allocation models that ensure those metrics are reproducible, audited, and verified. Also, be sure to provide teams with actionable data, and make it clear how they play a role in getting overspending under control. Take care with the allocation of idle resources, which usually falls to the team making cluster-level provisioning decisions. System-wide and shared resources also require watchful allocation. Assigning costs by namespace is a particularly powerful method for delineating spending responsibilities. Ideally, assign costs based on the maximum of a team’s resource requests and actual usage—but only if the team has control over those settings, which keeps the allocation fair. Similarly, find fair approaches for handling high-cost one-off jobs, like research projects.

Make ownership of each resource crystal clear. Leveraging an admission controller and an “escalation approach” can clarify each resource’s owner. The escalation approach consists of defining an owner label at the deployment, namespace, and cluster levels, thereby establishing an escalation path in case of issues. To enforce those labels, use Open Policy Agent or an admission controller webhook.

Review spending data weekly. Planned weekly data reviews allow teams to flag overspending and eliminate future waste while avoiding sticker shock when monthly bills come due. Automated alerts should also sound the alarm if resource usage becomes excessive or abnormal and needs attention to avoid cost overruns.

Focus on the culture shift. For enterprises trying to lower Kubernetes costs as they scale, achieving a culture that values savings and respects the cost management approaches in place is the true hurdle. The technical methods behind these cost controls aren’t difficult to implement and follow—if all teams are motivated to do so.
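As a sketch of one such technical method, here is the decision logic an owner-label-enforcing admission webhook might use. This is illustrative only: the "owner" label key is an assumption, and a real webhook would also need TLS serving and a ValidatingWebhookConfiguration registered with the API server.

```python
# Decision logic for a validating admission webhook that rejects workloads
# missing an owner label. The "owner" key is an illustrative choice; a real
# deployment also needs TLS serving and a ValidatingWebhookConfiguration
# that points the Kubernetes API server at this handler.
REQUIRED_LABEL = "owner"

def review(admission_review: dict) -> dict:
    """Build the AdmissionReview response for an incoming request."""
    request = admission_review["request"]
    labels = request["object"]["metadata"].get("labels") or {}
    allowed = REQUIRED_LABEL in labels
    response = {"uid": request["uid"], "allowed": allowed}
    if not allowed:
        response["status"] = {
            "message": f"missing required label '{REQUIRED_LABEL}'"
        }
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }

# A workload labeled with an owner is admitted; an unlabeled one is rejected.
labeled = {"request": {"uid": "a1", "object": {"metadata": {"labels": {"owner": "team-payments"}}}}}
unlabeled = {"request": {"uid": "a2", "object": {"metadata": {"labels": {}}}}}
print(review(labeled)["response"]["allowed"])    # True
print(review(unlabeled)["response"]["allowed"])  # False
```

In practice, a policy engine such as Open Policy Agent can express the same rule declaratively rather than in handler code.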
Make sure costs are clear, fair, transparent, and actionable; then give teams the tools they need to succeed, and the culture will come. In most cases, enterprises that implement a culture where teams actively regulate their own Kubernetes spending can expect to see cost savings of 30% or more, along with further boosts to productivity and security. Distributing responsibility for the costs of Kubernetes’ distributed system is a worthwhile pursuit, and one that is easier to instill earlier than later.

Rob Faraj is a co-founder of Kubecost.

—

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.