David Linthicum
Contributor

3 multicloud myths that cloud pros still believe

analysis
Feb 01, 2022 | 5 mins
Cloud Architecture | Cloud Computing | Software Development

We have enough experience with multicloud that there are no more surprises, right? The ideas that multicloud prevents lock-in, is cheaper, and is more resilient are still out there.


I consistently hear false information about multicloud in the press, meetings, training, podcasts, and other sources where cloud professionals share information. These are rarely deliberate attempts at misinformation; people just lack some understanding of what’s real and what isn’t, based on their experiences with multicloud. 

Here are three multicloud myths I keep running into that need to be better understood:

Multicloud solves lock-in problems. I’ve opined about this topic for a while, so I won’t get too deep into it here. In brief, the assumption is that because you leverage more than a single public cloud provider brand (aka multicloud), you avoid being locked into any one cloud provider. 

That’s not the case. Whenever you leverage the native services of a specific cloud provider, multicloud or not, you lock into that provider. 

Having another cloud provider’s services in your multicloud service catalog does not change the fact that you’ve leveraged services native to a specific cloud provider within an application, and thus you’re pretty much coupled to that cloud platform. The alternative is expensive refactoring (meaning recoding) to move the application to another public cloud.
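To make the coupling concrete, here’s a minimal sketch, for illustration only, of the same logical write expressed against two provider-native SDKs (the table and collection names are hypothetical). Running on both clouds doesn’t remove the provider-specific code; it means you now maintain two divergent code paths, and moving the application means rewriting one of them.

```python
# Illustration only: the same logical "save an order" operation written
# against two provider-native SDKs. Table/collection names are hypothetical.

# AWS-native path: DynamoDB via boto3
import boto3

def save_order_aws(order_id: str, total: float) -> None:
    table = boto3.resource("dynamodb").Table("orders")  # AWS-specific resource
    table.put_item(Item={"order_id": order_id, "total": str(total)})

# Google-native path: Firestore via google-cloud-firestore
from google.cloud import firestore

def save_order_gcp(order_id: str, total: float) -> None:
    db = firestore.Client()  # GCP-specific client
    db.collection("orders").document(order_id).set({"total": total})

# Neither function is portable. Moving the application between clouds means
# refactoring (recoding) every call site like these -- that is the lock-in.
```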

I see this myth the most, considering the number of times cloud professionals sell the benefits of multicloud internally, with avoiding lock-in emphasized front and center. Sorry to kill the lock-in avoidance party.

Multicloud is cheaper than a single cloud deployment. Almost never. But that does not mean the value multicloud generates can’t justify the increase in cloud computing spending. Confused yet? I’ll explain.

In the vast majority of multicloud deployments, the cost of deploying and operating more than one public cloud is going to exceed the cost of a single public cloud deployment, all other things being equal.

You’re paying for the heterogeneity and complexity of multicloud, which expands the range of talent and the types of operational tools you need. You’ll also need cross-cloud security solutions and other capabilities that make multicloud considerably more costly. 

I often see cloud pros cite the fact that they can play one public cloud provider against another to find better prices, or that they have the ability to pick the cheapest cloud services dynamically at the time of need. However, given the issue with lock-in we just covered, that may be more of a hollow threat. 

Although you may find some operational cost savings, they are nowhere near enough to counter the additional costs of heterogeneity and complexity mentioned above. Instead, we move to multicloud for the value it can generate: you can pick best-of-breed cloud services from different providers and mix and match more services to support better innovation in the company. Multicloud should be about leveraging technology that’s strategic to your business, not just attempting to shave a few dollars off IT. 

Multicloud provides more resiliency. I’ve covered this topic before. However, I’ve seen this myth popping up more and more, typically after a public cloud outage makes the news. 

The core idea is that if I can use more than a single cloud brand and leverage active-active configurations for a single application and data set across the two clouds, I should never be taken down by a single public cloud outage.
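In principle that looks simple enough. Here’s a minimal client-side failover sketch, for illustration only and with hypothetical endpoint URLs, of what “never taken down by a single outage” implies. The hard part isn’t this routing logic; it’s keeping the application and data behind both endpoints continuously in sync, which is where the costs below come from.

```python
# Illustration only: naive client-side failover between two hypothetical
# deployments of the same application on two different cloud providers.
import requests

ENDPOINTS = [
    "https://app.cloud-a.example.com",  # primary deployment (hypothetical)
    "https://app.cloud-b.example.com",  # secondary deployment (hypothetical)
]

def fetch_orders() -> dict:
    last_error = None
    for base_url in ENDPOINTS:
        try:
            resp = requests.get(f"{base_url}/orders", timeout=2)
            resp.raise_for_status()
            return resp.json()  # first healthy cloud wins
        except requests.RequestException as err:
            last_error = err    # try the next cloud on any failure
    raise RuntimeError("all clouds unavailable") from last_error
```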

Using the multicloud option for disaster recovery, you’ll end up paying the same standard operating costs twice for a single application. You’ll also pay to customize the application and data for each cloud (say, two clouds total), especially when you consider the need for specialized development, database, and administrative skills for each. This pushes the value of moving to multiple cloud platforms for outage protection out the window.

Another consideration is how many outages actually occur. Although some large outages have hit the major cloud providers, most are localized to a single region and corrected in a reasonable amount of time. That’s certainly a better uptime record than most enterprises have for their own internal systems. I bet most of you are nodding your heads.

Thus, the question is not “Can you?” It’s “Should you?” For most practical purposes, it’s not a good option except for the most business-critical applications. Count on paying more than twice as much to operate that application, a cost most businesses will reject after considering the true risk.

These types of myths are going to continue no matter how much I attempt to push back. The problem comes when enterprises accept the use of technology based on things that are not true. I suspect there will be a day of reckoning at some point. 

David Linthicum
Contributor

David S. Linthicum is an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing, the latest of which is An Insider’s Guide to Cloud Computing. Dave’s industry experience includes tenures as CTO and CEO of several successful software companies, and upper-level management positions in Fortune 100 companies. He keynotes leading technology conferences on cloud computing, SOA, enterprise application integration, and enterprise architecture. Dave writes the Cloud Computing blog for InfoWorld. His views are his own.
