David Linthicum
Contributor

The dirty little secret about edge computing

Analysis
Sep 30, 2022 | 3 mins
Cloud Computing | Emerging Technology | Technology Industry

Some people say edge is the next revolution, but the gap between promised performance and actual results needs to be discussed.

Edge computing is one of those confusing terms, much like cloud computing. Where there are easily 50 kinds of cloud solutions, there are 100 or more edge solutions and architectural patterns in play today. This article does a better job of describing the types of edge computing solutions that are out there, saving me from relisting them here.

It’s safe to say that all sorts of compute and data storage deployments qualify as edge computing solutions these days. I’ve even noticed vendors “edge washing” their technology, promoting it as able to “work at the edge.” If you think about it, every mobile phone, PC, and even your smart TV could now be considered an edge computing device.

One of the promises of edge computing, and the main reason for picking an edge architecture, is the ability to reduce network latency. If a device sits 10 feet from where the data is gathered and does some rudimentary processing itself, the short network hop provides almost-instantaneous response times. Compare that with a round trip to a back-end cloud server 2,000 miles away.
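To put rough numbers on that difference, here’s a back-of-envelope sketch. It assumes signals propagate at about two-thirds the speed of light in fiber, and it ignores routing hops, queueing, and server processing time, all of which add real-world delay; the distances mirror the examples above.

```python
# Back-of-envelope round-trip propagation delay. Ignores hops, queueing,
# and processing time; assumes ~2/3 the speed of light in fiber.
SPEED_OF_LIGHT_KM_S = 299_792
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay in milliseconds."""
    return (2 * distance_km / FIBER_SPEED_KM_S) * 1_000

print(f"Edge device 10 feet away:  {round_trip_ms(0.003):.6f} ms")
print(f"Cloud server 2,000 miles:  {round_trip_ms(3_219):.1f} ms")  # ~32 ms
```

Propagation delay alone clearly favors the edge. The catch, as we’ll see, is that propagation delay is not the only thing that determines response time.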

So, is edge better because less network latency means better performance? In many instances, that’s not turning out to be the case. The shortfalls are being whispered about at Internet of Things and edge computing conferences, and they are emerging as a real limitation of edge computing. There may be good reasons not to push so much processing and data storage to “the edge” unless you understand what the performance benefits will actually be.

Driving many of these performance problems is the cold start that can occur on an edge device. If the code hasn’t been launched or the data hasn’t been accessed recently, neither will be in cache, and the first request will be slow.

What if you have thousands of edge devices that act on processes and produce data only when requested, at irregular times? Systems calling out to such an edge device will have to endure 3- to 5-second cold-start delays, which for many users is a dealbreaker, especially compared to the consistent sub-second response times of cloud-based systems even with the network latency. Of course, your performance will depend on the speed of the network and the number of hops.
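As an illustration of the pattern (not anyone’s production code), this sketch simulates an edge handler whose first request pays a cold start while repeat requests hit cache. The 4-second load is a hypothetical stand-in for pulling code and data back into memory.

```python
import time

_cache: dict[str, bytes] = {}  # resident code/data, lost when evicted

def handle_request(sensor_id: str) -> bytes:
    """Simulated edge handler: slow cold start on a cache miss, fast when warm."""
    if sensor_id not in _cache:
        time.sleep(4.0)                      # hypothetical cold start: reload code/data
        _cache[sensor_id] = b"model-bytes"   # now resident in cache
    return _cache[sensor_id]

for label in ("cold", "warm"):
    start = time.perf_counter()
    handle_request("sensor-42")
    print(f"{label}: {time.perf_counter() - start:.3f} s")  # ~4 s, then ~0 s
```

One common mitigation is a scheduled keep-warm ping that touches each device before real traffic arrives, but that trades away the idle efficiency that made a fleet of small devices attractive in the first place.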

Yes, there are ways to mitigate this problem, such as bigger caches, device tuning, and more powerful edge computing systems. But remember that you have to multiply the cost of those upgrades by 1,000 or more devices. Once these problems are discovered, the potential fixes often turn out not to be economically viable.
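To make the economics concrete, here’s a toy calculation; every figure is a made-up assumption, and the point is only that per-device fixes scale linearly with fleet size.

```python
# Hypothetical figures for illustration only.
devices = 1_000
cache_upgrade_usd = 150   # bigger cache / more memory per device
tuning_labor_usd = 75     # engineering and ops time per device

fleet_cost = devices * (cache_upgrade_usd + tuning_labor_usd)
print(f"One-time fix across {devices:,} devices: ${fleet_cost:,}")  # $225,000
```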

I’m not picking on edge computing here. I’m just pointing out some issues that the people designing these systems need to understand up front rather than discover after deployment. The primary benefit of edge computing has been better data and processing performance, and this issue blows a hole in that benefit.

Like other architectural decisions, there are many trade-offs to consider when moving to edge computing:

  • The complexity of managing many edge computing devices that exist near the sources of data
  • What’s needed to process the data
  • Additional expenses to operate and maintain those edge computing devices 

If performance is a core reason you’re moving to edge computing, think about how the system should be engineered and the additional cost you may have to endure to hit your target performance benchmark. If you’re banking on commodity edge systems outperforming centralized cloud computing systems, that won’t always be the case.
