Some people say edge is the next revolution, but the gap between promised performance and actual results needs to be discussed.

Edge computing is one of those confusing terms, much like cloud computing. If there are 50 kinds of cloud solutions, there are easily 100 edge solutions or architectural patterns in use today. This article does a better job of describing the types of edge computing solutions that are out there, saving me from relisting them here. It’s safe to say that all types of compute and data storage deployments qualify as edge computing solutions these days. I’ve even noticed vendors “edge washing” their technology, promoting it to “work at the edge.” If you think about it, all mobile phones, PCs, and even your smart TV could now be considered edge computing devices.

One of the promises of edge computing, and the main reason for picking an edge architecture, is the ability to reduce network latency. If a device sits 10 feet from where the data is gathered and also does some rudimentary processing, the short network hop provides almost-instantaneous response times. Compare that with a round trip to a back-end cloud server 2,000 miles away.

So, is edge better because less network latency means better performance? In many instances, that’s not turning out to be the case. The shortfalls are being whispered about at Internet of Things and edge computing conferences, and they are becoming a real limitation on edge computing. There may be good reasons not to push so much processing and data storage to “the edge” unless you understand what the performance benefits will actually be.

Driving many of these performance problems is the cold start that can occur on the edge device. If code was not launched or data not gathered recently, neither will be in cache, and the initial launch will be slow. What if you have thousands of edge devices that only run processes and produce data on request, at irregular times? Systems calling out to those edge devices will have to endure 3- to 5-second cold-start delays, which for many users is a dealbreaker, especially compared to consistently sub-second response times from cloud-based systems even with the network latency. Of course, your results will depend on the speed of the network and the number of hops.

Yes, there are ways to address this, such as bigger caches, device tuning, and more powerful edge computing systems; a rough way to measure whether the problem affects you is sketched at the end of this article. But remember that you need to multiply those upgrades by 1,000 or more devices. By the time these problems are discovered, the potential fixes are often not economically viable.

I’m not picking on edge computing here. I’m just pointing out issues that the people designing these systems need to understand up front rather than discover after deployment. Also, the primary selling point of edge computing has been better data and processing performance, and this issue blows a hole in that benefit.

Like other architectural decisions, moving to edge computing involves many trade-offs:

- The complexity of managing many edge computing devices that exist near the sources of data
- What’s needed to process the data
- Additional expenses to operate and maintain those edge computing devices

If performance is a core reason you’re moving to edge computing, think about how the system should be engineered and the additional cost you may have to endure to hit your target performance benchmark.
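To put numbers behind that point, here is a minimal sketch in Python, using only the standard library, that times a first “cold” request and a few “warm” follow-ups against two hypothetical endpoints: an edge gateway and a cloud API. Both URLs are placeholders I’ve made up for illustration, stand-ins for whatever you actually run. If the edge endpoint’s cold number dwarfs the cloud endpoint’s warm number, the latency argument for that workload is weaker than it looks on paper.

```python
# Minimal latency probe: compares first-request ("cold") and subsequent
# ("warm") response times for an edge endpoint versus a cloud endpoint.
# The URLs below are hypothetical placeholders; point them at your own systems.
import statistics
import time
import urllib.request

ENDPOINTS = {
    "edge": "http://edge-gateway.local/api/reading",   # hypothetical edge device
    "cloud": "https://api.example.com/api/reading",    # hypothetical cloud service
}

def time_request(url: str) -> float:
    """Return round-trip time in seconds for a single GET request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def probe(url: str, warm_calls: int = 5) -> tuple[float, float]:
    """Return (cold, median warm) latency for one endpoint."""
    cold = time_request(url)  # first call may hit a cold start
    warm = [time_request(url) for _ in range(warm_calls)]
    return cold, statistics.median(warm)

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        try:
            cold, warm = probe(url)
            print(f"{name:5s}  cold: {cold:6.3f}s   warm (median): {warm:6.3f}s")
        except OSError as err:
            print(f"{name:5s}  probe failed: {err}")
```

Any keep-warm pinging or cache tuning you then add to close the gap has to be repeated on every device, which is where the multiply-by-1,000 math starts to hurt.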
If you’re banking on commodity systems always performing better than centralized cloud computing systems, that may not be the case.