Just because you can, doesn’t mean you should. Complexity, latency, and network outages may give you pause.

The notion of the intelligent edge has been around for a few years. It refers to placing processing on edge devices to avoid sending data all the way back to a centralized server, typically running on a public cloud. While not always needed, the intelligent edge can apply machine learning at the edge, moving knowledge building away from centralized processing and storage. Applications vary, from factory robotics to automobiles to on-premises edge systems residing in traditional data centers. It’s useful in any situation where it makes sense to do the processing as close to the data source as you can get.

We’ve wrestled with this type of architectural problem for many years. With any distributed system, including cloud computing, you have to consider the trade-offs of placing processing and storage on different physical or virtual devices. The intelligent edge is no different. It’s easy to place processing and storage at the edge, but in many cases doing so becomes a management and operations nightmare.

Keep in mind that the relationship of edge devices to centralized systems is always many-to-one. Managing a centralized system is fairly simple because it lives in a single virtual location. When you have to manage hundreds or thousands of intelligent edge devices, including configuration management, security, and governance, operations become a nightmare. I’m finding that companies that pushed processing and data storage out to the edge often pull them back to centralized servers just because of the management complexity.

Latency and network outages can bite you in the butt. We depend on networks to keep us connected with edge computers, which in many cases are mobile and thus connected via cellular networks.
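One common way to keep an edge device useful through an outage is a store-and-forward buffer: readings are queued locally and drained to the central system whenever the link is up. Here is a minimal sketch of that idea; the `send_to_cloud` function, the class name, and the drop-oldest policy are illustrative assumptions, not a reference implementation.

```python
import queue


def send_to_cloud(reading):
    """Hypothetical uplink; in practice this would be an HTTPS or MQTT call."""
    ...


class StoreAndForward:
    """Buffer readings locally so a network outage doesn't lose data outright."""

    def __init__(self, capacity=10_000):
        self.buffer = queue.Queue(maxsize=capacity)

    def record(self, reading):
        try:
            self.buffer.put_nowait(reading)
        except queue.Full:
            # Under sustained outage, drop the oldest reading to make room.
            self.buffer.get_nowait()
            self.buffer.put_nowait(reading)

    def flush(self, send=send_to_cloud):
        """Try to drain the buffer; on failure, requeue and retry later."""
        while not self.buffer.empty():
            reading = self.buffer.get_nowait()
            try:
                send(reading)
            except ConnectionError:
                self.record(reading)  # keep it for the next attempt
                break
```

Even a sketch like this surfaces the real design questions: how much local storage to commit, which readings to sacrifice when the buffer fills, and how to reconcile data once connectivity returns.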
You’ll probably have to deal with disconnected situations more often than you’d like, and you must figure out a way to ensure that these outages and performance issues don’t kill your overall system, both edge and centralized. If you don’t, you’ll find that data doesn’t sync and processing isn’t managed properly. You may reach a point where the systems become unreliable and untrusted.

Try explaining to a commercial pilot that the in-flight engine diagnostics on the intelligent edge failed due to a network problem. The resulting flameout won’t go over well on the flight deck. Of course, not all edge limitations are that profound. Typically, you’re making architectural mistakes that won’t be discovered until the system begins to scale. By that time, too much has been committed to the intelligent edge architecture, and fixes require systemic change. Try telling your boss that. It won’t go over well there, either.

Make sure you consider the trade-offs.