With a newly expanded distributed workforce, many enterprises are considering a move to the edge. Make sure you've thought about security and data volume.

When computing first began, computers were far too expensive for most companies, so they were shared via timesharing services. Processing was centralized, using multiuser systems. Then minicomputers, PCs, and LANs came along, and we moved processing out to PC workstations and smaller compute platforms. We saw the decentralization of computing. Now, years later, we're centralizing processing again on public cloud hyperscalers, but this time using a multitenant approach. Getting dizzy?

These days we're also considering decentralization again, with the rise of edge computing. We've talked about edge here before, and my conclusion remains that there are good reasons to leverage edge computing, certainly to reduce latency and to store data locally. The pandemic has pushed employees and processing to a highly distributed model, and not by choice. Edge computing is front and center as something that should be leveraged alongside cloud computing, and in some cases instead of it.

Let's clear a few things up. A few edge computing models are emerging. The first is processing data directly on an IoT device, say a thermostat or an autonomous vehicle. Let's call this "device oriented." The second is using compute platforms or services that are geographically distributed and used by multiple clients, typically workstations. Let's call this "edge server oriented."

The second model is the most interesting to enterprises that are rethinking compute distribution post-pandemic. It's also the newest use of the edge computing model and comes in two flavors: proprietary edge devices sold by the public cloud providers, and private servers that sit in small, geographically dispersed data centers, in office buildings, and even in homes.

In moving to these new edge models, most enterprises are skipping a few considerations, including:

Security. Edge architectures add complexity, considering that the data must be secured on the client workstation as well as in the cloud, with some architectures adding an intermediary server that also requires security. Rather than focusing on securing the data in a single public cloud, we've moved to securing information on multiple systems that store data (see the brief sketch at the end of this article).

Data volume. When you add lower-powered, distributed compute platforms, the volume of data may overwhelm them. A public cloud storage system with built-in automated database scaling can handle pretty much any volume of data that's tossed at it. The same can't be said for edge servers or client workstations.

This is not to say that edge computing can't be a focus of your post-pandemic move to cloud computing. I'm just saying that you need to understand the issues you'll face.
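To make the security consideration concrete, here is a minimal sketch of what "securing the data at every tier" can mean on a single edge node. It assumes a Python environment with the cryptography package installed; the names store_locally and forward_to_cloud, the buffer file, and the thermostat payload are hypothetical illustrations, not any vendor's API. The point is simply that data gets encrypted before it rests on the edge node or moves upstream, which means every edge location now carries its own key-management burden.

```python
# A minimal sketch, assuming an edge node that buffers readings locally and
# forwards them to a central cloud store. forward_to_cloud is a placeholder;
# swap in your actual upload logic.
import json
from cryptography.fernet import Fernet  # pip install cryptography

# In practice this key would come from a secrets manager, not be generated inline.
EDGE_KEY = Fernet.generate_key()
cipher = Fernet(EDGE_KEY)

def store_locally(reading: dict, path: str = "edge_buffer.bin") -> bytes:
    """Encrypt a reading before it is written to the edge node's local disk."""
    token = cipher.encrypt(json.dumps(reading).encode("utf-8"))
    with open(path, "ab") as f:
        f.write(token + b"\n")
    return token

def forward_to_cloud(token: bytes) -> None:
    """Placeholder for the upload step; the payload stays encrypted in transit."""
    print(f"would send {len(token)} encrypted bytes to the central cloud store")

if __name__ == "__main__":
    token = store_locally({"sensor": "thermostat-17", "temp_c": 21.5})
    forward_to_cloud(token)
```

Multiply that by dozens of edge servers and home offices, and it's easy to see how the security surface grows compared with a single public cloud.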