Peter Wayner
Contributing writer

Serverless in the cloud: AWS vs. Google Cloud vs. Microsoft Azure

Feature | Mar 26, 2018 | 20 mins
Amazon Web Services, Cloud Computing, Microsoft Azure

With AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions, a little bit of business logic can go a very long way


If you’ve ever been woken up at 3 a.m. because a server went haywire, you’ll understand the appeal of a buzzword like “serverless.” The machines can take hours, days, or sometimes even weeks to configure, and then they need to be updated constantly to fix bugs and security holes. These updates usually bring hassles of their own, because each fix introduces incompatibilities that force still more updates, and so it goes, seemingly ad infinitum.

The endless chain of headaches from running a server is one of the reasons that major cloud companies have embraced the “serverless” architecture. They know that the boss has heard the excuses—the server this, the server that—for far too long. If we could only get rid of those servers, the boss must think.

It’s a wonderful sales term; the only problem is that it’s not strictly true. These apps are serverless in the same way that restaurants are kitchenless. If what you want is on the menu and you like how the cook prepares it, sitting down in a restaurant is great. But if you want a different dish, or different spices, well, you’d better get your own kitchen.

Amazon, Google, and Microsoft are three of the bigger companies that are battling to host applications of the future, ones that they hope will be written to their serverless API and managed through their automation layer. If the platforms do what you want—and the new models are pretty general—they can be the simplest and fastest way to create your own multibillion dollar unicorn web app. You write only the crucial bits of logic and the platform handles all of the details.

Serverless functions are becoming the glue or scripting language that links together all of the cloud features. The mapping or AI tools that were once fairly independent are now linked through the event-driven serverless functions. Now more of your work can be solved by requests that ripple and bounce through the various corners of each cloud, triggering and being triggered by a flow of events. If you want to explore machine learning and use it to analyze your data, one of the fastest ways to do it is to create a serverless app and start sending events to the machine learning corner of the cloud.

The implicit promise is that slicing everything thinner makes it easier to share resources in the cloud. In the past, everybody would be frantically creating new instances with, say, Ubuntu Server running in its own virtual machine. Everyone used the same OS and it was duplicated a zillion times on the same real box that was pretending to be a dozen or more virtual Ubuntu boxes. Serverless operations avoid all of that duplication, making cloud computing dramatically cheaper, especially for jobs that run sporadically and never really jammed up the old box sitting in your air conditioned server room.

Of course all of this convenience does have a hidden cost. If you ever want to leave or move your code to another site, you’ll probably be stuck rewriting most of the stack. The APIs are different, and while there is some standardization around popular languages like JavaScript, they’re pretty close to proprietary. There is plenty of opportunity for lock-in.

To understand the appeal of serverless options, I spent some time building out a few functions and poking around the stacks. I didn’t write much code, but that was the point. I spent more time clicking on buttons and typing into web forms to configure everything. Do you remember when we configured everything with XML and then JSON? Now we fill out a web form and the cloud does it for us. You still have to think like a programmer, though, to understand what’s going on behind the scenes and beyond your control.

AWS Lambda

AWS Lambda is growing into the shell script layer for Amazon’s entire cloud. It’s a basic system that lets you embed functions that respond to events that might be generated by almost any part of the vast Amazon cloud infrastructure. If a new file is uploaded to S3, you could have it trigger a function that does something interesting with it. If some video is being transcoded by the Amazon Elastic Transcoder, you could have a Lambda function waiting to be triggered when it finishes. These functions, in turn, can trigger other Lambda operations or maybe just send someone an update.
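
To give a flavor of how little code is involved, here is a rough sketch of a Node.js handler for that S3 scenario. The bucket and the logic are hypothetical, and the S3 trigger itself is wired up in the Lambda console rather than in the code:

    // Hypothetical sketch: log every object uploaded to an S3 bucket.
    // The S3 trigger is configured in the Lambda console, not in code.
    exports.handler = (event, context, callback) => {
      // S3 delivers one or more records describing the uploaded objects
      event.Records.forEach((record) => {
        const bucket = record.s3.bucket.name;
        const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
        console.log(`New object ${key} landed in ${bucket}`);
        // ...do something interesting with it, or kick off another function
      });
      callback(null, `Processed ${event.Records.length} record(s)`);
    };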

You can write Lambda functions in JavaScript (Node.js), Python, Java, C#, and Go. Given that these languages can embed many other languages, it’s quite possible to run other code like Haskell, Lisp, or even C++. (Check out this story on compiling legacy C++ to a library to use with AWS Lambda.)

Writing Lambda functions often feels much more complex than you expect because Amazon offers so many options for configuration and optimization. While it’s technically true that you can write just a few lines of code and accomplish great things, I felt like I then had to allocate more time to configuring how the code runs. Much of this is accomplished by filling out forms in the browser instead of typing into text files. At times it feels like we’ve just traded a text editor for a browser form, but that’s the price of retaining all of the flexibility that Amazon extends to the Lambda user.

Some of the additional steps are due to Amazon exposing more options to the user and expecting more of the first-time function writer. Once I was done writing a function at Google or Microsoft, I could point my browser to the right URL and test it immediately. Amazon had me clicking to configure the API gateway and open up the right hole in the firewall.


AWS Lambda’s configuration page lets you click on the source of the events that trigger a function and the destination for more events.

In the end, all of this clicking adds a layer of handholding that makes the job just a bit easier than starting with a text file. When I was creating one function, the browser had a warning, “This function contains external libraries.” Back in the days of pure Node, that was something that I would just be expected to know, or I would learn it by Googling the error message while crossing my fingers and hoping that the answer was out there. Now the cloud is rushing in to help.

Amazon has a number of other options that are just about as “serverless” as AWS Lambda, if serverless means relieving you of server management chores. It has elastic tools like Amazon EC2 Auto Scaling and AWS Fargate that spin up and shut down servers, and AWS Elastic Beanstalk, which takes your uploaded code, deploys it to web servers, and handles the load balancing and scaling. Of course, with many of these automation tools, you’re still responsible for creating the server image.

One of the more useful offerings is AWS Step Functions, a sort of code-less flowcharting tool for creating state machines to model what software architects call workflow. Part of the issue is that all of the serverless functions are meant to be entirely free of state, something that works when you’re enforcing pretty basic business logic but that can be a bit of a nightmare when you’re walking some client through a checklist or a flowchart. You’re constantly going out to the database to reload the information about the client. Step Functions glue together Lambda functions with state.
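
The state machine itself is spelled out in Amazon’s JSON-based States Language rather than in ordinary code. A stripped-down, hypothetical definition, with made-up ARNs and state names, looks something like this:

    {
      "Comment": "Hypothetical two-step checklist workflow",
      "StartAt": "LoadClient",
      "States": {
        "LoadClient": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:loadClient",
          "Next": "SendReminder"
        },
        "SendReminder": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:sendReminder",
          "End": true
        }
      }
    }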

Google Cloud Functions and Firebase

If getting rid of the hassle of configuring servers is your goal, Google Cloud has a number of services that offer various amounts of freedom from things like needing a root password or even using a command line at all.

Starting with the Google App Engine in 2008, Google has been slowly adding different “serverless” options with various combinations of messaging and data transparency. One called Google Cloud Pub/Sub hides the messaging queue from you so all you need to do is write the code for the data producer and consumer. Google Cloud Functions offers event-driven computation for many of the major products including some of the marquee tools and APIs. And then there’s Google Firebase, a database on steroids that lets you mix JavaScript code into a data storage layer that delivers the data to your client.

Of these, Firebase is the most intriguing to me. Some suggest that databases were the original serverless app, abstracting away the data structures and disk storage chores to deliver all of the information through a TCP/IP port. Firebase takes this abstraction to the extreme by also adding JavaScript code and messaging to do almost everything you might want to do with the server-side infrastructure including authentication. Technically it’s just a database but it’s one that can handle much of the business logic and messaging for your stack. You really can get away with a bit of client HTML, CSS, JavaScript, and Firebase.

You might be tempted to call Firebase’s JavaScript layers “stored procedures,” just like Oracle did, but that would be missing the bigger picture. The Firebase code is written in JavaScript so it will run in a local version of Node.js. You can embed much of the business logic in this layer because the world of Node is already filled with libraries for handling this workflow. Plus you’ll be enjoying the pleasures of isomorphic code that runs on the client, the server, and now the database.
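
As a flavor of what that Node layer looks like, here is a minimal sketch of an HTTPS endpoint built with the firebase-functions and firebase-admin packages that stashes a message in the Realtime Database; the path and field names are invented for illustration:

    const functions = require('firebase-functions');
    const admin = require('firebase-admin');
    admin.initializeApp();

    // Hypothetical HTTPS endpoint that stores a message in the Realtime Database.
    exports.addMessage = functions.https.onRequest((req, res) => {
      const original = req.query.text || '';
      admin.database().ref('/messages').push({ original })
        .then(() => res.status(200).send(`Saved: ${original}`))
        .catch((err) => res.status(500).send(err.toString()));
    });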

The part that caught my eye was the synchronization layer built into Firebase. It will synchronize copies of objects from the database throughout the network. The trick is that you can set up your client app as just another database node that subscribes to all of the changes for the relevant data (and only the relevant data). If the data changes in one place, it changes everywhere. You can avoid all of the hassles of messaging and concentrate on just writing the information to Firebase because Firebase will replicate it where it needs to be.
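
On the client, that subscription is only a few lines with the Firebase Web SDK; the data path and the renderScoreboard function below are hypothetical stand-ins for your own data and UI code:

    // Client-side sketch using the Firebase Web SDK.
    const db = firebase.database();
    const scores = db.ref('scores/game42');

    // Fires once with the current value, then again on every remote change.
    scores.on('value', (snapshot) => {
      renderScoreboard(snapshot.val()); // hypothetical UI function
    });

    // Writing from any client (or a server-side function) fans out to every listener.
    scores.set({ home: 3, away: 2 });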


Google Firebase provides an error console that shows a timeline for good and bad events as they’ve unfolded.

You don’t need to focus on just Firebase. The more basic Google Cloud Functions is a simpler approach to embedding customized code throughout the Google cloud. At this time, Cloud Functions is largely just an option for writing Node.js code that will run in a pre-configured Node environment. While the rest of the Google Cloud Platform supports a wide variety of languages—from Java and C# to Go, Python, and PHP—Cloud Functions is strictly limited to JavaScript and Node. There have been hints that other language options are coming and I wouldn’t be surprised if they appear soon.
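
In the meantime, a basic HTTP-triggered Cloud Function is about as small as serverless code gets; Google hands you Express-style request and response objects, so a hello-world sketch looks roughly like this:

    // Minimal HTTP-triggered Cloud Function; no server or routing to set up.
    exports.helloHttp = (req, res) => {
      const name = req.query.name || 'world';
      res.status(200).send(`Hello, ${name}`);
    };

Deploying it is roughly a one-liner too, something like gcloud functions deploy helloHttp --trigger-http, though the exact flags have shifted as the service has matured.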

Google Cloud Functions does not reach as deeply into the Google Cloud as AWS Lambda reaches into AWS, at least at this point. When I poked around looking at building a function to interact with Google Docs, I found that I would probably have to use the REST API and write the code in something called Apps Script. In other words, the Google Docs world has its own REST API that was serverless long before the buzzword was coined.

It’s worth noting that Google App Engine keeps going strong. In the beginning, it just offered to spin up Python applications to meet the demand of anyone coming to the website, but has been extended over the years to handle many different language runtimes. Once you bundle your code into an executable, the App Engine handles the process of starting up enough nodes to handle your traffic, scaling up or down as your users send in requests.

There continue to be a few hurdles to keep in mind. As with Cloud Functions, your code must be written in a relatively stateless way, and it must finish each request in a limited amount of time. But App Engine doesn’t toss away all the scaffolding or forget everything between requests. App Engine was a big part of the serverless revolution and it remains the most accessible to those who keep one foot back in the old school method of building their own stack in Python, PHP, Java, C#, or Go.
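
A minimal App Engine app in Node.js, for instance, is just an ordinary web server that reads the port App Engine assigns and keeps no state between requests. This sketch assumes Express is declared in package.json:

    // Minimal App Engine app: App Engine supplies the port and handles the scaling;
    // the app just has to stay stateless between requests.
    const express = require('express');
    const app = express();

    app.get('/', (req, res) => res.send('Hello from App Engine'));

    const port = process.env.PORT || 8080;   // App Engine sets PORT for you
    app.listen(port, () => console.log(`Listening on ${port}`));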

Microsoft Azure Functions

Microsoft, of course, is working just as hard as the others to make sure that people can do all of these clever serverless things with the Azure cloud too. The company has created its own basic functions for juggling events—the Azure Functions—and built some sophisticated tools that are even more accessible to semi-programmers.

The biggest advantage Microsoft may have is its collection of Office applications, the former desktop executables that are slowly but surely migrating into the cloud. Indeed, one accounting of cloud revenue put Microsoft ahead of Amazon, in part by lumping some of its Office revenue into the ephemeral rubric of “cloud.”

One of the best examples from the Azure Functions documentation shows how a cloud function can be triggered when someone saves a spreadsheet to OneDrive. Suddenly the little elves in the cloud come alive and do things to the spreadsheet. This is bound to be a godsend to IT shops supporting teams that love their Excel spreadsheets (or other Office docs). They can write Azure Functions to do practically anything. We often think of HTML and the web as the only interface to the cloud, but there’s no reason it can’t be document formats like Microsoft Word or Excel.
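
The code side of an Azure Function is a plain Node.js module that receives a context object; the binding that triggers it, whether an HTTP request, a queue message, or something richer like an Office 365 connection, is declared in the accompanying function.json rather than in the code. A rough, hypothetical HTTP-triggered sketch:

    // Hypothetical HTTP-triggered Azure Function. The trigger binding lives
    // in function.json, so this module only worries about the logic.
    module.exports = function (context, req) {
      const name = (req.query && req.query.name) || 'world';
      context.log(`Processing a request for ${name}`); // shows up in the portal's log stream
      context.res = { status: 200, body: `Hello, ${name}` };
      context.done();
    };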

Azure’s Logic Apps caught my eye as one of the tools that lets you fill out forms instead of worrying about semantics and syntax. You still need to think like a programmer and make smart decisions about abstractions and data, but you might convince yourself that you’re not writing “code” as much as filling out forms.


Microsoft Azure’s web IDE lets you write your Azure function, run it, and debug it by inserting logging calls.

Like Amazon’s Step Functions, Logic Apps is meant to encode “workflows,” a buzzword that is slightly more complex than the average “function” that’s tossed around, thanks to the availability of some state. You can still write logic that links various functions and connectors in a flowchart-like way, but you don’t spell it out as much in an official computer language.

The big advantage of Logic Apps is the pre-built “connectors” that drill down into some of the bigger Microsoft and third-party apps out there. You can effectively push and pull data between your Logic Apps and the likes of Salesforce, Twitter, and Office 365. These connections will be very valuable to company IT folks who can now link together outside tools by writing Logic Apps, just as they created shell scripts in the past.

Another intriguing corner of Azure is Azure Cosmos DB, a database that is both NoSQL and SQL at the same time. Microsoft has duplicated the APIs for Cassandra and MongoDB, so you can push information in and out without rewriting your Cassandra or MongoDB code. Or if you want to write SQL, you can do that too. Cosmos DB keeps things straight and builds out indexes for everything to keep it running quickly. This makes it an intriguing central nexus if you’ve got lots of SQL and NoSQL code that you want to make work together. Or maybe you just want to leave the door open to different approaches in the future.
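
Because Cosmos DB speaks the MongoDB wire protocol, the stock Node.js MongoDB driver works against it unchanged. In this sketch, which assumes the 3.x driver, only the connection string, hypothetical here, points at Azure instead of a Mongo server of your own:

    // Sketch assuming the 3.x MongoDB Node.js driver; the account, key,
    // database, and collection names are all hypothetical.
    const { MongoClient } = require('mongodb');
    const uri = 'mongodb://myaccount:<key>@myaccount.documents.azure.com:10255/?ssl=true';

    MongoClient.connect(uri, (err, client) => {
      if (err) throw err;
      const tasks = client.db('todo').collection('tasks');
      tasks.insertOne({ title: 'compare serverless clouds', done: false })
        .then(() => tasks.find({ done: false }).toArray())
        .then((openTasks) => {
          console.log(openTasks);
          client.close();
        });
    });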

Serverless cloud comparison

Which serverless platform is right for you? Writing basic functions is pretty much the same in all three silos, but there are differences. The most obvious may be the available languages, because each vendor plays favorites once it’s done supporting Node.js and JavaScript. It’s not surprising that you can write C# for Microsoft’s Azure, but its support for F# and TypeScript is unique. Amazon embraces Java, C#, Python, and Go. Google is strictly limited to JavaScript for its basic functions for now, although it supports many more languages in the App Engine.

The hardest part of comparing the serverless clouds is getting a handle on price and speed because so much more is hidden under the hood. I often felt like a crazy spender when I was spinning up VM instances that were priced in pennies per hour. Now the providers are slicing the salami so thinly that you can get hundreds of thousands of function invocations for less than a dollar. You’ll be tossing around the word “million” like Dr. Evil in the “Austin Powers” movies. Of course, this apparent cheapness soon bamboozles the rational, budget-conscious part of our brain, just like when we’re on vacation in a strange country with wildly different denominations of currency. Soon you’ll be ordering up another million database calls, just like that time you bought the bar in Cancun a round of drinks because you couldn’t divide fast enough to figure out what it really cost. When the cloud was selling you a raw virtual machine, you could guesstimate what you get from looking at the amount of RAM and CPU power, but in the world of serverless you’ve got no real clue what’s going on.

It’s worth noting that the serverless model pretty much forces you to stash data in the local cloud database because you’re not really allowed to keep any state with your code. You’ve got to trust these back ends. Your function must run without any local caches or configuration because other versions are always being created and destroyed. So the database glue code fills up your code like those vines in the Upside Down in “Stranger Things.”

The only real way to compare costs is to build out your app on all of the platforms, a daunting challenge. It’s possible to move some of the code between the three because they all run Node.js, but even then you’ll encounter differences that you just need to live with. (For instance, you handle HTTP requests directly in Microsoft and Google, but through the API Gateway in AWS.)
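
To see one of those differences in miniature, compare the hello-world Cloud Function sketched earlier with the same logic behind API Gateway’s Lambda proxy integration, where the function hands back a response-shaped object instead of writing to a res object:

    // The same hello-world, reshaped for API Gateway's Lambda proxy
    // integration (the function and parameter names are hypothetical).
    exports.handler = (event, context, callback) => {
      const params = event.queryStringParameters || {};
      callback(null, {
        statusCode: 200,
        headers: { 'Content-Type': 'text/plain' },
        body: `Hello, ${params.name || 'world'}`,
      });
    };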

The good news is that you don’t need to be so paranoid. In my experiments, many basic apps use next to no resources and you can go a long way on the free layers all three offer to lure in poor developers. The serverless model really is saving us a bundle on the overhead. Unless you’re the type that ran your servers at close to full load all of the time and got free air conditioning, it’s likely you’ll end up saving some big money by moving to a serverless approach. You’ll be saving so much money that you won’t want to argue whether it’s $1 per million invocations or $1.50.

There is a deeper problem. If you ever get fed up enough with any of these clouds, you’re pretty much stuck. It’s not like it’s easy to just pull your code off and run it on a commodity server somewhere else, something you might do with a Docker container filled with your own code. If you’re lucky, you can duplicate the same raw architecture and basic JavaScript functions, but after that you’ll be rewriting database glue code all over the place. All three of the companies have their own proprietary data storage layers.

It’s also not clear just what happens when things go wrong. Running your own server means that, well, your boss can choke your neck when it doesn’t work. It’s not so clear what happens in this space. One page at Google contains this benign warning, “This is a beta release of Google Cloud Functions. This API might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.”

Amazon’s terms of service have gotten better than they were when the company first entered the space, but they still include warnings to keep in mind like, “We may delete, upon 30 days’ notice to you and without liability of any kind, any of Your Content uploaded to AWS Lambda if it has not been run for more than three (3) months.” Make sure your code runs if you want to keep it around. Warnings like this are certainly fair (I know that my old Lambda functions won’t ever be used again), but they show how you’re surrendering some control.

Microsoft offers a service level agreement for Azure services that promises financial compensation for downtime via service credits. Will these apply when your functions go down? Perhaps—as long as you don’t wander into some beta area of the service. It’s worth spending a bit of time paying attention to these details if you’re going to be building something more mission critical than a chat room for kids.

In most cases, the real comparison you’ll want to do is between the other features and services of the Amazon, Google, and Microsoft clouds, not the function layer. The ability to trigger Azure Functions with Office files on OneDrive will be a big attraction if you support people who love their Office applications. Google Firebase makes it easy to use functions to provide supporting services like messaging and authentication to web apps. AWS Lambda taps into so many Amazon services, it seems the sky really is the limit.

It is technically possible to mix and match all of these clouds and functions because they all speak the same PUT and GET language of HTTP API calls. There is no reason you couldn’t whip together an app filled with microservices that mix the best of the three clouds. But you’ll end up with greater latency as the packets leave the local clouds and travel the wilderness of the open Internet. And then there will be slight differences in parsing and structure that make it simpler to just sit in one company’s warm embrace.

So it probably makes sense to stay in the safe section of a single cloud, at least when it comes to apps that are fairly interconnected. Do you really like Google Maps and want to use them for your project? Then you might as well use Google Cloud Functions, even if in your heart of hearts you’d rather use F# with Microsoft’s Azure Functions. The same goes for Amazon’s voice recognition or Google’s image analysis API or any of the dozens of different services and machine learning APIs. The functions aren’t so important—it’s what they link together that really matters.