Cloud Computing

AWS Serverless Architecture In Practice

by Jake Bennett

Five key takeaways for designing, building and deploying serverless applications in the real world


The term “serverless architecture” is a recent addition to the technology lexicon, coming into common use within the last year or so, following the launch of AWS Lambda in 2014. The term is both quizzical and provocative. Case in point: while I was explaining the concept of serverless architecture recently, a seasoned systems engineer stopped me mid-sentence—worried that I had gone insane—and asked: “You realize there is actual hardware up there in the cloud, right?” Not wanting to sound crazy, I said yes. But secretly I thought to myself: “Yet, if my team doesn’t have to worry about server failures, then for all practical purposes, hardware doesn’t exist in the cloud—it might as well be unicorn fairy dust.” And that, in a nutshell, is the appeal of serverless architecture: the ability to write code on clouds of cotton candy, without a concern for the dark dungeons of server administration.

But is the reality as sweet as the magical promise? At POP, we put this question to the test when we recently deployed a production app built on a serverless architecture for one of our clients. But before we review the results, let’s dissect what serverless architecture actually is.

AWS Lambda is a pure compute service that allows you to deploy a single thread of execution. With Lambda, you simply write a function (in Python 2.7, JavaScript/Node.js 4.3 or Java 8), deploy it to AWS and get charged based on the memory you allocate and the time your function runs. Brilliantly simple, right? Yes, at first, but then the questions start to arise. How do you actually call a Lambda function? How do you manage return values and exceptions? Applications typically contain hundreds of functions; do you deploy all of them as Lambda functions? How should you structure a serverless app given the extreme level of deployment granularity that Lambda provides?
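To make the unit of deployment concrete, here is a minimal sketch of what a Lambda function looks like in Python. The handler name and event fields are illustrative assumptions, but the shape is the real contract: Lambda passes your function an event object and a context, the return value becomes the invocation’s result, and an uncaught exception marks the invocation as failed.

```python
def handler(event, context):
    """Entry point invoked by AWS Lambda.

    `event` is the (usually JSON-derived) input dict; `context` carries
    runtime metadata such as the remaining execution time. The returned
    value is serialized back to the caller; raising an exception instead
    reports the invocation as an error.
    """
    name = event.get("name", "world")  # hypothetical event field
    return {"message": "Hello, %s!" % name}
```

Calling it is equally indirect: you never run this process yourself. The function is invoked by AWS in response to an API call, an event source (S3, DynamoDB, API Gateway, etc.), or a schedule.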

To help make sense of it, think of Lambda functions as nanoservices. They should be coarser-grained than the code-level functions you typically use to structure your app, and you shouldn’t expose all of your internal application functions as Lambdas. However, they are finer-grained than a typical microservice, even though, like a microservice, they are mini-servers in themselves that execute their own code. Behind the scenes, Lambda functions run in containers, clustered and fully managed by Amazon. As a result, each Lambda function is stateless and runs in an isolated process, making it possible for AWS to scale up or down based on usage.
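That statelessness has a practical nuance worth seeing in code. When AWS reuses a “warm” container for consecutive invocations, module-level variables survive between calls; on a cold start they are reinitialized. The sketch below (names are illustrative) simulates that behavior: you can exploit container reuse for caches, but you must never rely on it for correctness, because AWS may recycle the container at any time.

```python
# Module-level state: initialized on cold start, then retained for as
# long as AWS keeps this container warm. Treat it as a best-effort
# cache, never as durable storage.
_invocation_count = 0

def handler(event, context):
    """Illustrative handler showing state surviving warm invocations."""
    global _invocation_count
    _invocation_count += 1  # increments across calls in the same container
    return {"warm_invocations": _invocation_count}
```

Anything that must persist across invocations belongs in an external store such as DynamoDB or S3, not in the function’s process.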