Five key takeaways for designing, building and deploying serverless applications in the real world
The term “serverless architecture” is a recent addition to the technology lexicon, coming into common use within the last year or so, following the launch of AWS Lambda in 2014. The term is both quizzical and provocative. Case in point: while I was recently explaining the concept of serverless architecture to a seasoned systems engineer, he stopped me mid-sentence—worried that I had gone insane—and asked: “You realize there is actual hardware up there in the cloud, right?” Not wanting to sound crazy, I said yes. But secretly I thought to myself: “Yet if my team doesn’t have to worry about server failures, then for all practical purposes, hardware doesn’t exist in the cloud—it might as well be unicorn fairy dust.” And that, in a nutshell, is the appeal of serverless architecture: the ability to write code on clouds of cotton candy, without a concern for the dark dungeons of server administration.
But is the reality as sweet as the magical promise? At POP, we put this question to the test when we recently deployed an app in production utilizing a serverless architecture for one of our clients. However, before we review the results, let’s dissect what serverless architecture is.
To help make sense of it, first think of Lambda functions as nanoservices. They should be more coarse-grained than the code-level functions you typically use to structure your app; you shouldn’t expose every internal application function as a Lambda. At the same time, they are finer-grained than a typical microservice, even though, like a microservice, each one is a mini-server that executes its own code. Behind the scenes, Lambda functions run in containers that are clustered and fully managed by Amazon. As a result, each Lambda function is stateless and runs in an isolated process, which makes it possible for AWS to scale capacity up or down based on usage.
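To make the nanoservice idea concrete, here is a minimal sketch of what one such Lambda function might look like in the Python runtime. The handler signature (an event dict plus a context object) is the standard AWS Lambda contract; the function name, the event shape, and the greeting logic are illustrative assumptions, not details from the article. Note that the function holds no state between invocations, which is what allows AWS to run any number of copies in parallel.

```python
import json


def handler(event, context):
    """Entry point AWS invokes per request; no state survives between calls.

    The event payload and "name" field are hypothetical examples.
    In production, AWS supplies both the event and the context object.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }


# Invoked locally here purely for illustration:
response = handler({"name": "POP"}, None)
print(response["statusCode"], response["body"])
```

Because the handler is just a plain function with no server process wrapped around it, the same file can be unit-tested locally and then deployed unchanged, which is part of the "no server administration" appeal the article describes.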