If we could go back in time and start using public cloud in 2009, we’d probably be better off today. The AWS beta started in 2006 and was entirely API-driven, with neither a console nor a command-line interface to ease interaction with the service the way we take for granted now. Three years later, the platform was more mature. Early adopters started to solve real problems with it, padding their resumes and bringing value to their organizations in ways that seemed impossible before.
Serverless computing in 2018 is about where cloud computing was in 2009. But what exactly does serverless mean, and what are some easy ways to get started with it?
Function-as-a-Service: making serverless architectures possible
As cool as the technology is, serverless computing is a terrible name because (spoiler alert) there are, in fact, servers under the hood. The name comes from the idea that developers don’t have to worry about the server, or even a container, as a unit of compute any more as public cloud services like AWS Lambda, IBM OpenWhisk, Google Cloud Functions and Azure Functions handle the details.
A better way to think about it: serverless is the software architecture built on top of function-as-a-service (FaaS), the underpinning that lets small pieces of software be loaded into memory from disk and executed in a few milliseconds.
Imagine having a number of containers spun up on standby and loaded with a language runtime like Node.js, Java, Go, .NET or Python, but with no application code in them yet. Only when some event occurs, like a database table getting written or a file showing up in object storage, does the piece of application code in the form of a function get loaded into that standby container. It’s executed there, and when done, the entire container is deleted, although there are scenarios where it might be cached for later reuse. The public cloud provider only charges you for the milliseconds of compute time and memory used, but otherwise your function stays on disk waiting to be executed at a much lower cost.
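Concretely, the unit of deployment in this model is just a function with a conventional signature. A minimal sketch in Python, following AWS Lambda’s handler convention, might look like the following; the S3-style event shape and the object key are illustrative, not taken from a real deployment:

```python
import json

# A Lambda-style handler: a stateless function that receives the
# triggering event and returns a result. The platform, not your code,
# decides when a container is spun up, reused, or thrown away.
def handler(event, context=None):
    # Pull the object key out of a hypothetical S3-style event record
    key = event["Records"][0]["s3"]["object"]["key"]
    return {"statusCode": 200, "body": json.dumps({"processed": key})}

# Simulate the platform invoking the function when a file lands in a bucket
sample_event = {"Records": [{"s3": {"object": {"key": "photos/cat.jpg"}}}]}
result = handler(sample_event)
```

Note that nothing in the function itself knows about servers, containers, or scaling; it only knows the event it was handed.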
In this design, the functions are little pieces of code without state that take inputs based on the event that triggers them. By chaining events and functions together, a larger application architecture is formed—but one whose pieces are much smaller and easier to iterate over. Consider the scenario where someone drops an image file into an object storage bucket. That act triggers three functions to run in parallel, each responsible for taking the original image and creating thumbnails of three different sizes. As each of those functions writes its results to a second object storage bucket, another function takes the result and caches it on a CDN.
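The thumbnail scenario above can be sketched as a toy, in-memory event bus, with three resize functions subscribed to writes on a source “bucket” and a CDN-forwarding function subscribed to writes on a results “bucket.” All of the names here (buckets, sizes, the CDN cache) are illustrative stand-ins, not real AWS APIs:

```python
from collections import defaultdict

subscribers = defaultdict(list)   # event name -> list of functions
results_bucket = {}               # simulated object storage
cdn_cache = {}                    # simulated CDN

def on(event_name, fn):
    subscribers[event_name].append(fn)

def emit(event_name, payload):
    for fn in subscribers[event_name]:
        fn(payload)

def make_resizer(size):
    # Each resizer knows nothing about the others; it only reacts to
    # a source-bucket write and emits its own results-bucket write.
    def resize(event):
        key = f"{event['key']}@{size}px"
        results_bucket[key] = f"thumbnail:{size}"
        emit("results-bucket:write", {"key": key})
    return resize

def forward_to_cdn(event):
    # Triggered by results-bucket writes, regardless of who wrote them
    cdn_cache[event["key"]] = results_bucket[event["key"]]

for size in (64, 128, 256):
    on("source-bucket:write", make_resizer(size))
on("results-bucket:write", forward_to_cdn)

# Dropping an image into the source bucket triggers the whole chain
emit("source-bucket:write", {"key": "cat.jpg"})
```

Adding a fourth thumbnail size is a single extra `on("source-bucket:write", ...)` registration; none of the existing functions change, which is the decoupling the article describes.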
The first function is unaware of the second function and the second function is unaware of how the first function gets its work done. Instead of the component pieces being dependent upon each other with API contracts, they communicate with each other through event triggers. Extending this architecture with a fourth image thumbnail converter operates independently from the other three and leverages the same CDN forwarding function, making changes to the architecture far simpler.
Learning serverless with admin automation
The benefits that serverless provides come with tradeoffs. Typically each function execution is limited to five minutes, and available memory is restricted. Not every piece of functionality can be made stateless, so a database call to fetch context for a key passed in through the event trigger can eat into that five-minute limit. It takes a different way of thinking to realize the benefits within these new confines.
Fortunately, if you are already a public cloud user, you are likely already generating events all over the place with your day-to-day operations. The low-hanging fruit for learning serverless is to use those events to find ways to automate common administrative functions.
For example, perhaps you’d like to automate the DNS name of EC2 instances as they come up. You can set a trigger on CloudWatch to fire once the instance reaches a running state that will execute a Lambda function, which can talk to Route 53 to issue a new subdomain name off of your custom domain. In a more complex version of this scenario, maybe you use a DynamoDB table to track unique DNS name usage before you make the call to Route 53 within the Lambda.
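A minimal sketch of that Lambda body might look like the following. The helper builds the Route 53 UPSERT payload; the actual boto3 call is shown but commented out, since it requires real AWS credentials and a hosted zone. The zone name, IP address, and instance ID are hypothetical placeholders (a real version would look up the instance’s address from the EC2 API rather than hard-coding it):

```python
# Build a Route 53 change batch that maps an instance ID to an A record.
def dns_change_batch(instance_id, private_ip, zone="example.internal."):
    subdomain = f"{instance_id}.{zone}"
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": subdomain,
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": private_ip}],
            },
        }]
    }

def handler(event, context=None):
    # CloudWatch Events delivers the instance ID in event["detail"]
    instance_id = event["detail"]["instance-id"]
    batch = dns_change_batch(instance_id, "10.0.0.12")  # placeholder IP
    # In a real Lambda you would submit the batch, e.g.:
    # import boto3
    # boto3.client("route53").change_resource_record_sets(
    #     HostedZoneId="Z...", ChangeBatch=batch)
    return batch

batch = handler({"detail": {"instance-id": "i-0abc123"}})
```

The DynamoDB variant would simply add a conditional write against the tracking table before the Route 53 call, failing fast if the name is already taken.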
Simple tasks, like those you might ordinarily handle with bash shell scripts inside an EC2 instance, can be externalized and more easily reused as Lambda functions. This lets you take simple, known tasks and use them as low-risk learning opportunities for this new programming paradigm.
From there you might graduate to batch jobs, and then to full-blown multi-tier web applications.
FaaS and serverless as the future?
Aside from slicing usage-based costs into milliseconds, FaaS and serverless show promise as the future of programming because they decouple components from one another, letting development teams architect smaller, more independent pieces that can be iterated on more quickly. That will likely lead to innovation gains not possible even with microservices and containers, but it’s still early. Testing and debugging tools, solid CI/CD pipelines, and other tooling that is commonplace in the VM and container world today are largely missing. Still, it’s time to learn serverless with some simple use cases so you can stay ahead of the curve, both personally and for your organization, and be positioned to take advantage of the next round of innovation gains.
This article is published as part of the IDG Contributor Network.