Dude, Where’s My Server? CI/CD for Serverless

When infrastructure is spun up per request, your pipeline becomes supercritical.

By Ravi Lachhman
July 2, 2019

Serverless infrastructure has been gaining popularity in recent years, ushering in the next generation of right-sizing infrastructure per user request. Serverless function invocations are typically 1:1 with user requests, though in recent years, thanks to technology like AWS Step Functions, this can be 1:many. With this bloom of serverless technologies, the reliance on our pipelines becomes even more critical.

The Rise of Serverless

Building business logic is usually just half of the equation for a software engineer. The other half is navigating application infrastructure. In the Java world, tuning the application server/Java runtime is not uncommon. Having an understanding of what your application is doing is just as important as where/what is running the application.

The other half, the where/what that runs your application, takes away from the core innovation work that software engineers strive for. Imagine a paradigm where you can focus on just the function or business logic without worrying too much about where your application is running, or even about scaling the function. You are not alone in wanting that; thus, the serverless boom is underway.

The benefit of a serverless function is speed: where a Docker image can take tens of seconds to turn into a running container, a serverless invocation should take sub-second. When the function completes, the resources are returned to the available pool, ready to spin back up again.

More than just Lambda

Those starting their serverless journey with any sort of cursory Googling will come across Amazon's Lambda. Amazon introduced its Lambda service in November 2014, which for most people introduced serverless to the masses.

There are several alternatives to AWS Lambda, with the major cloud vendors each offering function services. Notable mentions are Apache OpenWhisk and, more recently, Google's Knative, which builds on a Kubernetes and Istio stack and has been attracting a growing share of workloads. Serverless infrastructure has also been finding its way into Platform-as-a-Service offerings as organizations look to provide the infrastructure that modern developers require.

Serverless functions typically follow a trigger, execute, output model. The trigger indicates the start of execution: time for the function to warm up and do all the magic the engineer requires. At the end of execution, an output of some sort is produced, be it held in memory or persisted somewhere.

Give me Serverless or Give me Death (Dearth)?

If we can successfully decompose our applications into the trigger, execute, and output model, we can kiss complexity goodbye. If only life were so easy; rarely do serverless functions live in a vacuum. Organizations might shift existing workloads or build newer ones with a serverless component. Eventually, there will be a need for state, and thus the server re-appears.

Because deploying and triggering a function is, from a technical standpoint, low overhead compared to more traditional approaches, iteration tends to be higher. Iteration is the birth of innovation in the software world; we never get things right the first time.

With this rapid iteration, we might start to introduce smaller and smaller changes, which in theory builds confidence in the function. By the time we wave the white flag and start accepting workloads, we need to be cautious that we did not take shortcuts that will come back to bite us. As with any technology, though, if you have not tried your own serverless deployment, getting started is simple.

When to Serverless?

If you have not written a Lambda before, freeCodeCamp has one of the best tutorials out there for creating a NodeJS Lambda. In their example, they walk through a palindrome Lambda function, soup to nuts.

With the basics out of the way, and now that serverless functions are "never odd or even" (see what I did there, palindrome!), a word of caution: just as with a Linux Container aka Docker, re-writing your entire application stack in serverless won't solve all your problems.

Hackernoon has one of the best articles I have seen on the technical edges one firm hit while going totally serverless with AWS Lambda in production. The author goes through how the workload is shorter-lived and more transitory than a Linux Container: basically, a race for a response, aka the output in our trigger-execute-output equation.

The author of the Hackernoon article also talks about balancing cold starts; this is the time the function takes to warm up before it starts executing. The application container, aka the Lambda container, might be available to start executing quickly, but when you start layering on dependencies, especially if there is a need to communicate with or persist to an external source, e.g. a Redis/Memcached cluster, your user experience can suffer.

Cost is not to be forgotten either. An excellent Medium article shows the complexities of Lambda cost calculation. A Lambda's cost by itself comes down to number of executions + data transfer + compute time. As the AWS services start to pile on, the cost can climb with CDN, API Gateway, storage, etc. for your functions. No need to fret; get cracking, but don't forget our friend the SDLC as your adoption grows.
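To make that formula concrete, here is a back-of-the-envelope sketch of compute-only Lambda cost. The per-request and per-GB-second rates below are illustrative assumptions, so check current AWS pricing, and note this deliberately omits data transfer and the surrounding services:

```javascript
// Rough Lambda compute cost using illustrative (assumed) rates:
//   $0.20 per 1M requests, $0.0000166667 per GB-second of compute.
function lambdaCost({ invocations, avgDurationMs, memoryMb }) {
  const requestCost = (invocations / 1_000_000) * 0.20;
  const gbSeconds = invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  const computeCost = gbSeconds * 0.0000166667;
  return requestCost + computeCost; // excludes data transfer, API Gateway, etc.
}

// e.g. 5M monthly invocations, 120 ms average duration, 512 MB of memory
const monthly = lambdaCost({ invocations: 5_000_000, avgDurationMs: 120, memoryMb: 512 });
console.log(`~$${monthly.toFixed(2)} per month`);
```

Cheap at that scale, but memory size and duration multiply together, which is where the surprises hide.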

Don’t Forget the SDLC!

With such a rapid way to see your hard work, the next generation of logic you dreamed up, actually executing, it is too easy to forget our SDLC discipline. The beauty and draw of serverless infrastructure are lower overhead and rapid change. The adage that what separates a junior from a senior software engineer is knowing where the rough edges are, and where items can go wrong, proves true here.

The rigor we apply to non-serverless infrastructure, systematically building confidence into the function with tests and code coverage, for example, should become part of your deployment pipeline. True, the provisioning piece of the application infrastructure comes about much quicker and, if designed right, in a much more elastic manner. But this velocity will test your existing, or potentially new, pipelines. Thus, your pipeline becomes even more critical in this new paradigm of serverless computing.

From Zero (or more) to Serverless Hero

Curious where to start, how to include a Lambda as part of your application stack, or how to supercharge and build confidence into your Lambda deployments? Look no further than Harness.

Including a Lambda as part of your pipeline is simple with Harness. We have produced a quick video outlining the steps to make a Lambda part of your pipeline, and we also have a deeper dive into Lambdas in our documentation. "Murder for a jar of red rum"? Well, now you don't have to with our Community Edition, which is free forever. Join our Community today!

-Ravi
