Serverless Series Part 2/3 – The Dark Side of Serverless

I do not fear the serverless like you do. I have brought portability, security, and observability to my new empire… I mean my organization!

By Ravi Lachhman
September 11, 2019

Our friend Anakin Skywalker, aka Darth Vader, telling Obi-Wan Kenobi about his accomplishments on the Dark Side is a memorable part of Star Wars Episode III, "Revenge of the Sith." The scene plays like two dueling engineers having a heated discussion over whether a design paradigm is the future or not.

Like any shift in technology, battle lines can be drawn between the old guard and the new. For those who remember the Java/JEE vs NodeJS arguments, the debate could feel like pulling teeth. In the end, though, it was not Java/JEE vs NodeJS; the two were better together.

We learned in part one of this serverless blog series about the agility and low overhead that a serverless invocation provides. But like any technology, serverless has drawbacks, and these drawbacks are important to keep in mind when architecting your applications. Like Kubernetes and Hadoop, serverless will solve all of your problems, right? The first design consideration is where to run these new-fangled serverless functions.

Portability problems

Free yourself from the shackles of infrastructure and execute your code on a serverless platform. The allure certainly sounds amazing. Serverless, though, is itself infrastructure, not some magic platform that runs your code in the ether.

Serverless platforms are opinionated about how your code should run. NodeJS is NodeJS and Java is Java, but remember the three parts of the serverless lifecycle: trigger, work, and output. The work part is your code. On AWS Lambda, for example, you are limited to certain versions of languages. Want to use Java? On AWS Lambda the supported version is JDK 1.8, while at the time of this blog JDK 12 is the latest.
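To make the trigger/work/output split concrete, here is a minimal sketch of the "work" part in the shape of an AWS Lambda-style Python handler. The event payload and field names are illustrative assumptions, not a specific AWS trigger format.

```python
# Minimal sketch of the serverless lifecycle: a trigger delivers an event,
# the handler does the work, and the return value is the output.
import json

def handler(event, context):
    # Work: transform the incoming payload.
    name = event.get("name", "world")
    # Output: the platform serializes the return value for the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }

# Local invocation for illustration; on AWS, the platform supplies event/context.
print(handler({"name": "serverless"}, None))
```

On a real platform you never call the handler yourself; the trigger does, which is exactly where portability starts to fray.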

The parts that are difficult to port from provider to provider are the trigger and how the serverless function is scaled. Those pieces are specific to each cloud vendor, and whether you host serverless infrastructure yourself or use a platform-as-a-service, they are all different.
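A quick sketch of why the trigger side is not portable: the same work needs a different entry-point signature per provider. AWS Lambda hands your handler an event dict and a context object, while a Google Cloud Functions HTTP function receives a Flask-style request. The adapter and field names below are assumptions for illustration.

```python
# The portable part: your business logic.
def do_work(name):
    return f"Hello, {name}!"

# AWS Lambda style: the trigger supplies an event dict and a context object.
def aws_handler(event, context):
    return {"statusCode": 200, "body": do_work(event.get("name", "world"))}

# Google Cloud Functions (HTTP) style: the trigger supplies a Flask request.
def gcf_handler(request):
    name = request.args.get("name", "world")
    return do_work(name)
```

Keeping the business logic in its own function and writing thin per-provider adapters is one way to soften, though not eliminate, the lock-in.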

Potentially there is some reprieve in a more ubiquitous technology such as Kubernetes. Take Google's Knative, for example: a combination of Istio, Kubernetes, and serverless. Knative is important because it exposes a lot of the moving pieces inside a serverless implementation. Even with Knative, though, we are still laden with lots of choice and opinion.

Smells like infrastructure

Today we are used to choice in our infrastructure, and serverless implementations are no different. Setting aside the array of public cloud offerings such as AWS Lambda, Google Cloud Functions, and Azure Functions, there are certainly ways to bring serverless more under your control.

Apache OpenWhisk is an Apache Software Foundation project whose goal is to provide more generic serverless infrastructure. Apache OpenWhisk can deploy onto Mesos or Kubernetes as the orchestrator.

Honorable mentions would also be OpenFaaS [functions-as-a-service] and Galactic Fog. Google also has a few packagings of Knative, such as Cloud Run. The Knative ecosystem continues to expand with the recent addition of CloudState from Lightbend.

Whichever of these more generic serverless infrastructure choices you make, you or your organization will have to maintain the serverless infrastructure. The rapid and sometimes unpredictable way that functions scale requires sufficient cluster capacity to handle the spikes. It can seem like we are shifting complexity from one area to the next.

Shifting complexity

I have had the opportunity to give a few talks on container-based vs serverless-based workloads, and these two slides explain the difference using tacos. Yum!

Tacos and Serverless

Each request can cause the function to scale differently. Complexity is also shifted to a database cluster, in this example Redis.

Tacos and K8s

Simpler times with Kubernetes. The orchestrator is very portable if there is no customization.

Given that serverless functions are short-lived and can scale extremely rapidly, complexity moves to different parts of the stack. Like the first wave of Kubernetes-based workloads back in 2014, there is the question of how to maintain state.

In our taco example above, that state is delegated to an external system such as a Redis cluster. State becomes very critical once you have more than one function fulfilling a request.
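Here is a sketch of that delegation: because each invocation may land on a brand-new container, any state (here, an order counter for our taco stand) must live in an external store. An in-memory dict stands in for Redis so the snippet is self-contained; redis-py's get/set calls have a similar shape. All names are illustrative, not a real API.

```python
class InMemoryStore:
    """Stand-in for an external store such as a Redis cluster."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

def take_order(store, event):
    # The running order count lives in the store, never in a local variable,
    # because the next invocation may run in a fresh container.
    count = int(store.get("taco:orders") or 0) + 1
    store.set("taco:orders", count)
    return {"order_number": count, "filling": event.get("filling", "al pastor")}

store = InMemoryStore()
take_order(store, {"filling": "carnitas"})
print(take_order(store, {}))  # order_number increments across invocations
```

The design choice to push state outward is what keeps the function itself disposable, but it also means the store becomes a scaling concern of its own.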

Your function has a function

Like Docker in Docker [DinD], it can seem that abstraction knows no bounds. An early argument against serverless functions was that you could easily include too much logic in one function while trying to finish the work in a single pass. With advancements such as AWS Step Functions, though, calling more than one function, or a series of functions, is easier.
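The idea can be sketched as keeping each function single-purpose and letting an orchestrator pass one function's output to the next. In AWS, a Step Functions state machine does the hand-off; the local `run_chain` below is a stand-in for illustration, and the taco pricing is hypothetical.

```python
def validate(order):
    # Step 1: a small, single-purpose function.
    order["valid"] = bool(order.get("filling"))
    return order

def price(order):
    # Step 2: hypothetical flat pricing of 3 per taco for the sketch.
    order["total"] = 3 * order.get("quantity", 1) if order["valid"] else 0
    return order

def receipt(order):
    # Step 3: shape the final output.
    return {"paid": order["total"], "items": order.get("quantity", 1)}

def run_chain(order, steps=(validate, price, receipt)):
    # A state machine would perform this hand-off across separate invocations.
    for step in steps:
        order = step(order)
    return order

print(run_chain({"filling": "barbacoa", "quantity": 2}))  # {'paid': 6, 'items': 2}
```

Each step stays small enough to reason about on its own, which is exactly what the next paragraph's antipattern erodes.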

Because of the speed and ease of invoking a serverless function, the infamous Big Ball of Mud antipattern can spring up very quickly if left unchecked. The "serverless container" can instantiate really quickly. Sometimes even the underlying language of a serverless function comes into play, for example with a cold start.

Cold starts

A problem that I had not really thought much about in the non-serverless world is the cold start. In the Java app server world, short of a deployment or restart, you would not care too much about a cold start. Then came containers with our friend Docker, and yes, the cold start/bootstrap became more important, but adding sufficient capacity could help you overcome it. In other words, capacity that was already warm would take the load until additional capacity was needed.

Serverless, on the other hand, takes the cold start problem to the extreme. Every request is an invocation, and when a request finishes, the serverless container is destroyed. If you are using a language such as Java, the cold start problem is one to take careful design and infrastructure consideration for. As the aforementioned cold start article notes, there are tools such as Thundra that help keep your functions "warm." The flip side is that a warm function, in this case a warm AWS Lambda, will impact your billing.
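A common shape of the keep-warm pattern, sketched below: a scheduled trigger pings the function so a container stays resident, and the handler short-circuits on those pings before doing any real work. The `warmup` marker key is an assumption for the sketch; tools like Thundra package this idea for you.

```python
import json

def handler(event, context):
    # Short-circuit scheduled warm-up pings so they stay cheap and fast.
    if event.get("warmup"):
        return {"warmed": True}
    # Real work only runs for genuine requests.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

Note the trade-off the paragraph above describes: every warm-up ping is still a billed invocation.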

Even with these limitations, serverless infrastructure is ushering in next-generation workloads, and Harness is here to help.

Incremental serverless confidence with Harness

The beauty of Harness is the ability to make your pipelines flexible and robust. Creating a new workload, or finding an appropriate existing one, to leverage serverless is important, and your pipeline should be flexible enough to encompass both serverless and non-serverless workloads. With that flexibility, incrementally introducing serverless is easier than ever.

In part three of the series, we will take a deeper dive into the Harness Platform and serverless infrastructure, so stay tuned!

Cheers,

-Ravi
