What is Serverless? And why is it good?

November 12, 2018



Technology Director

NB: If you would rather someone read this to you, watch this video

Serverless is a software development architectural approach for things like APIs, web applications, or event-driven processing.

It’s called 'Serverless' because instead of managing your own hardware, you only have to write the code, and a vendor such as AWS or Microsoft will configure and host the infrastructure for you. So yes, there are (of course) servers somewhere; you just don’t have to concern yourself with their management. This is known as "functions as a service" (FaaS).

All you have to do is write and upload a function, put an API in front of it, and you’ve got a live, scalable, fault-tolerant API endpoint.

There are four main facilitators of Serverless: AWS Lambda, Microsoft Azure Functions, Google Cloud Functions, and IBM Bluemix OpenWhisk. They are all quite similar but have their own nuances and support different languages. For example, if you want to write an API in Swift, you’ll need to host it with IBM OpenWhisk. Here are some other examples:

  1. AWS if you're working in Node.js, Python, Java, C#, or Go
  2. Microsoft if you're working in Node.js, C#, Python, or PHP
  3. Google Cloud if you're working in Node.js
  4. IBM Bluemix if you're working in Node.js, Swift, Java, Go, PHP, or Python

Why is this good?

  1. Scalability: Serverless functions will scale on demand (per request, even) without you having to lift a finger in setup or configuration. Your stack is also fault-tolerant: you cannot break the infrastructure stack with a bad execution. The vendor will simply spin up new containers on demand.

  2. Cost effectiveness: You only pay per use, which means that when your product is under heavy load, you pay per millisecond of compute time, and when it’s not being used, you pay nothing at all. This model is ideal for testing MVPs or new products that haven’t yet shown commercial success. Even if your app is under heavy load, the pricing is relatively comparable to running your own infrastructure stack, and considerably cheaper than a Platform as a Service (PaaS) model.

  3. Maintenance and DevOps reduction: As you’re not provisioning your own instances, you no longer need to be concerned with firewalls, security patches, and routing logistics (to some degree). Developers can focus on writing the application rather than configuring the environment. The line between developers and DevOps becomes blurred, but a developer with some back-end knowledge should be able to maintain their own Serverless stack.

All of this paves the way for a very quick turnaround in getting productionised versions of your product up and running, live and immediately ready to take heavy traffic.

Why can this be bad?

It’s not bad - it’s great! However, some applications are not suitable for a Serverless architecture. Apps that require very low latency, such as trading systems, may not be suited to this design pattern.

Another objection to the Serverless design pattern is vendor lock-in, and this is a real concern to some degree: functions you write for Lambda will not port directly to Azure or Google. However, moving between cloud providers involves some lifting and reworking no matter how your product is architected. The good news is that there are tools to help you mitigate this. The Serverless framework lets you define your Serverless app structure, and this can be ported between vendors. If you develop your functions using a service like slspress, you can normalise your entry points for the functions no matter where they’re hosted, so the only updates you need to make are to the other services you might be using for persistence or messaging.

What is under the hood?

FaaS providers don't make their underlying architecture transparent, but there are some behavioural aspects shared by all providers that give an indication of how they are structured.

FaaS uses the concept of a warm or cold invocation for each function call. Providers serve requests from containers, but take a container offline when it has received no traffic for a while. When traffic next hits the function, the provider has to activate a container at runtime to serve the request. This is a 'cold start', and it takes a little longer to serve the request: anywhere from 2 to 12 seconds, depending on what language or runtime you are using. Java takes significantly longer because it needs to start the JVM, and can take up to 12 seconds.

Once a container has become active from a cold invocation, it will remain active for a short period of time (anywhere from 5 to 15 minutes) ready to serve other requests. These other requests are warm invocations (and will be the majority), because there is already a container prepared to serve requests.

Because warm invocations share a container, it is also possible to share state between function calls and keep data in memory. However, you will only be able to share state within one container, and there may be hundreds of containers (depending on the load). You must generally be quite careful with the state you share, as containers can be shut down at any time by the system, and there are no callbacks to manage a container shutdown gracefully (yet!)

What is the Serverless framework?

All Serverless apps can be created manually: you can upload your functions separately, create your API routes, and configure your data pipeline. However, this is quite inefficient, error-prone to reproduce, and doesn’t help you spin up new environments.

For this reason, most people use tooling to manage this for them, and the most popular is a framework called, simply, Serverless. It provides a configuration interface for your app, which is then used to manage the deployment on your chosen cloud provider. On AWS, it will create a CloudFormation stack, set up your API Gateway, deploy your Lambda functions, and configure all these services for you. Their website has all the information needed to get you started with this framework.
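As an illustrative sketch (the service name, region, and file names are assumptions), a minimal `serverless.yml` for a single-function HTTP API might look like this:

```yaml
# Minimal Serverless framework config for an AWS deployment.
service: hello-api

provider:
  name: aws
  runtime: nodejs8.10   # a current Lambda Node.js runtime at the time of writing
  region: eu-west-1

functions:
  hello:
    handler: handler.hello   # the exported function "hello" in handler.js
    events:
      - http:                # tells the framework to create the API Gateway route
          path: hello
          method: get
```

Running `serverless deploy` against a file like this packages the code, generates the CloudFormation stack, and wires up the API Gateway route and Lambda function together.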

What is SLSPress?

Slspress is a Node.js npm package which provides middleware to help you structure the routing of your Serverless or Lambda functions when developing an application. It follows similar principles to the popular Express framework and is currently available for AWS Lambda, but will soon support Azure and Google Cloud as well.

It helps you structure your logging, error handling, and any middleware and components you might want to share between function calls. The apps used on sls.zone are built on the slspress framework.

What's next?

sls.zone is Reason's resource for sharing our best practices and knowledge about using Serverless, FaaS, and middleware to manage your serverless applications. In addition to tutorials and resources on the site, we run meetups to discuss the real-life issues and questions you may be having, and to build a community in London of people using Serverless architecture who want to share their experiences and learnings. We'd love to hear what you'd like to see on the site next, or discuss at the meetup.

Take a look - and let us know what we can do to help you use Serverless as effectively as possible!