
The Rise of the Function-As-A-Service

"Functions as a Service" (FaaS) are becoming a common component of complex web services; they conserve energy - and thus reduce cost - by only utilising resources when required.

The daddy of FaaS providers is AWS Lambda, as it ties in nicely with other AWS services like API Gateway, SQS and S3. There are other providers too, such as Google's Cloud Functions and Microsoft's Azure Functions.

If you're interested in running your own FaaS infrastructure there's also OpenFaaS - a self-managed open-source package. I currently run this on a Raspberry Pi cluster, but I haven't been overly impressed: it seems to lack the polish that comes with the cloud offerings from the likes of Amazon or Google.

For ease of use, I'm going to use AWS terminology in this post - but FaaS is not a distinctly AWS concept! The principles outlined are architectural, and therefore vendor agnostic.

Why should I choose FaaS?

If, like me, you believe Cloud Computing is generally synonymous with "on someone else's computer" - then I'll admit that FaaS sounds like a step too far.

"Functions-as-a-Service? Why would I want that, that sounds gimmicky!"
-- A bemused looking Fergus, circa 2015.

In reality though, the concept of FaaS can be incredibly powerful. To really harness this power though, you need to ensure that your use case is appropriate.

In short, that comes down to identifying short-lived tasks that work in isolation and are not required to run excessively. If your task is tightly coupled to your application - i.e. you have a monolithic application architecture - then FaaS is unlikely to be a good fit. Similarly, if your task runs excessively, the cost of FaaS may exceed that of running a dedicated VM.

Examples of good use cases include:

  • Dealing with inbound events - e.g. via AWS SES, Twilio, or payment processors;
  • On-the-spot file manipulation - e.g. via AWS SQS;
  • Providing auxiliary functionality (e.g. error handling) to data pipelines.

An Example Architecture

Below is an example of a Security Information and Event Management (SIEM) pipeline, based on a real-world project I worked on circa 2015.

An example SIEM pipeline

Upon a security incident, external hosts - e.g. application servers - hit an API endpoint with a POST request. This request hits an API Gateway, which uses a Lambda function (Lambda 1) to perform some authentication checks.
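As a rough sketch, the authentication Lambda might look like the following. The header name, token store, and `check_token` logic are all hypothetical placeholders - in practice you'd verify a signature or look the token up in a real datastore:

```python
import json

VALID_TOKENS = {"example-token"}  # placeholder for a real token store


def check_token(token):
    # Hypothetical check - swap in signature verification or a
    # datastore lookup for a real deployment.
    return token in VALID_TOKENS


def handler(event, context):
    # API Gateway proxy integrations pass request headers through in the event.
    token = (event.get("headers") or {}).get("X-Auth-Token", "")
    if not check_token(token):
        return {"statusCode": 401, "body": json.dumps({"error": "unauthorised"})}
    # Authenticated - downstream, the payload would be queued for processing.
    return {"statusCode": 200, "body": json.dumps({"status": "accepted"})}
```

The important property is that this function is stateless and short-lived: it does one check per request and exits, which is exactly the shape of work FaaS bills well for.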

Once the request has been authenticated, its payload is placed in a queue. From here the processing begins: there are two stages of processing, with buffering between them via another queue; this prevents any bottlenecks from saturating the service.
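A queue-driven worker for one of these stages might be sketched as below. The `process_record` body is a stand-in for whatever enrichment the stage performs; the event shape matches what an SQS-triggered Lambda receives (a batch of records under `"Records"`, each with a JSON string `"body"`):

```python
import json


def process_record(payload):
    # Stage-one processing - enrichment, normalisation, etc.
    # (Illustrative only: real processing would populate event details.)
    payload["processed"] = True
    return payload


def handler(event, context):
    # SQS-triggered Lambdas receive a batch of records per invocation.
    results = []
    for record in event["Records"]:
        payload = json.loads(record["body"])
        results.append(process_record(payload))
    # In the pipeline above, results would be pushed to the next queue.
    return results
```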

If any errors are encountered, the payload is pushed to another queue, whereby another Lambda function (Lambda 2) is invoked to perform a clean-up task, or perhaps log the failure.
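Since failures should be rare, the error-handling Lambda can be tiny - a minimal sketch, assuming the failed payloads arrive as records on the error queue:

```python
import logging

logger = logging.getLogger(__name__)


def handler(event, context):
    # Each record on the error queue carries a payload that failed processing.
    failures = []
    for record in event.get("Records", []):
        # Clean-up or alerting would go here; logging is the simplest case.
        logger.error("processing failed for payload: %s", record["body"])
        failures.append(record["body"])
    return {"failures": failures}
```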

If the request is successfully authenticated, and the payload is accepted and processed, then the user receives a notification (e.g. via AWS SNS) and the payload is stored in a database (in this case, AWS RDS).
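The final step could be sketched as a small function that takes its clients as parameters, which also makes it testable without AWS. The stub below mimics the boto3 SNS client's `publish(TopicArn=..., Message=...)` call; the list standing in for the database, and the topic ARN, are illustrative:

```python
class StubSNS:
    """Stand-in for a boto3 SNS client, so the flow can be exercised locally."""

    def __init__(self):
        self.published = []

    def publish(self, TopicArn, Message):
        self.published.append((TopicArn, Message))
        return {"MessageId": "stub"}


def notify_and_store(payload, sns_client, topic_arn, db):
    # On success: notify subscribers, then persist the payload.
    sns_client.publish(TopicArn=topic_arn, Message=str(payload))
    db.append(payload)  # stand-in for an INSERT into RDS
    return payload
```

In production you would pass a real `boto3.client("sns")` and a database connection instead of the stubs.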

Why would a FaaS platform be suitable?

When developing a SIEM you would hope that security events are quite rare, and as such AWS Lambda complements the API Gateway by providing per-request authentication.

Similarly, you would hope that processing errors are even rarer - in which case AWS Lambda is the perfect tool for handling them.

On the other hand, the queue workers are likely to be calling external resources - e.g. to populate the event payload with additional details - and as such incur additional overhead on each request. When billing is handled per 100ms, this overhead can soon add up!

Even if the queue workers are lightweight, there may be money saved by running them as containers on a dedicated VM - i.e AWS EC2.
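A back-of-the-envelope comparison makes the trade-off concrete. All prices below are illustrative assumptions, not current AWS rates - the point is the shape of the calculation, not the numbers:

```python
# Illustrative, assumed rates - not real AWS pricing.
LAMBDA_PRICE_PER_100MS = 0.00000208  # e.g. a small memory tier
VM_PRICE_PER_MONTH = 10.0            # e.g. a small dedicated instance


def lambda_monthly_cost(invocations_per_month, duration_ms):
    # Per-100ms billing rounds duration up to the next increment.
    billed_units = -(-duration_ms // 100)  # ceiling division
    return invocations_per_month * billed_units * LAMBDA_PRICE_PER_100MS


# A rarely-invoked task (10k calls/month, 250ms each) is pennies on Lambda...
rare = lambda_monthly_cost(10_000, 250)

# ...but a busy worker with slow external calls (20M calls/month, 950ms each)
# can cost far more than a dedicated VM.
busy = lambda_monthly_cost(20_000_000, 950)
```

This is exactly why the invocation rate and execution time of each task should be measured before committing it to FaaS.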

Potential Pitfalls and Securing Success

For any architectural approach to be successful, you need to understand the potential pitfalls of your choices. As already discussed in this article, the primary pitfall here is the cost incurred during times of high utilisation. This is why it's key to investigate (a) how often your task is called, and (b) how long it takes to execute.

Another - often overlooked - potential pitfall of utilising a cloud FaaS provider is that it further ties your application architecture to a specific vendor; if you've seen the pain of a SaaS company migrating from the likes of AWS to GCP, you'll understand just how many headaches that can cause.

When used appropriately though, FaaS can be complementary to a well-designed system - offering both cost savings and a high level of decoupling.


Contract Software Developer and DevOps Consultant, based out of London in England. Interests include information security, current affairs, and photography.