How to keep your serverless applications secure

Feb 16th, 2021

Categories: Serverless, Security
AWS Lambda, which was introduced in 2014, started the serverless computing revolution. Today, serverless computing benefits organizations across the board with advantages such as cost savings, increased scalability, decreased latency, and no server-side resource management. With serverless computing's growing popularity, however, the potential for security attacks has increased. This is where keeping your serverless architecture secure becomes important.
But first, let’s explain what serverless computing means.
Serverless is an architectural paradigm in which computing resources are provisioned on demand, offloading responsibility for infrastructure management tasks such as scaling, provisioning, and patching to the managed service provider.
This reduces the financial and operational burden because organizations no longer have to spend time and resources maintaining and scaling their infrastructure. There are no fixed charges for reserved resources, and no headaches dealing with over- or under-provisioned resources. Serverless offers a truly dynamic pay-as-you-use model that ensures you pay only for the demand your application actually generates. From a developer's point of view, it abstracts away infrastructure details and frees them to focus on delivering application capabilities. In a nutshell, serverless computing significantly increases development velocity.
As for security, serverless has its own advantages and challenges. Let’s go over a few of them.

Security benefits of serverless computing

Improved stability
With the cloud provider managing the operating system, runtime security, and server patching, developers don’t have to worry about a large portion of the stack. They can focus on developing the application rather than handling activities such as upgrading operating systems, configuring firewalls, and managing downtime when security patches are applied.
Stateless servers = harder for attackers
The stateless, ephemeral nature of serverless computing makes attackers’ lives harder. Serverless connections have extremely short session durations, making it difficult for an attacker to carry out an attack. Coupled with the constant recycling, refreshing, and memory flushing across the underlying server resources, this creates an environment that is hard to compromise.
Reduced surface area
With smaller microservices, the attack surface can be reduced through finer-grained security policies and precise service segmentation. Services can be built to perform a single function and mapped one-to-one with IAM roles to strengthen service security. In addition, end-to-end security protocols between client and server endpoints maximize protection and boost the confidentiality of sensitive data.

Security challenges with serverless computing

While the serverless computing model introduces benefits, there are also some challenges to keep in mind.
Slow insights from logged data
Because serverless computing connects a large number of microservices, each with its own functions, policies, and data formats, it can become difficult to manage and quickly process the large volume of logs produced. As the number of microservices scales, it also becomes harder to distinguish signal from noise and surface actionable insights or anomalies in real time.
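As a small illustration, one way to make serverless logs easier to process is to emit them as structured JSON so that downstream tooling can filter and aggregate by field instead of parsing free-form text. The sketch below is a hypothetical AWS Lambda handler in TypeScript; the service name and log fields are illustrative assumptions.

```typescript
// A sketch of structured logging in an AWS Lambda handler. Emitting one JSON
// log line per invocation lets downstream tooling (CloudWatch Logs Insights,
// a SIEM, etc.) filter and aggregate by field instead of parsing free text.
import { APIGatewayProxyEvent, APIGatewayProxyResult, Context } from 'aws-lambda';

export const handler = async (
  event: APIGatewayProxyEvent,
  context: Context
): Promise<APIGatewayProxyResult> => {
  const started = Date.now();

  // ... business logic would run here ...

  console.log(
    JSON.stringify({
      level: 'info',
      service: 'orders-api', // hypothetical service name
      requestId: context.awsRequestId,
      path: event.path,
      durationMs: Date.now() - started,
    })
  );

  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```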
Poor observability across multiple cloud providers
When running services across multiple cloud providers, observability becomes a challenge and it is hard to get a unified view of your entire ecosystem. As a result, without adequate security controls in place, attacks can slip through the cracks and go unnoticed.
More resources equal more permissions to manage
One of the primary ways to secure serverless computing environments is adequate access control. By breaking monolithic parts of an application into smaller microservices, developers can accommodate multiple levels of access to the underlying resources. As the number of microservices and underlying resources grows, managing those permissions becomes increasingly difficult, and configuration errors can lead to vulnerabilities.

Serverless security best practices

Despite these challenges, there are tried-and-tested best practices that make serverless computing more secure and reliable.
Maintain least-privileged access for serverless functions and other services
The principle of least privilege applies to serverless functions: each function should have access to only what it needs and nothing more. If an attacker manages to bypass system checks and gain access to a serverless endpoint, only a limited portion of the serverless ecosystem is exposed, and the damage can be contained as long as least privilege is in place.
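As a hedged example of what least privilege can look like in practice, the AWS CDK (TypeScript) sketch below defines a single-purpose Lambda function and grants it read-only access to one DynamoDB table; the table, function, and runtime are assumptions made for the illustration.

```typescript
// A sketch of a single-purpose function with a one-to-one, narrowly scoped
// role: the function can read one table and nothing else.
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

export class OrdersStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const ordersTable = new dynamodb.Table(this, 'OrdersTable', {
      partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
    });

    const getOrderFn = new lambda.Function(this, 'GetOrderFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('functions/get-order'),
    });

    // grantReadData attaches a policy limited to read actions on this table;
    // the function's execution role receives no other permissions.
    ordersTable.grantReadData(getOrderFn);
  }
}
```

Grant helpers like this keep the generated IAM policy scoped to specific actions on a specific resource, which is far easier to audit than a hand-written wildcard policy.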
Regularly scan for vulnerable third-party dependencies, configuration errors, and over-permissive roles
Supply-chain security is now a top priority: if your code depends on an open source package with a known vulnerability, that vulnerability can lead to a range of security problems in your own application. Cloud configuration errors can also put sensitive data at severe risk, including custom roles that are not fine-grained or are configured to be over-permissive. Continuous scanning is an essential practice for managing this risk.
Use runtime protection to detect malicious event inputs and anomalous function behavior
Runtime security means putting controls in place that constantly monitor what is happening on the system in order to detect unexpected behavior that could be malicious or anomalous. For example, enforcing a runtime security policy across a service can bring to light a software bug that causes erratic behavior or leaks sensitive data.
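Runtime protection is typically provided by dedicated monitoring tooling, but one complementary layer you can add yourself is strict validation of event inputs. The sketch below assumes a TypeScript Lambda handler and the zod validation library; the schema and field names are illustrative.

```typescript
// A sketch of input validation as one runtime-protection layer: reject
// unexpected event shapes before any business logic runs, and log rejections
// so anomalous input patterns show up in monitoring.
import { z } from 'zod';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

// Only well-formed order requests are allowed through.
const OrderRequest = z.object({
  orderId: z.string().uuid(),
  quantity: z.number().int().positive().max(100),
});

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  let payload: unknown;
  try {
    payload = JSON.parse(event.body ?? '{}');
  } catch {
    payload = null;
  }

  const parsed = OrderRequest.safeParse(payload);
  if (!parsed.success) {
    // Log the rejection so anomalous input patterns are visible downstream.
    console.warn(
      JSON.stringify({ level: 'warn', msg: 'rejected input', issues: parsed.error.issues })
    );
    return { statusCode: 400, body: JSON.stringify({ error: 'Invalid request' }) };
  }

  // ... safe to work with parsed.data here ...
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```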
Start building today with Fauna
With security now a boardroom-level topic, it is essential to choose a serverless data platform that gives you peace of mind. Don’t know where to look?
Fauna is a flexible, developer-friendly, transactional database delivered as a secure and scalable cloud API that gives you the data safety, security, and scalability you need to build a new business or modernize existing applications. It is 100% ACID-compliant and offers capabilities such as data temporality, streaming, and multi-tenancy. Fauna provides web-native secure access, combining attribute-based access control with SSL and third-party authentication.
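As a minimal sketch of what attribute-based access control looks like in Fauna, the snippet below uses the faunadb JavaScript driver (FQL v4) to create a role that grants read-only access to a hypothetical orders collection, and makes membership conditional on a user document's isAnalyst attribute. The collection and field names are assumptions for illustration.

```typescript
import faunadb, { query as q } from 'faunadb';

// An admin-privileged client is required to create roles.
const client = new faunadb.Client({ secret: process.env.FAUNA_ADMIN_KEY as string });

async function createAnalystRole() {
  return client.query(
    q.CreateRole({
      name: 'order_analyst',
      // Grant only read access, and only to the "orders" collection.
      privileges: [
        {
          resource: q.Collection('orders'),
          actions: { read: true },
        },
      ],
      // Attribute-based membership: a user document belongs to this role
      // only if its data.isAnalyst field is true.
      membership: [
        {
          resource: q.Collection('users'),
          predicate: q.Query(
            q.Lambda(
              'userRef',
              q.Select(['data', 'isAnalyst'], q.Get(q.Var('userRef')), false)
            )
          ),
        },
      ],
    })
  );
}

createAnalystRole().then(console.log).catch(console.error);
```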

If you enjoyed our blog and want to work on systems and challenges related to globally distributed systems and serverless databases, Fauna is hiring.
