Top 3 Serverless Mistakes
October 17, 2022

Tal Melamed
Contrast Security

Ever experience a serverless nightmare?

Hacker News contributor "huksley" has, and it was a pricey wake-up call about the need to understand the complexities of parameters in a serverless environment.

According to the tale of woe they posted earlier this year, huksley wound up DDoSing themselves. They had accidentally created a serverless function that called itself in a recursive loop, which ran for 24 hours before it was caught. The function consumed over 70 million Gbps and ran up a "shocking" bill of $4,600.

That's just one of the top serverless mistakes, this one caused by not knowing that AWS Billing alerts don't fire promptly for the AWS CloudFront content delivery network: CloudFront collects charge information from all regions, and that collection takes time, delaying the billing alert, as huksley detailed.

Read on for what we see as the top three serverless mistakes that can similarly get you into trouble.

Serverless: The new buzzword

First, some background about why the word "serverless" is becoming a buzzword in the application world. The term "serverless" refers to a cloud-native development model that allows organizations to build and run their applications without the burdens of physical server infrastructure.

Serverless applications offer instant scalability, high availability, greater business agility and improved cost efficiency. This dynamic flexibility helps save time and money across the entire software development life cycle (SDLC). An August 2022 report on the global serverless apps market forecasts that the market will record a compound annual growth rate (CAGR) of roughly 23% between 2022 and 2031.

Still, serverless application security (AppSec) remains a serious issue. As it stands, traditional application security testing (AST) tools cannot provide adequate coverage, speed or accuracy to keep pace with the demands of serverless applications. In fact, as of April 2021, concerns about the dangers of misconfigured or quickly spun-up cloud-native (serverless or container-based) workloads had increased nearly 10% year-over-year.

There's good reason for the growing concern: for one, malicious actors are already targeting AWS. Here are what we see as the biggest serverless mistakes:

Mistake No. 1: Not understanding security gaps

Organizations often assume that AWS manages all of the security, but that is only partly true. Once you write your own code and deploy it on the AWS Lambda infrastructure, that code falls under your responsibility as a developer or organization.

As such, you have to consider both code and configuration, given that the code is always the customer's responsibility.

Put plainly, in the AWS shared-responsibility model, organizations cannot just rely on perimeter security. Rather, they need to protect themselves.

AWS is responsible for securing the underlying infrastructure, but developers must secure the serverless workloads or functions themselves, given that in serverless there is no perimeter to secure. Rather, each Lambda function must secure itself by following the "zero-trust" model, which entails:

1. Thorough, continuous authentication and authorization based on all available data.

2. The use of least-privilege access.

3. The assumption that a breach has already occurred, which motivates the visibility provided by end-to-end encryption and usage analytics; that visibility, in turn, leads to improved defenses and threat detection.

Mistake No. 2: Using traditional tools

While serverless applications are gaining traction due to their benefits, traditional AST tools cause workflow inefficiencies that ultimately bottleneck serverless release cycles.

Traditional security tools, Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST), just aren't made to scan modern applications.

For example, the complexity of modern application programming interface (API) code, the frameworks that support it and the intricate interconnections between them are simply too much for static tools. Such tools produce an onslaught of false positives, and they miss serious vulnerabilities.

As well, in serverless-based applications, where the architecture is event-based rather than synchronous (as in a monolithic application), code can be executed via numerous types of events: files, logs, code commits, notifications and even voice commands. Traditional tools just aren't built for that and cannot see beyond a simple REST API.
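Because the same function body can be reached from several of these event sources, handlers frequently need to work out what actually triggered them. The sketch below classifies an incoming event by the envelope shapes AWS documents for S3, SNS and API Gateway; the handler itself is hypothetical and covers only a few illustrative sources.

```python
def detect_trigger(event):
    """Best-effort classification of a Lambda event by its envelope shape.

    Only a handful of event sources are covered here for illustration;
    a real dispatcher would handle many more (SQS, EventBridge, etc.).
    """
    records = event.get("Records", [])
    if records:
        first = records[0]
        if "s3" in first:
            return "s3"      # S3 object notification
        if first.get("EventSource") == "aws:sns":
            return "sns"     # SNS message delivery
    if "httpMethod" in event or "requestContext" in event:
        return "http"        # API Gateway request
    return "unknown"
```

A traditional DAST scanner would only ever exercise the "http" branch; the other entry points stay invisible to it, which is exactly the coverage gap described above.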

Given their lack of visibility and accuracy, legacy tools depend on expert staff to do manual security triage as they attempt to diagnose and interpret the results before handing recommendations (with limited context) back to developers to fix the problems. After weeding out the high number of false positives, security teams are left to figure out which vulnerabilities should be addressed first. This inefficiency inhibits SDLCs, increases costs and often fails to eliminate many vulnerabilities that can be exploited by cyberattacks.

Static and dynamic tools don't scale well, typically requiring experts to set up and run the tool as well as to interpret the results.

All these reasons are why organizations are opting instead for purpose-built, context-based solutions. Serverless applications are a mix of code and infrastructure, and it is therefore essential to understand both. Organizations need a serverless solution that understands both the functions' code and their configuration, such as entry points (i.e., triggers) and Identity and Access Management (IAM) policies, and that provides customers with context-based insight into serverless risks.

Mistake No. 3: The dangers of misconfiguration

As "huksley" found out, serverless presents the potential for large overages when parameters are set incorrectly. Without a limit on the number of requests a serverless function will handle, the code can accidentally rack up enormous request volumes and create a large AWS charge.
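One simple guard against huksley's runaway loop, sketched here rather than prescribed by AWS, is to carry an invocation-depth counter in the event payload and refuse to re-invoke past a cap. The handler, the "depth" field and the threshold are all hypothetical; a platform-level cap via Lambda reserved concurrency is a complementary control.

```python
# Hypothetical Lambda-style handler with a self-invocation guard.
# The "depth" field and MAX_DEPTH cap are illustrative conventions,
# not part of any AWS API.

MAX_DEPTH = 5  # hard stop for chained self-invocations

def handler(event, context=None):
    depth = event.get("depth", 0)
    if depth >= MAX_DEPTH:
        # Bail out instead of re-invoking; in production you would also
        # log or alert here so a runaway loop becomes visible immediately.
        return {"status": "halted", "depth": depth}
    # ... real work would happen here ...
    # If this function re-invokes itself, it must propagate depth + 1
    # so the guard can eventually trip.
    next_event = {**event, "depth": depth + 1}
    return {"status": "ok", "next_event": next_event}
```

The same idea works for fan-out chains between different functions: any payload that can cause another invocation carries the counter with it.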

Configuring functions at the permission level is another major issue in serverless. Developers usually grant generic permission levels, which give functions far too many permissions. This can magnify the impact of vulnerable or stolen keys, which matter greatly in cloud computing: in such a scenario, malicious actors may be able to steal information from databases or buckets, given that the Lambda permissions have been set at a very broad level. Developers must instead apply the least permissions needed.
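To make the gap concrete, here is an over-broad IAM policy next to a least-privilege one, expressed as Python dicts for illustration; the bucket name, table ARN and account ID are hypothetical.

```python
# Over-permissive: any S3 or DynamoDB action on any resource.
broad_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:*", "dynamodb:*"], "Resource": "*"}
    ],
}

# Least-privilege: only the actions this (hypothetical) function actually
# performs, scoped to the specific bucket and table it touches.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-input-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/example-table",
        },
    ],
}

def has_wildcard(policy):
    """True if any statement allows every action or every resource."""
    for stmt in policy["Statement"]:
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        if stmt["Resource"] == "*" or any(a == "*" or a.endswith(":*") for a in actions):
            return True
    return False
```

A check like `has_wildcard` is the kind of simple configuration lint that catches the "generic permissions" mistake before a function ever ships.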

That isn't an easy task. Or, to be more precise, it can be easy if you write only one function with 1,000 lines of code. But with dependencies, it becomes a little crazy: the code needs to run in order to reveal what it is actually doing, and it must already have enough permissions to execute those operations.

Conclusion

In April 2022, Cado Security discovered Denonia, the first ever malware to specifically target AWS Lambda. More threats are sure to follow. Avoiding these top mistakes can help to secure your organization when they do.

To fend off such attacks, keep an eye out for free, open-source tools. They can help to secure your serverless applications without breaking the bank.

Tal Melamed is Senior Director, Cloud-Native Security Research at Contrast Security