Top 3 Serverless Mistakes
October 17, 2022

Tal Melamed
Contrast Security

Ever experience a serverless nightmare?

Hacker News contributor "huksley" has, and it was a pricey wake-up call about the need to understand the complexities of configuration parameters in a serverless environment.

According to the tale of woe they posted earlier this year, huksley wound up DDoSing themselves: They had accidentally created a serverless function that called itself in a recursive loop, and the loop ran for 24 hours before it was caught — a function that consumed over 70 million Gbps and ran up a "shocking" bill of $4,600.

That's just one of the top serverless mistakes — one caused by not knowing that AWS Billing alerts don't fire promptly for the AWS CloudFront content delivery network, which collects charge information from all regions. That collection takes time and thus delays the billing alert, as huksley detailed.
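
Setting up a billing alarm is still the first line of defense, even when consolidated billing data lags. Below is a minimal sketch in Python (boto3) that creates a CloudWatch alarm on the account's estimated charges; it assumes "Receive Billing Alerts" is enabled on the account, and the alarm name, threshold and SNS topic are illustrative placeholders, not values from huksley's setup.

    import boto3

    # Billing metrics are published only in us-east-1, so the alarm must live there.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="estimated-charges-over-50-usd",  # illustrative name
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,              # 6 hours; billing data is updated slowly
        EvaluationPeriods=1,
        Threshold=50.0,            # alert once estimated charges exceed $50
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # your SNS topic
    )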

Read on for what we see as the top three serverless mistakes that can similarly get you into trouble.

Serverless: The new buzzword

First, some background about why "serverless" is becoming a buzzword in the application world. The term refers to a cloud-native development model that allows organizations to build and run their applications without the burden of managing server infrastructure.

Serverless applications offer instant scalability, high availability, greater business agility and improved cost efficiency. This dynamic flexibility helps save time and money across the entire software development life cycle (SDLC). An August 2022 report on the global serverless apps market forecasts a compound annual growth rate (CAGR) of roughly 23% from 2022 to 2031.

Still, serverless application security (AppSec) remains a serious issue. As it is, traditional application security testing (AST) tools cannot provide adequate coverage, speed or accuracy to keep pace with the demands of serverless applications. In fact, as of April 2021, concerns about the dangers of misconfigured or quickly spun-up cloud-native (serverless or container-based) workloads had increased nearly 10% year-over-year.

There's good reason for the growing concern: For one, malicious actors are already targeting AWS. Here's what we see as the three biggest serverless mistakes:

Mistake No. 1: Not understanding security gaps

Organizations assume that AWS manages security, but that is only partly true. Once you write your own code, that code — running on top of the AWS Lambda infrastructure — falls under your responsibility as a developer or organization.

As such, you have to consider both code and configuration, given that the code is always the customer's responsibility.

Put plainly, under the AWS shared-responsibility model, organizations cannot simply rely on perimeter security. Rather, they need to protect themselves.

AWS is responsible for securing the underlying infrastructure, but developers must make sure they secure serverless workloads or functions themselves, given that in serverless, there's no perimeter to secure. Rather, each Lambda function must secure itself by using the "zero-trust" model, which entails:

1. Thorough, continuous authentication and authorization based on all available data (see the sketch after this list).

2. The use of least-privilege access.

3. The assumption that a breach exists: an assumption that supports the visibility provided by end-to-end encryption and the use of analytics — visibility that leads to improved defenses and threat detection.
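
To make the first two points concrete, here is a minimal sketch in Python of a Lambda handler that authenticates and authorizes every invocation instead of trusting whatever request reached it. It assumes the caller presents a JWT in an Authorization header and that the PyJWT library is packaged with the function; the key, audience and scope names are hypothetical placeholders.

    import jwt  # PyJWT

    # Placeholders: your real verification key and expected audience.
    JWT_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"
    EXPECTED_AUDIENCE = "orders-api"

    def handler(event, context):
        # Authenticate every invocation; never assume the caller is trusted
        # just because the request reached the function.
        headers = event.get("headers") or {}
        auth = headers.get("authorization") or headers.get("Authorization") or ""
        token = auth[len("Bearer "):] if auth.startswith("Bearer ") else auth
        try:
            claims = jwt.decode(
                token, JWT_PUBLIC_KEY, algorithms=["RS256"], audience=EXPECTED_AUDIENCE
            )
        except jwt.PyJWTError:
            return {"statusCode": 401, "body": "unauthorized"}

        # Authorize the specific action, not just "is logged in" (least privilege).
        if "orders:read" not in claims.get("scope", "").split():
            return {"statusCode": 403, "body": "forbidden"}

        return {"statusCode": 200, "body": "ok"}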

Mistake No. 2: Using traditional tools

While serverless applications are gaining traction due to their benefits, traditional AST tools cause workflow inefficiencies that ultimately bottleneck serverless release cycles.

Traditional security tools — Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) — just aren't made to scan modern applications.

For example, the complexity of modern application programming interface (API) code, the frameworks that support it and the intricate interconnections between components is simply too much for static tools. Such tools produce an onslaught of false positives, and they miss serious vulnerabilities.

As well, in serverless-based applications, where the architecture is event-based rather than synchronous (as in a monolithic application), code can be executed via numerous types of events, like files, logs, code commits, notifications and even voice commands. Traditional tools just aren't built for that and cannot see beyond a simple REST API.
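
A small, hypothetical example makes the gap visible. The Python handler below accepts two of the many possible entry points: an API Gateway HTTP request and an S3 upload notification. A DAST scanner crawling the REST API only ever exercises the first branch; the untrusted input arriving through the S3 event never gets tested. The event fields shown follow standard AWS event shapes, but the downstream logic is a placeholder.

    def process_uploaded_file(key):
        # Placeholder for real downstream logic (parse, transform, store, etc.).
        print("processing", key)

    def handler(event, context):
        # Entry point 1: an HTTP request via API Gateway, the only path a
        # URL-based scanner would ever reach.
        if "httpMethod" in event:
            return {"statusCode": 200, "body": "handled HTTP request"}

        # Entry point 2: an S3 upload notification. Untrusted input (object keys,
        # file contents) flows in here too, but no REST-focused tool sees it.
        for record in event.get("Records", []):
            if record.get("eventSource") == "aws:s3":
                process_uploaded_file(record["s3"]["object"]["key"])

        return {"status": "done"}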

Given their lack of visibility and accuracy, legacy tools depend on expert staff to do manual security triage as they attempt to diagnose and interpret the results before handing recommendations (with limited context) back to developers to fix the problems. After weeding out the high number of false positives, security teams are left to figure out which vulnerabilities should be addressed first. This inefficiency slows the SDLC, increases costs and often leaves exploitable vulnerabilities unaddressed.

Static and dynamic tools don't scale well, typically requiring experts to set up and run the tool as well as to interpret the results.

All these reasons are why organizations are opting instead for purpose-built, context-based solutions. Serverless applications are a mix of code and infrastructure, and it is therefore essential to understand both. Organizations need a serverless solution that understands both the functions' code and their configuration — such as entry points (i.e., triggers) and Identity and Access Management (IAM) policies — and that provides customers with context-based insight into serverless risks.
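
To illustrate the kind of configuration context that matters, here is a rough sketch in Python (boto3) that walks the account's Lambda functions and pulls, for each one, its resource policy (who can invoke it, i.e., its triggers) and the IAM policies attached to its execution role. This is only a starting point for an inventory, not a substitute for a purpose-built tool, and it assumes credentials with read access to Lambda and IAM.

    import boto3

    lambda_client = boto3.client("lambda")
    iam = boto3.client("iam")

    # Pagination is omitted for brevity; list_functions returns up to 50 per call.
    for fn in lambda_client.list_functions()["Functions"]:
        name = fn["FunctionName"]
        role_name = fn["Role"].rsplit("/", 1)[-1]  # execution role name from its ARN

        # Who is allowed to invoke this function (its triggers)?
        try:
            resource_policy = lambda_client.get_policy(FunctionName=name)["Policy"]
        except lambda_client.exceptions.ResourceNotFoundException:
            resource_policy = "none"

        # What is this function allowed to touch (its IAM permissions)?
        attached = iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]

        print(name, resource_policy, [p["PolicyName"] for p in attached])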

Mistake No. 3: The dangers of misconfigurations

As "huksley" found out, serverless presents the potential for large overages with incorrect parameters. Without setting a limit on the number of requests allowed by a serverless function, the code could accidentally rack up numerous requests and create a large AWS charge.

Function configuration at the permission level is another major issue in serverless. Usually, developers use generic permission levels, which give functions far too many permissions. This can result in vulnerable or stolen keys, which can have a big impact in cloud computing. In such a scenario, malicious actors may be able to steal information from databases/buckets, given that the Lambda function's permissions have been set at a very broad level. Developers must instead apply the least permissions needed.

That isn't an easy task. Or, to be more precise, this task can be easy — if you write only one function, with 1,000 lines of code. But with dependencies, it becomes a little crazy. The code needs to run in order to reveal what it is actually doing, and it must have just enough permissions to carry out those operations.
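
When you do know exactly what a function touches, scoping its role down is straightforward. Here is a minimal sketch in Python (boto3) that attaches an inline policy allowing only reads of one DynamoDB table, instead of a wildcard like dynamodb:* on every resource; the role, policy and table names are hypothetical.

    import json
    import boto3

    iam = boto3.client("iam")

    # Grant only the actions this function actually performs, on the one table
    # it actually reads (not "dynamodb:*" on "*").
    least_privilege_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }],
    }

    iam.put_role_policy(
        RoleName="orders-function-role",
        PolicyName="orders-read-only",
        PolicyDocument=json.dumps(least_privilege_policy),
    )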

Conclusion

In April 2022, Cado Security discovered Denonia, the first ever malware to specifically target AWS Lambda. More threats are sure to follow. Avoiding these top mistakes can help to secure your organization when they do.

To fend off such attacks, keep an eye out for free, open-source tools — they can help to secure your serverless applications without breaking the bank.

Tal Melamed is Senior Director, Cloud-Native Security Research at Contrast Security