So you think your K8s cluster is configured correctly?
Well … think again.
How do we know? Alcide just completed an analysis of Kubernetes multi-cluster vulnerabilities, and the results are not good. It turns out that in 89% of deployments, companies are not using Kubernetes Secrets resources, leaving sensitive information exposed in the open. Moreover, 75% of the deployments studied use workloads that mount high-vulnerability host file systems such as /proc, and none of the deployments implemented segmentation using Kubernetes network policies.
Secrets are a crucial piece of Kubernetes functionality that everyone should be using, so it's disheartening to learn that so many aren't taking advantage of the security benefits Secrets provide and are leaving themselves unnecessarily vulnerable.
Why You Need to be Using Secrets
Kubernetes users and administrators sometimes include sensitive information, such as usernames, passwords, and SSH keys, directly in their pods. But when credentials that grant access to business-critical systems (databases, web hosting accounts, encrypted email, various applications, etc.) are inserted verbatim into pod specs or container images, there is a very real risk of a security breach if anyone gains access to that code.
Secrets are essentially API objects that encode sensitive data and expose it to your pods in a controlled way. This lets you scope a Secret to specific containers or share it between them. A Secret keeps the sensitive values out of the pod spec and container image; the pod references the Secret by name and receives its contents only at runtime, so anyone reading your code sees nothing but that reference.
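To make this concrete, here is a minimal sketch of creating such a Secret with the official Kubernetes Python client. It assumes the kubernetes package is installed and a kubeconfig is available; the Secret name, namespace, and key values are purely illustrative.

```python
# Minimal sketch: creating a Secret with the official Kubernetes Python client.
# Assumes `pip install kubernetes` and a reachable cluster; names and values
# below are illustrative only.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() when running in a pod
core_v1 = client.CoreV1Api()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials"),
    type="Opaque",
    # string_data accepts plain strings; the API server stores them base64-encoded.
    string_data={"username": "app-user", "password": "s3cr3t-value"},
)

core_v1.create_namespaced_secret(namespace="default", body=secret)
```

The same object can be created with kubectl create secret generic db-credentials --from-literal=username=app-user --from-literal=password=s3cr3t-value; either way, the credentials live in the API object rather than in your pod specs or container images.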
How Secrets Work in Kubernetes Deployments
There are two ways a Secret can be used with a pod: as files in a volume mounted on one or more of its containers, or as environment variables. Pods do not have access to each other's Secrets, which further facilitates encapsulating sensitive data across multiple pods. On the node, Secrets are stored in tmpfs rather than written to disk, they are only sent to nodes that need them, and when the pod that uses a Secret is deleted, the node's local copy of the Secret data is deleted as well. SSL/TLS protects communication between users and the API server. A container must explicitly request a Secret volume in its volumeMounts for the Secret to be visible inside that container, which makes it possible to construct security partitions at the pod level.
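As an illustration, here is a sketch of both consumption paths using the same Python client. The pod and container names are illustrative, and it assumes the "db-credentials" Secret from the previous example already exists.

```python
# Sketch: consuming a Secret as an environment variable and as a mounted volume.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="nginx:1.27",
                # Path 1: expose a single key of the Secret as an environment variable.
                env=[
                    client.V1EnvVar(
                        name="DB_PASSWORD",
                        value_from=client.V1EnvVarSource(
                            secret_key_ref=client.V1SecretKeySelector(
                                name="db-credentials", key="password"
                            )
                        ),
                    )
                ],
                # Path 2: the container must explicitly mount the Secret volume
                # for the Secret to be visible inside it.
                volume_mounts=[
                    client.V1VolumeMount(
                        name="db-credentials-vol",
                        mount_path="/etc/db-credentials",
                        read_only=True,
                    )
                ],
            )
        ],
        # The Secret-backed volume; each key becomes a file under the mount path.
        volumes=[
            client.V1Volume(
                name="db-credentials-vol",
                secret=client.V1SecretVolumeSource(secret_name="db-credentials"),
            )
        ],
    ),
)

core_v1.create_namespaced_pod(namespace="default", body=pod)
```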
How to Make Sure You're Using Secrets
Hopefully you're going to use Secrets from now on. The best way to make sure you're using them correctly is a monitoring tool that can not only assess whether Secrets are being used, but can also detect where sensitive information is exposed or unsecured and should be moved into a Secret. You should know which workloads are allowed to access and communicate with which data, and if communication between apps deviates from those prescribed lines, the deviations should be flagged for DevOps and security teams to investigate.
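Even without a dedicated product, a rough first pass is easy to script. The sketch below is hypothetical and deliberately simple: it lists pods with the Kubernetes Python client and flags environment variables whose names look credential-like but are set to literal values instead of referencing a Secret.

```python
# Hypothetical audit sketch (not a specific monitoring product): flag env vars
# whose names suggest credentials but whose values are hard-coded literals
# rather than references to a Secret.
from kubernetes import client, config

SUSPICIOUS = ("PASSWORD", "SECRET", "TOKEN", "KEY", "CREDENTIAL")

config.load_kube_config()
core_v1 = client.CoreV1Api()

for pod in core_v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        for env in container.env or []:
            looks_sensitive = any(word in env.name.upper() for word in SUSPICIOUS)
            if looks_sensitive and env.value is not None:  # literal value, not a secretKeyRef
                print(
                    f"{pod.metadata.namespace}/{pod.metadata.name} "
                    f"container={container.name} exposes {env.name} in plain text"
                )
```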
As new, data-intensive systems are spun up to keep pace with business needs, maintaining security should be a top concern for everyone. Gartner's report on cloud security asserts that through 2022, 95% of security failures will be the result of unintentional errors on the customer's part.
In other words, if you're not using Secrets and your data gets compromised, you have no one to blame but yourself.
Industry News
Red Hat announced the general availability of Red Hat Enterprise Linux 9.5, the latest version of the enterprise Linux platform.
Securiti announced a new solution: Security for AI Copilots in SaaS apps.
Spectro Cloud completed a $75 million Series C funding round led by Growth Equity at Goldman Sachs Alternatives with participation from existing Spectro Cloud investors.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, has announced significant momentum around cloud native training and certifications with the addition of three new project-centric certifications and a series of new Platform Engineering-specific certifications.
Red Hat announced the latest version of Red Hat OpenShift AI, its artificial intelligence (AI) and machine learning (ML) platform built on Red Hat OpenShift that enables enterprises to create and deliver AI-enabled applications at scale across the hybrid cloud.
Salesforce announced agentic lifecycle management tools to automate Agentforce testing, prototype agents in secure Sandbox environments, and transparently manage usage at scale.
OpenText™ unveiled Cloud Editions (CE) 24.4, presenting a suite of transformative advancements in Business Cloud, AI, and Technology to empower the future of AI-driven knowledge work.
Red Hat announced new capabilities and enhancements for Red Hat Developer Hub, Red Hat’s enterprise-grade developer portal based on the Backstage project.
Pegasystems announced the availability of new AI-driven legacy discovery capabilities in Pega GenAI Blueprint™ to accelerate the daunting task of modernizing legacy systems that hold organizations back.
Tricentis launched enhanced cloud capabilities for its flagship solution, Tricentis Tosca, bringing enterprise-ready end-to-end test automation to the cloud.
Rafay Systems announced new platform advancements that help enterprises and GPU cloud providers deliver developer-friendly consumption workflows for GPU infrastructure.
Apiiro introduced Code-to-Runtime, a new capability using Apiiro’s deep code analysis (DCA) technology to map software architecture and trace all types of software components including APIs, open source software (OSS), and containers to code owners while enriching it with business impact.
Zesty announced the launch of Kompass, its automated Kubernetes optimization platform.
MacStadium announced the launch of Orka Engine, the latest addition to its Orka product line.