Cloud computing has become the cornerstone for businesses looking to scale infrastructure up or down quickly according to their needs with a lower total cost of ownership. But far from simplifying the IT executive's life, the expanded use of the cloud is introducing a whole new level of complexity.
Research has found that the average enterprise uses 2.6 public clouds and 2.7 private clouds. Meanwhile, the average business used 110 different cloud applications in 2021, up from 80 in 2020. Digital transformation has exacerbated the problem, with organizations of all sizes now faced with an abundance of technology 'choice' which is actually only serving to hinder cloud transformations.
It gets even more challenging when IT people need to communicate crucial aspects of an organization's cloud infrastructure to non-technical decision makers. And as more and more workloads are migrated to different clouds, the unique requirements of different processes, and the effects of configuration "drift" start to become clearer.
Put simply, cloud infrastructure is becoming harder to control manually.
The question is, how can organizations start to establish visibility into what are becoming more complex, harder to track environments, and how, with cloud configurations changing all the time, can teams develop repeatable processes that will enable them to take back control of their infrastructure and catch the drift before it gets out of hand?
The Rise of Automation and Infrastructure As Code
First, a brief history lesson. Those of a certain vintage might remember the halcyon days when you had to buy and maintain your own servers and machines. We evolved from this era of computing around 2005 with the widespread adoption of something called virtualization, a way of running multiple virtual machines on a single physical server.
Virtualization not only created infrastructure that was more efficient and easier to manage, it also allowed for the development of new technologies, such as cloud computing — something which has revolutionized the way businesses operate. But alongside all the benefits of the cloud — the flexibility, scalability, and cost efficiencies — organizations soon found themselves encountering scaling problems.
This is because provisioning, deploying and managing applications in the hybrid cloud is costly, time-consuming and complex. For one thing, manually deploying instances of an application to multiple clouds with different configurations is prone to human error. Scale too quickly and you might end up missing key configurations and have to start all over again. Fail to configure an instance correctly and it could prove too costly to ever fix.
These problems necessitated the development of Infrastructure as Code (or IaC). With IaC, it is possible to provision and manage infrastructure using code instead of manual processes. This allows for greater speed, agility, and repeatability when provisioning infrastructure, enabling IT teams to automate cloud infrastructure deployment and processes further than ever before.
With IaC, you write a script that will automatically handle infrastructure tasks for you, saving not only time but also reducing the potential for human error. Simple, right? Problem solved! Well yes and no …
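To make the idea concrete, here is a minimal, purely hypothetical sketch (plain Python, not any real IaC tool) of a declarative "apply" step: the desired infrastructure is defined as data, and an idempotent function converges a mock inventory toward it. Real tools such as Terraform work on the same principle at far greater scale; the resource names and attributes below are invented for illustration.

```python
# Hypothetical sketch: a declarative "apply" that converges the actual
# state of a mock cloud toward the desired state defined in code.

desired = {
    "web-1": {"size": "t3.small", "encrypted": True},
    "db-1": {"size": "t3.medium", "encrypted": True},
}

def apply(desired, actual):
    """Create, update, or delete resources so `actual` matches `desired`."""
    for name, config in desired.items():
        if name not in actual:
            actual[name] = dict(config)      # create a missing resource
        elif actual[name] != config:
            actual[name].update(config)      # correct a changed resource
    for name in list(actual):
        if name not in desired:
            del actual[name]                 # remove unmanaged leftovers
    return actual

cloud = {"web-1": {"size": "t3.small", "encrypted": False}}  # out of sync
apply(desired, cloud)                        # cloud now matches desired
```

Because the function is idempotent, running it twice produces the same result: the script encodes *what* the infrastructure should look like, not the sequence of manual steps to get there.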
Managing IaC at Scale
Expectations often have a habit of not aligning with reality. If you've started using IaC to manage your infrastructure, you're already on your way to making cloud provisioning processes more manageable. But there's a second piece to the infrastructure lifecycle: how do you know what resources are not yet managed by IaC in your cloud? And of the managed resources, do they remain the same in the cloud as when you defined them in code?
Changes to cloud workloads happen all the time. Increasing the number of workloads running in the cloud means an increasing number of people and authenticated services interacting with infrastructure across several cloud environments. As IaC becomes more widely adopted and IaC codebases become larger, it becomes more and more difficult to manually track whether configuration changes are being accounted for, which is why policies are required to keep on top of everything.
Misconfigurations are, of course, rife across cloud computing environments, but most organizations are not prepared to manually address the issue. If there are differences between the IaC configuration and the actual infrastructure, this is when cloud or configuration drift can occur.
Drift is when your IaC state, or configurations in code, differs from your cloud state, or configurations of running resources. For example, if you use Terraform, one of the leading IaC tools, to provision a database without encryption turned on, and a site reliability engineer goes in and adds encryption using the cloud console, then the Terraform state file is no longer in sync with the actual cloud infrastructure. The code, in turn, becomes less useful and harder to manage.
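The essence of drift detection is a diff between the attributes recorded in the state and the attributes of the live resource. A hypothetical sketch in plain Python (this is not Terraform's actual state format; the attribute names are invented to mirror the database example above):

```python
# Hypothetical sketch of drift detection: compare each attribute recorded
# in the IaC state against the live resource and report any mismatches.

def detect_drift(state, live):
    """Return {attribute: (state_value, live_value)} for every mismatch."""
    drift = {}
    for key in set(state) | set(live):
        if state.get(key) != live.get(key):
            drift[key] = (state.get(key), live.get(key))
    return drift

# The scenario from the text: a database provisioned without encryption,
# then encrypted by hand via the cloud console.
state = {"engine": "postgres", "encrypted": False}
live  = {"engine": "postgres", "encrypted": True}

print(detect_drift(state, live))  # {'encrypted': (False, True)}
```

An empty result means the code and the running resource agree; anything else is drift that needs to be reconciled, either by updating the code or by reverting the manual change.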
Now imagine this volume of code, at scale, across multiple clouds. Tricky. So what can be done?
Prescriptive vs Declarative: Establishing an IaC Baseline
The issue lies in the approach.
Today, the majority of organizations are operating in what could be described as a prescriptive way, building environments where processes are still manual, even with the introduction of IaC. People are still needed to connect to platforms to patch, enhance, and configure infrastructure. The issue, as we've seen, is managing the delta between the current automation of a platform and the rest of its life cycle.
A shift is needed to catch the drift. A shift towards a declarative approach that removes people from the equation completely, so that what is versioned is what is in production. This removes one of the biggest concerns for organizations, which is how to react to problems when they occur and how to reproduce infrastructure. Establishing an IaC baseline of an environment that is deemed to be the known state means that what is versioned becomes the constant source of truth and the desired state of an organization's infrastructure.
Once this has been established, it opens the door to bigger and better automation and an improved all-round experience for developers, who no longer need to worry about taking time away from developing to configure infrastructure and troubleshoot issues. Retro-engineering IaC in a way that enables organizations to define what infrastructure needs to look like in a declarative way would enable organizations to start taking back control of their IaC and create visibility into drift long before it gets out of hand.
What often first started in many organizations as just one cloud instance has now spiralled, ballooned, morphed and mushroomed into a hydra's head of cloud applications that often lacks coherence or defined procedures. The only two certainties are that cloud applications are here to stay and that companies that fail to manage their cloud portfolio are going to face some difficult times. The 100-plus cloud applications used by businesses today require active and planned management that eliminates human intervention and provides a consistent approach to the processes involved. The good news is that, through IaC, the tools are at hand to do the job.
Industry News
Progress announced its partnership with the American Institute of CPAs (AICPA), the world’s largest member association representing the CPA profession.
Kurrent announced $12 million in funding, its rebrand from Event Store and the official launch of Kurrent Enterprise Edition, now commercially available.
Blitzy announced the launch of the Blitzy Platform, a category-defining agentic platform that accelerates software development for enterprises by autonomously batch building up to 80% of software applications.
Sonata Software launched IntellQA, a Harmoni.AI powered testing automation and acceleration platform designed to transform software delivery for global enterprises.
Sonar signed a definitive agreement to acquire Tidelift, a provider of software supply chain security solutions that help organizations manage the risk of open source software.
Kindo formally launched its channel partner program.
Red Hat announced the latest release of Red Hat Enterprise Linux AI (RHEL AI), Red Hat’s foundation model platform for more seamlessly developing, testing and running generative artificial intelligence (gen AI) models for enterprise applications.
Fastly announced the general availability of Fastly AI Accelerator.
Amazon Web Services (AWS) announced the launch and general availability of Amazon Q Developer plugins for Datadog and Wiz in the AWS Management Console.
vFunction released new capabilities that solve a major microservices headache for development teams – keeping documentation current as systems evolve – and make it simpler to manage and remediate tech debt.
Check Point® Software Technologies Ltd. announced that Infinity XDR/XPR achieved a 100% detection rate in the rigorous 2024 MITRE ATT&CK® Evaluations.
CyberArk announced the launch of FuzzyAI, an open-source framework that helps organizations identify and address AI model vulnerabilities, like guardrail bypassing and harmful output generation, in cloud-hosted and in-house AI models.
Grid Dynamics announced the launch of its developer portal.
LTIMindtree announced a strategic partnership with GitHub.