Modern Microsegmentation: Zero Trust Through Software Identity
July 16, 2019

Tom Hickman
Edgewise Networks

Security teams face a never-ending, ever-growing onslaught of attacks. According to the 2019 Verizon Data Breach Investigations Report, 88% of data breaches involve malware or hacking. What's more, the average cost of a malware attack is $2.4 million, making any breach of a company's network a costly operational ordeal. These facts create a compelling call to action: there has to be a better way!

Managing the sprawl and complexity of even an average-sized company's technology ecosystem is a monumental challenge. Modern networks span on-premises data centers, containers, and multiple clouds; new applications and technologies are constantly being deployed; and dozens of new vulnerabilities arise daily, each needing to be patched. It takes just one weak point in an organization's network to allow a damaging compromise.

The potential damage is far greater when the organization is operating flat networks — those without any internal security checks and balances on communicating software and services. Flat networks have the advantage of making it easy for people, software, and machines to access resources. This increases efficiency but also makes lateral movement easy for attackers if they're able to bypass perimeter defenses. As a result, security teams must prepare for the certainty that, eventually, something malicious will gain a foothold in the network. In response, security teams are refocusing their work on the need to harden internal network security. And the methodology they're turning to is zero trust.

In a zero trust environment, all communications are treated as potentially hostile by default. Through microsegmentation, the network is divided into many smaller, secure segments. Anytime a device, host, service, or application wants to communicate across these segments, both sides of the communication must be authenticated and authorized — a process that needs to take place continuously.
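To make the idea concrete, here is a minimal sketch in Python of what a default-deny check at a segment boundary might look like. The names and data structures are hypothetical, not any particular vendor's implementation: both endpoints must present a verified identity, and only explicitly allowed flows get through.

    # Minimal illustration (hypothetical names) of default-deny, mutually
    # verified communication across segment boundaries.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Identity:
        name: str          # e.g. "payments-api"
        fingerprint: str   # e.g. SHA-256 of the verified binary
        verified: bool     # set only after authentication succeeds

    # Explicit allowlist of (client identity, service identity, port);
    # anything not listed is denied by default.
    ALLOWED_FLOWS = {
        ("web-frontend", "payments-api", 8443),
    }

    def authorize(client: Identity, service: Identity, port: int) -> bool:
        # Both sides must be authenticated before authorization is considered.
        if not (client.verified and service.verified):
            return False
        return (client.name, service.name, port) in ALLOWED_FLOWS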

Step 1: Understand everything that's on the network

Before security teams can set up policies to govern which communications are authorized, they must first establish an accurate, up-to-date inventory of all network assets and map data flows and dependencies between them. The inventory and map must be updated in real time to capture network changes as they happen.
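As a rough illustration, the sketch below uses a deliberately simplified, hypothetical data model to show how an inventory and dependency map can be kept current from a stream of observed connections rather than from periodic manual audits.

    # Simplified sketch of an asset inventory and dependency map that is
    # updated as connections are observed. Data model is hypothetical.
    from collections import defaultdict

    assets = {}               # asset_id -> metadata (host, workload, cloud, ...)
    flows = defaultdict(set)  # source asset -> set of (destination asset, port)

    def observe_connection(src_id, src_meta, dst_id, dst_meta, port):
        # Register any asset the first time it is seen, so the inventory
        # tracks the network as it changes, not a point-in-time snapshot.
        assets.setdefault(src_id, src_meta)
        assets.setdefault(dst_id, dst_meta)
        flows[src_id].add((dst_id, port))

    # Example: a web tier talking to a database shows up as a dependency edge.
    observe_connection("web-01", {"env": "prod"}, "db-01", {"env": "prod"}, 5432)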

Today's networks change too often for manual updates to reliably capture all the changes, which could leave vital assets unprotected. Managing that dynamism is a task best suited for automation and machine learning.

Step 2: Verify all network communication

The next consideration is how communications will be verified. Historically, microsegmentation policies have relied on network address-based information, but this method is difficult to implement, even harder to maintain, and completely unsuited for containers or the cloud.

IP addresses change frequently, which necessitates constant updating of policies. That upkeep is somewhere between impractical and impossible in autoscaling environments, where addresses are ephemeral and always in flux. Further, addresses can only tell an operator where a communication is coming from and going to, not the character of what's trying to communicate; in other words, whether the communication is authorized or malicious.
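The brittleness is easy to demonstrate. In the toy example below (the addresses are made up), a rule keyed to an IP address stops matching the legitimate workload as soon as it is rescheduled, while whatever later lands on the old address inherits access the rule never intended to grant.

    # Illustration only: an address-based rule is keyed to where a workload
    # happens to live, not what it is.
    def ip_rule_allows(src_ip, dst_ip, port):
        # "Allow the app at 10.0.4.17 to reach the database at 10.0.9.22:5432"
        return (src_ip, dst_ip, port) == ("10.0.4.17", "10.0.9.22", 5432)

    # After an autoscaling event the same app comes back at a new address
    # and is silently blocked:
    print(ip_rule_allows("10.0.6.41", "10.0.9.22", 5432))   # False

    # Any workload later scheduled onto the old address inherits the access:
    print(ip_rule_allows("10.0.4.17", "10.0.9.22", 5432))   # True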

Therefore, verification should be based on software identity. It's a concept similar to user identity, but much more rigorous because it includes far more factors, such as a SHA-256 hash of the binary, executable signatures, and portable executable (PE) headers. These identities are complex, relying on immutable and often cryptographic properties that, unlike an IP address, cannot be spoofed.
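One building block of such an identity is a cryptographic fingerprint of the executable itself. The sketch below shows only that hashing step; a real identity would combine it with other immutable properties such as signature details and PE header fields.

    # Back-of-the-envelope sketch of one ingredient of software identity:
    # a SHA-256 fingerprint of the executable.
    import hashlib

    def binary_fingerprint(path: str) -> str:
        sha256 = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MB chunks
                sha256.update(chunk)
        return sha256.hexdigest()

    # The fingerprint stays the same wherever the workload runs, unlike its
    # IP address, which is why policies keyed to it survive rescheduling.
    # print(binary_fingerprint("/usr/sbin/nginx"))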

By using software identity, security is drawn closer to the assets themselves, and the control plane is decoupled from the network. As a result, applications can be moved to autoscaling clouds or containers without breaking policies, which makes identity-based microsegmentation a more secure methodology than microsegmentation based on network addresses. Another benefit is the increased flexibility it gives DevOps teams: with policies constructed around the application itself, developers can build and secure applications in tandem.

Step 3: Microsegment to achieve a zero trust posture

Finally, a zero trust environment should operate according to the principle of least privilege, which grants no more rights than are necessary to accomplish required tasks, minimizing the number of network assets and potential attack paths that can access sensitive data and applications.

With the zero trust framework in place and up-to-date visualization of all network entities and communication pathways identified, security teams can now microsegment their networks and begin building policies. The policies should be designed to shrink the attack surface by eliminating unnecessary pathways, and should be based on software identity instead of ports and addresses. The process of building policies should be automated using machine learning and advanced analytics.
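As a simplified illustration of what the result might look like, the sketch below turns an observed dependency map into identity-based allow rules and denies everything else by default. The service names are hypothetical.

    # Simplified sketch: derive least-privilege policies from observed flows
    # between verified software identities; everything else is denied.
    observed_flows = {
        ("web-frontend", "payments-api", 8443),
        ("payments-api", "orders-db", 5432),
    }

    # Each policy names software identities, not addresses, so it still
    # holds when workloads are rescheduled or scaled out.
    policies = [
        {"source": src, "destination": dst, "port": port, "action": "allow"}
        for (src, dst, port) in observed_flows
    ]

    def evaluate(src_identity: str, dst_identity: str, port: int) -> str:
        for rule in policies:
            if (rule["source"], rule["destination"], rule["port"]) == (src_identity, dst_identity, port):
                return "allow"
        return "deny"  # least privilege: anything not explicitly allowed is blocked

    print(evaluate("web-frontend", "payments-api", 8443))  # allow
    print(evaluate("web-frontend", "orders-db", 5432))     # deny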

Once complete, this recipe yields great benefits: bad actors will have much less space in which to maneuver. Even when malware gains access to an endpoint or a host, the damage stops there. The security team gains fine-grained control over the most vital data and applications, and can apply the security best practice of least privilege all the way down to the level of individual applications on individual hosts or workloads.

The days of flat networks are coming to an end. No responsible security team can afford to leave the entire environment exposed to any attacker that penetrates the perimeter. Thankfully, by leveraging software identity-based microsegmentation and automation powered by machine learning, achieving zero trust is now within reach of every organization.

Tom Hickman is VP of Engineering at Edgewise Networks