Over the last few years, zero trust has achieved widespread acceptance and adoption, and rightly so. The zero trust security model significantly reduces risk by minimizing the enterprise attack surface and limiting the ability for bad actors to move laterally within a network. With zero trust, organizations move from a “trust but verify” approach to “never trust, always verify.”
Technically, zero trust applies to all users, devices, and workloads, but in most organizations, zero trust has become synonymous with user access to applications. As organizations shift to the cloud, applying zero trust principles to cloud workloads is as critical as applying the model to user access.
So, how do you accomplish this? Start by assuming that everything—both internal and external to the network—is untrusted and requires verification prior to being granted access. Authenticating and authorizing every workload that attempts to communicate establishes an identity for each one. Then, use that identity to build least-privilege policies that restrict workload access to only what is absolutely necessary.
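To make the "never trust, always verify" posture concrete, here is a minimal sketch of a default-deny admission gate. All names are illustrative, not any vendor's API: a workload's connection attempt is rejected unless its identity has already been verified and registered.

```python
# Identities established earlier through authentication/attestation.
# Anything not in this set is untrusted by definition.
VERIFIED_WORKLOADS = {"billing-api", "orders-db"}

def admit(workload_identity: str) -> bool:
    """Default deny: only workloads with a verified identity may
    proceed to policy evaluation; everything else is blocked."""
    return workload_identity in VERIFIED_WORKLOADS

admit("billing-api")   # a verified workload proceeds to policy checks
admit("cryptominer")   # an unknown workload never gets on the network
```

The key design choice is the default: access is denied unless an identity check succeeds, rather than allowed unless something looks suspicious.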
A core principle of zero trust access is that all users must be authenticated and authorized before being granted access. Most implementations leverage not only strong authentication but also several elements of context, such as endpoint posture, when making access decisions.
With workloads, authentication and authorization are more challenging to achieve, but entirely feasible with the right technologies. In most implementations, unless a workload has been identified by a set of attributes—like a workload fingerprint or identity—it is untrusted and blocked from communicating.
Zscaler Workload Segmentation, for example, computes a cryptographic identity for every workload. This identity takes into account dozens of variables, including hashes, process identifiers, behaviors, container and host ID variables, reputation, hostnames, and more. This identity is verified every time a workload attempts to communicate and is paired with least-privilege policies when determining whether or not to grant access.
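The general idea behind such an identity can be sketched as hashing a canonical encoding of the workload's observable attributes. This is a simplified illustration under assumed attribute names, not Zscaler's actual algorithm, which combines many more signals:

```python
import hashlib
import json

def workload_fingerprint(attributes: dict) -> str:
    """Derive a stable identity by hashing a canonical (sorted-key)
    encoding of the workload's attributes. Changing any attribute,
    such as the binary hash, produces a different identity."""
    canonical = json.dumps(attributes, sort_keys=True).encode()
    return "sha256:" + hashlib.sha256(canonical).hexdigest()

# Illustrative attributes; real systems gather dozens of signals.
attrs = {
    "binary_hash": "d2a84f4b8b650937ec8f73cd8be2c74a",
    "process_name": "billing-api",
    "host_id": "host-17",
    "container_image": "registry.example.com/billing:1.4",
}
identity = workload_fingerprint(attrs)

# Verification at connection time: recompute and compare.
assert workload_fingerprint(attrs) == identity
```

Because the fingerprint is recomputed on every communication attempt, a tampered binary or altered runtime environment yields a different identity and fails verification.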
Prior to zero trust, the access model for users, whether they were on the local network or remote, assumed that the user could be trusted with permissive access and that their identity had not been compromised. In practice, users could access nearly anything on the corporate network, making it very convenient for bad actors to move laterally across an organization's network.
Zero trust changes that by implementing least-privilege principles, granting user access not to networks, but only to the specific applications and resources that a user needs to get their job done. In this model, if a user’s identity is compromised or if that user becomes malicious, the amount of damage they can do is limited by a much narrower set of resources and applications they can access.
As we have learned from countless ransomware and malware attacks and compromised legitimate software—such as was the case with the SolarWinds attack—applying similar least-privilege concepts to workloads can dramatically lower the risk of a breach and limit the blast radius of compromised or malicious software.
Least privilege for cloud workloads means that rather than creating flat networks, which allow overly permissive access in your cloud environments, your policies must only allow access to the users and applications that the workload requires in order to function properly.
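A least-privilege workload policy can be pictured as an explicit allowlist mapping each workload identity to the only peers and ports it may reach, with everything else denied. The policy entries and names below are hypothetical:

```python
# Hypothetical policy table: source identity -> allowed (destination, port) pairs.
POLICY = {
    "billing-api": {("orders-db", 5432)},  # billing may reach only its database
    "orders-db": set(),                    # the database initiates nothing
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: permit only explicitly listed (destination, port)
    pairs for a given source workload identity."""
    return (dst, port) in POLICY.get(src, set())

is_allowed("billing-api", "orders-db", 5432)  # permitted by policy
is_allowed("billing-api", "hr-db", 5432)      # denied: not on the allowlist
```

Contrast this with a flat network, where the billing workload could reach the HR database simply because both sit on the same subnet.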
Once these two steps—verifying workload identity and enforcing least-privilege policy—are completed, only known and verified workloads can communicate on the network. And those workloads have access only to the users, applications, and resources necessary for them to function properly.
The result? Dramatic risk reduction and a much smaller attack surface. Malicious software that cannot authenticate is kept off the network entirely. And if a bad actor does manage to compromise a workload, the attacker's ability to move laterally across the network will be severely curtailed.