
The Arrival of Zero Trust: What Does It Mean?

A discussion with Michael Loftus, Andrew Vezina, Rick Doten, and Atefeh Mashatan.

It used to be that enterprise cybersecurity was all castle and moat. First, secure the perimeter; then, for everything that went on inside it, “Trust, but verify.”

The perimeter, of course, was the corporate network. But what does that even mean at this point? With most employees now working from home at least some of the time—often on their own smartphones or laptops—and organizations relying increasingly on cloud computing, there’s no such thing as a single, enterprise-wide perimeter anymore.

And, with corporate security breaches having become a regular news item over the past two decades, trust has essentially evaporated as well.

John Kindervag, who articulated the zero-trust enterprise defense strategy a little over a decade ago, explained: “The concept is framed around the principle that no network, user, packet, interface, or device—whether internal or external to the (corporate) network—should be trusted. Some people think zero trust is about making a system trusted, but it really involves eliminating the concept of trust from cybersecurity strategy.”

Easier said than done, naturally. Still, how well is that effort going? We asked Atefeh Mashatan, the founder and director of the Cybersecurity Research Lab at Toronto Metropolitan University (TMU), to explore that with three enterprise security professionals of long standing: Andrew Vezina, the CISO at Equitable (EQ) Bank; Rick Doten, the VP of information security at Centene Corporation, as well as CISO of Carolina Complete Health; and Michael Loftus, an IT consultant with considerable corporate security experience.

ATEFEH MASHATAN: There has been considerable buzz about the zero trust security model. Some call it the “gold standard” of cybersecurity, while others argue it’s nothing more than marketing jargon for networking practices that organizations should have adopted some time ago. What’s the truth of the matter?

MICHAEL LOFTUS: It’s somewhere along that continuum since zero trust is a conceptual framework that challenges the long-held assumption that, if you’re physically within a firewall perimeter and are part of some specific enterprise domain, implicit trust will be granted to both you and your device. Indeed, zero trust challenges the very idea that any sort of implicit trust can be considered safe any longer.

But if you’re asking, “Don’t these supposedly new zero trust measures often include techniques that have been deployed previously?”, the answer is, “Absolutely!” This is particularly true when it comes to MFA (multifactor authentication) and advanced analytics. And yet, do I still think zero trust qualifies as something new and buzzy? Absolutely!

Ultimately, the truth is that zero trust is an approach that involves doing things differently with respect to how users and devices are authenticated, as well as how risk is managed. That’s all because we will no longer assume that any part of the environment can be granted implicit trust.

MASHATAN: But shouldn’t we have been doing this all along?

LOFTUS: I wouldn’t say that, since I think zero trust came up in response to the many different lessons we have absorbed in moving to more distributed environments where applications, data, and identity mechanisms now reside in the cloud. With that shift, security mechanisms—once good enough to protect people on the enterprise premises who were linked to the enterprise network—became inadequate.

MASHATAN: Which techniques do you consider to be absolute minimum requirements for a zero trust environment? What other capabilities would you regard as highly desirable?

ANDREW VEZINA: Whenever I picture a zero trust architecture, I start in the middle and work my way out. The central element that everything seems to come back to is the trust engine responsible for making decisions about whether some particular user, coming from some particular device, ought to be allowed access to some particular resource or application. As Mike indicated, this used to be a much simpler call based on whether the user could supply the correct Active Directory password. But now, under zero trust, that’s changing in two major respects.

First, we’re going to be working with a much richer set of information when it comes to users and their authentications. The biggest change here is the shift from a simple password to multifactor authentication. We’re also starting to see many new capabilities and tools that will be able to take advantage of all manner of telemetry related to a user’s login characteristics.

The second major improvement worth noting is the ability to do more in terms of authenticating devices simply because we’re able to obtain more information about the devices people typically use whenever they attempt to access an environment. The move toward implementing zero trust starts with pushing all the authentication decisions out toward the apps such that this greater context might be incorporated in deciding whether access ought to be granted to whatever resource has been requested.
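
As a rough illustration of the decision logic Vezina describes, here is a minimal Python sketch of a hypothetical trust engine that weighs user-authentication strength and device telemetry per application. All of the field names, weights, and thresholds are invented for illustration; commercial trust engines work quite differently in the details.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool       # did the user complete multifactor authentication?
    device_managed: bool   # is the device enrolled in management?
    device_patched: bool   # is the device at the required patch level?
    geo_anomaly: bool      # is the login coming from an unusual location?

def trust_score(req: AccessRequest) -> int:
    """Combine user and device telemetry into a single 0-100 score."""
    score = 0
    score += 40 if req.mfa_passed else 0
    score += 30 if req.device_managed else 0
    score += 20 if req.device_patched else 0
    score -= 25 if req.geo_anomaly else 0
    return max(0, min(100, score))

def decide(req: AccessRequest, app_sensitivity: int) -> str:
    """Per-application decision: sensitive apps demand a higher score."""
    if trust_score(req) >= app_sensitivity:
        return "allow"
    if req.mfa_passed:             # user is verified, but posture is weak
        return "allow-restricted"  # e.g., browser-only, no downloads
    return "deny"

# An MFA-verified user on an unmanaged device asks for a sensitive app.
req = AccessRequest("alice", mfa_passed=True, device_managed=False,
                    device_patched=False, geo_anomaly=False)
print(decide(req, app_sensitivity=70))  # -> allow-restricted
```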


From there, you can start thinking about how people are going to access the applications and what this suggests your network ought to look like—bearing in mind you will no longer need a local area network or a corporate network to serve as a source of trust. In other words, we are moving to a security model where it basically doesn’t matter where a user is coming from, since you will be relying on user and device telemetry for authentication instead of what the network tells you.

There’s also an opportunity to simplify and improve the user experience through SASE (secure access service edge) and similar technologies. Fundamentally, this just gives users a cleaner, easier route to get from wherever they happen to be to whatever applications they’re looking to use. That, in turn, leads to an architecture where you rely on SASE technology that’s typically cloud-based to provide access to the Internet in a safe manner even as you also rely on software-defined perimeter technology to allow connectivity from the Internet to whatever application or resource a user may have requested.
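
To make the software-defined-perimeter piece concrete, the following toy sketch shows a broker that discloses an application endpoint only after the identity and posture checks pass; until then, the application is effectively dark to the network. The names here are hypothetical, and a real SDP broker would also establish the encrypted tunnel, which this sketch omits.

```python
# Toy software-defined-perimeter broker: the application has no routable
# address from the requester's point of view until authorization succeeds.
APP_ENDPOINTS = {"payroll": "10.20.0.15:443"}  # hypothetical internal map

def broker_connect(user_token_valid: bool, device_posture_ok: bool,
                   app: str) -> str | None:
    """Disclose a connection endpoint only after both checks pass."""
    if not (user_token_valid and device_posture_ok):
        return None                # app stays dark: nothing to probe or scan
    return APP_ENDPOINTS.get(app)  # a real broker would open a tunnel here

print(broker_connect(True, False, "payroll"))  # -> None (posture failed)
```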

MASHATAN: Now that you’ve given us a sense of how SASE can be combined with zero trust, can you compare the two to point out their similarities and differences?

VEZINA: First, I’d describe zero trust as a strategy, whereas SASE is a technology that, over the past few years, has largely displaced a whole class of on-prem tools many organizations once used to evaluate user traffic going out to the Internet—their egress traffic, in other words. Typically, that included Web content filtering, malware analysis, and perhaps even data-loss prevention analysis, along with a couple of other capabilities.

Prior to SASE, everything was built on the corporate network. So, if users were working at home and wanted to access the Internet for any reason, they’d first have to establish a VPN (virtual private network) connection into the corporate network and then execute a hairpin turn to go back out to the Internet—passing through the corporate proxy and malware tools, as well as probably three, four, or five other security tools along the way. With SASE, instead of jumping onto the corporate network first, users are routed through a SASE provider that delivers all the necessary security functions by way of a cloud-based model. That model is complementary to the zero trust strategy in the sense that it doesn’t require a corporate network.

Because these two concepts are complementary, it seems most of the organizations that are implementing zero trust architectures now are moving toward SASE as well. With that said, the two are not necessarily interdependent. That is, you can implement SASE and yet still employ a legacy model for security. And it’s at least theoretically possible to implement zero trust without SASE, but that might prove to be difficult at this point, given current technology.

RICK DOTEN: I tend to look at SASE as infrastructure, which, as Andrew indicated, replaces a variety of tools previously operated on premises. And yes, putting all that in the cloud certainly simplifies things for users. That infrastructure also still provides a central point for implementing all policy decisions. That’s all great, particularly for small and mid-sized organizations that don’t have a lot of infrastructure and may already be relying largely on cloud-based applications and services anyway. For them, SASE is ideal.

But I represent a very large company where this would be regarded as sacrificing visibility and control since it doesn’t allow for directly managing and configuring all the different feature sets and monitoring options, even if we are allowed to set the rules and control things at a high level.

Bottom line, SASE is great for small to medium-sized organizations that would love to rely upon a specialized external infrastructure to handle all their security monitoring, response, and controls. That certainly promises to reduce their own management burdens and basically gives their users a clean pipe to all their applications.

And yet, this isn’t necessarily going to work well for all organizations. Besides the matter of scale, there also are certain regulated industries (particularly in the U.S.) where companies may not feel comfortable outsourcing protected data—such as healthcare information and client financial information—to a SASE provider. In this respect, it’s important to remember there are certain things the SASE provider will want to inspect in decrypted form, particularly when it comes to behavior monitoring.

It seems zero trust might be best described as a strategy or approach, which is to say it’s somewhat nebulous and hard to pin down. That, of course, makes it an absolute gift to those who market cybersecurity products and services.

Indeed, zero trust has been promoted with real gusto. The trade press commonly uses the word “hype” in reporting about zero trust and the efforts made to market it. Yet there are aspects of the approach that even critics readily agree are entirely sensible.

One thing seems certain: The jury is still out as to whether zero trust will ever manage to live up to all the marketing froth.

MASHATAN: To date, it’s fair to say the zero trust strategy has taken hold chiefly in North America. You don’t see a lot of zero trust initiatives being launched in Europe, for example. Also, it’s noteworthy that NIST (National Institute of Standards and Technology) is the only standards body so far to have released a guide for zero trust architecture.

Why is it that all this energy seems to have been concentrated in North America?

DOTEN: That’s an insightful question since it points out that zero trust is being aggressively marketed to companies in the U.S. and not so much to everyone else. That does make you wonder if something might be going on here. What I think is at the root of this is a history in the U.S. of selling things through the subtle manipulation of fear, uncertainty, and doubt—the so-called “FUD factor.” In this case, the pitch runs something like: “Here’s this incredibly complex problem that’s too big for you to handle on your own. But, for a price, we can help.” As North Americans, we’ve been conditioned to respond to that sort of marketing, but it’s not necessarily how the rest of the world has been conditioned. In fact, in many European countries, this sort of approach is illegal.

I do think zero trust is being hyped in the U.S., and, frankly, I’m embarrassed for our industry, as well as for NIST. In fact, if you look closely at the text in the NIST guide, you will find arguments that closely echo narratives already put forth by each of the big vendors—sometimes using the very same words.

I’ve been thinking quite a bit about this recently because, on the current roundtable circuit, there’s a lot of talk about digital transformation and the push to move everything into the cloud. To be honest, the whole zero trust pitch just dovetails a little too neatly with that. In fact, I think zero trust is little more than the security field’s tagalong concept to digital transformation.

The funny thing is, this stuff isn’t even particularly new. I mean, when you think about it, we’ve had network access control for quite some time. As much as 20 years ago, you could connect to a VPN that would scrutinize your environment to make sure your operating system was up to date, your antivirus protection was on and attached, and you had the right versions of software loaded, and so on. All these things involved remote access. We also had behavior monitoring and risk-based monitoring.
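
Doten’s point that posture checking predates zero trust is easy to make concrete: the sketch below is the kind of connect-time check VPN and network-access-control products have run for years. The attribute names and required values are assumptions for illustration only.

```python
# A minimal device-posture check of the sort NAC/VPN products have long
# performed at connect time; field names and thresholds are hypothetical.
REQUIRED_PATCH_LEVEL = "2024.10"

def posture_ok(report: dict) -> bool:
    """Admit the device only if every required attribute checks out."""
    return (report.get("os_patch_level", "") >= REQUIRED_PATCH_LEVEL
            and report.get("antivirus_running") is True
            and report.get("disk_encrypted") is True)

print(posture_ok({"os_patch_level": "2024.09",
                  "antivirus_running": True,
                  "disk_encrypted": True}))  # -> False (OS out of date)
```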

These concepts are not new. The only difference is that, whereas we used to access all these applications through a portal in the corporate infrastructure, we now can get to them in the cloud without going through the corporate infrastructure—which is great but not exactly the huge leap forward that the zero trust marketing might lead you to believe.

MASHATAN: Do the rest of you agree the zero trust strategy is being oversold?

VEZINA: I share much of Rick’s skepticism. Over the years, we have certainly seen plenty of new approaches to security, whether in the form of technologies or strategies. The introduction of malware sandboxes about 15 years ago as a new way to identify malware is one that comes to mind. Until then, we all used signature-based antivirus detection, which wasn’t working so well. Then this new technology came out that proved to be a clear improvement. But it also was largely oversold as the solution to breaches and, accordingly, made billions of dollars for a few vendors. Now, it’s nothing more than table stakes for most organizations, and it never actually managed to solve the problem. It helped in some cases, but then the attackers evolved.

It could be argued that zero trust is a bit different in that it’s a strategy rather than a tactical countermeasure. But we don’t know yet whether it’s going to live up to the hype, and red team exercises will likely be one of the best ways to evaluate its effectiveness.

In fact, in recent years, I’ve been involved with several different red team exercises run against conventional defenses. Each of those attacks had different objectives and used different approaches. But the common theme was that, once the attackers established a foothold in the environment, there were many different avenues they could take to move laterally to get at whatever important asset they were looking to compromise or use while executing some sort of objective—like data theft or ransomware.

In the most recent red team exercise at our firm, we provided the attackers with two initial footholds in the environment. Machine #1 was a traditional workstation connected to the corporate network that was accorded the trust that commonly comes along with this. Machine #2 was more aligned to the requirements of zero trust in that it had no ability to connect to the corporate network or to any other machines on the network. Instead, it could be used to connect with and make use of only a small number of applications.

When initiating their attack from this machine, the attackers found they were trapped in that user’s workspace with no way to travel laterally. In this case at least, the decision to step away from trusting the corporate network limited the potential to move laterally, which kept these attackers from achieving their objectives.

Conversely, with machine #1 and its legacy security architecture, the attackers were easily able to move laterally to other machines and user accounts on the corporate network, and this ultimately allowed them to achieve their objectives while the SOC (security operations center) scrambled to respond to the attack. Given this result, I think an architecture designed to curb lateral movement is one that holds considerable promise.

DOTEN: I agree wholeheartedly. We used to joke that, while the corporate network might be hard and crunchy on the outside, it usually proved to be soft and chewy on the inside. There’s a good argument to be made for any sort of defense that promises to reduce an attacker’s ability to move laterally.

I also applaud your malware sandboxing example. The big lesson there is that people trusted that claim implicitly. I had a malware sandbox eight years ago, and it worked exactly as advertised 60% of the time. The other 40%, it didn’t. I mean, it didn’t open the malware, it didn’t block it … it just dropped it into the mailbox, which just goes to illustrate the hazards of returning to a vendor-driven environment where you buy something, install it, and then simply expect it to work. The real danger is the false sense of security—the belief that you don’t really need to do anything more about the situation since you’re already protected. Guess again!

MASHATAN: People must find all the hype in this space disturbing. What telltale signs should we be looking for?

DOTEN: We are certainly not talking about anything quite like blatant lies and distortions. I think it’s more a matter of vendors not giving customers all the help they need to filter through the glittering generalities in the marketing pitch. Mind you, zero trust as a concept is a perfectly fine approach. But if it doesn’t happen to be something that matches up well with your organization’s risks and processes, then it’s not for you.

VEZINA: I think the best way to speak to this is by way of example. Some vendors push identity governance as the key to implementing a zero trust strategy.

One of the more prominent controls in identity governance is the practice of reviewing or recertifying user access on a certain frequency or based on certain events. It’s every auditor’s favorite security control. But I find it difficult to connect this with the core tenets of zero trust, whereas other identity controls such as MFA and the central trust engine are clearly aligned with it.
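
For readers unfamiliar with the recertification control Vezina mentions, the bookkeeping behind it can be sketched in a few lines: periodically flag every access grant whose last sign-off has aged past a review window. The 90-day window and record layout below are assumptions for illustration.

```python
from datetime import date, timedelta

# Hypothetical access-grant records: (user, entitlement, last_certified).
GRANTS = [
    ("alice", "payments-approver", date(2025, 1, 10)),
    ("bob",   "db-admin",          date(2024, 6, 2)),
]

def due_for_recertification(grants, window_days=90, today=None):
    """Flag grants whose last sign-off is older than the review window."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    return [(user, entitlement) for (user, entitlement, certified) in grants
            if certified < cutoff]

print(due_for_recertification(GRANTS, today=date(2025, 3, 1)))
# -> [('bob', 'db-admin')]  (alice was certified within the window)
```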

Whatever else might be said about zero trust, it isn’t magic and won’t be achieved with just a few simple steps. As a sea change in the overall approach to enterprise security, it’s bound to entail considerable effort, substantial investments, and plenty of time—regardless of what the marketing might suggest.

Yet, for all that, maintaining a more traditional approach isn’t a viable option since corporate attack surfaces now have grown much larger, the stakes are higher, and the threat models have evolved in a multitude of ways from those encountered just a few years ago.

So … what to do?

MASHATAN: Let’s dive a bit deeper into the matter of zero trust implementation. Just how complicated is that?

LOFTUS: How complicated? That’s probably in the eye of the beholder. The reality is that this represents a significant technology shift, which never is going to be trivial. Whether you’re focused on authentication or looking to make changes to your network and move to a SASE provider, you’ve got some hard work ahead, and that’s going to take some time.

Much of what is happening in the marketplace now is focused on the authentication side of things—MFA, cloud-based IdPs (identity providers), and application authentication protocols. That’s just because all the things we’ve done over the years with Active Directory, Kerberos, and Windows NTLM (New Technology LAN Manager) just aren’t going to cut it in a zero trust world. And then, of course, SASE is also exploding.
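
As one concrete example of that shift, a modern application typically validates a signed token from a cloud IdP rather than deferring to Kerberos or NTLM on the LAN. Here is a minimal sketch using the PyJWT library; the issuer URL, JWKS address, and audience are placeholders, not any particular provider’s values.

```python
# Minimal OIDC-style token validation with PyJWT (pip install "pyjwt[crypto]").
# Rather than trusting the LAN, the app verifies a token signed by the IdP.
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"  # placeholder

def validate_token(token: str) -> dict:
    """Verify signature, issuer, audience, and expiry; return the claims."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="my-app",                 # placeholder audience
        issuer="https://idp.example.com",  # placeholder issuer
    )
```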

MASHATAN: To what degree do legacy systems and processes complicate matters?

LOFTUS: It’s sure going to be difficult to get very far down this path if you’re relying on old-school VPNs and on-premises directories and storage. To my way of thinking, you really need to have a cloud-enabled IdP, along with the ability to egress your traffic just as close to the end user as possible.

The real problem with all that legacy, however, is what it does to hamstring your ability to take advantage of the whole zero trust approach. That is, if you have a lot of legacy applications, you might be able to adopt zero trust to better control end-user access, but there also are many zero trust practices you won’t be able to employ simply because you won’t be able to intercept most legacy protocols.

MASHATAN: Once an organization has made the decision to adopt zero trust, what are its biggest challenges?

VEZINA: If that organization happens to be running legacy applications, as Mike just suggested, it’s likely to face some significant difficulties, from both a zero trust and a cloud migration perspective.

On the user side, there’s plenty more heavy lifting to be done. For one thing, you will need to make sure you’ve got the right technology in place to handle user authentication. And you are probably going to want to provide for that in a centralized manner that handles most, if not all, of your applications. This means integrating all those legacy applications with the centralized identity provider.

So, there will be a fair amount of work to do, but—apart from dealing with legacy technology—I don’t think of it as exceptionally challenging work. Still, there’s no question that it represents a lot of heavy lifting, and you’ve got to identify the right spots in your transition roadmap to make those investments.

MASHATAN: What might some of the early steps in that roadmap look like?

DOTEN: It goes back to the fundamentals of automating any business process—understanding what’s there already, mapping out where you need to go, and then determining where your security gaps are likely to surface. In that respect, the challenges are already familiar. But then add to that the issues associated with legacy versus modern, which we’ve already touched on. Then there’s also the matter of scaling, which isn’t limited just to the number of units since you will also need to account for all the new applications, all the things that come about owing to merger-and-acquisition activity, all the new users who come in—while noting that this all is inherently dynamic.

MASHATAN: Let’s say the organization has made good purchase decisions and has a good zero trust roadmap. What are some of the implementation challenges it is still likely to face?

VEZINA: Most of the organizations that falter will do so because of incomplete implementations. An organization could easily select the wrong identity provider that doesn’t necessarily offer a full range of capabilities—or, more likely, falls short by integrating only 20% of the company’s applications, thus leaving all the others stuck in the old model. If that proves to be the case, it’s going to be necessary to keep the old corporate LAN around, meaning the company is going to end up with a half-and-half architecture—which can prove really challenging.

The second area where I think difficulties will arise is on the endpoint side. That is, it’s going to be necessary to have the right agent or two—or maybe even three—to provide telemetry from the endpoint up to the identity source. An investment in EDR (endpoint detection and response) capabilities will probably be required for many of the detection functions to relay some sort of health telemetry to the centralized decision-making engine. All of this is difficult.
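
In simplified form, the telemetry flow Vezina describes amounts to an endpoint agent periodically posting a health report that the central trust engine can factor into access decisions. Everything in this sketch—the URL, the report schema—is hypothetical.

```python
# Sketch of an endpoint agent reporting health telemetry to a central
# trust engine; the URL and field names are hypothetical.
import json
import urllib.request

def send_health_report(device_id: str) -> int:
    report = {
        "device_id": device_id,
        "edr_agent_running": True,
        "os_patch_level": "2025.02",
        "disk_encrypted": True,
    }
    req = urllib.request.Request(
        "https://trust-engine.example.com/telemetry",  # placeholder URL
        data=json.dumps(report).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # engine scores the device
        return resp.status
```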

There’s also a good chance most organizations don’t currently have agents on their endpoints capable of handling this, which means they’re going to need to switch out their core endpoint technology for more modern capabilities, and that in turn means all their workstations will need to be updated to the right operating-system level.

So, right there, you have two significant challenges. I would describe both as “ground-floor issues” that need to be addressed just to make decent progress even possible.

That still leaves what I consider to be unsolved problems. One of those is: How is this architecture supposed to work for privileged users? How do they fit into this picture? Typically, they do a lot of their work on the corporate network. In fact, there are some proven session-management capabilities that were implemented in the past with this in mind. How are those capabilities going to fit into this new architecture? Somebody might already be working on this problem, but I don’t have a clear sense of what that solution is going to look like.

Similarly, I don’t know what sorts of provisions are going to be made for developers—who have special requirements of their own. That is, they, too, often work on the corporate network and tend to have unusually high performance needs. There definitely are some problems that remain to be solved.


DOTEN: My concern, speaking as someone who has been quite focused on application cloud security for a long time, is that zero trust so far has been approached almost exclusively as a technical problem. The reality is that it’s just as important to understand the business requirements. IT security isn’t simply about protecting IT; it’s about protecting the business. What I think we’re lacking at this point is a governance process that links the people who run the business with those who are responsible for the technical operations such that they can work together to ensure that what comes out of these zero trust efforts actually ends up addressing the resiliency, security, and privacy requirements of the applications and data on the business side of the house.

It’s also true that engineers don’t always think about the user experience or fully appreciate the limits on customer tolerance for lag time or the number of steps in some given process. This, too, is where input from colleagues with more customer contact can prove useful. The same might be said for reducing risk since requirements checklists are no substitute for what can be learned from road-tested experience.

MASHATAN: Can you draw on your own experience to point out some of the specific implementation challenges people might encounter?

VEZINA: The first thing that comes to mind—at least for larger organizations, given their scale and the amount of data involved—is that it can be hard to obtain the investments required even to start moving in a zero trust direction. Generally, though, there already are plans in motion to make certain improvements aimed at keeping the infrastructure current. Often, at least some of that will align reasonably well with what also needs to be done to better secure the environment.

For example, maybe your network egress solutions are coming up for renewal, meaning you might also look to move all that to a SASE model. Or maybe there are certain applications that are pushing the organization to move to the cloud. As part of that, you could also look at re-architecting the authentication process so it fits better with the zero trust strategy.


This is just to say that it’s going to take some strong leadership to bring the zero trust strategy to the forefront and start lining up all the necessary investments. Frankly, I think it’s a given that some organizations will fail at this, which can happen even if all the concepts are understood and the company is motivated to move forward. None of that will matter if things can’t be implemented in a timely manner.

DOTEN: One of the biggest challenges here is also one of the oldest challenges: intra-organization politics. When it comes to implementing something as all-encompassing as a zero trust initiative for a large company, you’re talking about coordinating the efforts of a dozen teams or more. I mean, there’s the identity team, the endpoint team, the application team, the VPN team, the cloud team, and on and on.

So, collaborating, integrating, and orchestrating becomes the name of the game. Getting these different groups to cooperate and move in the same direction can be enormously challenging just because you’ve got all these different fiefdoms to account for and you always discover some number of people who simply aren’t ready to embrace change. Dealing with these sorts of issues generally proves to be a lot harder than handling the technical problems.

VEZINA: But now, if we shift to look at the other end of the spectrum, where you find all the smaller—possibly younger—firms that are less burdened with technical debt, we find a very different set of problems. Many of the issues we just discussed will not be concerns there. Instead, there might be problems executing details of the architecture. For example, the people whose job it is to assign access might simply be granting people full administrative access to the whole tenant space at a cloud provider. Or maybe they’re adding tools to their stacks that are going to start looking for certain capabilities immediately—but, while those things happen to be there, nobody ever enabled them. Or maybe they’re expecting their SASE provider to perform certain functions but neglected to turn on the necessary inspection capabilities.
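
The first failure Vezina mentions—handing someone administrative rights over the whole tenant—is the kind of thing a simple audit can surface. The role names and assignment format below are invented; they do not correspond to any particular cloud provider’s API.

```python
# Toy audit for over-broad cloud role assignments; role names and the
# record format are hypothetical, not a real provider's schema.
BROAD_ROLES = {"tenant-admin", "global-owner"}

assignments = [
    {"user": "alice", "role": "tenant-admin",   "scope": "/"},          # too broad
    {"user": "bob",   "role": "storage-reader", "scope": "/apps/crm"},  # scoped
]

def over_broad(assignments):
    """Flag broad roles granted at the tenant root."""
    return [a for a in assignments
            if a["role"] in BROAD_ROLES and a["scope"] == "/"]

for a in over_broad(assignments):
    print(f"review: {a['user']} holds {a['role']} across the whole tenant")
```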

This is only to say a lot of these execution failures plague many smaller organizations since they tend to have fewer specialists or maybe only a small set of generalists on staff. They’re just going to be more prone to missing some of these details.

MASHATAN: Any advice on what not to do?

DOTEN: Not to suggest that you shouldn’t listen to your vendors, but don’t expect complete candor. This speaks to the point Andrew just made about smaller, less mature organizations: Since they lack relevant experience, they’re going to end up relying more on their vendors. That can be a problem. The vendors are going to try to convince you that if you buy their particular platform, you’ll achieve zero trust. But it doesn’t work that way since this is more of a journey. There are maps to help you steer in the right direction, but there aren’t any measurables to let you know whether you’ve arrived or not. That’s just something you’ll need to figure out by living with your system.

You need a plan and some clear guidelines. Don’t set off on this journey without those. Still, it’s fine if the plan is a simple one. In fact, we like to say the best way to eat an elephant is one bite at a time. So, just start off with one thing. Particularly if you’re a small organization, keep your focus on that alone. Just make sure you’ve got everyone on the system identified and have MFA on everything. Then you can expand from there. But do it in steps; don’t try to do it all at once.
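
That first bite—know who is on the system and put MFA on everything—translates into a check even a small team can run against an export from its identity provider. The record layout below is an assumption for illustration.

```python
# A first "bite": enumerate users and flag anyone without MFA enrolled.
# The record layout is assumed; pull real data from your IdP's export.
users = [
    {"name": "alice", "mfa_enrolled": True},
    {"name": "bob",   "mfa_enrolled": False},
    {"name": "carol", "mfa_enrolled": True},
]

missing_mfa = [u["name"] for u in users if not u["mfa_enrolled"]]
coverage = 100 * (len(users) - len(missing_mfa)) / len(users)
print(f"MFA coverage: {coverage:.0f}%; missing MFA: {missing_mfa}")
# -> MFA coverage: 67%; missing MFA: ['bob']
```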

One more thing: Don’t set off on this journey until you have a clear set of business requirements in mind. Otherwise, the vendors will try to shape that agenda for you—and that probably won’t lead to a happy outcome.
