John Kindervag, a former analyst at Forrester Research, was the first to introduce the Zero-Trust model back in 2010. The focus then was more on the application layer. However, once I heard that Sorell Slaymaker from Techvision Research was pushing the topic at the network level, I couldn’t resist giving him a call to discuss the general principles of Zero Trust Networking (ZTN). During the conversation, he shone a light on numerous known and unknown facts about Zero Trust Networking that could prove useful to anyone.
The traditional world of networking started with static domains. The classical network model divided clients and users into two groups – trusted and untrusted. The trusted are those inside the internal network; the untrusted are external to it, such as mobile users or partner networks. To turn the untrusted into the trusted, one would typically use a virtual private network (VPN) to access the internal network.
The internal network would then be divided into a number of segments. A typical traffic flow would enter the demilitarized zone (DMZ) for inspection and from there access could be gained to internal resources. The users are granted access to the presentation layer. The presentation layer would then communicate to the application layer, which in turn would access the database layer. Eventually, this architecture exhibited a lot of north to south traffic, meaning most of the traffic would enter and leave the data center.
The birth of virtualization changed many things since it had a remarkable impact on traffic flows. There was now a large number of applications inside the data center that required cross communication. That triggered a new flow of traffic, known as east to west. The challenge for the traditional model is that it does not provide any protection for east to west traffic flows.
Traditional networks are broken up into various segments that are typically viewed as zones. It was a common practice to group similar server types into zones with no security controls to filter the internal traffic. Typically, within a given zone servers can freely talk with each other and share a common broadcast domain.
If a bad actor finds a vulnerability in one of your database servers in that zone, the bad actor can move with ease to try to compromise the other database servers. This is how the current networking and security model came into existence, and unfortunately it is still the common enterprise architecture in use today. It is outdated and insecure, yet still the most widely adopted. In this day and age, you need to be on the right side of security.
Bad actors will always hunt for the weakest link, and once the link is compromised, they move unnoticed in pursuit of higher-value target assets. Hence, not only do you need to protect north to south traffic, you also need to protect east to west traffic. To bridge the gap, we went through a number of phases.
The current best and most preferred practice to protect east to west traffic is microsegmentation. Microsegmentation is a mechanism whereby you segment the virtualized compute from the users. It further reduces the attack surface by reducing the number of devices and users on any given segment. If a bad actor gains access to one segment in the data zone, they are restricted from compromising other servers within that zone.
Let’s look at it from a different perspective. Imagine that the Internet is like our road system and all the houses and apartments are the computers and devices on the road. In this scenario, microsegmentation defines the neighborhood and the number of people living in it. Everyone in the neighborhood can navigate to your door and try to gain access to your house. Here, we have to assume that the fewer the people in the neighborhood, the less likely your house is to be robbed.
Similarly, in the case of microsegmentation, not only did we segment our applications and services, but we also started to segment the users, placing different users on different network segments. It was a step in the right direction since it controls both the north to south and east to west movement of traffic while further shrinking the size of broadcast domains.
It comes with some drawbacks as well. One of the biggest flaws is that it is IP-address-centric: it relies on VPN or NAC clients, which are not compatible with the Internet of Things, and on binary rules. The decision-making process is binary: either allow or deny. An ACL doesn’t really do that much. You can allow or deny on an IP or port number, but it is very much a static, binary process.
In fact, today’s applications need more intelligent systems, whereby additional criteria can be used alongside allow or deny. By comparison, NextGen firewalls can make more intelligent decisions. They consist of rules that, for example, allow a source and destination pair to communicate only during certain business hours and from certain network segments. They are more granular and can also register whether the user has passed the multi-factor authentication (MFA) process.
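The difference can be sketched in a few lines of Python. The rule fields below (business hours, source segment, an MFA flag) are illustrative assumptions, not the syntax of any particular firewall product:

```python
from datetime import time

# Hypothetical rule shape -- the field names are illustrative, not taken
# from any specific NextGen firewall.
RULES = [
    {
        "src_segment": "finance-users",
        "dst": ("10.0.20.15", 443),          # source/destination pair
        "hours": (time(8, 0), time(18, 0)),  # allowed business hours
        "require_mfa": True,                  # user must have passed MFA
    },
]

def allowed(src_segment, dst_ip, dst_port, now, mfa_passed):
    """Return True only if some rule matches ALL contextual criteria."""
    for rule in RULES:
        start, end = rule["hours"]
        if (rule["src_segment"] == src_segment
                and rule["dst"] == (dst_ip, dst_port)
                and start <= now <= end
                and (mfa_passed or not rule["require_mfa"])):
            return True
    return False  # a plain ACL would have stopped at the IP/port comparison

# Inside business hours with MFA: allowed.
print(allowed("finance-users", "10.0.20.15", 443, time(9, 30), True))   # True
# Same source/destination pair at 02:00 without MFA: denied,
# even though a static ACL on IP and port alone would permit it.
print(allowed("finance-users", "10.0.20.15", 443, time(2, 0), False))   # False
```

The point is that the same 5-tuple can be allowed or denied depending on context, which a static ACL cannot express.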
The session layer
Where does all the intelligent work take place? The session layer! The session layer provides the mechanism for opening, closing, and managing a session between end users and applications. Sessions are stateful and end-to-end.
It is at the session layer that state and security are controlled. The reason we have firewalls is that routers do not manage state. Middleboxes are added to manage state, and it is at this level that all your security controls exist, such as encryption, authentication, segmentation, identity management, and anomaly detection, to name a few.
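As a minimal sketch of why state matters, here is a toy connection tracker of the kind a stateful middlebox maintains and a plain router does not. The names and the two-state machine are illustrative assumptions:

```python
# Toy connection tracker: the kind of per-session state a stateful
# middlebox keeps and a stateless router does not.  All names illustrative.
sessions = {}  # (src, dst, dst_port) -> session state

def on_packet(src, dst, dst_port, syn=False):
    """Admit a packet only if it opens, or belongs to, a known session."""
    key = (src, dst, dst_port)
    if syn:                      # first packet: create session state
        sessions[key] = "ESTABLISHING"
        return True
    if key in sessions:          # later packets must match existing state
        sessions[key] = "ESTABLISHED"
        return True
    return False                 # a stateless router would forward this anyway

print(on_packet("10.1.1.5", "10.2.2.9", 443, syn=True))   # True: new session
print(on_packet("10.1.1.5", "10.2.2.9", 443))             # True: known session
print(on_packet("198.51.100.7", "10.2.2.9", 443))         # False: no state
```

A router forwards the third packet just as happily as the first two; only a device that remembers sessions can tell them apart.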
In order to have a Zero-Trust, highly secure network, the network has to become smarter; it has to become layer-5-aware to manage state and security. Since this is network specific, you should still have appropriate security controls higher up in the stack.
At some stage, instead of requiring all these “middleboxes” to be bolted on, network routers must provide these functions natively in next-generation software-defined networks (SDN), which separate the data plane from the control plane.
Today, we are witnessing a lot of attention in the SD-WAN market. However, SD-WAN uses tunnels and overlays such as IPsec and virtual extensible LAN (VXLAN) that lack end-to-end application performance and security controls.
Within an SD-WAN, you do not have many security controls. Tunnels are point-to-point, not end-to-end. All sessions go through a single tunnel, and inside the tunnel you have no security controls for that traffic.
Although progress is being made and we are moving in the right direction, it isn’t enough. We need to start thinking about the next phase – Zero Trust Networking. We need to be mindful of the fact that in a ZTN world, all the network traffic is untrusted.
Introducing Zero Trust Networking
The goal of Zero Trust Networking is to stop malicious traffic at the edge of the network before it is allowed to discover, identify and target other networked devices.
Zero-Trust in its simplest form has enhanced segmentation to a one-to-one model. It takes segmentation all the way to the absolute end points of every user, device, service, and application on the network.
Within this model, the protected elements can be either users, ‘things’, services or applications. The true definition is that no user datagram protocol (UDP) or transmission control protocol (TCP) session is allowed to be established without prior authentication and authorization.
We are doing segmentation all the way down to the endpoint. In a Zero-Trust world, the first rule is to deny all. Literally, you trust nothing and then you start to open up a whitelist, which can get as dynamic and granular as you need it to be.
My first reaction to Zero Trust Networking was that this type of one-to-one model must add some serious weight to the network, i.e. slow it down and add latency. However, that is actually not the case: you only need the ability to control the first set of packets. You only have to allow the session to be established; in the TCP world, that is the TCP SYN and SYN-ACK exchange. For the rest of the session, you can stay out of the data path.
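Putting the last few ideas together, here is a minimal sketch of that model: deny by default, consult a whitelist only on the session-establishing packet, and wave the rest of the session through. The endpoint names and data structures are illustrative assumptions:

```python
# Zero-Trust sketch: default deny, with a whitelist consulted only for
# session-establishing packets (TCP SYN).  Once a session is authorized,
# later packets need only a fast membership check.  Names are illustrative.
WHITELIST = {("camera-01", "video-server", 443)}  # authenticated/authorized pairs
authorized_sessions = set()

def admit(src, dst, port, is_syn):
    key = (src, dst, port)
    if is_syn:
        # Authentication and authorization happen once, on the first packet.
        if key in WHITELIST:
            authorized_sessions.add(key)
            return True
        return False                      # the first rule: deny all
    # Rest of the session: no further policy evaluation; in a real network
    # the enforcement point can stay out of the data path entirely.
    return key in authorized_sessions

print(admit("camera-01", "video-server", 443, is_syn=True))    # True
print(admit("camera-01", "video-server", 443, is_syn=False))   # True
print(admit("laptop-77", "video-server", 443, is_syn=True))    # False
```

Only the handshake pays the policy-lookup cost; the established session flows without added latency.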
A network manager must spend the time to truly understand the users, things, services, applications, and data on their network. In addition, the manager must gauge who has access to what. The good news is that a lot of this information already exists in IAM directories; it just needs to be mapped into the routed network.
How do you measure security?
It would be a good idea to ask yourself: how do I measure my security vulnerability? If you can’t measure it, how can you manage it? We need to be able to calculate the attack surface.
With ZTN, we now have a formula that basically calculates the network attack surface. This is one effective way of measuring network access security risks. The lower the attack surface, the more secure the network assets are.
Prior to Zero-Trust, one of the variables for the attack surface was the broadcast domain. Any end host could send out a broadcast address resolution protocol (ARP) request to see what else was on the network. This was a substantial attack surface.
The attack surface essentially defines how open the network is to attack. If you install, for example, an IoT surveillance camera, the camera should only be able to open a transport layer security (TLS) session to a selected set of servers. Under this model, the attack surface is 1. With malware spreading automatically across millions of insecure IoT devices, this is a necessity in today’s times.
The best attack surface number is obviously 1, but in poorly designed networks it can be significantly higher. For instance, consider adding an IoT surveillance camera to a warehouse LAN with 50 other connected devices, where the camera has 40 open ports, its traffic is not encrypted, and there are no directionality rules about who is allowed to initiate a session. The attack surface can then be up to 200,000 times larger. This gap is the level of exposure to risk.
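As a back-of-the-envelope sketch of how such a number could arise (the multipliers below are illustrative assumptions, not an established formula):

```python
# Back-of-the-envelope attack-surface comparison.  The penalty multiplier
# is an illustrative assumption, not a standardized metric.
def attack_surface(reachable_devices, open_ports, penalty=1):
    """penalty > 1 models missing encryption / directionality rules."""
    return reachable_devices * open_ports * penalty

# Locked-down camera: one TLS session to one selected server.
good = attack_surface(reachable_devices=1, open_ports=1)

# Camera on a flat warehouse LAN: 50 reachable peers, 40 open ports,
# and an assumed 100x penalty for no encryption and no directionality rules.
bad = attack_surface(reachable_devices=50, open_ports=40, penalty=100)

print(good, bad, bad // good)   # 1 200000 200000
```

However the factors are chosen, the value of the exercise is that the risk gap becomes a number you can compare before and after a design change.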
The perimeter is dissolving
The perimeter has dissolved; your users, things, services, applications, and data are everywhere. As the world moves to the cloud, mobile, and the IoT, the ability to control and secure everything in the network is no longer available.
Traditional security controls such as Network Access Control (NAC), firewalls, intrusion prevention, and Virtual Private Networks (VPN) are all based on the assumption that there is a secure perimeter. Once you gain access to the LAN, everything is assumed to be automatically trusted. This model also assumes that all endpoints run the same VPN or NAC client, which is difficult to enforce in this distributed digital world.
Zero-Trust claims the opposite: everything, whether inside or outside, is beyond the domain of trust. Essentially, nothing on the network is trusted. Every session that a user creates with other users or applications must be authenticated, authorized, and accounted for at the edge of the network, where the session is established.
Today, everyone can leave their house, travel to your house, and knock on your door. They might not have the keys to open the door, but they can wait for a vulnerability such as an open window.
Contrarily, ZTN says that no one is allowed to leave their house and knock on your door without proper authentication and authorization. It starts with the premise that malicious traffic should be stopped at its origin, not after it has penetrated the network and is trying to access an endpoint or application.
Defining a network security posture that denies all network access by default and then building whitelists will eventually reduce the risk of DDoS attacks, malicious software infections, and data breaches.
If a bad actor cannot even get to the “front door” of an asset, then they will not have the ability to go to the next step and try to breach it! The old days of “plug & pray” do not work in today’s era. Therefore, the networks must become intelligent enough to only allow authenticated and authorized sources. In a digital world, nothing should be trusted.
This article is published as part of the IDG Contributor Network.