Zero Trust Model

Never trust, always verify

by Tomaso Vasella
time to read: 12 minutes

Keypoints

This is what Zero Trust can do for you

  • The term Zero Trust was coined in 2010 by Forrester Research
  • The Zero Trust model addresses the fact that conventional perimeter security can no longer provide adequate protection
  • It redefines the architecture inside the organizational boundary and adopts a data-centric approach
  • Zero Trust denotes a security architecture model, not a specific technology

The term Zero Trust comes up so frequently that it can be considered one of the hype topics in information security. This article looks at the origins and principles of the Zero Trust security model, illustrates the common understanding of the term, and points out practical advantages that can be gained by adopting its principles.

The term Zero Trust was coined in 2010 by John Kindervag, an analyst working for Forrester Research at the time. Similar concepts had already existed some years earlier and were published by the Jericho Forum in the context of de-perimeterization. A few years later, Google announced that it had implemented Zero Trust security with its BeyondCorp initiative, which resulted in quickly growing interest. In addition, the National Institute of Standards and Technology (NIST) has recently published a draft of its Special Publication 800-207 on Zero Trust architecture.

Zero Trust is a security model, not a specific technology. Originally it was mainly concerned with network architecture, but it now encompasses principles that also apply to or affect other areas of information security. The Zero Trust model offers approaches that address the fact that traditional perimeter security no longer provides adequate protection against digital adversaries.

The Model

Conventional approaches based on perimeter security presume that data flows and access to data located inside a protected network zone or a defined context are trustworthy. This trust is based on the circumstance that some control instance (a perimeter or a zone boundary) must be successfully passed before entering the zone, for example by enforcing authentication. In other words, trust rests on a reliable distinction between authorized and other entities, and on an access protection mechanism that cannot be circumvented by unauthorized entities. As soon as the control instance has been passed, free movement inside the zone or context is granted. Figuratively, this can be compared to entering a locked room: only holders of the right key can open the door, but once inside, unconstrained movement is possible.

In contrast, the Zero Trust model focuses on the data: it advocates a data-centric approach and demands that the locations and flows of data, and every access to it, are clearly known at all times. The basic idea is that in any given network environment, nothing is trustworthy as long as it has not been explicitly validated against a predefined set of criteria, a view that was somewhat unfamiliar at the time. Every entity, whether a user, system or process, is validated (authentication) before any kind of action is permitted (authorization), be it a login, an automated process or a privileged activity. The Zero Trust model is therefore often paraphrased as “never trust, always verify”.
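To make this concrete, the following minimal sketch in Python illustrates the per-request evaluation: every single request is authenticated and authorized against an explicit allow list, and everything not explicitly permitted is denied. The names, the toy policy store and the placeholder checks are hypothetical simplifications, not a reference implementation.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        device: str
        resource: str
        action: str

    # Toy policy store: nothing is trusted unless explicitly allowed.
    POLICY = {
        ("alice", "managed-laptop-17", "payroll-db", "read"),
    }

    def authenticate(req: AccessRequest) -> bool:
        # Placeholder for strong identity and device verification
        # (e.g. MFA, device certificates), simplified on purpose.
        return req.device.startswith("managed-")

    def authorize(req: AccessRequest) -> bool:
        # Default deny: permit only explicitly listed combinations.
        return (req.user, req.device, req.resource, req.action) in POLICY

    def handle(req: AccessRequest) -> str:
        # "Never trust, always verify": both checks run for every request,
        # regardless of where in the network it originates.
        if not authenticate(req):
            return "denied: authentication failed"
        if not authorize(req):
            return "denied: no explicit policy match"
        return "granted"

    print(handle(AccessRequest("alice", "managed-laptop-17", "payroll-db", "read")))  # granted
    print(handle(AccessRequest("alice", "byod-phone", "payroll-db", "read")))         # denied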

Modern IT infrastructures have reached a considerable degree of complexity. In many environments the classic perimeter is increasingly dissolving or has already vanished because of the adoption of cloud computing, mobile devices and distributed applications. Furthermore, the data that must be protected is distributed over many systems and locations. As experience has shown on many occasions, classic security models do not work well enough under these conditions, which makes new approaches like this one reasonable.

The Principles

The Zero Trust model comprises several principles; the following paragraphs summarize the most important ones.

Identification of sensitive data

Adequate security measures can only be defined once it is known which data must be protected and where it is located. However, this is only feasible if, in addition to identifying the relevant data, its sensitivity is also known; data must therefore be classified. The identification and classification of data is one of the most basic elements of a successful information security strategy. Without this basis, it is not possible to define and implement security measures using a risk-based approach, and security measures may fail to achieve their intended benefits.
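As an illustration, the following sketch ties each data asset in a simple classification register to a sensitivity level and derives minimum controls from it. The levels, assets and controls are hypothetical examples, not a recommended scheme.

    # Hypothetical classification register; real schemes are defined by policy.
    DATA_REGISTER = {
        "marketing-website-content": "public",
        "employee-directory":        "internal",
        "customer-contracts":        "confidential",
        "payroll-records":           "strictly confidential",
    }

    # Minimum controls per level, again purely illustrative.
    REQUIRED_CONTROLS = {
        "public":                [],
        "internal":              ["authentication"],
        "confidential":          ["authentication", "encryption-at-rest"],
        "strictly confidential": ["authentication", "encryption-at-rest", "access-logging"],
    }

    def controls_for(asset: str) -> list[str]:
        # Unregistered assets raise a KeyError on purpose: unclassified
        # data cannot be protected in a risk-based way.
        level = DATA_REGISTER[asset]
        return REQUIRED_CONTROLS[level]

    print(controls_for("payroll-records"))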

Mapping of all sensitive data flows

In addition to identifying data, its flows must also be known in order to define and implement suitable restrictions and controls for protecting data in transit. Mapping all relevant data flows requires an up-to-date inventory of all data processing systems and applications.
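A data flow map can be as simple as a directed graph recording which data category moves between which systems. The sketch below uses hypothetical systems and categories to show how such an inventory can be queried, for example to find every path a given data category takes.

    # Hypothetical data flow inventory: (source, destination, data category).
    FLOWS = [
        ("web-shop",      "order-service",   "customer-orders"),
        ("order-service", "payment-gateway", "payment-data"),
        ("order-service", "warehouse-app",   "shipping-addresses"),
    ]

    def flows_touching(data_category: str) -> list[tuple[str, str]]:
        # Every system pair through which the given data category travels.
        return [(src, dst) for src, dst, cat in FLOWS if cat == data_category]

    print(flows_touching("payment-data"))  # [('order-service', 'payment-gateway')]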

Implementation of micro segmentation

Once data and its flows are known, it is possible to define so-called micro perimeters, which are based on the definition of logical entities that must be segregated. Micro perimeters can be considered a reduction of the classic perimeter to the level of individual systems, devices, data blocks or applications. In modern IT environments, this point of view is especially useful in the context of cloud-based applications, APIs and the Internet of Things (IoT).

Micro segmentation is not the same as Zero Trust, but it is an architectural approach that helps implement a Zero Trust model. Micro perimeters allow fine-grained control and regulation of the interactions and data flows between the segregated entities. This is achieved by strictly controlling every access and every communication while applying the principle of least privilege.

Micro perimeters can be implemented in different ways, which must be selected and combined according to the specific situation and its requirements. Examples include host-based firewalls, stringent authentication and authorization, segmentation through virtualization, and firewalling at the application and container level.
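As a simple illustration of one of these options, the following sketch derives default-deny, host-based firewall rules from a known set of permitted flows, so that each host accepts exactly the connections its micro perimeter allows and nothing else. The systems, ports and rule syntax are hypothetical.

    # Hypothetical permitted flows: everything not listed here is denied.
    ALLOWED_FLOWS = [
        # (source, destination, port)
        ("web-shop",      "order-service",   8443),
        ("order-service", "payment-gateway",  443),
    ]

    def rules_for_host(host: str) -> list[str]:
        # Emit per-host allow rules; the final rule is the micro
        # perimeter's default stance: deny everything else.
        rules = [f"allow in from {src} port {port}"
                 for src, dst, port in ALLOWED_FLOWS if dst == host]
        rules.append("deny in from any")
        return rules

    for rule in rules_for_host("order-service"):
        print(rule)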

Usage of automation and orchestration

A side effect of introducing micro perimeters is increased complexity due to the growing number of rules and controls that must be managed. With isolated solutions such as individual firewalls or security components with decentralized management, the limits of what can be successfully operated are quickly reached. Solutions are therefore required that allow the comprehensive definition of a desired security state and are then able to enforce this state on a technical level. The desired security state is specified on an abstract, non-technical level, for example: a user may access a particular application with a particular device from a particular location. Subsequently, this abstract definition must be translated into technical rules such as user and device authentication, location control, and port and protocol rules (automation), and this must work across all involved security components (orchestration).
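The following sketch illustrates this idea: one abstract policy statement is compiled into technical rule sets for several enforcement points (automation) and then pushed out consistently to all of them (orchestration). The component names, rule formats and the policy itself are hypothetical simplifications.

    # One abstract, non-technical policy statement (hypothetical).
    POLICY = {
        "user": "alice",
        "application": "hr-portal",
        "device": "managed-laptop",
        "location": "headquarters",
    }

    def compile_policy(policy: dict) -> dict[str, list[str]]:
        # Automation: translate the abstract statement into per-component rules.
        return {
            "identity-provider": [f"require mfa for user {policy['user']}"],
            "device-manager":    [f"require compliant device class {policy['device']}"],
            "network-gateway":   [f"allow {policy['location']} -> {policy['application']} tcp/443"],
        }

    def orchestrate(rule_sets: dict[str, list[str]]) -> None:
        # Orchestration: stand-in for pushing the rules to every
        # involved security component consistently.
        for component, rules in rule_sets.items():
            for rule in rules:
                print(f"[{component}] {rule}")

    orchestrate(compile_policy(POLICY))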

Continuous security monitoring

Continuous inspection and monitoring of security-relevant events and data flows is also necessary in Zero Trust environments. Compared to classic environments, this can even be easier in micro-segmented environments, since security gateways designed for them typically have the capability to inspect all network traffic flowing through them.
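As a minimal illustration, the sketch below checks the flows observed at a micro perimeter against the allowed set and raises an alert for anything unexpected; the flow records and the allowed set are hypothetical.

    # Hypothetical allowed flows and observed traffic at a micro perimeter.
    ALLOWED = {("web-shop", "order-service"), ("order-service", "payment-gateway")}

    observed_flows = [
        ("web-shop", "order-service"),
        ("order-service", "payroll-db"),  # not in the allowed set -> alert
    ]

    for src, dst in observed_flows:
        if (src, dst) not in ALLOWED:
            print(f"ALERT: unexpected flow {src} -> {dst}")
        else:
            print(f"ok: {src} -> {dst}")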

Benefits and practical application

The increase in media-covered data leaks and security incidents is often cited as evidence for the failure of classic perimeter security and thus for the need for Zero Trust models. Although this may be putting it somewhat too simply, introducing a Zero Trust approach can bring real benefits and limit the harmful effects of certain scenarios.

Inventory

The fundamental importance of a clear inventory and classification of data must be pointed out again. Without it, defining an efficient, effective and risk-based security strategy is not possible. This is highlighted by the fact that several standards and regulations explicitly require an inventory. For example, the European General Data Protection Regulation (GDPR) requires, among other things, that the collection, storage and processing of relevant personal data be documented. The credit card industry's security standard PCI DSS also requires the documentation of certain data flows. In other words, inventorying and classification must be done anyway, independently of any Zero Trust plans.

Insider threats and lateral movement

In a cyberattack, the attacker's entry point into a network is usually not the location of the data the attacker is interested in. This means that after gaining initial access, an attacker must move within the organization or its network to reach the target. This is referred to as lateral movement and applies analogously to insider attacks. Preventing lateral movement as far as possible is important because it allows an attacker to be detected and stopped before they can access or exfiltrate sensitive data. Micro segmentation and Zero Trust can help achieve this goal by stopping or impeding unhindered lateral movement. In addition, if the continuous monitoring of security-relevant events is implemented as suggested by the model, access to sensitive data can be recorded, which provides a better chance of detecting and blocking attackers in time.

IoT, cloud services, APIs and distributed applications

At a time when the amount of data, its value and its logical and physical distribution are steadily increasing, it makes sense to focus on the data and its protection. Applying the principle of granting only the minimally required rights, i.e. restricting access rights as far as possible for each action, is also useful in this context and corresponds to a basic principle of information security. If data, devices, applications and so on are not located in one logical, uniform place, it makes sense to define an individual protection layer for each of these entities. Particularly in the context of the Internet of Things (IoT), micro segmentation can increase security by individually and reliably authenticating each device and each exposed API.
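To illustrate per-device authentication, the following sketch signs every API request with a key unique to the device and verifies it on the receiving side, so that unknown devices or wrong keys are rejected outright. It uses Python's standard hmac module; the device IDs, keys and message format are hypothetical.

    import hmac
    import hashlib

    # Hypothetical per-device keys: each device authenticates individually.
    DEVICE_KEYS = {
        "sensor-001": b"per-device-secret-1",
        "sensor-002": b"per-device-secret-2",
    }

    def sign(device_id: str, payload: bytes) -> str:
        # The device signs every request with its own key.
        return hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).hexdigest()

    def verify(device_id: str, payload: bytes, signature: str) -> bool:
        # The API checks every request; unknown devices are rejected outright.
        key = DEVICE_KEYS.get(device_id)
        if key is None:
            return False
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    msg = b'{"temperature": 21.5}'
    sig = sign("sensor-001", msg)
    print(verify("sensor-001", msg, sig))  # True
    print(verify("sensor-002", msg, sig))  # False: wrong device key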

Challenges and criticism

In addition to the advantages mentioned, legitimate criticism of the Zero Trust model is repeatedly voiced. The main counterarguments are summarized in the following sections.

Legacy applications and systems

An organization that develops and operates its own applications runs the risk of relying on legacy technologies, especially if those applications are already a few years old. Redesigning and redeveloping such applications can entail substantial effort and expense that can only be justified by important business reasons. Retrofitting existing applications and older systems with the security functionality needed for integration into a Zero Trust model (this usually concerns mechanisms for automation, role definition and authentication) is therefore potentially not feasible with reasonable effort.

Missing security basics

The successful implementation of a Zero Trust model requires a certain maturity level regarding information security. For example, in environments where there is no inventory of data and systems, where shared accounts are used, or where privileged access management has not been implemented, Zero Trust models cannot be successfully deployed. Another important point concerns vulnerability and patch management, although it is not usually mentioned in the context of Zero Trust: it is futile to strictly authenticate users and control traffic and data flows if an attacker can simply exploit a known vulnerability.

Theory and practice

The model as such is relatively straightforward and makes sense. However, a closer look at the concrete technical implementation options reveals various practical challenges. Digital transformation projects such as cloud applications or IoT often require additional technologies to achieve the desired segmentation and control. This can significantly increase the cost of such projects and negatively affect the operability of the resulting solution. Although the functionality of modern cloud services can support Zero Trust models, their actual usefulness depends on the type of cloud usage: newly developed applications can leverage it best, while systems migrated to the cloud unchanged are unlikely to benefit. The successful end-to-end implementation of Zero Trust models can almost only be achieved with an architecture designed accordingly from the beginning, which is rarely possible in existing environments and usually only an option for new projects.

Bottom line

Zero Trust is a general approach based on micro segmentation and fine-grained access controls that focuses on the protection of data. This is a welcome approach that corresponds to the proven principle of not trusting blindly and making intentional, conscious decisions whenever possible. However, Zero Trust should be seen as an attitude or point of view rather than a rigid principle to be enforced in every detail. Implementing a Zero Trust model requires a prolonged transition process; a Zero Trust architecture can typically not be implemented without extensive technology adjustments. However, many environments already have elements of a Zero Trust model in their IT infrastructure that can be leveraged in an incremental adaptation and transition process.

As is often the case in information security, it is important to find a sensible balance between the stringency of the implementation, the associated security benefits, and the effects on operability and usability. Experience has shown that one is well advised to proceed as follows: understand the principle, identify useful elements and approaches, adapt them to one's own requirements and situation, and combine them appropriately with other measures and controls.

About the Author

Tomaso Vasella

Tomaso Vasella holds a Master's degree in Organic Chemistry from ETH Zürich. He has been working in the cybersecurity field since 1999, in roles including consultant, engineer, auditor and business developer. (ORCID 0000-0002-0216-1268)
