This is why hardening is not easy
To harden Linux systems, we use our own checklists that are based on CIS Benchmarks. There is a distribution-independent benchmark as well as specific ones for popular Linux distributions such as Red Hat, Fedora, CentOS, Debian, Ubuntu and SUSE Linux. The benchmarks are also useful for defining your own hardening requirements.
To audit a hardened Linux system, we use the tool Lynis. Not only does Lynis check the operating system configuration, it also inspects installed services such as SSH, Apache or Samba. The Lynis scan report lists the results and gives suggestions for improvements and countermeasures for each finding.
In his article entitled systemd service sandboxing and security hardening 101, Daniel Aleksandersen introduces systemd's built-in audit tool, which can be used to check whether systemd services are configured securely and make use of the available sandboxing directives. This check should be carried out particularly for services that are exposed to the internet or that process user input. To give a few examples, these directives include:
PrivateTmp: gives the service its own private temporary directory; no access to the global /tmp
ProtectHome: restricts the service's access to home directories
ProtectSystem: mounts the /boot, /etc and /usr directories read-only for the service
An audit of the systemd services can be started with systemd-analyze security. Specifying a service name allows that service's directives to be checked in detail. The analysis tool is one of the first command line programs I know of that uses emojis:
[user@host ~] systemd-analyze security
UNIT                      EXPOSURE PREDICATE HAPPY
NetworkManager.service         7.7 EXPOSED   🙁
auditd.service                 8.7 EXPOSED   🙁
crond.service                  9.6 UNSAFE    😨
dbus-broker.service            8.6 EXPOSED   🙁
...
sshd.service                   9.6 UNSAFE    😨
...
email@example.com              3.4 OK        🙂
systemd-initctl.service        9.5 UNSAFE    😨
systemd-journald.service       4.4 OK        🙂
systemd-logind.service         3.0 OK        🙂
Using these directives is a simple way of improving system security. However, a thorough test examining how the service behaves with the newly set directives should be performed beforehand.
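As an illustration, the directives mentioned above could be enabled through a drop-in file; the service name myservice below is a placeholder, not one of our actual services:

```ini
# /etc/systemd/system/myservice.service.d/hardening.conf
# created e.g. with: systemctl edit myservice.service
[Service]
# private /tmp instead of the shared one
PrivateTmp=yes
# no access to /home, /root and /run/user
ProtectHome=yes
# /usr, /boot and /etc become read-only for the service
ProtectSystem=full
```

After a daemon-reload and a service restart, systemd-analyze security myservice.service should report a lower exposure score, and the behavioral test mentioned above can be carried out with the directives in place.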
Using three examples, we demonstrate how security concepts can be implemented in practice and what should be taken into account when implementing them.
Administrative interfaces such as SSH should not be accessed directly from the internet or from untrusted networks such as a client network. Administrators should also use two user accounts: one for administration and one for daily work. Ideally, multi-factor authentication should be used too.
In our infrastructure, we use a jump host to gain access to administrative services. Each administrator logs into the jump host using SSH and public key authentication. The infrastructure's systems are only accessible from this jump host using SSH. On the jump host, each administrator has a second SSH key pair that is used for authentication against the infrastructure systems. All SSH key pairs must be secured with a passphrase. Due to the size of the infrastructure, we do not use central user management; there are local users on each system. As the infrastructure grows, however, it becomes absolutely vital to introduce central user management, as the effort of maintaining local users otherwise grows out of hand.
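A setup like this can be expressed in the administrator's ~/.ssh/config. The host names, user name and key path below are assumptions for illustration, not our actual configuration:

```
# Jump host, reached directly with the first key pair
Host jump
    HostName jump.example.com
    User admin
    IdentityFile ~/.ssh/id_ed25519_jump

# Infrastructure systems are only reachable through the jump host;
# the second key pair lives on the jump host itself, so it is
# configured there rather than on the administrator's workstation
Host *.internal.example.com
    ProxyJump jump
    User admin
```

With this in place, ssh host1.internal.example.com transparently tunnels through the jump host while keeping the two key pairs separated.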
The jump host is directly accessible from our internal, trusted network. From the internet, a port knocking procedure is required to enable SSH access. Several port knocking solutions are described in the Arch Linux wiki. Since iptables is used as a firewall on our systems, we opted for a pure iptables solution:
#
# Port Knocking for SSH
# Sequence: 12345/tcp, 54321/tcp, 4242/tcp
# For each of the ports to knock, one rule checks for the correct port in sequence.
# If the sequence is met, a jump occurs to where the IP address is added to the list for the next knock in sequence.
#
# Opens the SSH port for 30 seconds, if the connecting IP address is on the list SSH2
-A INPUT -i eth1 -m state --state NEW -m tcp -p tcp -d 192.0.2.1 --dport 20022 -m recent --rcheck --seconds 30 --name SSH2 -j ACCEPT
-A INPUT -i eth1 -m state --state NEW -m tcp -p tcp -m recent --name SSH2 --remove -j DROP
# Third Knock
-A INPUT -i eth1 -m state --state NEW -m tcp -p tcp --dport 4242 -m recent --rcheck --name SSH1 -j SSH-INPUTTWO
-A INPUT -i eth1 -m state --state NEW -m tcp -p tcp -m recent --name SSH1 --remove -j DROP
# Second Knock
-A INPUT -i eth1 -m state --state NEW -m tcp -p tcp --dport 54321 -m recent --rcheck --name SSH0 -j SSH-INPUT
-A INPUT -i eth1 -m state --state NEW -m tcp -p tcp -m recent --name SSH0 --remove -j DROP
# First Knock
-A INPUT -i eth1 -m state --state NEW -m tcp -p tcp --dport 12345 -m recent --name SSH0 --set -j DROP
-A SSH-INPUT -m recent --name SSH1 --set -j DROP
-A SSH-INPUTTWO -m recent --name SSH2 --set -j DROP
Once the three randomly selected ports have been contacted in the correct order, the SSH daemon's port is opened for the requesting IP address for 30 seconds. Afterwards, public key authentication is still required. Thanks to port knocking, the SSH daemon is not directly reachable from the internet and is thus not exposed to automated scans and brute force attacks.
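The knocking itself can be done with any tool that sends a TCP SYN to each port in order. A minimal client sketch, assuming the port sequence and SSH port from the rules above (the same effect can be achieved with nc or nmap from a shell):

```python
import socket

KNOCK_SEQUENCE = [12345, 54321, 4242]  # must match the iptables rules

def knock(host, ports=KNOCK_SEQUENCE, timeout=1.0):
    """Send one TCP connection attempt per knock port, in order.

    The firewall DROPs every knock, so each attempt is expected to
    time out or be refused; only the initial SYN matters for the
    'recent' match. Afterwards, the SSH port stays open for 30 s.
    """
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect((host, port))
            except OSError:
                pass  # no reply is the normal case for a DROP rule

# usage (192.0.2.1 is the example address from the rules above):
# knock("192.0.2.1")
# then, within 30 seconds: ssh -p 20022 admin@192.0.2.1
```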
In the Zero Trust Model article, Tomaso Vasella writes that the classic approaches based on perimeter security are outdated and no longer sufficient. If attackers succeed in overcoming the perimeter protection, they can often move laterally from one system to the next within zones. We recommend using host firewalls on client and server systems to increase individual systems' resilience against lateral movement attacks. A restrictively configured host firewall also provides protection when a vulnerability is exploited, preventing additional malware from being downloaded from external sources after exploitation.
In our infrastructure, we use iptables as the host firewall. We have created a set of template rules that applies to all systems. By default, incoming and outgoing connections are denied. All outgoing connections are restricted by source address, destination address(es) and the protocols to be used. The input/output rules are maintained on both the host firewall and the perimeter firewall. Before a new service or application is introduced, it must be clarified which protocols are used to access which destinations; the required rules can then be defined. In practice, this unfortunately doesn't always work as easily as one might imagine.
Our monitoring system, which is based on what is known as the ELK stack, is one example of this. ELK stands for the components Elasticsearch, Logstash and Kibana. An RPM repository is available for installing the components. Defining the firewall rules for this repo was more difficult than expected and started with the analysis: the domain name of the repository artifacts.elastic.co is an alias (CNAME) for dualstack.elastic.map.fastly.net. Fastly is a cloud service provider and assigns dynamic IP addresses as a service. Including a single IP address from the first DNS query in the rule set failed because a different IP address was already resolved between the first DNS lookup and the attempt to update the repo. The next workaround, collecting a list of IP addresses with a global DNS tool like dnschecker.org, also failed. However, Fastly provides a support document and an API for generating a list of its IP address ranges. The rule below was therefore added to the ELK system to allow access to Fastly's address ranges. A rule allowing the ELK system to access these ranges was also added to the perimeter firewall.
#
# General Settings
#
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
...
#
# Outgoing Connections
#
# Elasticsearch Repo - artifacts.elastic.co - dualstack.elastic.map.fastly.net
# List of Fastly's assigned IP ranges: https://api.fastly.com/public-ip-list - Last update 23.01.2020
-A OUTPUT -m state --state NEW -m tcp -p tcp -m multiport --dports 80,443 -d 184.108.40.206/20,220.127.116.11/22,18.104.22.168/24,22.214.171.124/23,126.96.36.199/24,188.8.131.52/20,184.108.40.206/16,220.127.116.11/18,18.104.22.168/17,22.214.171.124/20,126.96.36.199/20,188.8.131.52/20,184.108.40.206/18,220.127.116.11/22,18.104.22.168/21,22.214.171.124/16 -j ACCEPT
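Since Fastly's assigned ranges change over time, the address list in such a rule goes stale. A small script along these lines can regenerate it from the documented public-ip-list endpoint; this is a sketch, not the tooling we actually run:

```python
import json
import urllib.request

FASTLY_IP_LIST = "https://api.fastly.com/public-ip-list"

def build_rule(ranges):
    """Render the iptables OUTPUT rule for the given CIDR ranges."""
    return ("-A OUTPUT -m state --state NEW -m tcp -p tcp "
            "-m multiport --dports 80,443 -d %s -j ACCEPT" % ",".join(ranges))

def fetch_ranges(url=FASTLY_IP_LIST):
    """Fetch Fastly's currently assigned IPv4 ranges from the public API."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["addresses"]

# usage: print(build_rule(fetch_ranges()))
```

Regenerating the rule periodically (and diffing it against the deployed one) keeps the host and perimeter firewalls in sync with the provider.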
When firewall rules are created for cloud service providers, a compromise has to be made between security and operations: the destination cannot be restricted to a single address. This creates the risk that attackers using the same provider could also reach these systems. However, because the perimeter firewall restricts access to individual systems, this risk can be reduced.
But if attackers succeed in compromising a system, the lateral movement is restricted by host firewalls, since this system can only communicate with other systems in the network and in the same zone. We believe that this level of protection justifies the effort required to maintain incoming and outgoing connections.
All communication between systems should be encrypted. Many services offer encryption using Transport Layer Security (TLS), for example. Certificates from a trusted certification authority (CA) should be used in this regard.
In our infrastructure, the systems’ logs are sent to a central rsyslog server. Transmission of the logs is encrypted using TLS. A TCP listener must be set up on the server for this purpose:
# load TCP listener
module(load="imtcp"
       StreamDriver.Mode="1"
       StreamDriver.AuthMode="x509/name"
       PermittedPeer="*.example.com")
input(type="imtcp" address="192.0.2.6" Port="10514")

# make gtls driver the default
$DefaultNetstreamDriver gtls

# TLS config - CA Honest Achmed's Used Cars and Certificates
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/certs/example_ca_cert.pem
$DefaultNetstreamDriverCertFile /etc/rsyslog.d/certs/log.example.com.pem
$DefaultNetstreamDriverKeyFile /etc/rsyslog.d/certs/log.example.com.key.pem
The certificate used is stored on the client and rsyslog is configured to transmit the logs over an encrypted connection:
# make gtls driver the default
$DefaultNetstreamDriver gtls

# TLS config - CA Honest Achmed's Used Cars and Certificates
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/certs/example_ca_cert.pem
$DefaultNetstreamDriverCertFile /etc/rsyslog.d/certs/host1.example.com.pem
$DefaultNetstreamDriverKeyFile /etc/rsyslog.d/certs/host1.example.com.key.pem
...
# Action
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer log.example.com
$ActionSendStreamDriverMode 1 # run driver in TLS-only mode

# Send everything to log.example.com
# @ for UDP, @@ for TCP
*.* @@log.example.com:10514
The encryption of communications should also be examined and, if possible, implemented for all other services within the infrastructure.
The three examples and further experiences with updating the infrastructure showed that hardening measures cannot simply be implemented "at the push of a button". We are also aware of this when we formulate them as recommendations and countermeasures for findings. Ideally, security measures are included as early as the concept and planning phase when new services and applications are introduced. This also enables the design of accompanying processes that make administration easier, such as a form for ordering firewall rules that contains a communication matrix. In updating our infrastructure, we went the extra mile instead of falling back on default configurations and abandoning defense-in-depth concepts. We firmly believe that the additional effort is worthwhile, and we will continue to implement our own hardening measures in productive environments.