How webmasters can interfere with your internal services
Several years ago, an online shift took place: security professionals mandated that all websites were to be accessed exclusively via encrypted HTTPS. The religious fervor for achieving this goal obscured the sacrifices in terms of complexity, performance and caching. And yet it was precisely this push that has made the web a more secure place – at least to some extent.
For compatibility reasons, many sites still support unencrypted access via HTTP. After all, there are still people and companies using outdated browsers that cannot handle strict SSL/TLS requirements – and free internet providers that forgo mandatory encryption so as not to exclude such users, especially when privacy is not a main priority of the provider's own services.
Modern web browsers nudge users toward safer habits, for example by remembering whether a website should be accessed via HTTPS instead of HTTP; the autocomplete function in the address bar will then always suggest the HTTPS variant. The risk – at least when visiting a site for the first time – is that sensitive login or session information is sent in plain text if the site is initially requested via HTTP. It is then immaterial whether HTTPS is used afterwards or enforced by a redirect in a subsequent step: the unencrypted request has already gone out.
Webmasters can help to avoid this scenario with HSTS. Including the Strict-Transport-Security header in an HTTP response will cause the browser to behave differently:
Strict-Transport-Security: max-age=31536000
In this example, the web browser is told that for the next 31,536,000 seconds (365 days) the website should always be accessed via HTTPS first.
Incidentally, this header is only honored when it is received via HTTPS; a browser will ignore it in a plain HTTP response. This ensures that the policy is only enabled for users who have already connected via HTTPS at least once. And even if this has occurred, the user can manually switch back to HTTP once the stored policy has expired or been cleared – assuming the web server still allows this, of course. This strikes a reasonable balance between security and flexibility.
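The rule that the header only counts over HTTPS can be sketched in Python. This is a toy model of the browser-side processing, not a real browser API; the function name and the simplified directive parsing are mine:

```python
# Toy model of HSTS header processing (per RFC 6797): a policy is stored
# only when the Strict-Transport-Security header arrived over HTTPS.
# The simplified parsing and the `store` dict are illustrative only.

def process_hsts_header(scheme: str, headers: dict, store: dict, host: str) -> None:
    """Record an HSTS policy for `host`, but only for HTTPS responses."""
    value = headers.get("Strict-Transport-Security")
    if scheme != "https" or value is None:
        return  # a header received over plain HTTP is silently ignored
    for directive in value.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            store[host] = int(directive.split("=", 1)[1])

store = {}
hdrs = {"Strict-Transport-Security": "max-age=31536000"}
process_hsts_header("http", hdrs, store, "example.com")   # ignored: plain HTTP
process_hsts_header("https", hdrs, store, "example.com")  # stored for one year
```

After the second call, example.com is pinned to HTTPS for 31,536,000 seconds; the first call changes nothing.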
The HSTS header can be extended to increase the level of security. The following example uses the includeSubDomains directive to extend the policy to all subdomains of the web server. If www.example.com returns this header, the policy will also apply to *.www.example.com from then on:
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
The includeSubDomains directive is required so that the preload directive can be used as well. HSTS without preload requires that the user has already accessed the website via HTTPS at least once, intentionally or not; otherwise, the policy cannot be set in the browser. The preload directive skips this step by making it possible to add a site to the HSTS preload list shipped with supporting browsers. Anyone who has configured their website as shown above can register it at hstspreload.org. After verification by Google, the list is distributed to the makers of common web browsers, who in turn ship it to their installations as part of a software update. The list is then stored locally on the client systems.
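The three directives discussed above can be pulled apart with a small parser. A minimal sketch, with the dictionary keys chosen by me:

```python
# Parse a Strict-Transport-Security header value into its three directives.
# Key names in the returned dict are illustrative, not from any standard API.

def parse_hsts(value: str) -> dict:
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for part in value.split(";"):
        part = part.strip().lower()
        if part.startswith("max-age="):
            policy["max_age"] = int(part.split("=", 1)[1])
        elif part == "includesubdomains":
            policy["include_subdomains"] = True
        elif part == "preload":
            policy["preload"] = True
    return policy

print(parse_hsts("max-age=31536000; includeSubDomains; preload"))
```

For the header shown above, all three fields come back set: a one-year max-age, subdomains included, and preload requested.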
Now whenever the user tries to access a website on the HSTS preload list, the browser will always access it via HTTPS automatically. Even if the user enters http://example.com, the site will never be requested via http://; the browser switches to https:// before any network access takes place. This is basically a good thing.
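This scheme upgrade before network access can be modeled in a few lines. The PRELOADED set is a toy stand-in for the browser's built-in list:

```python
# Model of the preload upgrade: rewrite http:// URLs to https:// for
# preloaded hosts before any request is made. PRELOADED is a toy stand-in
# for the real built-in browser list.
from urllib.parse import urlsplit, urlunsplit

PRELOADED = {"example.com"}

def upgrade_url(url: str) -> str:
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in PRELOADED:
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(upgrade_url("http://example.com/login"))  # upgraded before any request
print(upgrade_url("http://other.test/"))        # not on the list: untouched
```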
HSTS preload has explosive potential: A webmaster, web server admin or proxy admin is normally able to control HTTP headers. Each can generate a functional preload header independently of the others, i.e. setting the expiry, including subdomains and defining the preload directive.
And each of them can be submitted individually to the HSTS preload list. The problem here is that the directive for including subdomains can, of course, affect services and servers that are not covered by the assigned privileges. By configuring https://example.com accordingly, it would also be possible to control webmail.example.com, intranet.example.com or root.directory.srv2.example.com.
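The reach of includeSubDomains is easy to underestimate: it matches any depth of subdomain, not just the next label down. A small check, with the function name chosen by me:

```python
# Does an HSTS policy set on `policy_host` cover `host`?
# With include_subdomains, every label beneath the policy host matches,
# however deeply nested. Function name is illustrative.

def covered_by(host: str, policy_host: str, include_subdomains: bool) -> bool:
    host, policy_host = host.lower(), policy_host.lower()
    if host == policy_host:
        return True
    return include_subdomains and host.endswith("." + policy_host)

# A preload entry for example.com with includeSubDomains catches all of these:
for h in ("webmail.example.com", "intranet.example.com",
          "root.directory.srv2.example.com"):
    assert covered_by(h, "example.com", True)
```

Without the directive, only example.com itself would be affected; with it, every internal host under the domain is forced to HTTPS as well.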
It currently takes about two weeks for a valid submission to be added to the HSTS preload list. About a week later, it is then distributed by the first browser makers. From then on, the affected users are forced to access the specified resources via HTTPS alone.
This becomes problematic, for example, when HTTPS is not offered in the first place or when using an invalid certificate (e.g. expired or self-signed) – something that is happening more and more frequently on internal networks.
In this case, users have very few options if they still want to access the service, because the browser will prevent the switch to HTTP or reject invalid certificates. Some browsers allow entries in the local HSTS preload list to be deleted or modified manually. In Google Chrome this works fine via
chrome://net-internals/#hsts. Mozilla Firefox apparently allows its list to be modified as well; during our tests, however, we could not find a working approach. The only remaining option was to switch to another browser.
If internal services have accidentally found their way onto the HSTS preload list, you can submit a removal request at hstspreload.org. The site states that the process can take 6 to 12 weeks. After that, one can only hope that the updated list is distributed by the browser makers shortly thereafter. Web browsers that are not updated over the Internet, however, may never receive the new list.
In my opinion, HSTS preload has a flawed design. It fundamentally violates a classic principle of information security: the separation of duties. Why should someone who is responsible for a public resource, and who can legitimize that responsibility publicly only to a limited extent, have any control over internal resources?
By design, there is no higher-level instance that can prevent an HSTS preload submission. It is therefore essential to question who is given control over HTTP headers. It is not uncommon for webmaster tasks to be outsourced, which harbors the additional risk that someone at the external company will create a problematic situation, intentionally or unintentionally.
While it has fallen out of fashion a bit in some places, one way to avoid this problem is to set up a separate domain – at least for internal services. For instance, example.com could be used for public services, whereas example-internal.com would be used only on the internal network (the domain has to be acquired properly). Setting up this sort of configuration after the fact, however, can become a Sisyphean task.
If one were suddenly to be confronted with an attack like this and unable to wait for the rollout of the removal, one possible temporary solution might be to try establishing HTTPS with valid certificates. In many cases, this is possible, but it does take some work. At any rate, once it is done, it offers a long-term security improvement. This is because, strictly speaking, HTTP and invalid certificates should never be used on internal networks in the first place.
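Before rolling out certificates, it helps to know which internal services currently fail. A minimal stdlib sketch of such a check: the host names you would pass in are your own, the status strings are mine, and the result of course depends on the live network:

```python
# Classify an internal service's TLS posture: 'ok' (valid certificate),
# 'bad-cert' (e.g. expired or self-signed), or 'no-https' (no TLS listener).
# Status strings are illustrative; results depend on the live network.
import socket
import ssl

def https_status(host: str, port: int = 443, timeout: float = 3.0) -> str:
    ctx = ssl.create_default_context()  # verifies chain and hostname
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return "ok"
    except ssl.SSLError:      # must precede OSError: SSLError is a subclass
        return "bad-cert"
    except OSError:
        return "no-https"

# e.g. https_status("intranet.example.com") on your own network
```

Running this over a list of internal host names shows at a glance which services would break once an HSTS preload entry forces them to HTTPS.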