What to look for in a CRS setup review
ModSecurity integrates with the web server (it originated as an Apache module, hence the name) and allows you to filter and inspect the traffic passing through your server to any of the applications behind it. This lets you stop attacks on those applications by blocking requests containing malicious inputs, improve your application logs by blocking automated probes and similar noise, and generate logs and warnings of its own that you can make use of. Some of the Core Rule Set's rules are even dedicated to doing exactly that!
By default, ModSecurity does nothing on its own; it only gives you the ability to do things with it. The original developer of ModSecurity, Ivan Ristić, explained that this is because he didn't trust tools to make decisions for him. So how do you make ModSecurity actually do something?
By defining rules. An example rule could be:

SecRule ARGS "<script>" log,deny,status:404

which looks for <script> in ARGS (all request parameters) and then logs the request, denies it, and returns a 404. The more general formulation of a rule is:

SecRule VARIABLES OPERATOR ACTIONS

VARIABLES tells ModSecurity where to look, the OPERATOR defines how to look (typically a regular expression), and the ACTIONS specify what to do about a match.
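In practice, a rule needs a bit more than the minimal example above: ModSecurity 2.7 and later requires every rule to carry a unique id, and a processing phase should be stated explicitly. A sketch of what a more complete version might look like (the id 10001, the phase, and the msg text are illustrative choices, not CRS conventions):

```apache
# Sketch of a fuller stand-alone rule; id 10001 and the msg text
# are illustrative. @rx is the (default) regular-expression operator,
# phase:2 runs after the request body has been read.
SecRule ARGS "@rx <script>" \
    "id:10001,\
    phase:2,\
    log,\
    deny,\
    status:404,\
    msg:'XSS probe in request parameters'"
```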
Ideally, of course, everyone would write rules specific to their use case and their application, thereby minimizing false positives and maximizing the protection gained from ModSecurity. Unfortunately, in the real world few people have the time to do so, which is where pre-made rule sets come in. OWASP's CRS is a set of some 200 rules that detect various issues in a generic way. The CRS does not know anything about your application, but it encodes a lot of know-how about how to detect an attack. With that, the CRS can cover a lot of different attacks against a wide variety of applications. It includes multiple so-called paranoia levels, starting by default on the lowest, which progressively enable additional and more aggressive rules. Higher paranoia levels are also more likely to produce false positives.
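In CRS 3.x the paranoia level is configured in crs-setup.conf by uncommenting and adjusting a SecAction; a sketch, assuming a CRS 3.x deployment:

```apache
# From crs-setup.conf (CRS 3.x): sets the paranoia level via a
# transaction variable. 1 is the default; raising it enables
# additional, more aggressive rules.
SecAction \
    "id:900000,\
    phase:1,\
    nolog,\
    pass,\
    t:none,\
    setvar:tx.paranoia_level=2"
```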
Importantly, as of CRS 3.x, it no longer defaults to the traditional pass/fail methodology but uses anomaly scoring. The CRS rules no longer block any request themselves; instead they use the setvar action to build up a score for each request they process. At the end, the rules in the 949 and 959 blocks (for requests and responses, respectively) evaluate that score and handle the request if it exceeds a certain threshold. Generally, handling means blocking and logging, but this too is configurable. By default, the threshold and the scores awarded by rules are set up to behave much like the traditional method: a single "critical" rule contributes enough points to trigger blocking. However, the threshold can also be raised for a new site, so that it already benefits from blocking without having to sit in some sort of monitoring-only mode for who knows how long.
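The thresholds themselves also live in crs-setup.conf; a sketch of the relevant SecAction, assuming CRS 3.x with its default values (a single critical rule scores 5 points, so it alone reaches the inbound threshold):

```apache
# From crs-setup.conf (CRS 3.x): the default anomaly score
# thresholds. Raising tx.inbound_anomaly_score_threshold makes
# blocking more lenient for a freshly onboarded application.
SecAction \
    "id:900110,\
    phase:1,\
    nolog,\
    pass,\
    t:none,\
    setvar:tx.inbound_anomaly_score_threshold=5,\
    setvar:tx.outbound_anomaly_score_threshold=4"
```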
In combination with the paranoia levels, this allows for convenient fine-tuning to the needs of the application without being overly complex. Christian Folini can be a great resource for anyone who wants to learn more about this side of the CRS.
In a review of a CRS deployment's configuration, the first thing to look at is whether it uses traditional detection or anomaly scoring and, if the latter, how the thresholds are configured. If higher thresholds are set, there should be some process or appointed date for tightening them. Secondly, take a look at the configured paranoia level and whether it matches the application under review, with higher risks demanding higher levels.
The big thing to look for is whitelisting. Which rules are deactivated, and why? If rules from the IP-Reputation block (910XXX) or the Scanner-Detection block (913XXX) have been deactivated, that is of less concern than if rules from the SQLi (942XXX) or Local-File-Inclusion (930XXX) blocks are being deactivated. Netnea has a great list, courtesy of Christian Folini, of all the rules in the Core Rule Set. Using this list and the actual rule files, the deactivated rules can be evaluated and their impact assessed properly.
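Deactivations typically appear in the configuration as directives like the following; a sketch using illustrative CRS 3.x rule ids (942100 is the libinjection-based SQLi check):

```apache
# Remove a single rule entirely (942100 used as an example):
SecRuleRemoveById 942100

# Remove a whole class of rules by their CRS tag:
SecRuleRemoveByTag "attack-sqli"
```

When reviewing, the by-tag variant deserves extra scrutiny: it silently disables every rule carrying that tag, present and future.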
However, if any rules are deactivated at all, we strongly recommend a process that re-evaluates them periodically: are they still required to be deactivated, or can the anomaly scoring be configured so that they remain active without unduly impacting the score?
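A frequently better alternative to full deactivation is to exclude only the offending location from a rule, which keeps it active everywhere else. A sketch, assuming the false positive is triggered by a request parameter named password (the parameter name and rule id are illustrative):

```apache
# Instead of SecRuleRemoveById 942100, stop rule 942100 from
# inspecting only the "password" argument; it keeps inspecting
# all other request parameters.
SecRuleUpdateTargetById 942100 "!ARGS:password"
```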
For 2.2.x, Netnea once evaluated the rule set for the amount of false positives each rule created, which is an interesting read and can be found in their blog archive. This resource also helps one understand configuration choices and allows for a better assessment of deactivated or modified rules. Most rules did not produce very many false positives at the time, but it is useful to be able to identify the outliers.
Lastly, modified or completely custom rules are looked at. Do they overlap, are they based on a misunderstanding of what the CRS does, or are they customizations and adaptations to the specific needs of an application? They should be compared against their documented purposes. In any case, the effort to adapt the rules to one's needs is to be lauded.
Given the strengths of the CRS, as a reviewer there is not much to worry about as long as the configuration is not completely broken, such as by deactivating the evaluation rules for anomaly scoring. It is important to understand which rules are deactivated and how ModSecurity and the CRS are embedded into the client's larger security setup: whether the alerts generated and the logs produced are handled in a useful manner and the situation is monitored. If the logs are used for alerting and monitoring and no crucial rules are deactivated, then using the Core Rule Set is a significant security win.