I want a "Red Teaming"
Michael Schneider
A penetration test consists of three stages: scoping, testing and documentation. For all three, there is extensive discussion – of varying quality – within the IT security community. On scoping, there is sometimes heated, even polemical, debate about whether social engineering should be included. Elsewhere, you will find claims that all of a company's peripheral systems belong in the scope. It's an important discussion, but risk scenarios are too often mixed up with the context in which statements are made, which makes the debate hard for lay observers to follow. The same debates appear in testing. One article worth reading on the subject is Purple Testing, a combination of the classic schema of red (attack) versus blue (defense). Another is Interactive Pentesting, in which the author calls for greater involvement of clients during tests. What makes these articles interesting is that they look beyond the usual bounds of these discussions. Less valuable are the hair-splitting articles that examine the exact definitions of terms such as penetration test and security assessment.
One aspect that sometimes comes off second-best is documentation, which seems to affect testers the way sunlight affects vampires. That reports must be written for the reader, not the tester, is one simple fact that is often overlooked. A simple list of vulnerabilities may be enough when you're collaborating directly with the development team. But where the report is needed to justify a new budget, a purely technical report will rarely cut it.
At scip, we are always trying to improve, which leads to discussions about what to test, how to test and, naturally, how to document findings. Here, I would like to reflect on two approaches to testing – checklist-based versus scenario-based – in relation to (web) applications, and their implications for documentation.
Essentially, this already exists in many automated scanners for networks and web applications. Test cases – nothing more than points in a checklist – are defined in the program, and these are then run through. The question is, how does this approach work for manual tests, and does it make sense?
Many advantages and disadvantages depend on the implementation and quality of the checklist. Are the points in the list risks, or test cases/payloads? Let's examine this using the example of session management. Depending on the approach, we can work through four separate points in the checklist one by one.
Or we can simply summarize them as a single item.
As these are four different problems, you could well argue that each deserves its own place in the checklist. On the other hand, you could also argue that all four items amount to the same risk – a session being taken over. What's more, presumably only one tool is needed for the first three items, because from an implementation perspective they are interdependent: the probability of a collision, for example, is higher if the token is short. Fortunately, most applications these days use established session token technologies that have been tested extensively and are regarded as secure. If, for example, a simple user name is used as the token, that will be noticed in the test and documented. But it also makes sense to fold this point into 'session token technology'.
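To illustrate the collision reasoning, here is a deliberately crude heuristic in a few lines of Python. The function name and approach are my own invention for this sketch, not a standard tool; a real assessment would need a large token sample and proper statistical testing.

```python
import math

def rough_token_entropy(tokens):
    """Crude lower-bound estimate: shortest token length times
    log2 of the observed character set. A short token or a tiny
    alphabet shows up immediately as a low bit count."""
    charset = set("".join(tokens))
    min_len = min(len(t) for t in tokens)
    return min_len * math.log2(len(charset))

# Three 8-character hex-like tokens. With a sample this small, the
# observed charset underestimates the real alphabet, so the result
# is only a lower bound.
sample = ["a3f9c210", "0b7d44e1", "9c01ffab"]
print(f"{rough_token_entropy(sample):.1f} bits")
```

At roughly 30 bits, collisions become likely after only a few tens of thousands of sessions (birthday bound) – which is exactly why token length and randomness are hard to separate into independent checklist items.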
If, on the other hand, we apply the risk-versus-test-case question to cross-site scripting (XSS), it is much easier to decide whether to differentiate by risk or by test:
XSSs that exist because of browser errors (universal XSS and mutation-based XSS) should be included in a checklist for browser testing. If there is a filter, and if it is possible to circumvent it, then this is another vulnerability and it is better to list it separately.
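Why a circumventable filter deserves its own entry is easy to demonstrate. The filter below is a hypothetical, deliberately naive example – not taken from any real product – that strips the literal tag in one pass but never re-scans the spliced result:

```python
def naive_filter(html: str) -> str:
    # Hypothetical filter: removes every literal occurrence of the
    # tag in a single pass, but does not re-scan what remains after
    # removal -- a classic implementation mistake.
    return html.replace("<script>", "")

payload = "<scr<script>ipt>alert(1)</script>"
filtered = naive_filter(payload)
print(filtered)
assert "<script>" in filtered  # the bypass survived
```

Removing the inner tag splices the surrounding fragments back together into a working `<script>` tag; the filter itself reconstructs the payload. The filter bypass is therefore a separate vulnerability from the underlying XSS.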
There is no single answer to the question of test case versus risk; it must be decided case by case. The overriding principle is the most complete and comprehensive coverage of the defined scope.
The main benefits of checklists in testing are completeness and comparability.
With checklists, it is important that testers do not regard the list as definitive, but as a minimum requirement. This is a problem of how the checklist is used rather than of the approach itself. Where a rare finding arises that is not in the checklist, it may not be worth adding it as a permanent point; instead, it should be documented separately. Concentrating solely on the list would otherwise generate extra effort without added value.
The checklist also helps with documentation.
Listing vulnerabilities in the context of all tested points then becomes almost a side-effect of complete documentation. This helps avoid a situation where a single major vulnerability sets the tone amid 100 otherwise positive points – for example, where the output encoding was incomplete, but session handling and the server header configuration were implemented without error.
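As a sketch of what such complete documentation might look like in data form – the structure and names here are invented for illustration, not our actual tooling:

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    name: str
    passed: bool
    note: str = ""

def summarize(items):
    """Render every tested point, not just the failures, so a
    serious finding appears in the context of everything that
    was verified as sound."""
    lines = []
    for it in items:
        status = "OK " if it.passed else "FAIL"
        suffix = f" - {it.note}" if it.note else ""
        lines.append(f"[{status}] {it.name}{suffix}")
    return "\n".join(lines)

results = [
    ChecklistItem("Session token technology", True),
    ChecklistItem("Server header configuration", True),
    ChecklistItem("Output encoding", False, "reflected XSS in search field"),
]
print(summarize(results))
```

The positive entries cost nothing extra to report, but they put the one failure into proportion for the reader.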
Reports that do more than just criticize are generally easier to accept – and that matters, because we want the report to improve security and support the developers rather than generate resistance.
An additional, positive side-effect here is training of administrators and developers. Just because a vulnerability is not apparent in the application does not mean that it was actively avoided or even that anyone was aware of its existence.
The scenario-based testing approach has its origins in risk analysis, which is used to find out which risks exist and to assess them according to their impact. Appropriate counter-measures can then be developed. The same approach can be applied to web application penetration tests.
Server compromise or website defacement are general risks that apply to almost any application, but every application also has its own specific risks. For a banking application, it would be fatal if one user could carry out transfers in the name of another. For a mail service, being able to read another user's emails is the kind of risk that must never materialize.
As with the checklist approach, scenarios first have to be developed. Some can be developed without input from the customer, while others require insider knowledge. Take an application that functions as a training platform: there may be an artificial intelligence working in the background, and were an attacker able to steal it, the company's entire business model would collapse. Without this background information, the theft might never be listed as a testing objective – yet a statement about it in the report would certainly be welcome.
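A minimal way to capture such scenarios, including whether they can only be defined with customer input, might look like this. The field names and example scenarios are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    impact: str               # what happens if the attack succeeds
    needs_insider_info: bool  # can it only be defined with customer input?

def needs_workshop(scenarios):
    """Scenarios that cannot be derived from the outside; these
    should be clarified with the customer during scoping."""
    return [s.name for s in scenarios if s.needs_insider_info]

scenarios = [
    Scenario("Cross-user transfer", "transfers in the name of another user", False),
    Scenario("Mailbox access", "reading another user's emails", False),
    Scenario("Model theft", "the training platform's AI model is stolen", True),
]
print(needs_workshop(scenarios))  # ['Model theft']
```

Flagging the insider-knowledge scenarios up front turns the scoping discussion into a concrete agenda item rather than an afterthought.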
The main advantages of scenario-based testing are customer-specific relevance and results that are easy to communicate.
This way of testing also benefits documentation: scenarios can flow directly into the management summary and make the findings comprehensible to non-technical readers.
Problems can arise when this approach is carried to the extreme. If the documentation contains only complete scenarios, known vulnerabilities may be missing from the report. Such a vulnerability might only become relevant in a future scenario – but fixing it now prevents that attack scenario from ever arising. These vulnerabilities must therefore also find a place in the documentation.
Both approaches have their merits. In the scip Red Team, we try to draw the best from both sides. For the tests, we use checklists to guarantee completeness and comparability. The individual vulnerabilities identified sometimes give rise to new scenarios that were not previously apparent; conversely, scenarios can lead to new points being included in the checklist. In reports, too, we use a combination of both approaches. The goals of any report are transparency, completeness, comprehensibility and relevance.
In the case of transparency, the checklist approach can help enormously. It is easy to see what was tested, which areas were fine and which areas need improvement. This approach also helps maintain completeness. Comprehensibility and relevance are covered as far as possible by scenarios. Scenarios flow as often as possible into the individual points and are referred to in the management summary. With scenarios, we can also ensure the appropriate customer-specific relevance.
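The cross-referencing between individual checklist findings and the scenarios they feed could be sketched like this – the findings and scenario names are invented for illustration:

```python
# Hypothetical mapping: which checklist findings enable which scenarios.
findings = {
    "weak session token": ["account takeover"],
    "reflected XSS": ["account takeover", "phishing via trusted domain"],
    "verbose server header": [],  # no scenario yet, still documented
}

def scenario_index(findings):
    """Invert the mapping so the management summary can cite,
    per scenario, the concrete findings that enable it."""
    index = {}
    for finding, scenarios in findings.items():
        for sc in scenarios:
            index.setdefault(sc, []).append(finding)
    return index

print(scenario_index(findings))
```

Findings that map to no scenario stay in the checklist part of the report; findings that map to several show up in each scenario narrative, which is how the two approaches reinforce each other.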