I want a "Red Teaming"
Michael Schneider
Vulnerabilities in software are worth money. Bug bounties are a source of income for many independent security experts. In 2014, Facebook's bug bounty program paid out over a million US dollars. During the same year, Microsoft paid out the record sum of 100,000 US dollars for a single vulnerability: a bug in Windows 8, discovered by the British researcher James Forshaw.
But discovery is not the only lucrative side of the business; developing insecure software for clients can also bring in money. Time and time again, developers create software for clients and, perhaps because they don't know any better, leave out crucial, rudimentary security mechanisms during development. Or those mechanisms end up being a source of errors. The nasty surprise follows when the vulnerabilities turn up during a security check and suddenly there is a need for mitigation.
When the security officer of a company pulls the emergency brake and the release of an application is halted immediately, crisis mode sets in: How long does it take to implement decent input validation in the application? How long to replace the database queries that are vulnerable to SQL injection with prepared statements? Or to build a solid session management that respects the data model of the application and integrates seamlessly into the existing application without disturbing existing components? And who pays for all this? Often, developers try their luck following the logic of «attack is the best defence» and present their estimates, including the anticipated bill.
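To make the scale of such a retrofit concrete: swapping a string-built SQL query for a prepared statement is a small change per query, but repeating it across an entire code base is exactly what drives the estimates mentioned above. The following minimal sketch, using Python's built-in sqlite3 module and a hypothetical users table, contrasts the vulnerable pattern with the parameterized one.

```python
import sqlite3

# Hypothetical in-memory database with a users table, for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: the input is concatenated into the SQL string, so the payload
# changes the structure of the query and the WHERE clause matches every row.
vulnerable = "SELECT name FROM users WHERE name = '" + user_input + "'"
print(conn.execute(vulnerable).fetchall())   # [('alice',), ('bob',)]

# Prepared statement: the placeholder binds the input as data only, so the
# payload is treated as a literal string and matches no user.
prepared = "SELECT name FROM users WHERE name = ?"
print(conn.execute(prepared, (user_input,)).fetchall())  # []
```

The same principle applies to any database driver that supports bound parameters; the placeholder syntax differs, but the input is always passed as data rather than as part of the SQL text.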
It is only fair that the customer asks «Do I have to pay for that?», and if all rules of logic are followed, the answer must be no. Why should a customer pay for basic quality features that are the de facto standard today? To invoke the ever-present car analogy for security: A customer can expect to buy a car with working brakes and airbags. If that isn't the case, it would be nothing short of outrageous if the dealer asked for 50 per cent of the initial purchase price to retrofit the car once the customer demands brakes and an airbag (if the customer is still able to demand anything at all, that is).
But the situation is not quite that simple. If the contract between developer and client does not explicitly state how security fixes are to be handled, many developers will try to get away with charging for them. The reason is that defining security requirements, alongside the functional requirements, is often neglected in the terms of the contract or the service level agreement.
Yet these requirements are of great importance. Not writing them into the contract can be a huge risk for the project: What if it turns out shortly before release that the application fails completely at handling client data securely and an attacker can easily gain access to sensitive information? The situation is dire: Releasing the application in its current state is a risk very few people are willing to take. Scrapping the application would mean the complete loss of all investment in development, project coordination and so on, only to begin again from scratch. The only viable solution is to pay the developer the price they are asking for the metaphorical brakes and airbags.
It gets even more complicated when the discussion turns to shortcomings that don't quite deserve the label scandalous: missing security features, for example. They are not vulnerabilities in the strict sense. But even if all parties can agree that these shortcomings are to be taken care of by the developer, there is resistance. Sometimes this resistance is justified, because the client wasn't willing to budget for the desired multi-factor authentication. Sometimes it isn't, because you would expect a software developer with over ten years of experience to know which standards a financial institution needs to observe in terms of software security.
A solid approach to this complex problem is to use an established framework for security requirements and make it part of the contract between developer and client. For web applications, there is the OWASP Application Security Verification Standard (ASVS). ASVS defines three levels of requirements that an application has to fulfil, depending on the level chosen by the client. Level 1, labelled Opportunistic, is the generally accepted minimum baseline for all applications. It covers vulnerabilities that can be identified easily with automated tools and without access to the source code, such as Cross-Site Scripting or simple SQL injection.
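As an illustration of the kind of Level 1 defect an automated scanner finds without ever seeing the source code: reflected Cross-Site Scripting usually comes down to user input being written into an HTML response without encoding. The sketch below, plain Python with the standard html module and a hypothetical greeting page, shows the unsafe variant next to the encoded one.

```python
import html

def greeting_unsafe(name: str) -> str:
    # User input goes into the HTML response verbatim; a payload such as
    # <script>...</script> would be executed by the victim's browser.
    return "<h1>Hello " + name + "</h1>"

def greeting_encoded(name: str) -> str:
    # html.escape turns markup characters into harmless entities, so the
    # payload is displayed as text instead of being interpreted as HTML.
    return "<h1>Hello " + html.escape(name) + "</h1>"

payload = "<script>alert('xss')</script>"
print(greeting_unsafe(payload))   # the script tag ends up in the page
print(greeting_encoded(payload))  # &lt;script&gt;... rendered as plain text
```

Most modern template engines perform this encoding by default; the problems arise where developers bypass it or assemble HTML by hand.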
Levels 2 and 3 go further: Standard (Level 2) defines the requirements needed to fend off an average attacker and is typically used for applications that handle sensitive data such as client data or other information protected by regulation. Advanced (Level 3) is the highest level and demands the most far-reaching security measures, because the applications assigned this level handle critical data such as that found in the health, financial or military sectors.
By integrating an ASVS level into the call for tenders and into the company's contractual baseline (the process that is applied when contracts are drafted), conflicts and nasty surprises can be avoided before they even arise. The situation becomes more transparent and both developer and client are treated more fairly. For the developer, the required effort and the expectations are clear from the start. The client can rely on the application fulfilling the agreed security standard, and they can have this verified by a security assessment or a penetration test without needing a war chest filled to the brim in case the test turns up nasty surprises.