How to use AR and AI for Penetration Testing
Improving efficiency and accuracy is very important to us; ultimately, we want to offer the best service on the market. In offensive security, i.e. security testing, one requirement is offering employees the right balance of general and detailed information across a variety of technologies. After all, our penetration testers have to deal with highly complex systems every day, find vulnerabilities in these systems, and exploit them successfully. Keeping up at all times demands a continuous learning process in this area.
One of our internal research projects aims to enable our analysts to do more. In certain situations, they should automatically receive support so that they can immediately tackle new kinds of problems and focus on the highly complex details.
Developing an Augmented Reality application (AR app) makes it possible, during the first phase of a penetration test, to identify which software components are in use. This often forms the foundation for dealing with the product’s quirks and, of course, its vulnerabilities.
The combination of optical character recognition (OCR) and image recognition makes it possible to immediately identify a local client application or web application. All the analyst has to do is point the AR app at their screen; an overlay feature then displays various details.
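The identification step can be sketched as matching OCR output against a table of product signatures. This is a minimal illustration only: the signature table and matching rules below are hypothetical, and the OCR text would in practice come from an OCR library run against the camera frame.

```python
import re

# Hypothetical signature table mapping text patterns to product names;
# the real app's database and matching rules are not described in detail.
KNOWN_PRODUCTS = {
    r"apache tomcat": "Apache Tomcat",
    r"jenkins": "Jenkins",
    r"citrix": "Citrix",
}

def identify_software(ocr_text: str) -> list[str]:
    """Return product names whose signature appears in the OCR output
    (e.g. text extracted from a screen capture)."""
    text = ocr_text.lower()
    return [name for pattern, name in KNOWN_PRODUCTS.items()
            if re.search(pattern, text)]
```

In a real pipeline, the matched names would then drive the overlay and the backend lookup described below.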
As soon as the app has identified a piece of software, it sends a query to the backend server, which returns the requested information for display in the app. This architecture is necessary because a local database would not offer the desired flexibility and performance.
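The app-to-backend lookup might look roughly like the following. This is a hedged sketch: the endpoint URL, field names, and response shape are assumptions for illustration, not the project’s real API.

```python
import json
import urllib.request

def build_query(product: str, version: str) -> dict:
    """Assemble the lookup request the app would send once a product
    has been identified on screen. Field names are assumed."""
    return {"product": product, "version": version,
            "fields": ["known_vulnerabilities", "test_hints"]}

def send_query(query: dict,
               url: str = "https://backend.example/api/lookup") -> dict:
    """POST the query as JSON to the (hypothetical) backend and
    return the decoded response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)
```

Keeping the knowledge base server-side, as the article notes, lets it be updated and scaled independently of the app.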
We have been experimenting for years with expert systems that are designed to make work easier. We already developed an implementation back in 2010 for the information gathering and enumeration stages. The system guides the analyst through their decision-making process. They select the current state and are given suggestions on possible ways to proceed. Next, they can follow these to discover new possibilities. The system helps them both with the choice and the implementation (e.g. suggested input).
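The decision-support idea can be sketched as a simple state-to-suggestions lookup: the analyst selects the current state and receives possible next steps, each with a suggested input. The states and tool invocations below are illustrative examples, not the actual rule base of the 2010 system.

```python
# Illustrative rule base: current test state -> (action, suggested input).
NEXT_STEPS = {
    "host_discovered": [
        ("port scan", "nmap -sS -p- <target>"),
        ("service/version scan", "nmap -sV <target>"),
    ],
    "web_service_found": [
        ("directory enumeration",
         "gobuster dir -u http://<target>/ -w wordlist.txt"),
        ("technology fingerprinting", "whatweb http://<target>/"),
    ],
}

def suggest(state: str) -> list[tuple[str, str]]:
    """Return (action, suggested input) pairs for the selected state;
    an unknown state yields no suggestions."""
    return NEXT_STEPS.get(state, [])
```

Following a suggestion moves the analyst into a new state, from which the system can offer the next round of possibilities.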
The combination of data collection via OCR, further processing with AI, and the display of details using AR offers an integrated approach to supporting the analyst.
This helps the analyst determine what kind of access attempt to carry out. For example, if the software detects that an SQL injection test with sqlmap is in progress, information on possible parameters and recommended procedures is displayed automatically.
AI can help in very specific ways when trying to solve a problem, such as when the task involves elevating privileges for a local test or testing a Citrix system. AI can show how to achieve the desired goal step by step. We are deploying this feature on an experimental basis for our internal set of Alexa skills.
This simplifies work for the analyst by eliminating the need to make the most elementary decisions and deal with functional details. Instead, greater focus can be placed on creative problem solving and implementing specific approaches.
Originally, the proof of concept was planned around Google Glass, but Google pulled the plug on further development several years ago. Hardware and software support for the AR glasses has since become so poor that the solution was developed as an Android app instead. The goal is to achieve maximum platform support and, as a result, portability.
Various libraries are available for text and image recognition. As is so often the case, however, each comes with its own advantages and disadvantages. Especially complex systems require a certain level of hardware performance; if this is lacking, the image processing consumes so many resources that the camera feed only updates every few seconds. For AR as well as VR, it is imperative that recognition and display work together in real time. A sluggish solution results in a product that is neither intuitive nor ergonomic.
Text recognition works very well on its own in some cases, but it can be inaccurate at times, resulting in pesky false positives. For example, if the text on a website contains the expression Java SE 12, the AR app will conclude that this is the software in use. Blacklisting certain expressions and logically linking statements can help minimize this undesirable effect. Image recognition, however, should be the primary mechanism for placing text in context, such as identifying whether a piece of text appears in the title bar of a desktop window or the footer of a web page.
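The false-positive filter described above can be sketched as a two-step check: drop blocklisted phrases, then require that the detection came from a trusted screen region. The blocklist entries and context labels here are assumptions for illustration.

```python
# Phrases that often appear as mere body text rather than indicating
# the software actually under test (illustrative blocklist).
BLOCKLIST = {"java se", "powered by"}

def accept_detection(phrase: str, context: str) -> bool:
    """Keep a detection only if it is not blocklisted and was located
    by image recognition in a trusted UI region (assumed labels:
    'title_bar' for desktop windows, 'footer' for web pages)."""
    if any(blocked in phrase.lower() for blocked in BLOCKLIST):
        return False
    return context in {"title_bar", "footer"}
```

Combining the textual blocklist with the positional context from image recognition is what lets the app tell a real product banner apart from an incidental mention.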
The recognition is reliable as long as clear rules have been defined and specific queries can be sent to the backend. Things become problematic when it is not always clear which component information is required for, what level of detail is useful at that point in time, and how long the data should be displayed (important elements may sometimes be hidden during interactive testing). So, as with any new technology, more experience needs to be gained to ultimately develop a product that integrates seamlessly into day-to-day tasks and serves as an effective work tool.
Technologies such as augmented reality and artificial intelligence can make things tangible, more comprehensible, and more efficient to handle. The goal of our research is to address the needs of penetration testers: text/image recognition identifies the software under test, and AI recommends which testing steps to take next. AR is well suited to this task because it can overlay additional insights directly in the analyst’s view.
There are still many technical as well as psychological challenges, but it has become apparent that this is the right path to improving quality and efficiency.