Augmented Reality and Artificial Intelligence – Use cases for Offensive Security

by Marc Ruef
time to read: 7 minutes

Keypoints

How to use AR and AI for Penetration Testing

  • A range of sectors are discovering the possibilities afforded by augmented reality (AR) and artificial intelligence (AI)
  • We are conducting an internal research project to develop a tool for our penetration testers
  • The AR software automatically identifies potentially vulnerable products
  • It provides background information on the product and its vulnerabilities
  • Unique challenges that ultimately affect key aspects of the user experience came up during development

Everyone is talking about Augmented Reality (AR) and Artificial Intelligence (AI). Various industries have now discovered ways to tap into the benefits of these technologies themselves. This article provides a few insights into our research, which of course also aims to apply these technologies in the realm of offensive cybersecurity.

Improving efficiency and accuracy is very important to us. Ultimately, we want to offer the best service on the market. When it comes to offensive security, i.e. security testing, the various requirements include offering employees the right balance of general and detailed information for a variety of technologies. After all, each day our penetration testers have to deal with highly complex systems, find vulnerabilities in these systems and successfully exploit them. To keep up at all times, continuous learning processes in this area are essential.

Augmented Reality as an Enhancement

One of our internal research projects aims to enable our analysts to do more. In certain situations, they should automatically receive support so that they can immediately tackle new kinds of problems and focus on the highly complex details.

Detecting Software Components

We are developing an augmented reality application (AR app) that makes it possible, during the first phase of a penetration test, to identify which software components are in use. This identification often forms the foundation for dealing with a product’s quirks and, of course, its vulnerabilities.

The AR app recognizes a range of different products

The combination of optical character recognition (OCR) and image recognition makes it possible to immediately identify a local client application or web application. All the analyst has to do is point the AR app at their screen; an overlay then displays various details about the recognized product.
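As a rough illustration of this detection step, the sketch below matches OCR-extracted screen text against known product signatures. The signature patterns, product names and the assumption that the OCR text is already available as a plain string are all illustrative; the real app combines this step with image recognition.

```python
import re

# Hypothetical product signatures: regex patterns mapped to product IDs.
# In the real app, these would come from the backend's signature database.
SIGNATURES = {
    r"Apache(?:/|\s+)(\d+\.\d+(?:\.\d+)?)": "apache_httpd",
    r"nginx/(\d+\.\d+\.\d+)": "nginx",
    r"OpenSSH[_ ](\d+\.\d+\w*)": "openssh",
}

def identify_products(ocr_text: str) -> list[dict]:
    """Match OCR-extracted screen text against known product signatures."""
    hits = []
    for pattern, product in SIGNATURES.items():
        for match in re.finditer(pattern, ocr_text):
            hits.append({"product": product, "version": match.group(1)})
    return hits

# Example: text as it might come back from the OCR stage
screen_text = "Server: nginx/1.18.0 ... Powered by Apache/2.4.41"
print(identify_products(screen_text))
```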

As soon as the app has identified a piece of software, it sends a query to the backend server, and the requested information is then displayed in the app. This is necessary because a local database does not offer the desired flexibility and performance.
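A minimal sketch of such a lookup request is shown below. The endpoint URL and parameter names are invented for illustration; the actual backend and its API are internal.

```python
from urllib.parse import urlencode

# Hypothetical backend endpoint; the real service and its API are internal.
BACKEND = "https://vulnbackend.example/api/v1/lookup"

def build_vuln_query(product: str, version: str, max_results: int = 10) -> str:
    """Build the lookup URL the app would send once a product is identified."""
    params = urlencode({"product": product, "version": version, "limit": max_results})
    return f"{BACKEND}?{params}"

print(build_vuln_query("nginx", "1.18.0"))
```

Keeping the vulnerability data server-side means the signature and vulnerability databases can be updated instantly for all analysts, without shipping app updates.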

Artificial Intelligence as a Guide

AR app suggests the next step to elevate privileges

We have been experimenting for years with expert systems that are designed to make work easier. We already developed an implementation back in 2010 for the information gathering and enumeration stages. The system guides the analyst through their decision-making process. They select the current state and are given suggestions on possible ways to proceed. Next, they can follow these to discover new possibilities. The system helps them both with the choice and the implementation (e.g. suggested input).
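The core of this guidance idea can be sketched as a mapping from the analyst's current state to suggested next steps. The states and suggestions below are illustrative assumptions; the actual expert system and its rule base are internal.

```python
# Illustrative rule base: the analyst's selected state maps to next steps.
NEXT_STEPS = {
    "unauthenticated_web": [
        "Enumerate endpoints and parameters",
        "Test input fields for injection",
    ],
    "sqli_confirmed": [
        "Enumerate databases (e.g. sqlmap --dbs)",
        "Attempt to read application credentials",
    ],
    "local_shell": [
        "Check sudo rights and SUID binaries",
        "Search for stored credentials",
    ],
}

def suggest(state: str) -> list[str]:
    """Return suggested next steps for the analyst's selected state."""
    return NEXT_STEPS.get(state, ["No guidance for this state yet"])

print(suggest("sqli_confirmed"))
```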

Combining data collection via OCR, further processing with AI, and the display of details via AR offers an integrated approach to enhancing the analyst’s work.

This helps the analyst determine what kind of access attempt they should carry out. For example, if the software detects that an SQL injection is being tested with sqlmap, information on possible parameters and recommended procedures is displayed automatically.

AI can help in very specific ways when trying to solve a problem, such as when the task involves elevating privileges for a local test or testing a Citrix system. AI can show how to achieve the desired goal step by step. We are deploying this feature on an experimental basis for our internal set of Alexa skills.

This simplifies work for the analyst by eliminating the need to make the most elementary decisions and deal with functional details. Instead, greater focus can be placed on creative problem solving and implementing specific approaches.

Challenges for Development

Originally, the proof of concept was planned around Google Glass, but Google pulled the plug on further development of the product several years ago. Hardware and software support for the AR glasses has since deteriorated to the point that the solution was developed as a regular Android app instead. The goal is to achieve maximum platform support and, as a result, portability.

AR for Google Glass originally planned

Various libraries are available for text and image recognition. As is so often the case, however, each comes with its own advantages and disadvantages. Especially complex models require a certain level of hardware performance; if this is lacking, image processing consumes so many resources that the camera feed can only be processed every few seconds. For AR as well as VR, it is imperative that recognition and rendering work together in real time. If the solution is too sluggish, the result is a product that is neither intuitive nor ergonomic.

Text recognition on its own works very well in some cases, but it can also be inaccurate at times, resulting in pesky false positives. For example, if the text on a website contains the expression Java SE 12, the AR app will naturally conclude that this is the software being used. Blacklisting certain expressions and logically linking statements can help minimize this undesirable effect. Primarily, however, it should be image recognition that places text in context, for example by identifying whether a piece of text appears in the title bar of a desktop window or in the footer of a web page.
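The following is a minimal sketch of such a false-positive filter. The blacklist entries and context weights are illustrative assumptions, not the production rules.

```python
# Expressions that typically appear in prose rather than in application UIs.
BLACKLIST = {"java se"}

# Context multipliers: where the text was found matters as much as the text.
CONTEXT_WEIGHT = {"title_bar": 1.0, "footer": 0.8, "body_text": 0.2}

def score_detection(text: str, context: str, base_confidence: float) -> float:
    """Down-rank detections from blacklisted expressions or weak contexts."""
    if text.lower() in BLACKLIST and context == "body_text":
        return 0.0  # e.g. "Java SE" merely mentioned in an article body
    return base_confidence * CONTEXT_WEIGHT.get(context, 0.5)

print(score_detection("Java SE", "body_text", 0.9))    # suppressed
print(score_detection("nginx/1.18.0", "footer", 0.9))  # kept
```

Here the context label ("title_bar", "footer", "body_text") is exactly what image recognition would contribute: classifying the screen region a piece of text was found in.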

The recognition is reliable as long as clear rules have been defined and specific queries can be sent to the backend. Things can become problematic here if it is not always clear for which component information is required, what level of detail is useful at this point in time, and how long the data should be displayed (important elements may sometimes be hidden during interactive testing). So, just like any other new technology, more experience needs to be gained to ultimately develop a product that can be seamlessly integrated into day-to-day tasks and that can be used as an effective work tool.

Conclusion

Technologies such as augmented reality and artificial intelligence can help to make things tangible and more comprehensible and to deal with them more efficiently. The goal of our research efforts is to address the needs of penetration testers. In this way, we hope to deploy text/image recognition to properly identify a piece of software to be tested and AI to recommend which testing steps to take next. AR is great for this task because it can provide additional insights.

There are still many technical as well as psychological challenges, but it has become apparent that this is the right path to improving quality and efficiency.

About the Author

Marc Ruef

Marc Ruef has been working in information security since the late 1990s. He is well known for his many publications and books. His latest book, The Art of Penetration Testing, discusses security testing in detail. He lectures at several institutions, including ETH, HWZ, HSLU and IKF. (ORCID 0000-0002-1328-6357)
