This is how Social Engineering works
Johnson and Goldstein's well-known study focused on organ donation and the various consent models used in different countries. A distinction can be made between three models. Opting in is a consent solution whereby individuals have to explicitly state that they would like to become organ donors; the initial assumption is that they are against it. With the opting-out model, the assumption is reversed: people are automatically considered to be in favor of organ donation, and individuals who do not want to become donors have to explicitly opt out. Then there is the neutral method, where everyone has to actively decide either for or against. The correlation between the number of organ donations and the chosen model is striking. Johnson and Goldstein convincingly showed that the percentage of donors was around 98% where the opting-out variant was used, but only ranged between 5% and 30% where the opting-in model was applied, a difference attributable to behavioral and psychological traits. When I came across these findings, I considered the extent to which approaches from behavioral psychology can be used in social engineering. I will present some of my findings below.
The above is a typical example of the default effect: people who do not actively make a decision tend to go with the default, even if it was set arbitrarily. This applies not just to organ donation; the effect can influence entirely trivial everyday decisions. A company can save a considerable sum of money simply by setting its printers' defaults to black-and-white, double-sided printing. The same effect can also be harnessed for staff security. Because of the default effect, it can be assumed that many users leave their computers' settings in the default state. This is especially useful in the context of defending against social engineering: a company could, for example, configure its email program not to load pictures in messages automatically, or have macros in Office documents disabled by default.
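To give one concrete, deliberately hedged illustration of such secure defaults: on Windows, Office macro behavior can be enforced centrally via Group Policy, which is backed by registry values. The fragment below is a sketch only. The `VBAWarnings` policy value is documented by Microsoft (4 = "disable all macros without notification"), while the Outlook key path and the `BlockExtContent` value name are assumptions that should be verified against current Microsoft documentation for your Office version (the paths below assume Office 16.0):

```reg
Windows Registry Editor Version 5.00

; Disable all Word macros without notification
; (policy-backed value; 4 = "disable all without notification")
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\16.0\Word\Security]
"VBAWarnings"=dword:00000004

; Keep Outlook's default of not downloading external pictures in mail
; (key path and value name are assumptions; verify before deploying)
[HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Outlook\Options\Mail]
"BlockExtContent"=dword:00000001
```

Because users rarely change defaults, a hardened baseline like this protects even staff who would never open the settings dialog themselves.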
With the framing effect, a single message is formulated in two different ways, with one formulation often proving more persuasive to its recipients. Take shopping in the supermarket, for instance: you can draw shoppers' attention to particularly "healthy" products by pointing out that they're 80% fat-free rather than that they contain 20% fat.
We can also draw a distinction between gain framing and loss framing with respect to the framing effect, with the former primarily emphasizing the benefits that could be gained, and the latter focusing on the potential losses.
Both approaches increase the probability of people taking action compared to a neutrally formulated statement. Kahneman and Tversky found that people are more willing to take risks when faced with a loss and more risk-averse when faced with a potential gain.
The framing approach is already widely used in phishing emails. Take the scenario where an email recipient has allegedly won a prize in a lottery, but a "security flaw" has been discovered and they are instructed to change their password as soon as possible using the URL that's conveniently included in the mail. Alternatively, they might be able to claim the dream holiday they've won by clicking on the link provided. In these examples, an attacker combines several methods at once, which brings us to the next approach.
Thaler and Sunstein make a distinction between the automatic system and the reflective system (sometimes also described as system 1 and system 2 respectively in the specialist psychological literature). The automatic system is based on intuition and speed. It controls our instinctive actions (wincing at the sound of an oncoming train honking its horn, or jumping out of the way when we see a car approaching out of the corner of our eye). According to Thaler and Sunstein, these activities are processed in the oldest part of the brain, in areas we even share with lizards, for example.
The reflective system, meanwhile, focuses on reflections and rationality. It is slower and acts more deliberately. One example is when we’re thinking about something that concerns us and consider several alternative approaches.
Attackers using social engineering are often aware of this behavior, which is why they try to get their victims’ automatic system to take action. In so doing, they attempt to arouse fear or anxiety so victims start to act intuitively (i.e. with the automatic system) without really thinking about the situation or acting rationally. One frequent characteristic of phishing emails, for example, is that they urge the recipient to do something very urgently and quickly. An attacker might claim to have compromised the recipient’s PayPal account, so they’ll be prompted to reset their password as soon as possible. The handy button in the email allows them to instantly call up the “right” page so they can change the settings.
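Defenders can turn this characteristic around: because pressure phishing relies on recognizable urgency cues, even a crude keyword heuristic can flag such mail for a second, reflective look. The sketch below is a minimal, purely illustrative example; the cue lists, function names, and threshold are all invented for this article and are no substitute for a real mail filter:

```python
import re

# Hypothetical urgency cues of the kind described above (invented for this
# example): pressure phrases plus calls to immediate action.
URGENCY_CUES = [
    r"\bas soon as possible\b",
    r"\bimmediately\b",
    r"\burgent(ly)?\b",
    r"\bwithin 24 hours\b",
    r"\byour account (has been|was) compromised\b",
]
ACTION_CUES = [
    r"\bclick (the|this) (link|button)\b",
    r"\breset your password\b",
    r"\bverify your (account|identity)\b",
]

def urgency_score(text: str) -> int:
    """Count how many distinct urgency/call-to-action cues appear in a mail body."""
    lowered = text.lower()
    return sum(bool(re.search(pattern, lowered))
               for pattern in URGENCY_CUES + ACTION_CUES)

def looks_like_pressure_phishing(text: str, threshold: int = 2) -> bool:
    """Flag a message only when several cues co-occur; a single cue is common
    in legitimate mail, so we require at least `threshold` matches."""
    return urgency_score(text) >= threshold
```

Run against the PayPal-style lure above, "Your account has been compromised. Reset your password immediately." trips three cues and is flagged, while ordinary business mail is not. A real deployment would of course weigh far more signals (sender reputation, link targets, authentication results) than wording alone.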
One of the most famous experiments is that conducted by Stanley Milgram in 1961, where test subjects were instructed to administer electric shocks to people if they failed to perform a task correctly. The voltage was steadily increased to a level so high that it would have been fatal under real-life conditions.
One important finding of this undertaking was that, although the test subjects expressed doubts and would not have continued the experiment on their own initiative, they obeyed the research director when he instructed them to continue. Only standardized phrases, such as "Please continue", were used. The test subjects nevertheless complied, despite knowing that they wouldn't lose their compensation and could stop the experiment themselves at any time.
Here, it becomes apparent what influence authority figures have on our behavior, and how willing we are to delegate responsibility to them. Milgram's original intention was above all to investigate the Nuremberg defense, whereby accused Nazi Party members claimed that they were "just following orders".
This approach is also often used in phishing campaigns, whereby a recognized authority such as a bank, a public authority or the CEO of a large company instructs the recipient to perform an action. In 2016, for example, there was a phishing email doing the rounds in which the German Federal Criminal Police Office warned of the Locky computer virus. Attached to the email was a security guide, which was actually malware. This is where the Federal Criminal Police Office’s authority comes into play, since people “trust” this office to make the right decisions and to know a thing or two about the subject matter. Consequently, they’d doubtless consider the guide to be “good”, since there’s no way it could be “wrong”.
As these examples show, social engineering attackers systematically and frequently exploit “weak points” rooted in human nature and behavior. There’s a very good reason why social engineering attacks are known for being particularly successful.
Nowadays, there are numerous experiments that improve our understanding of human behavior and draw attention to its weak points. Linus Neumann has an interesting take on things in his article entitled Hirne Hacken ["Hacking the Human Brain"]. Notably, he questions awareness programs that aren't interactive: although users understand the theory, they do not apply it in everyday life or in emergency situations. Neumann makes it clear that the only way to bring about an effective learning effect (albeit one that decreases over time) is through individual experience. Corporate culture is also important in this regard, as is a climate in which mistakes can be admitted openly so that further damage can be mitigated, as already mentioned in the Phishing article.
Through my studies, I have become familiar with fascinating elements of behavioral psychology that attackers can exploit. Attackers often exploit human behavior by deliberately addressing our automatic system, for example, and thus trying to prevent us from thinking the situation through. Yet it is the reflective system that most awareness training courses and theories address; automatic behavior remains unchanged. While it takes a long time and regular practice to retrain the automatic system, it is not impossible. Rationally, most people know how they ought to act and understand the problem of social engineering. But the million-dollar question is: do they also take the appropriate action instinctively?