Trustworthy AI - Can Laws build Trust in AI?

Prisca Quadroni-Renella (external)
Marisa Tschopp
on September 16, 2021
time to read: 11 minutes

Keypoints

The relationship between trust and law is counterintuitive and paradoxical

  • Trust and trustworthiness are frequently used in the EU proposal for regulating AI
  • The relationship between trust and law is hotly debated
  • Increasing trust should not be the motivator behind creating laws
  • Motivators should be accountability and a functioning society

The European Commission’s AI regulation proposal is a Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, published in April 2021. Its explanatory memorandum explicitly aims to implement, among other things, an ecosystem of trust by proposing a legal framework for trustworthy AI, and the word trust appears throughout (trust 14 times, trusted 1, trustful 2, trustworthy 21, trustworthiness 3, entrusted 6, entrusting 1). This is somewhat surprising from a Swiss legal point of view.

Indeed, under Swiss law, trust (German: Vertrauen / Italian: Fiducia / French: Confiance) is not mentioned in the Swiss Civil Code, the Code of Obligations, or the Federal Product Liability Act, which constitute fundamental legal bases. However, the trend is now emerging in Switzerland as well: the second key objective of the Digital Switzerland Strategy is guaranteeing security, trust and transparency. Trust therefore seems to be becoming an important aspect of AI, and the system of governance (i.e. the structures and processes designed to ensure accountability, transparency, rule of law and broad-based participation) itself, as well as the regulators applying it, seem to need to earn public trust (Sutcliffe & Brown, 2021). But what exactly are we talking about when we talk about trust?

Trust believers vs. trust sceptics: Trust in AI systems is a hotly debated issue

The extensive use of the trust construct in a regulatory context has also been met with criticism. The topic of trust can be approached very differently depending on one’s perspective. Some, such as Joanna Bryson, argue that AI is nothing to be trusted, as AI systems should never be presented as being responsible.

Others have questioned whether users can actually trust the product or system at all, since it is merely a proxy for the designer or developer who engineered it. Moreover, it is debated who can be perceived as trustworthy in the first place, which refers to trustworthiness, the property of the trust receiver: for humans this could be integrity, for machines performance. In an influential trust review, Hoff & Bashir state that human-automation trust can be viewed as a specific type of interpersonal trust in which the trustee (i.e., the trusted actor) is one step removed from the trustor. However, there are also arguments in favour of a direct trust relationship between humans and technology. For example, within the context of automated vehicles, it may indeed be the automation in use that is trusted in particular situations. The topic of trust is complicated and neither side can be neglected: both views have valid arguments, and wherever one stands, trust should always be used with caution. From a human-AI interaction perspective, trust as a psychological construct is indispensable. In a regulatory context, however, trust is quite problematic.

In this article, we – a lawyer and a psychologist – first try to understand whether and how trust is meant to be built through regulation. We then outline our take and finally conclude that trust is not an adequate term in a regulatory context, but that it is useful when communicating with the broader public.

Can rules build trust?

According to Hilary Sutcliffe, Director of the Trust in Tech Governance Initiative, and Sam Brown, Director of Consequential, regulators of AI (such as governments and standard setters) need to implement three factors in order to be seen as trustworthy and so earn public trust in their approach:

  1. Ensuring effective enforcement (i.e. compelling observance of or compliance with a law, rule, or obligation),
  2. Explaining what they do and communicating more about their role, and
  3. Empowering citizens and developing inclusive relationships with them.

The three factors mentioned above relate to a sort of system trust, i.e. trust in the governmental and legal system, and are already in some ways implemented in the legislative process. In our view, the key word will be legal certainty, i.e. knowing what you can expect, and especially how the AI rules will be applied by judges. There must be uniform and regular application over time. Citizens must know what to expect and see that their case will be treated in the same way, with the same result, as an identical case in another part of the country. If each state or canton applies the rules effectively, but handles the same cases in different ways, trust in the system will be lost.

We therefore do not believe that rules can create trust by the mere fact of existing.

Should rules build trust?

According to Daniel Hult, lecturer at the School of Business, Economics and Law at the University of Gothenburg, the government should in any case refrain from trying to create personal trust by means of legislation. He believes that a more feasible regulatory goal is to incentivize trustworthy behaviour by societal actors (because they are more or less forced to act in a certain way), which might generate trust in the governmental and legal system as a positive side effect. He adds that if personal trust is nevertheless the chosen regulatory goal, then legislation is not a suitable regulatory technique to build it. Instead, less controlling regulatory techniques should be employed, e.g. programmes of voluntary regulation. Indeed, standards, best practices or labels set by private associations are not mandatory, and precisely because of this, a company that chooses to voluntarily adhere to them opens the door to possible trust in its behaviour.

Daniel Hult (2018) therefore agrees with the last two factors indicated by Hilary Sutcliffe and Sam Brown. Involving citizens in the process of regulation is certainly a less controlling regulatory technique. Rules, in particular mandatory ones, exclude any need for trust.

We second this: even if the legislator wished to build trust through rules, it would be wasted effort.

Trust the trust hype?

However, this has nothing to do with psychological trust in AI as an attitude of human beings, which is what this new trend in regulation is trying to achieve. Additionally, we argue that too many neglect the important difference between trust in AI and the trustworthiness of AI.

In its Ethics Guidelines for Trustworthy AI, the European Commission has already defined comprehensively and very clearly which aspects are necessary for the creation of trustworthy AI (see references). According to this guideline, Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

  1. It should be lawful, complying with all applicable laws and regulations;
  2. It should be ethical, ensuring adherence to ethical principles and values; and
  3. It should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

The guideline sets out a framework for achieving Trustworthy AI by listing requirements that AI systems should meet and providing a concrete assessment list aimed at operationalising these requirements, but it does not explicitly deal with the lawful AI component.

This component is defined by the European AI regulation proposal (see references), which sets out the requirements AI systems must meet in order to be lawful. In particular, the proposal defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market, which will be further operationalized through harmonized technical standards. The proposal also addresses the situation after AI systems have been placed on the market by harmonizing the way in which ex-post controls are conducted.

On the other hand, Switzerland’s strategy recommendation at the moment is mainly to adapt existing legislation (Christen et al., 2020) and to coordinate with Europe’s AI definitions. Indeed, while the EU has put forward a proposal for AI regulation, it should not be forgotten that we are not in a legislative vacuum. Existing laws already provide rules governing AI applications. However, some definitions, which are exclusively human-centred, need to be updated to cover machine-generated actions. An example would be adapting the Federal Product Liability Act to this technology.

The EU has set an example and established important elements that Switzerland will carefully consider, as it did for the update of its data protection law. It will now be necessary to see how the EU’s AI regulation proposal and the revised Swiss laws, once they enter into force, will be applied and, more importantly, how they will be applied over time. We believe that the challenge will be defining the first steps and coordinating the implementation. Knowledge of this technology, not only on the part of the regulator but also of all implicated stakeholders and actors such as the judiciary, will also be important.

Conclusion

In a nutshell, the idea that trust in AI will simply follow from trustworthy AI describes an ideal, linear and unfortunately unrealistic relationship. It is rather a mission or vision, nothing more and nothing less. Should we trust AI? and Is this AI trustworthy? are in fact very nuanced, but distinct questions. Maybe these are the days when we are all better off with a zero-trust approach, until a company or developer is able to prove its trustworthiness and so earn the user’s trust.

We believe that the fundamental purpose of a law is still to establish standards, maintain order, resolve disputes, and protect liberties and rights, not to build personal trust, nor trust in AI per se. With a robust legal and judicial system in AI matters, a fuzzy feeling of trust in AI may be generated over time as a positive side effect.

Having a culture of trust would certainly not hurt, though. Would it not be great if people could in fact blindly rely on AI systems? Knowing that and how they perform reliably, that their developers have good intentions, that they are secure and safe, that they treat personal data well, and so on? But that time is certainly not here, and one can only wonder when and whether it will ever arrive. Over time, however, people will better understand what AI is all about (or what it is not about). Until then, legislation will protect the vulnerable (those who are not able to understand or to defend themselves) as well as the blind(-ly trusting) tech-optimists from being fooled, and ensure that someone can be held accountable if things go wrong.

Authorship

This article was written by Prisca Quadroni-Renella, Swiss lawyer and founding Partner at AI Legal & Strategy Consulting AG and Legal Lead for Women in AI, in collaboration with Marisa Tschopp.

References

Links
