On the Relationship Between Interpersonal Trust and Technological Reliability: Some Reflections on Trustworthy AI in the AI Act
Keywords:
Artificial Intelligence, AI Act, Trust, Interpersonal Trust, Technological Reliability.

Abstract
Trust and trustworthiness have become central concepts in the ethics and governance of artificial intelligence (AI). The issue arises from the inherent uncertainty of AI system outputs, since such systems are often described as opaque, or as black boxes. The need for a legal and regulatory framework, as a response and remedy to AI opaqueness, was quickly recognised by the European Union, which put in place, in record time, a powerful piece of legislation: the AI Act. The EU's valuable efforts notwithstanding, the regulation shows tension with private and public law, and above all with the understanding of trust and trustworthiness in jurisprudence. The tension arises because, from a legal perspective, an instrument cannot be an object of trust, which, in some philosophical traditions, is defined as a strictly human-to-human relation. This tension is substantiated, for instance, in philosophical anthropology. Yet, since the terminology of trust has stuck, in this paper we offer a different understanding of the notion. Building on philosophical and anthropological contributions, we extend the meaning of trust to technical artefacts through an argument by analogy. We propose a semantics for trust and trustworthiness based on a relational framework that places AI within a network of relations, so that trust and trustworthiness are not properties of the artefact alone but of a network in which human agents still play a fundamental role.
