
Who is liable for errors in Artificial Intelligence?

Artificial Intelligence is becoming a part of almost all economic activities, transforming fundamental aspects of essential sectors, and its application at user level is becoming normalized, but does it have the appropriate regulatory support to protect users from the damage it may cause?

Bird & Bird partner Virginia Martínez gives us the keys to this pressing regulatory challenge:

The use of Artificial Intelligence (AI) is increasingly common across a wide range of production processes and services. In fact, at a time when concepts like chatbots, ChatGPT, and machine translation are becoming commonplace, we can speak of the “golden age” of AI, which is conquering new industries and uses every day.

There is no doubt that the use of AI, like all technologies, has multiple advantages:

  • Increased efficiency and productivity in businesses (e.g., releasing workers from repetitive tasks).
  • Significant reduction in human error and its negative consequences.
  • It helps businesses make decisions with advanced management of large amounts of data.
  • It allows the customer experience to be customized in certain services.
  • It automates certain processes, reducing production costs.

However, as with any new technology, these advantages come with a number of risks and disadvantages. For example:

  • The high cost of its development (at least in this initial phase of implementation).
  • The risk of infringement of rights (intellectual property rights, image rights, privacy, etc.).
  • Or even the risk that AI is used for malicious purposes.

The increasing application of AI across all industries, and even in our everyday lives, leads us to ask ourselves the following questions: Are systems or machines using AI autonomous enough to discern what is right and what is wrong, what is legal or illegal? Can they be legally held liable for their actions and the damage they may cause to third parties? If not, who is actually liable? Are the rules and principles in place adequate?

In Bird & Bird’s opinion, the solutions envisaged in the European civil liability systems (contractual or non-contractual) and in the specific regulations on liability for defective products are not adequately aligned with the nature of this new technology, and are therefore insufficient or inadequate to protect victims of its use.

Challenges of effective regulation

The main difficulty lies in the fact that machines or systems using AI are characterized by a high degree of autonomy or self-learning capacity. This condition means that they may make decisions that are not programmed and, therefore, not foreseeable in advance by the manufacturer, owner, or user. In these cases, if a third party is harmed, the requirement of “fault or negligence”, which is a basic pre-condition in any civil liability regime, would not be met.

Moreover, there may also be no “defect”, strictly speaking, in the design or manufacture of the device, as required by existing regulations on liability for defective products. And even in the event of any deficiencies, the very complexity of the algorithms or technical data used in the manufacturing could mean that it is practically impossible to determine the origin of the fault and prove it before the courts.

The context of the European Union

In this context, the EU, with the aim of promoting the adoption of AI safely and reliably, and of adapting current European liability regimes to its use, submitted two legislative proposals at the end of 2022.

  • First, the AI Liability Directive Proposal, whose objective is to ensure that injured parties obtain civil liability protection equivalent to that available for damage caused by other products. This Directive aims to mitigate the well-known “black box” effect: the difficulty victims face in proving the wrongful act and identifying the cause of the damage, which in many cases leaves injured parties without protection. To address this, the Proposed Directive establishes a series of measures that facilitate the obtaining of evidence and the identification of potentially liable parties. It also introduces the so-called “presumption of causality”, which seeks to ease the burden of proving the causal link between the fault and the damage caused by the AI system. The Directive would apply to civil claims for damages caused by an AI system brought on the basis of fault or negligence. The Proposal is currently under discussion in the Council.
  • Second, the European Union Commission has submitted the Proposed Review of Directive 85/374/EEC on liability for damages caused by defective products, as the current wording of the Directive does not provide sufficient clarity on how to determine liability for defects in software updates, machine learning algorithms, or digital services essential to the functioning of a product. The Proposal was already approved by the Parliament on March 12, 2024 and is pending formal approval by the Council.

These two legislative initiatives join the perhaps better-known AI Regulation, approved by the Council of the European Union on May 21, 2024, which aims to ensure that Artificial Intelligence systems placed on the European market and used in the EU are safe and respect fundamental rights. Its approach is based on the risk that can arise from the use of AI systems, establishing requirements and obligations for the various participants in the value chain.

It is therefore clear that the European Union has addressed the issue of liability that may arise from the use of AI. The problem is that, as has historically been the case, legislation often lags behind reality. Therefore, until EU rules are fully applicable (and, in the case of Directives, transposed into national legal systems), it will be necessary to resort to the tools provided by current legal systems to protect users from the damage that the use of AI may cause.

Article contributor:

Virginia Martínez

Virginia Martínez is a partner in Bird & Bird’s Insurance and Reinsurance department at the Madrid office. She has more than fifteen years of experience in regulatory, commercial, and procedural matters relating to insurance and reinsurance. She has participated in the design of complex insurance distribution structures, the design and negotiation of all types of contracts in the insurance sector, and the preparation and presentation of administrative files before the General Directorate for Insurance and Pension Funds.
During her career as an attorney, she has taken part in civil liability disputes and procedures, industrial claims, product liability, construction claims, and coverage dispute resolution.
