
The Role of Explainable AI (XAI) in EU Law Compliance

  • Explainable AI
  • AI Act
Elena Dubovitskaya
Professor of Law
February 11, 2025

Law Professor Elena Dubovitskaya explains how the EU AI Act will tighten transparency requirements for AI systems from 2026. Existing rules, such as the GDPR, already demand explanations of AI-based decisions, which could drive the development of XAI methods. Companies that adapt now can secure competitive advantages. In addition, the planned AI Liability Directive will introduce clear liability rules and a right to the disclosure of evidence in the event of damage.

To provide our readers with more high-quality expertise, we conduct regular interviews with leading experts in AI, data science, and machine learning. Interested experts are warmly invited to contact us for exciting collaboration opportunities: blog@statworx.com.

About Elena Dubovitskaya

Elena Dubovitskaya completed her law studies at Lomonosov University in Moscow, where she earned her doctorate in 2003 on the freedom of establishment of companies within the European Community. She then studied law at the University of Bonn. From 2009 to 2015, she was a research assistant at the chair of Barbara Dauner-Lieb at the University of Cologne, and from 2015 she worked as a research associate at the Max Planck Institute for Comparative and International Private Law in Hamburg. After her habilitation in 2019 at Bucerius Law School, where she obtained the venia legendi for Civil Law, Commercial and Corporate Law, Capital Market Law, Comparative Law, and Eastern European Law, she accepted an appointment, effective April 1, 2022, to the W3 Professorship for Civil Law and Economic Law (now: Professorship for Civil Law, Commercial and Corporate Law, and Law of Digitalization) at the University of Giessen. She has engaged extensively with explainable artificial intelligence (XAI) and has co-authored essays such as “How Should AI Decisions Be Explained? Requirements for Explanations from the Perspective of European Law,” “The Schufa, the ECJ, and the Right to Explanation,” “The Management and the Advice of (Un)Explainable AI,” “Explainable AI in the Work of the Supervisory Board,” and “AI Compliance from the Operator’s Perspective.”

1. Please briefly introduce yourself and your work.

At the University of Giessen, I hold the Chair of Civil Law, Commercial and Corporate Law, and Law of Digitalization. My research focuses on legal issues surrounding Artificial Intelligence (AI). One key question is what requirements the law imposes on the explainability of AI. Such requirements are found in data protection law, product safety and product liability law, and corporate law. For example, decision-makers in companies must be able to understand AI predictions if they rely on them; otherwise, they breach their duty of care. AI predictions must therefore either be inherently understandable (white-box AI) or be made understandable with the help of XAI (explainable AI) methods. This is not about fully disclosing the algorithm, but about providing a comprehensible representation of the main decision factors and their weighting.
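As an illustration of what “main decision factors and their weighting” can look like in practice, here is a minimal, hypothetical sketch: an opaque scoring model is trained on synthetic data, and a model-agnostic permutation importance ranks the features driving its predictions. The feature names, data, and model choice are illustrative assumptions and are not taken from the interview.

```python
# Minimal, hypothetical sketch (not from the interview): rank the main decision
# factors of an opaque scoring model by model-agnostic permutation importance.
# Feature names, data, and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "existing_debt", "payment_history", "employment_years"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # "black box"

# How much does prediction quality drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

# A compact, human-readable account of the main factors and their weighting
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:>18}: {importance:.3f}")
```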

2. The EU AI Act sets transparency requirements for AI systems that will apply from 2026. Do black-box AI systems have a free pass until then?

No, because apart from the AI Act, other legal norms already demand transparency. Corporate law has already been mentioned. In data protection law, the General Data Protection Regulation (GDPR) mandates transparency in automated decision-making. Similarly, the Consumer Credit Directive allows consumers to demand clear and understandable explanations, including the logic of an automated creditworthiness assessment, if a lender uses automated processes. The new Product Liability Directive, which entered into force in December 2024, also plays an important role. Manufacturers of AI systems can already be held liable for damage caused by their products, even if the system’s inner workings are opaque. This liability can push companies to develop models that are understandable at least internally, so that they have a defense in the event of a dispute. Furthermore, a requirement of explainability may arise from sector-specific product safety standards, especially in sensitive areas such as medicine or mechanical engineering.

3. Some researchers see a “right to explanation” already embedded in the GDPR. What is your stance on this, and what would be the practical implications of such a right?

For a time, such a right was denied, and individuals affected by automated decisions were granted only an explanation of the general principles of the decision-making. This narrow reading of the GDPR is very disadvantageous for those affected. Imagine your loan application is rejected after an automated credit check. You would want to know the reasons for the rejection, not merely how the automated credit check works in general. If the reasons are not provided, it is difficult to contest the decision, even though Article 22 of the GDPR explicitly grants such a right to contest. There is therefore a growing consensus that the GDPR gives affected individuals a right to an explanation of specific AI decisions. This was recently expressed by the Advocate General in the SCHUFA case before the European Court of Justice. In practical terms, a right to explanation means that ex-post explanations of individual AI decisions are needed. For black-box AI models such as neural networks, such explanations can only be generated with the help of XAI. Recognizing a right to explanation will therefore drive the development and deployment of XAI methods. This may entail higher costs and additional development effort, but in the long run it would enhance the transparency and acceptance of AI.
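To make the idea of an ex-post explanation of one specific decision concrete, here is a small, hypothetical sketch. In practice, established XAI methods such as SHAP or LIME would typically be used; the snippet below uses only scikit-learn and a crude occlusion-style attribution on synthetic data, so all feature names, values, and the choice of reference point are illustrative assumptions.

```python
# Minimal, hypothetical sketch (not from the interview) of an ex-post explanation
# for one specific automated decision. Established XAI methods such as SHAP or
# LIME would be used in practice; this occlusion-style attribution only
# illustrates the idea. Feature names, data, and reference point are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["income", "existing_debt", "payment_history", "employment_years"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=4,
                           n_redundant=0, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X, y)  # opaque scoring model

applicant = X[0]                      # the one application to be explained
baseline = X.mean(axis=0)             # an "average applicant" as reference point
p_applicant = model.predict_proba([applicant])[0, 1]
print(f"Approval probability for this applicant: {p_applicant:.2f}")

for i, name in enumerate(feature_names):
    counterfactual = applicant.copy()
    counterfactual[i] = baseline[i]   # replace one feature value with the average
    p_counterfactual = model.predict_proba([counterfactual])[0, 1]
    # Positive contribution: this applicant's value pushed the score towards approval.
    print(f"{name:>18}: contribution {p_applicant - p_counterfactual:+.2f}")
```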

4. If existing law already poses such significant challenges, how relevant is it for companies to prepare for the EU AI Act now?

With regard to explainability, implementing current law goes hand in hand with preparing for the AI Act. The latter also provides, in Article 86, a “right to explanation” for certain high-risk AI systems (such as those used in hiring, creditworthiness assessments, or the pricing of life and health insurance). It allows affected individuals to obtain from the operator of the AI system a clear and meaningful explanation of the system’s role in the decision-making procedure and of the main elements of the decision. This right applies only insofar as other EU legislation does not already provide a corresponding right. Moreover, the AI Act imposes numerous requirements on companies, particularly on providers of high-risk AI systems. Companies should therefore analyze now which of their systems might be classified as high-risk. Upcoming tasks include implementing quality management systems and conformity assessment procedures, which require significant technical and organizational adjustments and should be started early. Companies that align their processes with the upcoming rules now can secure long-term competitive advantages.

5. After the EU AI Act, another groundbreaking law is on the way: the AI Liability Directive. What can you tell us about it now?

The planned AI Liability Directive is closely linked to the AI Act: it addresses liability for damage caused by AI systems, and violations of the AI Act can also trigger liability under the directive. The AI Liability Directive is also intended to complement the updated Product Liability Directive, which focuses on protecting “traditional” legal interests such as life, body, health, and property. The planned AI Liability Directive applies even when none of these interests is affected, for example in cases of pure financial loss. A particularly innovative element is the right to the disclosure of evidence: if a high-risk AI system is suspected of having caused damage, the potentially injured party can ask the court to order the disclosure of relevant evidence about the AI system, and only needs to present a plausible claim for compensation. In addition, the directive eases the injured party’s burden of proof by establishing, under certain conditions, a rebuttable presumption of a causal link between the defendant’s fault and the AI system’s “misconduct.” In short, the AI Liability Directive adapts traditional liability law to the challenges of the AI era.

statworx comment

Explainable Artificial Intelligence (XAI) is crucial for trust in and transparency of AI systems. Current and upcoming regulations, such as the EU AI Act, underline the need to engage deeply with XAI, particularly in areas like data protection, corporate law, and product liability.

From our perspective, implementing XAI methods offers companies not only challenges but also real market opportunities. By integrating XAI, companies can meet regulatory requirements, strengthen customer trust, and gain entirely new insights into their data and AI models that would not be possible without XAI.

Therefore, solutions like the Black Box Decoder, which make the decisions of AI models understandable and transparent, offer a strategic advantage that promotes the acceptance and success of AI technologies. Companies that engage with these topics early are better prepared for future challenges and can adhere to both legal and ethical standards.

Marcel Plaschke
Head of Strategy, Sales & Marketing
schedule a consultation

More interviews

  • Data Culture
  • Change Management
  • Strategy
How are Data Culture and Change Management connected?
Cathrin Gerlach
3.1.2025
Read more
  • GenAI
Melodies in transition: What influence does Generative AI have on Music Creators?
Jesse Josefsson
3.1.2025
Read more
  • Ethical AI
  • Human-centered AI
Between regulation and innovation: Why we need ethical AI
3.1.2025
Read more
  • Explainable AI
The future of AI: Explainable AI will become the norm
Barry Scannell
3.1.2025
Read more
  • AI Act
The AI Act as an opportunity: Proper regulation can be an asset
Jakob Plesner Mathiasen
30.10.2024
Read more