
The future of AI: Explainable AI will become the norm

  • Expert Barry Scannell
  • Date 28. February 2024
  • Topic Artificial Intelligence, Human-centered AI, Strategy
  • Format Interview
  • Category Technology

Expert Interview

Our Content Hub is expanding and evolving! In order to provide our valued readers with even more high-quality expertise and relevant content, we regularly conduct interviews with experts on fascinating topics related to artificial intelligence, data science, machine learning, and related fields. If you consider yourself an expert in these areas, please don’t hesitate to reach out to us. We are constantly exploring new opportunities for collaborations and look forward to uncovering shared synergies that can lead to engaging expert interviews. Feel free to contact us at: blog@statworx.com

 

About William Fry  

William Fry is one of Ireland’s oldest and largest law firms. With offices in Dublin, Cork, London, New York and San Francisco, and a global law firm network, it is a leading international corporate law firm, with over 320 legal and tax professionals and more than 470 staff. The firm is ranked by international directories as being a leader in the core practice areas of Corporate and M&A, Banking & Finance, Litigation & Investigations, Asset Management & Investment Funds, Real Estate, Insurance, Competition & Regulation, Tax, Projects & Construction, Employment & Benefits and Technology.

 
 
1. Please introduce yourself.

My name is Barry Scannell and I’m a senior lawyer in the top-ranked Irish law firm William Fry, where I specialize in AI law and the legal implications of AI technologies. I’ve recently been appointed by the Irish Government to Ireland’s AI Advisory Council.

The AI Advisory Council to the Irish Government
The AI Advisory Council to the government of the Republic of Ireland will provide independent expert advice on artificial intelligence policy, with a specific focus on building public trust and promoting the development of trustworthy, person-centred AI. Its first role will be providing expert guidance, insights, and recommendations in response to specific requests from Government. Its second role will be developing and delivering its own workplan of advice to Government on issues in artificial intelligence policy, providing insights on trends, opportunities, and challenges.

2. Please give us an overview of AI regulations around the globe. Who will be the first to follow the EU?

It’s difficult to say who will be the first. What is becoming clear from global attempts at AI regulation is that the common themes are transparency, human oversight, data governance, and risk management. These are the pillars of most attempts at AI regulation globally. While there is a bipartisan framework before the US legislature, it’s not clear if this will gain any traction – although it has striking similarities to the AI Act under these four pillars. The US seems to be making the most headway federally, via Biden’s Executive Order on AI, while individual states are introducing their own AI regulations. Canada’s AI legislation is also similar to the EU’s, and like the EU, Canada seems to take more of a fundamental rights approach. I think the Brussels Effect is still going strong.

3. What degree of transparency should sound AI regulation require?

I think it depends on the level of risk involved, and this is basically what’s captured by the AI Act. Generally speaking, the AI Act captures AI systems that make decisions or take actions which could not only have big impacts on people’s lives but also have harmful impacts on people’s lives. I think for such systems, we need to avoid the “black box” trap of AI and have a high degree of transparency so that meaningful human oversight can be applied if necessary.

4. What are the strengths and weaknesses of the EU AI Act from a global perspective?

I think the biggest strength of the AI Act is that companies have a structure and a framework to aim for. You’ll see in some jurisdictions, such as the UK, where a light touch is being taken from a regulatory perspective, that there is potential for disparities in the levels of responsible AI. The biggest weakness of the AI Act is trying to regulate a technology that is advancing so rapidly. When the EU started preparing the AI Act, generative pretrained transformers (the GPTs that have caused the latest explosion in AI) didn’t even exist; the underlying transformer architecture was first described in a 2017 research paper. Keeping pace with technology is likely to be the biggest challenge.

5. How do you see Explainable AI (XAI) contributing to the compliant use of AI, according to the EU AI Act?

I think that Explainable AI is so central to the AI Act, and other regulations internationally, that its time as a separate “thing” is coming to an end. I don’t think we will have “XAI” in three years because all AI will be XAI. XAI obligations already exist under the GDPR (ed. note: General Data Protection Regulation) regarding automated decision-making. The transparency requirements for high-risk AI systems mean that XAI will become the norm.
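To make the point about transparency concrete, here is a minimal sketch of what a machine-readable explanation can look like: leave-one-out attributions for a toy linear scoring model. The feature names, weights, and input are invented for illustration and have no connection to any real system or to the AI Act's legal requirements.

```python
# Minimal sketch of a leave-one-out explanation for a toy linear
# scoring model. All feature names, weights, and inputs below are
# hypothetical; production XAI tooling is considerably more involved.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score(applicant: dict) -> float:
    """Weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * v for f, v in applicant.items())

def explain(applicant: dict) -> dict:
    """Per-feature attribution: how much the score changes when the
    feature is zeroed out (leave-one-out contribution)."""
    base = score(applicant)
    return {f: base - score({**applicant, f: 0.0}) for f in applicant}

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
print(round(score(applicant), 2))           # prints 1.9
for feature, contribution in explain(applicant).items():
    print(feature, round(contribution, 2))  # e.g. debt -1.6
```

For a linear model, each leave-one-out attribution is simply the feature's weight times its value; for non-linear models, dedicated methods such as SHAP or LIME compute comparable attributions.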

statworx Commentary

The AI Act

By introducing the AI Act, the EU is creating the first comprehensive framework for trustworthy AI. This legislation is intended to ensure compliance with ethical principles and human rights. The AI Act will disrupt the entire AI landscape by defining numerous requirements for AI systems.

The European trilogue (European Commission, Council of the European Union, and European Parliament) reached a political agreement on the wording of the AI Act in December 2023. At the beginning of February 2024, the representatives of the member states (European Council) agreed on the AI Act. The Act must now be adopted by the European Parliament and the Council of the European Union (≠ European Council) before it can become EU law.

With this groundbreaking step, the EU is setting clear standards for responsible AI made in Europe. For companies, the transition phase now begins, during which adjustments can be made to the new regulations.

A key element here is the risk class in which an AI system is classified. This determines the requirements that companies must meet. The higher the risk of harming people and fundamental rights, the more comprehensive the legal requirements that now need to be implemented.

High Risk, High Requirements for AI systems

The AI Act pursues a risk-based approach. It defines four risk classes – from minimal to unacceptable risk. The higher the risk, the higher the requirements. Once the AI Act is enacted, affected companies will have 24 months to comply with the requirements for each AI system.
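The risk-based triage described above can be sketched as a simple decision function. The screening questions, their ordering, and the mapping to classes below are simplifying assumptions for illustration only, not the legal test defined by the AI Act.

```python
# Illustrative sketch of a risk-based triage in the spirit of the
# AI Act's four risk classes. The screening questions are invented
# and far simpler than the actual legal assessment.
from enum import Enum

class RiskClass(Enum):
    MINIMAL = 1       # e.g. spam filters: no specific obligations
    LIMITED = 2       # e.g. chatbots: transparency duties
    HIGH = 3          # e.g. credit scoring: extensive requirements
    UNACCEPTABLE = 4  # e.g. social scoring: prohibited

def classify(system: dict) -> RiskClass:
    """Map hypothetical yes/no screening answers to a risk class,
    checking from the most to the least severe class."""
    if system.get("social_scoring") or system.get("manipulative"):
        return RiskClass.UNACCEPTABLE
    if system.get("safety_component") or system.get("affects_fundamental_rights"):
        return RiskClass.HIGH
    if system.get("interacts_with_humans"):
        return RiskClass.LIMITED
    return RiskClass.MINIMAL

chatbot = {"interacts_with_humans": True}
print(classify(chatbot).name)  # prints LIMITED
```

Checking the most severe class first mirrors the questionnaire logic of risk screenings: one disqualifying answer is enough to place a system in a higher class, regardless of the remaining answers.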

High penalties for non-compliance

Non-compliance can result in penalties of up to 35 million euros or 7% of a company’s global annual revenue. For this reason, we recommend taking action at an early stage.

You can already use our free AI Quick Check to make an initial assessment today. Determine the risk class of your AI systems based on a few questions and prepare yourself for the future!

If you would like to discuss the challenges of the AI Act in more depth, statworx offers a non-binding initial consultation.

 

Barry Scannell

Senior Lawyer | William Fry

Learn more!

As one of the leading companies in the field of data science, machine learning, and AI, we guide you towards a data-driven future. Learn more about statworx and our motivation.
About us