
Between regulation and innovation: Why we need ethical AI

  • Expert Philippe Coution
  • Date September 7, 2023
  • Topic Artificial Intelligence, Data Culture, Human-centered AI
  • Format Interview
  • Category Management

Expert Interview

Our Content Hub is expanding and evolving! In order to provide our valued readers with even more high-quality expertise and relevant content, we regularly conduct interviews with experts on fascinating topics related to artificial intelligence, data science, machine learning, and related fields. If you consider yourself an expert in these areas, please don’t hesitate to reach out to us. We are constantly exploring new opportunities for collaborations and look forward to uncovering shared synergies that can lead to engaging expert interviews. Feel free to contact us at: blog@statworx.com

 

About TÜV SÜD  

TÜV SÜD, founded in 1866, has been pursuing the ambitious vision of protecting people, property and the environment from technical risks for over 150 years. Today, sustainability and safety continue to form the basis for all services. After all, TÜV SÜD’s goal is to inspire confidence in technologies and enable progress by managing technical risks and facilitating change.

 
 
1. Please introduce yourself briefly.

My name is Philippe Coution, and I contributed actively to TÜV SÜD’s framework for AI Quality. I am not what you would classically call a tech nerd, but I have been interested in the digital world for a long time. I’ve only been involved with AI for a few years, but since then, I’ve invested all my energy into making it a powerful yet ethical, safe tool. At the current pace of change, that’s a full-time job! At TÜV SÜD, I work at the headquarters in Munich in the Digital Service department, which deals with our digital transformation and that of our services. For example, we support the development of AI standards and then assess how our customers comply with those new requirements and adapt to the new AI era. It’s an exciting job because I have to constantly switch between perspectives. I don’t just spend my free time reading AI publications, though. I like to ride my mountain bike in the Alps. Cycling and sweating are great for clearing my head and approaching AI challenges with a fresh mind.

2. Why is integrating ethical principles into AI systems important?

Let’s think the other way around. Why haven’t we considered ethics important when human-designed systems have been doing tasks? Mainly because these systems either lacked autonomy – for example, our cars still have drivers – or because they could not produce controversial output; although one could argue that, for example, cross-selling recommendations to a child on a streaming platform already crosses certain boundaries. Ethical concerns arise from AI’s increasing abilities to make recommendations to us, to make decisions, and to generate content. But their autonomy alone cannot be the problem. Even when systems act very autonomously, such as robots in a factory, they can be 100 percent predictable and used without hesitation in a given environment. What could possibly happen with a robot that has not been planned and programmed beforehand?

So it is the combination of autonomy and lack of predictability that is problematic. And our concern goes beyond even that combination: it is the sheer scale of deployment that makes it so critical. Indeed, ethical concerns first arose when we started to use AI everywhere – invisibly in the background and with harmful or unclear effects on children, for example, through recommendation engines and educational apps, or on legally protected groups of people, for example, in HR applications and facial recognition. When we think about how many people are affected by the results of such AI systems, it becomes clear how important the design and input of the systems are in terms of ethical criteria: Are the data used free of unwanted bias? Are they representative of the target groups? Are the applications designed to be ethical? Are they used in their intended context? The question of ethics cannot be applied to an AI system as merely a final check. It must be a fundamental requirement of an AI system, asked before it is developed and then enforced all along. It is not an after-the-fact mitigation but a requirement for deliberate use.

3. How can companies ensure that they are developing and deploying ethical and legally compliant AI systems?

In the same way that they ensure, for example, legally compliant vehicles on the roads or legally compliant medical devices on the market: via adapted quality management systems. These are embedded in product development, production, and monitoring – even post-market. But ethics goes beyond mere regulation. Ethical principles should be reflected in corporate values. This is where external ethical reviewers specializing in AI can provide significant support.
The first step, of course, is awareness at the executive level of these issues and a willingness to integrate ethical principles at the highest possible level beyond the legal requirements.
Once ethical principles are integrated into corporate values and strategy, the next step is to empower the technology and quality management teams. The best practice here is training by experts who explain the legal framework but, more importantly, are familiar with its practical application in AI standards, such as the IEEE CertifAIed© on ethics in AI. It is not about abstract values and standards. It is about the practical implementation of projects in a specific context. Here, the implementation of an AI quality management system, governance, and accountability are critical to ensure ethical, compliant AI systems. Not just in the short term but in the long term – from product definition to post-market monitoring.

4. What does AI quality mean and what is important for companies to ensure?

AI quality refers to the degree to which an AI system meets certain requirements throughout its lifecycle. Ethics is one of those requirements, along with safety, security, performance, compliance, and sustainability. Why is it important to look at the whole lifecycle? Because quality is not a state. Quality is a continuous process. Therefore, companies should implement AI quality and risk management systems to ensure that all ethical concerns and risks are addressed throughout the lifecycle of the system. Only AI quality management will ensure trustworthy AI in the long run. And more importantly: It is the only way to prove, test and improve the quality of AI systems. To put it in a nutshell: AI quality is imperative to ensure that systems adhere to regulations and are proven to be ethical. This allows companies to realize the full potential of AI while managing the associated risks.  

5. What’s your take on the concerns that the AI Act is too regulatory and stifles innovation? How is it possible to strike an appropriate balance between regulation and innovation?

Obviously, in the tension between innovation and regulation, there are many ways to read the draft AI Act. But what we should keep in mind is: Innovation is not a goal. A better product or a better society is the goal. Innovation is just one of the means to achieve those goals. Regulation is not the right conceptual approach either. Rather, it is about risk management, because that is the backbone of any sustainable development. For example, would we want nuclear power plants without risk management? No. And a smartphone game? More likely. A smartphone game for kids aged 0 to 3? Very problematic again. That’s what this is all about – the right level of control for the specific case. Just think of the failures of self-regulation in the past, for example, with Cambridge Analytica. That we cannot leave regulation to profit-oriented Silicon Valley companies should be beyond dispute. They have failed too many times already.

Nevertheless, to answer the question about the balance between regulation and innovation: I support in principle the regulatory approach of the AI Act, even if the draft is far from perfect, and I have concerns about the regulation of foundation models. At the same time, I think that the chapters on fostering innovation are not ambitious enough. If we want to make the EU a strong innovation region for AI, we need to do more. I am thinking, for example, of support for startups, e.g., through protected trial periods, and special support for SMEs, for example, with transformation programs. We also need infrastructure programs and a clear strategy for critical components such as chips, GPUs, and the like, in order to take an AI leadership role and benefit from the Brussels Effect (editor’s note: the unilateral globalization of regulation that results when the European Union’s laws spread de facto beyond its borders through market mechanisms).

statworx Comment

AI-skeptical discourse always revolves around one question: How do we ensure that we can trust the systems? Philippe Coution, AI quality expert at TÜV SÜD, says ethical principles must be part of the development of an AI solution from the very beginning; they must not be brought in after the fact. That’s why, for him, ethical AI is a leadership issue in companies. It’s about establishing AI quality management – linked to corporate values – for long-term trustworthy AI systems whose quality can be tested and optimized.

At statworx, we are committed to AI quality according to ethical standards. In view of the rapid technological progress and the great danger of manipulation, AI principles are, in our view, a necessity. Only in this way can we prevent, from the outset, artificial intelligence from manipulating elections, discriminating against groups, or disregarding fundamental rights. At the same time, AI principles give users certainty that newly developed, approved systems actually support and empower people.

The employees of statworx have therefore jointly defined AI principles: people-centric, transparent, ecological, respectful, fair, collaborative, and inclusive is how AI should be. The principles serve as a compass for our day-to-day work. And this is how we ensure that the systems we develop and implement for our clients not only comply with but exceed upcoming legal regulations. Our AI systems are ethical while realizing the full potential of AI. To paraphrase Philippe: We manage the risks and are ambitious in exploiting AI’s potential.

If you want to prepare your company in time for the AI Act, we recommend our free AI Act Quick Check for an assessment of the risk classes of your AI systems. After all, once the law is passed, companies will have only two years to make their AI systems and related processes compliant. This includes quality standards for the data used, technical documentation, and risk management. Given the AI Act’s already foreseeable design as a risk-based regulation, companies that prepare now will have tremendous advantages when the time comes. Simply contact us for a non-binding initial consultation.

 

Philippe Coution

Business Development Lead AI Quality | TÜV SÜD
