“Building trust through human-centric AI”: this is the slogan under which the European Commission presented its proposal for regulating Artificial Intelligence (the AI Regulation) last week. With this historic step, Europe becomes the first continent to regulate AI and the handling of data uniformly. Through this groundbreaking attempt at regulation, Europe aims to set standards for the use of AI and data-powered technology, even beyond European borders. That is the right step: AI is a catalyst of the digital transformation, with significant implications for the economy, society, and the environment, so clear rules for the use of this technology are needed. They will allow Europe to position itself as a progressive market that is ready for the digital age. In its current form, however, the proposal still raises questions about its practical implementation. Europe cannot afford to risk its digital competitiveness in the race with the United States and China for AI leadership.
Building Trust Through Transparency
Two Key Approaches to Building Trust
To build trust in AI products, the proposal for AI regulation relies on two key approaches: monitoring AI risks and cultivating an “ecosystem of AI excellence”. Specifically, the proposal bans the use of AI for manipulative or discriminatory purposes or to assess behavior through a “social scoring system”. Use cases that do not fall into these categories must still be screened for hazards and placed on a vaguely defined risk scale. High-risk applications face special requirements, with mandatory compliance checks both before and after they are put into operation.
It is crucial that AI applications will be assessed on a case-by-case basis, rather than under the previously considered sector-centric regulation. In last year’s white paper on AI and trust, the European Commission proposed labeling all applications in business sectors such as healthcare or transportation as “high-risk”. This blanket classification based on defined industries, regardless of the actual use cases, would have been obstructive and would have meant structural disadvantages for entire European industries. The case-by-case assessment allows for the agile and innovative development of AI in all sectors and subjects all industries to the same standards for risky AI applications.
A Clear Definition of an AI Application’s Risks Is Missing
Despite this new approach, the proposal for AI regulation lacks a concise process for assessing the risks of new applications. Since developers themselves are responsible for evaluating their applications, a clearly defined scale for risk assessment is essential. Articles 6 and 7 circumscribe various risks and give examples of “high-risk applications”, but a transparent process for assessing new AI applications is yet to be defined. Startups and smaller companies are heavily represented among AI developers. These companies, in particular, rely on clearly defined standards and processes to avoid being left behind by larger competitors with greater resources. This requires practical guidelines for risk assessment.
If a use case is classified as a “high-risk application”, then various requirements on data governance and risk management must be met before the product can be launched. For example, training data must be tested for bias and inequalities. Also, the model architecture and training parameters must be documented. After deployment, human oversight of the decisions made by the model must be ensured.
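What such a data-governance check might look like in practice can be sketched with a simple disparity test on label rates between groups in the training data. This is purely illustrative: the function, the group and label fields, and the tolerance threshold are assumptions for the example, not requirements named in the proposal.

```python
from collections import Counter

def check_group_balance(records, group_key, label_key, tolerance=0.1):
    """Flag large gaps in positive-label rates between groups,
    a simple proxy check for imbalance in training data."""
    positives = Counter()
    totals = Counter()
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += int(row[label_key])
    # Positive-label rate per group and the largest gap between groups
    rates = {g: positives[g] / totals[g] for g in totals}
    spread = max(rates.values()) - min(rates.values())
    return rates, spread <= tolerance

# Toy training set: group A receives positive labels twice as often as B
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]
rates, ok = check_group_balance(data, "group", "label", tolerance=0.2)
# The gap of about 0.33 exceeds the tolerance, so the check fails
```

Real compliance checks would of course go far beyond a single rate comparison, but even this sketch shows why clear standards matter: whether such a check passes depends entirely on which threshold the regulation ends up demanding.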
Accountability for AI products is a noble and important goal. However, the practical implementation of these requirements once more remains questionable. Many modern AI systems no longer use the traditional approach of static training and testing data: Reinforcement Learning, for example, relies on exploratory training through feedback rather than a testable data set. And even though advances in Explainable AI are steadily shedding light on the decision-making processes of black-box models, the complex architectures of many modern neural networks make individual decisions almost impossible to trace.
The proposal also announces requirements for the accuracy of trained AI products. This poses a particular challenge for developers because no AI system has perfect accuracy. Nor is perfect accuracy ever the objective; instead, models are typically tuned so that misclassifications have as little impact as possible in the individual use case. Therefore, it is imperative that performance requirements for predictions and classifications be determined on a case-by-case basis and that universal performance requirements be avoided.
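One standard way practitioners balance misclassification impact, shown here as an illustrative sketch rather than anything prescribed by the proposal, is to derive a classifier’s decision threshold from the relative costs of the two error types: predicting positive is worthwhile whenever the expected cost of a false alarm is lower than that of a miss.

```python
def cost_optimal_threshold(cost_fp, cost_fn):
    """Probability threshold minimizing expected misclassification cost.

    Predicting positive costs (1 - p) * cost_fp in expectation,
    predicting negative costs p * cost_fn, so predicting positive
    pays off when p >= cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

# Example: in a screening setting where a missed case (false negative)
# is assumed to be nine times as costly as a false alarm, the model
# should already flag cases at a predicted probability of 0.1
threshold = cost_optimal_threshold(cost_fp=1.0, cost_fn=9.0)
```

The same model would need a very different threshold in a use case where false alarms dominate the cost, which is exactly why a universal accuracy requirement across all applications would miss the point.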
Enabling AI Excellence
Europe is Falling Behind
With these requirements, the proposal for AI regulation seeks to inspire confidence in AI technology through transparency and accountability. This is a first step in the right direction toward “AI excellence.” In addition to regulation, however, Europe as a location for Artificial Intelligence must also become more attractive to developers and investors.
According to a recently published study by the Center for Data Innovation, Europe is already falling behind both the United States and China in the battle for global leadership in AI. China has now surpassed Europe in the number of published studies on Artificial Intelligence and has taken the global lead. European AI companies also attract significantly less investment, spend less on research and development, and are less likely to be acquired than their U.S. counterparts.
A Step in the Right Direction: Supporting Research and Innovation
The European Commission recognizes that more support for AI development is needed for excellence on the European market and promises regulatory sandboxes (legal leeway to develop and test innovative AI products) as well as co-funding for AI research and testing sites. This is needed to make startups and smaller companies more competitive and to foster European innovation and competition.
These are necessary steps to lift Europe onto the path to AI excellence, but they are far from sufficient. AI developers need easier access to markets outside the EU, which requires facilitating the flow of data across national borders. Given how interconnected digital products and services have become, opportunities to expand into the U.S. and collaborate with Silicon Valley are essential for the digital industry.
What is entirely missing from the proposal for AI regulation is education about AI and its potential and risks outside of expert circles. As artificial intelligence increasingly permeates all areas of everyday life, education will become more and more critical. To build trust in new technologies, they must first be understood. Educating non-specialists about both the potential and limitations of AI is an essential step in demystifying Artificial Intelligence and strengthening trust in this technology.
Potential Not Yet Fully Tapped
With this proposal, the European Commission recognizes that AI is leading the way for the future of the European market. Guidelines for a technology of this scope are important – as is the promotion of innovation. For these strategies to bear fruit, their practical implementation must also be feasible for startups and smaller companies. The potential for AI excellence is abundant in Europe. With clear rules and incentives, it can also be realized.