We are at the beginning of 2024, a time of fundamental change and exciting progress in the world of artificial intelligence. The next few months are seen as a critical milestone in the evolution of AI as it transforms from a promising future technology to a permanent reality in the business and everyday lives of millions. Together with the AI Hub Frankfurt, the central AI network in the Rhine-Main region, we are therefore presenting our trend forecast for 2024, the AI Trends Report 2024.

The report identifies twelve dynamic AI trends that are unfolding in three key areas: Culture and Development, Data and Technology, and Transparency and Control. These trends paint a picture of the rapid changes in the AI landscape and highlight the impact on companies and society.

Our analysis is based on extensive research, industry-specific expertise and input from experts. We highlight each trend to provide a forward-looking insight into AI and help companies prepare for future challenges and opportunities. However, we emphasize that trend forecasts are always speculative in nature and some of our predictions are deliberately bold.

Directly to the AI Trends Report 2024!

What is a trend?

A trend is different from both a short-lived fashion phenomenon and media hype. It is a phenomenon of change with a “tipping point” at which a small change in a niche can cause a major upheaval in the mainstream. Trends initiate new business models, consumer behavior and forms of work and thus represent a fundamental change to the status quo. It is crucial for companies to mobilize the right knowledge and resources before the tipping point in order to benefit from a trend.

12 AI trends that will shape 2024

In the AI Trends Report 2024, we identify groundbreaking developments in the field of artificial intelligence. Here are the short versions of the twelve trends, each with a selected quote from our experts.

Part 1: Culture and development

From the 4-day week to omnimodality and AGI: 2024 promises great progress for the world of work, for media production and for the possibilities of AI as a whole.

Thesis I: AI expertise within the company
Companies that deeply embed AI expertise in their corporate culture and build interdisciplinary teams with tech and industry knowledge will secure a competitive advantage. Centralized AI teams and a strong data culture are key to success.

“Data culture can’t be bought or dictated. You need to win the head, the heart and the herd. We want our employees to consciously create, use and share data and give them access to data, analytics and AI together with the knowledge and the mindset to run the business on data.”

Stefanie Babka, Global Head of Data Culture, Merck

 

Thesis II: 4-day working week thanks to AI
Thanks to AI automation in standard software and company processes, the 4-day working week will become a reality for the first German companies in 2024. AI tools such as Microsoft’s Copilot increase productivity and make it possible to reduce working hours without compromising growth.

“GenAI will continue to drive automation in many areas. This will be the new benchmark for standard processes in all sectors. While this may have a positive impact on reducing working hours, we need to ensure that GenAI is used responsibly, especially in sensitive and customer-facing areas.”

Dr. Jean Enno Charton, Director Digital Ethics & Bioethics, Merck

 

Thesis III: AGI through omnimodal models
The development of omnimodal AI models that mimic human senses brings the vision of general artificial intelligence (AGI) closer. These models process a variety of inputs and extend human capabilities.

“Multimodal models trained on more than just text have shown that they are better able to draw conclusions and understand the world. We are excited to see what omnimodal models will achieve.”

Dr. Ingo Marquart, NLP Subject Matter Lead, statworx

 

Thesis IV: AI revolution in media production
Generative AI (GenAI) is transforming the media landscape and enabling new forms of creativity, but still falls short of truly transformational creativity. AI tools are becoming increasingly important for creatives, but it is important to preserve uniqueness in the face of a global average taste.

“Those who integrate AI smartly will have a competitive advantage. There will be leaps in productivity in the areas of ideation, publishing and visuals. However, there will also be a lot of “low” and fake content (postings, messaging), so building trust will become even more important for brands. Social media tasks are shifting towards strategy, management and controlling.”

Nemo Tronnier, Founder & CEO, Social DNA

 

Part 2: Data and technology

In 2024, everything will revolve around data quality, open source models and access to processors. Providers of standard software such as Microsoft and SAP will benefit greatly because they occupy the interface to end users.

Thesis V: Challengers for NVIDIA
New players and technologies are preparing to shake up the GPU market and challenge NVIDIA’s position. Startups and established competitors such as AMD and Intel are looking to capitalize on the resource scarcity and long wait times that smaller players are currently experiencing and are focusing on innovation to break NVIDIA’s dominance.

“Contrary to popular belief, there isn’t really a shortage of AI accelerators if you count NVIDIA, Intel and AMD. The real problem is customer funding, as cloud providers are forced to offer available capacity with long-term contracts. This could change in 18 to 24 months when current deployments are sufficiently amortized. Until then, customers will have to plan for longer commitments.”

Norman Behrend, Chief Customer Officer, Genesis Cloud

 

Thesis VI: Data quality before data quantity
In AI development, the focus is shifting to the quality of the data. Instead of relying solely on quantity, the careful selection and preparation of training data and innovation in model architecture are becoming crucial. Smaller models with high-quality data can be superior to larger models in terms of performance.

“Data is not just one component of the AI landscape; having the right quality data is essential. Solving the ‘first-mile problem’ to ensure data quality and understanding the ‘last-mile problem’, i.e. involving employees in data and AI projects, are crucial for success.”

Walid Mehanna, Chief Data & AI Officer, Merck

 

Thesis VII: The year of the AI integrators
Integrators such as Microsoft, Databricks and Salesforce will be the winners as they bring AI tools to end users. The ability to seamlessly integrate into existing systems will be crucial for AI startups and providers. Companies that offer specialized services or groundbreaking innovations will secure lucrative niches.

“In 2024, AI integrators will show how they make AI accessible to end users. Their role is critical to the democratization of AI in the business world, enabling companies of all sizes to benefit from advanced AI. This development emphasizes the need for user-friendly and ethically responsible AI solutions.”

Marco Di Sazio, Head of Innovation, Bankhaus Metzler

 

Thesis VIII: The open source revolution
Open source AI models are competing with proprietary models such as OpenAI’s GPT and Google’s Gemini. With a community that fosters innovation and knowledge sharing, open source models offer more flexibility and transparency, making them particularly valuable for applications that require clear accountability and customization.

“Especially for SMEs, AI solutions are indispensable. Since a sufficient amount of data for a proprietary model is typically lacking, collaboration becomes crucial. However, the ability to adapt is essential in order to digitally advance your own business model.”

Prof. Dr. Christian Klein, Founder, UMYNO Solutions, Professor of Marketing & Digital Media, FOM University of Applied Sciences

 

Part 3: Transparency and control

The increased use of AI decision-making systems will spark an intensified debate on algorithm transparency and data protection in 2024 – in the search for accountability. The AI Act will become a locational advantage for Europe.

Thesis IX: AI transparency as a competitive advantage
European AI start-ups with a focus on transparency and explainability could become the big winners, as industries such as pharmaceuticals and finance already place high demands on the traceability of AI decisions. The AI Act promotes this development by demanding transparency and adaptability from AI systems, giving European AI solutions an edge in terms of trust.

“Transparency is becoming a key issue in the field of AI. This applies to the construction of AI models, the flow of data and the use of AI itself. It will have a significant impact on discussions about compliance, security and trust. The AI Act could even turn transparency and security into competitive advantages for European companies.”

Jakob Plesner, Attorney at Law, Gorrissen Federspiel

 

Thesis X: AI Act as a seal of quality
The AI Act positions Europe as a safe haven for investments in AI by setting ethical standards that strengthen trust in AI technologies. In view of the increase in deepfakes and the associated risks to society, the AI Act acts as a bulwark against abuse and promotes responsible growth in the AI industry.

“Companies facing technological change need a clear set of rules. By introducing a seal of approval for human-centered AI, the AI Act turns challenges into opportunities. The AI Act will become a blueprint internationally, giving EU companies a head start in responsible AI and making Europe a place for sustainable AI partnerships.”

Catharina Glugla, Head of Data, Cyber & Tech Germany, Allen & Overy LLP

 

Thesis XI: AI agents are revolutionizing consumption
Personal assistance bots that make purchases and select services will become an essential part of everyday life. Influencing their decisions will become a key element for companies to survive in the market. This will profoundly change search engine optimization and online marketing as bots become the new target groups.

“There will be several types of AI agents that act according to human intentions. For example, personal agents that represent an individual and service agents that represent an organization or institution. The interplay between them, such as personal-personal, personal-institutional and institutional-institutional, represents a new paradigm for economic activities and the distribution of value.”

Chi Wang, Principal Researcher, Microsoft Research

 

Thesis XII: Alignment of AI models
Aligning AI models with universal values and human intentions will be critical to avoid unethical outcomes and fully realize the potential of foundation models. Superalignment, where AI models work together to overcome complex challenges, is becoming increasingly important to drive the development of AI responsibly.

“Alignment is, at its core, an analytical problem that is about establishing transparency and control to gain user trust. These are the keys to effective deployment of AI solutions in companies, continuous evaluation and secure iteration based on the right metrics.”

Daniel Lüttgau, Head of AI Development, statworx

 

Concluding remarks

The AI Trends Report 2024 is more than an entertaining stocktake; it can be a useful tool for decision-makers and innovators. Our goal is to provide our readers with strategic advantages by discussing the impact of trends on different sectors and helping them set the course for the future.

This blog post offers only a brief insight into the comprehensive AI Trends Report 2024. We invite you to read the full report to dive deeper into the subject matter and benefit from the detailed analysis and forecasts.

To the AI Trends Report 2024!

Tarik Ashry

At the beginning of December, the central EU institutions reached a provisional agreement on a legislative proposal to regulate artificial intelligence in the so-called trilogue. The final legislative text with all the details is now being drafted. As soon as this has been drawn up and reviewed, the law can be officially adopted. We have compiled the current state of knowledge on the AI Act.

As part of the ordinary legislative procedure of the European Union, a trilogue is an informal interinstitutional negotiation between representatives of the European Parliament, the Council of the European Union and the European Commission. The aim of a trilogue is to reach a provisional agreement on a legislative proposal that is acceptable to both the Parliament and the Council, the co-legislators. The provisional agreement must then be adopted by each of these bodies in formal procedures.

Legislation with a global impact

A special feature of the upcoming law is the so-called market location principle: according to this, companies worldwide that offer or operate artificial intelligence on the European market or whose AI-generated output is used within the EU will be affected by the AI Act.

Artificial intelligence is defined as machine-based systems that can autonomously make predictions, recommendations or decisions and thus influence the physical and virtual environment. This applies, for example, to AI solutions that support the recruitment process, predictive maintenance solutions and chatbots such as ChatGPT. The legal requirements that different AI systems must fulfill vary greatly depending on their classification into risk classes.

The risk class determines the legal requirements

The EU’s risk-based approach comprises a total of four risk classes:

  • low,
  • limited,
  • high,
  • and unacceptable risk.

These classes reflect the extent to which artificial intelligence jeopardizes European values and fundamental rights. As the term “unacceptable” for a risk class already indicates, not all AI systems are permissible. AI systems that belong to the “unacceptable risk” category are prohibited by the AI Act. The following applies to the other three risk classes: the higher the risk, the more extensive and stricter the legal requirements for the AI system.

We explain below which AI systems fall into which risk class and which requirements are associated with them. Our assessments are based on the information contained in the “AI Mandates” document dated June 2023. At the time of publication, this document was the most recently published, comprehensive document on the AI Act.

Ban on social scoring and biometric remote identification

Some AI systems have a significant potential to violate human rights and fundamental principles, which is why they are categorized as “unacceptable risk”. These include:

  • Real-time remote biometric identification systems in publicly accessible spaces (exception: law enforcement agencies may use them to prosecute serious crimes, but only with judicial authorization);
  • Retrospective remote biometric identification systems (exception: law enforcement authorities may use them to prosecute serious crimes, but only with judicial authorization);
  • Biometric categorization systems that use sensitive characteristics such as gender, ethnicity or religion;
  • Predictive policing based on so-called profiling – i.e. profiling based on skin color, suspected religious affiliation and similarly sensitive characteristics – geographical location or previous criminal behavior;
  • Emotion recognition systems for law enforcement, border control, the workplace and educational institutions;
  • Arbitrary extraction of biometric data from social media or video surveillance footage to create facial recognition databases;
  • Social scoring leading to disadvantage in social contexts;
  • AI that exploits the vulnerabilities of a particular group of people or uses unconscious techniques that can lead to behaviors that cause physical or psychological harm.

These AI systems are to be banned from the European market under the AI Act. Companies whose AI systems could fall into this risk class should urgently address the upcoming requirements and explore options for action. This is because a key result of the trilogue is that these systems will be banned just six months after official adoption.

Numerous requirements for AI with risks to health, safety and fundamental rights

The “high risk” category includes all AI systems that are not explicitly prohibited but nevertheless pose a high risk to health, safety or fundamental rights. The following areas of application and use are explicitly mentioned:

  • Biometric and biometric-based systems that do not fall into the “unacceptable risk” risk class;
  • Management and operation of critical infrastructure;
  • Education and training;
  • Access and entitlement to basic private and public services and benefits;
  • Employment, human resource management and access to self-employment;
  • Law enforcement;
  • Migration, asylum and border control;
  • Administration of justice and democratic processes

These AI systems are subject to comprehensive legal requirements that must be implemented prior to commissioning and observed throughout the entire AI life cycle:

  • Assessment to evaluate the effects on fundamental and human rights
  • Quality and risk management
  • Data governance structures
  • Quality requirements for training, test and validation data
  • Technical documentation and record-keeping obligations
  • Fulfillment of transparency and provision obligations
  • Human supervision, robustness, security and accuracy
  • Declaration of conformity incl. CE marking obligation
  • Registration in an EU-wide database

AI systems that are used in one of the above-mentioned areas but do not pose a risk to health, safety, the environment or fundamental rights are not subject to the legal requirements. However, this must be proven by informing the competent national authority about the AI system. The authority then has three months to assess the risks of the AI system. The AI can be put into operation within these three months. However, if the examining authority classifies it as high-risk AI, high fines may be imposed.

A special regulation also applies to AI products and AI safety components of products whose conformity is already being tested by third parties on the basis of EU legislation. This is the case for AI in toys, for example. In order to avoid overregulation and additional burdens, these will not be directly affected by the AI Act.

AI with limited risk must comply with transparency obligations

AI systems that interact directly with humans fall into the “limited risk” category. This includes emotion recognition systems, biometric categorization systems and AI-generated or modified content that resembles real people, objects, places or events and could be mistaken for real (“deepfakes”). For these systems, the draft law provides for the obligation to inform consumers about the use of artificial intelligence. This should make it easier for consumers to actively decide for or against their use. A code of conduct is also recommended.

No legal requirements for AI with low risk

Many AI systems, such as predictive maintenance or spam filters, fall into the “low risk” category. Companies that only offer or use such AI solutions will hardly be affected by the AI Act. This is because there are currently no legal requirements for such applications. Only a code of conduct is recommended.

Generative AI such as ChatGPT is regulated separately

Generative AI models and foundation models with a wide range of possible applications were not included in the original draft of the AI Act. The regulatory treatment of such models has therefore been the subject of particularly intense debate since the launch of ChatGPT by OpenAI. According to the European Council’s press statement of December 9, these models are now to be regulated on the basis of their risk. In principle, all models must implement transparency requirements. Foundation models with a particular risk – so-called “high-impact foundation models” – will also have to meet additional requirements. How exactly the risk of AI models will be assessed is still open. Based on the latest document, the following possible requirements for high-impact foundation models can be anticipated:

  • Quality and risk management
  • Data governance structures
  • Technical documentation
  • Fulfillment of transparency and information obligations
  • Ensuring performance, interpretability, correctability, security, cybersecurity
  • Compliance with environmental standards
  • Cooperation with downstream providers
  • Registration in an EU-wide database

Companies should prepare for the AI Act now

Even though the AI Act has not yet been officially adopted and we do not yet know the details of the legal text, companies should prepare for the transition phase now. In this phase, AI systems and associated processes must be designed to comply with the law. The first step is to assess the risk class of each individual AI system. If you are not yet sure which risk classes your AI systems fall into, we recommend our free AI Act Quick Check. It will help you to assess the risk class.
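To make this first step more concrete, the following is a minimal, purely illustrative sketch of such a risk-class triage in Python. The category sets are shortened paraphrases of the lists above, not a complete legal mapping; any real classification requires a case-by-case legal assessment.

```python
# Illustrative first-pass AI Act risk triage. The category sets are
# abbreviated paraphrases of the lists discussed above -- they are NOT
# a complete or legally binding mapping.

PROHIBITED_PRACTICES = {
    "social_scoring",
    "realtime_remote_biometric_identification",
    "emotion_recognition_at_workplace",
    "predictive_policing_profiling",
}

HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "education_and_training",
    "employment_and_hr",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

LIMITED_RISK_FEATURES = {
    "chatbot",
    "deepfake_generation",
    "emotion_recognition",
    "biometric_categorization",
}


def triage_risk_class(practices: set, areas: set, features: set) -> str:
    """Return a first-pass AI Act risk class for one AI system."""
    if practices & PROHIBITED_PRACTICES:
        return "unacceptable"
    if areas & HIGH_RISK_AREAS:
        return "high"
    if features & LIMITED_RISK_FEATURES:
        return "limited"
    return "low"


# Example: a CV-screening tool used in recruiting touches the
# high-risk area "employment_and_hr".
print(triage_risk_class(set(), {"employment_and_hr"}, set()))  # -> "high"
```

Such a triage can only flag systems for closer legal review; it does not replace the assessment itself.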



Julia Rettig

Last December, the European Council published a dossier outlining the Council’s preliminary position on the draft law known as the AI Act. This new law is intended to regulate artificial intelligence (AI), making it a game-changer for the entire tech industry. In the following, we have compiled the most important information from the dossier, which is the current official source on the planned AI Act at the time of publication.

A legal framework for AI

Artificial intelligence has enormous potential to improve and ease all our lives. For example, AI algorithms already support early cancer detection or translate sign language in real time, thereby eliminating language barriers. But in addition to the positive effects, there are risks, as the recent deepfakes of Pope Francis or the Cambridge Analytica scandal illustrate.

The European Union (EU) is currently drafting legislation to regulate artificial intelligence and mitigate its risks. With it, the EU wants to protect consumers and ensure the ethically acceptable use of artificial intelligence. The so-called “AI Act” is still in the legislative process but is expected to be passed in 2023 – before the end of the current legislative period. Companies will then have two years to implement the legally binding requirements. Violations will be punished with fines of up to 6% of global annual turnover or €30,000,000 – whichever is higher. Companies should therefore start addressing the upcoming legal requirements now.

Legislation with global impact

The planned AI Act is based on the “market location principle,” meaning that not only European companies will be affected by the new law. All companies that offer AI systems on the European market or operate them for internal use within the EU are affected by the AI Act – with a few exceptions. Private use of AI remains untouched by the regulation so far.

Which AI systems are affected?

The definition of AI determines which systems will be affected by the AI Act. For this reason, the AI definition of the AI Act has been the subject of controversial debate in politics, business, and society for a considerable time. The initial definition was so broad that many “normal” software systems would also have been affected. The current proposal defines AI as any system developed through machine learning or logic- and knowledge-based approaches. It remains to be seen whether this definition will ultimately be adopted.

7 Principles for trustworthy AI

The “seven principles for trustworthy AI” are the most important basis of the AI Act. A group of experts from research, the digital economy, and associations developed them on behalf of the European Commission. They include not only technical aspects but also social and ethical factors that can be used to classify the trustworthiness of an AI system:

  1. Human action & oversight: decision-making should be supported without undermining human autonomy.
  2. Technical robustness & security: accuracy, reliability, and security must be preemptively ensured.
  3. Data privacy & data governance: handling of data must be legally secure and protected.
  4. Transparency: interaction with AI must be clearly communicated, as must its limitations and boundaries.
  5. Diversity, non-discrimination & fairness: Avoidance of unfair bias must be ensured throughout the entire AI lifecycle.
  6. Environmental & societal well-being: AI solutions should have as positive an impact on the environment and society as possible.
  7. Accountability: responsibilities for the development, use, and maintenance of AI systems must be defined.

Based on these principles, the AI Act’s risk-based approach was developed, allowing AI systems to be classified into one of four risk classes: low, limited, high, and unacceptable risk.

Four risk classes for trustworthy AI

The risk class of an AI system indicates the extent to which an AI system threatens the principles of trustworthy AI and which legal requirements the system must fulfill – provided the system is fundamentally permissible. This is because, in the future, not all AI systems will be allowed on the European market. For example, most “social scoring” techniques are assessed as “unacceptable” and will not be allowed by the new law.

For the other three risk classes, the rule of thumb is: the higher the risk of an AI system, the higher the legal requirements for it. Companies that offer or operate high-risk systems will have to meet the most requirements. For example, AI used to operate critical (digital) infrastructure or in medical devices is considered high-risk. To bring such systems to market, companies must observe high quality standards for the data used, set up risk management, affix a CE mark, and more.

AI systems in the “limited risk” class are subject to information and transparency obligations. Accordingly, companies must inform users of chatbots, emotion recognition systems, or deep fakes about the use of artificial intelligence. Predictive maintenance or spam filters are two examples of AI systems that fall into the lowest-risk category “low risk”. Companies that exclusively offer or use such AI solutions will hardly be affected by the upcoming AI Act. There are no legal requirements for these applications yet.

What companies can do for now

Even though the AI Act is still in the legislative process, companies should act now. The first step is to clarify how they will be affected by the AI Act. To help you do this, we have developed the AI Act Quick Check. With this free tool, AI systems can be quickly assigned to a risk class and requirements for the system can be derived. Finally, it can be used as a basis to estimate how extensive the implementation of the AI Act will be in your own company and to take initial measures. Of course, we are also happy to support you in evaluating and solving company-specific challenges related to the AI Act. Please do not hesitate to contact us!

      Julia Rettig

    A data culture is a key factor for effective data utilization

    With increasing digitization, the ability to use data effectively has become a crucial success factor for businesses. This way of thinking and acting is often referred to as data culture and plays an important role in transforming a company into a data-driven organization. By promoting a data culture, businesses can benefit from the flexibility of fact-based decision-making and fully leverage the potential of their data. Such a culture enables faster and demonstrably better decisions and embeds data-driven innovation within the company.

    Although the necessity and benefits of a data culture appear obvious, many companies still struggle to establish one. According to a study by New Vantage Partners, only 20% of companies have successfully developed a data culture so far. Furthermore, over 90% of the surveyed companies describe cultural change as the biggest hurdle on the way to becoming a data-driven company.

    A data culture fundamentally changes the way of working

    The causes of this challenge are diverse, and the necessary changes permeate almost all aspects of everyday work. In an effective data culture, every employee preferably uses data and data analysis for decision-making and gives priority to data and facts over individual “gut feeling.” This way of thinking promotes the continuous search for ways to use data to identify competitive advantages, open up new revenue streams, optimize processes, and make better predictions. By adopting a data culture, companies can fully leverage the potential of their data and drive innovation throughout the organization. This requires recognizing data as an important driving force for decision-making and innovation. This ideal places new demands on individual employee behavior, and it requires targeted support of this behavior through suitable conditions such as technical infrastructure and organizational processes.

    Three factors significantly shape the data culture

    To anchor a data culture sustainably within a company, three factors are crucial:

    1. Can | Skills
    2. Want | Attitude
    3. Do | Actions

    statworx uses these three factors to make the abstract concept of data culture tangible and to initiate the necessary changes in a targeted way. It is crucial to give equal attention to all factors and to consider them as holistically as possible. Initiatives for cultural development often limit themselves to the aspect of attitude and attempt to anchor specific values in isolation from the other influencing factors. These initiatives usually fail because the reality of the company opposes them with its processes, lived rituals, practices, and values, and thus actively prevents the culture from taking hold.

    We have summarized the three factors of data culture in a framework to provide an overview.

    1. Can: Skills form the basis for effective data utilization

    Skills and competencies are the foundation for effective data management. These include both the methodological and technical skills of employees, as well as the organization’s ability to make data usable.

    Ensuring data availability is particularly important for data usability. The “FAIR” standard – Findable, Accessible, Interoperable, Reusable – provides orientation on the essential properties of data, which can be supported by technologies, knowledge management, and appropriate governance.

    At the level of employee skills, the focus is on data literacy – the ability to understand and effectively use data to make informed decisions. This includes a basic understanding of data types and structures, as well as collection and analysis methods. Data literacy also involves the ability to ask the right questions, interpret data correctly, and identify patterns and trends. Relevant competencies can be developed through upskilling, targeted workforce planning, and the hiring of data experts.

    2. Want: A data culture can only flourish in a suitable value context.

    The second factor – Want – deals with the attitudes and intentions of employees and the organization as a whole towards the use of data. For this, both the beliefs and values of individuals and the community within the company must be addressed. Five aspects are of central importance for a data culture:

    • Collaboration & community instead of competition & selective partnerships
    • Transparency & sharing instead of information concealment & data hoarding
    • Pilot projects & experiments instead of theoretical assessments
    • Openness & willingness to learn instead of pettiness & rigid thinking
    • Data as a central decision-making basis instead of individual opinion & gut feeling

    Example: Company without a data culture

    On an individual level, an employee is convinced that exclusive knowledge and data can provide an advantage. The person has also learned within the organization that this behavior leads to strategic advantages or opportunities for personal positioning, and has been rewarded for such behavior by superiors in the past. The person is therefore convinced that it is absolutely sensible and advantageous to keep data for oneself or within one’s own team and not share it with other departments. The competitive thinking and tendency towards secrecy are firmly anchored as a value.

    In general, behavior like that described in the example restricts transparency throughout the entire organization and thereby slows it down. If not everyone has the same information, it is difficult to make the best possible decision for the entire company. Only through openness and collaboration can the true value of data in the company be realized. A data-driven company is based on a culture of collaboration, sharing, and learning. When people are encouraged to exchange their ideas and insights, better decisions can be made.

    Mere declarations of intent, such as mission statements and manifestos without tangible measures, will change little in the attitude of employees. The big challenge is to anchor the values sustainably and to make them the guiding principle of action for all employees, actively lived in everyday business. If this succeeds, the organization is well on its way to creating the data mindset required to bring an effective and successful data culture to life. Our transformation framework can help to establish these values and make them visible.

    We recommend building a data culture step by step, because even small experimental projects create added value, serve as positive examples, and build trust. The practical testing of an innovation, even if only in a limited scope, usually brings faster and better results than a theoretical assessment. Ultimately, it is about placing the value of data at the forefront.

    3. Do: Behavior creates the framework and is simultaneously the visible result of a data culture.

    The two factors mentioned above ultimately aim to ensure that employees and the organization as a whole adapt their behavior. Only an actively lived data culture can be successful. Therefore, everyday behavior – Do – plays a central role in establishing a data culture.

    The behavior of an organization can be examined and influenced primarily in two dimensions.

    These factors are:

    1. Activities and rituals
    2. Structural elements of the organization

    Activities and rituals

    Activities and rituals refer to the daily collaboration between employees of an organization. They manifest themselves in all forms of collaboration, from meeting procedures to handling feedback and risks to the annual Christmas party. It is crucial which patterns the collaboration follows and which behavior is rewarded or punished.

    Experience shows that teams that are already familiar with agile methods such as Scrum find the transformation to data-driven decisions easier. Teams that follow strong hierarchies and act risk-averse, on the other hand, have more difficulty overcoming this challenge. One reason for this is that agile ways of working reinforce collaboration between different roles and thus create the foundation for a productive work environment. In this context, the role of leadership, especially senior leadership, is crucial. The individuals at the C-level must necessarily lead by example from the beginning, introduce rituals and activities, and act together as the central driver of the transformation.

    Structural elements of the organization

    While activities and rituals emerge from teams and are not always predetermined, the second dimension reflects a stronger formalization. It refers to the structural elements of an organization. These provide the formal framework for decisions and thus shape behavior, as well as the emergence and anchoring of values and attitudes.

    Internal and external structural elements can be distinguished. Internal structural elements are mainly visible within the organization, such as roles, processes, hierarchy levels, or committees. By adapting and restructuring roles, the necessary skills can be reflected within the company. Furthermore, rewards and promotions can create an incentive for employees to adopt the desired behavior and pass it on to colleagues. The layout of the working environment is also part of the internal structure. Since the work in data-driven companies is based on close collaboration and requires individuals with different skills, it makes sense to create a space for open exchange that allows communication and collaboration.

    External structural elements reflect internal behavior outward. Internal structural elements thus influence the perception of the company from the outside. This is reflected, for example, in clear communication, the structure of the website, job advertisements, and marketing messages.

    Companies should design their external behavior to be in line with the values of the organization and thus support their own structures. In this way, a harmonious alignment between the internal and external positioning of the company can be achieved.

    First small steps can already create significant changes

    Our experience has shown that the coordinated design of skills, willingness, and action results in a sustainable data culture. It is now clear that a data culture cannot be created overnight, but it is also no longer possible to do without one. It has proven useful to divide this challenge into small steps. First pilot projects – such as establishing a data culture in just one team – and initiatives for particularly committed employees who want to drive change create trust in the cultural shift. Positive individual experiences serve as a helpful catalyst for the transformation of the entire organization.

    The philosopher and visionary R. Buckminster Fuller once said, “You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.” As technology advances, companies must be able to adapt in order to fully tap their potential. This allows decisions to be made faster and more accurately than ever before, drives innovation, and increasingly optimizes processes. The sustainable establishment of a data culture will give companies a competitive advantage in the market. In the future, data culture will be an essential part of any successful business strategy. Companies that do not embrace this will be left behind.

    However, the use of data is a major problem for many companies. Often, data quality and data availability stand in the way. Even though many companies already have data solutions, they are not optimally utilized, so much information remains unused and cannot be incorporated into decision-making.

     

    Sources:

    [1] https://hbr.org/2020/03/how-ceos-can-lead-a-data-driven-culture

    Image: AdobeStock 569760113

    Annsophie Huber

    In a fast-paced and data-driven world, the management of information and knowledge is essential. Businesses in particular rely on making knowledge accessible internally as quickly, clearly, and concisely as possible. Knowledge management is the process of creating, extracting, and utilizing knowledge to improve business performance. It includes methods that help organizations identify and extract knowledge, distribute it, and use it to better achieve their goals. However, this can be a complex and challenging task, especially in large companies.

    Natural Language Processing (NLP) promises to provide a solution. This technology has the potential to revolutionize the knowledge strategy of companies. NLP is a branch of artificial intelligence that deals with the interaction between computers and human language. By using NLP, companies can gain insights from large amounts of unstructured text data and convert them into actionable knowledge.

    In this blog post, we examine how NLP can improve knowledge management and how companies can use NLP to perform complex processes quickly, safely, and automatically. We explore the benefits of using NLP in knowledge management, the various NLP techniques used, and how companies can use NLP to achieve their goals better with artificial intelligence.

    Case Study for effective knowledge management

    Using the example of email correspondence in a construction project, we illustrate the application and added value of natural language processing. As specific examples, we use two emails that were exchanged during the construction project: an order confirmation for ordered items and a complaint about their quality.

    For a new building, the builder requested quotes from a variety of suppliers for products including thermal insulation, which was eventually ordered from one supplier. In an email, the supplier clarifies the ordered items, their properties and costs, and confirms delivery on a specified date. Later, the builder discovers that the quality of the delivered products does not meet the expected standards and informs the supplier in a written complaint, also via email. The text of these emails contains a wealth of information that can be extracted and processed using NLP methods to improve understanding. Due to the large number of different offers and interactions, manual processing is very time-consuming, and programmatic evaluation of the communication provides a remedy.

    Next, we introduce a knowledge management pipeline that checks these two emails step by step for their content and provides users with the maximum benefit through text processing.


    Summary (Task: Summarization)

    In the first step, the content of each text can be summarized and condensed to a few key sentences. This reduces the text to important information and knowledge, removes irrelevant content such as platitudes and repetitions, and greatly reduces the amount of text to be read.

    Especially with long emails, the added value of a summary alone is enormous: listing the important content as bullet points saves time, prevents misunderstandings, and avoids overlooking important details.

    General summaries are already helpful, but with the latest language models, NLP can do much more. In a general summary, the text length is reduced as much as possible while preserving the essential information. Large language models can not only produce a general summary but also tailor this process to the specific needs of employees. For example, facts can be highlighted, or technical jargon can be simplified. In particular, summaries can be written for a specific audience, such as a particular department within the company.

    Different departments and roles require different types of information. This is why summaries are particularly useful when tailored to the interests of a specific department or role. For example, the two emails in our case study contain information that is relevant to the legal, operations, or finance department in different ways. Therefore, the next step is to create a separate summary for each department:

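The sketch below illustrates how such a department-specific summary could be generated with a large language model. It assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the model name and prompt wording are our own assumptions, not part of the case study.

```python
# Hypothetical sketch: department-specific e-mail summaries via an
# OpenAI-style chat API. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_for_department(email_text: str, department: str) -> str:
    """Summarize an e-mail as bullet points for one department."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    f"Summarize the following e-mail in 3-5 bullet points, "
                    f"keeping only information relevant to the {department} "
                    f"department. Simplify technical jargon."
                ),
            },
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content


complaint_email = "..."  # complaint e-mail from the case study
print(summarize_for_department(complaint_email, "legal"))
```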

    This makes it even easier for users to identify and understand the information that is relevant to them, while also drawing the right conclusions for their work.

    Generative NLP models not only allow texts to be condensed to the essentials, but can also provide explanations for ambiguities and details. An example of this is the explanation of a regulation mentioned only by an acronym in an order confirmation, the details of which the user may not be familiar with. This eliminates the need for a tedious online search for a suitable explanation.


    Knowledge Extraction (Task: NER, Sentiment Analysis, Classification)

    The next step is to systematically categorize the emails and their contents. This allows incoming emails to be automatically assigned to the correct mailboxes, annotated with metadata, and collected in a structured way.

    For example, emails received on a customer service account can be automatically classified into defined categories (complaints, inquiries, suggestions, etc.). This eliminates the manual categorization of emails, which reduces the likelihood of incorrect categorizations and ensures more robust processes.

    Within these categories, the contents of emails can be further divided using semantic content analysis, for example, to determine the urgency of a request. More on that later.

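As an illustration of such a classification step, here is a minimal sketch using a zero-shot classifier from the Hugging Face transformers library; the model choice, example text, and category labels are assumptions for demonstration purposes.

```python
# Illustrative sketch: zero-shot e-mail categorization with Hugging
# Face transformers. Model, labels, and example text are assumptions.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification", model="facebook/bart-large-mnli"
)

email_text = (
    "The delivered thermal insulation does not meet the agreed quality "
    "standards. We request a replacement delivery."
)
labels = ["complaint", "inquiry", "order confirmation", "suggestion"]

result = classifier(email_text, candidate_labels=labels)
print(result["labels"][0])  # highest-scoring category, likely "complaint"
```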

    Once the emails are correctly classified, metadata can be extracted and created from each text using “Named Entity Recognition (NER).”

    NER allows entities in texts to be identified and named. Entities can be people, places, organizations, dates, or other named objects. Regarding email inboxes and their contents, NER can be useful in extracting important information and connections within the texts. By identifying and categorizing entities, relevant information can be quickly found and classified.

    In the case of complaints, NER can be used to identify the names of the product, the customer, and the seller. This information can then be used to solve the problem or make changes to the product to avoid future complaints.

    NER can also help automatically highlight relevant facts and connections in emails after they are classified. For example, if an order is received as an email from a customer, NER can extract the relevant information, enrich the email with metadata, and automatically forward it to the appropriate salesperson.
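The following short sketch shows what such an extraction step could look like with spaCy's pretrained English pipeline; the example sentence and supplier name are fictional, and the model must be downloaded first (python -m spacy download en_core_web_sm).

```python
# Illustrative sketch: named entity recognition on an e-mail with
# spaCy. The example text and supplier name are fictional.
import spacy

nlp = spacy.load("en_core_web_sm")

email_text = (
    "On 2 May 2023, Example Insulation Ltd. confirmed the delivery of "
    "200 panels of thermal insulation to the construction site in "
    "Frankfurt."
)

doc = nlp(email_text)
for ent in doc.ents:
    # Prints entities with their labels, e.g. DATE, ORG, and GPE.
    print(ent.text, ent.label_)
```

The extracted entities can then be attached to the e-mail as metadata, enabling the automatic routing described above.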

    Similarity (Task: Semantic Similarity)

    Successful knowledge management first requires identifying and gathering relevant data, facts, and documents in a targeted manner. This has been a particularly challenging task with unstructured text data such as emails, which are also stored in information silos (i.e. in mailboxes). To better capture the content of incoming emails and their overlaps, methods for semantic analysis of text can be employed. “Semantic Similarity Analysis” is a technology used to understand the meaning of texts and measure the similarities between different texts.

    In the context of knowledge management, semantic analysis can help group emails and identify those that relate to the same topic or contain similar requests. This can increase the productivity of customer support teams by allowing them to focus on important tasks, rather than spending a lot of time manually sorting or searching through emails.

    In addition, semantic analysis can help identify trends and patterns in incoming emails that may indicate problems or opportunities for improvement in the company. These insights can then be used to proactively address customer needs or improve processes and products.
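A brief sketch of what such a similarity analysis could look like with the sentence-transformers library follows; the model name and example e-mails are assumptions.

```python
# Illustrative sketch: grouping e-mails by semantic similarity with
# sentence-transformers. Model name and texts are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

emails = [
    "The insulation panels arrived damaged. Please advise.",
    "We confirm your order of 200 thermal insulation panels.",
    "The delivered panels do not meet the agreed quality standard.",
]

embeddings = model.encode(emails, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)

# High off-diagonal scores indicate e-mails about the same topic,
# e.g. the two complaints (indices 0 and 2).
print(similarity)
```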

    Answer Generation (Task: Text Generation)

    Finally, emails need to be answered. Those who have already experimented with text suggestions in email programs know that this task is not yet ready for automation. However, generative models can help answer emails faster and more accurately. A generative language model can quickly and reliably generate response templates based on incoming emails, which then only need to be supplemented, completed and checked by the person processing them. It is important to carefully check each response before sending it, as generative models are known to hallucinate results, i.e. generate convincing answers that contain errors upon closer examination. Here too, AI systems can at least partially remedy the situation by using a “control model” to verify the facts and statements of these “response models” for accuracy.

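To sketch this last step, the hypothetical helper below drafts a reply with the same OpenAI-style chat API used above; the prompt, model name, and the [CHECK] marker convention are assumptions, and every draft still requires human review before sending.

```python
# Hypothetical sketch: drafting a reply that a human must review.
# Model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()


def draft_reply(email_text: str) -> str:
    """Generate a reply draft; uncertain facts are marked [CHECK]."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a polite, factual reply to the following "
                    "e-mail. Mark every statement you are not certain "
                    "about with [CHECK] so a human can verify it."
                ),
            },
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content
```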

    Conclusion

    Natural Language Processing (NLP) offers companies numerous opportunities to improve their knowledge management strategies. NLP enables us to extract precise information from unstructured text and optimize the processing and provision of knowledge for employees.

    By applying NLP methods to emails, documents, and other text sources, companies can automatically categorize, summarize, and reduce content to the most important information. This allows employees to quickly and easily access important information without having to wade through long pages of text. This saves time, reduces error-proneness, and contributes to making better business decisions.

    Using the example of a construction project, we demonstrated how NLP can be used in practice to process emails more efficiently and improve knowledge management. The application of NLP techniques, such as summarizing information and tailoring it to specific departments, can help companies better achieve their goals and improve their performance.

    The application of NLP in knowledge management offers great advantages for companies. It can help automate processes, improve collaboration, increase efficiency, and optimize decision-making quality. Companies that integrate NLP into their knowledge management strategy can gain valuable insights that enable them to better navigate an increasingly complex business environment.

    Image source: AdobeStock 459537717

    Oliver Guggenbühl, Jonas Braun

    Preface

    Every data science and AI professional out there will tell you: real-world data science (DS) & AI projects involve various challenges for which neither hands-on coding competitions nor theoretical lectures will prepare you. And sometimes – alarmingly often [1, 2] – these real-life issues cause promising AI projects or whole AI initiatives to fail.

    There has been an active discussion of the more technical pitfalls and potential solutions for quite some time now. The better-known issues include, for example, siloed data, bad data quality, inexperienced or under-staffed DS & AI teams, and insufficient infrastructure for training and serving models. Another issue is that too many solutions are never moved into production due to organizational problems.

    Only recently, the focus of the discourse has shifted more towards strategic issues. But in my opinion, these perspectives still do not get the attention they deserve.

    That is why in this post, I want to share my take on the most important (non-technical) reasons why DS & AI initiatives fail. Of course, I’ll also give you some input on how to solve these issues. I am a Data & Strategy Consultant at statworx, so this article is certainly somewhat subjective: it reflects my personal experience of the problems and solutions I have come across.

    Issue #1: Poor Alignment of Project Scope and Actual Business Problem

    One problem that occurs way more often than one can imagine is the misalignment of developed data science & AI solutions and real business needs. The finished product might perfectly serve the exact task the DS & AI team set out to solve; however, the business users might look for a solution to a similar but significantly different task.

    Too little exchange due to indirect communication channels or a lack of a shared language and frame of reference often leads to fundamental misunderstandings. The problem is that, quite ironically, only extremely detailed and effective communication can uncover such subtle issues.

    Including Too Few or Selective Perspectives Can Lead To a Rude Awakening

    In other cases, individual sub-processes or the working methods of individual users differ so much that a solution that greatly benefits one user or process is hardly advantageous for all the others (while sometimes an option, developing solution variants is far less cost-efficient). If you are lucky, you will notice this at the beginning of a project when eliciting requirements. If you are unlucky, the rude awakening only comes during broader user testing or even during roll-out: the users or experts who influenced the previous development did not provide universally generalizable input, and the developed tool is therefore not generally applicable.

    How to Counteract This Problem:

    • Conduct structured and in-depth requirements engineering. Invest the time to talk to as many experts/users as possible and try to make all implicit assumptions as explicit as possible. While requirements engineering stems from the waterfall paradigm, it can easily be adapted for agile development. The elicited requirements simply must not be understood as definite product features but as items for your initial backlog that are constantly up for (re)evaluation and (re)prioritization.

    • Be sure to define success measures. Do so before the start of the project, ideally in the form of objectively quantifiable KPIs and benchmarks. This helps significantly to pin down the business problem/business value at the heart of the sought-after solution.
    • Whenever possible and as fast as possible, create prototypes, mock-ups, or even a minimum viable product, and present these solution drafts to as many test users as possible. These methods strongly facilitate the elicitation of candid and precise user feedback to inform further development. Be sure to involve a user sample that is representative of the entirety of users.

    Issue #2: Loss of Efficiency and Resources Due to Non-Orchestrated Data Science & AI Efforts

    Decentralized data science & AI teams often develop their use cases with little to no exchange or alignment between the teams’ current use cases and backlogs. This can cause different teams to develop (parts of) the same or very similar solutions without noticing.

    In most cases, when such a situation is discovered, one of the redundant DS & AI solutions is discontinued or denied any future funding for further development or maintenance. Either way, the redundant development of use cases always results in direct waste of time and other resources with no or minimal added value.

    The lack of alignment between an organization’s use case portfolio and the general business or AI strategy can also be problematic. It can cause high opportunity costs: use cases that do not contribute to the general AI vision might unnecessarily bind valuable resources. Further, potential synergies between more strategically significant use cases might not be fully exploited. Lastly, competence building might happen in areas that are of little to no strategic significance.

    How to Counteract This Problem:

• Communication is key. That is why there should always be a range of opportunities for the data science professionals within an organization to connect and exchange lessons learned and best practices – especially in decentralized DS & AI teams. To make this work, it is essential to establish an overall working atmosphere of collaboration. The free sharing of successes and failures, and thus the internal diffusion of competence, can only succeed without competitive thinking.

    • Another option to mitigate the issue is to establish a central committee entrusted with the organization’s DS & AI use case portfolio management. The committee should include representatives of all (decentralized) DS & AI departments and general management. Together, the committee oversees the alignment of use cases and the AI strategy, preventing redundancies while fully exploiting synergies.

Issue #3: Unrealistically High Expectations of Success in Data Science & AI

It may sound paradoxical, but over-optimism regarding the opportunities and capabilities of data science & AI can be detrimental to success. That is because overly optimistic expectations often result in underestimating the requirements, such as the time needed for development or the volume and quality of the required data.

At the same time, expectations regarding model accuracy are often too high, with little to no understanding of model limitations and basic machine learning mechanics. This inexperience might prevent acknowledgment of many important facts, including but not limited to the following: models inevitably extrapolate historical patterns to the future; external paradigm shifts or shocks endanger the generalizability and performance of models; harmonizing the predictions of mathematically unrelated models is complex; models are often hardly interpretable out of the box; and model behavior may change over time due to retraining.
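To illustrate the extrapolation point, the following toy sketch (entirely synthetic data, chosen for illustration only) fits a model on a stable historical trend and shows how its forecasts keep extrapolating that pattern even after an external shock has reversed it:

```python
# A toy illustration of pattern extrapolation: a model fit on a stable
# upward trend keeps predicting that trend after a shock reverses it.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t_train = np.arange(100).reshape(-1, 1)
y_train = 2.0 * t_train.ravel() + rng.normal(0, 5, 100)   # stable upward trend

model = LinearRegression().fit(t_train, y_train)

# Paradigm shift at t = 100: the underlying relationship suddenly changes.
t_test = np.arange(100, 120).reshape(-1, 1)
y_actual = 200.0 - 3.0 * (t_test.ravel() - 100)           # trend reverses

print("predicted:", model.predict(t_test)[:3].round(1))   # keeps climbing
print("actual:   ", y_actual[:3].round(1))                # actually falls
```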

DS & AI simply is no magic bullet, and excessive expectations can lead to enthusiasm turning into deep opposition. The initial expectations are almost inevitably unfulfilled and thus often give way to a profound and undifferentiated rejection of DS & AI. Subsequently, this can cause less attention-grabbing but beneficial use cases to no longer find support.

    How to Counteract This Problem:

    • When dealing with stakeholders, always try to convey realistic prospects in your communication. Make sure to use unambiguous messages and objective KPIs to manage expectations and address concerns as openly as possible.

    • The education of stakeholders and management in the basics of machine learning and AI empowers them to make more realistic judgments and thus more sensible decisions. Technical in-depth knowledge is often unnecessary. Conceptual expertise with a relatively high level of abstraction is sufficient (and luckily much easier to attain).

    • Finally, whenever possible, a PoC should be carried out before any full-fledged project. This makes it possible to gather empirical indications of the use case’s feasibility and helps in the realistic assessment of the anticipated performance measured by relevant (predefined!) KPIs. It is also important to take the results of such tests seriously. In the case of a negative prognosis, it should never simply be assumed that with more time and effort, all the problems of the PoC will disappear into thin air.
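As a hedged illustration of such a PoC gate, the sketch below compares a candidate model against a naive baseline and a predefined KPI threshold on synthetic data; the metric and threshold are illustrative assumptions:

```python
# A sketch of a PoC go/no-go gate: the candidate must beat a naive baseline
# and reach the KPI threshold agreed on before the project started.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.8], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
poc_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

f1_baseline = f1_score(y_te, baseline.predict(X_te))
f1_poc = f1_score(y_te, poc_model.predict(X_te))
KPI_THRESHOLD = 0.70  # assumption: predefined with the business stakeholders

go = f1_poc > f1_baseline and f1_poc >= KPI_THRESHOLD
print(f"baseline F1={f1_baseline:.2f}, PoC F1={f1_poc:.2f} -> {'GO' if go else 'NO-GO'}")
```

If the gate says no-go, take the result seriously, as argued above, rather than assuming that more time and effort will make the problems disappear.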

Issue #4: Resentment and Fundamental Rejection of Data Science & AI

    An invisible hurdle, but one that should never be underestimated, lies in the minds of people. This can hold true for the workforce as well as for management. Often, promising data science & AI solutions are thwarted due to deep-rooted but undifferentiated reservations. The right mindset is decisive.

    Although everyone is talking about DS and AI, many organizations still lack real management commitment. Frequently, lip service is paid to DS & AI and substantial funds are invested, but reservations about AI remain.

    This is often ostensibly justified by the inherent biases and uncertainty of AI models and their low direct interpretability. In addition, sometimes, there is a general unwillingness to accept insights that do not correspond with one’s intuition. The fact that human intuition is often subject to much more significant – and, in contrast to AI models, unquantifiable – biases is usually ignored.

    Data Science & AI Solutions Need the Acceptance and Support of the Workforce

This leads to (decision-making) processes and organizational structures (e.g., roles, responsibilities) not being adapted in a way that lets DS & AI solutions deliver their (full) benefit. Yet this would be necessary, because data science & AI is not just another software solution that can be seamlessly integrated into existing structures.

DS & AI is a disruptive technology that will inevitably reshape entire industries and organizations alike. Organizations that reject this change are likely to fail in the long run precisely because of this paradigm shift. The rejection of change begins with seemingly minor matters, such as the shift from waterfall project management to agile, iterative development. Irrespective of the generally positive reception of certain change measures, there is sometimes a completely irrational refusal to reform current, (still) functioning processes. Yet this is exactly what would be necessary to remain competitive in the long term – admittedly, only after a phase of readjustment.

While vision, strategy, and structures must be changed top-down, day-to-day operations can only be revolutionized bottom-up, driven by the workforce. Management commitment and the best tool in the world are useless if end users are unable or unwilling to adopt them. General uncertainty about the long-term AI roadmap and the fear of being replaced by machines lead to DS & AI solutions not being integrated into everyday work. This is, of course, more than problematic, as only the (correct) application of AI solutions creates added value.

    How to Counteract This Problem:

• Unsurprisingly, sound AI change management is the best approach to mitigating an anti-AI mindset. It should be an integral part of any DS & AI initiative from the start, not an afterthought, and responsibilities for this task should be clearly assigned. Early, widespread, detailed, and clear communication is vital: which steps will presumably be implemented, when, and how exactly? Remember that once trust has been lost, it is tough to regain. Therefore, any uncertainties in the planning should be addressed openly. It is crucial to create a basic understanding of the matter among all stakeholders and to clarify the necessity of change (e.g., otherwise endangered competitiveness, success stories, or competitors’ failures). Dialogue with concerned parties is equally important: feedback should be sought early and acted upon where possible, and concerns should always be heard and respected, even if they cannot be fully addressed. However, false promises must be strictly avoided; instead, focus on the advantages of DS & AI.

• In addition to understanding the need for change, the fundamental ability to change is essential. The fear of the unknown or incomprehensible is inherent in us humans. Therefore, education – delivered at the level of abstraction and depth appropriate for the respective role – can make a big difference. Appropriate training measures are not a one-time endeavor; up-to-date knowledge and training in the field of DS & AI must be ensured in the long term. General data literacy of the workforce must be ensured, as must the up- or re-skilling of technical experts. Employees must be given a realistic chance of gaining new and more attractive job opportunities by educating themselves and engaging with DS & AI; losing (parts of) their old jobs to DS & AI must never appear to be the most probable outcome. DS & AI must be perceived as an opportunity rather than a danger: it must create perspectives, not destroy them.

    • Adopt or adapt the best practices of DS & AI leaders in terms of defining role and skill profiles, adjusting organizational structures, and value-creation processes. Battle-proven approaches can serve as a blueprint for reforming your organization and thereby ensure you remain competitive in the future.

    Closing Remarks

As you might have noted, this blog post does not offer easy solutions. This is because the issues at hand are complex and multi-dimensional. This article gave you high-level ideas on how to mitigate the addressed problems, but it must be stressed that these issues call for a holistic solution approach. This requires a clear AI vision and a sound AI strategy derived from it, according to which the vast number of necessary actions can be coordinated and directed.

That is why I must stress that we have long left the stage where experimental and unstructured data science & AI initiatives could be successful. DS & AI must not be treated as a purely technical topic that takes place solely in specialist departments. It is time to address AI as a strategic issue. As with the digital revolution, only organizations in which AI completely permeates and reforms both daily operations and the general business strategy will be successful in the long term. As described above, this undoubtedly holds many pitfalls in store, but it also represents an incredible opportunity.

    If you are willing to integrate these changes but don’t know where to start, we at STATWORX are happy to help. Check out our website and learn more about our AI strategy offerings!


Lea Waniek

    “Building trust through human-centric AI”: this is the slogan under which the European Commission presented its proposal for regulating Artificial Intelligence (AI regulation) last week. This historic step positions Europe as the first continent to uniformly regulate AI and the handling of data. With this groundbreaking attempt at regulation, Europe wishes to set standards for the use of AI and data-powered technology – even beyond European borders. That is the right step, as AI is a catalyst of the digital transformation, with significant implications for the economy, society, and the environment. Therefore, clear rules for the use of this technology are needed. This will allow Europe to position itself as a progressive market that is ready for the digital age. In its current form, however, the proposal still raises some questions about its practical implementation. Europe cannot afford to risk its digital competitiveness when competing with America and China for the AI leadership position.

    Building Trust Through Transparency

    Two Key Proposals for AI Regulation to Build Trust

    To build trust in AI products, the proposal for AI regulation relies on two key approaches: Monitoring AI risks while cultivating an “ecosystem of AI excellence.” Specifically, the proposal includes a ban on the use of AI for manipulative and discriminatory purposes or to assess behavior through a “social scoring system”. Use cases that do not fall into these categories will still have to be screened for hazards and placed on a vague risk scale. Special requirements are placed on high-risk applications, with necessary compliance checks both before and after they are put into operation.

It is crucial that AI applications are to be assessed on a case-by-case basis instead of under the previously considered sector-centric regulation. In last year’s white paper on AI and trust, the European Commission called for labeling all applications in business sectors such as healthcare or transportation as “high-risk”. This blanket classification based on defined industries, regardless of the actual use cases, would have been obstructive and meant structural disadvantages for entire European industries. The case-by-case assessment allows for the agile and innovative development of AI in all sectors and subjects all industries to the same standards for risky AI applications.

    Clear Definition of Risks of an AI Application Is Missing

Despite this new approach, the proposal for AI regulation lacks a concise process for assessing the risks of new applications. Since developers themselves are responsible for evaluating their applications, a clearly defined scale for risk assessment is essential. Articles 6 and 7 circumscribe various risks and give examples of “high-risk applications”, but a transparent process for assessing new AI applications is yet to be defined. Startups and smaller companies are heavily represented among AI developers. These companies, in particular, rely on clearly defined standards and processes to avoid being left behind by larger competitors with greater resources. This requires practical guidelines for risk assessment.

    If a use case is classified as a “high-risk application”, then various requirements on data governance and risk management must be met before the product can be launched. For example, training data must be tested for bias and inequalities. Also, the model architecture and training parameters must be documented. After deployment, human oversight of the decisions made by the model must be ensured.
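As an illustration of what such checks could look like in code, here is a minimal sketch of two of the listed obligations: inspecting training data for group imbalances and documenting model settings. The data, attribute names, and file format are illustrative assumptions; the proposal itself does not prescribe a concrete implementation:

```python
# A minimal sketch of two governance duties: checking training data for
# group imbalances and recording model settings. All names are illustrative.
import json
import pandas as pd
from sklearn.linear_model import LogisticRegression

train_df = pd.DataFrame({
    "gender": ["f", "m", "m", "f", "m", "m", "m", "f"],   # protected attribute
    "label":  [1, 0, 1, 0, 1, 1, 0, 1],                   # target variable
})

# 1) Inspect the training data for imbalances across a protected attribute:
print(train_df.groupby("gender")["label"].agg(["count", "mean"]))

# 2) Document the model type and training parameters for later audits:
model = LogisticRegression(C=0.5, max_iter=200)
with open("model_card.json", "w") as f:
    json.dump({"model": type(model).__name__, "params": model.get_params()}, f, indent=2)
```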

Accountability for AI products is a noble and important goal. However, the practical implementation of these requirements once more remains questionable. Many modern AI systems no longer follow the traditional approach of static training and testing data. Reinforcement learning, for example, relies on exploratory training through feedback rather than on a testable data set. And even though advances in Explainable AI are steadily shedding light on the decision-making processes of black-box models, the complex architectures of many modern neural networks make individual decisions almost impossible to trace.

The proposal also announces requirements for the accuracy of trained AI products. This poses a particular challenge for developers, because no AI system has perfect accuracy. Nor is this ever the objective: misclassifications are deliberately traded off so that they have as little impact as possible on the individual use case. It is therefore imperative that performance requirements for predictions and classifications be determined on a case-by-case basis and that universal performance requirements be avoided.
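The following toy sketch illustrates this case-by-case logic: instead of maximizing raw accuracy, the decision threshold is chosen to minimize the expected cost of misclassifications; the cost values are purely illustrative assumptions:

```python
# An illustrative sketch of cost-sensitive threshold tuning: the threshold
# is chosen so misclassifications hurt the use case as little as possible.
import numpy as np

COST_FALSE_NEGATIVE = 10.0  # assumption: a missed positive case is expensive
COST_FALSE_POSITIVE = 1.0   # assumption: a false alarm is comparatively cheap

def expected_cost(y_true, proba, threshold):
    """Total misclassification cost at a given decision threshold."""
    pred = (proba >= threshold).astype(int)
    fn = np.sum((y_true == 1) & (pred == 0))
    fp = np.sum((y_true == 0) & (pred == 1))
    return fn * COST_FALSE_NEGATIVE + fp * COST_FALSE_POSITIVE

# Toy scores: positives tend to receive higher probabilities.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)
proba = np.clip(0.6 * y_true + rng.normal(0.2, 0.2, 500), 0.0, 1.0)

thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=lambda t: expected_cost(y_true, proba, t))
print(f"cost-minimizing threshold: {best:.2f}")
```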

    Enabling AI Excellence

    Europe is Falling Behind

With these requirements, the proposal for AI regulation seeks to inspire confidence in AI technology through transparency and accountability. This is a first step in the right direction toward “AI excellence”. In addition to regulation, however, Europe as a location for Artificial Intelligence must also become more attractive to developers and investors.

According to a recently published study by the Center for Data Innovation, Europe is already falling behind both the United States and China in the battle for global leadership in AI. China has surpassed Europe in the number of published studies on Artificial Intelligence and has taken the global lead. European AI companies also attract significantly less investment than their U.S. counterparts, invest less money in research and development, and are less likely to be acquired than American companies.

    A Step in the Right Direction: Supporting Research and Innovation

The European Commission recognizes that more support for AI development is needed for excellence on the European market and promises regulatory sandboxes – legal leeway to develop and test innovative AI products – as well as co-funding for AI research and testing sites. This is needed to make startups and smaller companies more competitive and foster European innovation and competition.

    These are necessary steps to lift Europe onto the path to AI excellence, but they are far from being sufficient. AI developers need easier access to markets outside the EU, facilitating the flow of data across national borders. Opportunities to expand into the U.S. and collaborate with Silicon Valley are essential for the digital industry due to how interconnected digital products and services have become.

    What is entirely missing from the proposal for AI regulation is education about AI and its potential and risks outside of expert circles. As artificial intelligence increasingly permeates all areas of everyday life, education will become more and more critical. To build trust in new technologies, they must first be understood. Educating non-specialists about both the potential and limitations of AI is an essential step in demystifying Artificial Intelligence and strengthening trust in this technology.

    Potential Not Yet Fully Tapped

    With this proposal, the European Commission recognizes that AI is leading the way for the future of the European market. Guidelines for a technology of this scope are important – as is the promotion of innovation. For these strategies to bear fruit, their practical implementation must also be feasible for startups and smaller companies. The potential for AI excellence is abundant in Europe. With clear rules and incentives, it can also be realized.

Oliver Guggenbühl
