We are at the beginning of 2024, a time of fundamental change and exciting progress in the world of artificial intelligence. The next few months are seen as a critical milestone in the evolution of AI as it transforms from a promising future technology to a permanent reality in the business and everyday lives of millions. Together with the AI Hub Frankfurt, the central AI network in the Rhine-Main region, we are therefore presenting our trend forecast for 2024, the AI Trends Report 2024.
The report identifies twelve dynamic AI trends that are unfolding in three key areas: Culture and Development, Data and Technology, and Transparency and Control. These trends paint a picture of the rapid changes in the AI landscape and highlight the impact on companies and society.
Our analysis is based on extensive research, industry-specific expertise and input from experts. We highlight each trend to provide a forward-looking insight into AI and help companies prepare for future challenges and opportunities. However, we emphasize that trend forecasts are always speculative in nature and some of our predictions are deliberately bold.
Directly to the AI Trends Report 2024!
What is a trend?
A trend is different from both a short-lived fashion phenomenon and media hype. It is a phenomenon of change with a “tipping point” at which a small change in a niche can cause a major upheaval in the mainstream. Trends initiate new business models, consumer behavior and forms of work and thus represent a fundamental change to the status quo. It is crucial for companies to mobilize the right knowledge and resources before the tipping point in order to benefit from a trend.
12 AI trends that will shape 2024
In the AI Trends Report 2024, we identify groundbreaking developments in the field of artificial intelligence. Here are the short versions of the twelve trends, each attributed to one of our experts.
Part 1: Culture and development
From the 4-day week to omnimodality and AGI: 2024 promises great progress for the world of work, for media production and for the possibilities of AI as a whole.
Thesis I: AI expertise within the company
Companies that deeply embed AI expertise in their corporate culture and build interdisciplinary teams with tech and industry knowledge will secure a competitive advantage. Centralized AI teams and a strong data culture are key to success.
Stefanie Babka, Global Head of Data Culture, Merck
Thesis II: 4-day working week thanks to AI
Thanks to AI automation in standard software and company processes, the 4-day working week has become a reality for some German companies. AI tools such as Microsoft’s Copilot increase productivity and make it possible to reduce working hours without compromising growth.
Dr. Jean Enno Charton, Director Digital Ethics & Bioethics, Merck
Thesis III: AGI through omnimodal models
The development of omnimodal AI models that mimic human senses brings the vision of general artificial intelligence (AGI) closer. These models process a variety of inputs and extend human capabilities.
Dr. Ingo Marquart, NLP Subject Matter Lead, statworx
Thesis IV: AI revolution in media production
Generative AI (GenAI) is transforming the media landscape and enabling new forms of creativity, but still falls short of transformational creativity. AI tools are becoming increasingly important for creatives, but it is important to maintain uniqueness against a global average taste.
Nemo Tronnier, Founder & CEO, Social DNA
Part 2: Data and technology
In 2024, everything will revolve around data quality, open source models and access to processors. Providers of standard software such as Microsoft and SAP will benefit greatly because they occupy the interface to end users.
Thesis V: Challengers for NVIDIA
New players and technologies are preparing to shake up the GPU market and challenge NVIDIA’s position. Startups and established competitors such as AMD and Intel are looking to capitalize on the resource scarcity and long wait times that smaller players are currently experiencing and are focusing on innovation to break NVIDIA’s dominance.
Norman Behrend, Chief Customer Officer, Genesis Cloud
Thesis VI: Data quality before data quantity
In AI development, the focus is shifting to the quality of the data. Instead of relying solely on quantity, the careful selection and preparation of training data and innovation in model architecture are becoming crucial. Smaller models with high-quality data can be superior to larger models in terms of performance.
Walid Mehanna, Chief Data & AI Officer, Merck
Thesis VII: The year of the AI integrators
Integrators such as Microsoft, Databricks and Salesforce will be the winners as they bring AI tools to end users. The ability to seamlessly integrate into existing systems will be crucial for AI startups and providers. Companies that offer specialized services or groundbreaking innovations will secure lucrative niches.
Marco Di Sazio, Head of Innovation, Bankhaus Metzler
Thesis VIII: The open source revolution
Open source AI models are competing with proprietary models such as OpenAI’s GPT and Google’s Gemini. With a community that fosters innovation and knowledge sharing, open source models offer more flexibility and transparency, making them particularly valuable for applications that require clear accountability and customization.
Prof. Dr. Christian Klein, Founder, UMYNO Solutions, Professor of Marketing & Digital Media, FOM University of Applied Sciences
Part 3: Transparency and control
The increased use of AI decision-making systems will spark an intensified debate on algorithm transparency and data protection in 2024 – in the search for accountability. The AI Act will become a locational advantage for Europe.
Thesis IX: AI transparency as a competitive advantage
European AI start-ups with a focus on transparency and explainability could become the big winners, as industries such as pharmaceuticals and finance already place high demands on the traceability of AI decisions. The AI Act promotes this development by demanding transparency and adaptability from AI systems, giving European AI solutions an edge in terms of trust.
Jakob Plesner, Attorney at Law, Gorrissen Federspiel
Thesis X: AI Act as a seal of quality
The AI Act positions Europe as a safe haven for investments in AI by setting ethical standards that strengthen trust in AI technologies. In view of the increase in deepfakes and the associated risks to society, the AI Act acts as a bulwark against abuse and promotes responsible growth in the AI industry.
Catharina Glugla, Head of Data, Cyber & Tech Germany, Allen & Overy LLP
Thesis XI: AI agents are revolutionizing consumption
Personal assistance bots that make purchases and select services will become an essential part of everyday life. Influencing their decisions will become a key element for companies to survive in the market. This will profoundly change search engine optimization and online marketing as bots become the new target groups.
Chi Wang, Principal Researcher, Microsoft Research
Thesis XII: Alignment of AI models
Aligning AI models with universal values and human intentions will be critical to avoid unethical outcomes and fully realize the potential of foundation models. Superalignment, where AI models work together to overcome complex challenges, is becoming increasingly important to drive the development of AI responsibly.
Daniel Lüttgau, Head of AI Development, statworx
Concluding remarks
The AI Trends Report 2024 is more than an entertaining stocktake; it can be a useful tool for decision-makers and innovators. Our goal is to provide our readers with strategic advantages by discussing the impact of trends on different sectors and helping them set the course for the future.
This blog post offers only a brief insight into the comprehensive AI Trends Report 2024. We invite you to read the full report to dive deeper into the subject matter and benefit from the detailed analysis and forecasts.
At the beginning of December, the central EU institutions reached a provisional agreement on a legislative proposal to regulate artificial intelligence in the so-called trilogue. The final legislative text with all the details is now being drafted. As soon as this has been drawn up and reviewed, the law can be officially adopted. We have compiled the current state of knowledge on the AI Act.
As part of the ordinary legislative procedure of the European Union, a trilogue is an informal interinstitutional negotiation between representatives of the European Parliament, the Council of the European Union and the European Commission. The aim of a trilogue is to reach a provisional agreement on a legislative proposal that is acceptable to both the Parliament and the Council, the co-legislators. The provisional agreement must then be adopted by each of these bodies in formal procedures.
Legislation with a global impact
A special feature of the upcoming law is the so-called market location principle: according to this, companies worldwide that offer or operate artificial intelligence on the European market or whose AI-generated output is used within the EU will be affected by the AI Act.
Artificial intelligence is defined as machine-based systems that can autonomously make predictions, recommendations or decisions and thus influence the physical and virtual environment. This applies, for example, to AI solutions that support the recruitment process, predictive maintenance solutions and chatbots such as ChatGPT. The legal requirements that different AI systems must fulfill vary greatly depending on their classification into risk classes.
The risk class determines the legal requirements
The EU’s risk-based approach comprises a total of four risk classes:
- low,
- limited,
- high,
- and unacceptable risk.
These classes reflect the extent to which artificial intelligence jeopardizes European values and fundamental rights. As the term “unacceptable” for a risk class already indicates, not all AI systems are permissible. AI systems that belong to the “unacceptable risk” category are prohibited by the AI Act. The following applies to the other three risk classes: the higher the risk, the more extensive and stricter the legal requirements for the AI system.
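The rule described above – the higher the risk class, the stricter the obligations, with “unacceptable risk” systems banned outright – can be sketched as a simple lookup. This is purely illustrative: the class names follow this article, and the obligation lists are abbreviated paraphrases for orientation, not the legal requirements themselves.

```python
# Illustrative sketch of the AI Act's risk-based approach as described above.
# Obligation lists are paraphrased summaries, not the legal text.
OBLIGATIONS = {
    "low": ["voluntary code of conduct"],
    "limited": ["voluntary code of conduct", "transparency obligations"],
    "high": ["transparency obligations", "quality and risk management",
             "data governance", "technical documentation",
             "CE marking", "EU database registration"],
    "unacceptable": None,  # prohibited: may not be placed on the EU market
}

def requirements(risk_class: str) -> list[str]:
    """Return the obligations for a risk class, or raise if the system is banned."""
    if risk_class not in OBLIGATIONS:
        raise ValueError(f"unknown risk class: {risk_class}")
    obligations = OBLIGATIONS[risk_class]
    if obligations is None:
        raise ValueError("unacceptable risk: the AI system is prohibited")
    return obligations

print(requirements("limited"))
```

The point of the structure is the monotone escalation: every class above “low” inherits transparency duties, and only “high” adds the full conformity apparatus.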
We explain below which AI systems fall into which risk class and which requirements are associated with them. Our assessments are based on the information contained in the “AI Mandates” document dated June 2023. At the time of publication, this document was the most recently published, comprehensive document on the AI Act.
Ban on social scoring and biometric remote identification
Some AI systems have a significant potential to violate human rights and fundamental principles, which is why they are categorized as “unacceptable risk”. These include:
- Real-time remote biometric identification systems in publicly accessible spaces (exception: law enforcement agencies may use them to prosecute serious crimes, but only with judicial authorization);
- Retrospective remote biometric identification systems (exception: law enforcement authorities may use them to prosecute serious crimes, but only with judicial authorization);
- Biometric categorization systems that use sensitive characteristics such as gender, ethnicity or religion;
- Predictive policing based on so-called profiling – i.e. profiling by skin color, suspected religious affiliation and similarly sensitive characteristics – as well as on geographical location or previous criminal behavior;
- Emotion recognition systems for law enforcement, border control, the workplace and educational institutions;
- Arbitrary extraction of biometric data from social media or video surveillance footage to create facial recognition databases;
- Social scoring leading to disadvantage in social contexts;
- AI that exploits the vulnerabilities of a particular group of people or uses subliminal techniques that can lead to behaviors causing physical or psychological harm.
These AI systems are to be banned from the European market under the AI Act. Companies whose AI systems could fall into this risk class should urgently address the upcoming requirements and explore options for action. This is because a key result of the trilogue is that these systems will be banned just six months after official adoption.
Numerous requirements for AI with risks to health, safety and fundamental rights
The “high risk” category includes all AI systems that are not explicitly prohibited but nevertheless pose a high risk to health, safety or fundamental rights. The following areas of application and use are explicitly mentioned:
- Biometric and biometric-based systems that do not fall into the “unacceptable risk” class;
- Management and operation of critical infrastructure;
- Education and training;
- Access and entitlement to basic private and public services and benefits;
- Employment, human resource management and access to self-employment;
- Law enforcement;
- Migration, asylum and border control;
- Administration of justice and democratic processes.
These AI systems are subject to comprehensive legal requirements that must be implemented prior to commissioning and observed throughout the entire AI life cycle:
- Assessment to evaluate the effects on fundamental and human rights
- Quality and risk management
- Data governance structures
- Quality requirements for training, test and validation data
- Technical documentation and record-keeping obligations
- Fulfillment of transparency and provision obligations
- Human supervision, robustness, security and accuracy
- Declaration of conformity incl. CE marking obligation
- Registration in an EU-wide database
AI systems that are used in one of the above-mentioned areas but do not pose a risk to health, safety, the environment or fundamental rights are exempt from these legal requirements. However, the exemption must be claimed by notifying the competent national authority about the AI system. The authority then has three months to assess the system’s risks; during these three months, the AI system may already be put into operation. If the examining authority nevertheless classifies it as high-risk AI, high fines may be imposed.
A special regulation also applies to AI products and AI safety components of products whose conformity is already being tested by third parties on the basis of EU legislation. This is the case for AI in toys, for example. In order to avoid overregulation and additional burdens, these will not be directly affected by the AI Act.
AI with limited risk must comply with transparency obligations
AI systems that interact directly with humans fall into the “limited risk” category. This includes emotion recognition systems, biometric categorization systems and AI-generated or modified content that resembles real people, objects, places or events and could be mistaken for real (“deepfakes”). For these systems, the draft law provides for the obligation to inform consumers about the use of artificial intelligence. This should make it easier for consumers to actively decide for or against their use. A code of conduct is also recommended.
No legal requirements for AI with low risk
Many AI systems, such as predictive maintenance or spam filters, fall into the “low risk” category. Companies that only offer or use such AI solutions will hardly be affected by the AI Act. This is because there are currently no legal requirements for such applications. Only a code of conduct is recommended.
Generative AI such as ChatGPT is regulated separately
Generative AI models and foundation models with a wide range of possible applications were not included in the original draft of the AI Act. How such models can be regulated has therefore been the subject of particularly intense debate since OpenAI launched ChatGPT. According to the European Council’s press statement of December 9, these models are now to be regulated on the basis of their risk. In principle, all models must implement transparency requirements. Foundation models with a particular risk – so-called “high-impact foundation models” – will also have to meet additional requirements. How exactly the risk of AI models will be assessed is still open. Based on the latest document, the following possible requirements for high-impact foundation models can be anticipated:
- Quality and risk management
- Data governance structures
- Technical documentation
- Fulfillment of transparency and information obligations
- Ensuring performance, interpretability, correctability, safety and cybersecurity
- Compliance with environmental standards
- Cooperation with downstream providers
- Registration in an EU-wide database
Companies should prepare for the AI Act now
Even though the AI Act has not yet been officially adopted and we do not yet know the details of the legal text, companies should prepare for the transition phase now. In this phase, AI systems and associated processes must be designed to comply with the law. The first step is to assess the risk class of each individual AI system. If you are not yet sure which risk classes your AI systems fall into, we recommend our free AI Act Quick Check. It will help you to assess the risk class.
More information:
- Lunch & Learn „Done Deal“ (only available in German)
- Lunch & Learn „Alles, was du über den AI Act wissen musst“ (only available in German)
- Factsheet AI Act
Sources:
- Press statement of the European Council: „Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world“
- AI Mandates (June 2023)
- “General approach” of the Council of the European Union: https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/
- Legislative proposal (“AI Act”) of the European Commission: https://eur-lex.europa.eu/legal-content/DE/TXT/?uri=CELEX%3A52021PC0206
- Ethical guidelines for trustworthy AI: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Last December, the European Council published a dossier outlining the Council’s preliminary position on the draft law known as the AI Act. This new law is intended to regulate artificial intelligence (AI) and thus becomes a game-changer for the entire tech industry. In the following, we have compiled the most important information from the dossier, which is the current official source on the planned AI Act at the time of publication.
A legal framework for AI
Artificial intelligence has enormous potential to improve and ease all our lives. For example, AI algorithms already support early cancer detection or translate sign language in real time, thereby eliminating language barriers. But in addition to the positive effects, there are risks, as the recent deepfakes of Pope Francis or the Cambridge Analytica scandal illustrate.
The European Union (EU) is currently drafting legislation to regulate artificial intelligence and mitigate its risks. With it, the EU wants to protect consumers and ensure the ethically acceptable use of artificial intelligence. The so-called “AI Act” is still in the legislative process but is expected to be passed in 2023 – before the end of the current legislative period. Companies will then have two years to implement the legally binding requirements. Violations will be punished with fines of up to 6% of global annual turnover or €30,000,000 – whichever is higher. Companies should therefore start addressing the upcoming legal requirements now.
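The penalty rule above is simply the higher of two amounts: 6% of global annual turnover or a flat €30 million. A minimal sketch of the calculation (illustrative only; the final legal text governs):

```python
def max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine as described above:
    the higher of 6% of global annual turnover or EUR 30,000,000."""
    return max(0.06 * global_annual_turnover_eur, 30_000_000)

# For a company with EUR 1 billion turnover, 6% (EUR 60 million)
# exceeds the EUR 30 million floor.
print(max_fine(1_000_000_000))  # 60000000.0
```

The flat floor means the rule bites even for companies whose turnover is below €500 million, for which 6% would be less than €30 million.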
Legislation with global impact
The planned AI Act is based on the “market location principle,” meaning that not only European companies will be affected. All companies that offer AI systems on the European market or operate them for internal use within the EU fall under the AI Act – with a few exceptions. Private use of AI remains untouched by the regulation so far.
Which AI systems are affected?
The definition of AI determines which systems will be affected by the AI Act. For this reason, the AI definition of the AI Act has been the subject of controversial debate in politics, business, and society for a considerable time. The initial definition was so broad that many “normal” software systems would also have been affected. The current proposal defines AI as any system developed through machine learning or logic- and knowledge-based approaches. It remains to be seen whether this definition will ultimately be adopted.
7 Principles for trustworthy AI
The “seven principles for trustworthy AI” are the most important basis of the AI Act. A group of experts from research, the digital economy, and associations developed them on behalf of the European Commission. They include not only technical aspects but also social and ethical factors that can be used to classify the trustworthiness of an AI system:
- Human agency & oversight: decision-making should be supported without undermining human autonomy.
- Technical robustness & safety: accuracy, reliability, and security must be preemptively ensured.
- Privacy & data governance: handling of data must be legally secure and protected.
- Transparency: interaction with AI must be clearly communicated, as must its limitations and boundaries.
- Diversity, non-discrimination & fairness: unfair bias must be avoided throughout the entire AI lifecycle.
- Environmental & societal well-being: AI solutions should benefit the environment and society as much as possible.
- Accountability: responsibilities for the development, use, and maintenance of AI systems must be defined.
Based on these principles, the AI Act’s risk-based approach was developed, allowing AI systems to be classified into one of four risk classes: low, limited, high, and unacceptable risk.
Four risk classes for trustworthy AI
The risk class of an AI system indicates the extent to which an AI system threatens the principles of trustworthy AI and which legal requirements the system must fulfill – provided the system is fundamentally permissible. This is because, in the future, not all AI systems will be allowed on the European market. For example, most “social scoring” techniques are assessed as “unacceptable” and will not be allowed by the new law.
For the other three risk classes, the rule of thumb is: the higher the risk of an AI system, the higher the legal requirements for it. Companies that offer or operate high-risk systems will have to meet the most requirements. AI used to operate critical (digital) infrastructure or in medical devices, for example, is considered high-risk. To bring such systems to market, companies must observe high quality standards for the data used, set up risk management, affix a CE mark, and more.
AI systems in the “limited risk” class are subject to information and transparency obligations. Accordingly, companies must inform users of chatbots, emotion recognition systems, or deepfakes about the use of artificial intelligence. Predictive maintenance or spam filters are two examples of AI systems that fall into the lowest category, “low risk”. Companies that exclusively offer or use such AI solutions will hardly be affected by the upcoming AI Act, as there are no legal requirements for these applications yet.
What companies can do for now
Even though the AI Act is still in the legislative process, companies should act now. The first step is to clarify how they will be affected. To help you do this, we have developed the free AI Act Quick Check. With this tool, AI systems can be quickly assigned to a risk class and the requirements for each system derived. Finally, it can serve as a basis for estimating how extensive implementing the AI Act will be in your own company and for taking initial measures. Of course, we are also happy to support you in evaluating and solving company-specific challenges related to the AI Act. Please do not hesitate to contact us!
“Building trust through human-centric AI”: this is the slogan under which the European Commission presented its proposal for regulating Artificial Intelligence (AI regulation) last week. This historic step positions Europe as the first continent to uniformly regulate AI and the handling of data. With this groundbreaking attempt at regulation, Europe wishes to set standards for the use of AI and data-powered technology – even beyond European borders. That is the right step, as AI is a catalyst of the digital transformation, with significant implications for the economy, society, and the environment. Therefore, clear rules for the use of this technology are needed. This will allow Europe to position itself as a progressive market that is ready for the digital age. In its current form, however, the proposal still raises some questions about its practical implementation. Europe cannot afford to risk its digital competitiveness when competing with America and China for the AI leadership position.
Building Trust Through Transparency
Two Key Proposals for AI Regulation to Build Trust
To build trust in AI products, the proposal for AI regulation relies on two key approaches: Monitoring AI risks while cultivating an “ecosystem of AI excellence.” Specifically, the proposal includes a ban on the use of AI for manipulative and discriminatory purposes or to assess behavior through a “social scoring system”. Use cases that do not fall into these categories will still have to be screened for hazards and placed on a vague risk scale. Special requirements are placed on high-risk applications, with necessary compliance checks both before and after they are put into operation.
It is crucial that AI applications are to be assessed case by case rather than under the sector-centric regulation considered previously. In last year’s white paper on AI and trust, the European Commission called for labeling all applications in business sectors such as healthcare or transportation as “high-risk”. This blanket classification by industry, regardless of the actual use case, would have been obstructive and would have meant structural disadvantages for entire European industries. The case-by-case assessment allows for the agile and innovative development of AI in all sectors and subjects all industries to the same standards for risky AI applications.
Clear Definition of Risks of an AI Application Is Missing
Despite this new approach, the proposal for AI regulation lacks a concise process to assess the risks of new applications. Since developers themselves are responsible for evaluating their applications, a clearly defined scale for risk assessment is essential. Articles 6 and 7 circumscribe various risks and give examples of “high-risk applications”, but a transparent process for assessing new AI applications is yet to be defined. Startups and smaller companies are heavily represented among AI developers. These companies, in particular, rely on clearly defined standards and processes to avoid being left behind by larger competitors with more appropriate resources. This requires practical guidelines for risk assessment.
If a use case is classified as a “high-risk application”, then various requirements on data governance and risk management must be met before the product can be launched. For example, training data must be tested for bias and inequalities. Also, the model architecture and training parameters must be documented. After deployment, human oversight of the decisions made by the model must be ensured.
Accountability for AI products is a noble and important goal. However, the practical implementation of these requirements once more remains questionable. Many modern AI systems no longer use the traditional approach of static training and testing data: reinforcement learning, for instance, relies on exploratory training through feedback rather than a testable data set. And even though advances in explainable AI are steadily shedding light on the decision-making processes of black-box models, the complex model architectures of many modern neural networks make individual decisions almost impossible to trace.
The proposal also announces requirements for the accuracy of trained AI products. This poses a particular challenge for developers, because no AI system achieves perfect accuracy – nor is that ever the objective, as error trade-offs are typically tuned so that misclassifications have as little impact as possible in the given use case. It is therefore imperative that performance requirements for predictions and classifications be determined case by case and that universal performance requirements be avoided.
Enabling AI Excellence
Europe is Falling Behind
With these requirements, the proposal for AI regulation seeks to inspire confidence in AI technology through transparency and accountability. This is a first, right step toward “AI excellence.” In addition to regulation, however, Europe as a location for Artificial Intelligence must also become more attractive to developers and investors.
According to a recently published study by the Center for Data Innovation, Europe is already falling behind both the United States and China in the battle for global leadership in AI. China has now surpassed Europe in the number of published studies on Artificial Intelligence and has taken the global lead. European AI companies are also attracting significantly less investment than their U.S. counterparts. European AI companies invest less money in research and development and are also less likely to be acquired than American companies.
A Step in the Right Direction: Supporting Research and Innovation
The European Commission recognizes that more support for AI development is needed for excellence on the European market and promises regulatory sandboxes, legal leeway to develop and test innovative AI products, and co-funding for AI research and testing sites. This is needed to make startups and smaller companies more competitive and foster European innovation and competition.
These are necessary steps to lift Europe onto the path to AI excellence, but they are far from being sufficient. AI developers need easier access to markets outside the EU, facilitating the flow of data across national borders. Opportunities to expand into the U.S. and collaborate with Silicon Valley are essential for the digital industry due to how interconnected digital products and services have become.
What is entirely missing from the proposal for AI regulation is education about AI and its potential and risks outside of expert circles. As artificial intelligence increasingly permeates all areas of everyday life, education will become more and more critical. To build trust in new technologies, they must first be understood. Educating non-specialists about both the potential and limitations of AI is an essential step in demystifying Artificial Intelligence and strengthening trust in this technology.
Potential Not Yet Fully Tapped
With this proposal, the European Commission recognizes that AI is leading the way for the future of the European market. Guidelines for a technology of this scope are important – as is the promotion of innovation. For these strategies to bear fruit, their practical implementation must also be feasible for startups and smaller companies. The potential for AI excellence is abundant in Europe. With clear rules and incentives, it can also be realized.