Imagine you could automate everyday routine tasks while creating space for creative and innovative activities. This is exactly what the new AI chatbot hagebauGPT enables for the employees of hagebau – a Europe-wide network of wholesale and retail traders in the fields of building materials, wood, tiles, and DIY.
With hagebauGPT, employees can access company databases and internal knowledge sources securely and efficiently. This technology not only promotes the safe handling of generative AI but also improves business processes within the company. In the long term, this is expected to lead to greater efficiency and support employees in their daily work routine. In short, hagebauGPT impressively demonstrates how tailored AI solutions can create real benefits and a new dimension in the workplace.
The Challenge
When the new generative AI tools like ChatGPT, Midjourney, and others were released, hagebau quickly recognized the potential of these technologies to support their internal processes. However, with limited IT resources, the company faced the challenge of effectively implementing these innovative solutions.
With statworx as a strategic partner, hagebau decided to develop its own data-secure chatbot – hagebauGPT. This chatbot is based on the CustomGPT platform from statworx, which was specifically tailored to the needs of the company. In addition to secure integration into the hagebau cloud, CustomGPT offers the ability to integrate industry-specific functionalities and a customized user interface that aligns with the company’s brand guidelines.
The Technology
hagebauGPT uses Retrieval-Augmented Generation (RAG) to enrich generative AI models with specific knowledge from external data sources. The process consists of three steps: first, relevant knowledge is retrieved from the available data (Retrieval); then the retrieved context and the user’s question are combined into an instruction for the language model (Augmentation); finally, the model uses this instruction to generate a precise answer (Generation). RAG is particularly useful for answering specific questions, as it targets the relevant parts of a dataset and thus reduces the risk of errors. Semantic search plays a crucial role here: it matches meanings rather than just keywords, so relevant information can be found efficiently across diverse data sources.
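The three steps above can be sketched in a few lines of Python. This is only a minimal illustration with a toy bag-of-words "embedding" and invented example documents; a production system like hagebauGPT would use a neural embedding model and send the final prompt to a real language model for the Generation step.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Returns are accepted within 30 days with a receipt.",
    "Our stores open at 8 am on weekdays.",
]

def retrieve(question, docs, k=1):
    # Step 1 (Retrieval): rank documents by similarity to the question.
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def augment(question, context):
    # Step 2 (Augmentation): build the instruction for the language model.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

question = "When do the stores open?"
context = "\n".join(retrieve(question, documents))
prompt = augment(question, context)
# Step 3 (Generation): 'prompt' would now be sent to a language model.
```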
A typical use case for RAG is the deployment in FAQ bots, which use structured FAQ databases to respond to user inquiries. For unstructured data, such as technical manuals or marketing materials, advanced strategies are necessary to transform them into searchable formats. Here, RAG can be further optimized by combining semantic vector search and fuzzy keyword search. This hybrid search method ensures that both precise and contextually relevant information is efficiently identified.
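The hybrid combination of semantic and fuzzy keyword search can be illustrated with a small sketch. Both scoring functions here are deliberately simplified stand-ins (a real system would compare embedding vectors for the semantic part and use a dedicated fuzzy index for keywords), and the documents are invented:

```python
import difflib

def keyword_score(query, doc):
    # Fuzzy keyword match: best character-level similarity of each
    # query word against any word in the document, then averaged.
    q_words = query.lower().split()
    d_words = doc.lower().split()
    scores = [
        max((difflib.SequenceMatcher(None, qw, dw).ratio() for dw in d_words),
            default=0.0)
        for qw in q_words
    ]
    return sum(scores) / len(scores) if scores else 0.0

def semantic_score(query, doc):
    # Stand-in for vector similarity: fraction of query words found exactly.
    q_words = set(query.lower().split())
    shared = q_words & set(doc.lower().split())
    return len(shared) / max(len(q_words), 1)

def hybrid_search(query, docs, alpha=0.5):
    # Weighted combination of both scores; alpha balances the two signals.
    return max(docs, key=lambda d: alpha * semantic_score(query, d)
                                   + (1 - alpha) * keyword_score(query, d))

documents = [
    "installation manual for the power drill",
    "spring marketing brochure",
]
best = hybrid_search("instalation manual", documents)
# The misspelled "instalation" still finds the manual via the fuzzy component.
```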
The Outcome
The chatbot offers a variety of features, including processing voice inputs and interacting with internal manuals. Users can also upload and edit their own documents. Thanks to RAG, hagebauGPT integrates company data and also provides control over data security and privacy, as all data remains within the EU. These functionalities not only promote efficiency but also the creativity of employees by enabling new ways of interaction and problem-solving.
After a successful pilot phase, hagebauGPT was made accessible to all employees in May 2024. The response was overwhelmingly positive: many employees actively use the chatbot and contribute new ideas for further use cases. hagebau’s journey with hagebauGPT thus illustrates how targeted investments in AI technology can yield long-term benefits. The company plans to further expand the chatbot’s functionalities, with a particular focus on efficiency optimization. Through integration into existing business applications and the continuous inclusion of employee feedback, the platform will be improved further and new, innovative applications explored.
Conclusion
The collaboration between hagebau and statworx impressively demonstrates how AI-powered technologies can not only increase efficiency but also provide a platform for creative solutions. Companies wishing to follow similar paths can derive concrete best practices from this.
CustomGPT offers companies the opportunity to meet their specific business requirements while ensuring data protection and security. In our case study with hagebau, you can read in detail how the implementation of a CustomGPT solution could also work in your company.
Epoch 4 – Outlook: What’s Next?
Welcome to the final installment of our blog series on the history of generative artificial intelligence! So far, we have explored the journey from the earliest statistical models to neural networks and modern applications. But what does the future hold for us? In this concluding part, we will examine the upcoming challenges and opportunities for generative AI.
Interpolating vs. Extrapolating
A central aspect in the advancement of GenAI is the transition from interpolation to extrapolation. While today’s models like GPT-4o and DALL-E 3 deliver impressive performances within the trained data space (interpolation), the capability for extrapolation—creating content beyond the learned scope—is still in its infancy. The next generation of models might aim to surpass this boundary, generating even more creative and versatile content. Whether and how this will happen is currently a hotly debated topic. As of now, there are no concrete concepts on what this new generation of extrapolating models might look like.
Agents
Another exciting area is AI agents. These intelligent systems can operate autonomously, make decisions, and execute tasks without human intervention. This capability sets them apart from ChatGPT and other chatbots, which can “only” provide useful answers to queries. Such agents could, in the future, take on complex tasks in various fields such as medicine, finance, or customer service, far exceeding today’s capabilities.
Ethical and Legal Questions
The growing prevalence of GenAI also brings ethical and legal challenges. Addressing bias—prejudiced or discriminatory outcomes—remains a critical issue. Moreover, ethical standards and legal frameworks for the use of third-party GenAI and proprietary models must be developed to minimize misuse and negative impacts. Currently, intellectual property rights are a focal point. The verdicts in the legal battles between Stability AI and Getty Images, OpenAI and the New York Times, as well as Universal, Sony, and Warner against Suno and Udio, are eagerly anticipated.
From Model to System
A significant development is the shift from individual models to integrated systems. What does this mean in practice? Generative AI will be embedded into larger systems that close security gaps and enhance the reliability of applications. An example: ChatGPT does not execute terminal commands directly but instead calls a custom API with predefined behavior. This integration allows the benefits of GenAI to be harnessed while minimizing potential risks.
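The principle can be illustrated with a small sketch (action names and return values are invented): the model may only request actions from a predefined API, and the surrounding system validates every request before anything is executed.

```python
# The model never runs commands directly; it may only request actions
# from a small, predefined API, and the system validates every call.
ALLOWED_ACTIONS = {
    "list_files": lambda: ["report.pdf", "notes.txt"],
    "current_time": lambda: "2024-01-01T12:00",
}

def execute_model_request(action_name):
    # The system, not the model, decides what actually runs.
    if action_name not in ALLOWED_ACTIONS:
        return {"error": f"action '{action_name}' is not permitted"}
    return {"result": ALLOWED_ACTIONS[action_name]()}
```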
Outlook and Conclusion
The future of generative artificial intelligence is both promising and challenging. The capability for extrapolation, the development of autonomous agents, and the integration of models into secure systems are just some of the exciting developments that await us. At the same time, we must continually address ethical and legal questions to ensure the responsible use of these powerful technologies.
Overall, the history of generative AI shows how far we have come—from the first statistical models to highly advanced, multimodal systems. But the journey is far from over. The next few years promise further significant leaps. It is up to all of us to translate technological advancements into societal progress.
This was the fourth and final part of our series on the history and future of generative artificial intelligence. We hope you enjoyed reading it as much as we enjoyed writing it. If you want to learn more about AI, you can find many more blog posts, whitepapers, and interviews on our website.
Welcome back to our blog series on the history of generative Artificial Intelligence! In the last part, we examined the transition from traditional statistical models to neural networks and the first major breakthroughs in AI. In this installment, we focus on the current developments and practical applications that have brought generative AI into the hands of the general population.
Epoch 3 – Transition
Period: November 2022 – Present
| Period | Paradigms | Techniques | User Profile | Examples |
|--------|-----------|------------|--------------|----------|
| Nov 2022 – present | Plug & Play, text-to-anything, multimodality, open-source hype | RLHF, APIs, PEFT, RAG | General public using chat interfaces; IT experts using APIs and open-source models | Text: ChatGPT, Bard, Mistral; Image: Stable Diffusion, DALL-E, Midjourney; Video: Runway ML, Pika Labs; Audio: Voicebox, MusicGen, Suno |
The Breakthrough of Language Models
Although language models like GPT-3 could already write convincing texts and, with the right prompt, even retrieve information, they were initially not very user-friendly. Besides the technical hurdle that an interface to a language model (API) required programming knowledge, these models were not yet capable of natural conversations.
A significant advancement came in January 2022, when OpenAI fine-tuned GPT-3 to follow instructions rather than merely complete sentences. The result, InstructGPT, can be seen as a clear precursor to the breakthrough of ChatGPT in November 2022.
Not only could ChatGPT engage in natural conversations with up to 3,000 words—it also emerged as a promising assistant for various everyday tasks. Packaged in an accessible web application, the release of ChatGPT marked a watershed moment in AI and technology history. Instead of leaving automation to IT experts, office tasks such as writing emails or summarizing texts could now be partially automated by average users as needed. It is no wonder that Andrej Karpathy, a founding member of OpenAI and former AI director at Tesla, tweeted:
Multimodal Generative AI
But to think of modern generative AI only in terms of text would be to overlook the impressive development of multimodal GenAI models. Since April 2022, DALL-E 2 has been able to generate realistic drawings, artworks, and photographs based on short text prompts. Commercial GenAI platforms like RunwayML have, since February 2023, even enabled the animation of images or the creation of complete videos based solely on text prompts. Creating music or sound effects with AI has likewise become quick and accessible to everyone. Early models like Google’s MusicLM (January 2023) or Meta’s AudioGen (August 2023) did not yet deliver studio-quality sounds but already demonstrated the technology’s potential. The major breakthrough in GenAI for audio came in the spring of 2024, when Suno, Udio, and ElevenLabs generated high-quality songs and sounds, sparking a significant debate on copyright and fair use.
Who Benefits?
With all these powerful AI models, the question arises: who benefits from these new technologies? Is it once again only the large tech companies, particularly those with a poor reputation regarding data privacy? The answer is: partly, yes. While major breakthroughs are still often led by Microsoft, Google, and other big players, smaller, freely available models—known as open-source models—are increasingly achieving significant successes. The language model from the French startup Mistral AI recently outperformed OpenAI’s GPT-3.5 on standard benchmarks, and did so with a much more resource-efficient and faster model than its Silicon Valley competitor. With Meta among the leading open-source developers, particularly through its Llama models, even one of the world’s largest tech companies is contributing to this trend. Those who wish for generally available, private AI assistants can look forward to a bright future.
Challenges and Opportunities
The third phase in the history of GenAI is characterized by the widespread availability of high-performance AI models, either through commercial web applications and platforms or freely available open-source models. Companies are increasingly realizing that the value creation through AI is not solely tied to the availability of highly qualified IT experts. Rather, it is essential to maximize the benefits created by AI through the broad application of existing technologies. Nonetheless, numerous challenges remain, including the security of input and output data or the fairness of AI decisions.
How Do We Move to the Next Level?
A key to the success of this epoch is the paradigm of “Plug & Play”: models like ChatGPT and DALL-E 2 can be used easily and without deep technical knowledge. Fine-tuning with “Reinforcement Learning from Human Feedback” (RLHF) made the models conversational, and simple web and API interfaces made them broadly accessible. Andrej Karpathy’s statement that “the hottest new programming language is English” underscores this democratization of AI usage.
Another crucial aspect is the fine-tuning of models to human preferences, which has significantly improved user-friendliness and applicability. At the same time, open-source models are experiencing a boom, as they can run on regular computers, making them accessible to a broader user base.
Ethical and legal questions are also in focus, especially in dealing with third-party GenAI and proprietary models. Issues such as bias and fairness are not to be underestimated, as they significantly influence the acceptance and integrity of AI applications.
What’s Next?
In the next part of our series, we take a look into the future of generative AI. Don’t miss out as we explore the upcoming challenges and opportunities in the world of generative artificial intelligence.
Don’t miss Part 4 of our blog series.
Welcome to our four-part blog series on the history of Generative Artificial Intelligence. Our journey through history will highlight the significant milestones and show how the entire concept of generative AI has fundamentally transformed with each step of development. From the early attempts of sketching probability distributions with pen and paper to today’s sophisticated algorithms that generate complex and creative content – each of the four steps marks a revolution, not just an update.
Why is the history of Generative AI so exciting? Because it demonstrates how each technological advancement has not only changed the methods but also the assumptions, usage, audience, and interaction with the models. What began as a tool for statistical analysis is today a creative partner capable of producing art, music, text, and much more.
Join us on this journey through the history of GenAI.
Epoch 1 – Foundations
A well-kept secret: If you rearrange the letters of “Data Science”, you get “Statistics”. Just kidding. But it is true that the roots of data science go back to the 18th century. Back then, α, Θ, and other mathematical symbols had more of a mothball charm than venture capital appeal.
Mathematicians like Gauss, Bayes, and a number of clever Frenchmen recognized the value of counting early on. They counted, recounted, and compared the results – all by hand and very laboriously. Yet these methods are still relevant and proven today – a true evergreen!
With the invention and availability of electricity, a new era began. Data could now be processed and analyzed much more efficiently. The idea of an “electronic marble run” for data emerged – a system with switches and paths that triggered various actions based on data input, such as lighting a bulb or executing a function.
An early, actually functional form of Artificial Intelligence (AI) was born: algorithms based on observations and derived rules.
| Period | Paradigms | Techniques | User Profile | Examples |
|--------|-----------|------------|--------------|----------|
| 1700–1960 | Pen, soldering iron, punch card | Counting, sorting, making assumptions | Engineers, manufacturers, researchers | Accounting, assembly lines, natural sciences |
| 1960–2010 | Programming application-specific code | The same as before, but automated | Statisticians, computer scientists, early data scientists, and machine learning researchers | Spam filters, (sentiment) text analysis, Optical Character Recognition (OCR) |
What Makes These Early Models Generative?
Well, the “electronic marble run” could also be operated in reverse. Forward, it was a statistical model that assigned a category or value to an observation. For this, the model needed to have an idea of the data. Backward, however, high-probability examples of mushrooms, marbles, data – in other words, images or tables – could be generated through random draws. The generative capabilities of the models were often underestimated, as the forward function was the focus.
This methodology is called the Naïve Bayesian Classifier. “Naïve” here is not meant derogatorily but refers to simplifying assumptions that make modeling significantly easier. With naïve methods, one does not have to assume complex relationships between variables like mycelium, stem, and cap of a mushroom. One simply says: If the average quality of all three parts is good enough, then the mushroom is good.
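The mushroom example, including the “reverse” generative direction described above, can be sketched in a few lines (all probabilities are invented for illustration):

```python
import random

# Toy Naïve Bayes for "good" vs "bad" mushrooms with three binary features:
# whether mycelium, stem, and cap each look healthy. Probabilities are invented.
p_good = 0.6
p_feature_given_class = {
    "good": {"mycelium": 0.9, "stem": 0.8, "cap": 0.85},
    "bad":  {"mycelium": 0.3, "stem": 0.4, "cap": 0.2},
}

def likelihood(features, label):
    # Naïve assumption: the features are independent given the class.
    prob = p_good if label == "good" else 1 - p_good
    for name, present in features.items():
        p = p_feature_given_class[label][name]
        prob *= p if present else 1 - p
    return prob

def classify(features):
    # Forward direction: pick the class with the higher joint probability.
    return max(["good", "bad"], key=lambda lbl: likelihood(features, lbl))

def generate(label, rng):
    # Backward direction: sample a plausible mushroom from the class distribution.
    return {name: rng.random() < p
            for name, p in p_feature_given_class[label].items()}

healthy = {"mycelium": True, "stem": True, "cap": True}
label = classify(healthy)                      # all three parts look fine
sample = generate("good", random.Random(42))   # draw a plausible "good" mushroom
```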
Some of the first applications of these models included handwriting recognition (for example, at the post office, known as Optical Character Recognition or OCR), as well as spam filters and general text analyses up to this day.
This was the first insight into the foundations of generative Artificial Intelligence. In the next part of our series, we will dive into the world of neural networks and machine learning, which laid the foundation for modern AI systems. Stay curious and don’t miss the next milestone in the history of generative AI!
Don’t miss Part 2 of our blog series.
Welcome back to our series on the history of generative Artificial Intelligence. In the first part, we explored the basics and saw how early statistical models like the Naïve Bayesian Classifier paved the way for today’s AI. Now, we take a significant leap forward and delve into the second epoch—a transitional period where neural networks and GPUs take center stage and revolutionize the world of AI.
Epoch 2 – Transition
From 2015 – Awe in the Pre-Stage
The AI winter is over, and neural networks as well as GPUs (Graphics Processing Units) have made their grand entrance. However, these new technological marvels are largely confined to tech experts. But that doesn’t mean impressive products and applications aren’t being developed—quite the opposite! StyleGANs (Generative Adversarial Networks) deliver unprecedented image quality, and transformer models like BERT (Bidirectional Encoder Representations from Transformers) capture text with remarkable detail.
Despite this, direct operation of these models remains out of reach for the general public due to their technical and specific nature. One must choose, expand, link, and train specific models and architectures. Nevertheless, applications such as chatbots, customer service automation, generative design, and AutoML solutions make their way to the market.
| Period | Paradigms | Techniques | User Profile | Examples |
|--------|-----------|------------|--------------|----------|
| 2015–2019 | Latent spaces, embeddings | Masked language models, GANs | Programmers, data scientists | BERT, StyleGAN |
| 2019–2022 | Text prompts | Few-shot, prompt engineering | Programmers (API), end users | GPT-3 |
From 2019 – Lift-off
“Bigger is better” becomes the new credo. Open source is sidelined, and the world of Natural Language Processing (NLP) is turned on its head: Large Language Models (LLMs) are here! However, the first of these models, GPT-2, is initially withheld in early 2019 over concerns of potential misuse:
“The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse.” (Guardian, 14.02.2019)
The words “Musk,” “nonprofit,” and “fear of misuse” in one sentence – almost surreal in hindsight. By the end of the year, GPT-2 is eventually released. It finds significant use in research to explore the fundamental properties of LLMs and later serves to understand the implications of advancements by comparing it to larger models.
In 2020, GPT-3 follows—with ten times more data and a model a hundred times larger. In 2021, DALL-E is introduced, followed by DALL-E 2 in 2022. Texts can now be processed and created using natural (written) language, although not yet in the now-familiar dialogue form, but through Few-Shot Prompts. Images are a different story: DALL-E and DALL-E 2 cannot take example images in the prompt. In the few-shot paradigm, which is still common in non-chat variants of GPT models today, the model is not trained to conduct a conversation but merely to complete texts. It therefore needs examples, such as question-answer pairs, to understand how the text should continue.
An example of a Few-Shot Prompt: After three provided examples, the actual user input follows up to the word “Label:”, expecting the model to grasp the task or meaning and continue the text by giving the correct answer.
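A few-shot prompt of this kind might look as follows (the sentiment-classification examples are invented for illustration):

```python
# A completion-style model such as GPT-3 receives this entire text and is
# expected to continue after the final "Label:" with the correct answer.
few_shot_prompt = """\
Text: The delivery was fast and the product works perfectly.
Label: positive
Text: The package arrived damaged and support never replied.
Label: negative
Text: Great value for the price, I would order again.
Label: positive
Text: The manual is confusing and the app keeps crashing.
Label:"""
```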
The public, as well as developers, are impressively confronted with state-of-the-art technology, for example, through the first articles written with GPT-3.
In the next part of our series, we will look at the latest developments and the revolution of generative Artificial Intelligence. Read on to discover how we transition from Few-Shot Prompts to practical applications, making generative AI accessible to the general public!
Don’t miss Part 3 of our blog series.
AI chatbots are quickly becoming essential in businesses, but not all chatbots are created equal. Some can be set up swiftly, yet they often lack the customizability and advanced features that could truly elevate performance, such as better customer service responses. Tailor-made solutions that do offer those capabilities can become complex and costly, particularly when they include sophisticated, use-case-specific technologies such as Retrieval-Augmented Generation (RAG), which improves the precision and reliability of generative AI models by letting them draw on a company’s own databases and produce verified facts.
How do chatbots work?
Custom GPT-chatbots absorb vast amounts of text to comprehend contexts and identify patterns. They’re programmed to respond personally to various user inquiries, with customization to specific needs, training with selected data, and integration into platforms such as websites or mobile apps.
statworx’s CustomGPT stands out by combining the best of both worlds: high customizability with quick implementation. This tailor-made solution offers secure and efficient use of ChatGPT-like models, with interfaces that can be designed in a company’s brand style and easily integrated into existing business applications like CRM systems and support tools.
So, what’s crucial when companies seek the ideal chatbot solution?
Requirement Analysis: First, a company’s specific needs should be pinpointed to ensure the chatbot is perfectly tailored. What tasks should it handle? Which departments should it support? What functionalities are necessary?
Model Training: A custom GPT-chatbot needs to be equipped with relevant data and information to assure high accuracy and responsiveness. If the necessary data isn’t available, the technical effort might not be justifiable.
System Integration: Seamless integration of the chatbot into existing communication channels like websites, apps, or social media is crucial for effective use. Different solutions may suit different infrastructures.
Ready to deploy and adaptable
The CustomGPT chatbot from statworx is notable for its quick setup, often within weeks, thanks to a mix of proven standard components and custom adjustments. It supports file uploads and chat, retrieving information securely from the company’s own data. With advanced features such as fact-checking, data filtering, and user feedback integration, it stands apart from other systems.
Moreover, CustomGPT gives companies the freedom to choose their chatbot’s vocabulary, communication style, and overall tone, enhancing brand experience and recognition through personalized, unique interactions. It’s also optimized for mobile displays on smartphones.
Technical Implementation
On the technical front, Python is the core language for CustomGPT’s backend, with statworx developers utilizing FastAPI, a modern web framework that supports both Websockets for stateful communication and a REST API for services. CustomGPT is versatile, suitable for various infrastructures, from a simple cloud function to a machine cluster if needed.
A key feature of its architecture is the connection to a data layer, providing a flexible backend that can quickly adapt to changing conditions and requirements. The frontend application, built with React, seamlessly interacts with the backend, which, for example, leverages the powerful Azure AI search function. This configuration allows for the implementation of custom search solutions and efficient fulfillment of specific requirements.
The benefits at a glance:
Data Protection and Security
CustomGPT ensures all data is stored and processed within the European Union, with full control retained by the company, setting it apart from other GPT-based solutions.
Integration and Flexibility
Its flexible integration into existing business applications is supported by modularity and vendor independence, allowing CustomGPT to adapt to various infrastructures and models, including open-source options.
Features and Customization
CustomGPT’s customization includes integration with organizational data, user role adaptation, and the use of analytics to enhance conversations, offering flexibility and personalization for corporate applications.
Personalized Customer Experience
By tailoring to a company’s specific needs, Custom GPT-chatbots can provide personalized and effective customer interactions.
Efficient Customer Support
CustomGPT chatbots can answer questions, resolve issues, and provide information around the clock, increasing customer satisfaction and efficiency.
Scalability
Companies can effortlessly scale their customer support capacity with GPT-chatbots to maintain consistent service quality, even during high demand.
The time for your own chatbot is now. Therefore, statworx focuses on upstream development with quick deployment and easy implementation. This means all users of a statworx CustomGPT benefit from patches, bug fixes, and new features over time. CustomGPT remains versatile and flexible, meeting the specific, changing needs of companies and addressing complex requirements. Contact us now for a consultation.
We are at the beginning of 2024, a time of fundamental change and exciting progress in the world of artificial intelligence. The next few months are seen as a critical milestone in the evolution of AI as it transforms from a promising future technology to a permanent reality in the business and everyday lives of millions. Together with the AI Hub Frankfurt, the central AI network in the Rhine-Main region, we are therefore presenting our trend forecast for 2024, the AI Trends Report 2024.
The report identifies twelve dynamic AI trends that are unfolding in three key areas: Culture and Development, Data and Technology, and Transparency and Control. These trends paint a picture of the rapid changes in the AI landscape and highlight the impact on companies and society.
Our analysis is based on extensive research, industry-specific expertise and input from experts. We highlight each trend to provide a forward-looking insight into AI and help companies prepare for future challenges and opportunities. However, we emphasize that trend forecasts are always speculative in nature and some of our predictions are deliberately bold.
Directly to the AI Trends Report 2024!
What is a trend?
A trend is different from both a short-lived fashion phenomenon and media hype. It is a phenomenon of change with a “tipping point” at which a small change in a niche can cause a major upheaval in the mainstream. Trends initiate new business models, consumer behavior and forms of work and thus represent a fundamental change to the status quo. It is crucial for companies to mobilize the right knowledge and resources before the tipping point in order to benefit from a trend.
12 AI trends that will shape 2024
In the AI Trends Report 2024, we identify groundbreaking developments in the field of artificial intelligence. Here are the short versions of the twelve trends, each with a selected quote from our experts.
Part 1: Culture and development
From the 4-day week to omnimodality and AGI: 2024 promises great progress for the world of work, for media production and for the possibilities of AI as a whole.
Thesis I: AI expertise within the company
Companies that deeply embed AI expertise in their corporate culture and build interdisciplinary teams with tech and industry knowledge will secure a competitive advantage. Centralized AI teams and a strong data culture are key to success.
Stefanie Babka, Global Head of Data Culture, Merck
Thesis II: 4-day working week thanks to AI
Thanks to AI automation in standard software and company processes, the 4-day working week has become a reality for some German companies. AI tools such as Microsoft’s Copilot increase productivity and make it possible to reduce working hours without compromising growth.
Dr. Jean Enno Charton, Director Digital Ethics & Bioethics, Merck
Thesis III: AGI through omnimodal models
The development of omnimodal AI models that mimic human senses brings the vision of general artificial intelligence (AGI) closer. These models process a variety of inputs and extend human capabilities.
Dr. Ingo Marquart, NLP Subject Matter Lead, statworx
Thesis IV: AI revolution in media production
Generative AI (GenAI) is transforming the media landscape and enabling new forms of creativity, but still falls short of transformational creativity. AI tools are becoming increasingly important for creatives, but it is important to maintain uniqueness against a global average taste.
Nemo Tronnier, Founder & CEO, Social DNA
Part 2: Data and technology
In 2024, everything will revolve around data quality, open source models and access to processors. The operators of standard software such as Microsoft and SAP will benefit greatly because they occupy the interface to end users.
Thesis V: Challengers for NVIDIA
New players and technologies are preparing to shake up the GPU market and challenge NVIDIA’s position. Startups and established competitors such as AMD and Intel are looking to capitalize on the resource scarcity and long wait times that smaller players are currently experiencing and are focusing on innovation to break NVIDIA’s dominance.
Norman Behrend, Chief Customer Officer, Genesis Cloud
Thesis VI: Data quality before data quantity
In AI development, the focus is shifting to the quality of the data. Instead of relying solely on quantity, the careful selection and preparation of training data and innovation in model architecture are becoming crucial. Smaller models with high-quality data can be superior to larger models in terms of performance.
Walid Mehanna, Chief Data & AI Officer, Merck
Thesis VII: The year of the AI integrators
Integrators such as Microsoft, Databricks and Salesforce will be the winners as they bring AI tools to end users. The ability to seamlessly integrate into existing systems will be crucial for AI startups and providers. Companies that offer specialized services or groundbreaking innovations will secure lucrative niches.
Marco Di Sazio, Head of Innovation, Bankhaus Metzler
Thesis VIII: The open source revolution
Open source AI models are competing with proprietary models such as OpenAI’s GPT and Google’s Gemini. With a community that fosters innovation and knowledge sharing, open source models offer more flexibility and transparency, making them particularly valuable for applications that require clear accountability and customization.
Prof. Dr. Christian Klein, Founder, UMYNO Solutions, Professor of Marketing & Digital Media, FOM University of Applied Sciences
Part 3: Transparency and control
The increased use of AI decision-making systems will intensify the debate on algorithm transparency and data protection in 2024, driven by the search for accountability. The AI Act will become a competitive advantage for Europe as a business location.
Thesis IX: AI transparency as a competitive advantage
European AI start-ups with a focus on transparency and explainability could become the big winners, as industries such as pharmaceuticals and finance already place high demands on the traceability of AI decisions. The AI Act promotes this development by demanding transparency and adaptability from AI systems, giving European AI solutions an edge in terms of trust.
Jakob Plesner, Attorney at Law, Gorrissen Federspiel
Thesis X: AI Act as a seal of quality
The AI Act positions Europe as a safe haven for investments in AI by setting ethical standards that strengthen trust in AI technologies. In view of the increase in deepfakes and the associated risks to society, the AI Act acts as a bulwark against abuse and promotes responsible growth in the AI industry.
Catharina Glugla, Head of Data, Cyber & Tech Germany, Allen & Overy LLP
Thesis XI: AI agents are revolutionizing consumption
Personal assistance bots that make purchases and select services will become an essential part of everyday life. Influencing their decisions will become a key element for companies to survive in the market. This will profoundly change search engine optimization and online marketing as bots become the new target groups.
Chi Wang, Principal Researcher, Microsoft Research
Thesis XII: Alignment of AI models
Aligning AI models with universal values and human intentions will be critical to avoid unethical outcomes and fully realize the potential of foundation models. Superalignment, where AI models work together to overcome complex challenges, is becoming increasingly important to drive the development of AI responsibly.
Daniel Lüttgau, Head of AI Development, statworx
Concluding remarks
The AI Trends Report 2024 is more than an entertaining stocktake; it can be a useful tool for decision-makers and innovators. Our goal is to provide our readers with strategic advantages by discussing the impact of trends on different sectors and helping them set the course for the future.
This blog post offers only a brief insight into the comprehensive AI Trends Report 2024. We invite you to read the full report to dive deeper into the subject matter and benefit from the detailed analysis and forecasts.
Business success hinges on how companies interact with their customers. No company can afford to provide inadequate care and support. On the contrary, companies that handle customer inquiries quickly and precisely can distinguish themselves from the competition, build trust in their brand, and retain customers in the long run. Our collaboration with Geberit, a leading manufacturer of sanitary technology in Europe, demonstrates how generative AI can take this to an entirely new level.
What is generative AI?
Generative AI models automatically create content from existing texts, images, and audio files. Thanks to intelligent algorithms and deep learning, this content is hardly distinguishable, if at all, from human-made content. This allows companies to offer their customers personalized user experiences, interact with them automatically, and create and distribute relevant digital content tailored to their target audience. GenAI can also tackle complex tasks by processing vast amounts of data, recognizing patterns, and learning new skills. This technology enables unprecedented gains in productivity. Routine tasks like data preparation, report generation, and database searches can be automated and greatly optimized with suitable models.
The Challenge: One Million Emails
Geberit faced a challenge: every year, one million emails landed in various mailboxes of the customer service department of Geberit’s German distribution company. It was common for inquiries to end up in the wrong departments, leading to significant additional effort.
The Solution: An AI-powered Email Bot
To eliminate this misdirection, we developed an AI system that automatically assigns emails to the correct departments. This intelligent classification system was trained on a dataset of anonymized customer inquiries and uses advanced machine learning and deep learning methods, including Google’s BERT model.
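The routing step of such a system can be sketched as follows. This is a minimal illustration, not Geberit’s actual implementation: the fine-tuned BERT classifier is stubbed out with a toy keyword heuristic, and the department names, confidence threshold, and manual-triage fallback are all hypothetical. In production, the stub would be replaced by an inference call to the fine-tuned model.

```python
# Minimal sketch of email routing based on classifier scores.
# The real system would obtain these scores from a fine-tuned BERT
# model; here the model is stubbed so the routing logic stands alone.

CONFIDENCE_THRESHOLD = 0.6  # hypothetical cutoff for automatic routing

def classify_stub(email_text: str) -> dict[str, float]:
    """Stand-in for a fine-tuned BERT classifier.

    Returns a score per department. A real implementation would
    tokenize the email and run it through the model's softmax head.
    """
    text = email_text.lower()
    scores = {"technical_support": 0.1, "billing": 0.1, "spare_parts": 0.1}
    if "invoice" in text:
        scores["billing"] = 0.8
    elif "replacement" in text or "spare" in text:
        scores["spare_parts"] = 0.75
    elif "leak" in text or "install" in text:
        scores["technical_support"] = 0.9
    return scores

def route_email(email_text: str) -> str:
    """Route an email to the highest-scoring department, falling
    back to manual triage when the model is unsure."""
    scores = classify_stub(email_text)
    department, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        return "manual_triage"  # low confidence: let a human decide
    return department

print(route_email("My shower valve has a leak, please help."))
```

The confidence fallback matters in practice: agents only review drafts and corrections for the uncertain cases, while clear-cut inquiries flow to the right mailbox automatically.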
The Highlight: Automated Response Suggestions with ChatGPT
But the innovation didn’t stop there. The system was further developed to generate automated response emails. ChatGPT is used to create customer-specific suggestions. Customer service agents only need to review the generated emails and can send them directly.
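A response suggestion like this is typically produced by prompting the language model with the incoming email plus routing context. The sketch below only assembles such a chat-style prompt; the actual generation step (a call to ChatGPT via the OpenAI API) is omitted, and the function name, system instructions, and placeholder policy are assumptions for illustration, not Geberit’s actual prompt.

```python
# Sketch of prompt construction for automated response suggestions.
# The generation step itself (a ChatGPT / OpenAI API call) is omitted;
# only the message payload is assembled here.

def build_response_prompt(email_text: str, department: str) -> list[dict]:
    """Assemble a chat-style prompt asking the model to draft a reply.

    The system message constrains tone and scope; a customer service
    agent still reviews the draft before it is sent.
    """
    system = (
        "You are a customer service assistant for a sanitary technology "
        f"manufacturer. Draft a polite, concise reply on behalf of the "
        f"{department} team. Do not promise anything the team cannot "
        "verify; leave placeholders for order numbers and dates."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": email_text},
    ]

messages = build_response_prompt(
    "My flush plate arrived damaged. Can I get a replacement?",
    department="spare_parts",
)
print(messages[0]["content"])
```

Keeping the agent as the final reviewer is the key design choice here: the model accelerates drafting, while responsibility for what is actually sent stays with a human.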
The Result: 70 Percent Better Sorting
The result of this groundbreaking solution speaks for itself: a reduction of misassigned emails by over 70 percent. This means not only significant time savings of almost three full working months but also an optimization of resources. The success of the project is making waves at Geberit: a central mailbox for all inquiries, expansion into other country markets, and even a digital assistant are being planned.
Customer Service 2.0 – Innovation, Efficiency, Satisfaction
The introduction of GenAI has not only revolutionized Geberit’s customer service but also demonstrates the potential of applying AI technologies in a targeted way. Intelligent classification of inquiries and automated response generation not only save resources but also increase customer satisfaction. A pioneering example of how AI is shaping the future of customer service.