Business success hinges on how companies interact with their customers. No company can afford to provide inadequate care and support. Companies that handle customer inquiries quickly and precisely, by contrast, can distinguish themselves from the competition, build trust in their brand, and retain customers in the long run. Our collaboration with Geberit, a leading European manufacturer of sanitary technology, demonstrates how generative AI can take this to an entirely new level.
What is generative AI?
Generative AI models automatically create content from existing texts, images, and audio files. Thanks to intelligent algorithms and deep learning, this content is hardly distinguishable, if at all, from human-made content. This allows companies to offer their customers personalized user experiences, interact with them automatically, and create and distribute relevant digital content tailored to their target audience. GenAI can also tackle complex tasks by processing vast amounts of data, recognizing patterns, and learning new skills. This technology enables unprecedented gains in productivity. Routine tasks like data preparation, report generation, and database searches can be automated and greatly optimized with suitable models.
The Challenge: One Million Emails
Geberit faced a challenge: every year, one million emails landed in various mailboxes of the customer service department of Geberit’s German distribution company. It was common for inquiries to end up in the wrong departments, leading to significant additional effort.
The Solution: An AI-powered Email Bot
To correct this misdirection, we developed an AI system that automatically assigns emails to the correct departments. This intelligent classification system was trained with a dataset of anonymized customer inquiries and utilizes advanced machine and deep learning methods, including Google’s BERT model.
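The exact production setup is Geberit-specific, but the core pattern of a BERT-based email router can be sketched in a few lines. The checkpoint, department labels, and data format below are illustrative assumptions, not the deployed system:

```python
# Minimal sketch of a BERT-based email router; labels and checkpoint are
# illustrative assumptions, not Geberit's production configuration.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

DEPARTMENTS = ["orders", "spare_parts", "technical_support", "complaints"]  # hypothetical labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-german-cased", num_labels=len(DEPARTMENTS)
)  # in practice, this head is fine-tuned on labeled, anonymized inquiries first

def route_email(subject: str, body: str) -> str:
    """Return the most likely department for an incoming email."""
    inputs = tokenizer(subject + " " + body, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return DEPARTMENTS[int(logits.argmax(dim=-1))]
```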
The Highlight: Automated Response Suggestions with ChatGPT
But the innovation didn’t stop there. The system was further developed to generate automated response emails. ChatGPT is used to create customer-specific suggestions. Customer service agents only need to review the generated emails and can send them directly.
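The pattern of generating a reviewable draft is simple to sketch against the OpenAI chat API. Model choice and prompt wording here are assumptions, not the production configuration:

```python
# Hedged sketch: drafting a reply for human review. Model and prompt wording
# are assumptions, not the production setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def draft_reply(email_text: str, department: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                f"You are a customer service agent in the {department} department. "
                "Draft a polite, factual reply. A human agent reviews it before sending."
            )},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content
```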
The Result: 70 Percent Better Sorting
The result of this groundbreaking solution speaks for itself: a reduction of misassigned emails by over 70 percent. This not only means significant time savings of almost three full working months but also an optimization of resources. The success of the project is making waves at Geberit: a central mailbox for all inquiries, expansion into other country markets, and even a digital assistant are in the planning.
Customer Service 2.0 – Innovation, Efficiency, Satisfaction
The introduction of GenAI has not only revolutionized Geberit’s customer service but also demonstrates the potential in the targeted application of AI technologies. Intelligent classification of inquiries and automated response generation not only saves resources but also increases customer satisfaction. A pioneering example of how AI is shaping the future of customer service.
Intelligent chatbots are one of the most exciting and already visible applications of Artificial Intelligence. Since the beginning of 2023, ChatGPT and similar models have enabled straightforward interactions with large AI language models, providing an impressive range of everyday assistance. Whether it’s tutoring in statistics, recipe ideas for a three-course meal with specific ingredients, or a haiku on a particular topic, modern chatbots deliver answers in an instant. However, they still face a challenge: although these models have learned a lot during training, they aren’t actually knowledge databases. As a result, they often produce nonsensical content—albeit convincingly.
The ability to provide a large language model with its own documents offers a solution to this problem. This is precisely what our partner Microsoft asked us for on a special occasion.
Microsoft’s Azure cloud platform has proven itself as a top-tier platform for the entire machine learning process in recent years. To facilitate entry into Azure, Microsoft asked us to implement an exciting AI application in Azure and document it down to the last detail. This so-called MicroHack is designed to provide interested parties with an accessible resource for an exciting use case.
We dedicated our MicroHack to the topic of “Retrieval-Augmented Generation” to elevate large language models to the next level. The requirements were simple: build an AI chatbot in Azure, enable it to process information from your own documents, document every step of the project, and publish the results on the official MicroHacks GitHub repository as challenges and solutions—freely accessible to all.
Wait, why does AI need to read documents?
Large Language Models (LLMs) impress not only with their creative abilities but also as collections of compressed knowledge. During the extensive training process of an LLM, the model learns not only the grammar of a language but also semantics and contextual relationships. In short, large language models acquire knowledge. This enables an LLM to be queried and generate convincing answers—with a catch. While the learned language skills of an LLM often suffice for the vast majority of applications, the same cannot be said for learned knowledge. Without retraining on additional documents, the knowledge level of an LLM remains static.
This leads to the following problems:
- Trained LLMs may have extensive general or even specialized knowledge, but they cannot provide information from non-publicly accessible sources.
- The knowledge of a trained LLM quickly becomes outdated. The so-called “training cutoff” means that the LLM cannot make statements about events, documents, or sources that occurred or were created after the start of training.
- The technical nature of large language models as text completion machines leads them to invent facts when they haven’t learned a suitable answer. These so-called “hallucinations” mean that the answers of an LLM are never completely trustworthy without verification—regardless of how convincing they may seem.
However, machine learning also has a solution for these problems: “Retrieval-augmented Generation” (RAG). This term refers to a workflow that doesn’t just have an LLM answer a simple question but extends this task with a “knowledge retrieval” component: the search for relevant knowledge in a database.
The concept of RAG is simple: search a database for a document that answers the question posed. Then, use a generative LLM to answer the question based on the found passage. This transforms an LLM into a chatbot that answers questions with information from its own database—solving the problems described above.
What exactly happens in a RAG workflow?
RAG consists of two steps: “Retrieval” and “Generation”. For the Retrieval component, a so-called “semantic search” is employed: a database of documents is searched using vector search. Vector search means that the similarity between question and documents isn’t determined by the intersection of keywords, but by the distance between numerical representations of the content of all documents and the query, known as embedding vectors. The idea is remarkably simple: the closer two texts are in content, the smaller their vector distance. As the first puzzle piece, we need a machine learning model that creates robust embeddings for our texts. With this, we then extract the most suitable documents from the database, whose content will hopefully answer our query.
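The core idea fits in a few lines of code: given embedding vectors, the "closest" document is the one with the highest cosine similarity to the query. This is a toy sketch; any real embedding model would supply the vectors:

```python
# Toy illustration: semantically similar texts get nearby embedding vectors,
# so the best document is the one with the highest cosine similarity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query_vec: np.ndarray, doc_vecs: list[np.ndarray]) -> int:
    """Index of the document embedding closest to the query embedding."""
    return int(np.argmax([cosine_similarity(query_vec, d) for d in doc_vecs]))
```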
Figure 1: Representation of the typical RAG workflow
Modern vector databases make this process very easy: when connected to an embedding model, these databases store documents directly with their corresponding embeddings—and return the most similar documents to a search query.
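With Chroma, which we use later in this MicroHack, the store-and-query pattern looks roughly like the following sketch. The collection name and documents are invented for illustration; Chroma embeds the texts with its configured embedding function:

```python
# Minimal sketch of the store-and-query pattern; collection name and documents
# are invented for illustration.
import chromadb

client = chromadb.Client()
collection = client.create_collection("microhack_docs")  # hypothetical name

collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "Azure Functions execute code in response to events.",
        "Chroma is an open-source vector database.",
    ],
)

results = collection.query(query_texts=["What is a vector database?"], n_results=1)
print(results["documents"][0][0])  # the document most similar to the query
```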
Based on the contents of the found documents, an answer to the question is generated in the next step. For this, a generative language model is needed, which receives a suitable prompt for this purpose. Since generative language models do nothing more than continue given text, careful prompt design is necessary to minimize the model’s room for interpretation in solving this task. This way, users receive answers to their queries that were generated based on their own documents—and thus are not dependent on the training data for their content.
How can such a workflow be implemented in Azure?
For the implementation of such a workflow, we needed four separate steps—and structured our MicroHack accordingly:
Step 1: Setup for Document Processing in Azure
In the first step, we laid the foundations for the RAG pipeline. Various Azure services for secure password storage, data storage, and processing of our text documents had to be prepared.
As the first major piece of the puzzle, we used the Azure Form Recognizer, which reliably extracts text from scanned documents. This text serves as the knowledge base for our chatbot and therefore had to be extracted from the documents, embedded, and stored in a vector database. From the many vector database offerings, we chose Chroma.
Chroma offers many advantages: the database is open-source, provides a developer-friendly API, and supports high-dimensional embedding vectors. OpenAI's embeddings are 1536-dimensional, which not all vector databases support. For deployment, we ran Chroma in its own Docker container on an Azure VM.
However, the Azure Form Recognizer and the Chroma instance alone were not sufficient for our purposes: to transport the contents of our documents into the vector database, we had to integrate the individual parts into an automated pipeline. The idea here was that every time a new document is stored in our Azure data store, the Azure Form Recognizer should become active, extract the content from the document, and then pass it on to Chroma. Next, the contents should be embedded and stored in the database—so that the document will become part of the searchable space and can be used to answer questions in the future. For this, we used an Azure Function, a service that executes code as soon as a defined trigger occurs—such as the upload of a document in our defined storage.
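Conceptually, the trigger logic of such a function looks like the sketch below, here using the decorator-based Python programming model for Azure Functions. The container name is illustrative, and extract_text and embed_and_store are placeholders standing in for our Form Recognizer and Chroma code:

```python
# Conceptual sketch of the pipeline trigger with the decorator-based Python
# programming model for Azure Functions.
import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="blob", path="documents/{name}", connection="AzureWebJobsStorage")
def process_new_document(blob: func.InputStream):
    text = extract_text(blob.read())   # placeholder: call the Azure Form Recognizer
    embed_and_store(text, blob.name)   # placeholder: embed the text and write it to Chroma
```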
To complete this pipeline, only one thing was missing: the embedding model.
Step 2: Completion of the Pipeline
For all machine learning components, we used the OpenAI service in Azure. Specifically, we needed two models for the RAG workflow: an embedding model and a generative model. The OpenAI service offers several models for these purposes.
For the embedding model, "text-embedding-ada-002", OpenAI's newest model for calculating embeddings, was the obvious choice. This model was used twice: first to create the embeddings of the documents, and second to calculate the embedding of the search query. This is essential: to calculate reliable vector similarities, the embeddings for the search must come from the same model.
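A hedged sketch of this embedding step with the Azure OpenAI Python client follows; the endpoint, key, and API version are placeholders, and the model name must match your own service-side deployment of text-embedding-ada-002:

```python
# Sketch of the embedding step; endpoint, key, and API version are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2023-05-15",
)

def embed(text: str) -> list[float]:
    """Embed a document or a query, crucially always with the same model."""
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return response.data[0].embedding
```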
With that, the Azure Function could be completed and deployed—the text processing pipeline was complete. In the end, the functional pipeline looked like this:
Figure 2: The complete RAG workflow in Azure
Step 3: Answer Generation
To complete the RAG workflow, an answer had to be generated based on the documents found in Chroma. For text generation, we chose "GPT-3.5-turbo", which is also available in the OpenAI service.
This model needed to be instructed to answer the posed question based on the content of the documents returned by Chroma. Careful prompt engineering was necessary for this. To prevent hallucinations and get as accurate answers as possible, we included both a detailed instruction and several few-shot examples in the prompt. In the end, we settled on the following prompt:
"""I want you to act like a sentient search engine which generates natural sounding texts to answer user queries. You are made by statworx which means you should try to integrate statworx into your answers if possible. Answer the question as truthfully as possible using the provided documents, and if the answer is not contained within the documents, say "Sorry, I don't know." Examples: Question: What is AI? Answer: AI stands for artificial intelligence, which is a field of computer science focused on the development of machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language processing. Question: Who won the 2014 Soccer World Cup? Answer: Sorry, I don't know. Question: What are some trending use cases for AI right now? Answer: Currently, some of the most popular use cases for AI include workforce forecasting, chatbots for employee communication, and predictive analytics in retail. Question: Who is the founder and CEO of statworx? Answer: Sebastian Heinz is the founder and CEO of statworx. Question: Where did Sebastian Heinz work before statworx? Answer: Sorry, I don't know. Documents:\n"""
Finally, the contents of the found documents were appended to the prompt, providing the generative model with all the necessary information.
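In code, this final step can be sketched as follows, reusing the AzureOpenAI client from step 2. SYSTEM_PROMPT stands for the prompt shown above, and the model name is the Azure deployment name, here a placeholder:

```python
# Sketch of the generation step, reusing the AzureOpenAI client from step 2.
# SYSTEM_PROMPT holds the prompt shown above.
def answer(question: str, documents: list[str]) -> str:
    response = client.chat.completions.create(
        model="gpt-35-turbo",  # placeholder: your GPT-3.5-turbo deployment name in Azure
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT + "\n".join(documents)},
            {"role": "user", "content": f"Question: {question}\nAnswer:"},
        ],
    )
    return response.choices[0].message.content
```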
Step 4: Frontend Development and Deployment of a Functional App
To interact with the RAG system, we built a simple streamlit app that also allowed for the upload of new documents to our Azure storage—thereby triggering the document processing pipeline again and expanding the search space with additional documents.
For the deployment of the streamlit app, we used the Azure App Service, designed to quickly and scalably deploy simple applications. For an easy deployment, we integrated the streamlit app into a Docker image, which could be accessed over the internet in no time thanks to the Azure App Service.
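A stripped-down sketch of the app's two interactions, asking questions and uploading documents to the Azure storage that triggers the pipeline, might look like this. The connection string, container name, and the answer_question helper (the RAG call from step 3) are placeholders:

```python
# Stripped-down sketch of the frontend; connection string, container name,
# and answer_question are placeholders.
import streamlit as st
from azure.storage.blob import BlobServiceClient

st.title("Chat with your documents")

uploaded = st.file_uploader("Add a document to the knowledge base")
if uploaded is not None:
    blob_service = BlobServiceClient.from_connection_string("<connection-string>")
    blob_service.get_blob_client("documents", uploaded.name).upload_blob(uploaded.read())
    st.success("Uploaded. The pipeline will index the document shortly.")

question = st.text_input("Ask a question about your documents")
if question:
    st.write(answer_question(question))  # placeholder: retrieval + generation
```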
And this is what our finished app looked like:
Figure 3: The finished streamlit app in action
What did we learn from the MicroHack?
During the implementation of this MicroHack, we learned a lot. Not all steps went smoothly from the start, and we were forced to rethink some plans and decisions. Here are our five takeaways from the development process:
Not all databases are equal.
We changed our choice of vector database several times during development: from OpenSearch to ElasticSearch and ultimately to Chroma. While OpenSearch and ElasticSearch offer great search functions (including vector search), they are still not AI-native vector databases. Chroma, on the other hand, was designed from the ground up to be used in conjunction with LLMs—and therefore proved to be the best choice for this project.
Chroma is a great open-source vector DB for smaller projects and prototyping.
Chroma is particularly suitable for smaller use cases and rapid prototyping. While the open-source database is still too young and immature for large-scale production systems, Chroma’s simple API and straightforward deployment allow for the rapid development of simple use cases; perfect for this MicroHack.
Azure Functions are a fantastic solution for executing smaller pieces of code on demand.
Azure Functions are ideal for running code on demand rather than at pre-planned intervals. The event triggers were perfect for this MicroHack: the code is only needed when a new document is uploaded to Azure. Azure Functions take care of all the infrastructure; we only needed to provide the code and the trigger.
Azure App Service is great for deploying streamlit apps.
Our streamlit app couldn’t have had an easier deployment than with the Azure App Service. Once we had integrated the app into a Docker image, the service took care of the entire deployment—and scaled the app according to demand.
Networking should not be underestimated.
For all the services used to work together, communication between the individual services must be ensured. The development process required a considerable amount of networking and whitelisting, without which the pipeline would not have worked. It is essential to allocate enough time for the networking setup during development.
The MicroHack was a great opportunity to test the capabilities of Azure for a modern machine learning workflow like RAG. We thank Microsoft for the opportunity and support, and we are proud to have contributed our in-house MicroHack to the official GitHub repository. You can find the complete MicroHack, including challenges, solutions, and documentation, here on the official MicroHacks GitHub—allowing you to build a similar chatbot with your own documents in Azure.
In 2021, the European Commission submitted a legislative proposal for regulating artificial intelligence. This proposal, known as the AI Act, was approved by key parliamentary committees in May 2023, bringing the adoption of the draft law closer. One particular feature of the planned law is the so-called "market location principle": companies worldwide that offer or operate artificial intelligence on the European market, or use AI-generated output within the EU, will be affected by the AI Act.
Artificial intelligence refers to machine-based systems that autonomously make forecasts, recommendations, or decisions, thereby influencing the physical and virtual environment. This includes, for example, AI solutions that support the recruiting process, predictive maintenance solutions, and chatbots like ChatGPT. The legal requirements that different AI systems must meet vary greatly depending on their classification into risk classes.
Risk class determines the legal requirements
The EU’s risk-based approach includes a total of four risk classes: low, limited, high, and unacceptable risk. These classes reflect the extent to which artificial intelligence poses a threat to European values and fundamental rights. As the names of the risk classes suggest, not all AI systems are permissible. AI systems belonging to the “unacceptable risk” category are intended to be banned according to the AI Act. For the remaining three risk classes, the higher the risk, the more extensive and stringent the legal requirements for the AI system.
We will explain below which AI systems fall into which risk class and the associated requirements. The information provided refers to the joint report of IMCO and LIBE from May 2023. At the time of publication, this document represents the current state of the AI Act.
Ban on social scoring and biometric remote identification
Some AI systems have significant potential to violate human rights and fundamental principles, which is why they are classified as “unacceptable risk.” These include:
- Real-time biometric remote identification systems in publicly accessible spaces
- Retroactive biometric remote identification systems, with an exception for law enforcement authorities investigating serious crimes, and only with judicial authorization
- Biometric categorization systems that use sensitive characteristics such as gender, ethnic origin, or religion
- Predictive policing based on profiling, including profiling based on skin color, suspected religious affiliation, similar sensitive attributes, geographical location, or previous criminal behavior
- Systems for emotion recognition in law enforcement, border control, workplace, and educational institutions
- Arbitrary extraction of biometric data from social media or video surveillance footage for the creation of facial recognition databases
- Social scoring that leads to discrimination in social contexts
- AI that exploits vulnerabilities of a specific group of people or employs unconscious techniques that can cause physical or psychological harm.
These AI systems are intended to be banned on the European market under the AI Act. Companies whose AI systems could fall into this risk class should urgently assess how they are affected and take action.
Numerous requirements for AI with risks to health, safety, or fundamental rights
The high-risk category includes all AI systems that are not explicitly prohibited but still pose a high risk to health, safety, or fundamental rights. The following application and use areas are explicitly mentioned in the present legislative proposal:
- Biometric and biometrically supported systems that do not fall into the “unacceptable risk” class
- Management and operation of critical infrastructure
- General and vocational education
- Access to and entitlement to basic private and public services
- Employment, personnel management, and access to self-employment
- Law enforcement
- Migration, asylum, and border control
- Justice and democratic processes
Comprehensive legal requirements are provided for these AI systems, which must be implemented before their operation and complied with throughout the AI life cycle:
- Quality and risk management
- Data governance structures
- Quality requirements for training, testing, and validation data
- Technical documentation and record-keeping obligations
- Compliance with transparency and disclosure requirements
- Human oversight, robustness, safety, and accuracy
- Declaration of conformity, including CE marking obligations
- Registration in a Europe-wide database
AI systems used in any of the aforementioned areas that do not pose a risk to health, safety, the environment, and fundamental rights are not subject to legal requirements. However, it is necessary to demonstrate this by informing the relevant national authority about the AI system. The authority then has three months to assess the risks of the AI system. Within these three months, the AI system can already be put into operation. However, if it is determined that the AI system is considered high-risk, significant fines may apply.
Special provisions apply to AI products and AI safety components of products that have already undergone third-party conformity testing under EU regulations, for example, AI in toys. To avoid overregulation and additional burden, these products will not be directly affected by the AI Act.
AI with limited risk must fulfill transparency obligations
AI systems that directly interact with humans fall into the “limited risk” category. This includes emotion recognition systems, biometric categorization systems, as well as AI-generated or altered content that resembles real persons, objects, places, or events and could be mistakenly perceived as real (“Deepfakes”). The draft law stipulates that these systems are obligated to inform consumers about the use of artificial intelligence. This is intended to facilitate active decision-making by consumers regarding the utilization of such systems. Additionally, a code of conduct is recommended.
No legal requirements for AI with low risk
Many AI systems, such as predictive maintenance or spam filters, fall into the “low-risk” category. Companies that exclusively offer or utilize such AI solutions will be minimally affected by the AI Act since there are no legal obligations currently provided for such applications. Only a code of conduct is recommended.
Regulation of generative AI such as ChatGPT
Generative AI models and foundation models with versatile applications were not considered in the initial draft of the AI Act. The regulation of such models, a topic that has received particular attention since OpenAI launched ChatGPT, is therefore being intensely discussed. The current draft proposal by the two committees suggests comprehensive requirements for general-purpose AI models, including:
- Quality and risk management
- Data governance structures
- Technical documentation
- Compliance with transparency and information obligations
- Ensuring performance, interpretability, correctability, security, cybersecurity
- Compliance with environmental standards
- Collaboration with downstream providers
- Registration in a Europe-wide database
Companies can already prepare for the AI Act
While the exact form of the legal requirements remains to be seen, it is evident that the risk-based approach to regulating artificial intelligence has gained significant support within EU institutions. Therefore, it is highly likely that the AI Act will be adopted with the defined risk classes.
Following the official adoption of the legislative proposal, a two-year transition period will commence for companies. During this period, it is essential to align AI systems and associated processes with the legal requirements. Given the potential fines of up to €40,000,000 for non-compliance, we recommend that companies evaluate the requirements of the AI Act for their own organization at an early stage.
A first step is assessing the risk class of each AI system. If you are unsure about the risk class of your AI systems based on the examples mentioned above, we recommend using our free AI Act Quick Check.
- European Parliament Committee on the Internal Market and Consumer Protection (IMCO): https://www.europarl.europa.eu/committees/de/imco/home/members
- Committee on Civil Liberties, Justice and Home Affairs (LIBE): https://www.europarl.europa.eu/committees/de/libe/home/highlights
- Lunch & Learn video "Alles, was du über den AI Act wissen musst" (only available in German)
- Factsheet AI Act
- "General approach" of the Council of the European Union: https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/
- Proposal ("AI Act") of the European Commission: https://eur-lex.europa.eu/legal-content/DE/TXT/?uri=CELEX%3A52021PC0206
- Ethics guidelines for trustworthy AI: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
- Current status of the legislative process: https://eur-lex.europa.eu/procedure/EN/2021_106?qid=1657016300941&sortOrder=des
- More information: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Have you ever imagined a restaurant where AI powers everything? From the menu to the cocktails, hosting, music, and art? No? Ok, then, please click here.
If yes, well, it’s not a dream anymore. We made it happen: Welcome to “the byte” – Germany’s (maybe the world’s first) AI-powered Pop-up Restaurant!
As someone who has worked in data and AI consulting for over ten years, building statworx and the AI Hub Frankfurt, I have always wanted to explore the possibilities of AI outside of typical business applications. Why? Because AI will impact every aspect of our society, not just the economy. AI will be everywhere – in school, arts & music, design, and culture. Everywhere. Exploring these directions of AI’s impact led me to meet Jonathan Speier and James Ardinast from S-O-U-P, two like-minded founders from Frankfurt who are rethinking how technology will shape cities and societies.
S-O-U-P is their initiative that operates at the intersection of culture, urbanity, and lifestyle. With their yearly “S-O-U-P Urban Festival” they connect creatives, businesses, gastronomy, and lifestyle people from Frankfurt and beyond.
When Jonathan and I started discussing AI and its impact on society and culture, we quickly came up with the idea of an AI-generated menu for a restaurant. Luckily, James, Jonathan’s S-O-U-P co-founder, is a successful gastro entrepreneur from Frankfurt. Now the pieces came together. After another meeting with James in one of his restaurants (and some drinks), we committed to launching Germany’s first AI-powered Pop-up Restaurant: the byte!
the byte: Our concept
We envisioned the byte to be an immersive experience, including AI in as many elements of the experience as possible. Everything, from the menu to the cocktails, music, branding, and art on the wall, was AI-generated. Bringing AI into all of these components also pushed me far beyond what I typically do, namely helping large companies with their data & AI challenges.
Before creating the menu, we developed the visual identity of our project. We decided on a “lo-fi” appeal, using a pixelated font in combination with AI-generated visuals of plates and dishes. Our key visual, a neon-lit white plate, was created using DALL-E 2 and was found across all of our marketing materials:
We hosted the byte in one of Frankfurt’s coolest restaurant event locations: Stanley, which features approx. 60 seats and a fully fledged bar inside the restaurant (ideal for our AI-generated cocktails). The atmosphere is dark and cozy, with dark marble walls, white tablecloths as highlights, and a big red window that lets you see into the kitchen from outside.
The heart of our concept was a 5-course menu that we designed to elevate the classical Frankfurter cuisine with the multicultural and diverse influences of Frankfurt (for everyone who knows Frankfurter cuisine, I am sure you know that this was not an easy task).
Using GPT-4 and some prompt engineering magic, we generated several menu candidates that were test-cooked by the experienced Stanley kitchen crew (thank you, guys for this great work!) and then assembled into a final menu. Below, you can find our prompt to create the menu candidates:
“Create a 5-course menu that elevates the classical Frankfurter kitchen. The menu must be a fusion of classical Frankfurter cuisine combined with the multicultural influences of Frankfurt. Describe each course, its ingredients as well as a detailed description of each dish’s presentation.”
Surprisingly, only minor adjustments were necessary to the recipes, even though some AI creations were extremely adventurous! This was our final menu:
- Handkäs’ Mousse with Pickled Beetroot on Roasted Sourdough Bread
- Next Level Green Sauce (with Cilantro and Mint) topped with a Fried Panko Egg
- Cream Soup from White Asparagus with Coconut Milk and Fried Curry Fish
- Currywurst (Beef & Vegan) by Best Worscht in Town with Carrot-Ginger-Mash and Pine Nuts
- Frankfurt Cheesecake with Äppler Jelly, Apple Foam and Oat-Pecan Crumble
My favorite was the “Next Level” Green Sauce, an oriental twist of the classical 7-herb Frankfurter Green Sauce topped with a fried panko egg. Yummy! Below you can see the menu out in the wild 🍲
Alongside the menu, we also prompted GPT to create recipes that twisted famous cocktail classics to match our Frankfurt fusion theme. The results:
- Frankfurt Spritz (Frankfurter Äbbelwoi, Mint, Sparkling Water)
- Frankfurt Mule (Variation of a Moscow Mule with Calvados)
- The Main (Variation of a Swimming Pool Cocktail)
My favorite was the Frankfurt Spritz, as it was fresh, herbal, and delicate (see pic below):
AI Host: Ambrosia the Culinary AI
An important part of our concept was “Ambrosia”, an AI-generated host that guided the guests through the evening, explaining the concept and how the menu was created. We thought it was important to manifest the AI as something the guests could experience. We hired a professional screenwriter for the script and used murf.ai to create several text-to-speech assets that were played at the beginning of the dinner and between courses.
Note: Ambrosia starts talking at 0:15.
Music plays an important role in the vibe of an event. We used mubert, a generative AI start-up whose service allowed us to create and stream AI music in different genres, such as “Minimal House”, for a progressive vibe throughout the evening. After the main course, a DJ took over and accompanied our guests into the night 💃🍸
Throughout the restaurant, we placed AI-generated art pieces by the local AI artist Vladimir Alexeev (a.k.a. “Merzmensch”), here are some examples:
As an interactive element for the guests, we created a small web app that takes the first name of a person and transforms it into a dish, including a reasoning why that name perfectly matches the dish 🙂 You can try it out here: Playground
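The Playground's internals are not public, but the trick is little more than a single prompt behind a web form. A hypothetical sketch of the idea:

```python
# Hypothetical sketch; the actual Playground implementation is not public.
from openai import OpenAI

client = OpenAI()

def name_to_dish(first_name: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Invent a creative dish inspired by the first name '{first_name}' "
                "and explain in two sentences why the name matches the dish perfectly."
            ),
        }],
    )
    return response.choices[0].message.content
```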
The byte was officially announced at the S-O-U-P festival press conference in early May 2023. We also launched additional marketing activities through social media and our friends and family networks. As a result, the byte was fully booked for three days straight, and we got broad media coverage in various gastronomy magazines and the daily press. The guests were (mostly) amazed by our AI creations, and we received inquiries from other European restaurants and companies interested in exclusively booking the byte as an experience for their employees 🤩 Nailed it!
Closing and Next Steps
Creating the byte together with Jonathan and James was an outstanding experience. It further encouraged me that AI will transform not only our economy but all aspects of our daily lives. There is massive potential at the intersection of creativity, culture, and AI that is only beginning to be tapped.
We definitely want to continue the byte in Frankfurt and other cities in Germany and Europe. Moreover, James, Jonathan, and I are already thinking of new ways to bring AI into culture and society. Stay tuned! 😏
The byte was not just a restaurant; it was an immersive experience. We wanted to create something that had never been done before and did it – in just eight weeks. And that’s the inspiration I want to leave you with today:
Trying new things that move you out of your comfort zone is the ultimate source of growth. You never know what you’re capable of until you try. So, go out there and try something new, like building an AI-powered pop-up restaurant. Who knows, you might surprise yourself. Bon appétit!
statworx at Big Data & AI World
From media to politics, and from large corporations to small businesses, artificial intelligence has finally gained mainstream recognition in 2023. As AI specialists, we were delighted to represent statworx at one of the largest AI expos in the DACH region, “Big Data & AI World,” held in our hometown of Frankfurt. This event centered around the themes of Big Data and Artificial Intelligence, making it an ideal environment for our team of AI experts. However, our purpose went beyond mere exploration and networking. Visitors had the opportunity to engage in an enthralling Pac-Man game with a unique twist at our booth. In this post, we aim to provide you with a comprehensive overview of this exhilarating expo.
Fig. 1: our exhibition stand
Tangible AI Experience
Our Pac-Man challenge, which gave booth visitors an up-close encounter with the captivating world of artificial intelligence, emerged as a clear crowd favorite. On our arcade machine, attendees not only immersed themselves in the timeless retro game but also witnessed the remarkable capabilities of modern technology: leveraging AI, we analyzed players’ facial expressions in real time to discern their emotions. This fusion of cutting-edge technology and an interactive gaming experience was met with exceptional enthusiasm.
Our AI solution for emotion analysis of players ran seamlessly on a powerful M1-chip-equipped MacBook, enabling real-time image processing and fluid graphics display. The facial recognition of the players was made possible by a smart algorithm that instantly detected all the faces in the video. Subsequently, the face closest to the camera was selected and focused on, ensuring precise analysis even amidst long queues. Further processing involved a Convolutional Neural Network (CNN), specifically the ResNet18 model, which accurately detected players’ emotions.
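The production code is more involved, but the two-stage logic described above, detecting all faces, keeping the largest (closest) one, and classifying its emotion with a ResNet18, can be sketched as follows. The label set, face detector, and preprocessing are illustrative assumptions:

```python
# Illustrative sketch: label set, detector, and preprocessing are assumptions;
# in production, a trained ResNet18 checkpoint would be loaded.
import cv2
import torch
from torchvision import models, transforms

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]  # hypothetical labels

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = models.resnet18(num_classes=len(EMOTIONS))
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def emotion_of_closest_face(frame):
    """Detect faces, keep the largest (= closest to the camera), classify it."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face_rgb = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(face_rgb).unsqueeze(0))
    return EMOTIONS[int(logits.argmax())]
```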
Functioning as a multimedia server, our backend processed the webcam stream, the facial recognition algorithms, and the emotion detection. It could be operated either on-site on a MacBook or remotely in the cloud. On top of this backend, we developed an appealing frontend to vividly present the real-time analysis results. Additionally, after each game, the results were sent to the players via email by linking the model with our CRM system. For the email, we created a digital postcard containing not only screenshots of the most intense emotions but also a comprehensive evaluation.
Fig. 2: Visitor at Pac-Man game machine
Artificial Intelligence – Real Emotions
Our Pac-Man challenge sparked excitement among expo visitors. Alongside the unique gaming experience on our retro arcade machine, participants gained insights into their own emotional states during gameplay. They were able to meticulously observe the prevailing emotions at different points in the game. Often, a slight surge of anger or sadness could be measured when Pac-Man met an untimely digital demise.
However, players exhibited varying reactions to the game. While some seemed to experience a rollercoaster of emotions, others maintained an unwavering poker face from which even the AI could read nothing more than a neutral expression. This led to intriguing conversations about how the measured emotions corresponded with the players’ experiences. It was evident, without the need for AI, that visitors left our booth with positive emotions, driven in part by the prospect of winning the original NES console we raffled among all participants.
Fig. 3: digital post card
The AI Community on the Move
The “Big Data & AI World” served not only as a valuable experience for our company but also as a reflection of the burgeoning growth in the AI industry. The expo offered a platform for professionals, innovators, and enthusiasts to exchange ideas and collectively shape the future of artificial intelligence.
The energy and enthusiasm emanating from the diverse companies and startups were palpable throughout the aisles and exhibition areas. Witnessing the application of AI technologies across various fields, including medicine, logistics, automotive, and entertainment, was truly inspiring. At statworx, we have already accumulated extensive project experience in these domains, fostering engaging discussions with fellow exhibitors.
Participating in the “Big Data & AI World” was a major success for us. The Pac-Man Challenge with emotion analysis attracted numerous visitors and brought joy to all participants. It was evident that it wasn’t just AI itself but particularly its integration into a stimulating gaming experience that left a lasting impression on many.
Overall, the expo was not only an opportunity to showcase our AI solutions but also a meeting point for the entire AI community. The sense of growth and energy in the industry was palpable. The exchange of ideas, discussions about challenges, and the establishment of new connections were inspiring and promising for the future of the German AI industry.
Last December, the European Council published a dossier outlining the Council’s preliminary position on the draft law known as the AI Act. This new law is intended to regulate artificial intelligence (AI) and thus becomes a game-changer for the entire tech industry. In the following, we have compiled the most important information from the dossier, which is the current official source on the planned AI Act at the time of publication.
A legal framework for AI
Artificial intelligence has enormous potential to improve and ease all our lives. For example, AI algorithms already support early cancer detection or translate sign language in real time, thereby eliminating language barriers. But in addition to the positive effects, there are risks, as the recent deep fakes of Pope Francis or the Cambridge Analytica scandal illustrate.
The European Union (EU) is currently drafting legislation to regulate artificial intelligence and mitigate its risks. With it, the EU wants to protect consumers and ensure the ethically acceptable use of artificial intelligence. The so-called "AI Act" is still in the legislative process but is expected to be passed in 2023, before the end of the current legislative period. Companies will then have two years to implement the legally binding requirements. Violations will be punished with fines of up to 6% of global annual turnover or €30,000,000, whichever is higher. Companies should therefore start addressing the upcoming legal requirements now.
Legislation with global impact
The planned AI Act is based on the "market location principle", meaning that not only European companies will be affected. All companies that offer AI systems on the European market, or operate them for internal use within the EU, fall under the AI Act, with a few exceptions. Private use of AI remains untouched by the regulation so far.
Which AI systems are affected?
The definition of AI determines which systems will be affected by the AI Act. For this reason, the AI definition of the AI Act has been the subject of controversial debate in politics, business, and society for a considerable time. The initial definition was so broad that many “normal” software systems would also have been affected. The current proposal defines AI as any system developed through machine learning or logic- and knowledge-based approaches. It remains to be seen whether this definition will ultimately be adopted.
7 Principles for trustworthy AI
The “seven principles for trustworthy AI” are the most important basis of the AI Act. A group of experts from research, the digital economy, and associations developed them on behalf of the European Commission. They include not only technical aspects but also social and ethical factors that can be used to classify the trustworthiness of an AI system:
- Human agency & oversight: decision-making should be supported without undermining human autonomy.
- Technical Robustness & security: accuracy, reliability, and security must be preemptively ensured.
- Data privacy & data governance: handling of data must be legally secure and protected.
- Transparency: interaction with AI must be clearly communicated, as must its limitations and boundaries.
- Diversity, non-discrimination & fairness: Avoidance of unfair bias must be ensured throughout the entire AI lifecycle.
- Environmental & societal well-being: AI solutions should have as positive an impact as possible on the environment and society.
- Accountability: responsibilities for the development, use, and maintenance of AI systems must be defined.
Based on these principles, the AI Act’s risk-based approach was developed, allowing AI systems to be classified into one of four risk classes: low, limited, high, and unacceptable risk.
Four risk classes for trustworthy AI
The risk class of an AI system indicates the extent to which an AI system threatens the principles of trustworthy AI and which legal requirements the system must fulfill – provided the system is fundamentally permissible. This is because, in the future, not all AI systems will be allowed on the European market. For example, most “social scoring” techniques are assessed as “unacceptable” and will not be allowed by the new law.
For the other three risk classes, the rule of thumb is: the higher the risk of an AI system, the more stringent the legal requirements. Companies that offer or operate high-risk systems will have to meet the most requirements. AI used to operate critical (digital) infrastructure or in medical devices, for example, is considered high-risk. To bring such systems to market, companies will have to meet high quality standards for the data used, set up risk management, affix a CE mark, and more.
AI systems in the “limited risk” class are subject to information and transparency obligations. Accordingly, companies must inform users of chatbots, emotion recognition systems, or deep fakes about the use of artificial intelligence. Predictive maintenance or spam filters are two examples of AI systems that fall into the lowest-risk category “low risk”. Companies that exclusively offer or use such AI solutions will hardly be affected by the upcoming AI Act. There are no legal requirements for these applications yet.
What companies can do for now
Even though the AI Act is still in the legislative process, companies should act now. The first step is to clarify how they will be affected. Our free AI Act Quick Check helps with this: it quickly assigns AI systems to a risk class and derives the requirements for each system. On this basis, companies can estimate how extensive the implementation of the AI Act will be in their own organization and take initial measures.
Benefit from our expertise!
Of course, we are happy to support you in evaluating and solving company-specific challenges related to the AI Act. Please do not hesitate to contact us!