
AI chatbots are quickly becoming essential in businesses, but not all chatbots are created equal. Some can be set up quickly and easily, yet they often lack the customizability and advanced features that could truly elevate performance, such as enhancing customer service responses. Tailor-made solutions that do offer those capabilities can become complex and costly, particularly when they include sophisticated, use-case-specific technologies like Retrieval-Augmented Generation (RAG), which improves the precision and reliability of generative AI models by letting them converse using a company’s own databases and produce verified facts.

How do chatbots work?

Custom GPT chatbots are trained on vast amounts of text to comprehend context and identify patterns. They can be programmed to respond individually to a wide range of user inquiries: customized to specific needs, trained on selected data, and integrated into platforms such as websites or mobile apps.

 

statworx’s CustomGPT stands out by combining the best of both worlds: high customizability with quick implementation. This tailor-made solution offers secure and efficient use of ChatGPT-like models, with interfaces that can be designed in a company’s brand style and easily integrated into existing business applications like CRM systems and support tools.

So, what’s crucial when companies seek the ideal chatbot solution?

Requirement Analysis: First, a company’s specific needs should be pinpointed to ensure the chatbot is perfectly tailored. What tasks should it handle? Which departments should it support? What functionalities are necessary? 

Model Training: A custom GPT chatbot needs to be equipped with relevant data and information to ensure high accuracy and responsiveness. If the necessary data isn’t available, the technical effort might not be justifiable.

System Integration: Seamless integration of the chatbot into existing communication channels like websites, apps, or social media is crucial for effective use. Different solutions may suit different infrastructures. 

 

Ready to deploy and adaptable

The CustomGPT chatbot from statworx is notable for its quick setup, often within weeks, thanks to a mix of proven standard solutions and custom adjustments. It supports file uploads and chat, securely extracting information from the company’s own data. With advanced features like fact-checking, data filtering, and user feedback integration, it stands apart from other systems.

Moreover, CustomGPT gives companies the freedom to choose their chatbot’s vocabulary, communication style, and overall tone, enhancing brand experience and recognition through personalized, unique interactions. It’s also optimized for mobile displays on smartphones. 

 

Technical Implementation

On the technical front, Python is the core language for CustomGPT’s backend, with statworx developers utilizing FastAPI, a modern web framework that supports both WebSockets for stateful communication and a REST API for services. CustomGPT is versatile and suitable for various infrastructures, from a simple cloud function to a machine cluster if needed.
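
As a rough sketch (not CustomGPT’s actual API — the route names and echo logic are illustrative assumptions), such a backend skeleton could look like this:

from fastapi import FastAPI, WebSocket

app = FastAPI()

# REST endpoint for stateless service calls
@app.get("/health")
async def health():
    return {"status": "ok"}

# WebSocket endpoint for stateful chat sessions
@app.websocket("/chat")
async def chat(websocket: WebSocket):
    await websocket.accept()
    while True:
        message = await websocket.receive_text()
        # A real backend would hand the message to the LLM pipeline here
        await websocket.send_text(f"Echo: {message}")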

A key feature of its architecture is the connection to a data layer, providing a flexible backend that can quickly adapt to changing conditions and requirements. The frontend application, built with React, seamlessly interacts with the backend, which, for example, leverages the powerful Azure AI Search service. This configuration allows for the implementation of custom search solutions and efficient fulfillment of specific requirements.
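
A minimal sketch of how a Python backend might query Azure AI Search; the endpoint, index name, key, and field names are placeholders, not the actual CustomGPT configuration:

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder service endpoint, index, and key
search_client = SearchClient(
    endpoint="https://<service>.search.windows.net",
    index_name="documents",
    credential=AzureKeyCredential("<api-key>"),
)

# Retrieve the three best-matching documents for a query
results = search_client.search(search_text="warranty conditions", top=3)
for doc in results:
    print(doc["id"])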

 

The benefits at a glance:

Data Protection and Security
CustomGPT ensures all data is stored and processed within the European Union, with full control retained by the company, setting it apart from other GPT-based solutions.

Integration and Flexibility
Its flexible integration into existing business applications is supported by modularity and vendor independence, allowing CustomGPT to adapt to various infrastructures and models, including open-source options.

Features and Customization
CustomGPT’s customization includes integration with organizational data, user role adaptation, and the use of analytics to enhance conversations, offering flexibility and personalization for corporate applications.

Personalized Customer Experience
By tailoring to a company’s specific needs, CustomGPT chatbots can provide personalized and effective customer interactions.

Efficient Customer Support
CustomGPT chatbots can answer questions, resolve issues, and provide information around the clock, increasing customer satisfaction and efficiency.

Scalability
Companies can effortlessly scale their customer support capacity with CustomGPT chatbots to maintain consistent service quality, even during high demand.

The time for your own chatbot is now. statworx therefore focuses on upstream development with quick deployment and easy implementation, which means all users of a statworx CustomGPT benefit from patches, bug fixes, and new features over time. CustomGPT remains versatile and flexible, meeting the specific, changing needs of companies and addressing complex requirements. Contact us now for a consultation.

Tarik Ashry

Imagine it’s Friday and instead of the usual office routine, you find yourself in the middle of a hackathon for a pro bono project, planning a trip to the Dialogue Museum at Frankfurt’s Hauptwache, or asking passers-by on the Zeil about their thoughts on AI and sustainability. What do all these diverse activities have in common? They are part of the 4:1 week (“Four to one”) at statworx – a work model that provides space for personal development, innovative work, and social engagement.

Why did we introduce the 4:1 week?

At statworx, we faced a challenge: Our employees had great ambitions for further education but found little time for it. The workweek was filled with projects and appointments, and personal development often took a back seat.

To solve this dilemma, we introduced an innovative work structure in 2022: the 4:1 week. The principle is simple and effective. Four days of the week are dedicated to intensive work on projects. The fifth day, Friday, is all about further education. On this day, business goes on at a reduced pace, and employees can fully dedicate themselves to their professional and personal development.

This new structure has led to a noticeable change. It allows every individual at statworx to pursue further education and learn new skills, even during busy times. This not only promotes personal development but also the company’s innovative strength. While customer projects receive full attention from Monday to Thursday, Friday becomes a space for learning and inspiration. This way, we collectively maintain a balance between customer needs and employee development.

What training opportunities does statworx offer?

statX: Once a month, employees voluntarily meet to exchange insights from projects, personal knowledge, or new models and approaches. Topics range from deep learning with audio data to the AI Act and anomaly detection.

Clusters: The self-organized workgroups are the breeding ground for the development and promotion of expertise at statworx. Currently, there are fifteen clusters where employees can delve into topics that are particularly close to their hearts – thereby strengthening statworx’s overall innovation power. Here are three examples from our cluster portfolio:

  • Bio Medicine Cluster: Development of AI applications in the biomedical and pharmaceutical fields.
  • NLP Cluster: Implementation of state-of-the-art models, best practices, and software for NLP, as well as multimodal applications of NLP models.
  • Explainable AI Cluster: Engagement with methods to make black box AI models transparent and explainable.

Training Budget: Each team member at statworx has an annual training budget. This budget can be used for individual training measures such as online courses, external trainings, participation in conferences, certifications, and much more.

Technical and Non-Technical Trainings: Throughout the year, we offer a variety of trainings that develop both soft and hard skills. From effective communication and constructive feedback to software engineering and Scrum, numerous exciting topics are covered.

How do we know that our 4:1 concept works?

Through regular pulse surveys and our biannual employee satisfaction surveys, we collect anonymized feedback on the 4:1 model. This gives us insights into our team members’ personal experiences. Additional direct conversations with employees provide us with further valuable impressions to decide how we will shape and develop the program in the future.

Our interim conclusion is that with every certificate earned and each personal success story, the 4:1 week at statworx proves its effectiveness. We look forward to continuously refining the concept and supporting even more employees on their individual learning journey.

Colleagues confirm our preliminary conclusion:

“The 4:1 week gives me the opportunity to delve deeper into topics and areas of knowledge that I’m passionate about, alongside my project work. That’s why I, along with my colleagues, initiated the Bio-Medicine Cluster. And I contribute my software engineering skills to the Cluster for Technical Delivery to continuously improve the technical deployment of our solutions.” – Benedikt Batton, Consultant Data Science, AI Development

“The 4:1 week promotes individual professional development and provides space to create value in innovative ways beyond existing departments and hierarchies. In my time at statworx, work on Fridays has always been a source of motivation, inspiration, and self-fulfillment.” – Max Hilsdorf, Consultant, AI Academy

Would you like to be part of a work culture that makes space for personal development and a balanced ratio between project work and further education? Then visit our career page and apply now!

Ida-Marie Trieba

A successful data culture is the key for companies to derive the greatest possible benefit from the constantly growing amount of data. The trend towards making decisions based on data is unstoppable. But how can managers empower their teams to use data effectively?

A vibrant data culture: the fuel for corporate success

Data culture is more than just a buzzword – it is the basis for data-driven decision-making. When all departments in a company use data to improve work processes and decision-making, an atmosphere is created in which competent handling of data is standard practice.

Why is this so important? Data is fuel for business success: 76 percent of participants in the BARC Data Culture Survey 22 stated that their company strives for a data culture. And 75 percent of managers see data culture as the most important competence.

The decisive role of managers

An established data culture is not only a success factor for the company but also a way to promote innovation and motivate employees. Managers play a central role here by acting as trailblazers and actively supporting change. They must clearly communicate the benefits of a data culture, establish clear guidelines for data protection and data quality, and offer targeted training and regular communication on progress. Clear responsibility for data culture is crucial: 31 percent of companies with a weak data culture do not have a dedicated department or person with this responsibility.

Challenges and solutions

The path to a successful data culture is littered with hurdles. Managers have to face various challenges:

  • Resistance to change: A transition to a data culture can be met with resistance. Managers must clearly communicate the benefits and offer training to involve their employees in the change process.
  • Lack of data governance: Guidelines and standards for handling data are crucial. If these are missing, data quality will be reduced in the worst case. This leads to wrong decisions. This is where data cleansing and validation methods and regular audits are needed.
  • Concerns regarding data protection: Data protection and data access are often in conflict. Clear guidelines and security measures must be introduced in order to gain the trust of employees.
  • Lack of resources and support: Without the necessary resources, building a data culture can fail. Companies must provide targeted training and demonstrate the business benefits in economic metrics to gain the support of their executives.

Best practices for a strong data culture

To effectively establish a data culture, companies can rely on the following best practices:

Critical thinking: Promoting critical thinking and ethical standards is crucial. Data and AI solutions will become tools of everyday life everywhere. Therefore, human intelligence remains the most important skill in dealing with technology.

Measuring and planning: Data culture can only be built up step by step. Companies should measure and evaluate data-driven behavior in order to assess progress. The stronger the data culture, the more omnipresent data-driven decision-making becomes.

Establishment of key roles: Companies should create special functions or roles for employees who link the data strategy with the corporate strategy and act as central multipliers to promote the data culture among employees.

The development of a strong data culture requires clear leadership, clear guidelines and the commitment of the entire organization. Managers play a crucial role in successfully shaping the change to a data-driven culture.

Building a strong data culture: our strategic approach

At statworx, we specialize in establishing robust data cultures in companies. Our strategy is based on proven frameworks, best practices and our extensive experience to lay the foundations for a successful data culture in your organization.

  1. Data Culture Strategy: Working hand in hand with our clients’ teams, we develop the strategic roadmap required to foster a thriving data culture. This includes building the foundational structures that are essential to maximizing the potential of your company’s data.
  2. Data culture training: We focus on empowering your workforce with the skills and knowledge to operate in the data and AI space. Our training programs aim to equip employees with the skills that are essential for building a strong data culture. This enables companies to realize the full potential of data and artificial intelligence.
  3. Change management and support: Embedding a data culture requires sustained change management efforts. We work with client teams to establish long-term change programs aimed at initiating and consolidating a robust data culture within the organization. Our goal is to ensure that the transformation remains embedded in the DNA of the organization to ensure continued success.

With our comprehensive range of services, we strive to lead companies into a future where data becomes a strategic asset that unlocks new opportunities and enables informed decision-making at all levels. We have written about this in detail in our white paper Data culture as a management task in companies (currently only available in German), which also contains a data culture checklist. Our services relating to data culture can be found on our Data Culture topic page.

Tarik Ashry

We are at the beginning of 2024, a time of fundamental change and exciting progress in the world of artificial intelligence. The next few months are seen as a critical milestone in the evolution of AI as it transforms from a promising future technology to a permanent reality in the business and everyday lives of millions. Together with the AI Hub Frankfurt, the central AI network in the Rhine-Main region, we are therefore presenting our trend forecast for 2024, the AI Trends Report 2024.

The report identifies twelve dynamic AI trends that are unfolding in three key areas: Culture and Development, Data and Technology, and Transparency and Control. These trends paint a picture of the rapid changes in the AI landscape and highlight the impact on companies and society.

Our analysis is based on extensive research, industry-specific expertise and input from experts. We highlight each trend to provide a forward-looking insight into AI and help companies prepare for future challenges and opportunities. However, we emphasize that trend forecasts are always speculative in nature and some of our predictions are deliberately bold.

Directly to the AI Trends Report 2024!

What is a trend?

A trend is different from both a short-lived fashion phenomenon and media hype. It is a phenomenon of change with a “tipping point” at which a small change in a niche can cause a major upheaval in the mainstream. Trends initiate new business models, consumer behavior and forms of work and thus represent a fundamental change to the status quo. It is crucial for companies to mobilize the right knowledge and resources before the tipping point in order to benefit from a trend.

12 AI trends that will shape 2024

In the AI Trends Report 2024, we identify groundbreaking developments in the field of artificial intelligence. Here are the short versions of the twelve trends, each with a selected quote from our experts.

Part 1: Culture and development

From the 4-day week to omnimodality and AGI: 2024 promises great progress for the world of work, for media production and for the possibilities of AI as a whole.

Thesis I: AI expertise within the company
Companies that deeply embed AI expertise in their corporate culture and build interdisciplinary teams with tech and industry knowledge will secure a competitive advantage. Centralized AI teams and a strong data culture are key to success.

“Data culture can’t be bought or dictated. You need to win the head, the heart and the herd. We want our employees to consciously create, use and share data and give them access to data, analytics and AI together with the knowledge and the mindset to run the business on data.”

Stefanie Babka, Global Head of Data Culture, Merck

 

Thesis II: 4-day working week thanks to AI
Thanks to AI automation in standard software and company processes, the 4-day working week has become a reality for some German companies. AI tools such as Microsoft’s Copilot increase productivity and make it possible to reduce working hours without compromising growth.

“GenAI will continue to drive automation in many areas. This will be the new benchmark for standard processes in all sectors. While this may have a positive impact on reducing working hours, we need to ensure that GenAI is used responsibly, especially in sensitive and customer-facing areas.”

Dr. Jean Enno Charton, Director Digital Ethics & Bioethics, Merck

 

Thesis III: AGI through omnimodal models
The development of omnimodal AI models that mimic human senses brings the vision of general artificial intelligence (AGI) closer. These models process a variety of inputs and extend human capabilities.

“Multimodal models trained on more than just text have shown that they are better able to draw conclusions and understand the world. We are excited to see what omnimodal models will achieve.”

Dr. Ingo Marquart, NLP Subject Matter Lead, statworx

 

Thesis IV: AI revolution in media production
Generative AI (GenAI) is transforming the media landscape and enabling new forms of creativity, but still falls short of transformational creativity. AI tools are becoming increasingly important for creatives, but it is important to maintain uniqueness against a global average taste.

“Those who integrate AI smartly will have a competitive advantage. There will be leaps in productivity in the areas of ideation, publishing and visuals. However, there will also be a lot of “low” and fake content (postings, messaging), so building trust will become even more important for brands. Social media tasks are shifting towards strategy, management and controlling.”

Nemo Tronnier, Founder & CEO, Social DNA

 

Part 2: Data and technology

In 2024, everything will revolve around data quality, open source models and access to processors. Providers of standard software such as Microsoft and SAP will benefit greatly because they occupy the interface to end users.

Thesis V: Challengers for NVIDIA
New players and technologies are preparing to shake up the GPU market and challenge NVIDIA’s position. Startups and established competitors such as AMD and Intel are looking to capitalize on the resource scarcity and long wait times that smaller players are currently experiencing and are focusing on innovation to break NVIDIA’s dominance.

“Contrary to popular belief, there isn’t really a shortage of AI accelerators if you count NVIDIA, Intel and AMD. The real problem is customer funding, as cloud providers are forced to offer available capacity with long-term contracts. This could change in 18 to 24 months when current deployments are sufficiently amortized. Until then, customers will have to plan for longer commitments.”

Norman Behrend, Chief Customer Officer, Genesis Cloud

 

Thesis VI: Data quality before data quantity
In AI development, the focus is shifting to the quality of the data. Instead of relying solely on quantity, the careful selection and preparation of training data and innovation in model architecture are becoming crucial. Smaller models with high-quality data can be superior to larger models in terms of performance.

“Data is not just one component of the AI landscape; having the right quality data is essential. Solving the ‘first-mile problem’ to ensure data quality and understanding the ‘last-mile problem’, i.e. involving employees in data and AI projects, are crucial for success.”

Walid Mehanna, Chief Data & AI Officer, Merck

 

Thesis VII: The year of the AI integrators
Integrators such as Microsoft, Databricks and Salesforce will be the winners as they bring AI tools to end users. The ability to seamlessly integrate into existing systems will be crucial for AI startups and providers. Companies that offer specialized services or groundbreaking innovations will secure lucrative niches.

“In 2024, AI integrators will show how they make AI accessible to end users. Their role is critical to the democratization of AI in the business world, enabling companies of all sizes to benefit from advanced AI. This development emphasizes the need for user-friendly and ethically responsible AI solutions.”

Marco Di Sazio, Head of Innovation, Bankhaus Metzler

 

Thesis VIII: The open source revolution
Open source AI models are competing with proprietary models such as OpenAI’s GPT and Google’s Gemini. With a community that fosters innovation and knowledge sharing, open source models offer more flexibility and transparency, making them particularly valuable for applications that require clear accountability and customization.

“Especially for SMEs, AI solutions are indispensable. Since a sufficient amount of data for a proprietary model is typically lacking, collaboration becomes crucial. However, the ability to adapt is essential in order to digitally advance your own business model.”

Prof. Dr. Christian Klein, Founder, UMYNO Solutions, Professor of Marketing & Digital Media, FOM University of Applied Sciences

 

Part 3: Transparency and control

The increased use of AI decision-making systems will spark an intensified debate on algorithm transparency and data protection in 2024 – in the search for accountability. The AI Act will become a locational advantage for Europe.

Thesis IX: AI transparency as a competitive advantage
European AI start-ups with a focus on transparency and explainability could become the big winners, as industries such as pharmaceuticals and finance already place high demands on the traceability of AI decisions. The AI Act promotes this development by demanding transparency and adaptability from AI systems, giving European AI solutions an edge in terms of trust.

“Transparency is becoming a key issue in the field of AI. This applies to the construction of AI models, the flow of data and the use of AI itself. It will have a significant impact on discussions about compliance, security and trust. The AI Act could even turn transparency and security into competitive advantages for European companies.”

Jakob Plesner, Attorney at Law, Gorrissen Federspiel

 

Thesis X: AI Act as a seal of quality
The AI Act positions Europe as a safe haven for investments in AI by setting ethical standards that strengthen trust in AI technologies. In view of the increase in deepfakes and the associated risks to society, the AI Act acts as a bulwark against abuse and promotes responsible growth in the AI industry.

“Companies facing technological change need a clear set of rules. By introducing a seal of approval for human-centered AI, the AI Act turns challenges into opportunities. The AI Act will become a blueprint internationally, giving EU companies a head start in responsible AI and making Europe a place for sustainable AI partnerships.”

Catharina Glugla, Head of Data, Cyber & Tech Germany, Allen & Overy LLP

 

Thesis XI: AI agents are revolutionizing consumption
Personal assistance bots that make purchases and select services will become an essential part of everyday life. Influencing their decisions will become a key element for companies to survive in the market. This will profoundly change search engine optimization and online marketing as bots become the new target groups.

“There will be several types of AI agents that act according to human intentions. For example, personal agents that represent an individual and service agents that represent an organization or institution. The interplay between them, such as personal-personal, personal-institutional and institutional-institutional, represents a new paradigm for economic activities and the distribution of value.”

Chi Wang, Principal Researcher, Microsoft Research

 

Thesis XII: Alignment of AI models
Aligning AI models with universal values and human intentions will be critical to avoid unethical outcomes and fully realize the potential of foundation models. Superalignment, where AI models work together to overcome complex challenges, is becoming increasingly important to drive the development of AI responsibly.

“Alignment is, at its core, an analytical problem that is about establishing transparency and control to gain user trust. These are the keys to effective deployment of AI solutions in companies, continuous evaluation and secure iteration based on the right metrics.”

Daniel Lüttgau, Head of AI Development, statworx

 

Concluding remarks

The AI Trends Report 2024 is more than an entertaining stocktake; it can be a useful tool for decision-makers and innovators. Our goal is to provide our readers with strategic advantages by discussing the impact of trends on different sectors and helping them set the course for the future.

This blog post offers only a brief insight into the comprehensive AI Trends Report 2024. We invite you to read the full report to dive deeper into the subject matter and benefit from the detailed analysis and forecasts.

To the AI Trends Report 2024!

Tarik Ashry

Business success hinges on how companies interact with their customers. No company can afford to provide inadequate care and support. On the contrary, companies that offer fast and precise handling of customer inquiries can distinguish themselves from the competition, build trust in their brand, and retain people in the long run. Our collaboration with Geberit, a leading manufacturer of sanitary technology in Europe, demonstrates how this can be achieved at an entirely new level through the use of generative AI.

What is generative AI?

Generative AI models automatically create content from existing texts, images, and audio files. Thanks to intelligent algorithms and deep learning, this content is hardly distinguishable, if at all, from human-made content. This allows companies to offer their customers personalized user experiences, interact with them automatically, and create and distribute relevant digital content tailored to their target audience. GenAI can also tackle complex tasks by processing vast amounts of data, recognizing patterns, and learning new skills. This technology enables unprecedented gains in productivity. Routine tasks like data preparation, report generation, and database searches can be automated and greatly optimized with suitable models.

The Challenge: One Million Emails

Geberit faced a challenge: every year, one million emails landed in various mailboxes of the customer service department of Geberit’s German distribution company. It was common for inquiries to end up in the wrong departments, leading to significant additional effort.

The Solution: An AI-powered Email Bot

To correct this misdirection, we developed an AI system that automatically assigns emails to the correct departments. This intelligent classification system was trained with a dataset of anonymized customer inquiries and utilizes advanced machine and deep learning methods, including Google’s BERT model.
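
The case study does not disclose the model code. As a hypothetical sketch of the inference side, a BERT classifier fine-tuned on labeled, anonymized inquiries could be applied to incoming emails like this (the model checkpoint name and the label are invented for illustration):

from transformers import pipeline

# Hypothetical checkpoint: a German BERT model fine-tuned with
# department names as target classes
classifier = pipeline("text-classification", model="geberit/email-router")

email = "Guten Tag, meine Bestellung ist leider nicht angekommen."
print(classifier(email))  # e.g. [{"label": "order_management", "score": 0.97}]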

The Highlight: Automated Response Suggestions with ChatGPT

But the innovation didn’t stop there. The system was further developed to generate automated response emails. ChatGPT is used to create customer-specific suggestions. Customer service agents only need to review the generated emails and can send them directly.

The Result: 70 Percent Better Sorting

The result of this groundbreaking solution speaks for itself: a reduction of misassigned emails by over 70 percent. This not only means significant time savings of almost three full working months but also an optimization of resources. The success of the project is making waves at Geberit: a central mailbox for all inquiries, expansion into other country markets, and even a digital assistant are in the planning.

Customer Service 2.0 – Innovation, Efficiency, Satisfaction

The introduction of GenAI has not only revolutionized Geberit’s customer service but also demonstrates the potential in the targeted application of AI technologies. Intelligent classification of inquiries and automated response generation not only saves resources but also increases customer satisfaction. A pioneering example of how AI is shaping the future of customer service.

Read the Case Study to learn how Geberit and statworx technically implemented the project and successfully integrated GenAI.

Tarik Ashry

Intelligent chatbots are one of the most exciting and already visible applications of Artificial Intelligence. Since the beginning of 2023, ChatGPT and similar models have enabled straightforward interactions with large AI language models, providing an impressive range of everyday assistance. Whether it’s tutoring in statistics, recipe ideas for a three-course meal with specific ingredients, or a haiku on a particular topic, modern chatbots deliver answers in an instant. However, they still face a challenge: although these models have learned a lot during training, they aren’t actually knowledge databases. As a result, they often produce nonsensical content—albeit convincingly.

The ability to provide a large language model with its own documents offers a solution to this problem. This is precisely what our partner Microsoft asked us for on a special occasion.

Microsoft’s Azure cloud platform has proven itself as a top-tier platform for the entire machine learning process in recent years. To facilitate entry into Azure, Microsoft asked us to implement an exciting AI application in Azure and document it down to the last detail. This so-called MicroHack is designed to provide interested parties with an accessible resource for an exciting use case.

We dedicated our MicroHack to the topic of “Retrieval-Augmented Generation” to elevate large language models to the next level. The requirements were simple: build an AI chatbot in Azure, enable it to process information from your own documents, document every step of the project, and publish the results on the official MicroHacks GitHub repository as challenges and solutions—freely accessible to all.

Wait, why does AI need to read documents?

Large Language Models (LLMs) impress not only with their creative abilities but also as collections of compressed knowledge. During the extensive training process of an LLM, the model learns not only the grammar of a language but also semantics and contextual relationships. In short, large language models acquire knowledge. This enables an LLM to be queried and generate convincing answers—with a catch. While the learned language skills of an LLM often suffice for the vast majority of applications, the same cannot be said for learned knowledge. Without retraining on additional documents, the knowledge level of an LLM remains static.

This leads to the following problems:

  • Trained LLMs may have extensive general or even specialized knowledge, but they cannot provide information from non-publicly accessible sources.
  • The knowledge of a trained LLM quickly becomes outdated. The so-called “training cutoff” means that the LLM cannot make statements about events, documents, or sources that occurred or were created after the start of training.
  • The technical nature of large language models as text completion machines leads them to invent facts when they haven’t learned a suitable answer. These so-called “hallucinations” mean that the answers of an LLM are never completely trustworthy without verification—regardless of how convincing they may seem.

However, machine learning also has a solution for these problems: “Retrieval-augmented Generation” (RAG). This term refers to a workflow that doesn’t just have an LLM answer a simple question but extends this task with a “knowledge retrieval” component: the search for relevant knowledge in a database.

The concept of RAG is simple: search a database for a document that answers the question posed. Then, use a generative LLM to answer the question based on the found passage. This transforms an LLM into a chatbot that answers questions with information from its own database—solving the problems described above.

What exactly happens in such a “RAG”?

RAG consists of two steps: “Retrieval” and “Generation”. For the Retrieval component, a so-called “semantic search” is employed: a database of documents is searched using vector search. Vector search means that the similarity between question and documents isn’t determined by the intersection of keywords, but by the distance between numerical representations of the content of all documents and the query, known as embedding vectors. The idea is remarkably simple: the closer two texts are in content, the smaller their vector distance. As the first puzzle piece, we need a machine learning model that creates robust embeddings for our texts. With this, we then extract the most suitable documents from the database, whose content will hopefully answer our query.
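
As a toy illustration of this idea, with made-up three-dimensional vectors (real embedding models return hundreds or thousands of dimensions):

import numpy as np

def cosine_similarity(a, b):
    # Higher score = closer in content
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.9, 0.1, 0.2])
doc_a = np.array([0.8, 0.2, 0.3])   # similar in content to the query
doc_b = np.array([-0.5, 0.9, 0.1])  # unrelated content
print(cosine_similarity(query, doc_a))  # close to 1.0
print(cosine_similarity(query, doc_b))  # much lower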

Figure 1: Representation of the typical RAG workflow

Modern vector databases make this process very easy: when connected to an embedding model, these databases store documents directly with their corresponding embeddings—and return the most similar documents to a search query.
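
With Chroma, the vector database used later in this MicroHack, the basic pattern looks roughly like this (the collection name and documents are illustrative):

import chromadb

client = chromadb.Client()  # in-memory instance; a Docker-hosted server works the same way
collection = client.create_collection("documents")

# Chroma embeds documents on insert via its configured embedding function
collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "statworx is a data science and AI consultancy based in Frankfurt.",
        "The Main is a river in Germany.",
    ],
)

# The query is embedded with the same model; the closest documents come back
results = collection.query(query_texts=["Who is statworx?"], n_results=1)
print(results["documents"])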

Based on the contents of the found documents, an answer to the question is generated in the next step. For this, a generative language model is needed, which receives a suitable prompt for this purpose. Since generative language models do nothing more than continue given text, careful prompt design is necessary to minimize the model’s room for interpretation in solving this task. This way, users receive answers to their queries that were generated based on their own documents—and thus are not dependent on the training data for their content.

How can such a workflow be implemented in Azure?

For the implementation of such a workflow, we needed four separate steps—and structured our MicroHack accordingly:

Step 1: Setup for Document Processing in Azure

In the first step, we laid the foundations for the RAG pipeline. Various Azure services for secure password storage, data storage, and processing of our text documents had to be prepared.

As the first major piece of the puzzle, we used the Azure Form Recognizer, which reliably extracts text from scanned documents. The text from the documents was to serve as the basis for our chatbot and therefore needed to be extracted, embedded, and stored in a vector database. From the many vector database offerings, we chose Chroma.

Chroma offers many advantages: the database is open-source, provides a developer-friendly API for use, and supports high-dimensional embedding vectors. OpenAI’s embeddings are 1536-dimensional, which is not supported by all vector databases. For the deployment of Chroma, we used an Azure VM along with its own Chroma Docker container.

However, the Azure Form Recognizer and the Chroma instance alone were not sufficient for our purposes: to transport the contents of our documents into the vector database, we had to integrate the individual parts into an automated pipeline. The idea here was that every time a new document is stored in our Azure data store, the Azure Form Recognizer should become active, extract the content from the document, and then pass it on to Chroma. Next, the contents should be embedded and stored in the database—so that the document will become part of the searchable space and can be used to answer questions in the future. For this, we used an Azure Function, a service that executes code as soon as a defined trigger occurs—such as the upload of a document in our defined storage.
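
Sketched with the Azure Functions Python programming model, such a blob trigger could look roughly like this; the container path and connection setting are placeholders, not the MicroHack’s actual configuration:

import logging
import azure.functions as func

app = func.FunctionApp()

# Fires whenever a new blob lands in the monitored container
@app.blob_trigger(arg_name="blob", path="documents/{name}",
                  connection="AzureWebJobsStorage")
def process_document(blob: func.InputStream):
    logging.info("New document uploaded: %s (%s bytes)", blob.name, blob.length)
    raw_bytes = blob.read()
    # From here the pipeline would extract the text with the Form Recognizer,
    # embed it, and store text plus embedding in Chroma.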

To complete this pipeline, only one thing was missing: the embedding model.

Step 2: Completion of the Pipeline

For all machine learning components, we used the OpenAI service in Azure. Specifically, we needed two models for the RAG workflow: an embedding model and a generative model. The OpenAI service offers several models for these purposes.

For the embedding model, “text-embedding-ada-002” was the obvious choice, OpenAI’s newest model for calculating embeddings. This model was used twice: first for creating the embeddings of the documents, and secondly for calculating the embedding of the search query. This was essential: to calculate reliable vector similarities, the embeddings for the search must come from the same model.
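
With the pre-1.0 openai Python package configured for the Azure OpenAI service, the embedding calls could look roughly like this (resource name, key, and API version are placeholders):

import openai

openai.api_type = "azure"
openai.api_base = "https://<resource>.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<api-key>"

def embed(text: str) -> list:
    # Used for documents at indexing time and for queries at search time,
    # so both vectors come from the same model
    response = openai.Embedding.create(engine="text-embedding-ada-002", input=text)
    return response["data"][0]["embedding"]

doc_vector = embed("Contents of a stored document.")
query_vector = embed("What does the document say?")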

With that, the Azure Function could be completed and deployed—the text processing pipeline was complete. In the end, the functional pipeline looked like this:

Figure 2: The complete RAG workflow in Azure

Step 3: Answer Generation

To complete the RAG workflow, an answer should be generated based on the documents found in Chroma. We decided to use “GPT-3.5-turbo” for text generation, which is also available in the OpenAI service.

This model needed to be instructed to answer the posed question based on the content of the documents returned by Chroma. Careful prompt engineering was necessary for this. To prevent hallucinations and get as accurate answers as possible, we included both a detailed instruction and several few-shot examples in the prompt. In the end, we settled on the following prompt:

"""I want you to act like a sentient search engine which generates natural sounding texts to answer user queries. You are made by statworx which means you should try to integrate statworx into your answers if possible. Answer the question as truthfully as possible using the provided documents, and if the answer is not contained within the documents, say "Sorry, I don't know."
Examples:
Question: What is AI?
Answer: AI stands for artificial intelligence, which is a field of computer science focused on the development of machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language processing.
Question: Who won the 2014 Soccer World Cup?
Answer: Sorry, I don't know.
Question: What are some trending use cases for AI right now?
Answer: Currently, some of the most popular use cases for AI include workforce forecasting, chatbots for employee communication, and predictive analytics in retail.
Question: Who is the founder and CEO of statworx?
Answer: Sebastian Heinz is the founder and CEO of statworx.
Question: Where did Sebastian Heinz work before statworx?
Answer: Sorry, I don't know.
Documents:\n"""

Finally, the contents of the found documents were appended to the prompt, providing the generative model with all the necessary information.
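
Put together, the generation step might look roughly like this, reusing the Azure OpenAI configuration from the embedding step (the deployment name and temperature are assumptions):

import openai

# The instruction and few-shot examples shown above, abbreviated here
PROMPT = """I want you to act like a sentient search engine ... Documents:\n"""

def answer(question: str, documents: list) -> str:
    # Append the retrieved document contents so the model answers from them
    full_prompt = PROMPT + "\n".join(documents) + f"\nQuestion: {question}\nAnswer:"
    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",  # Azure deployment name for GPT-3.5-turbo
        messages=[{"role": "user", "content": full_prompt}],
        temperature=0.0,  # keep the model close to the provided documents
    )
    return response["choices"][0]["message"]["content"]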

Step 4: Frontend Development and Deployment of a Functional App

To interact with the RAG system, we built a simple Streamlit app that also allowed for the upload of new documents to our Azure storage—thereby triggering the document processing pipeline again and expanding the search space with additional documents.

We deployed the Streamlit app with the Azure App Service, which is designed to deploy simple applications quickly and scalably. For an easy deployment, we integrated the app into a Docker image, which could be accessed over the internet in no time thanks to the Azure App Service.
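
A stripped-down sketch of such an app; the two helper functions are stubs standing in for the actual pipeline calls:

import streamlit as st

def upload_to_storage(file) -> None:
    # Stub: in the MicroHack this wraps the Azure Blob upload that
    # triggers the document processing pipeline
    pass

def rag_answer(question: str) -> str:
    # Stub: in the MicroHack this runs retrieval against Chroma and
    # generation with GPT-3.5-turbo
    return "Answer generated from your documents."

st.title("Document Chatbot")

uploaded = st.file_uploader("Add a document to the knowledge base", type=["pdf"])
if uploaded is not None:
    upload_to_storage(uploaded)
    st.success("Document uploaded. It will become searchable shortly.")

question = st.text_input("Ask a question about your documents")
if question:
    st.write(rag_answer(question))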

And this is what our finished app looked like:

Figure 3: The finished Streamlit app in action

What did we learn from the MicroHack?

During the implementation of this MicroHack, we learned a lot. Not all steps went smoothly from the start, and we were forced to rethink some plans and decisions. Here are our five takeaways from the development process:

Not all databases are equal.

We changed our choice of vector database several times during development: from OpenSearch to ElasticSearch and ultimately to Chroma. While OpenSearch and ElasticSearch offer great search functions (including vector search), they are still not AI-native vector databases. Chroma, on the other hand, was designed from the ground up to be used in conjunction with LLMs—and therefore proved to be the best choice for this project.

Chroma is a great open-source vector DB for smaller projects and prototyping.

Chroma is particularly suitable for smaller use cases and rapid prototyping. While the open-source database is still too young and immature for large-scale production systems, Chroma’s simple API and straightforward deployment allow for the rapid development of simple use cases; perfect for this MicroHack.

Azure Functions are a fantastic solution for executing smaller pieces of code on demand.

Azure Functions are ideal for running code that isn’t needed at pre-planned intervals. The event triggers were perfect for this MicroHack: the code is only needed when a new document is uploaded to Azure. Azure Functions take care of all the infrastructure; we only needed to provide the code and the trigger.

Azure App Service is great for deploying Streamlit apps.

Our Streamlit app couldn’t have had an easier deployment than with the Azure App Service. Once we had integrated the app into a Docker image, the service took care of the entire deployment—and scaled the app according to demand.

Networking should not be underestimated.

For all the services used to work together, communication between the individual services must be ensured. The development process required a considerable amount of networking and whitelisting, without which the functional pipeline would not have worked. It’s essential to allocate enough time for setting up networking during development.

The MicroHack was a great opportunity to test the capabilities of Azure for a modern machine learning workflow like RAG. We thank Microsoft for the opportunity and support, and we are proud to have contributed our in-house MicroHack to the official GitHub repository. You can find the complete MicroHack, including challenges, solutions, and documentation, here on the official MicroHacks GitHub—allowing you to build a similar chatbot with your own documents in Azure.

Oliver Guggenbühl

At the beginning of December, the central EU institutions reached a provisional agreement on a legislative proposal to regulate artificial intelligence in the so-called trilogue. The final legislative text with all the details is now being drafted. As soon as this has been drawn up and reviewed, the law can be officially adopted. We have compiled the current state of knowledge on the AI Act.

As part of the ordinary legislative procedure of the European Union, a trilogue is an informal interinstitutional negotiation between representatives of the European Parliament, the Council of the European Union and the European Commission. The aim of a trilogue is to reach a provisional agreement on a legislative proposal that is acceptable to both the Parliament and the Council, the co-legislators. The provisional agreement must then be adopted by each of these bodies in formal procedures.

Legislation with a global impact

A special feature of the upcoming law is the so-called market location principle: according to this, companies worldwide that offer or operate artificial intelligence on the European market or whose AI-generated output is used within the EU will be affected by the AI Act.

Artificial intelligence is defined as machine-based systems that can autonomously make predictions, recommendations or decisions and thus influence the physical and virtual environment. This applies, for example, to AI solutions that support the recruitment process, predictive maintenance solutions and chatbots such as ChatGPT. The legal requirements that different AI systems must fulfill vary greatly depending on their classification into risk classes.

The risk class determines the legal requirements

The EU’s risk-based approach comprises a total of four risk classes:

  • low,
  • limited,
  • high,
  • and unacceptable risk.

These classes reflect the extent to which artificial intelligence jeopardizes European values and fundamental rights. As the term “unacceptable” for a risk class already indicates, not all AI systems are permissible. AI systems that belong to the “unacceptable risk” category are prohibited by the AI Act. The following applies to the other three risk classes: the higher the risk, the more extensive and stricter the legal requirements for the AI system.

We explain below which AI systems fall into which risk class and which requirements are associated with them. Our assessments are based on the information contained in the “AI Mandates” document dated June 2023. At the time of publication, this document was the most recently published, comprehensive document on the AI Act.

Ban on social scoring and biometric remote identification

Some AI systems have a significant potential to violate human rights and fundamental principles, which is why they are categorized as “unacceptable risk”. These include:

  • Real-time remote biometric identification systems in publicly accessible spaces (exception: law enforcement agencies may use them to prosecute serious crimes, but only with judicial authorization);
  • Retrospective remote biometric identification systems (exception: law enforcement authorities may use them to prosecute serious crimes, but only with judicial authorization);
  • Biometric categorization systems that use sensitive characteristics such as gender, ethnicity or religion;
  • Predictive policing based on so-called profiling – i.e. profiling based on skin color, suspected religious affiliation and similarly sensitive characteristics – geographical location or previous criminal behavior;
  • Emotion recognition systems for law enforcement, border control, the workplace and educational institutions;
  • Arbitrary extraction of biometric data from social media or video surveillance footage to create facial recognition databases;
  • Social scoring leading to disadvantage in social contexts;
  • AI that exploits the vulnerabilities of a particular group of people or uses unconscious techniques that can lead to behaviors that cause physical or psychological harm.

These AI systems are to be banned from the European market under the AI Act. Companies whose AI systems could fall into this risk class should urgently address the upcoming requirements and explore options for action. This is because a key result of the trilogue is that these systems will be banned just six months after official adoption.

Numerous requirements for AI with risks to health, safety and fundamental rights

The “high risk” category includes all AI systems that are not explicitly prohibited but nevertheless pose a high risk to health, safety or fundamental rights. The following areas of application and use are explicitly mentioned:

  • Biometric and biometric-based systems that do not fall into the “unacceptable risk” risk class;
  • Management and operation of critical infrastructure;
  • Education and training;
  • Access and entitlement to basic private and public services and benefits;
  • Employment, human resource management and access to self-employment;
  • Law enforcement;
  • Migration, asylum and border control;
  • Administration of justice and democratic processes.

These AI systems are subject to comprehensive legal requirements that must be implemented prior to commissioning and observed throughout the entire AI life cycle:

  • Assessment to evaluate the effects on fundamental and human rights
  • Quality and risk management
  • Data governance structures
  • Quality requirements for training, test and validation data
  • Technical documentation and record-keeping obligations
  • Fulfillment of transparency and provision obligations
  • Human supervision, robustness, security and accuracy
  • Declaration of conformity incl. CE marking obligation
  • Registration in an EU-wide database

AI systems that are used in one of the above-mentioned areas but do not pose a risk to health, safety, the environment or fundamental rights are not subject to the legal requirements. However, this must be proven by informing the competent national authority about the AI system. The authority then has three months to assess the risks of the AI system. The AI can be put into operation within these three months. However, if the examining authority classifies it as high-risk AI, high fines may be imposed.

A special regulation also applies to AI products and AI safety components of products whose conformity is already being tested by third parties on the basis of EU legislation. This is the case for AI in toys, for example. In order to avoid overregulation and additional burdens, these will not be directly affected by the AI Act.

AI with limited risk must comply with transparency obligations

AI systems that interact directly with humans fall into the “limited risk” category. This includes emotion recognition systems, biometric categorization systems and AI-generated or modified content that resembles real people, objects, places or events and could be mistaken for real (“deepfakes”). For these systems, the draft law provides for the obligation to inform consumers about the use of artificial intelligence. This should make it easier for consumers to actively decide for or against their use. A code of conduct is also recommended.

No legal requirements for AI with low risk

Many AI systems, such as predictive maintenance or spam filters, fall into the “low risk” category. Companies that only offer or use such AI solutions will hardly be affected by the AI Act. This is because there are currently no legal requirements for such applications. Only a code of conduct is recommended.

Generative AI such as ChatGPT is regulated separately

Generative AI models and foundation models with a wide range of possible applications were not included in the original draft of the AI Act. The regulatory possibilities of such AI models have therefore been the subject of particularly intense debate since the launch of ChatGPT by OpenAI. According to the European Council’s press statement of December 9, these models are now to be regulated on the basis of their risk. In principle, all models must implement transparency requirements. Foundation models with a particular risk – so-called “high-impact foundation models” – will also have to meet additional requirements. How exactly the risk of AI models will be assessed is currently still open. Based on the latest document, the following possible requirements for high-impact foundation models can be estimated:

  • Quality and risk management
  • Data governance structures
  • Technical documentation
  • Fulfillment of transparency and information obligations
  • Ensuring performance, interpretability, correctability, security, cybersecurity
  • Compliance with environmental standards
  • Cooperation with downstream providers
  • Registration in an EU-wide database

Companies should prepare for the AI Act now

Even though the AI Act has not yet been officially adopted and we do not yet know the details of the legal text, companies should prepare for the transition phase now. In this phase, AI systems and associated processes must be designed to comply with the law. The first step is to assess the risk class of each individual AI system. If you are not yet sure which risk classes your AI systems fall into, we recommend our free AI Act Quick Check. It will help you to assess the risk class.

Julia Rettig

Have you ever imagined a restaurant where AI powers everything? From the menu to the cocktails, hosting, music, and art? No? Ok, then, please click here.

If yes, well, it’s not a dream anymore. We made it happen: Welcome to “the byte” – Germany’s (and maybe the world’s) first AI-powered Pop-up Restaurant!

As someone who has worked in data and AI consulting for over ten years, building statworx and the AI Hub Frankfurt, I have always thought of exploring the possibilities of AI outside of typical business applications. Why? Because AI will impact every aspect of our society, not just the economy. AI will be everywhere – in school, arts & music, design, and culture. Everywhere. Exploring these directions of AI’s impact led me to meet Jonathan Speier and James Ardinast from S-O-U-P, two like-minded founders from Frankfurt, who are rethinking how technology will shape cities and societies.

S-O-U-P is their initiative that operates at the intersection of culture, urbanity, and lifestyle. With their yearly “S-O-U-P Urban Festival” they connect creatives, businesses, gastronomy, and lifestyle people from Frankfurt and beyond.

When Jonathan and I started discussing AI and its impact on society and culture, we quickly came up with the idea of an AI-generated menu for a restaurant. Luckily, James, Jonathan’s S-O-U-P co-founder, is a successful gastro entrepreneur from Frankfurt. Now the pieces came together. After another meeting with James in one of his restaurants (and some drinks), we committed to launching Germany’s first AI-powered Pop-up Restaurant: the byte!

the byte: Our concept

We envisioned the byte to be an immersive experience, including AI in as many elements of the experience as possible. Everything, from the menu to the cocktails, music, branding, and art on the wall, was AI-generated. Bringing AI into all of these components also pushed me far beyond what I typically do, namely helping large companies with their data & AI challenges.

Branding

Before creating the menu, we developed the visual identity of our project. We decided on a “lo-fi” appeal, using a pixelated font in combination with AI-generated visuals of plates and dishes. Our key visual, a neon-lit white plate, was created using DALL-E 2 and appeared across all of our marketing materials:

Location

We hosted the byte in one of Frankfurt’s coolest restaurant event locations: Stanley, a venue with approx. 60 seats and a fully-fledged bar inside the restaurant (ideal for our AI-generated cocktails). The atmosphere is dark and cozy, with dark marble walls highlighted by white tablecloths, and a big red window that lets you see into the kitchen from outside.

The menu

The heart of our concept was a 5-course menu that we designed to elevate classical Frankfurter cuisine with the multicultural and diverse influences of Frankfurt (everyone who knows Frankfurter cuisine will agree that this was not an easy task).

Using GPT-4 and some prompt-engineering magic, we generated several menu candidates, which were test-cooked by the experienced Stanley kitchen crew (thank you, guys, for the great work!) and then assembled into a final menu. Below, you can find our prompt for creating the menu candidates:

“Create a 5-course menu that elevates the classical Frankfurter kitchen. The menu must be a fusion of classical Frankfurter cuisine combined with the multicultural influences of Frankfurt. Describe each course, its ingredients as well as a detailed description of each dish’s presentation.”
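
For the technically curious: generating such candidates takes only a few lines of code. The following is a minimal sketch, assuming the official openai Python client; the model name, temperature, and number of candidates are assumptions, not our exact setup.

```python
# Minimal sketch for generating menu candidates (settings are assumptions).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Create a 5-course menu that elevates the classical Frankfurter kitchen. "
    "The menu must be a fusion of classical Frankfurter cuisine combined with "
    "the multicultural influences of Frankfurt. Describe each course, its "
    "ingredients as well as a detailed description of each dish's presentation."
)

# Sample the model several times to obtain multiple candidate menus
candidates = [
    client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # higher temperature -> more adventurous menus
    )
    .choices[0]
    .message.content
    for _ in range(3)
]

for i, menu in enumerate(candidates, start=1):
    print(f"--- Menu candidate {i} ---\n{menu}\n")
```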

Surprisingly, only minor adjustments were necessary to the recipes, even though some AI creations were extremely adventurous! This was our final menu:

  • Handkäs’ Mousse with Pickled Beetroot on Roasted Sourdough Bread
  • Next Level Green Sauce (with Cilantro and Mint) topped with a Fried Panko Egg
  • Cream Soup from White Asparagus with Coconut Milk and Fried Curry Fish
  • Currywurst (Beef & Vegan) by Best Worscht in Town with Carrot-Ginger-Mash and Pine Nuts
  • Frankfurt Cheesecake with Äppler Jelly, Apple Foam and Oat-Pecan Crumble

My favorite was the “Next Level” Green Sauce, an oriental twist of the classical 7-herb Frankfurter Green Sauce topped with a fried panko egg. Yummy! Below you can see the menu out in the wild 🍲

AI Cocktails

Alongside the menu, we also prompted GPT to create recipes that twisted famous cocktail classics to match our Frankfurt fusion theme. The results:

  • Frankfurt Spritz (Frankfurter Äbbelwoi, Mint, Sparkling Water)
  • Frankfurt Mule (Variation of a Moscow Mule with Calvados)
  • The Main (Variation of a Swimming Pool Cocktail)

My favorite was the Frankfurt Spritz, as it was fresh, herbal, and delicate (see pic below):

AI Host: Ambrosia the Culinary AI

An important part of our concept was “Ambrosia”, an AI-generated host that guided the guests through the evening, explaining the concept and how the menu was created. We thought it was important to manifest the AI as something the guests could experience. We hired a professional screenwriter for the script and used murf.ai to create several text-to-speech assets that were played at the beginning of the dinner and in between courses.

Note: Ambrosia starts talking at 0:15.

AI Music

Music plays an important role in the vibe of an event. We decided to use mubert, a generative AI start-up whose platform allowed us to create and stream AI music in different genres, such as “Minimal House”, for a progressive vibe throughout the evening. After the main course, a DJ took over and accompanied our guests into the night 💃🍸


AI Art

Throughout the restaurant, we placed AI-generated art pieces by the local AI artist Vladimir Alexeev (a.k.a. “Merzmensch”). Here are some examples:

AI Playground

As an interactive element for the guests, we created a small web app that takes a person’s first name and transforms it into a dish, including an explanation of why that name perfectly matches the dish 🙂 You can try it out here: Playground
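
The core idea behind such an app fits in a few lines. This is a hypothetical sketch, assuming the official openai Python client; the prompt wording and the function name are made up for illustration and are not the actual implementation behind the Playground.

```python
# Hypothetical sketch of the name-to-dish idea, not the actual Playground code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def dish_for_name(first_name: str) -> str:
    """Ask the model to invent a dish for a first name, with a short reasoning."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Invent a dish inspired by the first name '{first_name}'. "
                "Explain in two sentences why the dish perfectly matches the name."
            ),
        }],
    )
    return response.choices[0].message.content

print(dish_for_name("Julia"))
```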

Launch

The byte was officially announced at the S-O-U-P festival press conference in early May 2023. We also launched additional marketing activities through social media and our friends and family networks. As a result, the byte was fully booked for three days straight, and we got broad media coverage in various gastronomy magazines and the daily press. The guests were (mostly) amazed by our AI creations, and we received inquiries from other European restaurants and companies interested in exclusively booking the byte as an experience for their employees 🤩 Nailed it!

Closing and Next Steps

Creating the byte together with Jonathan and James was an outstanding experience. It further convinced me that AI will transform not only our economy but all aspects of our daily lives. There is massive potential at the intersection of creativity, culture, and AI that is currently only beginning to be tapped.

We definitely want to continue the byte in Frankfurt and other cities in Germany and Europe. Moreover, James, Jonathan, and I are already thinking of new ways to bring AI into culture and society. Stay tuned! 😏

The byte was not just a restaurant; it was an immersive experience. We wanted to create something that had never been done before and did it – in just eight weeks. And that’s the inspiration I want to leave you with today:

Trying new things that move you out of your comfort zone is the ultimate source of growth. You never know what you’re capable of until you try. So, go out there and try something new, like building an AI-powered pop-up restaurant. Who knows, you might surprise yourself. Bon appétit!

Impressions

Media

FAZ: https://www.faz.net/aktuell/rhein-main/pop-up-resturant-the-byte-wenn-chatgpt-das-menue-schreibt-18906154.html

Genuss Magazin: https://www.genussmagazin-frankfurt.de/gastro_news/Kuechengefluester-26/Interview-James-Ardinast-KI-ist-die-Zukunft-40784.html

Frankfurt Tipp: https://www.frankfurt-tipp.de/ffm-aktuell/s/ugc/deutschlands-erstes-ai-restaurant-the-byte-in-frankfurt.html

Foodservice: https://www.food-service.de/maerkte/news/the-byte-erstes-ki-restaurant-vor-dem-start-55899?crefresh=1

Sebastian Heinz

statworx at Big Data & AI World

From media to politics, and from large corporations to small businesses, artificial intelligence has finally gained mainstream recognition in 2023. As AI specialists, we were delighted to represent statworx at one of the largest AI expos in the DACH region, “Big Data & AI World,” held in our hometown of Frankfurt. This event centered around the themes of Big Data and Artificial Intelligence, making it an ideal environment for our team of AI experts. However, our purpose went beyond mere exploration and networking. Visitors had the opportunity to engage in an enthralling Pac-Man game with a unique twist at our booth. In this post, we aim to provide you with a comprehensive overview of this exhilarating expo.

Fig. 1: our exhibition stand

Tangible AI Experience

Our Pac-Man challenge, which gave booth visitors an up-close encounter with the captivating world of artificial intelligence, emerged as a clear crowd favorite. Through our arcade machine, attendees not only immersed themselves in the timeless retro game but also witnessed the remarkable capabilities of modern technology. Leveraging AI, we analyzed players’ facial expressions in real time to discern their emotions. This fusion of cutting-edge technology and an interactive gaming experience was met with exceptional enthusiasm.

Our AI solution for emotion analysis ran seamlessly on a powerful M1-chip-equipped MacBook, enabling real-time image processing and fluid graphics display. Players’ faces were located by a face-detection algorithm that instantly found all faces in the video stream. The face closest to the camera was then selected and focused on, ensuring precise analysis even amidst long queues. The selected face was passed to a Convolutional Neural Network (CNN), specifically a ResNet18 model, which classified the player’s emotions.
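
To make the pipeline more concrete, here is a simplified sketch of the detect-then-classify loop. It is an illustration under assumptions, not our production code: it uses OpenCV’s Haar cascade for face detection and a ResNet18 with a replaced output layer; the weights file and the emotion label set are hypothetical.

```python
# Simplified detect-then-classify sketch; weights file and labels are hypothetical.
import cv2
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]  # hypothetical labels

# ResNet18 whose final layer was replaced to output one logit per emotion
model = resnet18(num_classes=len(EMOTIONS))
model.load_state_dict(torch.load("emotion_resnet18.pt", map_location="cpu"))
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # webcam stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Approximate "closest to the camera" by the largest bounding box
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = model(preprocess(crop).unsqueeze(0))
        emotion = EMOTIONS[logits.argmax(dim=1).item()]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, emotion, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```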

Functioning as a multimedia server, our backend handled the webcam stream, the face detection, and the emotion classification. It could be operated either on-site on a MacBook or remotely in the cloud. On top of this backend, we built an appealing frontend to vividly present the real-time analysis results. Additionally, after each game, the results were emailed to the players by linking the model with our CRM system. For the email, we created a digital postcard that provides not only screenshots of the most intense emotions but also a comprehensive evaluation.

Fig. 2: Visitor at Pac-Man game machine

Artificial Intelligence – Real Emotions

Our Pac-Man challenge sparked excitement among expo visitors. Alongside the unique gaming experience on our retro arcade machine, participants gained insights into their own emotional states during gameplay. They were able to meticulously observe the prevailing emotions at different points in the game. Often, a slight surge of anger or sadness could be measured when Pac-Man met an untimely digital demise.

However, players exhibited varying reactions to the game. While some seemed to experience a rollercoaster of emotions, others maintained a poker face so unwavering that even the AI could elicit nothing but a neutral expression from it. This led to intriguing conversations about how the measured emotions corresponded with the players’ experiences. It was evident, without the need for AI, that visitors left our booth with positive emotions, driven in part by the prospect of winning the original NES console we raffled among all participants.

Fig. 3: digital postcard

The AI Community on the Move

The “Big Data & AI World” served not only as a valuable experience for our company but also as a reflection of the burgeoning growth in the AI industry. The expo offered a platform for professionals, innovators, and enthusiasts to exchange ideas and collectively shape the future of artificial intelligence.

The energy and enthusiasm emanating from the diverse companies and startups were palpable throughout the aisles and exhibition areas. Witnessing the application of AI technologies across various fields, including medicine, logistics, automotive, and entertainment, was truly inspiring. At statworx, we have already accumulated extensive project experience in these domains, fostering engaging discussions with fellow exhibitors.

Our Conclusion

Participating in the “Big Data & AI World” was a major success for us. The Pac-Man Challenge with emotion analysis attracted numerous visitors and brought joy to all participants. It was evident that it wasn’t just AI itself but particularly its integration into a stimulating gaming experience that left a lasting impression on many.

Overall, the expo was not only an opportunity to showcase our AI solutions but also a meeting point for the entire AI community. The sense of growth and energy in the industry was palpable. The exchange of ideas, discussions about challenges, and the establishment of new connections were inspiring and promising for the future of the German AI industry.
Max Hilsdorf

Last December, the European Council published a dossier outlining the Council’s preliminary position on the draft law known as the AI Act. This new law is intended to regulate artificial intelligence (AI) and thus becomes a game-changer for the entire tech industry. In the following, we have compiled the most important information from the dossier, which is the current official source on the planned AI Act at the time of publication.

A legal framework for AI

Artificial intelligence has enormous potential to improve and ease all our lives. For example, AI algorithms already support early cancer detection or translate sign language in real time, thereby eliminating language barriers. But in addition to the positive effects, there are risks, as the recent deep fakes of Pope Francis or the Cambridge Analytica scandal illustrate.

The European Union (EU) is currently drafting legislation to regulate artificial intelligence and mitigate its risks. With it, the EU wants to protect consumers and ensure the ethically acceptable use of artificial intelligence. The so-called “AI Act” is still in the legislative process but is expected to be passed in 2023 – before the end of the current legislative period. Companies will then have two years to implement the legally binding requirements. Violations will be punished with fines of up to 6% of global annual turnover or €30,000,000 – whichever is higher. Therefore, companies should start addressing the upcoming legal requirements now.

Legislation with global impact

The planned AI Act is based on the “location principle”, meaning that not only European companies will be affected by the law. All companies that offer AI systems on the European market or operate them for internal use within the EU fall under the AI Act – with a few exceptions. Private use of AI remains untouched by the regulation so far.

Which AI systems are affected?

The definition of AI determines which systems will be affected by the AI Act. For this reason, the AI definition of the AI Act has been the subject of controversial debate in politics, business, and society for a considerable time. The initial definition was so broad that many “normal” software systems would also have been affected. The current proposal defines AI as any system developed through machine learning or logic- and knowledge-based approaches. It remains to be seen whether this definition will ultimately be adopted.

7 Principles for trustworthy AI

The “seven principles for trustworthy AI” are the most important basis of the AI Act. A group of experts from research, the digital economy, and associations developed them on behalf of the European Commission. They include not only technical aspects but also social and ethical factors that can be used to classify the trustworthiness of an AI system:

  1. Human agency & oversight: decision-making should be supported without undermining human autonomy.
  2. Technical robustness & safety: accuracy, reliability, and security must be preemptively ensured.
  3. Privacy & data governance: the handling of data must be legally compliant and protected.
  4. Transparency: interaction with AI must be clearly communicated, as must the system’s limitations and boundaries.
  5. Diversity, non-discrimination & fairness: unfair bias must be avoided throughout the entire AI lifecycle.
  6. Environmental & societal well-being: AI solutions should have as positive an impact as possible on the environment and society.
  7. Accountability: responsibilities for the development, use, and maintenance of AI systems must be defined.

Based on these principles, the AI Act’s risk-based approach was developed, allowing AI systems to be classified into one of four risk classes: low, limited, high, and unacceptable risk.

Four risk classes for trustworthy AI

The risk class of an AI system indicates the extent to which an AI system threatens the principles of trustworthy AI and which legal requirements the system must fulfill – provided the system is fundamentally permissible. This is because, in the future, not all AI systems will be allowed on the European market. For example, most “social scoring” techniques are assessed as “unacceptable” and will not be allowed by the new law.

For the other three risk classes, the rule of thumb is: the higher the risk of an AI system, the higher the legal requirements for it. Companies that offer or operate high-risk systems will have to meet the most requirements. AI used to operate critical (digital) infrastructure or in medical devices, for example, is considered high-risk. To bring such systems to market, companies will have to observe high quality standards for the data used, set up risk management, affix a CE mark, and more.

AI systems in the “limited risk” class are subject to information and transparency obligations. Accordingly, companies must inform users of chatbots, emotion recognition systems, or deep fakes about the use of artificial intelligence. Predictive maintenance or spam filters are two examples of AI systems that fall into the lowest-risk category “low risk”. Companies that exclusively offer or use such AI solutions will hardly be affected by the upcoming AI Act. There are no legal requirements for these applications yet.

What companies can do for now

Even though the AI Act is still in the legislative process, companies should act now. The first step is to clarify how they will be affected by the AI Act. To help you do this, we have developed the AI Act Quick Check. This free tool quickly assigns AI systems to a risk class and derives the requirements for each system. Finally, it can serve as a basis for estimating how extensive the implementation of the AI Act will be in your own company and for taking initial measures.

AI Act Tool     AI Act Fact Sheet

 

Benefit from our expertise!

Of course, we are happy to support you in evaluating and solving company-specific challenges related to the AI Act. Please do not hesitate to contact us!

     

Julia Rettig
