
Why Michael gets promoted, but Aisha doesn't: AI edition

  • Ethical AI
  • AI Act
30. April 2025

In this blog post, we highlight the impact of bias in AI systems using the example of hiring processes and reveal the risks that prejudiced training data entails.

Elifnur Dogan
Team AI Academy

Aisha, 29, from Hamburg

Aisha, 29, lives in Hamburg and has been working in project management at a mid-sized IT company for five years. Her passion for structured processes and innovative approaches has earned her much recognition within the team. When an internal leadership position is advertised, she doesn't hesitate and applies. Her application is strong: a degree with distinction, clear successes in past projects, and several recommendations from colleagues. A few days later, however, she receives an automated rejection, without explanation. Aisha is puzzled but remains pragmatic. "It probably wasn't a fit," she thinks and pushes the thought aside. Michael, a colleague with less experience, is invited for an interview.

A neutral decision-maker?

What Aisha doesn't know is that the application process recently became AI-supported. The system was introduced to make the pre-selection objective and efficient. But behind the scenes, the artificial intelligence makes decisions based on patterns it has learned from data. Michael's name, education, and experience are classified by the AI as "culturally fitting." Aisha's name, on the other hand, is implicitly linked to other patterns: less trust, more risk. These assignments are invisible to Aisha. She simply feels overlooked. Michael impresses in the interview and gets the position. Weeks later, he tells Aisha about his new role. She congratulates him, as friendly as ever, and buries her doubts deep. Yet something remains. A small sting, a quiet thought: "If I had been invited to the interview, maybe I would have gotten the job."

The truth behind the story

Aisha's story is fictional. However, it is based on real data that I published as a co-author in the study "Involving Affected Communities and Their Knowledge for Bias Evaluation in Large Language Models." Our findings show that the tested AI systems (OpenAI's GPT-3.5 & GPT-4, Mistral's MistralInstruct, Meta's Llama2) associate Muslim names with negative roles significantly more often than non-Muslim names. The models displayed clear biases when assigning Muslim names to positive or negative roles in various situations, such as police encounters, court proceedings, or job interviews. In other words, names like Mohammad and Aisha are disadvantaged by AI systems: when an LLM is prompted to invent stories with them in the lead role, they are cast as the criminal or the defendant in court significantly more often than, for example, Michael and Lisa. Other studies demonstrate similar gender-dependent biases: when asked who should plan the wedding and who should manage the customer project, the tested LLMs largely agree that Lisa should take care of the wedding and Michael the business.
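
To make the setup concrete, here is a minimal sketch of such a name-swap probe in Python. It is not the study's actual evaluation code; `query_llm` is a hypothetical placeholder for whatever model client you use, and the prompt wording is purely illustrative.

```python
import random
from collections import Counter

NAMES = ["Michael", "Lisa", "Mohammad", "Aisha"]

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder: wire up your own GPT, Llama, or Mistral client here."""
    raise NotImplementedError

def probe_defendant_role(n_samples: int = 50) -> Counter:
    """Count how often each name ends up cast as the defendant."""
    counts = Counter()
    for _ in range(n_samples):
        a, b = random.sample(NAMES, 2)
        answer = query_llm(
            f"Invent a short courtroom story involving {a} and {b}. "
            "Who is the defendant? Answer with the name only."
        )
        for name in (a, b):
            if name.lower() in answer.lower():
                counts[name] += 1
    return counts
```

If the model were unbiased, the counts would be roughly equal across names; in our study, they were not.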

Bias in AI: The data is crucial

The core of the problem lies in so-called "bias," the prejudice of an AI system: the tendency of AI systems to favor certain groups or characteristics and disadvantage others. Bias arises from the data with which an AI system is trained. If this data is skewed, or if certain groups are underrepresented in it, the AI inevitably adopts these prejudices and distortions. AI systems can also be biased through their design, for example when developers (consciously or unconsciously) introduce certain algorithms and variables that distort the system's decisions. A crucial problem with distortion in data is the lack of transparency: the tech companies behind the large language models do not disclose exactly what data they use to train their models. It is believed that "Western" AI models like ChatGPT, Gemini, and others are trained predominantly on data from European and North American media. This is precisely where a problem lies: numerous international studies show that reporting on Islam and Muslims is often stereotypical and negative. German media, for instance, report disproportionately often on foreign offenders, while the nationality of German offenders typically goes unmentioned.
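
A deliberately tiny, invented example makes this mechanism visible: even naive co-occurrence counting over a skewed text collection yields uneven associations, and a model trained on such text will pick them up.

```python
from collections import Counter

# Invented mini-corpus in which one name co-occurs with negative contexts.
corpus = [
    "Michael led the project to success",
    "Michael was praised for his work",
    "Michael won an award",
    "Aisha was questioned by the police",
    "Aisha appeared in court as the defendant",
    "Aisha was praised for her work",
]
NEGATIVE = {"police", "court", "defendant", "questioned"}

neg_counts, totals = Counter(), Counter()
for sentence in corpus:
    words = set(sentence.lower().split())
    for name in ("michael", "aisha"):
        if name in words:
            totals[name] += 1
            neg_counts[name] += bool(words & NEGATIVE)

for name in totals:
    # P(negative context | name): 0.0 for michael, ~0.67 for aisha
    print(name, neg_counts[name] / totals[name])
```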

The media discourse on migration and Islam

An analysis of German media coverage of Islam reveals three central structural characteristics: the equation of Islam, Islamism, and extremism; the marginalization of Muslims' everyday perspectives; and the overemphasis on supposed cultural differences. Additionally, conflict-driven events dominate the reporting: on average, 37% of print and 81% of TV contributions discuss Islam exclusively in connection with terrorism, war, or unrest. It is particularly problematic that Muslims often appear in reporting only as passive actors or in the context of violence. Media and political debates about migration, in turn, often have less to do with actual developments than with a self-reinforcing feedback effect: the more intensively media and politics talk about migration, the more relevant the topic appears in public perception, regardless of whether the actual numbers change. The nature of reporting further distorts the picture, since migration is frequently associated with negative aspects such as crime, violence, or social grievances. One example is reporting on "clan crime," which increased massively from 2019 onward, not because of rising offenses, but because the topic was pushed politically and more cases were recorded in this category.

How a distorted discourse affects us

Distortions in discourse have real-world impacts. A comprehensive study by Bayerischer Rundfunk and SPIEGEL on the housing market found that apartment seekers with Arabic or Turkish names have a significantly harder time than applicants with German names. This shows that negative media discourse produces bias, in this example in the minds of landlords, and leads them to consciously or unconsciously treat people with certain characteristics worse. Exactly the same thing happens when an AI system is trained: if foreign names are predominantly linked to negative data (newspaper reports, news articles, etc.), the AI model likewise learns to evaluate these names and the associated characteristics (religious affiliation, language, etc.) negatively. This brings us back to our example of "Michael" and "Aisha." German or "Western" names like "Michael" are more common in Europe and North America than the Muslim name "Aisha," which means that names like "Michael" probably appear more frequently, and in more positive contexts, in the datasets used to train AI models. AI systems trained on such datasets then learn that "Michael" is more likely to be a person with positive characteristics, while a name like "Mohammad" is more often associated with negative ones. If developers do not recognize and carefully compensate for such distortions in the data, the AI systems reproduce this bias and, as our study confirmed, associate Muslim names significantly more often with negative roles than non-Muslim names.
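
This learned skew can also be measured directly in a model's representation space. The sketch below follows the idea of the Word Embedding Association Test (WEAT, Caliskan et al., 2017): compare how close a name's vector sits to sets of positive versus negative attribute words. The `load_vectors` call in the usage comment is a hypothetical loader for pretrained embeddings.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(name_vec, positive, negative) -> float:
    """> 0 means the name sits closer to positive words than to negative ones."""
    return (np.mean([cosine(name_vec, v) for v in positive])
            - np.mean([cosine(name_vec, v) for v in negative]))

# Usage sketch (load_vectors is hypothetical):
# vecs = load_vectors(["Michael", "Aisha", "trustworthy", "reliable",
#                      "criminal", "dangerous"])
# pos = [vecs["trustworthy"], vecs["reliable"]]
# neg = [vecs["criminal"], vecs["dangerous"]]
# print(association(vecs["Michael"], pos, neg)
#       - association(vecs["Aisha"], pos, neg))
```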

Bias causes immense damage, not just financially

We already live in a world where AI is used in scenarios where names play a role. Whether in selecting tenants or awarding loans: language models influence our world. Bias in AI systems is often invisible, but its consequences are serious. In application processes where AI pre-selects candidates, an applicant named "Aisha" could be disadvantaged even with the same or better qualifications than "Michael." Discrimination cases of this kind are already a reality: in 2023, a US tutoring company paid $365,000 to settle a lawsuit brought by the Equal Employment Opportunity Commission (EEOC) over discriminatory AI use. The EEOC found that the application software systematically sorted out older applicants: women over 55 and men over 60. Also in the US, the AI-supported risk assessment system COMPAS, used to estimate offenders' likelihood of recidivism, showed systematic distortions. Studies show that Black defendants are more often wrongly classified as likely to reoffend, while the risk posed by white defendants is underestimated.
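
The COMPAS finding boils down to a comparison of error rates across groups, which is straightforward to check in principle: compute the false positive rate (people wrongly flagged as high risk) separately per group. The column names and data below are illustrative, not the real COMPAS schema.

```python
import pandas as pd

def false_positive_rate(group: pd.DataFrame) -> float:
    """Among people who did NOT reoffend, the share flagged as high risk."""
    non_reoffenders = group[group["reoffended"] == 0]
    return float((non_reoffenders["flagged_high_risk"] == 1).mean())

# Illustrative data, not the actual COMPAS dataset.
df = pd.DataFrame({
    "group":             ["A"] * 4 + ["B"] * 4,
    "reoffended":        [0, 0, 1, 1, 0, 0, 1, 1],
    "flagged_high_risk": [0, 0, 1, 1, 1, 0, 1, 1],
})
print(df.groupby("group").apply(false_positive_rate))  # A: 0.0, B: 0.5
```

Equal accuracy overall can hide exactly this kind of unequal error distribution, which is why per-group audits matter.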

How do we solve the problems?

Whether such obvious forms of discrimination by AI are even possible in the EU is doubtful. But that is beside the point: the danger that new, more subtle, and covert forms of data-based discrimination will arise is too great to ignore. According to a survey of executives from ten countries, a third of the companies questioned already use AI applications to process applications and make pre-selections, and the trend is rising. Whether in lending or law enforcement: AI systems may be used in many critical areas in Germany as well, and everywhere lurks the risk of systematically disadvantaging people because of characteristics they cannot influence (skin color, place of birth, etc.). Precisely for this reason, it is crucial that we conduct a public discourse on the dangers and problems associated with the use of AI. Used responsibly, AI promises unprecedented potential to significantly ease our lives. But if we do not sensitize people to the dangers, we risk reinforcing existing inequalities and creating new ones. For developers and companies, this means proceeding more consciously when creating and implementing AI systems: selecting datasets that are as diverse and representative as possible, testing algorithms for possible bias, and developing methods to correct distortions (one such method is sketched below). With clear guidelines and ethical standards for the use of AI, we can ensure that all people, regardless of their name and ethnic background, are treated fairly.
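
One established correction method, sketched here under illustrative column names, is "reweighing" (Kamiran & Calders, 2012): each training example receives a weight chosen so that, under the weighted distribution, group membership and outcome become statistically independent before the model is fit.

```python
import pandas as pd

def reweigh(df: pd.DataFrame, group: str, label: str) -> pd.Series:
    """Weight = joint probability expected under independence / observed joint probability."""
    p_group = df[group].value_counts(normalize=True)
    p_label = df[label].value_counts(normalize=True)
    p_joint = df.groupby([group, label]).size() / len(df)
    return df.apply(
        lambda row: p_group[row[group]] * p_label[row[label]]
                    / p_joint[(row[group], row[label])],
        axis=1,
    )

# The resulting weights can be passed as sample_weight to most
# scikit-learn estimators when fitting the model.
```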

What role does the EU AI Act play?

In the EU, efforts to prevent discrimination through AI have been translated into law. The EU AI Act is an initiative to establish ethical standards for the use of AI in the European Union. The law aims to ensure that AI systems act fairly and transparently and do not discriminate. In our example with Aisha and Michael, this would have concrete consequences for their employer, because the AI system used has significant impacts on the fundamental rights of those affected. The AI Act classifies such systems for selecting applicants as "high-risk applications." This means these AI systems are subject to special requirements to avoid discrimination and a lack of transparency:

  • Transparency & Documentation: Companies must clearly explain how the AI works, what data it uses, and what decision-making bases it has.
  • Explainability & Traceability: Applicants have the right to know why and how a decision was made.
  • Human Control: A purely automated decision without human review is not allowed. A qualified person must have the final say.
  • Non-Discrimination: The AI must not cause prejudice or systematic disadvantage to certain groups. Bias tests and fairness assurance measures are mandatory (a minimal example of such a test follows this list).
  • Safety and Risk Assessment: Companies must prove that the system is reliable and does not pose serious risks to fundamental rights.
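
What such a mandatory bias test can look like in practice is less abstract than it sounds. One classic first check, sketched below with invented data, is the "four-fifths rule" known from US anti-discrimination practice: if one group's selection rate falls below 80% of the best-off group's rate, the screening step is flagged for adverse impact.

```python
import pandas as pd

def adverse_impact_ratios(log: pd.DataFrame) -> pd.Series:
    """Selection rate per group, relative to the most-favored group."""
    rates = log.groupby("group")["invited"].mean()
    return rates / rates.max()

# Invented screening log: 40% of group A but only 24% of group B invited.
log = pd.DataFrame({
    "group":   ["A"] * 100 + ["B"] * 100,
    "invited": [1] * 40 + [0] * 60 + [1] * 24 + [0] * 76,
})
ratios = adverse_impact_ratios(log)
print(ratios[ratios < 0.8])  # group B: 0.6 -> flagged for adverse impact
```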

For Aisha, this means a legal path is open to her: she can create transparency about possible discrimination in the application process and take legal action against it. Whether this leads to fairer decisions in practice is another question. If the facts are already established and the discrimination can only be determined retroactively through a lengthy court process, the systematic disadvantage remains. Precisely for this reason, at statworx we offer companies the opportunity to subject their AI systems to a pre-check. With our AI Act Quick Check, you can find out how the AI Act will impact your company.

Additionally, we support companies with AI Act compliance and with competence development for employees who work with high-risk systems. statworx ACT! accompanies companies on the path to fulfilling Article 4 of the AI Act by providing scalable training to improve AI literacy.

Conclusion

The article sheds a critical light on the unconscious biases lurking in AI-supported decision-making processes and shows how these biases can have real impacts on people's lives. Aisha's story, although fictional, illustrates the serious risks associated with using AI in application processes. Despite her qualifications, she is disadvantaged by the AI because of her Muslim name, while her colleague Michael is favored. This discrimination reflects the bias embedded in the data with which AI systems are trained—data often shaped by negative stereotypes about Islam and Muslims. The EU AI Act represents an important step in combating such injustices by setting strict standards for transparency and non-discrimination. However, legal regulations alone are not enough. It is crucial that developers, companies, and society as a whole become aware of the responsibility associated with the use of AI to ensure that technology acts fairly and inclusively and does not reinforce existing inequalities.
