
AI in the Workplace: How We Turn Skepticism into Confidence

  • Artificial Intelligence
  • Data Culture
  • Human-centered AI
08. February 2024

Tarik Ashry
Team Marketing

The discussion surrounding the use of artificial intelligence (AI) in the workplace appears to be splitting society. A tension is emerging: Some see AI as a groundbreaking advancement, while others fear it as a dystopian nightmare. There seems to be little middle ground.

At statworx, our AI & Society working group has made it its mission to dive deep into this discourse and find answers to the pressing questions facing society. To do so, we conducted a non-representative survey—partly online and partly in downtown Frankfurt—with 132 participants. We wanted to find out: What do people outside the AI bubble think about artificial intelligence in the workplace? Where do they see the greatest potential, and what are their biggest fears? Our goal was not just to collect opinions but to understand people’s fears and hopes in order to develop solutions for a socially responsible use of AI. To support our findings, we also analyzed other relevant studies and surveys and derived recommendations based on their insights.

What People Think About AI

When it comes to the use of AI in the workplace, there is broad agreement on only one aspect: Companies and individuals who integrate AI into their daily work gain an advantage over others. Beyond that, opinions vary widely, creating a complex and sometimes contradictory picture drawn from studies, public sentiment, personal views, and emotions. Here are some key insights:

53% of our survey participants would like to use more AI applications in their studies and work. At the same time, 45% are unaware that they already use AI-powered services such as Google Maps and Spotify in their daily lives. This highlights a significant lack of awareness regarding what “artificial intelligence” actually includes—and what it does not.

Our survey also reveals that many people across different industries are worried about the future of AI. 55% say they are more concerned than excited about AI. More than half of the respondents even admitted to fearing the “general development” of AI. Similarly, 40% of respondents in a survey by the Allensbach Institute for Public Opinion Research said that generative AI unsettles them. Another study found that 58% of Germans consider AI “unappealing.” These results indicate that a significant portion of the population harbors vague fears and negative associations regarding AI.

When asked about AI’s impact on their own work, one-third of respondents expect a “rather strong” influence. However, nearly 60% believe AI will “have little or no effect” on their job. Less than a third expect AI to make their work more interesting, and only about a quarter believe AI will allow “more room for creativity.” These trends are supported by findings from an online survey conducted by the market research institute Bilendi, which focused on non-academic professionals with vocational training. One in four respondents stated that AI would primarily benefit companies, while employees themselves would not experience any real workload relief; instead, technological progress would simply increase the amount of work to be done. This suggests that AI’s impact on daily work is perceived more negatively than positively. However, only one-fifth of Bilendi’s respondents believe AI will eventually replace their job entirely. Likewise, 58% of our respondents do not believe AI will lead to higher unemployment; by comparison, 33% of respondents in the KIRA study do worry about job losses.

What We Really Need to Prepare For

International comparison studies suggest that our survey respondents may be underestimating the impact of artificial intelligence. A 2023 Ipsos survey illustrates the low awareness in Germany regarding AI’s transformational potential:

  • 35% of German respondents believe it is likely that AI will change their current job within the next five years (ranking second to last among surveyed countries, with an average of 57%).
  • Only 19% think AI will replace their current job in the same timeframe, compared to a global average of 36%.
  • Just 23% believe that the increased use of AI will improve their job in the next 3-5 years, while the international average is 37%.
  • 40% expect AI to worsen the German labor market, while only 20% anticipate an improvement.

These results indicate that companies - and especially employees - feel unprepared for the challenges AI presents and lack a realistic understanding of the expected transformations. Educational opportunities and individual AI skills are lacking. These vague fears, combined with a lack of knowledge, are turning into concrete challenges for employers: How should companies handle differing employee perspectives on AI? How (if at all) should they address those who completely reject AI? And how should they manage those who are overly enthusiastic or even excessively eager? How can employees be empowered to use AI tools competently? Which departments and employees need which specific AI skills? And how should AI training be tailored to individual needs?

One thing is certain: Advances in generative AI are having a massive impact on the job market. On one hand, there is growing demand for professionals who combine data expertise with industry knowledge, leading to entirely new job roles such as prompt engineers. On the other hand, AI-driven (partial) automation threatens mass layoffs in many industries. BILD newspaper’s recent announcement that it would cut jobs, citing ChatGPT, is likely just a preview of what’s to come. Some estimates even suggest that up to 80% of jobs could be automated in the coming decades. The UN and other experts consider this unrealistic, but it underscores an important point: We don’t truly know where this journey will lead. This uncertainty arises because industries vary too greatly, job roles are multidimensional, and humans (for now) are not easily replaceable. Just because an AI system automates a specific task does not mean it can replace an entire job role. Even AI researchers agree: In a large-scale survey of over 2,700 AI experts, only 10% expected that AI could surpass humans in all tasks by 2027. However, half of the respondents believed that this technological breakthrough could happen by 2047. The only certainty is this: In the future, human workers will collaborate with AI across all industries.

Just a Lack of Knowledge? Understanding AI Skepticism

AI risks becoming a divisive societal issue if we fail to integrate it into the workplace in a socially responsible manner. How can we ensure that happens? Most studies point in the same direction: Education is a key factor in helping people understand what AI can and may do, make informed decisions about and with AI, and reduce fears surrounding the technology. Our survey confirms this: Even though they already use AI more frequently, executives still express a desire for more knowledge, as do the 53% of respondents who would like to use more AI in their professional environment. However, basic knowledge about the technology and its responsible use is not enough.

Many people also want to learn more about AI’s risks and find ways to empower themselves. This is strongly reflected in the qualitative responses from our study. One participant stated that “AI should be made understandable to all age groups so that no knowledge gap emerges.” Risk assessment is a critical issue for many, as another participant put it: “People are most interested in learning about AI risks—they are least interested in how AI actually works.” This heightened awareness of risk is also reflected in the KIRA study. In line with this, we found that when it comes to AI applications, respondents prioritize high security above all else, while fast availability is the least important factor. Interestingly, executives rate “high security” slightly lower and “fast availability” slightly higher than employees do.

In addition to concerns about direct AI risks, such as bias and discrimination, people also worry about dependency, loss of control, feeling overwhelmed, and fears of misuse or manipulation. A particularly prominent concern is AI-driven surveillance: 62% of our respondents and 54% of KIRA study participants share this fear. A similar trend appears with disinformation, which 51% of our respondents and 56% of KIRA participants associate with AI. Interestingly, however, 59% of our respondents do not believe AI poses a threat to humanity, whereas 58% of KIRA study participants are concerned about this possibility.

Who Are the AI Skeptics?

If the overwhelming amount of data and studies leaves you feeling uncertain, you are not alone. There is no clear picture. Even less can be inferred about how to specifically engage with AI skeptics. The discrepancies in responses reflect the complexity of AI perception and its impact on the labor market. The various perspectives and challenges highlight the need for a broad dialogue and participatory approach in shaping the future of AI in the workplace. But who do we need to engage with, and how?

A survey of 140 executives published in the MIT Sloan Management Review identifies three ideal types of AI-assisted decision-makers: skeptics, interactors, and delegators. Skeptics are unwilling to cede decision-making autonomy to AI, while delegators are comfortable handing over responsibility to it. Interactors take a middle path, adjusting their approach depending on the specific decision. These three decision-making styles reveal that the quality of an AI recommendation is only half of the equation when assessing AI-assisted decision-making in organizations; the human filter makes the difference, say Philip Meissner and Christoph Keding. Notably, delegators tend to pass responsibility to others even without AI involvement.

A study by EY found that tech skeptics tend to be older, have lower incomes, and express dissatisfaction with their lives. They fear that future generations will be worse off and doubt that young people today will have a better life than their parents. Tech skeptics are concerned about their financial security, distrust the government, and are not convinced of technology’s benefits. While they use technology for basic tasks, they do not believe it will solve society’s problems. Although they possess fundamental digital skills, few see value in further developing them. Tech skeptics are generally opposed to data sharing, even when there is a clear purpose for it.

What Can We Learn from This?

Skepticism is a deeply human trait, often shaped by character and personality, and is not always receptive to purely logical arguments. Therefore, a cornerstone of any effort to address AI skepticism must be transparent, empathetic communication on equal footing that takes these concerns seriously.

A Forrester survey on AI in human resources identified four key groups that leaders should tailor their communication strategies to:

  • AI Skeptics: Most commonly found in the IT sector
  • AI Supporters: Typically between 26 and 35 years old and working in healthcare
  • AI Indifferent: Most likely between 36 and 45 years old
  • AI Enthusiasts: Often 18 to 25 years old, working in sales

Regardless of how well this classification applies to German society, creating personas can be a useful approach to developing appropriate messaging. Each of these groups reacts differently to various forms of communication. About half of AI supporters say that transparency regarding whether AI will lead to job losses in their own company would reduce their concerns and fears about AI in human resources. Only 18% of those who are indifferent to AI feel the same way. While more than half of AI skeptics stated that communication about how the company is using AI would alleviate their concerns and fears, only 22% of AI supporters share this view. 45% of cautious supporters and skeptics stated that they would be more likely to leave the company if their concerns about the use of AI in the HR department were not addressed.

How Can We Build AI Confidence?

When it comes to AI adoption, we have only seen the tip of the iceberg. As companies continue to expand their AI infrastructure, they must also ensure that their employees feel empowered to use AI in their respective roles. In other words, leadership must clearly demonstrate that they see their employees as partners, not just passengers, on the AI journey.

Our assessment is that more robust policies for responsible AI use in companies and teams are needed to turn concerns into confidence and prevent the unintended leakage of trade secrets or other sensitive data. This includes maximum transparency about the (planned) use of AI. The reality is that most employees do not truly understand how AI works, while many decision-makers believe they do. Upskilling helps bridge this gap. Targeted training for employees promotes a safe and productive adoption of AI. If employees also understand - because it is clearly demonstrated to them - how AI can improve their work, they will be more willing to embrace it. After all, most employees already hope that AI will help them access information more easily and increase their productivity. Even those who are skeptical are usually open to rational arguments. The key is how they are approached. Instead of assuming understanding and willingness, it can be helpful to engage them on an individual level: “When you have to make a major financial decision, do you rely solely on your gut feeling, or do you try to gather as much data and information as possible?” It must become clear that (in almost all typical cases) AI is a technology designed to help people make better, evidence-based decisions - not some dystopian scenario where humans become mere tools.

For this, targeted internal communication is important. Companies where tech and AI enthusiasts dominate may be best served by positioning AI as an exciting, revolutionary trend to their employees. If this audience values having a modern and high-tech employer, that should be reflected in communication. Other companies, where enthusiasm is lower and a more conservative, cautious mindset prevails, may find it more effective to present AI internally as a continuous improvement and evolution of existing, familiar technologies. They might even consider avoiding the AI label altogether and instead aligning new systems with familiar names and descriptions. There is no universal answer to which communication strategies work best in which company (and in which departments). However, the personas and their reasons for AI skepticism presented earlier provide valuable insights into how internal sentiment can be captured and used to develop an appropriate communication strategy.

About the AI & Society Working Group

As a working group with insights into the latest research, we engage in discussions with experts from business, society, and academia. Our work is not limited to analysis; we also take action. "KI Macht Schule" and Girls' Day are just two examples of our efforts to involve society in the dialogue and make AI a tangible experience. The development of Responsible AI principles and workshops are further measures aimed at driving innovation responsibly.


