
New Trends in Natural Language Processing – How NLP Becomes Suitable for the Mass-Market

  • Data Science
  • Deep Learning
29. October 2020

Dominique Lade
Team AI Development

NLP (Natural Language Processing) broadly describes the computer-aided processing of human language, both written and spoken. The goals pursued with NLP fall into two overarching categories: understanding language and generating language. The technical challenge for both is to transfer unstructured information in the form of text into a format that a machine can process. In concrete terms, this means that text must be represented numerically so that a computer can work with it. Just a few years ago, this was only possible for a few tech companies. These companies had three decisive advantages:

  • Access to huge amounts of unstructured text data
  • Experts who can develop cutting-edge technologies
  • Computing capacity to process the amount of unstructured data

In this article, we show different NLP case studies and explain how, within only a few years, the barriers to market entry have been lowered to such an extent that today every company can use NLP to solve their business problems.

In which areas can NLP be used?

Computer-aided language processing is a very abstract term. It becomes more tangible when broken down into its areas of application, each of which uses a different, specialized model. It should be noted that tasks that are relatively easy for a human, such as recognizing emotions, also tend to be easier for a machine, while more complicated tasks, such as translating texts, tend to be harder for a computer to solve. The six most important NLP applications, and the business problems that can be solved with them, are discussed below.

Sequence Classification

The classic example of NLP is sequence classification. The goal is to assign text sequences to one of several previously defined classes, for example emotions (joyful, angry, exhilarated, etc.). A text is presented to the computer, which must decide independently which emotion the author wanted to express. Other examples are attributing a text to a known author or classifying document types. Note that a sequence can consist of text of any length: a single word (a sequence of letters), a sentence, a paragraph, or a complete document. An example of a shorter sequence would be a vacation review.

Case Study 1: A travel portal wants to specifically target customers with a negative vacation experience in a marketing campaign. For this purpose, existing customer reviews are divided into three classes – positive, neutral, and negative. Each review is automatically assigned to one of these classes.

Longer sequences could be, for example, postal mailings of any kind.

Case Study 2: The logistics department of an international company processes different document types in different teams. For this purpose, postal items are currently selected and sorted manually. In the future, they will be automatically assigned to a category. Incoming invoices, delivery bills, and other inquiries could be defined as categories.
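A minimal sketch of sequence classification, using a hypothetical keyword lookup instead of a trained model (all word lists are made up for illustration):

```python
# Hypothetical keyword-based review classifier: a toy stand-in for a
# trained sequence-classification model.
POSITIVE = {"great", "wonderful", "beautiful", "relaxing"}
NEGATIVE = {"dirty", "noisy", "terrible", "disappointing"}

def classify_review(text: str) -> str:
    """Assign a review to one of three predefined classes."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A real system would replace the fixed keyword lists with a model that learns the decision boundary from labeled reviews.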

Question-Answer Models

For question-answering problems, the computer is provided with as many text corpora as possible. The goal is to give a factually correct answer, based on information from the corpora, to questions posed by a person. The difficulty of this task varies with how explicitly the required information appears in the text. The simplest case is the verbatim extraction of existing text passages; the extracted information can then be packaged into a grammatically correct answer. Hardest to implement are logical conclusions drawn from the available information. For example, a given text could describe the layout of a company’s premises and mention buildings A, B, and C. A possible question could be, “How many buildings are there on the site?” The computer would have to conclude that the site consists of three buildings, even though the number itself is never stated.

Case Study 3: A medium-sized company has been observing a continuous increase in customer inquiries for a long time. Many of these inquiries are requests for information. The company decides to develop a chatbot. Based on internal support documents, customers can now independently and automatically ask questions to the chatbot and have them answered.
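The simplest solution mentioned above, extracting an existing passage, can be sketched as a word-overlap search (the corpus and question below are made up):

```python
def answer_by_extraction(question: str, corpus: str) -> str:
    """Return the corpus sentence that shares the most words with the question."""
    q_words = set(question.lower().rstrip("?").split())
    sentences = [s.strip() for s in corpus.split(".") if s.strip()]
    # Pick the sentence with the largest word overlap with the question.
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))
```

A production question-answering model would instead score candidate spans with a neural network, but the extract-then-phrase idea is the same.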

Generating Texts

Based on a given text, the next matching word should be predicted as accurately as possible. Appending the predicted word to the text and repeating the process produces texts of any desired length. In this way, texts with particular linguistic subtleties can be generated: a specific accent or dialect can be modeled, and simpler or more complex language can be used depending on the target group. The greatest challenge is that the generated text should be free of errors in both content and language.

Case Study 4: A manufacturer of a document management system wants to make it easier to find documents. In the built-in search mask, the search term is automatically supplemented with additional matching words.
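The predict-append-repeat loop described above can be sketched with a simple bigram model (the training text is made up; a real system would use a neural network rather than raw counts):

```python
from collections import Counter, defaultdict

def train_bigrams(text: str):
    """Count, for every word, which words follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def generate(model, start: str, steps: int) -> str:
    """Repeatedly append the most frequent successor of the last word."""
    out = [start]
    for _ in range(steps):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)
```

The same loop also covers the search-completion case study: the "generated text" is simply the user's partial query plus a few predicted words.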

Recognition of Phrases

In phrase recognition, also called Named Entity Recognition (NER), the goal is to assign one or more words in a sentence to a class. Grammatical units can be defined as possible phrases, dividing a sentence into subject, verb, object, etc. More often, user-defined entities are chosen instead of grammatical units; typically, the entities are places, natural persons, companies, or times. When a person decides which category a phrase belongs to, they automatically apply rules (such as grammar rules). In NER, the computer should learn similar decision rules. However, these rules are not given explicitly; the computer has to learn them independently.

Case Study 5: A hedge fund automatically analyzes the quarterly reports submitted to the SEC. The goal is to generate summaries of each company’s business activity automatically. The list of entities to be extracted thus consists of business type, division, director, etc.
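A minimal NER sketch using a fixed lookup table (a gazetteer) instead of learned rules; all entries are made up:

```python
# Hypothetical gazetteer: real NER models learn such assignments from
# data instead of relying on a fixed table.
GAZETTEER = {
    "berlin": "PLACE",
    "monday": "TIME",
    "acme": "COMPANY",
}

def tag_entities(sentence: str):
    """Label each token with its entity class, or 'O' for 'outside'."""
    tokens = sentence.lower().replace(".", "").split()
    return [(token, GAZETTEER.get(token, "O")) for token in tokens]
```

The limitation is exactly the one named above: a lookup table cannot generalize to entities it has never seen, which is why the decision rules must be learned.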

Summaries

The task of summarizing a text is much like the same exercise in a school lesson. The goal is a summary that contains all the relevant content and reads as naturally as possible, while adhering to the applicable spelling and grammar rules. The challenge is to teach the computer to separate important, relevant content from unimportant content.

Case Study 6: By analyzing its website’s usage behavior, an online news agency has found that fewer and fewer readers finish its articles. To make it easier for readers to extract relevant information, it wants to create automated summaries of existing and new articles, with the length and language complexity of each summary depending on the user profile.
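Separating important from unimportant content can be sketched with a frequency heuristic: keep the sentences whose words occur most often in the whole text (the example text is made up):

```python
from collections import Counter

def extractive_summary(text: str, keep: int = 1) -> str:
    """Keep the `keep` sentences whose words are most frequent overall."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freqs = Counter(text.lower().replace(".", " ").split())
    ranked = sorted(sentences,
                    key=lambda s: sum(freqs[w] for w in s.lower().split()),
                    reverse=True)
    top = set(ranked[:keep])
    # Emit the kept sentences in their original order.
    return ". ".join(s for s in sentences if s in top) + "."
```

This is extractive summarization; the abstractive variant, which rewrites content in new words, is considerably harder and is what modern generative models attempt.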

Translations

In text translation, text is transferred from one language to another while complying with the target language’s spelling and grammar rules and without changing the content. The computer faces similar problems with translation as a human being: the balance between content and grammatical correctness must be maintained without drifting away from the original text.

Case Study 7: A supplier operating nationally wants to expand into international markets. For this purpose, all existing technical specifications must be translated into the languages of the target markets. A particular challenge is the translation of technical, industry-specific vocabulary.
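A word-for-word dictionary lookup is the naive baseline that illustrates the difficulty: it preserves neither grammar nor context, which is exactly what modern translation models add (the glossary below is made up):

```python
# Hypothetical German-English glossary for a word-by-word baseline.
DE_EN = {"die": "the", "katze": "cat", "schläft": "sleeps"}

def translate_word_by_word(sentence: str) -> str:
    """Replace each word via the glossary; unknown words pass through."""
    return " ".join(DE_EN.get(w, w) for w in sentence.lower().split())
```

Such a baseline fails as soon as word order, inflection, or ambiguity differ between the languages, which is why translation counts among the harder NLP tasks.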

How have the NLP Models developed?

The history of NLP models can be divided into three eras: naive models, static vector models, and dynamic vector models.

In the early days, attempts were made to determine the meaning of texts by counting words or pairs of words. This required very intensive preparation of the texts, although the actual model computation on the counts can be done very quickly with today’s computers. However, all context is lost when only counting words.

The next development step was static vector models. The idea behind these models is that each word is represented by a vector, i.e., a series of numbers, usually computed with deep learning models. These vectors then serve as input for another model, e.g., another deep learning model, which solves the actual task, such as classifying the texts. The vectors made it possible to capture the context of a word better, since the surrounding words are taken into account when its vector is calculated. However, the vector for a given spelling is still identical regardless of the word’s actual meaning: in the example below, the vector for ‘park’ would be the same in both sentences.

I don’t know how to parallel park. (park = the verb “to park”)
I’m taking my dog for a walk at the park. (park = open green area)

Calculating these vectors, as well as the downstream model, is very time- and compute-intensive. However, because the vectors are context-independent, they can be computed once in advance, so prediction remains very efficient. The latest generation of NLP models is similar to the second, except that the vectors are now calculated with respect to a word’s context. In the example above, a different vector would thus be calculated for each of the two occurrences of ‘park’. This makes both training the model and prediction very computationally intensive.
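The limitation of static vectors can be made concrete with a toy lookup table (all numbers are made up): every occurrence of ‘park’ receives the same vector, regardless of the sentence it appears in.

```python
# Toy static embedding table: one fixed vector per word form,
# independent of context.
STATIC_VECTORS = {"park": [0.2, -0.7, 0.5], "dog": [0.9, 0.1, -0.3]}

def embed(sentence: str):
    """Look up a fixed vector for each word, ignoring its context."""
    return [STATIC_VECTORS.get(w.strip(".,"), [0.0, 0.0, 0.0])
            for w in sentence.lower().split()]
```

A dynamic (contextual) model would instead compute the vector from the whole sentence, so the verb ‘park’ and the noun ‘park’ would come out differently.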

Why has NLP become so relevant?

Google kicked off the new era of NLP with the so-called BERT model at the end of 2018 (the official GitHub repository is at https://github.com/google-research/bert). Since then, adaptations and further developments of the model have been published monthly by universities and by companies such as Facebook and, of course, Google itself. The majority of these models are available to the general public free of charge, and commercial use is almost always permitted. In many areas, the performance of this latest generation of NLP models is on par with, or already above, what humans can achieve. Research has produced benchmark datasets for the various tasks and subareas of language processing; these tasks were first solved by humans to create a reference value for computers to beat. Meanwhile, NLP models achieve near-human results in nearly all areas. It should be noted, however, that these datasets are very general: each benchmark tries to cover its subarea as broadly as possible in order to make the best possible general statement about performance. Business problems, by contrast, are usually much more specific. For example, a model may capture the general sentiment of all kinds of texts very well and thus score highly on the benchmark, while the business problem is to evaluate the sentiment of customer posts in social networks or of incoming complaint emails. From a human point of view, both tasks are very similar; for a machine, it can make a big difference whether it deals with short, informal texts such as social media posts or longer, formal texts such as emails. Evaluating the models on the concrete business problem is therefore essential.

How did NLP become so easy to use?

Until a few years ago, there were three fundamental problems in the development of artificial intelligence, and especially in the subarea of NLP, that made developing and adapting these models difficult: the allocation of resources in the three areas of data, computing power, and human capital. Pre-training of models has significantly mitigated all three issues. The large companies driving NLP model development invest in these resources and afterwards provide the pre-trained models, mostly free of charge. The models are already outstanding at general text understanding but usually leave room for improvement on specific problems. Crucially, the largest share of resources is required for the first part, the general representation of text. The pre-trained models can then be fine-tuned to specific business problems with relatively little effort, achieving excellent results at low cost.
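This division of labor can be sketched in miniature: a frozen, “pre-trained” word-vector table is reused as-is, and only a tiny task-specific head is trained. All vectors, example texts, and labels below are made up for illustration; real fine-tuning adjusts millions of parameters in a neural network.

```python
# Frozen "pre-trained" vectors: the expensive part, reused without change.
PRETRAINED = {"good": [1.0, 0.0], "bad": [-1.0, 0.0], "film": [0.0, 1.0]}

def features(text: str):
    """Average the frozen word vectors of a text."""
    vecs = [PRETRAINED.get(w, [0.0, 0.0]) for w in text.lower().split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def fine_tune(examples, epochs: int = 20, lr: float = 0.5):
    """Train only a two-weight linear head with the perceptron rule."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for text, label in examples:  # label is +1 or -1
            x = features(text)
            predicted = 1 if w[0] * x[0] + w[1] * x[1] > 0 else -1
            if predicted != label:
                w = [w[0] + lr * label * x[0], w[1] + lr * label * x[1]]
    return w

def predict(w, text: str) -> str:
    x = features(text)
    return "positive" if w[0] * x[0] + w[1] * x[1] > 0 else "negative"
```

The point of the sketch is the cost asymmetry: the vector table stands in for the expensive pre-training, while the trained head is cheap and task-specific.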

Entry Barrier: Data Availability

As the complexity of the models increases, the amount of data needed for training grows exponentially. The performance of these models comes from looking at the context of a word; consequently, a model must see as many words in as many combinations as possible. The Internet provides access to extensive text collections: the BERT model mentioned above, for instance, was trained on a collection of books with about 800 million words and the complete English Wikipedia, with about 2.5 billion words.

Entry Barrier: Computing Power

The increasing demand for data and the growing complexity of the models result in a higher need for computing power. Two trends are relevant here: the power of computers roughly doubles every two years, and at the same time computing power is becoming cheaper. Since the triumph of cloud providers, access to computing power has also been democratized. Extremely high-performance, specialized compute clusters are now available not only to large companies but to everyone, with per-minute billing.

Entry Barrier: Talent Acquisition

In the early days of AI, it was necessary either to build up a competitive development team within one’s own organization or to purchase the complete development from specialized companies. As a result, a great deal of money had to be invested up front to put a finished AI product into operation after a development period that often lasted several years. Such projects frequently failed or did not add enough value, and financial investments with this risk profile were usually only possible for large multinational companies. Today, most newly developed NLP models are available for free, even for commercial purposes. It is therefore possible to get a proof of concept within weeks instead of months or years, the time to launch a complete product has been reduced from years to months, and iterations in product development are now possible very quickly, with a low initial investment.

What challenges remain?

Many of the original problems have been mitigated or completely solved. Above all, these developments have immensely shortened the time needed to complete a feasibility study.

Model

Currently, pre-trained models are available from a variety of companies and providers. They are continually being developed, and an improved version is often released after a few months; several versions of the same model, differing in complexity or language, are frequently released at the same time. It is therefore crucial to explore and evaluate models in the initial phase of an NLP project.

Model performance can be divided into two dimensions: the quality of the results and the execution speed. Evaluating the quality of results can be challenging, depending on the task. When classifying emotions, it is usually easy to determine whether the model was right or wrong; assessing summaries is much more difficult. It is vital in the initial phase of a project to define a measure of quality that is technically feasible but also reflects the business problem. The second dimension, execution speed, covers both the time required for training and for prediction. It is very important to coordinate the model requirements with all project partners at an early stage: a model that has to answer questions live within milliseconds has different properties than a model that calculates results once a day overnight.
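For tasks with clear-cut labels, such as emotion classification, the simplest quality measure is accuracy, the share of correct predictions (the labels below are made up):

```python
def accuracy(predictions, gold_labels) -> float:
    """Fraction of predictions that match the gold labels."""
    matches = sum(p == g for p, g in zip(predictions, gold_labels))
    return matches / len(gold_labels)
```

For summaries or translations, no such exact match exists, which is why defining the quality measure early in the project matters so much.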

Data

The topic of data is generally a double-edged sword in AI, and especially in NLP. On the one hand, data is typically available, and computer systems can process it; by pre-training models, a large part of the work with data is taken off our hands. On the other hand, pre-trained models are designed to work as well as possible across a variety of tasks, so without fine-tuning they often deliver good, but not outstanding, results. Fine-tuning usually happens along two dimensions. First, the model must be adapted to linguistic peculiarities and subtleties, which can be a unique vocabulary, slang, or dialect; there is a big difference between social media posts and instructions for production processes. The second dimension is the actual task at hand: to achieve outstanding performance, models must be tailored to the business problem, and a model that translates differs significantly from a model that classifies emotions. For this fine-tuning, texts are needed that match the target language and the target problem. These texts must be prepared and fed into the model, which, depending on the complexity and quality of the data, can still be a laborious process.

Computing power

The fact that computers keep getting better and computing power keeps getting cheaper is one of the main reasons for the adoption of AI. As already mentioned, pre-trained models make it unnecessary to provide the lion’s share of the computing power oneself: computing power is only needed for data processing and for fine-tuning the models, a fraction of what complete training from scratch would require. Nevertheless, it is usually more than a standard computer can manage in a reasonable time, which is why cloud computing is generally used for fine-tuning. Cloud resources are usually billed by the minute and are therefore very cost-effective. However, training in the cloud differs significantly from training in a standard data center, so the necessary knowledge must either be built up within one’s own organization or purchased from external service providers.

What can we expect from NLP in the Future?

NLP is currently the most actively researched area within artificial intelligence, and more interesting developments can be expected in the coming months and years. Two developments with very interesting practical implications are currently emerging. In the short to medium term, we expect the practical application of so-called zero-shot models. These models are trained for a particular task area, such as sequence classification; the novelty is that they deliver excellent results without ever having seen domain-specific data, developing a kind of “general” competence for the task that makes fine-tuning much easier or entirely unnecessary. The next step to be expected are so-called general-purpose models, which can solve any task on unseen data, eliminating the need for fine-tuning altogether. First experiments with these models show outstanding results, but the models are extremely large and require very high computing power, so deploying them is currently extremely difficult and expensive, and there are still almost no practical applications. We expect significant leaps in practicability and performance in the next few years.

Summary

The latest developments in the field of language processing are as impressive as they are fast. Google fired the starting gun with the publication of the BERT model scarcely two years ago; since then, new models have been published almost weekly by companies and universities worldwide. These models often improve results on existing problems or allow existing resources to be used more efficiently. Problems that were considered unsolvable two years ago are now often very well solvable and affordable in terms of resources and development time, and the time required for a feasibility study has shortened dramatically.

