
How to Provide Machine Learning Models With the Help Of Docker Containers

  • Expert Thomas Alcock
  • Date 01. October 2020
  • Topic Data Engineering, Data Science, Machine Learning
  • Format Blog
  • Category Technology

Introduction

Artificial intelligence (AI) is no longer a vision of the future for German companies. According to a Deloitte survey of around 2,700 AI experts from nine countries, over 90 percent of respondents say that their company uses or plans to use technologies from at least one of the areas of Machine Learning (ML), Deep Learning, Natural Language Processing (NLP), and Computer Vision. This high percentage cannot be explained solely by the fact that companies have recognized the potential of AI. There are also significantly more standardized solutions available for using these technologies, a development that has steadily lowered the barrier to entry in recent years.

For example, the three major cloud providers – Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure – offer standardized solutions for certain problems (e.g., object recognition in images, translation of texts, and automated machine learning). So far, however, not all problems can be solved with such standardized applications, and there can be various reasons for this. The most common is that the available standard solutions do not fit the problem at hand. In the field of NLP, for example, the classification of entire texts is often available as a standard solution. If classification is needed at the word level rather than the text level, other models are required, and these are not always available as part of standard solutions. Even when they are, the possible categories are usually predefined and cannot be adapted further: a service built to classify words into the categories place, person, and time cannot be used to classify words into the categories customer, product, and price. Many companies therefore continue to develop their own ML models. Since model development often takes place on local computers, it must be ensured that a model is not only available to its developer. Once a model has been developed, a significant challenge is to make it available to different users, since only then does the model add value for the company.

ML & AI projects in the company have their own challenges in both development and deployment. While development often fails due to the lack of suitable data availability, deployment can fail because a model is not compatible with the production environment. For example, machine learning models are mostly developed with open source languages or new ML frameworks (e.g., Dataiku or H2O), while an operational production environment often works with proprietary software that has been tested and proven over many years. The close integration of these two worlds often presents both components with significant challenges. Therefore, it is essential to link the development of ML models with the work of IT Operations. This process is called MLOps because data scientists work together with IT to make models productively usable.

MLOps is an ML development culture and practice whose goal is to link the development of ML systems (Dev) with their operation (Ops). In practice, MLOps means focusing on automation and monitoring across all steps of building an ML system, such as integration, testing, releasing, deployment, and infrastructure management. The code of a model is only one component among many, as illustrated in Figure 1. The figure shows the other steps of the MLOps process alongside the ML code and makes clear that the ML code itself is a relatively small part of the overall process.

Figure 1: Important components of the MLOps process

Further aspects of MLOps include, for example, the continuous provision and quality checking of data, as well as the testing and, if necessary, debugging of models. Docker containers have emerged as a core technology for deploying custom ML models and are therefore the focus of this article.

Why Docker Containers?

The challenge in providing ML models is that a model is written in a specific version of a programming language. This language is usually not available in the production environment and therefore has to be installed first. In addition, the model has its own libraries, runtimes, and other technical dependencies, which must also be installed in the production environment. Docker solves this problem via so-called containers, in which applications, including all of their components, can be packaged in isolation and made available as separate services. These containers contain everything the application or ML model needs to run, including code, libraries, runtimes, and system tools. Containers can therefore be used to deploy your own models and algorithms in any environment without worrying that missing or incompatible libraries will lead to errors.
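To make this concrete, the following is a minimal sketch of what such a container definition could look like for a Python-based model. The file names (`app.py`, `requirements.txt`, `model.joblib`), the base image, and the port are illustrative assumptions, not taken from a specific project:

```dockerfile
# Sketch of a Dockerfile that packages a Python ML model as a web service.
# Base image, file names, and port are assumptions for illustration.
FROM python:3.8-slim

WORKDIR /app

# Install the model's library dependencies inside the container
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the serialized model and the serving code into the image
COPY model.joblib app.py ./

# Start the web service that exposes the model
EXPOSE 5000
CMD ["python", "app.py"]
```

Everything the model needs is declared in this one file, so the resulting image runs identically on any machine with Docker installed.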

Figure 2: Comparison of Docker Containers and virtual machines

Before Docker’s triumphant rise, virtual machines were long the tool of choice for delivering applications and ML models in isolation. However, Docker has proven to have several advantages over virtual machines, including improved resource utilization, scalability, and faster deployment of new software. These three points are examined in more detail below.

Improved resource utilization

Figure 2 schematically compares how applications run in Docker Containers and in virtual machines. Each virtual machine has its own guest operating system on which its applications run. Virtualizing the guest operating system at the hardware level requires a lot of computing power and memory, so fewer applications can run simultaneously on the same hardware with the same efficiency.

Docker Containers, on the other hand, share the host operating system and do not require a separate guest operating system. Applications in Docker Containers therefore boot faster and use less processing power and memory. This lower resource consumption makes it possible to run several applications in parallel on one server, improving its utilization rate.

Scalability

Containers offer a further advantage when it comes to scaling: if an ML model is used more frequently within the company, the application must be able to handle the additional requests. With Docker, ML models can easily be scaled by starting additional containers running the same application. Kubernetes in particular, an open-source technology for container orchestration and the scalable delivery of web services, is well suited to flexible scaling thanks to its compatibility with Docker. With Kubernetes, web services can be scaled up or down flexibly and automatically based on the current workload.
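As a sketch of what this looks like in practice, a hypothetical deployment named `ml-model` can be scaled manually or automatically with standard kubectl commands; the deployment name and the thresholds are assumptions for illustration:

```bash
# Manually scale the (hypothetical) ml-model deployment to five replicas
kubectl scale deployment ml-model --replicas=5

# Or let Kubernetes scale between 2 and 10 replicas based on CPU load
kubectl autoscale deployment ml-model --cpu-percent=80 --min=2 --max=10
```

The second command creates a horizontal pod autoscaler, so additional containers are started or stopped automatically as the workload changes.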

Deployment of new software

Another advantage is that containers can be moved seamlessly from local development machines to production machines. They are therefore easy to exchange, for example when a new version of the model is to be deployed. The isolation of the code and all of its dependencies in a container also leads to a more stable environment in which the model can be operated. As a result, errors caused by, for example, incorrect versions of individual libraries occur less frequently and can be corrected more effectively.
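In practice, this handover is a matter of a few Docker CLI commands; the registry address and image name below are placeholders for illustration:

```bash
# Build the image locally and push it to a (placeholder) registry
docker build -t registry.example.com/ml-model:1.1 .
docker push registry.example.com/ml-model:1.1

# On the production machine, pull and run the new version
docker pull registry.example.com/ml-model:1.1
docker run -d -p 5000:5000 registry.example.com/ml-model:1.1
```

The production machine only needs Docker itself; none of the model's language runtimes or libraries have to be installed there.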

Within the container, the model is provided as a web service that other users and applications can access via common Internet protocols (e.g., HTTP). In this way, other systems and users can access the model without having to meet any specific technical requirements: they do not need to install the model's libraries or programming language to use it.
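As a minimal sketch of such a web service, the following assumes a scikit-learn model serialized with joblib and served with Flask (this would be the `app.py` referenced in the Dockerfile sketch above); the model file, the feature format, and the port are illustrative assumptions:

```python
# app.py - minimal sketch of an ML model exposed as an HTTP web service.
# Assumes a scikit-learn model stored as model.joblib (illustrative).
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # load the serialized model once at startup

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body such as {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the service is reachable from outside the container
    app.run(host="0.0.0.0", port=5000)
```

A client can then query the model with a plain HTTP request, e.g. `curl -X POST -H "Content-Type: application/json" -d '{"features": [[5.1, 3.5, 1.4, 0.2]]}' http://localhost:5000/predict`, without installing Python or any libraries.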

In addition to Docker, other container technologies such as rkt and Mesos exist, but Docker, with its user-friendly operation and detailed documentation, makes it easy for new developers to get started. Thanks to its large user base, templates exist for many standard applications that can be run in containers with little effort. These free templates also serve as a basis for developing your own applications.

Not least because of these advantages, Docker is now considered best practice in the MLOps process. Partly due to Docker, model development increasingly resembles the software development process: container-based applications are supported by the standard tools for continuous integration and delivery (CI/CD) of web services.

What role do Docker Containers play in the MLOps pipeline?

As already mentioned, MLOps is a complex process for the continuous delivery of ML models; the central components of such a system are illustrated in Figure 1. The MLOps process closely resembles the DevOps process because the development of machine learning systems is also a form of software development. Standard concepts from the DevOps world, such as the continuous integration of new code and the delivery of new software, reappear in the MLOps process, joined by ML-specific components such as continuous model training and model and data validation.

It is considered best practice to embed the development of ML models in an MLOps pipeline. The pipeline covers all steps from the provision and transformation of data, through model training, to the continuous delivery of finished models to production servers. The code for each step in the pipeline is packed into a Docker Container, and the pipeline starts the containers in a defined order. Here, Docker Containers show their strength: because the code is isolated within individual containers, code changes can be incorporated continuously at the appropriate points in the pipeline without replacing the entire pipeline. The cost of maintaining the pipeline therefore remains relatively low.

The major cloud providers (GCP, AWS, and Microsoft Azure) also offer services that allow Docker Containers to be built, deployed, and hosted as web services automatically. To make container scaling as easy and flexible as possible, they additionally offer fully managed Kubernetes products. For the use of ML models in the enterprise, this flexibility means cost savings, as an ML application is simply scaled down when usage drops. Similarly, higher demand can be met by providing additional containers without stopping the container running the model, so users of the application do not experience unnecessary downtime.
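As an illustrative sketch of how such continuous delivery can be wired up, the following CI workflow (here in GitHub Actions syntax, as one of many possible tools) rebuilds and pushes the model image on every change to the main branch; the registry address, image name, and secret names are placeholders:

```yaml
# .github/workflows/build.yml - sketch of a CI step that containerizes the model.
# Registry address, image name, and secret names are placeholders.
name: build-model-container
on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to the container registry
        run: echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com -u "${{ secrets.REGISTRY_USER }}" --password-stdin
      - name: Build and push the model image
        run: |
          docker build -t registry.example.com/ml-model:${{ github.sha }} .
          docker push registry.example.com/ml-model:${{ github.sha }}
```

A deployment step (or a managed cloud service watching the registry) can then roll the new image out to the serving containers without downtime.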

Conclusion

Docker Containers are a core technology for the development of machine learning models and MLOps pipelines. Their advantages are the portability, modularization, and isolation of model code, low maintenance when models are integrated into pipelines, faster deployment of new model versions, and scalability via serverless cloud products for container deployment. At STATWORX, we have recognized the potential of Docker Containers and use them actively. With this knowledge, we support our customers in realizing their machine learning and AI projects. Do you want to use Docker in your MLOps pipeline? Our Academy offers remote training on Data Science with Docker as well as free webinars on MLOps and Docker.
