Slow install / upgrade through Helm (for Kubernetes)

Our application consists of circa 20 modules. Each module contains a (Helm) chart with several deployments, services and jobs. Some of those jobs are defined as Helm pre-install and pre-upgrade hooks. Altogether there are probably about 120 YAML files, which eventually result in about 50 running pods.
During development we are running Docker for Windows version 2.0.0.0-beta-1-win75 with Docker 18.09.0-ce-beta1 and Kubernetes 1.10.3. To simplify management of our Kubernetes YAML files we use Helm 2.11.0. Docker for Windows is configured to use 2 CPU cores (of 4) and 8GB RAM (of 24GB).
When creating the application environment for the first time, it takes more than 20 minutes to become available. This seems far too slow; we are probably making an important mistake somewhere. We have tried to improve the (re)start time, but to no avail. Any help or insights to improve the situation would be greatly appreciated.
A simplified version of our startup script:
#!/bin/bash
# Start some infrastructure (release names shown here are illustrative; Helm 2 requires one per release)
helm upgrade --force --install infrastructure modules/infrastructure/chart
# Start ~20 modules in parallel
helm upgrade --force --install module01 modules/module01/chart &
[...]
helm upgrade --force --install module20 modules/module20/chart &
# Wait for all helm commands started in the background to finish
wait
Executing the same startup script again later to 'restart' the application still takes about 5 minutes. As far as I know, unchanged objects are not modified at all by Kubernetes. Only the circa 40 hooks are run by Helm.
Running a single hook manually with docker run is fast (~3 seconds). Running that same hook through Helm and Kubernetes regularly takes 15 seconds or more.
Some things we have discovered and tried are listed below.
Linux staging environment
Our staging environment consists of Ubuntu with native Docker. Kubernetes is installed through minikube with --vm-driver none.
Contrary to our local development environment, the staging environment retrieves the application code through a (deprecated) gitRepo volume for almost every deployment and job. Understandably, this only seems to worsen the problem. Starting the environment for the first time takes over 25 minutes, restarting it takes about 20 minutes.
We tried replacing the gitRepo volume with a sidecar container that retrieves the application code as a TAR. Although we have not modified the whole application, initial tests indicate this is not particularly faster than the gitRepo volume.
This situation can probably be improved with an alternative type of volume that enables sharing of code between deployments and jobs. We would rather not introduce more complexity, though, so we have not explored this avenue any further.
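For illustration only, a minimal sketch of what such a shared code volume could look like, assuming a storage class that supports ReadWriteMany (e.g. NFS); the claim name and storage class are hypothetical, not taken from the question:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-code
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
Deployments and jobs would then mount this claim (volumes -> persistentVolumeClaim: claimName: app-code) instead of each pod fetching the code through gitRepo or hostPath.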
Docker run time
Executing a single empty alpine container through docker run alpine echo "test" takes roughly 2 seconds. This seems to be overhead of the setup on Windows. That same command takes less than 0.5 seconds on our Linux staging environment.
Docker volume sharing
Most of the containers - including the hooks - share code with the host through a hostPath. The command docker run -v <host path>:<container path> alpine echo "test" takes 3 seconds to run. Using volumes seems to increase runtime by approximately 1 second.
Parallel or sequential
Sequential execution of the commands in the startup script does not improve startup time, nor does it drastically worsen it.
IO bound?
The Windows Task Manager indicates that IO is at 100% when executing the startup script. Our hooks and application code are not IO intensive at all, so the IO load seems to originate from Docker, Kubernetes or Helm. We have tried to find the bottleneck, but were unable to pinpoint the cause.
Reducing IO through ramdisk
To test the premise of being IO bound further, we replaced /var/lib/docker with a ramdisk in our Linux staging environment. Starting the application with this configuration was not significantly faster.
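For reference, a sketch of one way to set up such a test on the staging host (assuming systemd manages Docker and the machine has enough RAM; the size is illustrative):
sudo systemctl stop docker
# Back Docker's storage directory with a ramdisk; everything in it is lost on reboot
sudo mount -t tmpfs -o size=8G tmpfs /var/lib/docker
sudo systemctl start docker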

To compare Kubernetes with Docker, you need to consider that Kubernetes runs more or less the same Docker command in a final step. Before that happens, many other things take place: the authentication and authorization processes, creating objects in etcd, locating suitable nodes for pods and scheduling them, provisioning storage, and many more.
Helm itself also adds overhead to the process, depending on the size of the chart.
I recommend reading One year using Kubernetes in production: Lessons learned. The author explains what they achieved by switching to Kubernetes, as well as the differences in overhead:
Cost calculation
Looking at costs, there are two sides to the story. To run Kubernetes, an etcd cluster is required, as well as a master node. While these are not necessarily expensive components to run, this overhead can be relatively expensive when it comes to very small deployments. For these types of deployments, it’s probably best to use a hosted solution such as Google's Container Service.
For larger deployments, it’s easy to save a lot on server costs. The overhead of running etcd and a master node aren’t significant in these deployments. Kubernetes makes it very easy to run many containers on the same hosts, making maximum use of the available resources. This reduces the number of required servers, which directly saves you money. When running Kubernetes sounds great, but the ops side of running such a cluster seems less attractive, there are a number of hosted services to look at, including Cloud RTI, which is what my team is working on.

Related

Run a Docker service multiple times in parallel to take advantage of a computer with multiple CPU cores

I have a small Python application which is packed into a compose file as an app and a db service.
The job of the app service is to run some spatial computations using data from the db (PostgreSQL) and to write some results in that same database, along with some files on disk. For the latter point, I'm using a bind mount as a volume, so that the files are saved on the host machine.
The problem I'm facing is that, based on a sample dataset, I estimated the time to finish the computations on all the records of the database to roughly 1 year...
I also noticed that the Python scripts of the app are only using one CPU core at a time. This is fine, because I'm not used to parallel programming, and also because I rely on third-party software to run some analysis, and that piece of software is also mono-threaded.
On the other hand, I have access to a multi CPU-cores machine (60x). I noticed that each time I start my compose file, only one CPU core is active.
Hence my naive question: could I take advantage of the dockerization to run the same app service as many times as there are available CPU cores on that machine (or maybe a bit fewer)?
Please note that the db service can only exist once and must be shared by these multiple identical app services.
If yes, how to do that properly and efficiently?
I was thinking of "copy-pasting" the app service 50 times in the compose file, giving it a different name each time, but this is probably awfully ugly(!). There should be better ways of doing that... From the host machine maybe? Any hints are appreciated. I'm not a docker expert.
In short, this is possible by using the --scale option of docker-compose up:
docker-compose up --scale app=50 app
Doc: https://docs.docker.com/compose/reference/up/
This will start 50 instances of the app service.
Of course, if the application accesses a single shared database, it must be designed to run in parallel in order to avoid trouble.
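A minimal sketch of a compose file that scales cleanly (service and image names are illustrative); the scaled service must not set container_name or publish a fixed host port, otherwise the replicas would collide:
version: "3.8"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
  app:
    build: .
    depends_on:
      - db
    # no container_name and no fixed "ports:" mapping, so 50 replicas can coexist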
Versioning information on Ubuntu 18.04 (`5.4.0-81-generic x86_64 GNU/Linux`):
$ docker-compose --version
docker-compose version 1.27.4, build 40524192
$ docker --version
Docker version 20.10.8, build 3967b7d

How to use docker in the development phase of a devops life cycle?

I have a couple of questions related to the usage of Docker in a development phase.
I am going to propose three different scenarios of how I think Docker could be used in a development environment. Let's imagine that we are creating a REST API in Java and Spring Boot. For this I will need a MySQL database.
The first scenario is to have a docker-compose for development with the MySQL container, and a production docker-compose with MySQL and the Java application (jar) in another container. To develop, I launch docker-compose-dev.yml to start only the database. The application is launched and debugged using the IDE, for example IntelliJ IDEA. The IDE recognizes any changes made to the code and relaunches the application to apply them.
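A minimal sketch of such a docker-compose-dev.yml for scenario 1 (image tag, credentials and port are illustrative):
# Only the external dependency is containerized; the Spring Boot app runs from the IDE
version: "3.8"
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: appdb
    ports:
      - "3306:3306"   # exposed so the app running in the IDE can reach it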
The second scenario is to have, for both the development and production environments, a docker-compose with the database and application containers. That way, every time I make a change in the code, I have to rebuild the image so that the changes are loaded into the image and the containers are launched again. This scenario may be the most typical one used for development with Docker, but it seems very slow due to the need to rebuild the image every time there is a change.
The third scenario consists of the mixture of the previous two. Two docker-compose. The development docker-compose contains both containers, but with mechanisms that allow a live reload of the application, mapping volumes and using, for example, Spring Dev Tools. In this way, the containers are launched and, in case of any change in the files, the application container will detect that there is a change and will be relaunched. For production, a docker-compose would be created simply with both containers, but without the functionality of live reload. This would be the ideal scenario, in my opinion, but I think it is very dependent on the technologies used since not all allow live reload.
The questions are as follows.
Which of these scenarios is the most typical when using Docker in the development phase?
Is scenario 1 well raised? That is, dockerize only external services, such as databases, queues, etc. and perform the development and debugging of the application with the IDE without using Docker for it.
The doubts and the scenarios that I raise came up after I raised the problem that scenario 2 has. With each change in the code, having to rebuild the image and start the containers again is a significant waste of time. In short, a question would be: How to avoid this?
Thanks in advance for your time.
NOTE: It may be a question subject to opinion, but it would be nice to know how developers usually deal with these problems.
Disclaimer: this is my own opinion on the subject as asked by Mr. Mars. Even though I did my best to back my answer with actual sources, it's mostly based on my own experience and a bit of common sense
Which of these scenarios is the most typical when using Docker for development?
I have seen all 3 scenarios in several projects, each of them with their advantages and drawbacks. However, I think scenario 3 with a Docker Compose allowing for dynamic code reload is the most advantageous in terms of flexibility and consistency:
Dev and Prod Docker Compose files are close matches, meaning the Dev environment is as close as possible to the Prod environment
You do not have to rebuild the image constantly when developing, but it's easy to do when you need to
Lots of technologies support such a scenario, such as Spring Dev Tools as you mentioned, but also Python Flask, etc.
You can easily leverage Docker Compose extends, a.k.a. its configuration-sharing mechanism (also possible with scenario 2)
Is scenario 1 well raised? That is, dockerize only external services, such as databases, queues, etc. and perform the development and debugging of the application with the IDE without using Docker for it.
Scenario 1 is quite common, but the IDE environment would probably differ from the one in the Docker container (and it would be difficult to keep the versions of every library and dependency in sync between the IDE environment and the Docker environment). It would also probably require going through an intermediate step between Dev and Production to actually test the Docker image built after Dev is working, before going to Production.
In my own experience, doing this is great when you do not want to deal too much with Docker when actually doing dev, and/or when the language or technology you use is not suited to the dynamic reload described in scenario 3. But in the end it only adds drift between your environments and more complexity between the Dev and Prod deployment methods.
having to rebuild the image and start the containers again is a significant waste of time. In short, a question would be: How to avoid this?
Besides the scenarios you describe, you have ways to decently (even drastically) reduce image build time by leveraging the Docker build cache and designing your Dockerfile accordingly. For example, a Python application would typically copy the code as the last (or almost last) step of the build to avoid invalidating the cache, and for a Java app it would be possible to split the code so as to avoid compiling the entire application every time a bit of code changes - that would depend on your actual setup.
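As an illustration of that cache-friendly ordering, a sketch for a hypothetical Python app (file names are illustrative):
FROM python:3.9-slim
WORKDIR /app
# Dependencies change rarely: install them first so this layer stays cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application code changes often: copy it last so only this layer is rebuilt
COPY . .
CMD ["python", "main.py"]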
I personally use a workflow roughly matching scenario 3 such as:
a docker-compose.yml file corresponding to my Production environment
a docker-compose.dev.yml which will override some aspects of my main Docker Compose file, such as mounting code from my machine, adding dev-specific flags to commands, etc. (a sketch of such an override follows this list) - it would be run such as
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
but it would also be possible to use a docker-compose.override.yml, which Docker Compose picks up by default for overrides
in some situations I would have to use other overrides for specific cases such as docker-compose.ci.yml on my CI, but usually the main Docker Compose file is enough to describe my Prod environment (and if that's not the case, docker-compose.prod.yml does the trick)
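A minimal sketch of such a dev override, assuming a single app service (the paths, flag and dev command are hypothetical):
# docker-compose.dev.yml - applied on top of docker-compose.yml
version: "3.8"
services:
  app:
    volumes:
      - ./src:/app/src              # mount local code into the container
    environment:
      - DEBUG=1                     # dev-specific flag
    command: ["npm", "run", "dev"]  # hypothetical command with live reload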
I've seen them all used in different scenarios. There are some gotchas to avoid:
Applications inside of a container shouldn't depend on something running outside of a container on the host. So all of your dependencies should be containerized first.
File permissions with host volumes can be complicated depending on your version of docker. Some of the newer Docker Desktop installs automatically handle uid mappings, but if you develop directly on Linux you'll need to ensure the containers run as the same uid as your host user.
Avoid making changes inside the container if the path isn't mapped into a host volume, since those changes will be lost when the container is recreated.
Looking at each of the options, here's my assessment of each:
Containerizing just the DB: This works well when developers already have a development environment for the language of choice, and there's no risk of external dependencies creeping in, e.g. a developer upgrading their JDK install to a newer version than the image is built with. It follows the idea of containerizing the dependencies first, while also giving developers the familiar IDE integration with their application.
Rebuilding the Image for Every Change: This tends to be the least ideal for developer workflow, but the quickest to implement when you're not familiar with the tooling. I'll give a 4th option that I consider an improvement to this.
Everything in a container, volume mounts, and live reloading: This is the most complicated to implement, and requires the language itself to support things like live reloading. However, when they do, it is nearly seamless for the developers and gets them up to speed on a new project quickly without needing to install any other tooling to get started.
Rebuild the app in the container with volume mounts: This is a halfway point between 2 and 3. When you don't have live reloading, you likely need to recompile or restart the interpreter to see any change. Rather than rebuilding the image, I put the recompile step in the entrypoint of a development image. I'll mount the code into the container, and run a full JDK instead of just a JRE (or whatever compiler is needed). I use named volumes for any dependency caches so they don't need to download on every restart. Then the method to see the changes is to restart that one container. The steps are identical to a compiled binary outside of a container, stop the old service, recompile, and restart the service, but now it happens inside of a container that should have the same tools used when building the production image.
For option 4, I tend to use a multi-stage build that has stages for build, develop, and release. The build stage pulls in the code and compiles it, the develop stage is the same base image as build but with an entrypoint that does the compile/run, and the release stage copies the result of the build stage into a minimal runtime. Developers then have a compose file for development that creates the development image and runs that with volume mounts and any debugging ports opened.
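A rough sketch of that build/develop/release layout, for a hypothetical Java app built with Maven (image tags, paths and commands are illustrative, not taken from the answer):
# build stage: compile the application
FROM maven:3.8-openjdk-11 AS build
WORKDIR /src
COPY pom.xml .
RUN mvn -q dependency:go-offline        # cache dependencies in their own layer
COPY src ./src
RUN mvn -q package -DskipTests

# develop stage: same toolchain, but recompile and run on container start
FROM build AS develop
ENTRYPOINT ["sh", "-c", "mvn -q package -DskipTests && java -jar target/*.jar"]

# release stage: minimal runtime with only the built artifact
FROM openjdk:11-jre-slim AS release
COPY --from=build /src/target/*.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
The dev compose file would then target the develop stage and mount the source tree over /src, while production builds use the release stage.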
First of all, docker-compose is meant for the development and testing phases, not for production. An example:
With a minimal and basic docker-compose, all your containers will run on the same machine. For development purposes that is OK, but in production, putting all the apps on just one machine is a risk.
Official link https://docs.docker.com/compose/production/
We will assume
01 java api
01 mysql database
01 web application that needs the api
all of these applications are already in production
Quick Answer
If you need to fix or add a new feature to the java api, I advise you to use an IDE like Eclipse or IntelliJ IDEA. Why?
Because Java needs compilation.
Compiling inside a Docker container will take more time due to Maven dependencies
An IDE has code auto-completion
etc.
In this development phase, Docker helps you with one of its most powerful features: "Bring the production containers to your localhost". Yeah, in this case, docker-compose.yml is the best option because with one file you can start everything you need: the mysql database and the web app, but not your java api. Open your java api with your favorite IDE.
Anyway, if you want to use Docker to "develop", you just need the Dockerfile and to perform a docker build ... when you need to run your source code on your localhost.
Basic Devops Life cycle with docker
The developer pushes source code changes using git
Your continuous integration (C.I.) platform detects this change and performs
docker build ... (in this step, unit tests are triggered)
docker push to your private hub. The image is uploaded in this step and will be used for deployments on other servers.
docker run or a container deploy to the next environment: testing
Human testers, Selenium or other automation start their work
If no errors are detected, your C.I. performs a final deploy of the uploaded image to your production environment. No docker build is required, just deploy or docker run (a condensed sketch of these steps follows this list).
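A condensed sketch of those C.I. steps as shell commands (the registry name and tag are illustrative):
# build the image (unit tests run during or just before the build)
docker build -t registry.example.com/my-api:1.2.3 .
# upload the image to the private registry
docker push registry.example.com/my-api:1.2.3
# deploy to the testing environment by running the pushed image
docker run -d --name my-api registry.example.com/my-api:1.2.3
# promoting to production later needs no rebuild - just run the same image there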
Some Tips
Docker features are awesome but sometimes add too much complexity. So stop using volumes, hard disk dependencies, logs, or complex configurations. If you use volumes, what will happen when your containers are on different hosts?
Java and Node.js are stable platforms, and your REST API or web apps do not need crazy configurations: just a Maven compilation and java -jar ..., or npm install and npm run start.
For logs you could use https://www.graylog.org/, Google Stackdriver or another log management service.
And like Heroku, stop depending on the local hard disk as much as possible. On the Heroku platform disks are disposable, meaning they disappear when the app is restarted. So instead of local file storage, you could use another file storage service with a lot of functionality.
With these approaches, your containers can be deployed anywhere in a simple way.
I'm using something similar to your 3rd scenario for my web dev, but it is Node-based. So I have 3 docker-compose files (actually 4; one is a base containing all the common stuff for the others) for the dev, staging and production environments.
The staging docker-compose config is similar to the production config, excluding SSL, ports and other things that would prevent using it locally.
I have a separate container for each service (like DB, queue), and for dev I also have additional dev DB and queue containers, mostly for running auto-tests. In the dev environment, all sources are mounted into containers, so you can use the IDE/editor of your choice outside the container and see the changes inside.
I use Supervisor to manage my workers inside a container with workers, and have some commands to restart my workers manually when I need to. Maybe you can have something similar to recompile/restart your Java app. Or, if you have an idea of how to organize detection of source code changes and auto-reloading of your app, that could be the best variant. By the way, you gave me an idea to research something similar for my own case.
For the staging and production environments, my source code is included inside the corresponding container using the production Dockerfile. And I have some commands to restart all the stuff using the environment I need; this typically includes rebuilding containers, but because of the Docker cache it doesn't take much time (about 20 seconds). And taking into account that switching between environments is not a too frequent operation, I feel quite comfortable with this.
Production docker-compose config is used only during deployment because it enables SSL, proper ports and has some additional production stuff.
Update for details on backend app restarting using Supervisor:
This is how I use it in my projects:
A part of my Dockerfile with installing Supervisor:
FROM node:10.15.2-stretch-slim
RUN apt-get update && apt-get install -y \
# Supervisor
supervisor \
...
...
# Configs for services/workers managed by supervisor
COPY some/path/worker-configs/*.conf /etc/supervisor/conf.d/
This is an example of one of Supervisor configs for a worker:
[program:myWorkerName]
command=/usr/local/bin/node /app/workers/my-worker.js
user=root
numprocs=1
stopsignal=INT
autostart=true
autorestart=true
startretries=10
In this example in your case command should run your Java app.
And this is an example of command aliases for conveniently managing Supervisor from outside of the containers. I'm using a Makefile as a universal runner of all commands, but this could be something else.
# Used to run all workers
su-start:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl start all
# Used to stop all workers
su-stop:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl stop all
# Used to restart all workers
su-restart:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl restart all
# Used to check status of all workers
su-status:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl status
As I described above, these Supervisor commands need to be run manually, but I think it is possible to implement another Node-based worker, or some watcher outside of the container with the workers, that would detect file system changes in the sources directory and run these commands automatically. I think something similar could be implemented using Java as well.
On the other hand, it needs to be done carefully to avoid constantly restarting the workers on every little change.

Docker Swarm CPU overload on deploy with Spring Boot containers

I have created a number of Spring Boot applications, which all work like magic in isolation or when started up one after the other manually.
My challenge is that I want to deploy a stack with all the services in a Docker Swarm.
Initially I didn't understand what was going on, as it seemed like all my containers were hanging.
It turns out that starting a single Spring Boot application maxes out my CPU for a good couple of seconds (20s+ to start up).
Now the issue is that Docker Swarm is launching 10 of these containers simultaneously, my load average goes above 80, and the system grinds to a halt. The container HEALTHCHECKs start timing out and eventually Docker restarts them. This is an endless cycle that may or may not stabilize, and if it does stabilize it takes a minimum of 30 minutes. So much for microservices vs big fat Java EE applications :(
Is there any way to convince Docker to roll out the containers one by one? I'm sure this would help a lot.
There is a rolling update parameter - https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/ - but it does not seem applicable to the initial startup deployment.
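For reference, those rolling-update settings live under a service's deploy section in the stack file; a sketch with illustrative values (and, as noted above, they only apply to service updates, not to the initial deploy):
version: "3.8"
services:
  my-spring-service:
    image: registry.example.com/my-spring-app:latest
    deploy:
      replicas: 10
      update_config:
        parallelism: 1   # update one task at a time
        delay: 30s       # pause between tasks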
Your help will be greatly appreciated.
I've also tried systemd (which isn't ideal for distributed microservices). It worked slightly better than Docker, but has the same issue when deploying all the applications at once.
Initially I wanted to try Kubernetes, but I've got enough on my plate and if I can get away with Docker Swarm, that would be awesome.
Thanks!

Best Practices for Cron on Docker

I've been using docker with cron for some time, but I'm not sure my setup is optimal. I have one cron container that runs about 12 different scripts. I can edit the schedule of the scripts, but in order to deploy a new version of the software (some scripts run for about half a day) I have to create a new container to run some of the scripts while others finish.
One option I'm considering is running one container per script (the containers would share everything in the image but the crontab). But this would still make it hard to coordinate updates to multiple containers sharing some of the same code.
The other alternative I'm considering is running cron on the host machine, with each command being a docker run command. Doing this would let me update the image for the next run by using an environment variable in the crontab.
Does anybody have any experience with either of these two solutions? Are there any other solutions that could help?
If you are just running docker standalone (single host) and need to run a bunch of cron jobs without thinking too much about their impact on the host, then making it simple running them on the host works just fine.
It would make sense to run them in docker if you benefit from docker features like limiting memory and cpu usage (so they don't do anything disruptive). If you also use a log driver that writes container logs to some external logging service, so you can easily monitor the jobs, then that's another good reason to do it. The last (but obvious) advantage is that deploying new software using a docker image instead of messing around on the host is often a winner.
It's a lot cleaner to make one single image containing all the code you need. Then you trigger docker run commands from the host's cron daemon and override the command/entrypoint. The container will then die and delete itself after the job is done (you might need to capture the container output to logs on the host depending on what logging driver is configured). Try not to send in config values or parameters you change often so you keep your cron setup as static as possible. It can get messy if a new image also means you have to edit your cron data on the host.
When you use docker run like this you don't have to worry about updating images while jobs are running. Just make sure you tag them with, for example, latest so that the next job will use the new image.
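A sketch of what such an entry looks like in the host's crontab (image name, script path, schedule and log file are illustrative):
# m h dom mon dow  command
# run the nightly job at 02:30; --rm removes the container when it exits
30 2 * * * docker run --rm myorg/batch-jobs:latest /scripts/nightly-report.sh >> /var/log/nightly-report.log 2>&1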
Having 12 containers running in the background with their own cron daemon also wastes some memory, but the worst part is that cron doesn't use the environment variables from the parent process, so if you are injecting config with env vars you'll have to hack around that mess (write them to disk when the container starts and such).
If you worry about jobs running in parallel there are tons of task scheduling services out there you can use, but that might be overkill for a single docker standalone host.

Docker, what is it and what is the purpose

I heard about Docker some days ago and wanted to look into it.
But in fact, I don't know what is the purpose of this "container"?
What is a container?
Can it replace a virtual machine dedicated to development?
What is the purpose, in simple words, of using Docker in companies? The main advantage?
VM: Using virtual machine (VM) software, for example, Ubuntu can be installed inside Windows, and both would run at the same time. It is like building a PC with its core components like CPU, RAM, disks, network cards etc. within an operating system and assembling them to work as if it were a real PC. This way, the virtual PC becomes a "guest" inside an actual PC, which, with its own operating system, is called the host.
Container: It's the same as above, but instead of using an entire operating system, it cuts down the "unnecessary" components of the virtual OS to create a minimal version of it. This led to the creation of LXC (Linux Containers). Containers should therefore be faster and more efficient than VMs.
Docker: A Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the Linux kernel's functionality and uses resource isolation.
Purpose of Docker: Its primary focus is to automate the deployment of applications inside software containers and the automation of operating-system-level virtualization on Linux. It's more lightweight than standard containers and boots up in seconds.
(Notice that there's no Guest OS required in case of Docker)
[ Note, this answer focuses on Linux containers and may not fully apply to other operating systems. ]
What is a container ?
It's an App: A container is a way to run applications that are isolated from each other. Rather than virtualizing the hardware to run multiple operating systems, containers rely on virtualizing the operating system to run multiple applications. This means you can run more containers on the same hardware than VMs because you only have one copy of the OS running, and you do not need to preallocate the memory and CPU cores for each instance of your app. Just like any other app, when a container needs the CPU or Memory, it allocates them, and then frees them up when done, allowing other apps to use those same limited resources later.
They leverage kernel namespaces: Each container by default will receive an environment where the following are namespaced:
Mount: filesystems, / in the container will be different from / on the host.
PID: process id's, pid 1 in the container is your launched application, this pid will be different when viewed from the host.
Network: containers run with their own loopback interface (127.0.0.1) and a private IP by default. Docker uses technologies like Linux bridge networks to connect multiple containers together in their own private lan.
IPC: interprocess communication
UTS: this includes the hostname
User: you can optionally shift all the user id's to be offset from that of the host
Each of these namespaces also prevents a container from seeing things like the filesystem or processes on the host, or in other containers, unless you explicitly remove that isolation.
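For example, that isolation can be selectively removed with flags such as the following (a sketch; use with care):
# share the host's network namespace: the container sees the host's interfaces
docker run --rm --network host alpine ip addr
# share the host's PID namespace: the container sees the host's processes
docker run --rm --pid host alpine ps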
And other linux security tools: Containers also utilize other security features like SELinux, AppArmor, Capabilities, and Seccomp to limit users inside the container, including the root user, from being able to escape the container or negatively impact the host.
Package your apps with their dependencies for portability: Packaging an application into a container involves assembling not only the application itself, but all dependencies needed to run that application, into a portable image. This image is the base filesystem used to create a container. Because we are only isolating the application, this filesystem does not include the kernel and other OS utilities needed to virtualize an entire operating system. Therefore, an image for a container should be significantly smaller than an image for an equivalent virtual machine, making it faster to deploy to nodes across the network. As a result, containers have become a popular option for deploying applications into the cloud and remote data centers.
Can it replace a virtual machine dedicated to development ?
It depends: If your development environment is running Linux, and you either do not need access to hardware devices, or it is acceptable to have direct access to the physical hardware, then you'll find a migration to a Linux container fairly straightforward. The ideal targets for a Docker container are applications like web-based APIs (e.g. a REST app), which you access via the network.
What is the purpose, in simple words, of using Docker in companies ? The main advantage ?
Dev or Ops: Docker is typically brought into an environment in one of two paths. Developers looking for a way to more rapidly develop and locally test their application, and operations looking to run more workload on less hardware than would be possible with virtual machines.
Or Devops: One of the ideal targets is to leverage Docker immediately from the CI/CD deployment tool, compiling the application and immediately building an image that is deployed to development, CI, prod, etc. Containers often reduce the time to move the application from the code check-in until it's available for testing, making developers more efficient. And when designed properly, the same image that was tested and approved by the developers and CI tools can be deployed in production. Since that image includes all the application dependencies, the risk of something breaking in production that worked in development is significantly reduced.
Scalability: One last key benefit of containers that I'll mention is that they are designed with horizontal scalability in mind. When you have stateless apps under heavy load, containers are much easier and faster to scale out due to their smaller image size and reduced overhead. For this reason you see containers being used by many of the larger web-based companies, like Google and Netflix.
The same questions were hitting my head some days ago, and here is what I found after getting into it. Let's understand it in very simple words.
Why would one think about Docker and containers when everything seems fine with the current process of application architecture and development?
Let's take an example: we are developing an application using Node.js, MongoDB, Redis, RabbitMQ etc. services [you can think of any other services].
Now, if we forget about the existence of Docker or other alternatives for containerizing applications, we face the following problems in the application development and shipping process:
Compatibility of the services (Node.js, MongoDB, Redis, RabbitMQ etc.) with the OS (even after finding compatible versions, if something unexpected happens related to versions we need to recheck the compatibility and fix it).
If two system components require a library/dependency with different versions in the application or the OS (this needs a relook every time the application behaves unexpectedly due to a library or dependency version issue).
Most importantly, if a new person joins the team, we find it very difficult to set up the new environment; the person has to follow a large set of instructions and run hundreds of commands to finally set up the environment, and it takes time and effort.
People have to make sure that they are using the right version of the OS and check the compatibility of the services with the OS. And each developer has to follow this each time while setting up.
We also have different environments like dev, test and production. If one developer is comfortable using one OS and another is comfortable with a different OS, we can't guarantee that our application will behave in the same way in these two different situations.
All of this makes our life difficult in the process of developing, testing and shipping applications.
So we need something that handles the compatibility issues and allows us to make changes and modifications to any system component without affecting the other components.
Now we think about Docker, because its purpose is to containerise applications, automate the deployment of applications and ship them very easily.
How Docker solves the above issues:
We can run each service component (Node.js, MongoDB, Redis, RabbitMQ) in a different container with its own dependencies and libraries, on the same OS but in different environments.
We only have to run the Docker configuration once; then all our team's developers can get started with a simple docker run command. We have saved a lot of time and effort here :).
So containers are isolated environments with all dependencies and libraries bundled together with their own process and networking interfaces and mounts.
All containers use the same OS resources, therefore they take less time to boot up and utilise the CPU efficiently with lower hardware costs.
I hope this is helpful.
Why use docker:
Docker makes it really easy to install and run software without worrying about setup or dependencies. It makes it really straightforward to install and run software on any given computer - not just your computer, but on web servers or any cloud-based computing platform as well. For example, when I went to install Redis on my computer using the command below
wget http://download.redis.io/redis-stable.tar.gz
I got an error. Now I could definitely go and troubleshoot that install and then try installing Redis again, and I would kind of get into an endless cycle of troubleshooting while installing and running software.
Now let me show you how easy it is to run Redis if you make use of Docker instead. Just run the command docker run -it redis; this command will download and run Redis without any error.
What docker is:
To understand what Docker is, you have to know about the Docker ecosystem.
Docker Client, Server, Machine, Images, Hub and Compose are all projects, tools and pieces of software that come together to form a platform, an ecosystem around creating and running something called containers. Now, if you run the command docker run redis, something called the Docker CLI reaches out to something called Docker Hub and downloads a single file called an image.
An image is a single file containing all the dependencies and all the configuration required to run a very specific program - for example Redis, which is what the image you just downloaded was supposed to run.
This is a single file that gets stored on your hard drive, and at some point in time you can use this image to create something called a container.
A container is an instance of an image, and you can think of it as being like a running program with its own isolated set of hardware resources: it has its own little space of memory, its own little space of networking technology, and its own little space of hard drive space as well.
Now let's examine what happens when you run the command below:
sudo docker run hello-world
The above command starts up the Docker client, or Docker CLI. The Docker CLI is in charge of taking commands from you, doing a little bit of processing on them, and then communicating the commands over to something called the Docker server. The Docker server is in charge of the heavy lifting when we run the command docker run hello-world.
That command means that we want to start up a new container using the image with the name of hello-world; the hello-world image has a tiny little program inside of it whose sole purpose, or sole job, is to print out the message that you see in the terminal.
Now, when we ran that command and it was issued over to the Docker server, a series of actions very quickly occurred in the background. The Docker server saw that we were trying to start up a new container using an image called hello-world.
The first thing the Docker server did was check whether it already had a local copy - a copy on your personal machine - of the hello-world image, or the hello-world file. So the Docker server looked into something called the image cache.
Now, because you and I just installed Docker on our personal computers, that image cache is currently empty; we have no images that have already been downloaded before.
So, because the image cache was empty, the Docker server decided to reach out to a free service called Docker Hub. Docker Hub is a repository of free public images that you can freely download and run on your personal computer. So the Docker server reached out to Docker Hub, downloaded the hello-world file and stored it on your computer in the image cache, where it can now be re-run at some point in the future very quickly, without having to re-download it from Docker Hub.
After that, the Docker server used it to create an instance of a container, and we know that a container is an instance of an image whose sole purpose is to run one very specific program. So the Docker server essentially took that image file from the image cache, loaded it up into memory, created a container out of it, and then ran a single program inside of it. And that single program's purpose was to print out the message that you see.
What a container is:
A container is a process or a set of processes that have a grouping of resources specifically assigned to it. Any time we think about a container, we have some running process that sends a system call to the kernel; the kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU or whatever else it might need, and a portion of each of these resources is made available to that singular process.
Let me try to provide as simple answers as possible:
But in fact, I don't know what is the purpose of this "container"?
What is a container?
Simply put: a package containing software. More specifically, an application and all its dependencies bundled together. A regular, non-dockerised application environment is hooked directly to the OS, whereas a Docker container is an OS abstraction layer.
And a container differs from an image in that a container is a runtime instance of an image - similar to how objects are runtime instances of classes in case you're familiar with OOP.
Can it replace a virtual machine dedicated to development?
Both VMs and Docker containers are virtualisation techniques, in that they provide abstraction on top of system infrastructure.
A VM runs a full "guest" operating system with virtual access to host resources through a hypervisor. In general, VMs provide an environment with more resources than most applications need. Therefore, containers are a lighter-weight technique. The two solve different problems.
What is the purpose, in simple words, of using Docker in companies?
The main advantage?
Containerisation goes hand-in-hand with microservices. The smaller services that make up the larger application are often tested and run in Docker containers. This makes continuous testing easier.
Also, because Docker containers are read-only, they enforce a key DevOps principle: production services should remain unaltered.
Some general benefits of using them:
Great isolation of services
Great manageability as containers contain everything the app needs
Encapsulation of implementation technology (in the containers)
Efficient resource utilisation (due to lightweight OS virtualisation) in comparison to VMs
Fast deployment
If you don't have any prior experience with Docker this answer will cover the basics needed as a developer.
Docker has become a standard tool for DevOps as it is an effective application to improve operational efficiencies. When you look at why Docker was created and why it is very popular, it is mostly for its ability to reduce the amount of time it takes to set up the environments where applications run and are developed.
Just look at how long it takes to set up an environment where you have React as the frontend and a Node and Express API for the backend, which also needs Mongo. And that's just to start. Then, when your team grows and you have multiple developers working on the same frontend and backend, they need to set up the same resources in their local environment for testing purposes. How can you guarantee every developer will run the same environment resources, let alone the same versions? All of these scenarios play well into Docker's strengths, where its value comes from setting up containers with specific settings, environments and even versions of resources. Simply type a few commands to have Docker set up, install, and run your resources automatically.
Let's briefly go over the main components. A container is basically where your application or a specific resource is located. For example, you could have the Mongo database in one container, the frontend React application in another, and finally your Node Express server in a third container.
Then you have an image, which is what the container is built from. The image contains all the information needed to build the container exactly the same way across any system. It's like a recipe.
Then you have volumes, which hold the data of your containers. So if your applications are in containers, which are static and unchanging, the data that changes is in the volumes.
And finally, the piece that allows all these items to speak to each other is networking. Yes, that sounds simple, but understand that each container in Docker has no idea that the other containers exist. They're fully isolated. So unless we set up networking in Docker, they won't have any idea how to connect to one another.
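A minimal sketch of wiring two containers together on a user-defined network (the names and the MONGO_URL variable are illustrative):
# create a user-defined bridge network
docker network create app-net
# start the database on that network
docker run -d --name mongo --network app-net mongo:5
# the API container can now reach the database by its container name
docker run -d --name api --network app-net -e MONGO_URL=mongodb://mongo:27017 my-node-api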
There are really good answers above which I found really helpful.
Below I have drafted a simpler answer:
Reasons to dockerize my web application?
a. One OS for multiple applications (resources are shared)
b. Resource management (CPU / RAM) is efficient.
c. Serverless implementation made easier - yes, AWS ECS with Fargate, but serverless can also be achieved with Lambda
d. Infra as Code - agreed, but IaC can also be achieved via Terraform
e. The "it works on my machine" issue
Still, the questions below are open when choosing dockerization:
A simple Spring Boot application
a. JAR file with size ~50MB
b. creates a Docker image of ~500MB
c. Can't I simply choose a small EC2 instance for my microservices?
Financial benefits (reducing the individual instance cost)?
a. No need to pay for individual OS subscriptions
b. Is there any monetary benefit like the below implementation?
c. Let's say we select t3.2xlarge (8 cores / 32 GB) and start 4-5 Docker images?

Resources