Docker run command overhead - docker

I'm curious about the amount of overhead (time taken to start running, assuming I've already pulled the image) Docker adds when doing a docker run, as opposed to just writing a script that installs the same things the container would. From my experience, docker run seems to execute almost instantly and be ready to go, but I could imagine more complicated images might have some additional overhead. I'm thinking about using something like YARN to bring up services on the fly with Docker, but was wondering if they might come up quicker without it. Any thoughts on this?
Note: I'm not concerned about performance after the container is up; right now I'm concerned about the time taken to bring up the service.

Docker is pretty quick to start, but there are some things to consider.
The quickest way to test the overhead is using the time executable and running this command:
docker run --rm -it ubuntu echo test
Which gives you something like this:
$ time docker run --rm -it ubuntu echo test
test
real 0m0.936s
user 0m0.161s
sys 0m0.008s
What you can read from this is that the CPU only spent about 0.16 s on the command, but it took a little under a second of real (wall-clock) time, which includes disk I/O, waiting on other processes, and so on.
But in general, do not worry too much about performance if you are using containers; the main reason you want to use them is consistency.
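If you want a slightly more stable number, you can average over several runs; a rough sketch, assuming the ubuntu image is already pulled (divide the total by five for the per-run overhead):
time (for i in 1 2 3 4 5; do docker run --rm ubuntu echo test > /dev/null; done)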

Related

Use nohup to run a long process in docker at a remote server

I used to run a long training process on a remote server with GPU capabilities. Now my work schedule has changed, so I can't keep my computer connected to the network the whole time until the process finishes. I found that nohup is the solution for me, but I don't know how to invoke the process correctly in my situation:
I use ssh to connect to the remote server.
I have to use docker to access the GPU.
Then I start the process inside the container.
If I start the process with nohup inside the container, I can't really leave the container, right? So do I use nohup at each step?
Edit:
I need the terminal output of the process at step 3, because I need that information to carry out the rest of the work. Consider, step 3 is training a neural network. So, the training log tells me the accuracy of different models at different iterations. I use that information to do the testing.
Following @David Maze's suggestion, I did this (a slightly different approach, as I was not all that familiar with Docker):
Logged in to the remote server.
Configured the Dockerfile to set the working directory:
...
WORKDIR /workspace
...
After building the image, run docker with the mount option to mount the local project into the container's workdir. When running docker run, I used nohup. Since I don't need interactive mode, I omitted the -it flags.
nohup docker run --gpus all -v $(pwd)/path-to-project-root:/workspace/ docker-image:tag bash -c "command1; command2" > project.out 2>&1 &
To test this, I logged out from the server and looked at the contents of project.out later. It contained the expected output.
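As a side note (this is a variation, not part of David Maze's suggestion): a detached container keeps running after the SSH session ends, so you could also drop nohup, pass -d, and read the log after logging back in. The container name "training" here is just an arbitrary choice:
docker run -d --gpus all --name training -v $(pwd)/path-to-project-root:/workspace/ docker-image:tag bash -c "command1; command2"
docker logs training > project.out 2>&1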

Is there an easy way to automatically run a script whenever I (re)start a container?

I have built a Docker image, copied a script into the image, and automatically execute it when I run the image, thanks to this Dockerfile command:
ENTRYPOINT ["/path/to/script/my_script.sh"]
(I had to make it executable with chmod in a RUN command to actually make it run)
Now, I'm quite new to Docker, so I'm not sure if what I want to do is even good practice:
My basic idea is that I would rather not always have to create a new container whenever I want to run this script, but to instead find a way to re-execute this script whenever I (re)start the same container.
So, instead of having to type docker run my_image, I'd accomplish the same via docker (re)start container_from_image.
Is there an easy way to do this, and does it even make sense from a resource parsimony perspective?
docker run is fairly cheap, and the typical Docker model is generally that you always start from a "clean slate" and set things up from there. A Docker container doesn't have the same set of pre-start/post-start/... hooks that, for instance, a systemd job does; there is only the ENTRYPOINT/CMD mechanism. The way you have things now is normal.
Also remember that you need to delete and recreate containers for a variety of routine changes, with the most important long-term being that you have to delete a container to change the underlying image (because the installed software or the base Linux distribution has a critical bug you need a fix for). I feel like a workflow built around docker build/run/stop/rm is the "most Dockery" and fits well with the immutable-infrastructure pattern. Repeated docker stop/start as a workflow feels like you're trying to keep this specific container alive, and in most cases that shouldn't matter.
From a technical point of view you can think of the container environment and its filesystem, and the main process inside the container. docker run is actually docker create plus docker start. I've never noticed the "create" half of this taking substantial time, but if you're doing something like starting a JVM or loading a large dataset on startup, the "start" half will be slow whether or not it's coupled with creating a new container.
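If you want to see where the time goes for your own image, you can time the two halves separately; a small sketch, using the hypothetical image name my_image and a throwaway container:
time docker create --name throwaway my_image
time docker start -a throwaway
docker rm throwaway
The -a flag attaches to the container, so the ENTRYPOINT's output is visible while the "start" half runs.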
For the chmod issue you can do something like this:
COPY my_script.sh /path/to/script/my_script.sh
RUN chmod +x /path/to/script/my_script.sh
For the rerun-script issue:
The ENTRYPOINT specifies a command that will always be executed when the container starts.
It can be either
docker run my_image
or
docker start container_from_image
So whenever your container starts, your ENTRYPOINT command will be executed.
You can refer to the Dockerfile documentation for more detail.

Training routine via docker image nvcr.io/nvidia/torch is 41% slower

I train a DNN via the NVIDIA Docker image nvcr.io/nvidia/torch. Everything works fine except that training is far slower (+41%) than when executed directly on my machine. One batch takes around 410 ms instead of 290 ms on bare metal.
My nvidia-docker run command:
nvidia-docker run -it --network=host --ipc=host -v /mnt/data1:/mnt/data1 my-custom-image bash
my-custom-image is based on nvcr.io/nvidia/torch. I only add my training scripts (.lua) and install luajit.
All results are written in /mnt/data1 and not inside the container itself.
Is this normal, or am I doing something wrong? How can I investigate where the wasted time comes from?
Update: I double-checked and nothing is written inside the container during training. All data are written to /mnt/data1.
Update 2: I tried the inference routine inside the container, and it doesn't take more time than the bare-metal setup.

Can Docker Engine start containers in parallel

If I have scripts issuing docker run commands in parallel, the Docker engine appears to handle these commands in series. Since running a minimal container image with docker run takes around 100 ms to start, does this mean issuing commands to run 1000 containers will take the Docker engine 100 ms x 1000 = 100 s, or nearly 2 minutes? Is there some reason why the Docker engine is serial instead of parallel? How do people get around this?
How do people get around this?
a/ They don't start 1000 containers at the same time
b/ if they do, they might use a cluster management system like Docker Swarm to manage the whole process
c/ they do run 1000 containers, but start them in advance in order to absorb the start-up time.
Truly parallelizing docker run commands could be tricky, considering some of those commands might depend on other containers being created/started first (like a docker run --volumes-from=xxx).
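That said, nothing stops a client from issuing independent docker run calls concurrently; a rough sketch (alpine image and throwaway names assumed) that launches containers ten at a time, with a clean-up line afterwards:
seq 1 1000 | xargs -P 10 -I {} docker run -d --name worker-{} alpine sleep 60
docker rm -f $(docker ps -aq --filter "name=worker-")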

Running quick programs inside a docker container

My web application uses graphicsmagick to resize images. Resizing an image usually takes about 500 ms. To simplify the setup and wall it off, I wanted to move the graphicsmagick call inside a Docker container and use docker run to execute it. However, running inside a container adds an additional ~300 ms, which is not really acceptable for my use case.
To reduce the overhead of starting a container, one could run an endless program (something like docker run tail -f /dev/null) and then use docker exec to execute the actual call to graphicsmagick inside the running container. However, this seems like a big hack.
How would one fix this problem "correctly" with docker or is it just not the right fit here?
This sounds like a good candidate for a microservice, a long-lived server (implemented in your language of choice) listening on a port for resizing requests that uses graphicsmagick under the hood.
Pros:
Implementation details of the resizer are kept inside the container.
Scalability - you can spin up more containers and put them behind a load balancer.
Cons:
You have to implement a server
Pro or Con:
You've started down the road to a microservices architecture.
Using docker exec may be possible, but it's really not intended to be used that way. Cut out the middleman and talk to your container directly.
The best ways are to:
create a custom deb/rpm package
use one from the system repos
But if you prefer the Docker approach, the most obvious way is the following (example below):
Start "daemon":
docker run --name ubuntu -d -v /path/to/host/folder:/path/to/guest/folder ubuntu:14.04 sleep infinity
Execute command:
docker exec ubuntu <any needed command>
Where:
"ubuntu" - name of the container
-d - detached mode: run the container in the background
-v - mount a host folder into the container
sleep infinity - do nothing; it just keeps the container's main process alive, which is a nicer way to block than a read operation.
Mount a volume if you are working with files; if not, don't mount one and just use pipes.
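For the original graphicsmagick use case, a single resize through the long-lived container could then look roughly like this (assuming graphicsmagick is installed in the image and the images live in the mounted folder):
docker exec ubuntu gm convert /path/to/guest/folder/in.jpg -resize 800x600 /path/to/guest/folder/out.jpg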
