Docker provides docker:dind, a Docker-in-Docker image that takes care of the headaches of running a Docker instance inside another Docker instance. Per the image's instructions, the command to start an inner instance looks like:
docker run --memory=30g --rm --privileged -d --network dind-network --network-alias docker -e DOCKER_TLS_CERTDIR=/certs -v dind-certs-ca:/certs/ca -v dind-certs-client:/certs/client dindx1
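For this to work, the user-defined network referenced above has to exist before the run (the named certificate volumes are created on demand). A minimal prerequisite sketch, reusing the name from the command:
docker network create dind-network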
Out of curiosity, I wanted to see how deeply I could nest Docker-in-Docker. To avoid re-downloading the image inside each nested instance, I built a template image and carry it through each iteration (MRE below):
docker build -t dindx0 .                           # build the template image
docker run $(RUN_ARGS) --name dindx0 dindx0        # RUN_ARGS expands to the run flags (the dind flags above)
docker commit dindx0 dindx1                        # snapshot the container as the next-level image
docker save dindx1 | gzip > dindx1.tar.gz          # export it for reuse inside the nested daemon
docker kill dindx0
docker cp dindx1.tar.gz dindx1:/project            # copy the archive into the running dindx1 container
docker exec dindx1 docker load -i dindx1.tar.gz    # load it into the nested daemon's image store
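Presumably each iteration then repeats the same sequence one level down by prefixing the commands with docker exec; a rough sketch, assuming dindx1 was started from the dindx1 image with the dind flags shown above (dindx2 is a hypothetical name for the next level):
docker exec dindx1 docker run <same dind flags as above> --name dindx2 dindx1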
This works up to exactly 32 nested instances and then fails with this curious error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380:
starting container process caused: process_linux.go:402: getting the final child's pid
from pipe caused: EOF: unknown.
Any thoughts on how I can push past this limitation?
Related
The docker run docs show an example of how to specify several (but not all) GPUs:
docker run -it --rm --gpus '"device=0,2"' nvidia-smi
I'd like --gpus to use the devices indicated by the environment variable CUDA_VISIBLE_DEVICES.
I tried the obvious:
docker run --rm -it --env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES --gpus '"device=$CUDA_VISIBLE_DEVICES"' some_repo:some_tag /bin/bash
But this gives the error:
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: $CUDA_VISIBLE_DEVICES: unknown device: unknown.
Note: currently CUDA_VISIBLE_DEVICES=0,1
I saw a GitHub issue about this, but the solution is a bit messy and didn't work for me.
What is a good way to use CUDA_VISIBLE_DEVICES to set the --gpus argument of docker run?
The single quotes in '"device=$CUDA_VISIBLE_DEVICES"' prevent the expansion of the variable. Try without the single quotes.
docker run --rm -it \
--env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES \
--gpus device=$CUDA_VISIBLE_DEVICES some_repo:some_tag /bin/bash
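If the value needs to keep the literal inner double quotes that the docs show (the --gpus flag treats an unquoted comma as a separator), you can still let the shell expand the variable by escaping them inside double quotes; a sketch, assuming CUDA_VISIBLE_DEVICES=0,1:
docker run --rm -it \
--env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES \
--gpus "\"device=$CUDA_VISIBLE_DEVICES\"" some_repo:some_tag /bin/bash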
I want to use Docker to manage multiple Python versions (I recently got a Mac with Apple Silicon and I need an older Python environment).
Since I need to run Python scripts in Docker and save the output files (for later use outside the Docker environment), I tried to mount a folder (on my Mac) following this post.
However, it shows this error:
$ docker run --name dpython -it python-docker -v $(pwd):/tmp /bin/bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-v": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
It works without -v $(pwd):/tmp. I tried specifying different folders such as ~/ and /Users/, but they didn't work either.
You must specify the volume before the image name:
$ docker run --name dpython -it -v $(pwd):/tmp python-docker /bin/bash
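With the mount in place, anything a script writes under /tmp inside the container ends up in the current directory on the Mac. For example (a sketch; script.py and output.csv are hypothetical names, and -w just sets the working directory inside the container):
$ docker run --rm -v $(pwd):/tmp -w /tmp python-docker python script.py
$ ls output.csv   # written by the script inside the container, now visible on the host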
I am trying to attach a directory of static assets to my Docker container after the image has been built. When I do something like this:
docker run -it app /bin/bash
The container runs perfectly fine. However, if I do something like this:
docker run -it app -v "${PWD}/assets:/path/to/empty/directory" /bin/bash
the container exits immediately. This simpler command also reproduces it:
docker run -it node:12.18-alpine3.12 -v "${PWD}/assets:/path/to/empty/directory" /bin/bash
It spits out the version of Node (v12.18.4) I am using and immediately dies. Where am I going wrong? I am using Docker with WSL 2 on Windows 10. Is it due to filesystem incompatibility?
Edit: whoops, it's spitting out the Node version, not the Alpine version.
To debug my issue I tried running a bare-bones alpine container:
docker run -it alpine:3.12 -v "${PWD}/assets:/usr/app" /bin/sh
Which gave a slightly more useful error message:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-v\": executable file not found in $PATH": unknown.
From this I realized that docker was trying to run -v as the container's starting command. Once I changed the order around, things started working.
TL;DR: The -v argument and its corresponding parameter must be placed before the image name in a docker run command, i.e. the following works:
docker run -it -v "${PWD}/assets:/usr/app" alpine:3.12 /bin/sh
but this doesn't:
docker run -it alpine:3.12 -v "${PWD}/assets:/usr/app" /bin/sh
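As for why the Node version was printed: everything after the image name replaces the image's default command, and the node image's entrypoint hands those arguments to node when the first one looks like a flag, so the container effectively ran node -v ... , printed the version, and exited. The same effect in isolation (a sketch):
docker run --rm node:12.18-alpine3.12 -v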
I am getting the error message "Error from daemon: container is not running". Why? I started the container in detached mode, so it should be running. I tried the -it flags for interactivity, but that did not work. I also tried making the container sleep, but that did not work.
sh "docker run -d --name mongocontainer19"
sh "docker exec mongocontainer19 mongo mongodump"
The --name option sets the container name, which is mongocontainer19 in your case. So you didn't put an image name there at all.
The syntax is
$ docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So the command should be something like:
$ docker run -d --name mongocontainer19 MyMongoImage
--name <your_container_name> is just an option to the command, as are -d and -p xx:xx.
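Applied to the pipeline steps from the question (assuming the official mongo image; the exec step is unchanged), a sketch:
sh "docker run -d --name mongocontainer19 mongo"
sh "docker exec mongocontainer19 mongo mongodump"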
I'm trying to mount a volume on a container so that I can access files on the server where I'm running the container. Using the command
docker run -i -t 188b2a20dfbf -v /home/user/shared_files:/data ubuntu /bin/bash
results in the error
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:296: starting container process caused "exec: \"-v\":
executable file not found in $PATH": unknown.
I'm not sure what to do here. Basically, I need to be able to access a script and some data files from the host server.
The docker command line is order sensitive. Everything after the image name is passed as the command to run inside the container. For docker, the first thing after the run subcommand that doesn't match an expected argument is assumed to be the image name:
docker run -i -t 188b2a20dfbf -v /home/user/shared_files:/data ubuntu /bin/bash
That tries to run a -v command inside your image 188b2a20dfbf because -t takes no value.
docker run -i -t -v /home/user/shared_files:/data 188b2a20dfbf /bin/bash
That would run bash in that same image 188b2a20dfbf.
If you wanted to run your command inside ubuntu instead (it's not clear from your example which you were trying to do), then remove the 188b2a20dfbf image name from the command:
docker run -i -t -v /home/user/shared_files:/data ubuntu /bin/bash
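A quick way to confirm how docker parsed a bad command line is to inspect the container it created before the start failed; it still shows up in docker ps -a. A sketch:
docker ps -a -n 1
docker inspect --format '{{.Config.Image}} {{.Config.Cmd}}' <container-id>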
Apparently, line 296 of your .go script is referring to something that can't be found. Check your environment variables to see whether they contain the path to that file, whether the file is included in the volume at all, etc.
Passing 188b2a20dfbf after -t is not right. -t takes no value; it is used to allocate a pseudo-TTY for the container:
$ docker run --help
...
-t, --tty Allocate a pseudo-TTY
Run docker run -i -t -v /home/user/shared_files:/data ubuntu /bin/bash. It works for me:
$ echo "test123" > shared_files
$ docker run -i -t -v $(pwd)/shared_files:/data ubuntu /bin/bash
root@4b426995e373:/# cat /data
test123