The docker run command docs show an example of how to specify several (but not all) GPUs:
docker run -it --rm --gpus '"device=0,2"' nvidia-smi
I'd like to set --gpus to use the devices indicated by the environment variable CUDA_VISIBLE_DEVICES.
I tried the obvious
docker run --rm -it --env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES --gpus '"device=$CUDA_VISIBLE_DEVICES"' some_repo:some_tag /bin/bash
But this gives the error:
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: $CUDA_VISIBLE_DEVICES: unknown device: unknown.
Note: currently CUDA_VISIBLE_DEVICES=0,1
I saw a GitHub issue about this, but the solution is a bit messy and didn't work for me.
What is a good way to use CUDA_VISIBLE_DEVICES to set the --gpus argument of the docker run command?
The single quotes in '"device=$CUDA_VISIBLE_DEVICES"' prevent the variable from being expanded. Try it without the single quotes:
docker run --rm -it \
--env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES \
--gpus device=$CUDA_VISIBLE_DEVICES some_repo:some_tag /bin/bash
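If the variable lists more than one device (here CUDA_VISIBLE_DEVICES=0,1), the --gpus value may still need the literal double quotes shown in the docs, since docker parses the value as CSV. A minimal sketch that keeps them while still letting the shell expand the variable:
docker run --rm -it \
--env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES \
--gpus "\"device=$CUDA_VISIBLE_DEVICES\"" some_repo:some_tag /bin/bash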
Docker provides docker:dind, a Docker-in-Docker image that works around the headaches of running a Docker instance inside another Docker instance. Per the instructions, the command to start an inner instance looks like:
docker run --memory=30g --rm --privileged -d --network dind-network --network-alias docker -e DOCKER_TLS_CERTDIR=/certs -v dind-certs-ca:/certs/ca -v dind-certs-client:/certs/client dindx1
Out of curiosity, I wanted to see how far I could recursively create Docker-in-Docker. To avoid re-downloading the image within each nested instance, I built a template image and carry it forward through each iteration (MRE below):
docker build -t dindx0 .
docker run $(RUN_ARGS) --name dindx0 dindx0
docker commit dindx0 dindx1
docker save dindx1 | gzip > dindx1.tar.gz
docker kill dindx0
docker cp dindx1.tar.gz dindx1:/project
docker exec dindx1 docker load -i dindx1.tar.gz
This works up to exactly 32 instances and fails with the curious error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380:
starting container process caused: process_linux.go:402: getting the final child's pid
from pipe caused: EOF: unknown.
Any thoughts on how I can push past this limitation?
I am trying to attach a directory of static assets to my Docker container after the image has been built. When I do something like this:
docker run -it app /bin/bash
The container runs perfectly fine. However, if I do something like this:
docker run -it app -v "${PWD}/assets:/path/to/empty/directory" /bin/bash
This also reproduces it:
docker run -it node:12.18-alpine3.12 -v "${PWD}/assets:/path/to/empty/directory" /bin/bash
It spits out the Node version I am using (v12.18.4) and immediately dies. Where am I going wrong? I am using Docker with WSL 2 on Windows 10. Is it due to filesystem incompatibility?
edit: whoops, it's spitting out the Node version and not the Alpine version
To debug my issue I tried running a bare-bones alpine container:
docker run -it alpine:3.12 -v "${PWD}/assets:/usr/app" /bin/sh
Which gave a slightly more useful error message:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-v\": executable file not found in $PATH": unknown.
From this I realized that docker was trying to run -v as the container's start command. Once I changed the order around, things started working.
TL;DR The -v argument and its corresponding parameter must be placed before the image name when performing a docker run command, i.e. the following works:
docker run -it -v "${PWD}/assets:/usr/app" alpine:3.12 /bin/sh
but this doesn't:
docker run -it alpine:3.12 -v "${PWD}/assets:/usr/app" /bin/sh
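The underlying reason is the general form of docker run (from docker run --help): everything after the image name is treated as the command to run inside the container, so a trailing -v is interpreted as an executable.
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]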
The container is not started even though the following command completes:
docker pull incendonet/centos7-mono-apache
Even when I check with docker ps, it does not exist.
Please tell me the cause.
The container will be started after you run the command below:
docker run -it -d image-name
docker run -it -d incendonet/centos7-mono-apache
The docker pull command just fetches the image from Docker Hub to your server/local machine; to run it, you need docker run.
Once it is running, it will show up in the output of docker ps, and you can use the command below to get into the container's shell:
docker exec -it <container-id> /bin/bash
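Putting it together for this image (the container name centos-mono is just an example):
docker pull incendonet/centos7-mono-apache
docker run -it -d --name centos-mono incendonet/centos7-mono-apache
docker ps
docker exec -it centos-mono /bin/bash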
I have dockerized an app which has ffmpeg installed via libav-tools. The app launches without problems, but the problem occurred when the fluent-ffmpeg npm module tried to execute the ffmpeg command, which was not found. When I wanted to check the ffmpeg version and the Linux distro set up in the image, I used the command sudo docker exec -it c44f29d30753 "lsb_release -a", but it gave the following error: OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"lsb_release -a\": executable file not found in $PATH": unknown
Then I realized that it gives me the same error with every command I try to run inside the container.
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"ffmpeg -a\": executable file not found in $PATH": unknown
This is my Dockerfile:
FROM ubuntu:xenial
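# note: the second FROM below starts a new build stage, so this ubuntu:xenial stage is discarded and everything is built on the node image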
FROM node
RUN apt-get -y update
RUN apt-get --yes install libav-tools
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
RUN npm run build
ENV NODE_ENV production
EXPOSE 8000
CMD ["npm", "run", "start:prod"]
I would kindly ask for your help. Thank you very much!
This happened to me on Windows. Use whichever of the commands below matches your case.
NOTE
You will need to run the commands below using the correct shell for your container, i.e. /bin/bash or /bin/sh. Using sh where only bash exists, or vice versa, will also give you this error. So confirm that you are using the right shell, or just try both shells and see which one works.
For these examples, I will be using sh.
On Windows CMD (not switching to bash):
docker exec -it <container-id> /bin/sh
On Windows CMD (after switching to bash):
docker exec -it <container-id> //bin//sh
or
winpty docker exec -it <container-id> //bin//sh
On Git Bash:
winpty docker exec -it <container-id> //bin//sh
For Windows users, the reason is documented in the ReleaseNotes file of Git and it is well explained here - Bash in Git for Windows: Weirdness... :
The cause is to do with trying to ensure that posix paths end up being
passed to the git utilities properly. For this reason, Git for Windows
includes a modified MSYS layer that affects command arguments.
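If you'd rather not double the slashes, Git for Windows also lets you disable this path conversion for a single command with the MSYS_NO_PATHCONV environment variable:
MSYS_NO_PATHCONV=1 docker exec -it <container-id> /bin/sh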
Linux
docker exec -it <container-id> /bin/sh
docker exec -it <containerId> sh
I had this due to a simple ordering mistake on my end. I called
[WRONG] docker run <image> <arguments> <command>
When I should have used
docker run <arguments> <image> <command>
Same resolution on similar question: https://stackoverflow.com/a/50762266/6278
If @papigee's solution doesn't work, maybe you don't have the permissions.
I tried @papigee's solution, but it doesn't work without sudo.
I did:
sudo docker exec -it <container id or name> /bin/sh
Get rid of the quotes around your command. When you quote it, docker tries to run the full string "lsb_release -a" as a single executable, which doesn't exist. Instead, you want to run the command lsb_release with the argument -a, and no quotes.
sudo docker exec -it c44f29d30753 lsb_release -a
Note: everything after the container name is the command and arguments to run inside the container; docker will not process any of that as options to the docker command.
For others with this error, the debugging steps I'd recommend:
Verify the order of your arguments. Everything after the container name/id is a command to run. So you don't want docker exec $cid -it /bin/sh because that will try to run the command -it in the $cid container. Instead you want docker exec -it $cid /bin/sh
Look at the command that is failing: everything in the quotes after the exec error (e.g. lsb_release -a in "exec: \"lsb_release -a\") is the binary docker is trying to run. Make sure that binary exists in your image. E.g. if you are using alpine or busybox, bash may not exist, but /bin/sh does. And that binary is the full string, e.g. you would be able to run something like ls "/usr/bin/lsb_release -a" and see a file with the space and -a in the filename.
If you're using Windows with Git bash and see a long path prefixed to the command being run, that's Git bash trying to do an automatic conversion of /path/to/binary; you can disable that by doubling the first slash, e.g. //bin/sh.
If the command you're running is a script in the container, check the first line of that script, containing the #!/path/to/interpreter. Make sure that interpreter exists in the image, at that path, and that the script is saved with Linux linefeeds (LF, not CR+LF; you don't want the \r showing in the file when read in Linux, because it becomes part of the command it's looking to execute).
If you don't have a full path to the binary in the command you're running, check the value of $PATH in the image, and verify the binary exists within one of those directories. E.g. you can docker exec -it $cid /bin/sh and echo $PATH and type some_command to verify some_command is found in your path (see the sketch after this list).
If your command is not an executable, but rather a shell builtin, you'll need to execute it with a shell instead of directly. That can be done with docker exec -it $cid /bin/sh -c "your_shell_builtin"
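For example, to combine the last two checks interactively ($cid and some_command are placeholders):
docker exec -it $cid /bin/sh
# then, inside the container:
echo "$PATH"        # directories searched for executables
type some_command   # verify the command resolves to a binary or builtin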
I solved this with these commands:
Run the container:
docker run -d <image-name>
List containers:
docker ps -a
Use the container ID:
docker exec -it <container-id> /bin/sh
I was running into this issue and it turned out that I needed to do this:
docker run ${image_name} bash -c "${command}"
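For example, with hypothetical values (note the quoting, so the command string survives the outer shell intact):
image_name="ubuntu:20.04"
command='ls -la /tmp'
docker run "${image_name}" bash -c "${command}"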
You can use another shell to execute the same command:
The error I get when I execute:
[jenkins@localhost jenkins_data]$ docker exec -it mysqldb \bin\sh
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"binsh\": executable file not found in $PATH": unknown
Solution:
The backslashes in \bin\sh are consumed by the shell, so docker receives binsh. When I execute it with the command below, using a bare bash that is looked up in the container's PATH, it works:
[jenkins@localhost jenkins_data]$ docker exec -it mysqldb bash
root@<container-ID>:/#
What I did to solve this was simply:
Run docker ps -a
Check the command of the container (mine started with /bin/sh)
Run docker-compose exec <name_of_service> /bin/sh (if that is what your container's command started with)
This is for when you are using Docker Compose.
I was running a container via docker-compose.
entrypoint:
- ls
worked, but
entrypoint:
- ls tests
did not.
It's because the arguments have to be on separate lines. 🤦‍♂️
entrypoint:
- ls
- tests
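Equivalently, Compose also accepts the inline list form, which avoids the mistake entirely:
entrypoint: ["ls", "tests"]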
This has happened to me. My issue was caused by not mounting the Docker file system correctly, so I configured the Disk Image Location and re-bound the File sharing mount, and it then worked correctly.
For reference, I use Docker Desktop in Windows.
In my case, I saved the Docker image with docker save and, instead of loading it with docker load on the other machine, I imported it with docker import. The two are very different (load restores a full image including metadata such as the entrypoint; import only brings in a filesystem), and that led me to an error similar to this.
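As a sketch of the two pairs (myimage and the tar paths are placeholders):
# full image, keeps ENTRYPOINT/CMD and layer history:
docker save myimage:latest -o myimage.tar
docker load -i myimage.tar
# filesystem only, metadata is lost:
docker export <container-id> -o rootfs.tar
docker import rootfs.tar myimage:imported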
You have to run it like below (the container must be specified before the command):
docker exec <container-id> sh -c 'echo "$ENV_NAME"'
I had Windows line endings (CRLF) in a shell script. Change them to LF, e.g. with dos2unix.
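For example (script.sh is a placeholder; the sed variant works where dos2unix isn't installed):
dos2unix script.sh
# or, equivalently:
sed -i 's/\r$//' script.sh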
If you got this error when using the docker run command, you may have made a simple syntax error.
Example
Incorrect:
docker run myimage -p 3838:3838
docker: Error response from daemon: failed to create shim: OCI runtime create
failed: container_linux.go:380: starting container process caused:
exec: "-p": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
Correct (options go before image name):
docker run -p 3838:3838 myimage