On my Windows notebook I am trying to share a folder with my Docker container, but I am getting a weird error message that does not tell me anything.
docker run -p 9999:9999 -it msmint/msmint:latest -v c:/Users/swacker/MINT:/data/
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-v": executable file not found in $PATH: unknown.
The /data folder is present in the Docker image, and I tried different formats for the Windows path, such as C:\Users\swacker\MINT.
It complains that an executable file is not found, but I don't know which one.
Options for docker run need to go before the image name. Anything after the image name is a command for the container and replaces any CMD defined in the image.
So you need to do
docker run -p 9999:9999 -it -v c:\Users\swacker\MINT:/data/ msmint/msmint:latest
I've changed the slashes in your Windows path to backslashes.
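To see why the original command failed, note that everything after the image name becomes the container's command and replaces its CMD. A minimal illustration with the public alpine image (used here only as a stand-in for your msmint image):
docker run alpine:latest echo "these arguments replace CMD"
Here echo "these arguments replace CMD" runs instead of Alpine's default /bin/sh, just as -v c:/Users/swacker/MINT:/data/ was being treated as a command (and failing) instead of being parsed as an option.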
I want to use Docker to manage multiple Python versions (I recently got a Mac with Apple Silicon and I need to use an old Python environment).
Since I need to run Python scripts in Docker and save the output files (for later use outside the Docker environment), I tried to mount a folder (on my Mac) following this post.
However, it shows this error:
$ docker run --name dpython -it python-docker -v $(pwd):/tmp /bin/bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-v": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
It works without -v $(pwd):/tmp. I tried specifying different folders such as ~/ and /Users/, but they didn't work either.
You must specify the volume before the image name:
$ docker run --name dpython -it -v $(pwd):/tmp python-docker /bin/bash
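If you want to double-check that the host directory is actually mounted, you can list /tmp from inside the container. This is only an illustrative sketch: python-docker is your locally built image, and it assumes the image has bash (the official Python base images do) and no custom ENTRYPOINT:
docker run --rm -it -v $(pwd):/tmp python-docker /bin/bash -c "ls /tmp"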
I am trying to attach a directory of static assets to my Docker container after the image has been built. When I do something like this:
docker run -it app /bin/bash
The container runs perfectly fine. However, if I do something like this:
docker run -it app -v "${PWD}/assets:/path/to/empty/directory" /bin/bash
The same problem also reproduces with a stock image:
docker run -it node:12.18-alpine3.12 -v "${PWD}/assets:/path/to/empty/directory" /bin/bash
It spits out the Node version I am using (v12.18.4) and immediately dies. Where am I going wrong? I am using Docker with WSL2 on Windows 10. Is it due to filesystem incompatibility?
Edit: whoops, it's spitting out the Node version and not the Alpine version.
To debug my issue I tried running a bare-bones alpine container:
docker run -it alpine:3.12 -v "${PWD}/assets:/usr/app" /bin/sh
Which gave a slightly more useful error message:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-v\": executable file not found in $PATH": unknown.
From this I realized that Docker was trying to run -v as the container's starting command. Once I changed the order around, things started working.
TL;DR: The -v argument and its corresponding parameter must be placed before the image name in a docker run command, i.e. the following works:
docker run -it -v "${PWD}/assets:/usr/app" alpine:3.12 /bin/sh
but this doesn't:
docker run -it alpine:3.12 -v "${PWD}/assets:/usr/app" /bin/sh
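Once the order is fixed, you can also verify the bind mount from inside the container, for example like this (same paths as above; ls is only an illustrative command):
docker run -it -v "${PWD}/assets:/usr/app" alpine:3.12 /bin/sh -c "ls /usr/app"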
I can't get the -v option to work with Docker. My host is Linux Mint and my image is based on ubuntu:latest.
sudo docker run -it opencv:latest -v /home/rr/Desktop/mytest:/src
It gives the error
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:345: starting container process caused
"exec: \"-v\": executable file not found in $PATH": unknown.
I have tried different things: mounting to a folder that exists in the image and to one that does not, but I get the same error either way.
The usage of docker run is:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So your options, including -v, need to go before the image name. If you put an option after the image name, Docker treats it as the command to run instead of an option. Try:
sudo docker run -it -v /home/rr/Desktop/mytest:/src opencv:latest
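To confirm the mount is visible inside the container, you can list the mounted directory, assuming your opencv image does not define a custom ENTRYPOINT (ls is only an illustrative command here):
sudo docker run -it -v /home/rr/Desktop/mytest:/src opencv:latest ls /src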
I have dockerized an app which has ffmpeg installed in it via libav-tools. The app launches without problems, but a problem occurred when the fluent-ffmpeg npm module tried to execute the ffmpeg command, which was not found. When I wanted to check the version of ffmpeg and the Linux distro set up in the image, I used the command sudo docker exec -it c44f29d30753 "lsb_release -a", but it gave the following error:
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"lsb_release -a\": executable file not found in $PATH": unknown
Then I realized that it gives me the same error for every command that I try to run inside the image or the container.
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"ffmpeg -a\": executable file not found in $PATH": unknown
This is my Dockerfile:
FROM ubuntu:xenial
FROM node
RUN apt-get -y update
RUN apt-get --yes install libav-tools
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
RUN npm run build
ENV NODE_ENV production
EXPOSE 8000
CMD ["npm", "run", "start:prod"]
I would kindly ask for your help. Thank you very much!
This happened to me on Windows. See below for the commands that match your case.
NOTE
You will need to run the commands that match your case below using the correct shell in your container, i.e. /bin/bash or /bin/sh. Using sh instead of bash, or vice versa, will also give you this error, so confirm that you are using the right shell, or just try both shells and see which one works.
For these examples, I will be using sh.
On Windows CMD (not switching to bash):
docker exec -it <container-id> /bin/sh
On Windows CMD (after switching to bash):
docker exec -it <container-id> //bin//sh
or
winpty docker exec -it <container-id> //bin//sh
On Git Bash:
winpty docker exec -it <container-id> //bin//sh
For Windows users, the reason is documented in the ReleaseNotes file of Git and it is well explained here - Bash in Git for Windows: Weirdness... :
The cause is to do with trying to ensure that posix paths end up being
passed to the git utilities properly. For this reason, Git for Windows
includes a modified MSYS layer that affects command arguments.
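If the doubled slashes are not enough, another commonly reported workaround on Git for Windows is to disable the MSYS path conversion for that one command; MSYS_NO_PATHCONV is specific to Git for Windows, so whether it applies depends on your setup:
MSYS_NO_PATHCONV=1 docker exec -it <container-id> /bin/sh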
Linux
docker exec -it <container-id> /bin/sh
docker exec -it <containerId> sh
I had this due to a simple ordering mistake on my end. I called
[WRONG] docker run <image> <arguments> <command>
When I should have used
docker run <arguments> <image> <command>
Same resolution on similar question: https://stackoverflow.com/a/50762266/6278
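As a concrete sketch of the difference (the image, option, and command here are only placeholders):
[WRONG] docker run ubuntu:20.04 -e FOO=bar env
docker run -e FOO=bar ubuntu:20.04 env
The first form makes Docker try to execute -e inside the container, which produces exactly this "executable file not found" error; the second passes -e FOO=bar to docker run as an option and then runs env in the container.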
If #papigee's solution doesn't work, maybe you don't have the permissions.
I tried #papigee's solution, but it doesn't work without sudo.
I did :
sudo docker exec -it <container id or name> /bin/sh
Get rid of your quotes around your command. When you quote it, docker tries to run the full string "lsb_release -a" as a command, which doesn't exist. Instead, you want to run the command lsb_release with an argument -a, and no quotes.
sudo docker exec -it c44f29d30753 lsb_release -a
Note, everything after the container name is the command and arguments to run inside the container, docker will not process any of that as options to the docker command.
For others with this error, the debugging steps I'd recommend:
Verify the order of your arguments. Everything after the container name/id is a command to run. So you don't want docker exec $cid -it /bin/sh because that will try to run the command -it in the $cid container. Instead you want docker exec -it $cid /bin/sh
Look at the command that is failing: everything in the quotes after the exec error (e.g. lsb_release -a in "exec: \"lsb_release -a\"") is the binary Docker is trying to run. Make sure that binary exists in your image, e.g. if you are using alpine or busybox, bash may not exist, but /bin/sh does. Note that the binary is the entire quoted string: it would only work if you could run something like ls "/usr/bin/lsb_release -a" and see a file with the space and -a in the filename.
If you're using Windows with Git Bash and see a long path prefixed to the command being run, that's Git Bash trying to automatically convert /path/to/binary; you can disable that by doubling the first slash, e.g. //bin/sh.
If the command you're running is a script in the container, check the first line of that script, containing the #!/path/to/interpreter. Make sure that interpreter exists in the image, at that path, and that the script is saved with Linux linefeeds (LF, not CR+LF); you don't want a \r showing up in the file when it is read in Linux, because it becomes part of the command Docker is looking to execute.
If you don't have a full path to the binary in the command you're running, check the value of $PATH in the image, and verify the binary exists within one of those directories. E.g. you can docker exec -it $cid /bin/sh and echo $PATH and type some_command to verify some_command is found in your path.
If your command is not an executable, but rather a shell builtin, you'll need to execute it with a shell instead of directly. That can be done with docker exec -it $cid /bin/sh -c "your_shell_builtin"
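To combine a couple of those checks in one quick sanity test (the container id and lsb_release are placeholders for your own values):
docker exec -it $cid /bin/sh -c 'echo $PATH; command -v lsb_release'
command -v prints the full path if the binary is found in $PATH and nothing if it is not.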
I solved this with these commands:
Run the container:
docker run -d <image-name>
List containers:
docker ps -a
Use the container ID:
docker exec -it <container-id> /bin/sh
I was running into this issue and it turned out that I needed to do this:
docker run ${image_name} bash -c "${command}"
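For example (the image name and command are only placeholders, and this assumes the image contains bash):
docker run --rm my-image bash -c "cd /app && ls -la"
Wrapping the command in bash -c is mainly useful when it relies on shell features such as &&, pipes, or variable expansion inside the container.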
You can use another shell to execute the same command:
Error I get when I execute:
[jenkins#localhost jenkins_data]$ docker exec -it mysqldb \bin\bash
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"binsh\": executable file not found in $PATH": unknown
Solution:
When I execute it with the command below, using the bash shell, it works:
[jenkins#localhost jenkins_data]$ docker exec -it mysqldb bash
root#<container-ID>:/#
What I did to solve it was simply:
Run docker ps -a
Check the COMMAND of the container (mine started with /bin/sh)
Run docker-compose exec <name_of_service> /bin/sh (if that is the shell your container's command started with)
This applies when you are using Docker Compose.
I was running a container with docker-compose.
entrypoint:
- ls
worked, but
entrypoint:
- ls tests
did not.
It's because the arguments have to be on separate lines. 🤦‍♂️
entrypoint:
- ls
- tests
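Equivalently, a YAML flow-style list keeps the arguments separate while staying on one line (a sketch using the same ls tests example):
entrypoint: ["ls", "tests"]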
This has happened to me too. My issue was caused by the Docker file system not being mounted correctly, so I configured the Disk Image Location, re-bound the File Sharing mount, and it worked correctly after that.
For reference, I use Docker Desktop in Windows.
In my case I saved the Docker image and then, instead of load-ing it on the other machine, I import-ed it. These are very different operations and led me to an error similar to this one.
You have to run it like below, with the container name or id before the command:
docker exec <container-id> sh -c 'echo "$ENV_NAME"'
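The sh -c wrapper matters because without it the variable is expanded by your host shell before Docker ever runs, rather than inside the container. For comparison (the container name is a placeholder):
docker exec <container-id> sh -c 'echo "$ENV_NAME"'   # expanded inside the container
docker exec <container-id> echo "$ENV_NAME"           # expanded on the host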
I had Windows line endings in a shell script. Change them to LF with dos2unix.
If you got this error when using the docker run command, you may have made a simple syntax error.
Example
Incorrect:
docker run myimage -p 3838:3838
docker: Error response from daemon: failed to create shim: OCI runtime create
failed: container_linux.go:380: starting container process caused:
exec: "-p": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
Correct (options go before image name):
docker run -p 3838:3838 myimage