docker run [9] System error: exec format error - docker

I created Dockerfile to build my image called aii.
FROM docker.io/centos:latest
#Set parameters
ENV BinDir /usr/local/bin
ENV RunFile start-aii.sh
ADD ${RunFile} ${BinDir}
#Some other stuff
...
CMD ${RunFile}
When I run the image with the following command:
docker run -it -v <some-volume-mapping> aii
it works great (the default operation of running the CMD command, start-aii.sh).
Now, if I try to override this default behavior and run the image with the same script explicitly (adding another arg), I get the following error:
docker run -it -v <some-volume-mapping> aii start-aii.sh kafka
exec format error
docker: Error response from daemon: Cannot start container b3f4f3bde04d862eb8bc619ea55b7061ce78ace8f1984a12f6ec681877d7d926: [9] System error: exec format error.
I also tried: only script (without argument)
docker run -it -v <some-volume-mapping> aii start-aii.sh
and full path to script
docker run -it -v <some-volume-mapping> aii /usr/local/bin/start-aii.sh
but the same error appears.
Another info:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2488a4dd7014 aii "start-aii.sh kafka" 3 seconds ago Created tiny_payne
Any suggestions?
Thanks

I had the same issue and fixed it by adding #!/bin/sh as the first line of the script, instead of having other comments there.
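For example, a minimal sketch of start-aii.sh with the interpreter line first (the echo is just a stand-in for the real script body):
#!/bin/sh
# comments and the rest of the script go after the shebang, not before it
echo "starting aii..."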

Try starting bash before your script, and use the --rm flag so the container is removed once the job has ended, like this:
docker run -it --rm -v <some-volume-mapping> aii /bin/bash /usr/local/bin/start-aii.sh

If you created the file start-aii.sh in a Windows editor and then added it to the Docker image, check the file in a Linux editor, e.g. nano. In my case there were non-printable characters at the beginning of the file; I removed them and my script ran successfully.
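If you want to check for and strip such characters from the command line, something like the following should work (a sketch; the BOM-stripping sed syntax assumes GNU sed, and dos2unix may need to be installed):
od -c start-aii.sh | head -n 1              # a UTF-8 BOM shows up as 357 273 277 at the start
sed -i '1s/^\xEF\xBB\xBF//' start-aii.sh    # strip the BOM
dos2unix start-aii.sh                       # also fixes CRLF line endings if present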

Related

Running a container with a Docker bind-mount causes container to return Node version and exit

I am trying to attach a directory of static assets to my docker instance after it has been built. When I do something like this
docker run -it app /bin/bash
The container runs perfectly fine. However, if I do something like this:
docker run -it app -v "${PWD}/assets:/path/to/empty/directory" /bin/bash
This also reproduces it:
docker run -it node:12.18-alpine3.12 -v "${PWD}/assets:/path/to/empty/directory" /bin/bash
It spits out the Node version I am using (v12.18.4) and immediately dies. Where am I going wrong? I am using Docker with WSL 2 on Windows 10. Is it due to filesystem incompatibility?
edit: whoops it's spitting out the node version and not the alpine version
To debug my issue I tried running a bare-bones alpine container:
docker run -it alpine:3.12 -v "${PWD}/assets:/usr/app" /bin/sh
Which gave a slightly more useful error message:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-v\": executable file not found in $PATH": unknown.
From this I realized that Docker was trying to run -v as the starting command. Once I changed the order around, things started working.
TL;DR: The -v flag and its parameter must be placed before the image name in a docker run command, i.e. the following works:
docker run -it -v "${PWD}/assets:/usr/app" alpine:3.12 /bin/sh
but this doesn't:
docker run -it alpine:3.12 -v "${PWD}/assets:/usr/app" /bin/sh
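For reference, the general form shown by docker run --help is:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Everything after the image name is treated as the command to run inside the container, which is presumably why -v ended up being handed to the Node image's entrypoint and the container just printed the version and exited.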

Error response from daemon: Mount denied - error while running a Docker application that was working last night

My docker run suddenly stopped working last night, although it was working before. docker build is working fine, but I get the error below when trying to run the container.
Command
docker run -it --rm -p 9001:4200 -v ${pwd}/src:/app/src angularclient
Error message
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error
response from daemon: Mount denied: The source path
"E:/Karthik/angular/src" doesn't exist and is not known to Docker. See
'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I tried running the following commands in PowerShell:
refreshenv
set MSYS_NO_PATHCONV=1
set COMPOSE_CONVERT_WINDOWS_PATHS=1
try this:
docker run -it --rm -p 9001:4200 -v E:/Karthik/angular/src:/app/src angularclient
It seems that you can't use ${pwd} or ./ in Windows cmd or Git Bash; you can only use absolute paths.
Add this to your ~/.bash_profile:
export MSYS_NO_PATHCONV=1
Add / as a prefix to the path, as below.
docker run -it --rm -p 9001:4200 -v /${pwd}/src:/app/src angularclient
Ensure the drive is shared in Docker settings "Shared Drives".
Create the full path if it doesn't already exist.
Add trailing / to the path.

Update PATH in Centos Docker image (alternative to Dockerfile ENV)

I'm provisioning a Docker CentOS image with Packer and using bash scripts instead of a Dockerfile to configure the image (this seems to be the Packer way). What I can't seem to figure out is how to update the PATH variable so that my custom binaries can be executed like this:
docker run -i -t <container> my_binary
I have tried putting a .sh file in the /etc/profile.d/ folder and also writing directly to /etc/environment, but none of that seems to take effect.
I suspect it has something to do with which shell Docker uses when executing commands in a disposable container. I thought it was the Bourne shell, but as mentioned, neither the /etc/profile.d/ nor the /etc/environment approach worked.
UPDATE:
As I understand now, it is not possible to change environment variables in a running container, for the reasons explained in @tgogos's answer. However, I don't believe this is an issue in my case, since after Packer is done provisioning the image it commits it and uploads it to Docker Hub. A more accurate example would be as follows:
$ docker run -itd --name test centos:6
$ docker exec -it test /bin/bash
[root@006a9c3195b6 /]# echo 'echo SUCCESS' > /root/test.sh
[root@006a9c3195b6 /]# chmod +x /root/test.sh
[root@006a9c3195b6 /]# echo 'export PATH=/root:$PATH' > /etc/profile.d/my_settings.sh
[root@006a9c3195b6 /]# echo 'PATH=/root:$PATH' > /etc/environment
[root@006a9c3195b6 /]# exit
$ docker commit test test-image:1
$ docker exec -it test-image:1 test.sh
Expecting to see SUCCESS printed but getting
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"test.sh\": executable file not found in $PATH": unknown
UPDATE 2
I have updated PATH in ~/.bashrc, which lets me execute the following:
$ docker run -it test-image:1 /bin/bash
[root@8f821c7b9b82 /]# test.sh
SUCCESS
However running docker run -it test-image:1 test.sh still results in
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: ...
I can confirm that my image CMD is set to "/bin/bash". So can someone explain why running docker run -it test-image:1 test.sh doesn't source ~/.bashrc?
A few good points are mentioned at:
How to set an environment variable in a running docker container (also check the link to the relevant github issue).
and Docker - Updating Environment Variables of a Container
where @BMitch mentions:
Destroy your container and start a new one up with the new environment variable using docker run -e .... It's identical to changing an environment variable on a running process, you stop it and restart with a new value passed in.
and in the comments section, he adds:
Docker doesn't provide a way to modify an environment variable in a running container because the OS doesn't provide a way to modify an environment variable in a running process. You need to destroy and recreate.
Update (see the comments section): you can use
docker commit --change "ENV PATH=your_new_path_here" test test-image:1
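For example, a sketch using the container name from the question (the PATH value is written out in full rather than relying on $PATH expansion inside --change):
docker commit --change "ENV PATH=/root:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" test test-image:1
docker run --rm test-image:1 test.sh
Because ENV is baked into the image metadata it applies to every process in the container, regardless of which shell startup files get sourced; the script should also start with a #!/bin/sh line so the runtime can execute it directly.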
/etc/profile is only read by bash when invoked by a login shell.
For more information about which files are read by bash on startup see this article.
EDIT: If you change the last line in your example to
docker exec -it test bash -lc test.sh
it works as you expect, because -l makes bash act as a login shell, so /etc/profile and /etc/profile.d/*.sh are sourced.

Error when trying to create container with mounted volume

I'm trying to mount a volume on a container so that I can access files on the server I'm running the container. Using the command
docker run -i -t 188b2a20dfbf -v /home/user/shared_files:/data ubuntu /bin/bash
results in the error
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:296: starting container process caused "exec: \"-v\":
executable file not found in $PATH": unknown.
I'm not sure what to do here. Basically, I need to be able to access a script and some data files from the host server.
The docker command line is order sensitive. After the image name, everything passed is a command that runs inside the container. For docker, the first thing that doesn't match an expected argument after the run command is assumed to be the image name:
docker run -i -t 188b2a20dfbf -v /home/user/shared_files:/data ubuntu /bin/bash
That tries to run a -v command inside your image 188b2a20dfbf because -t takes no value.
docker run -i -t -v /home/user/shared_files:/data 188b2a20dfbf /bin/bash
That would run bash in that same image 188b2a20dfbf.
If you wanted to run your command inside ubuntu instead (it's not clear from your example which you were trying to do), then remove the 188b2a20dfbf image name from the command:
docker run -i -t -v /home/user/shared_files:/data ubuntu /bin/bash
Apparently, line 296 of that .go file refers to something that can't be found. Check your environment variables to see if they contain the path to that file, whether the file is included in the volume at all, etc.
Passing 188b2a20dfbf right after -t is not right; -t takes no value, it is just used to get a pseudo-TTY terminal for the container:
$ docker run --help
...
-t, --tty Allocate a pseudo-TTY
Run docker run -i -t -v /home/user/shared_files:/data ubuntu /bin/bash. It works for me:
$ echo "test123" > shared_files
$ docker run -i -t -v $(pwd)/shared_files:/data ubuntu /bin/bash
root@4b426995e373:/# cat /data
test123

OCI runtime exec failed: exec failed: (...) executable file not found in $PATH": unknown

I have dockerized an app which has ffmpeg installed via libav-tools. The app launches without problems, yet a problem occurred when the fluent-ffmpeg npm module tried to execute the ffmpeg command, which was not found. When I wanted to check the version of ffmpeg and the Linux distro set up in the image, I used the command sudo docker exec -it c44f29d30753 "lsb_release -a", but it gave the following error: OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"lsb_release -a\": executable file not found in $PATH": unknown
Then I realized that I get the same error with every command I try to run inside the container.
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"ffmpeg -a\": executable file not found in $PATH": unknown
This is my Dockerfile:
FROM ubuntu:xenial
FROM node
RUN apt-get -y update
RUN apt-get --yes install libav-tools
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
RUN npm run build
ENV NODE_ENV production
EXPOSE 8000
CMD ["npm", "run", "start:prod"]
I would kindly ask for your help. Thank you very much!
This happened to me on Windows. See below for the commands that match your case.
NOTE
You will need to run the commands below using the correct shell for your container, i.e. /bin/bash or /bin/sh. Using sh when the container only has bash, or vice versa, will also give you this error, so confirm that you are using the right shell, or just try both and see which one works.
For these examples, I will be using sh
On Windows CMD (not switching to bash):
docker exec -it <container-id> /bin/sh
On Windows CMD (after switching to bash):
docker exec -it <container-id> //bin//sh
or
winpty docker exec -it <container-id> //bin//sh
On Git Bash:
winpty docker exec -it <container-id> //bin//sh
For Windows users, the reason is documented in Git's ReleaseNotes file, and it is well explained in "Bash in Git for Windows: Weirdness...":
The cause is to do with trying to ensure that posix paths end up being
passed to the git utilities properly. For this reason, Git for Windows
includes a modified MSYS layer that affects command arguments.
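If you prefer to stay in Git Bash without doubling the slashes, another option (using the variable mentioned earlier on this page) is to disable the MSYS path conversion for a single command:
MSYS_NO_PATHCONV=1 docker exec -it <container-id> /bin/sh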
Linux
docker exec -it <container-id> /bin/sh
docker exec -it <containerId> sh
I had this due to a simple ordering mistake on my end. I called
[WRONG] docker run <image> <arguments> <command>
When I should have used
docker run <arguments> <image> <command>
Same resolution on similar question: https://stackoverflow.com/a/50762266/6278
If @papigee's solution doesn't work, maybe you don't have the permissions.
I tried @papigee's solution, but it doesn't work without sudo.
I did :
sudo docker exec -it <container id or name> /bin/sh
Get rid of the quotes around your command. When you quote it, Docker tries to run the full string "lsb_release -a" as a single command, which doesn't exist. Instead, you want to run the command lsb_release with the argument -a, and no quotes.
sudo docker exec -it c44f29d30753 lsb_release -a
Note that everything after the container name is the command and arguments to run inside the container; Docker will not process any of that as options to the docker command.
For others hitting this error, these are the debugging steps I'd recommend (a short sanity-check session follows the list):
Verify the order of your arguments. Everything after the container name/id is a command to run. So you don't want docker exec $cid -it /bin/sh because that will try to run the command -it in the $cid container. Instead you want docker exec -it $cid /bin/sh
Look at the command that is failing: everything in the quotes after the exec error (e.g. lsb_release -a in "exec: \"lsb_release -a\"") is the binary Docker is trying to run. Make sure that binary exists in your image. E.g. if you are using alpine or busybox, bash may not exist, but /bin/sh does. And the binary is the full quoted string, i.e. for that command to work you would need a file whose name literally contains the space and -a, which you could check with ls "/usr/bin/lsb_release -a".
If you're using Windows with Git bash and see a long path prefixed on that command trying to be run, that's Git bash trying to do some automatic conversions of /path/to/binary, you can disable that by doubling the first slash, e.g. //bin/sh.
If the command you're running is a script in the container, check the first line of that script, containing the #!/path/to/interpreter. Make sure that interpreter exists in the image at that path, and that the script is saved with Linux line endings (LF, not CR+LF); you don't want a \r in the file when it is read in Linux, because it becomes part of the interpreter path being executed.
If you don't have a full path to the binary in the command you're running, check the value of $PATH in the image, and verify the binary exists within one of those directories. E.g. you can docker exec -it $cid /bin/sh and echo $PATH and type some_command to verify some_command is found in your path.
If your command is not an executable, but rather a shell builtin, you'll need to execute it with a shell instead of directly. That can be done with docker exec -it $cid /bin/sh -c "your_shell_builtin"
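A quick sanity-check session covering several of these steps (the container id, binary name, and script path are placeholders):
docker exec -it <container-id> /bin/sh         # get an interactive shell in the container
echo "$PATH"                                   # list the directories searched for commands
command -v some_binary                         # is the binary actually on PATH?
head -n 1 /usr/local/bin/your-script.sh        # does the shebang point at an interpreter that exists in the image?
od -c /usr/local/bin/your-script.sh | head     # \r before \n means CRLF (Windows) line endings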
I solved this with these commands:
Run the container:
docker run -d <image-name>
List containers:
docker ps -a
Use the container ID:
docker exec -it <container-id> /bin/sh
I was running into this issue and it turned out that I needed to do this:
docker run ${image_name} bash -c "${command}"
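For example (the image and command here are only placeholders):
docker run --rm ubuntu bash -c "ls -la /tmp && echo done"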
You can use another shell to execute the same command:
The error I get when I execute:
[jenkins@localhost jenkins_data]$ docker exec -it mysqldb \bin\sh
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"binsh\": executable file not found in $PATH": unknown
Solution:
When I execute it with the command below, using the bash shell, it works:
[jenkins#localhost jenkins_data]$ docker exec -it mysqldb bash
root@<container-ID>:/#
What I did to solve it was simply:
Run docker ps -a
Check the command of the container (mine started with /bin/sh)
Run docker-compose exec <name_of_service> /bin/sh (if that is what started your container)
This applies when using Docker Compose.
I was running a container in a docker-compose.
entrypoint:
- ls
worked, but
entrypoint:
- ls tests
did not.
It's because the arguments have to be on separate lines. 🤦‍♂️
entrypoint:
- ls
- tests
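For context, a minimal sketch of how that looks in a full docker-compose.yml (the service and image names are placeholders):
services:
  app:
    image: myimage
    entrypoint:
      - ls
      - tests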
This has happened to me. My issue was caused by not mounting the Docker file system correctly, so I configured the Disk Image Location and re-bound the File Sharing mount, and it then worked correctly.
For reference, I use Docker Desktop in Windows.
In my case I saved the docker image with docker save and, instead of loading it on the other machine with docker load, I imported it with docker import. Those are very different operations, and that led me to an error similar to this.
You have to run it like below (the container id/name goes before the command):
docker exec <container-id> sh -c 'echo "$ENV_NAME"'
I had Windows line endings in a shell script. Convert them to LF with dos2unix.
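For example, assuming the script is called entrypoint.sh:
dos2unix entrypoint.sh
then rebuild the image so the corrected file is copied in.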
If you got this error when using the docker run command, you may have made a simple syntax error.
Example
Incorrect:
docker run myimage -p 3838:3838
docker: Error response from daemon: failed to create shim: OCI runtime create
failed: container_linux.go:380: starting container process caused:
exec: "-p": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
Correct (options go before image name):
docker run -p 3838:3838 myimage
