/bin/sh: 1: /go/src/test.sh: not found - docker

I am trying to build this Dockerfile. The file is copied successfully, but I keep getting the following error:
docker build --no-cache=true -f Dockerfile-Gobase .
Sending build context to Docker daemon 34MB
Step 1/3 : FROM golang:1.11.2 ---> df6ac9d1bf64
Step 2/3 : COPY ./test.sh /go/src/ ---> 38a538f0289d
Step 3/3 : RUN (ls -l /go/src/ && cd /go/src/ && /go/src/test.sh)
---> Running in 089de53d11f0
total 4
-rwxr-xr-x 1 root root 34 Jan 24 03:22 test.sh
/bin/sh: 1: /go/src/test.sh: not found
The command '/bin/sh -c (ls -l /go/src/ && cd /go/src/ &&
/go/src/test.sh)' returned a non-zero code: 127
Here are the files:
Dockerfile-Gobase
FROM golang:1.11.2
COPY ./test.sh /go/src/
RUN (ls -l /go/src/ && cd /go/src/ && /go/src/test.sh)
test.sh
#!/bin/sh
echo "hello world"

You eliminated the first cause by checking that the script exists in the container with an ls. That also rules out Linux file permissions.
The next possible cause is that the interpreter isn't in the container, but the script shows it is #!/bin/sh, and /bin/sh is included with your base image.
What's left that I can think of are Windows line feeds in the file, a missing library somewhere, or perhaps security tools like SELinux/AppArmor with a strict configuration. In this case, it looks like Windows line feeds were the cause. You just need to configure your editor to output Linux-style line feeds. Otherwise Linux looks for /bin/sh\r to run (where \r is the carriage return), and that command does not exist.
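If you want to confirm the line-ending diagnosis before changing your editor settings, a quick check and fix on the host looks roughly like this (a sketch assuming GNU coreutils and sed on the machine you build from; dos2unix works just as well if it is installed):

# show the raw bytes of the shebang line; a \r before the \n means Windows line endings
head -n 1 test.sh | od -c

# strip the carriage returns in place, then rebuild
sed -i 's/\r$//' test.sh
docker build --no-cache=true -f Dockerfile-Gobase .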
This is covered in my DockerCon 18 talk, which includes lots of other tips you may find useful when starting out.

Related

Forked docker image not building

I am trying to fork this docker image so that if anything changes on the original it won't affect me.
I have forked the repo corresponding to that image to my own repo.
I have cloned the repo and am trying to build it:
docker build . -t davcal/gcc-cross-x86_64-elf
I am getting this error:
+ cd /usr/local/src
+ ./build-binutils.sh 2.31.1
/bin/sh: 1: ./build-binutils.sh: not found
The command '/bin/sh -c set -x && cd /usr/local/src && ./build-binutils.sh ${BINUTILS_VERSION} && ./build-gcc.sh ${GCC_VERSION}' returned a non-zero code: 127
What makes no sense to me is that if I use the original image, it builds successfully:
FROM randomdude/gcc-cross-x86_64-elf
...
Maybe Docker Hub stores a pre-built image?
How do I fix this?
Note: I am using Windows. This shouldn't make a difference since the error originates within the container.
Edit
I tried patching the Dockerfile to chmod executable permissions to the sh files in case that was causing problems on Windows. Unfortunately, the exact same error occurs.
RUN set -x \
&& chmod +x /usr/local/src/build-binutils.sh \
&& chmod +x /usr/local/src/build-gcc.sh \
&& cd /usr/local/src \
&& ./build-binutils.sh ${BINUTILS_VERSION} \
&& ./build-gcc.sh ${GCC_VERSION}
Edit 2
Following this method, I inspected the container to see if the sh files actually exist. Here is the output.
I ran docker run --rm -it c53693f11514 bash, using the hash of the intermediate image from the previous successful step of the Dockerfile.
This is the output showing that the files do exist:
root@9b8a64ac2090:/# cd usr/local/src
root@9b8a64ac2090:/usr/local/src# ls
binutils-2.31.1 build-binutils.sh build-gcc.sh gcc-8.2.0
From the described symptoms (the file exists, is a shell script, and works on other machines), the "file not found" error is most likely from Windows line feeds being added to the file. When the Linux kernel processes a shell script, it looks at the first line, the #!/bin/sh or similar, and then finds that interpreter to run the shell script. If that interpreter isn't found, you'll get a "file not found" error.
In this case, the file it's looking for won't be /bin/sh, but instead /bin/sh\r or /bin/sh^M, depending on how you want to represent the carriage return character. You can fix that for single files with a tool like dos2unix, but in general you'll want to fix git itself, since there are likely other files that have had their line feeds corrupted. For details on adjusting the behavior of git, see this post.
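As a sketch of the repository-level fix (assuming you want shell scripts checked out with LF endings on every platform; the attribute pattern and commit message are illustrative), a .gitattributes entry plus a re-normalized checkout might look like this:

# .gitattributes - force LF endings for shell scripts regardless of platform
*.sh text eol=lf

# re-normalize the working tree after adding the attribute, then rebuild the image
git add --renormalize .
git commit -m "Normalize line endings for shell scripts"
docker build . -t davcal/gcc-cross-x86_64-elf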

RUN command throws "not found"

I have this Dockerfile:
FROM ubuntu:18.04
COPY mylib/src /usr/src
WORKDIR /usr/src
RUN chmod +x configure.sh
RUN ls -l # it displays all files, including configure.sh
RUN ./configure.sh # error here
Echo:
RUN ls -l
---> Running in d9ba6b10ed2a
total 604
...
-rwxr-xr-x 1 root root 91 Oct 28 07:30 configure.sh
...
RUN ./configure.sh
---> Running in 2e3e8fdca28e
/bin/sh: 1: ./configure.sh: not found
The command '/bin/sh -c ./configure.sh' returned a non-zero code: 127
The file configure.sh exists, but an error occurs: not found.
I have this problem only on my Windows PC.
Okaaaay... the problem was the Windows-style line separators. I changed CRLF to LF in my configure.sh and it works!
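If you would rather guard against this at build time than rely on every editor being configured correctly, one option (a sketch assuming a Debian/Ubuntu base image where dos2unix is available via apt; the package-install line is not part of the original Dockerfile) is to normalize the script inside the image:

FROM ubuntu:18.04
COPY mylib/src /usr/src
WORKDIR /usr/src
# install dos2unix, strip any CRLF line endings, then run the script
RUN apt-get update && apt-get install -y dos2unix \
 && dos2unix configure.sh \
 && chmod +x configure.sh \
 && ./configure.sh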

Can see a file in Docker container, but cannot access it

I'm new to Docker and ran into the following problem:
In my Dockerfile I have these lines:
ADD dir/archive.tgz /dir/
RUN tar -xzf /dir/archive2.tar.gz -C /dir/
RUN ls -l /dir/
RUN ls -l /dir/dir1/
The first ls prints out files correctly and I can see that dir1 was created inside dir by the archive, with permissions drwxr-xr-x. But the second ls gives me:
ls: "cannot access /dir/dir1/: No such file or directory"
I thought that if the Docker can see a file, it can access it. Do I need to do some special magic here?
I thought that if Docker can see a file, it can access it.
In a way you are right, but you are also missing a piece of info. Those RUN commands are not necessarily all re-executed, since Docker operates in layers: your third RUN command can run against a cached layer while your first one is skipped. To preserve proper execution order you need to put them in the same RUN command, so they end up in the same layer (and are updated together):
RUN tar -xzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir1/
This is common issue, most often when this is put in Dockerfile:
RUN apt-get update
RUN apt-get install some-package
Instead of this:
RUN apt-get update && \
apt-get install some-package
Note: This is in line with best practices for usage of the RUN command in a Dockerfile, documented here: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run and avoids possible confusion with caches/layers.
To recreate your problem, here is a small test resembling your setup; depending on the actual directory structure in your archive this may differ.
A dummy archive2.tar.gz with dir/dir1/somefile.txt is created:
mkdir -p ~/test-sowf/dir/dir1 && cd ~/test-sowf && echo "Yay" | tee --append dir/dir1/somefile.txt && tar cvzf archive2.tar.gz dir && rm -rf dir
A Dockerfile is created in ~/test-sowf with the following content:
from ubuntu:latest
COPY archive2.tar.gz /dir/
RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir/dir1/
Build command like so:
docker build -t test-sowf .
Gives following result:
Sending build context to Docker daemon 5.632kB
Step 1/3 : from ubuntu:latest
---> 452a96d81c30
Step 2/3 : COPY archive2.tar.gz /dir/
---> Using cache
---> 852ef4f706d3
Step 3/3 : RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && ls -l /dir/ && ls -l /dir/dir/dir1/
---> Running in b2ab281190a2
dir/
dir/dir1/
dir/dir1/somefile.txt
total 8
-rw-r--r-- 1 root root 177 May 10 15:43 archive2.tar.gz
drwxr-xr-x 3 1000 1000 4096 May 10 15:43 dir
total 4
-rw-r--r-- 1 1000 1000 4 May 10 15:43 somefile.txt
Removing intermediate container b2ab281190a2
---> 05b7dfe52e36
Successfully built 05b7dfe52e36
Successfully tagged test-sowf:latest
Note that the extracted files are owned by 1000:1000, as opposed to root:root for the archive itself, so unless you are running as some other (non-root) user you should not have ownership problems, but depending on your archive you might run into path problems (/dir/dir/dir1 as shown here).
Test that the file is correct and contains 'Yay' inside:
docker run --rm --name test-sowf test-sowf:latest cat /dir/dir/dir1/somefile.txt
Clean up the test mess afterwards (deliberately not using rm -rf but cleaning individual files):
docker rmi test-sowf && cd && rm ~/test-sowf/archive2.tar.gz && rm ~/test-sowf/Dockerfile && rmdir ~/test-sowf
For those using docker-compose:
Sometimes when you volume mount a folder/file from one container to another before it exists, it can have weird permissions after it's created
For example if one container is certbot and another is your webserver, certbot will take time to generate the /etc/letsencrypt folder and its contents
From the webserver you might be able to see the folder or its contents with an ls, but not open them. You can see the behavior with a cat * and you'll get back
cat: <files in question>: No such file or directory
One solution is generating the folder at build time with a RUN mkdir -p /directory/of/choice in the Dockerfile of the container that generates the folder/files. Then the folder will exist and Docker will happily mount it to your other container or host machine the way you want it to.
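A minimal sketch of that idea, following the certbot/webserver example above (the volume name, image tags, and compose layout here are illustrative, not taken from the question):

# Dockerfile for the service that produces the files
FROM certbot/certbot
# create the mount point at build time so the shared volume starts out with sane ownership and permissions
RUN mkdir -p /etc/letsencrypt

# docker-compose.yml fragment sharing that folder with the webserver via a named volume
services:
  certbot:
    build: .
    volumes:
      - letsencrypt:/etc/letsencrypt
  webserver:
    image: nginx:alpine
    volumes:
      - letsencrypt:/etc/letsencrypt:ro
volumes:
  letsencrypt: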

Why won't my docker-entrypoint.sh execute?

My ENTRYPOINT script doesn't execute and throws standard_init_linux.go:175: exec user process caused "no such file or directory". Why so?
Doesn't Work
$ docker build -t gilani/trollo . && docker run gilani/trollo
Sending build context to Docker daemon 126 kB
Step 1 : FROM vault:latest
---> 1f127f53f8b5
Step 2 : MAINTAINER Amin Shah Gilani <gilani@payload.tech>
---> Using cache
---> 86b885ca1c81
Step 3 : COPY vaultConfig.json /vault/config
---> Using cache
---> 1a2be2fa3acd
Step 4 : COPY ./docker-entrypoint.sh /
---> Using cache
---> 0eb7c1c992f1
Step 5 : RUN chmod +x /docker-entrypoint.sh
---> Running in 251395c4790f
---> 46aa0fbc9637
Removing intermediate container 251395c4790f
Step 6 : ENTRYPOINT /docker-entrypoint.sh
---> Running in 7434f052178f
---> eca040859bfe
Removing intermediate container 7434f052178f
Successfully built eca040859bfe
standard_init_linux.go:175: exec user process caused "no such file or directory"
Dockerfile:
FROM vault:latest
MAINTAINER Amin Shah Gilani <gilani@payload.tech>
COPY vaultConfig.json /vault/config
COPY ./docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
docker-entrypoint.sh:
#!/bin/bash
echo 'Hello World!'
Works
$ docker build -t gilani/trollo . && docker run gilani/trollo
Sending build context to Docker daemon 126 kB
Step 1 : FROM vault:latest
---> 1f127f53f8b5
Step 2 : MAINTAINER Amin Shah Gilani <gilani@payload.tech>
---> Using cache
---> 86b885ca1c81
Step 3 : COPY vaultConfig.json /vault/config
---> Using cache
---> 1a2be2fa3acd
Step 4 : ENTRYPOINT echo 'hello world'
---> Using cache
---> ef5792a1f252
Successfully built ef5792a1f252
'hello world'
Dockerfile:
FROM vault:latest
MAINTAINER Amin Shah Gilani <gilani@payload.tech>
COPY vaultConfig.json /vault/config
ENTRYPOINT ["echo", "'hello world'"]
I was tearing my hair out with an issue very similar to this. In my case /bin/bash DID exist. But actually the problem was Windows line endings.
In my case the git repository had an entry point script with Unix line endings (\n). But when the repository was checked out on a windows machine, git decided to try and be clever and replace the line endings in the files with windows line endings (\r\n).
This meant that the shebang didn't work because instead of looking for /bin/bash, it was looking for /bin/bash\r.
The solution for me was to disable git's automatic conversion:
git config --global core.autocrlf input
Then check out the repository again and rebuild.
Some more helpful info here:
How to change line-ending settings
and here
http://willi.am/blog/2016/08/11/docker-for-windows-dealing-with-windows-line-endings/
The vault:latest image does not contain /bin/bash, which you try to call with your shebang #!/bin/bash. You should either change that to #!/bin/sh or remove the shebang from your script entirely.
Another possibility:
Check that the file is not saved with Windows line endings (CRLF). If it is, save it with Unix line endings (LF) and it will be found.
Without seeing your image, my initial idea is that you don't have /bin/bash in your image. Changing the first line of your docker-entrypoint.sh to:
#!/bin/sh
will likely resolve it.
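If you want to check whether the interpreter is actually present before editing the script, one way (a generic check against the base image from the question, not anything vault-specific) is to override the entrypoint with a shell that the answers above say does exist:

# /bin/sh exists in the image; /bin/bash most likely does not
docker run --rm --entrypoint sh vault:latest -c 'ls -l /bin/bash || echo "no /bin/bash in this image"'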
I struggled for hours because I hadn't seen it explained anywhere that you need to copy the file to a location where the container can access it, preferably somewhere global, like so:
COPY docker-entrypoint.sh /usr/local/bin/
(I had thought it would just be automatically accessible since it's part of the Dockerfile context.)
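Since /usr/local/bin is on the default PATH, the script can then be referenced by name in the ENTRYPOINT, which is the pattern many official images use (a sketch, assuming the script is also made executable; this is not the exact Dockerfile from the question):

COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]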
Gosh I struggled for 2–3 hours!!
Thanks to @Ryan Allen.
In my case it was a CRLF problem. I am working on puppet manifests in Atom for a Jenkins setup.
If you are using Atom or any other IDE on Windows, make sure that when you take your file (especially .sh) to Unix, you convert it to Unix format. It worked like magic once converted.
Here is what I added in my puppet file:
exec { 'dos2unix':
  path      => ['/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/opt/puppetlabs/bin'],
  command   => 'dos2unix /dockerwork/puppet/jenkins/files/*',
  subscribe => File['/dockerwork/puppet/jenkins/files/init.sh'],
}
I came here with a similar issue while troubleshooting my attempt to build a Dockerfile "entry point" bash script (entrypoint.sh) to be executed within the .NET Core SDK 2.2 image. The script started with the line #!/bin/bash, and during docker-compose up (after a successful docker-compose build), the logging reported web_1 | ./entrypoint.sh: line 1: #!/bin/bash: No such file or directory.
Checking the file with VS Code, I noticed it was reporting the encoding as:
UTF-8 with BOM
Clicking on this gives the option to save with a different encoding. I chose to save as plain UTF-8, which resolved the issue.
NOTE: I also found this SO article What's the difference between UTF-8 and UTF-8 without BOM?
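If you prefer to confirm the BOM from a shell rather than an editor (a generic check and fix, assuming GNU sed and coreutils on the build machine), the first three bytes of the file give it away:

# a UTF-8 BOM shows up as the bytes ef bb bf at the very start of the file
head -c 3 entrypoint.sh | od -An -tx1

# strip it in place if present (GNU sed)
sed -i '1s/^\xEF\xBB\xBF//' entrypoint.sh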
My case was that the alpine image I was using didn't come with bash at all...
RUN apk add bash did the trick, obviously.
Another reason this error comes up is if your Windows user password changes.
In my case my entrypoint.sh line endings were LF but I was still getting the error. Our admin enforces a mandatory password reset about every month or so, and whenever this happens I run into the error. If this is your situation you may need to reset your credentials in the Docker settings under "Shared Drives".
Unselect the drive and apply, then reselect the drive and apply. It will prompt you for your password.
This problem is to do with line endings, and I solved it with the solution below.
Convert the DOS file to Unix format; this removes any weird line endings.
dos2unix is available in Alpine as well as other Linux distributions.
I used it like so: RUN apk add dos2unix && dos2unix /entry.sh
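In Dockerfile form that might look like the following (a sketch; the /entry.sh path and dos2unix step follow the answer above, while the alpine tag and entrypoint wiring are illustrative):

FROM alpine:3.18
COPY entry.sh /entry.sh
# normalize line endings and make the script executable before using it as the entrypoint
RUN apk add --no-cache dos2unix \
 && dos2unix /entry.sh \
 && chmod +x /entry.sh
ENTRYPOINT ["/entry.sh"]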
Sorry for hijacking the thread -- this is not a response to the question, but a description of a different problem, and its solution, that has the same symptoms.
I had
ENTRYPOINT ["/usr/bin/curl", "-X", "POST", "http://myservice:8000", \
"-H", "Content-Type: application/json", \
"-d", '{"id": "test"}' \
]
I was getting the error:
/bin/bash: [/usr/bin/curl,: No such file or directory
It turns out it's the single quotes that messed it up. Docker documentation has a note:
The exec form is parsed as a JSON array, which means that you must use double-quotes (“) around words not single-quotes (‘).
Solution -- use double quotes instead of single and escape nested double quotes:
ENTRYPOINT ["/usr/bin/curl", "-X", "POST", "http://myservice:8000", \
"-H", "Content-Type: application/json", \
"-d", "{\"id\": \"test\"}" \
]
None of the solutions worked for me, but I was able to solve the error by setting WORKDIR to the same directory that contained the entrypoint script. No amount of cd'ing would work, but somehow WORKDIR solved it.
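As a minimal sketch of that arrangement (the /app path and script name are illustrative, not taken from the answer):

COPY docker-entrypoint.sh /app/
WORKDIR /app
ENTRYPOINT ["./docker-entrypoint.sh"]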

Multi command with docker in a script

With Docker I would like to offer each client a VM to compile and execute a C program contained in a single file.
For that, I share a folder between the container and the host thanks to a Dockerfile and the "ADD" command.
My folder is like that:
folder/id_user/script.sh
folder/id_user/code.c
In script.sh:
gcc ./compil/code.c -o ./compil/code && ./compil/code
My problem is that in the docs we can read this for ADD:
All new files and directories are created with mode 0755, uid and gid 0.
But when I run "ls" on the folder I get:
ls -l compil/8f41dacd-8775-483e-8093-09a8712e82b1/
total 8
-rw-r--r-- 1 1000 1000 51 Feb 11 10:52 code.c
-rw-r--r-- 1 1000 1000 54 Feb 11 10:52 script.sh
So I can't execute the script.sh. Do you know why?
Maybe you wonder why I proceed like this.
It's because if I do:
sudo docker run ubuntu/C pwd && pwd
result:
/
/srv/website
So we can see that the first command runs in the VM but the second does not. I understand this might be normal for Docker.
If you have any suggestions I'd be pleased to hear them.
Thanks!
You can set the correct mode with a RUN command using chmod:
# Dockerfile
...
ADD script.sh /root/script.sh
RUN chmod +x /root/script.sh
...
For the second question, you should use the CMD command - the && approach does work in a Dockerfile. Try putting this line at the end of your Dockerfile:
CMD pwd && pwd
then docker build . and you will see:
root@test:/home/test/# docker run <image>
/
/
Either that, or you can do:
RUN /bin/sh /root/script.sh
to achieve the same result
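Putting the two fixes together, the relevant part of the Dockerfile might look like this (a sketch; the /root/script.sh path follows the answer, not the questioner's folder/id_user layout):

ADD script.sh /root/script.sh
# ensure the script is executable inside the image
RUN chmod +x /root/script.sh
# run both commands inside the container at runtime
CMD /root/script.sh && pwd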
