How do I run a Bash script in an Alpine Docker container? - docker

I have a directory containing only two files, Dockerfile and sayhello.sh:
.
├── Dockerfile
└── sayhello.sh
The Dockerfile reads
FROM alpine
COPY sayhello.sh sayhello.sh
CMD ["sayhello.sh"]
and sayhello.sh contains simply
echo hello
The Dockerfile builds successfully:
kurtpeek@Sophiemaries-MacBook-Pro ~/d/s/trybash> docker build --tag trybash .
Sending build context to Docker daemon 3.072 kB
Step 1/3 : FROM alpine
---> 665ffb03bfae
Step 2/3 : COPY sayhello.sh sayhello.sh
---> Using cache
---> fe41f2497715
Step 3/3 : CMD sayhello.sh
---> Using cache
---> dfcc26c78541
Successfully built dfcc26c78541
However, if I try to run it I get an executable file not found in $PATH error:
kurtpeek@Sophiemaries-MacBook-Pro ~/d/s/trybash> docker run trybash
container_linux.go:247: starting container process caused "exec: \"sayhello.sh\": executable file not found in $PATH"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"sayhello.sh\": executable file not found in $PATH".
ERRO[0001] error getting events from daemon: net/http: request canceled
What is causing this? I recall running scripts in debian:jessie-based images in a similar manner. So perhaps it is Alpine-specific?

Alpine comes with ash as the default shell instead of bash.
So you can
Have a shebang defining /bin/sh as the first line of your sayhello.sh, so that the file begins with
#!/bin/sh
Or install Bash in your Alpine image, since you seem to expect Bash to be present, with a line like this in your Dockerfile:
RUN apk add --no-cache --upgrade bash
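For example, a minimal sketch of the second option, combined with the chmod and absolute-path suggestions from the answers below (the exact paths here are just illustrative):
FROM alpine
RUN apk add --no-cache --upgrade bash
COPY sayhello.sh /sayhello.sh
RUN chmod +x /sayhello.sh
CMD ["/sayhello.sh"]
With a #!/bin/bash shebang in sayhello.sh, docker run should then print hello.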

This answer is completely right and works fine, but there is another way: you can run the script in an Alpine-based container by invoking it through the shell explicitly. Change CMD as below:
CMD ["sh", "sayhello.sh"]
And this works too.

Remember to grant execute permission on any script you copy into the image.
FROM alpine
COPY sayhello.sh /sayhello.sh
RUN chmod +x /sayhello.sh
CMD ["/sayhello.sh"]

With that CMD, Docker searches for sayhello.sh in the PATH, but you copied it to /, which is not in the PATH.
So use an absolute path to the script you want to execute:
CMD ["/sayhello.sh"]
BTW, as @user2915097 said, be careful that Alpine doesn't have Bash by default, in case your script uses it in its shebang.

Related

tar -xvf failing in dockerfile

I want to create my own Docker image using the following Dockerfile
FROM scratch
COPY apache-cassandra-3.11.6-bin.tar.gz .
RUN tar -xzf apache-cassandra-3.11.6-bin.tar.gz
When I run the command, the tar instruction fails. Why? I am on Windows 10.
C:\Users\manuc\Documents\manu\cassandra_image_test>docker build -f CassandraImageDockerFile.txt -t manucassandra .
Sending build context to Docker daemon 184.8MB
Step 1/3 : FROM scratch
--->
Step 2/3 : COPY apache-cassandra-3.11.6-bin.tar.gz .
---> Using cache
---> deda426d6948
Step 3/3 : RUN tar -xzf apache-cassandra-3.11.6-bin.tar.gz .
---> Running in 3edfb1031c06
OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown
Could it be that because I am not using an existing image, tar is not present in the image and thus tar is failing?
stat /bin/sh: no such file or directory
You started from a scratch image that has nothing. Then you told Docker to RUN a command, but you have no shell.
So you can't run any command, much less tar.
Why not start with alpine?
It looks like whatever the "scratch" base image is, it doesn't have /bin/sh, let alone tar or anything else. I would consider starting from a known OS image, or even the official Cassandra image on Docker Hub: https://hub.docker.com/_/cassandra
A Dockerfile that starts FROM scratch starts from a base image that has absolutely nothing at all in it. It is totally empty. There is not a set of base tools or libraries or anything else.
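As a rough sketch of that suggestion (the Alpine tag and the /opt target directory are assumptions, not from the question):
FROM alpine:3.11
COPY apache-cassandra-3.11.6-bin.tar.gz /opt/
RUN tar -xzf /opt/apache-cassandra-3.11.6-bin.tar.gz -C /opt \
 && rm /opt/apache-cassandra-3.11.6-bin.tar.gz
Alpine ships BusyBox, which provides both /bin/sh and tar, so the RUN step has a shell and a tar binary to work with.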

Dockerfile cannot find copied folder

This is quite strange.
I have a structure like this
app/
CLI/
someOtherFolder/
    thingIwantToRun.py
tests.Dockerfile
Dockerfile
README.md
gunicorn.conf
This is what my Dockerfile looks like
FROM python:3.6
WORKDIR /app
COPY ./requirements.txt /.requirements.txt
# Install any needed packages specified in requirements.txt
RUN pip install -r /.requirements.txt
COPY gunicorn.conf /gunicorn.conf
COPY . /app
EXPOSE 8000
RUN ls
ENV FLASK_ENV=development
CMD ["python ./someOtherFolder/thingIwantToRun.py"]
This gives me this error when I start the container -
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"ls ./someOtherFolder\": stat ls ./someOtherFolder: no such file or directory": unknown.
When I change the CMD command to something else that doesn't fail and jump into the container, I see that my folder is indeed there.
When I add a RUN ls into my Dockerfile, I can still see my folder.
If it exists, why can't I run it?
UPDATE -
If I move thingIWantToRun.py into the top level folder and change my Docker CMD to
CMD [python thingIWantToRun.py]
I see the same issue. However, I can ssh into the container and verify that the file is there.
The problem is how you are writing the CMD instruction. The exec form looks like this:
CMD ["executable", "param1", "param2"]
ref: https://docs.docker.com/engine/reference/builder/#cmd
So the actual command should be
CMD ["python", "./someOtherFolder/thingIwantToRun.py"]
Docker tries to find the executable (the first item of the array) and run it, passing the rest of the array items (param1, param2) to it as arguments. If you look closer at the error, it prints
... process caused "exec: \"ls ./someOtherFolder\": stat ls ./someOtherFolder: no such file or directory"
It says that ls ./someOtherFolder is not a file or directory, so it can't exec it; that whole string was taken as the first item of the array, the executable. Here ls should be the first item and ./someOtherFolder the second item of the array.
So you need to write the CMD like this:
CMD ["python", "./someOtherFolder/thingIwantToRun.py"]

ERRO[0001] error waiting for container: context canceled

I'm getting an error while running a Docker image. It looks like the problem is on my PC.
I'm using macOS 10.13.6.
I followed these steps to create a Docker image.
Sanjeet:server-api sanjeet$ docker build -t apicontainer .
Sending build context to Docker daemon 24.01MB
Step 1/2 : FROM alpine:3.6
---> da579b235e92
Step 2/2 : CMD ["/bin/bash"]
---> Running in f43fa95302d4
Removing intermediate container f43fa95302d4
---> 64d0b47af4df
Successfully built 64d0b47af4df
Successfully tagged apicontainer:latest
Sanjeet:server-api sanjeet$ docker images
REPOSITORY     TAG      IMAGE ID       CREATED         SIZE
apicontainer   latest   64d0b47af4df   3 minutes ago   4.03MB
alpine         3.6      da579b235e92   2 weeks ago     4.03MB
Sanjeet:server-api sanjeet$ docker run -it apicontainer
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown.
Sanjeet:server-api sanjeet$ ERRO[0001] error waiting for container: context canceled
Inside Dockerfile
FROM alpine:3.6
CMD ["/bin/bash"]
alpine does not include bash by default.
If you want to include bash, you should add RUN apk add --no-cache bash in your Dockerfile.
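For example, a minimal sketch based on the Dockerfile from the question:
FROM alpine:3.6
RUN apk add --no-cache bash
CMD ["/bin/bash"]
Alternatively, keep the image small and use the shell Alpine already ships with: CMD ["/bin/sh"].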
Issues while running/re-initializing Docker:
ERRO[0000] error waiting for the container: context canceled
Steps to stop the running Docker container forcefully:
docker ps
# copy the CONTAINER ID of the running container, e.g. 37146513b713
docker kill 37146513b713
docker rm 37146513b713
alpine does not provide glibc. alpine is that small because it uses a stripped-down libc implementation called musl (musl.libc.org).
So we'll check the binary's dynamically linked dependencies using the ldd command.
$ docker run -it <image name> /bin/sh
$ cd /go/bin
$ ldd scratch
Check whether the linked libraries exist on that version of alpine. If they do not, then from the binary's perspective the file it needs was not found, and it will report "file not found".
The next step depends on which libraries were missing; you can look up on the internet how to install them.
Adding RUN apk add --no-cache libc6-compat to your Dockerfile provides glibc compatibility in some golang:alpine-based Dockerfiles.
In your case the solution is to either
disable CGO : use CGO_ENABLED=0 while building
or add
RUN apk add --no-cache libc6-compat
to your Dockerfile
or do not use golang:alpine
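As a rough sketch of the CGO_ENABLED=0 option (the stage names, paths, and binary name are placeholders, not from the question):
FROM golang:alpine AS build
WORKDIR /src
COPY . .
# Disabling cgo produces a statically linked binary that does not need glibc at run time
RUN CGO_ENABLED=0 go build -o /myapp .
FROM alpine
COPY --from=build /myapp /usr/local/bin/myapp
CMD ["myapp"]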
I get this error when the network doesn't exist.
docker: Error response from daemon: network xxx not found.
ERRO[0000] error waiting for container: context canceled
It's quite easy to miss the output line after a long docker command and before the red error message.

OCI runtime create failed: container_linux.go:296 - no such file or directory

End of my Dockerfile:
ENTRYPOINT ["ls /etc"]
Terminal:
...Rest of the building above is fine
Step 8/8 : ENTRYPOINT ["ls /etc"]
---> Using cache
---> ea1f33b8ab22
Successfully built ea1f33b8ab22
Successfully tagged redis:latest
k#Karls ~/dev/docker_redis (master) $ docker run -d -p 6379:6379 --name red redis
71d75058b94f088ef872b08a115bc12cece288b53fe26d67960fe139953ed5c4
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"ls /etc\": stat ls /etc: no such file or directory": unknown.
For some reason, it won't find the directory /etc. I did a pwd and the current working directory is /. I also did a ls / on the entrypoint and that displayed the /etc directory fine.
OCI runtime create failed: container_linux.go:296
In my experience this is an error with the Docker daemon itself, not the container you are trying to run. Try deleting all containers and restarting the daemon. I think we also had to clean up the Docker networks.
I appear to be having the same issue. Here is what I am doing.
Dockerfile
FROM gcc:7.2.0
COPY src/ /usr/src/myapp
WORKDIR /usr/src/myapp
RUN set -x gcc -o myapp main.c
CMD ["./myapp"]
Build
$ docker build -t test .
Sending build context to Docker daemon 3.584kB
Step 1/6 : FROM gcc:7.2.0
...
---> 3ec35c7d2396
Successfully built 3ec35c7d2396
Successfully tagged test:latest
SECURITY WARNING: You are building a Docker image from Windows against a
non-Windows Docker host. All files and directories added to build context
will have '-rwxr-xr-x' permissions. It is recommended to double check and
reset permissions for sensitive files and directories.
Run
$ docker run -it test
D:\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create
failed: container_linux.go:296: starting container process caused "exec:
\"./myapp\": stat ./myapp: no such file or directory": unknown.
Changing CMD to ENTRYPOINT and removing the set -x seemed to resolve the problem, though we are still unsure what the cause was or whether this will also work for you.
Make sure the file your ENTRYPOINT points to actually exists, as the main.c wasn't compiling: with set -x in front, the RUN line never invokes gcc (set just takes the rest of the line as its own arguments), so ./myapp was never created.
Dockerfile
FROM gcc:7.2.0
COPY src/ /usr/src/myapp
WORKDIR /usr/src/myapp
RUN gcc -o myapp main.c
ENTRYPOINT ["./myapp"]
On OSX, I fixed it by clearing the volume data manually. Close docker, and remove everything in ~/Library/Containers/com.docker.docker
I've experienced the same issue after updating my Windows credentials. Try the following: Docker settings > Shared Drives > Reset credentials > Select drives again > Apply, and re-enter your credentials. This solved the problem for me multiple times.
The command you are trying to execute inside the container does not exist. In this case ls /etc does not exist in the image. There's a /bin/ls binary, but not a /bin/"ls /etc" binary, which itself would be invalid since the name of a file on the filesystem cannot include a /, though it can include a space.
Of course what you wanted to run was ls with the argument /etc, and for that, you need to separate each argument if you run with the exec syntax.
ENTRYPOINT ["ls", "/etc"]
Or if you wanted to allow a shell to parse the string, same way as if you were at a bash prompt inside the container running ls /etc on the command line, then switch to the string syntax that runs a shell:
ENTRYPOINT ls /etc
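For reference, the shell form above is equivalent to wrapping the command in a shell, roughly:
ENTRYPOINT ["/bin/sh", "-c", "ls /etc"]
which is why it works even though the string contains a space.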

getting permission denied in docker run

I am trying to use Docker with a Dockerfile.
My Dockerfile is as follows; I am using a Debian Linux system.
FROM debian:jessie
ENV DEBIAN_FRONTEND noninteractive
ARG AIRFLOW_VERSION=1.7.1.3
ENV AIRFLOW_HOME /usr/local/airflow
..
..
COPY script/entrypoint.sh /entrypoint.sh
COPY config/airflow.cfg ${AIRFLOW_HOME}/airflow.cfg
..
..
USER airflow
WORKDIR ${AIRFLOW_HOME}
ENTRYPOINT ["/entrypoint.sh"]
So when I run docker build -t test ., it builds without problems.
However, when I run docker run -p 8080:8080 test,
it throws the following error:
container_linux.go:247: starting container process caused "exec: \"/entrypoint.sh\": permission denied"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/entrypoint.sh\": permission denied".
What am I doing wrong?
You need to make the script executable with chmod +x /entrypoint.sh before the ENTRYPOINT calls it, and do so before switching to the airflow user (so the chmod runs as root and the file is found at its absolute path). So change your code to the following:
RUN chmod +x /entrypoint.sh
USER airflow
WORKDIR ${AIRFLOW_HOME}
ENTRYPOINT ["/entrypoint.sh"]
Rebuild the image and run the container, it should work.
Since COPY copies files including their metadata, you can also simply change the permissions of the file in the host machine (the one building the Docker image):
$ chmod +x entrypoint.sh
Then, when running docker build -t test . the copied file will have the execution permission and docker run -p 8080:8080 test should work.
Note: I'm not advocating this as best practice, but it works.
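For example, using the path from the question's COPY line (the ls -l check is just a sanity check, not from the answer):
$ chmod +x script/entrypoint.sh
$ ls -l script/entrypoint.sh
The permissions column should now show the execute bits (e.g. -rwxr-xr-x); rebuild with docker build -t test . and they will be preserved in the image.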
In your terminal, run chmod +x entrypoint.sh,
or if entrypoint.sh is inside a folder, run chmod +x folder_name/entrypoint.sh.
I changed the location of the entrypoint script in the Docker folder, rebuilt, and it worked!
