Here is my Dockerfile:
FROM debian
MAINTAINER Andrew Ford<andrew.ford#gg.com>
RUN apt-get update
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
and here is my entrypoint.sh (same directory as Dockerfile)
#!/bin/bash
echo Hello
then I ran:
docker build --no-cache=true -t test/dockerfile-sayhello .
and when I ran:
docker run test/dockerfile-sayhello
it returns:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: Container command '/entrypoint.sh' not found or does not exist..
I have tried googling around to see if I have made any obvious mistake, but so far I haven't been able to identify it. Maybe some of you can help.
Edit: I also ran chmod +x entrypoint.sh to give it execute permission
I just tried with the following Dockerfile (adding the chmod)
FROM debian
MAINTAINER Andrew Ford<andrew.ford#gg.com>
RUN apt-get update
COPY entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
And it works as expected.
It is not like issue 20789:
I tried to transform a Dockerfile setup from phusion/baseimage to gliderslabs/alpine. Turns out that those shell scripts use bash -- of course! Simply change to sh, as bash is not present, resulting in the above error.
The latest debian image should include bash since its default CMD is /bin/bash.
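If you want to double-check what actually ended up in the image, one option (a sketch using the image tag from the question and the standard --entrypoint override) is to list the file without running it:
docker run --rm --entrypoint ls test/dockerfile-sayhello -l /entrypoint.sh
With the chmod step in place, this should report -rwxr-xr-x for /entrypoint.sh.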
Related
I want to preface this by saying that I am very new to Docker and have just gotten my feet wet with it. In the Dockerfile that I use to build the container, I install a program that sets some environment variables. Here is my Dockerfile for context:
FROM python:3.8-slim-buster
COPY . /app
RUN apt-get update
RUN apt-get install wget -y
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/install_mvGenTL_Acquire.sh
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/mvGenTL_Acquire-x86_64_ABI2-2.40.0.tgz
RUN chmod +x ./install_mvGenTL_Acquire.sh
RUN ./install_mvGenTL_Acquire.sh -u
RUN apt-get install -y python3-opencv
RUN pip3 install USSCameraTools
WORKDIR /app
CMD python3 main.py
After executing the docker build command, the install_mvGenTL_Acquire.sh script sets environment variables inside the container. I need these variables to also be set when executing the docker run command, but when I check the environment variables after running the image, they are not set. I know I can pass them in directly, but I would like to use the ones that are set by the install during the build.
Any help would be greatly appreciated, thanks!
To run a bash script when your container starts:
Create a script.sh file:
#!/bin/bash
# your commands here
If you are using an Alpine-based image, you must use #!/bin/sh instead of #!/bin/bash on the first line of your script, as bash is not present by default.
Now, in your Dockerfile, copy your script into the container and use the ENTRYPOINT instruction to run it when the container starts:
.
.
.
COPY script.sh /
RUN chmod +x /script.sh
.
.
.
ENTRYPOINT ["/script.sh"]
Note that the ENTRYPOINT instruction must use the path of your script inside the image.
Now, when you create a container, the script.sh file will be executed.
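As a concrete sketch for the environment-variables question above: if the camera installer writes its variables to a file at build time, the entrypoint script can source that file and then hand off to the image's CMD (python3 main.py in that question), so the variables are also set at run time. The file path below is hypothetical — check where install_mvGenTL_Acquire.sh actually stores its environment:
#!/bin/sh
# Hypothetical file written by the installer during the build; adjust to the real path
. /etc/profile.d/mv_acquire_env.sh
# Hand off to whatever command was passed in (the image's CMD)
exec "$@"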
I am trying to use Docker to run kdb/q, but I get a "No such file or directory" error.
Dockerfile:
FROM ubuntu
COPY ./ /root_dir/
WORKDIR root_dir
ENV QHOME=/root_dir/bin/q
RUN ["chmod", "+x", "/root_dir/bin/q/l32/q"]
CMD ["/bin/bash"]
I am opening a bash prompt just so I can take a look around, but eventually this would just run the q command directly.
File Layout:
- root_dir
  - bin
    - q
      - q.k
      - s.k
      - l32
        - q
Build:
sudo docker build -t dfile -f Dockerfile .
Run:
sudo docker run -it dfile
Gives me a bash command prompt, and trying to launch q:
root@5e4b86578916:/root_dir# /root_dir/bin/q/l32/q
Gives
bash: /root_dir/bin/q/l32/q: No such file or directory
However I can see it there:
root@5e4b86578916:/root_dir# ls /root_dir/bin/q/l32/
q
How can I launch q/any executable from here?
NB: I am running q locally on Ubuntu with the same command; if I set QHOME to the same (local) location using export and then give the full path to the executable, I enter a valid q session.
It turns out it needs libc6-i386 to run (the 32-bit compatibility libraries for running 32-bit binaries on a 64-bit system):
FROM ubuntu
COPY ./ /root_dir/
WORKDIR root_dir
RUN apt-get update && apt-get install -y libc6-i386
ENV QHOME=/root_dir/bin/q
RUN chmod +x /root_dir/bin/q/l32/q
CMD /root_dir/bin/q/l32/q
Now it works as expected. The only docs I can find are not very illuminating: https://packages.ubuntu.com/focal/libc6-i386
See kdb install notes for how to run 32-bit kdb+ on 64-bit linux.
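If you hit this with another binary, one way to confirm that "No such file or directory" really means a missing 32-bit loader (a sketch; run inside the container) is to check for the 32-bit ELF interpreter that libc6-i386 provides:
ls -l /lib/ld-linux.so.2
# absent before installing libc6-i386, present afterwards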
I am currently trying to deal with a deployment to a kubernetes cluster. The deployment keeps failing with the response
Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/entrypoint.sh\": permission denied"
I have tried to change the permissions on the file, which seems to succeed: if I run ls -l I get -rwxr-xr-x as the permissions for the file.
I have tried placing the chmod command both in the Dockerfile itself and prior to the image being built and uploaded, but neither seems to make any difference.
Any ideas why I am still getting the error?
dockerfile below
FROM node:10.15.0
CMD []
ENV NODE_PATH /opt/node_modules
# Add kraken files
RUN mkdir -p /opt/kraken
ADD . /opt/kraken/
# RUN chown -R node /opt/
WORKDIR /opt/kraken
RUN npm install && \
npm run build && \
npm prune --production
# Add the entrypoint
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
USER node
ENTRYPOINT ["/entrypoint.sh"]
This error is not about the entrypoint file itself but about how the command inside it is invoked. Start scripts with "sh script.sh", whether in ENTRYPOINT or CMD. In this case it would be: ENTRYPOINT ["sh", "entrypoint.sh"]
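A minimal sketch of that change against the Dockerfile in the question (same copy destination as above; invoking the script through sh also avoids depending on the execute bit):
COPY ./entrypoint.sh /entrypoint.sh
USER node
ENTRYPOINT ["sh", "/entrypoint.sh"]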
I created a GitHub Action with a Dockerfile and an entrypoint.sh file. I ran chmod +x on my computer and pushed to the GitHub repository; I did not RUN chmod +x in the Dockerfile. It works.
Try docker exec -it <container> /bin/sh
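If you rely on the execute bit being committed, as in the GitHub Actions answer above, and the bit gets lost (for example when working on Windows), one way to set it directly in git (a sketch; adjust the path to your entrypoint.sh) is:
git update-index --chmod=+x entrypoint.sh
git commit -m "Make entrypoint.sh executable"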
How do I execute an executable twice in a docker container?
For instance, I need to run my application twice: the first time to initialize some stuff, and the second time to listen on a given port defined in an environment variable.
The commands from a shell would be something like this:
[j3d@gonzo test]$ kontrol -initial
[j3d@gonzo test]$ kontrol
started... listening on port 6000...
Here below is my Dockerfile:
FROM golang:1.8.3 as builder
RUN go get -u github.com/golang/dep/cmd/dep
RUN go get -d github.com/koding/kite
WORKDIR ${GOPATH}/src/github.com/koding/kite
RUN ${GOPATH}/bin/dep ensure
RUN go install ./kontrol/kontrol
RUN mv ${GOPATH}/bin/kontrol /tmp
FROM busybox
ENV APP_HOME /opt/robotrader
RUN mkdir -p ${APP_HOME}
WORKDIR ${APP_HOME}
COPY --from=builder /tmp/kontrol .
ENTRYPOINT ["./kontrol", "-initial"]
CMD ["./kontrol"]
The image builds successfully... but when I start the container I always get the following error message:
kontrol | standard_init_linux.go:190: exec user process caused "no such file or directory"
Any help would be really appreciated.
EDIT
Thanks to zero298, who helped me figure out the issue, here is a working Dockerfile:
FROM golang:1.8.3 as builder
RUN go get -u github.com/golang/dep/cmd/dep
RUN go get -d github.com/koding/kite
WORKDIR ${GOPATH}/src/github.com/koding/kite
RUN ${GOPATH}/bin/dep ensure
RUN CGO_ENABLED=0 go install ./kontrol/kontrol
RUN mv ${GOPATH}/bin/kontrol /tmp
FROM busybox
ENV APP_HOME /opt/robotrader
RUN mkdir -p ${APP_HOME}
WORKDIR ${APP_HOME}
COPY --from=builder /tmp/kontrol .
ENTRYPOINT ["./kontrol", "-initial"]
CMD ["./kontrol"]
The go application should be built with CGO_ENABLED=0 - see this post for more info.
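A quick way to see the difference the flag makes (a sketch; run inside the golang builder stage) is to check whether the compiled binary is dynamically linked, since busybox does not ship glibc:
ldd /tmp/kontrol
# default build: lists libc.so.6 and other shared libraries
# with CGO_ENABLED=0: prints "not a dynamic executable"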
I think you are running into a different issue than you think you are. Running your Dockerfile and then executing:
docker build -t j3d .
docker run -it --rm --name j3d-test --entrypoint sh j3d
Allows me to run my own commands from within the container.
Using ls lists out the PWD contents:
-rwxr-xr-x 1 root root 16.8M Jun 21 19:20 kontrol
Everything seems normal. However, trying to run that myself generates the following error:
sh: ./kontrol: not found
To me, this is likely similar to: Linux executable fails with “File not found” even though the file is there and in PATH.
In fact, if you instead:
Copy the compiled kontrol executable out of your builder image
Run the ubuntu container, mounting the directory with the copied kontrol executable: docker run -it --rm -v $PWD:/mnt/go ubuntu sh
Try to run kontrol
You will get the "correct" error, which indicates that you haven't set up your keys correctly:
2018/06/21 19:56:57 cannot read public key file: open : no such file or directory
Your path forward is probably to figure out why you can't cross-compile.
Create a script that runs it twice:
E.g in "startup.sh"
#!/bin/bash
# Run kontrol twice
./kontrol -initial
./kontrol
Then replace the last two lines in your Dockerfile with:
COPY startup.sh .
CMD ["./startup.sh"]
If kontrol terminates when you run it with the -initial flag, then you should just use:
RUN /opt/robotrader/kontrol -initial
CMD ["./kontrol"]
If it doesn't terminate, you'll have to find another way to architect your app.
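For reference, keeping both ENTRYPOINT and CMD in exec form, as in the original Dockerfile, does not run the binary twice: Docker appends the CMD array to the ENTRYPOINT array as arguments, so the container effectively starts
./kontrol -initial ./kontrol
which is why a wrapper script like startup.sh above (or a build-time RUN) is needed to chain the two invocations.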
I am trying to use Docker with a Dockerfile.
My Dockerfile is as follows; I am using a Debian Linux base image.
FROM debian:jessie
ENV DEBIAN_FRONTEND noninteractive
ARG AIRFLOW_VERSION=1.7.1.3
ENV AIRFLOW_HOME /usr/local/airflow
..
..
COPY script/entrypoint.sh /entrypoint.sh
COPY config/airflow.cfg ${AIRFLOW_HOME}/airflow.cfg
..
..
USER airflow
WORKDIR ${AIRFLOW_HOME}
ENTRYPOINT ["/entrypoint.sh"]
So when I run docker build -t test ., it builds without a problem.
However, when I run docker run -p 8080:8080 test, it throws the following error:
container_linux.go:247: starting container process caused "exec: \"/entrypoint.sh\": permission denied"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/entrypoint.sh\": permission denied".
What am I doing wrong?
You need to make the script executable with chmod +x before it is used as the ENTRYPOINT. Do the chmod while the build is still running as root (i.e. before the USER airflow instruction) and use the absolute path, since the file was copied to /. So change your code to the following:
RUN chmod +x /entrypoint.sh
USER airflow
WORKDIR ${AIRFLOW_HOME}
ENTRYPOINT ["/entrypoint.sh"]
Rebuild the image and run the container; it should work.
Since COPY preserves the file's permission bits, you can also simply change the permissions of the file on the host machine (the one building the Docker image):
$ chmod +x script/entrypoint.sh
Then, when running docker build -t test ., the copied file will have execute permission and docker run -p 8080:8080 test should work.
Note: I'm not advocating this as best practice, but still, it works.
In your terminal, run "chmod +x entrypoint.sh",
or, if the entrypoint.sh file is in a folder, run "chmod +x folder_name/entrypoint.sh".
I changed the location of the entrypoint in the Docker folder, rebuilt, and it worked!