I have the following Dockerfile:
FROM ubuntu:21.04
COPY keep-alive.sh $HOME/keep-alive.sh
CMD ["$HOME/keep-alive.sh"]
Yes, I know it's really useless, but I am learning.
When I run this:
$ docker run -d --name linux-worker myorg/linux-worker
71cfc9ff7072688d1758f2ac98a8293ed2bcf77bf68f980da20237c9961aca6c
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"$HOME/keep-alive.sh\": stat $HOME/keep-alive.sh: no such file or directory": unknown.
All I want the container to do is start up and run $HOME/keep-alive.sh. Can anyone spot where I'm going awry?
The $HOME environment variable will not be set when the COPY instruction runs in your Dockerfile, so your script was likely placed at /keep-alive.sh instead of /root/keep-alive.sh, where your CMD instruction expects it to be.
Check the logs of your build and you'll likely see that line executed like:
=> [3/3] COPY keep-alive.sh /keep-alive.sh
instead of:
=> [3/3] COPY keep-alive.sh /root/keep-alive.sh
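If the image has already been built, you can also confirm where the script actually landed by overriding the command with ls (using the image name from your docker run command):
docker run --rm myorg/linux-worker ls -l / /root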
To fix this, you can explicitly set an environment variable to use in this command:
FROM ubuntu:21.04
ENV DIR /root
COPY keep-alive.sh $DIR/keep-alive.sh
CMD ["/bin/bash", "-c", "$DIR/keep-alive.sh"]
The other change that had to be made was to run the script through a shell so that the environment variable gets expanded. See this answer for more details about this issue.
Alternatively, if you didn't want to use environment variables at all, you could change that line to this if you know the script's path:
CMD [ "/root/keep-alive.sh" ]
Related
I have a Dockerfile with an ENTRYPOINT where I specify the config file variable and the executable file, but it looks like Docker or the ENTRYPOINT doesn't recognize it. My main.py has to be executed with the config file.
ENTRYPOINT ["CONFIG_FILE=path/to/config.file ./main.py"]
It always reproduces the same error: no such file or directory: OCI not found
Note: I have already copied all the files in the current working directory, and main.py is an executable file. So I guess the problem is the config variable prepended to the executable. Does anyone know what is going on there? Changing from ENTRYPOINT to CMD does not help either.
Dockerfile
FROM registry.fedoraproject.org/fedora:34
WORKDIR /home
COPY . /home
ENTRYPOINT ["CONFIG_FILE=path/to/config.file ./main.py"]
If you just need to set an environment variable to a static string, use the Dockerfile ENV directive.
ENV CONFIG_FILE=/path/to/config.file
CMD ["./main.py"]
The Dockerfile ENTRYPOINT and CMD directives (and also RUN) have two forms. You've used the JSON-array form; in that form there is no shell involved, and you have to split out the words yourself. (As written, Docker is looking for a single executable whose file name is the entire string CONFIG_FILE=... ./main.py, including the = and the space.) If you don't want the JSON-array form you can use the shell form instead, and this form should work:
CMD CONFIG_FILE=/path/to/config.file ./main.py
In general you should prefer CMD to ENTRYPOINT. There's a fairly standard pattern of using ENTRYPOINT to do first-time setup and then execute the CMD. For example, if you expected the configuration file to be bind-mounted in, but want to set the variable only if it exists, you could write a shell script:
#!/bin/sh
# entrypoint.sh
#
# If the config file exists, set it as an environment variable.
CONFIG_FILE=/path/to/config.file
if [ -f "$CONFIG_FILE" ]; then
export CONFIG_FILE
else
unset CONFIG_FILE
fi
# Run the main container CMD.
exec "$@"
Then you can specify both the ENTRYPOINT (which sets up the environment variables) and the CMD (which says what to actually do):
# ENTRYPOINT must be JSON-array form for this to work
ENTRYPOINT ["./entrypoint.sh"]
# Any valid CMD syntax is fine
CMD ["./main.py"]
You can double-check the environment variable setting by providing an alternate docker run command:
# (Make sure to quote things so the host shell doesn't expand them first)
docker run --rm my-image sh -c 'echo $CONFIG_FILE'
docker run --rm -v "$PWD:/path/to" my-image sh -c 'echo $CONFIG_FILE'
If having the same environment in one-off debugging shells launched by docker exec is important to you, of these approaches, only Dockerfile ENV will make the variable visible there. In the other cases the environment variable is only visible in the main container process and its children, but the docker exec process isn't a child of the main process.
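As a quick illustration of that last point, compare what a docker exec shell sees under each approach (the container name my-container is assumed for the example):
# With the ENV approach this prints the value; with the entrypoint-script approach it prints
# nothing, because docker exec does not go through the entrypoint.
docker exec my-container sh -c 'echo "$CONFIG_FILE"'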
I've created a Dockerfile at the root of a directory that contains a web application. My Dockerfile reads as follows:
FROM openjdk:8u282-jre
MAINTAINER me <me@email.com>
COPY target/spring-petclinic-2.2.0.BUILD-SNAPSHOT.jar /
EXPOSE 8080
ENTRYPOINT ["java -jar"]
I'm attempting to copy a jar file from my local machine, using a path relative to the root of the project directory, into the root of the container, expose port 8080, and set an entry point, thinking that after building the image the jar will be run as an executable via that entry point. I then built the image as follows:
docker build -t se441/spring-petclinic:standalone .
Giving the build the name se441/spring-petclinic:standalone, I then attempt to run the container:
docker run -i -t se441/spring-petclinic:standalone
And I am getting the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370:
starting container process caused: exec: "java -jar": executable file not found in $PATH:
unknown.
I edited the Dockerfile to have the entry point be /bin/bash, did an 'ls', and the jar file is definitely there. While in the container, I can run the jar successfully. Any advice on why the jar file can't be found when building/running with the Dockerfile as I've listed above would be greatly appreciated. Thank you.
Change:
ENTRYPOINT ["java -jar"]
to:
ENTRYPOINT ["java", "-jar"]
The exec syntax (with the JSON array) does not do any of the command-line parsing that you get with a shell, so Docker is looking for a literal binary named java -jar rather than java with a -jar argument. You need to separate each parameter into its own array entry, just as you'd separate them with spaces on the command line.
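You can see exactly what Docker thinks the entrypoint is by inspecting the image; with the original Dockerfile the whole string shows up as a single array element:
docker inspect --format '{{json .Config.Entrypoint}}' se441/spring-petclinic:standalone
# ["java -jar"] before the fix, ["java","-jar"] after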
At that point, you'll find that -jar expects the name of the jar file. You can pass that as an argument when you run the container, e.g.:
docker run -i -t se441/spring-petclinic:standalone spring-petclinic-2.2.0.BUILD-SNAPSHOT.jar
Or specify that as well in the entrypoint:
ENTRYPOINT ["java", "-jar", "spring-petclinic-2.2.0.BUILD-SNAPSHOT.jar"]
This is quite strange.
I have a structure like this:
app/
    CLI/
    someOtherFolder/
        thingIwantToRun.py
    tests.Dockerfile
    Dockerfile
    README.md
    gunicorn.conf
This is what my Dockerfile looks like
FROM python:3.6
WORKDIR /app
COPY ./requirements.txt /.requirements.txt
# Install any needed packages specified in requirements.txt
RUN pip install -r /.requirements.txt
COPY gunicorn.conf /gunicorn.conf
COPY . /app
EXPOSE 8000
RUN ls
ENV FLASK_ENV=development
CMD ["python ./someOtherFolder/thingIwantToRun.py"]
This gives me this error when I start the container -
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"ls ./someOtherFolder\": stat ls ./someOtherFolder: no such file or directory": unknown.
When I change the CMD to something else that doesn't fail and jump into the container, I see that my folder is indeed there.
When I add a RUN ls into my Dockerfile, I can still see my folder.
If it exists, why can't I run it?
UPDATE -
If I move thingIWantToRun.py into the top level folder and change my Docker CMD to
CMD [python thingIWantToRun.py]
I see the same issue. However, I can ssh into the container and verify that the file is there.
The problem is how you are writing the CMD instruction. The exec form is:
CMD ["executable", "param1", "param2"]
ref: https://docs.docker.com/engine/reference/builder/#cmd
In that sense, the actual command should be:
CMD ["python", "./someOtherFolder/thingIwantToRun.py"]
Docker tries to find the executable (the first item of the array) and run it, passing the rest of the array items (param1, param2) to it as arguments. If you look closely at the error, it prints:
... process caused "exec: \"ls ./someOtherFolder\": stat ls ./someOtherFolder: no such file or directory"
It says that ls ./someOtherFolder is not a file or directory, so it can't exec it; that whole string is the first item of the array, the executable. If you really meant to run ls, then ls should be the first item and ./someOtherFolder the second item of the CMD array.
So, use the CMD instruction like this:
CMD ["python", "./someOtherFolder/thingIwantToRun.py"]
I have a Docker image built with this Dockerfile:
FROM ruby:2.4-alpine
WORKDIR /usr/src/app
COPY Gemfile /usr/src/app/Gemfile
COPY Gemfile.lock /usr/src/app/Gemfile.lock
RUN bundle config build.nokogiri --use-system-libraries
RUN bundle install --without development test
VOLUME /state
COPY . /usr/src/app/
ENTRYPOINT ["api-entrypoint.sh"]
CMD ["foreman", "start"]
It builds correctly, but when I try to run bash, for example, I get this:
container_linux.go:247: starting container process caused "exec: \"api-entrypoint.sh\": executable file not found in $PATH"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"api-entrypoint.sh\": executable file not found in $PATH".
I tried copying the entrypoint file and giving it executable permissions, and also tried with CMD... nothing worked.
I had this problem with Docker for Windows and the solution was changing the entrypoint script file from CRLF -> LF.
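If you don't have dos2unix handy, sed can do the same conversion before you build (a sketch, assuming GNU sed, e.g. in Git Bash; the file name comes from the question):
sed -i 's/\r$//' api-entrypoint.sh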
I had the same problem - the entrypoint was not found, but I was sure that it was there.
It seems that you can't use single quotes ' for the entrypoint/command.
So I changed from
ENTRYPOINT ['/foo/bar/script.sh']
CMD ['run']
to
ENTRYPOINT ["/foo/bar/script.sh"]
CMD ["run"]
and it works.
/usr/src/app may not be on your PATH, so you should include the full path to the script. You also need to ensure that your api-entrypoint.sh is executable; Docker copies the permissions exactly as they are on your build host, so this step may not be needed depending on your scenario.
FROM ruby:2.4-alpine
WORKDIR /usr/src/app
COPY Gemfile /usr/src/app/Gemfile
COPY Gemfile.lock /usr/src/app/Gemfile.lock
RUN bundle config build.nokogiri --use-system-libraries
RUN bundle install --without development test
VOLUME /state
COPY . /usr/src/app/
RUN chmod 755 api-entrypoint.sh
ENTRYPOINT ["/usr/src/app/api-entrypoint.sh"]
CMD ["foreman", "start"]
Another source of issues can be your shebang: if it is #!/bin/bash and you don't have bash in your image/base image, it will tell you that your entrypoint is not found. This is one of the issues I ran into.
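You can check which shells actually exist in the base image before relying on a shebang (using the base image from the question):
docker run --rm ruby:2.4-alpine ls -l /bin/sh /bin/bash
# /bin/bash will be missing on Alpine-based images unless you install it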
In my case I had an error:
> [27/35] RUN /entrypoint.sh:
#31 0.503 /bin/sh: 1: /entrypoint.sh: not found
I just ran the dos2unix command and the issue was gone:
dos2unix entrypoint.sh
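You can verify whether the script still has Windows line endings with file:
file entrypoint.sh
# reports "... with CRLF line terminators" if it still needs converting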
I had a multi-stage build with a Go application where this problem occurred. The Go executable was built in the builder stage (an Alpine image) and then copied to the next stage (a Debian image). In the second stage the error occurred: 'mygoexecutable' not found or does not exist.
The reason was that the executable was not compatible with the image of the second stage, because it had some cgo references that were only available in the builder stage: Alpine uses musl libc while the Debian images use glibc. The solution is to use compatible images or to set the environment variable CGO_ENABLED=0 (disable cgo) while building the executable.
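A minimal sketch of the build command for the builder stage, with cgo disabled (the executable name is taken from the error above):
# pure-Go binaries built this way do not link against musl or glibc
CGO_ENABLED=0 go build -o mygoexecutable .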
In my case, I first tried removing the ENTRYPOINT from the Dockerfile to check whether the .sh entrypoint file exists, and I confirmed that it is there.
When I tried to run the .sh from inside the Docker container, it said the .sh file doesn't exist. So I ran it with sh /path_to_entrypoint/your_sh_file.sh, and that showed there is an error in the .sh file itself.
After some research I found the explanation for the error in this post:
https://stackoverflow.com/a/67836849/10835742
If you use a variable in your ENTRYPOINT it might not get resolved.
e.g.
ENTRYPOINT ["$WORKING_DIR/start.sh"]
This will not do variable substitution. Running the command through a shell does:
ENTRYPOINT ["sh", "-c", "$WORKING_DIR/start.sh"]
I would like to use a prod.conf file in production inside a Docker container. I added this to my Dockerfile:
ENTRYPOINT ["bin/myapp", "-D", "config.resource=prod.conf"]
But I got this error:
Bad root server path: /opt/docker/-D
I get the same error when I try to run the command manually as root
/opt/docker/bin/myapp -D config.resource=prod.conf
If I run
/opt/docker/bin/myapp
It works, but it uses the default application.conf file, so I guess there is no permission issue.
Here is my full Dockerfile:
FROM openjdk:8u121-alpine
WORKDIR /opt/docker
ADD opt /opt
RUN ["chown", "-R", "daemon:daemon", "."]
EXPOSE 9000
USER daemon
ENTRYPOINT ["bin/myapp", "-D", "config.resource=prod.conf"]
CMD []
Edit:
I got the same error locally:
activator clean stage
target/universal/stage/bin/myapp -D config.resource=prod.conf
Bad root server path: /home/me/Documents/MyApp-D
There should be no space between the -D and the config value. Use this instead:
ENTRYPOINT ["bin/myapp", "-Dconfig.resource=prod.conf"]
JAVA_OPTS can also be used to avoid such errors:
JAVA_OPTS="-Dconfig.resource=prod.conf" bin/myapp
Works with the command line and with systemctl.
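In a container you can pass it the same way, assuming the generated start script reads JAVA_OPTS (sbt-native-packager start scripts generally do):
docker run --rm -e JAVA_OPTS="-Dconfig.resource=prod.conf" YOUR_DOCKER_IMAGE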
If you use the sbt plugin "DockerPlugin", you can put
dockerEntrypoint := Seq("")
in your build.sbt file. It will produce
ENTRYPOINT [""]
in your Dockerfile. Then, when you run Docker with your image, you should specify the command in the run command:
bin/myapp "-Dconfig.resource=prod.conf"
i.e.
docker run YOUR_DOCKER_IMAGE bin/myapp "-Dconfig.resource=prod.conf"
Pay attention to the quotes around -D.