I want to execute a shell script as the ENTRYPOINT and then get an interactive shell inside the Docker container once the script has finished.
My Dockerfile has the following lines at the end:
WORKDIR artifacts
ENTRYPOINT ./my_shell.sh
When I run it with the following command, it executes the shell script but doesn't drop me into the container.
docker run -it testub /bin/bash
Can someone please let me know if I am missing anything here?
There are two options that control what a container runs when it starts: the entrypoint (ENTRYPOINT) and the command (CMD). The logic is as follows:
If the entrypoint is defined, then it is run with the value for the command included as additional arguments.
If the entrypoint is not defined, then the command is run by itself.
You can override one or both of the values defined in the image. docker run -it --entrypoint /bin/sh testub would run /bin/sh instead of ./my_shell.sh, overriding the entrypoint. And docker run -it testub /bin/bash overrides the command, so the container starts with ./my_shell.sh /bin/bash. Note, though, that the command is only appended like this when the entrypoint is written in exec (JSON array) form, e.g. ENTRYPOINT ["./my_shell.sh"]; with the shell form in your Dockerfile, the command arguments are ignored.
The quick answer is to run docker run -it --entrypoint /bin/bash testub and from there, kick off your ./my_shell.sh. A better solution is to update ./my_shell.sh to check for any additional parameters and run them with the following at the end of the script:
if [ $# -gt 0 ]; then
exec "$#"
fi
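Putting the pieces together, a minimal sketch of the whole pattern could look like this (the image name testub and script name my_shell.sh come from the question; the base image and absolute paths are illustrative assumptions):
FROM ubuntu:22.04
WORKDIR /artifacts
COPY my_shell.sh .
RUN chmod 0755 my_shell.sh
# exec form, so the CMD is passed to the script as "$@"
ENTRYPOINT ["./my_shell.sh"]
# default command: drop into a shell after the script finishes
CMD ["/bin/bash"]
With that in place, docker run -it testub runs my_shell.sh and then execs /bin/bash, while docker run -it testub ls -al runs my_shell.sh and then execs ls -al instead.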
Related
I am trying to write ssh keys to a Docker image using CMD.
I have a Dockerfile like the one below.
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub"]
CMD ["sh", "-c", "echo $PRIVATE_KEY >> ./.ssh/id_rsa"]
I run the container with env var like so:
docker run -it -d -e PUBLIC_KEY="key1" -e PRIVATE_KEY="key2" my-image
As a result, writing both of them doesn't work. However, when I manually docker exec these two commands against the running container, both the public key and the private key are written to the correct location.
Can anyone explain this? How should I make the CMD work?
CMD is a way to define a default command to run when a container starts, and there can only be one default command: in the example you have given, the second CMD becomes the default and the first CMD will not run. The default command runs only when you do not specify a command on the docker run command line, i.e. as part of
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
If you provide a COMMAND there, the CMD in the Dockerfile will not be run.
When you issue docker exec, you explicitly run the command line, so it will always run.
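One way to make both writes happen at container start (a sketch, not part of the original answer; it reuses the question's base image, variables, and paths and assumes the ./.ssh directory already exists relative to the working directory) is to combine them into a single default command:
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
# a single CMD that writes both keys
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub && echo $PRIVATE_KEY >> ./.ssh/id_rsa"]
Alternatively, move the two echo lines into an entrypoint script so they run on every start regardless of the command, as described in the other answers on this page.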
I want to run a script at container run time, not during the image build.
The script runs based on an env variable that I pass when running the container.
Script:
#!/bin/bash
touch $env
Docker file
FROM busybox
ENV env parm
RUN mkdir PRATHAP
ADD apt.sh /PRATHAP
WORKDIR /PRATHAP
RUN chmod 777 apt.sh
CMD sh apt.sh
when I try to run: docker container run -it -e env=test.txt sh
the script is not running
I am just getting the sh terminal. If I remove it, the container does not stay alive. Please help me understand how to achieve this.
Your docker run starts sh which overrides your CMD in Dockerfile. To get around this, you need to replicate the original CMD via the command line.
$ docker run -it -e env=test.txt <image:tag> sh -c "sh apt.sh; sh"
Remember that a Docker container runs a single command, and then exits. If you docker run your image without overriding the command, the only thing the container will do is touch a file inside the isolated container filesystem, and then it will promptly exit.
If you need to do some startup-time setup, a useful pattern is to write it into an entrypoint script. When a container starts up, Docker runs whatever you have named as the ENTRYPOINT, passing the CMD as additional parameters (or it just runs CMD if there is no ENTRYPOINT). You can use the special shell command exec "$@" to run that command. So, revisiting your script as an entrypoint script:
#!/bin/sh
# ^^ busybox image doesn't have bash (nor does alpine)
# Do the first-time setup
touch "$env"
# Launch the main container process
exec "$#"
In your Dockerfile set this script to be the ENTRYPOINT, and then whatever long-running command you actually want the container to do to be the CMD.
FROM busybox
# WORKDIR also creates the directory
WORKDIR /PRATHAP
# generally prefer COPY to ADD
COPY apt.sh .
# executable, but not world-writable
RUN chmod 0755 apt.sh
ENV env parm
# must be the JSON-array (exec) syntax; no need to name an interpreter,
# since the script is executable and has a #! line
ENTRYPOINT ["./apt.sh"]
# ...or whatever the container actually does
CMD sh
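With that in place, the run from the question should behave as intended (a sketch; the tag prathap-test is just a placeholder):
docker build -t prathap-test .
docker run -it -e env=test.txt prathap-test
# apt.sh runs first (touch "$env" creates /PRATHAP/test.txt),
# then exec "$@" hands control to the default CMD, an interactive sh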
I have the following Dockerfile:
FROM alpine:3.5
CMD ["echo", "hello world"]
So after building it with docker build -t hello . I can run the image by calling docker run hello, and I get the output hello world.
Now let's assume I wish to run ls or sh instead; this is fine. But what I really want is to be able to pass arguments, e.g. ls -al, or even tail -f /dev/null to keep the container running, without having to change the Dockerfile.
How do I go about doing this? My attempt at exec mode fails miserably: docker run hello --cmd=["ls", "-al"]
Anything after the image name in the docker run command becomes the new value of CMD. So you can run:
docker run hello ls -al
Note that if an ENTRYPOINT is defined, the ENTRYPOINT will receive the value of CMD as args rather than running CMD directly. So you can define an entrypoint as a shell script with something like:
#!/bin/sh
echo "running the entrypoint code"
# if no args are passed, default to a /bin/sh shell
if [ $# -eq 0 ]; then
set -- /bin/sh
fi
# run the "CMD" with exec to replace the pid 1 of this shell script
exec "$#"
Q. But what I really want is to be able to pass arguments. e.g. ls -al, or even tail -f /dev/null to keep the container running without having to change the Dockerfile
This is just achieved with:
docker run -d hello tail -f /dev/null
So the container runs in the background, and that lets you execute arbitrary commands inside it:
docker exec <container-id> ls -la
And, for example a shell:
docker exec -it <container-id> bash
Also, I recommend what @BMitch says in his answer.
My script is as follows:
# start a ubuntu container in the background
docker run -it --name ub -d ubuntu /bin/bash
sleep 1
# run a command in the container
docker exec -it ub bash
echo 234
# exit the container
exit
sleep 1
# do something else
echo 123
But the script just stops right after exit and hangs there. Does anyone know why that is?
p.s: My Docker version is: 17.03.0-ce, build 60ccb22
You have given -it on the docker exec command, which opens an interactive /bin/bash in your container and waits there; the next commands in your script won't be executed until that session ends, and echo 234 and exit then run on the host (the exit ends your script), not inside the container.
It's better to create a script file and copy it into the image when you build it, and run that script when the container starts. You can specify that using CMD in the Dockerfile.
You won't need an additional exec command.
The corresponding Dockerfile would be
FROM ubuntu:latest
COPY <path-to-script> <dest>
CMD [" <path-to-script> "]
You have to create the script file along with the Dockerfile. Build the docker using the command
docker build -t <image-name> <location of Dockerfile>
The execution command would be
docker run -d --name <name> -d ubuntu <path-to-script>
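Putting that together, the host-side script from the question might shrink to something like this (a sketch; the image name my-ub-image is a placeholder):
#!/bin/sh
# build the image whose CMD runs the baked-in script
docker build -t my-ub-image .
# start the container in the background; nothing blocks on an interactive shell
docker run -d --name ub my-ub-image
sleep 1
# do something else
echo 123
If you still need to run an ad-hoc command inside a running container without hanging the script, a non-interactive exec also works: docker exec ub sh -c "echo 234".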
The documentation for the run command gives the following syntax:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
however I've found at times that I want to pass a flag to [COMMAND].
For example, I've been working with this image, where the [COMMAND] as specified in the Dockerfile is:
CMD ["/bin/bash", "-c", "/opt/solr/bin/solr -f"]
Is there any way to tack on flags to "/opt/solr/bin/solr -f" so that it's in the form "/opt/solr/bin/solr -f [-MY FLAGS]"?
Do I need to edit the Dockerfile, or is there some built-in functionality for this?
There is a special directive, ENTRYPOINT, which fits your needs. Unlike CMD, it is not replaced by the arguments you pass to docker run; instead, those arguments are appended to the end of your command.
For example, you can write
ENTRYPOINT ["python"]
and run it with
docker run <image_name> -c "print(1)"
Note that this only works if you write the entrypoint in exec form (via ["...", "..."]); otherwise ENTRYPOINT is run through a shell (/bin/sh -c), and the run-time arguments are not appended to your command at all.
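For example, the difference looks roughly like this (a sketch continuing the python example above):
# exec form: docker run <image_name> -c "print(1)" ends up running: python -c "print(1)"
ENTRYPOINT ["python"]
# shell form: wrapped in /bin/sh -c, and the run-time arguments are ignored
ENTRYPOINT python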
More generally, you can combine ENTRYPOINT and CMD
ENTRYPOINT ["ping"]
CMD ["www.google.com"]
where CMD provides the default arguments for your ENTRYPOINT. Now you can run either of
docker run <image_name>
docker run <image_name> yandex.ru
and only the CMD part changes: the first runs ping www.google.com, the second runs ping yandex.ru.
A full reference on how ENTRYPOINT and CMD interact can be found in the Dockerfile reference documentation.
The CMD directive of a Dockerfile is the command that would be run when the container starts if no command was specified in the docker run command.
The main purpose of a CMD is to provide defaults for an executing container.
In your case, just use the docker run command as follows to override the default command specified in the Dockerfile:
docker run makuk66/docker-solr /bin/bash -c "/opt/solr/bin/solr -f [your flags]"
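If you would rather not repeat the whole command on every run, a Dockerfile-side alternative (a sketch, assuming you can build your own image on top of the one above; it mirrors the ENTRYPOINT/CMD pattern from the previous answer) is:
FROM makuk66/docker-solr
# anything passed after the image name on docker run is appended after -f
ENTRYPOINT ["/opt/solr/bin/solr", "-f"]
Then docker run <your-image> [your flags] runs /opt/solr/bin/solr -f [your flags].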