Run a command line when starting a docker container

As far as I know, you can run a command when building an image with RUN, or when running a container with CMD. Is there any way to run a command when starting a docker container?
My goal is to run the gcloud datastore automatically just after typing docker start my_container_name.
If this is possible, which changes should I apply to my Dockerfile?
(I have already installed all the required packages, and I can run that command after docker run --name my_container_name -i -t my_image_name, but I want it to run when starting the container as well.)

Docker executes the RUN commands when you build the image.
Docker executes the ENTRYPOINT command when you start the container; CMD is passed as arguments to ENTRYPOINT. Both can be overridden when you create a container from an image. Their purpose in a Dockerfile is to provide defaults for whoever creates containers from the image later.
Consider the example:
FROM debian:buster
RUN apt-get update && apt-get install -y procps
ENTRYPOINT ["/usr/bin/ps"]
CMD ["aux"]
The RUN command adds the ps command to the image. ENTRYPOINT and CMD are not executed at build time, but they will be when you run the container:
# create a container named 'ps' using default CMD and ENTRYPOINT
docker run --name ps my_image
# equivalent to /usr/bin/ps aux
# start the existing container 'ps'
docker start ps
# equivalent to /usr/bin/ps aux
# override CMD
docker run my_image au
# equivalent to /usr/bin/ps au
# override both CMD and ENTRYPOINT
docker run --entrypoint=/bin/bash my_image -c 'echo "Hello, world!"'
# will print Hello, world! instead of using ps aux
# no ENTRYPOINT, only CMD
docker run --entrypoint="" my_image /bin/bash -c 'echo "Hello, world!"'
# the output is the same as above
Each time you use docker run you create a new container. The ENTRYPOINT and CMD in effect are saved as container properties and executed each time you start that container.
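You can check the saved values on an existing container with docker inspect, e.g. for the ps container created above:
docker inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' ps
# prints: [/usr/bin/ps] [aux]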

Related

How to `docker cp` ssh key to docker container before its entrypoint is executed

Say I have this right now:
docker run -v /root/.ssh:/root/.ssh:ro my_image
and the ENTRYPOINT for the above image is:
ENTRYPOINT ["echo", "foo"]
instead I want to do something like this:
docker run -d --name c my_image # problem: this will likely exit early :(
docker cp /root/.ssh c:/root/.ssh
docker exec c echo foo
the problem is: how do I keep the container alive so that it waits for me to copy the ssh key into it and then run the echo foo command?
Maybe I can keep it alive by telling it to wait for stdin? But how exactly?
You first need to create the container, naming it so you can refer to it afterwards:
docker create --name MY_CREATED_CON my_image
then copy the files:
docker cp /root/.ssh MY_CREATED_CON:/root/.ssh
and start the container normally:
docker start MY_CREATED_CON
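If you'd rather not pick a name, docker create prints the ID of the new container, which you can capture in a shell variable instead (a small sketch, not part of the original answer):
# create the container without starting it, keeping its ID
CON=$(docker create my_image)
# copy the key in before the entrypoint has ever run
docker cp /root/.ssh "$CON":/root/.ssh
# -a attaches, so you see the entrypoint's output ("foo" in the question's example)
docker start -a "$CON"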

Why aren't commands from the Dockerfile executed when the `/bin/bash` command is appended to `docker run -it IMAGE_ID`?

I am just starting out with Docker. I have this Dockerfile:
FROM jonathonf/manjaro
CMD ["pacman", "-S", "--noconfirm", "git"]
When I build the image with
sudo docker build -t uname/description:tag .
and then run it with
sudo docker run IMAGE_ID
(where IMAGE_ID is the ID shown by the sudo docker images command), the command from the Dockerfile, CMD ["pacman", "-S", "--noconfirm", "git"], runs: git is installed and a container is created (which I can then commit).
If I run the image with
sudo docker run IMAGE_ID /bin/bash
the CMD from the Dockerfile is not executed.
I expected it to run the commands from the Dockerfile, make git available in the container and let me work further in the shell.
A couple of things here:
1. If you always want git installed, why run it as a CMD and then manually commit the result as a new image, rather than just installing it with a RUN instruction in the Dockerfile?
2. Anything you put after docker run ... runs as the CMD and overrides the one from the Dockerfile. If you don't want it overridden, use an ENTRYPOINT instead. But really you should do 1.
When you use CMD ["pacman", "-S", "--noconfirm", "git"] in your Dockerfile, you are setting pacman -S --noconfirm git as the PID 1 process of your container.
Now when you run the container with sudo docker run IMAGE_ID, the first process is the one specified in CMD. You can verify this by running docker exec -it container_id ps -ef.
When you run sudo docker run IMAGE_ID /bin/bash, the PID 1 process of your container is replaced by /bin/bash:
[user@jumphost ~]$ docker run -itd -p 3666:3306 alpine sh
dcef6d1cc121bfd195552fa7639038ac513a74eaa035a855bb7917dd620be642
[user@jumphost ~]$ docker ps
CONTAINER ID   IMAGE    COMMAND   CREATED         STATUS         PORTS                    NAMES
dcef6d1cc121   alpine   "sh"      2 seconds ago   Up 2 seconds   0.0.0.0:3666->3306/tcp   fervent_euclid
[user@jumphost ~]$ docker exec -it dcef6d1cc121 ps -ef
PID   USER     TIME   COMMAND
  1   root     0:00   sh
  7   root     0:00   ps -ef
[user@jumphost ~]$
To learn more, see the documentation on CMD, and on ENTRYPOINT and how it differs from CMD.
Hope this helps.
That's how Docker works: you override the CMD from your Dockerfile. If you want both, a git install and your bash command, move the command from CMD to a RUN instruction. See the Dockerfile documentation: https://docs.docker.com/engine/reference/builder/#run
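For the Dockerfile from the question, that would look something like this (a sketch based on the question's base image):
FROM jonathonf/manjaro
# install git at build time, so it is baked into the image
RUN pacman -S --noconfirm git
# default to an interactive shell when the container runs
CMD ["/bin/bash"]
With this, sudo docker run -it IMAGE_ID drops you into a shell where git is already available.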

Printing output of shell script running inside a docker container

I have a Dockerfile in which I have specified an ENTRYPOINT, "my_script.sh". In my_script.sh, I am executing a curl command. When the Docker image with this Dockerfile is built, how should I run it so that the output of my_script.sh is printed on my host?
Dockerfile -
FROM my-company-repo-java-base-image
ADD my_script.sh /root
ENTRYPOINT bash "/root/my_script.sh"
my_script.sh
echo "Hello My Script"
curl -X POST "some_api_which_returns_json"
I have built the image using the command
docker build
I want to run this image and see the output of my_script.sh on my Docker host.
Given a Docker image whose tag is $DOCKER_IMAGE:
docker container run -it --rm $DOCKER_IMAGE
-i keeps STDIN open
-t allocates a pseudo-TTY
--rm automatically removes the container when it exits
See docker container run for all the options.
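If you run the container detached instead (with -d), the script's output still goes to the container's log, which you can read from the host with docker logs (the container name here is illustrative):
docker container run -d --name my_con $DOCKER_IMAGE
# -f follows the log as new output arrives, similar to tail -f
docker logs -f my_con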
Of course you can see the output of the shell script. Make sure you delete the old image before building a new one when you change the script; otherwise your container will keep using the old script over and over. Here's an example.
Dockerfile
FROM alpine:3.7
COPY myscript.sh /usr/bin/myscript.sh
RUN chmod +x /usr/bin/myscript.sh
ENTRYPOINT ["/usr/bin/myscript.sh"]
myscript.sh
#!/usr/bin/env sh
echo "Hello there"
commands to run:
docker image rm testdocker
docker build --tag testdocker .
docker run testdocker
You should see the line Hello there appear on the terminal.

How to continue running scripts when exiting docker containers

My script is as follows:
# start a ubuntu container in the background
docker run -it --name ub -d ubuntu /bin/bash
sleep 1
# run a command in the container
docker exec -it ub bash
echo 234
# exit the container
exit
sleep 1
# do something else
echo 123
But the script just stops right after exit and hangs there. Does anyone know why that is?
P.S.: My Docker version is 17.03.0-ce, build 60ccb22.
You have passed -it to docker exec, which opens /bin/bash inside your container and waits there. The next line of the script won't be executed until that shell exits.
It's better to create a script file and copy it into the container while building the image, then run it when the container starts. You can specify that with CMD in the Dockerfile, as in the example below.
You won't need an additional exec command.
The corresponding Dockerfile would be
FROM ubuntu:latest
COPY <path-to-script> <dest>
CMD [" <path-to-script> "]
You have to create the script file alongside the Dockerfile. Build the image using the command
docker build -t <image-name> <location of Dockerfile>
Since the script is now the image's CMD, the command to run it is simply
docker run -d --name <name> <image-name>
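A concrete instance of this approach (the file and image names are illustrative, not from the question):
script.sh
#!/bin/bash
echo 234
Dockerfile
FROM ubuntu:latest
COPY script.sh /script.sh
RUN chmod +x /script.sh
CMD ["/script.sh"]
commands to run:
docker build -t ub-script .
docker run -d --name ub ub-script
echo 123
The echo 123 runs immediately, because the container was started detached with -d.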

Automatically run command inside docker container after starting up + volume mount

I have created my own simple image from this Dockerfile:
FROM python:2.7.11
RUN mkdir -p /extra/later/ \
    && mkdir /yyy
Now I'm able to perform the following steps:
docker run -d -v xxx:/yyy myimage:latest
So now my volume is mounted inside the container, and I can enter the container and run commands on that mounted volume:
docker exec -it container_id bash
bash# tar -cvpzf /mybackup.tar -C /yyy/ .
Is there a way to automate these steps in the Dockerfile, or by adding something to the docker run command?
The commands executed in the Dockerfile build the image, and the volume is attached to a running container, so you will not be able to run your commands inside of the Dockerfile itself and affect the volume.
Instead, you should create a startup script that is the command run by your container (via CMD or ENTRYPOINT in your Dockerfile). Place the logic inside of your startup script to detect that it needs to initialize the volume, and it will run when the container is launched. If you run the script with CMD you will be able to override running that script with any command you pass to docker run which may or may not be a good thing depending on your situation.
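A hypothetical startup script along those lines (the script name, marker file, and backup path are placeholders, not from the answer):
start.sh
#!/bin/sh
# initialize the volume only on the container's first start
if [ ! -f /yyy/.initialized ]; then
    tar -cvpzf /mybackup.tar -C /yyy/ .
    touch /yyy/.initialized
fi
# then hand control to whatever command the container was given
exec "$@"
With ENTRYPOINT ["/start.sh"] in the Dockerfile, this runs on every docker run or docker start, and any CMD arguments are passed through to exec.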
Try using the CMD option in the Dockerfile to run the tar command:
CMD tar -cvpzf /mybackup.tar -C /yyy/ .
or
CMD ["tar", "-cvpzf", "/mybackup.tar", "-C", "/yyy/", "."]
