Arguments for overridden docker entrypoint - docker

The entrypoint of a docker image can be overridden when running the image, using --entrypoint in the docker run command. I want to start a script in my image with some arguments at startup. I can get docker to run the script at startup as
docker run -it --rm --entrypoint /my/script/path.sh my-docker-image
How do I pass arguments to my script?
Note that I cannot modify the original dockerfile with which this image was created. Neither do I want to create another docker image with this image as its base.

When your Docker image has an ENTRYPOINT, either via a Dockerfile or provided on the command line with --entrypoint, any arguments on the docker run command line after the image name are passed to the entrypoint script.
So for example, if I have a script like this in myscript.sh:
#!/bin/sh
echo "Here are my arguments: $#"
And I run an image like this:
$ chmod 755 myscript.sh
$ docker run -it --rm -v $PWD/myscript.sh:/myscript.sh \
--entrypoint /myscript.sh alpine one two three
I will see the output:
Here are my arguments: one two three
...and the container will exit, because the entrypoint script didn't arrange to do anything else. You could replace alpine here (which is a minimal docker image) with any other Docker image that has /bin/sh (so, most of them). For example:
$ docker run -it --rm -v $PWD/myscript.sh:/myscript.sh \
--entrypoint /myscript.sh centos one two three
Here are my arguments: one two three
Note that I'm using the -v argument in this example to mount a script on my host into the container, since I didn't want to create a new image for the purposes of this example. You could obviously bake a similar script into your image instead.
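If you did bake it in, a minimal Dockerfile sketch might look like this (the my-image tag below is illustrative):
FROM alpine
COPY myscript.sh /myscript.sh
RUN chmod 755 /myscript.sh
ENTRYPOINT ["/myscript.sh"]
After docker build -t my-image ., the same arguments work without -v or --entrypoint: docker run --rm my-image one two three.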
For details, read the ENTRYPOINT docs.

Related

re-running a script in a docker container

I have created a docker image that includes some python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command creates a new container and runs it.
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, another container is created and exits on completion. But what is the correct usage of containers? Should I use restart, start or run (and then clean up exited containers after)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
And the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are container names. So maybe I am misunderstanding the lifecycle of containers or their usage.
edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your docker image and keep the scripts in it:
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -d --name my_container my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own docker image that starts from opencv:latest and give the command you run as the entrypoint. The Dockerfile could look like:
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one, I would take the approach of building an image that runs an unrelated command forever (such as tail -f /dev/null). Then you can use
sudo docker exec -d <container-name> /bin/bash -c "<cmd-to-run>"
for each command.
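A minimal sketch of such a keep-alive image, assuming opencv:latest as the base and the script name from the question:
FROM opencv:latest
COPY extract-embeddings.sh /extract-embeddings.sh
# tail -f /dev/null never exits, so the container stays up for docker exec
CMD ["tail", "-f", "/dev/null"]
Start it once with docker run -d, then run each command with docker exec as above.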

Can't mount volume in Docker CLI

I have the following Dockerfile:
FROM continuumio/anaconda3
VOLUME /code
I execute it using the following command line:
docker run -it 626058fb269a --mount src="$(pwd)",target=/code,type=bind /bin/bash
However I'm getting this error:
[FATAL tini (8)] exec --mount failed: No such file or directory
Clearly I'm missing something. If I run docker run -it 626058fb269a /bin/bash, the directory is there, but obviously has nothing mounted. I just want to have access to my code from the container. How can I mount this correctly?
docker run interprets everything after the image name as the "command" part of the command line (passed as command-line arguments to the entrypoint, if present, or else run directly), so your command is
docker run \
-it \ # Container launch options
626058fb269a \ # Image name
\ # Command and its arguments follow
--mount src="$(pwd)",target=/code,type=bind /bin/bash
You don't need to declare a VOLUME in a Dockerfile to mount a named volume or host directory into a container, so for your use the custom image isn't adding anything for you. I'd probably suggest something like
docker run \
--rm -it \ # Container launch options
--mount src="$(pwd)",target=/code,type=bind \
continuumio/anaconda3 \ # Image name
/bin/bash # Command and its arguments
(Better still, develop and test the application locally without Docker, then COPY it in a Dockerfile, so that you can run the image without also being forced to separately copy around the application code.)
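A minimal sketch of that approach, assuming the code lives in the build context and main.py is its entry point (both names are illustrative):
FROM continuumio/anaconda3
# Copy the application into the image so it travels with it
WORKDIR /code
COPY . /code
CMD ["python", "main.py"]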

Printing output of shell script running inside a docker container

I have a Dockerfile in which I have specified an ENTRYPOINT of "my_script.sh". In my_script.sh, I am executing a curl command. When the docker image with this Dockerfile is built, how should I run it so that the output of my_script.sh will be printed on my host?
Dockerfile -
FROM my-company-repo-java-base-image
ADD my_script.sh /root
ENTRYPOINT bash "/root/my_script.sh"
my_script.sh
echo "Hello My Script"
curl -X POST "some_api_which_returns_json"
I have built the image using the command
docker build .
I want to run this image and see the output of my_script.sh on my docker host.
Given a Docker image whose tag is $DOCKER_IMAGE:
docker container run -it --rm $DOCKER_IMAGE
-i keeps STDIN open
-t allocates a pseudo-TTY
--rm automatically removes the container when it exits
See docker container run for all the options.
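Since the script's output goes to the container's stdout, which is attached to your terminal, you can also capture it on the host. A small sketch (response.json is just an illustrative file name; -t is dropped because a pseudo-TTY would mix carriage returns into the output):
docker container run --rm $DOCKER_IMAGE > response.json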
Of course you can see the output of the shell script. Make sure you delete the old image before building a new one when you change the script; otherwise, your container will keep using the old script over and over. Here's an example.
Dockerfile
FROM alpine:3.7
ENTRYPOINT ["/usr/bin/myscript.sh"]
COPY myscript.sh /usr/bin/myscript.sh
myscript.sh
#!/usr/bin/env sh
echo "Hello there"
commands to run:
docker image rm testdocker
docker build --tag testdocker .
docker run testdocker
You should see the line Hello there appear on the terminal.

How to pass command line arguments to a docker image

I run tests inside a docker image and I need to pass custom arguments all the time.
When I put arguments after the image name, docker thinks the argument is an image name.
docker run -t -i image-name -s test.py
docker run -t -i image-name -- -s test.py
Error:
Failed no image test_arena2.py
Docker version 1.11.2, build b9f10c9
You can build your Dockerfile with a combination of ENTRYPOINT and CMD instructions, which will let you run containers with or without arguments, e.g:
FROM ubuntu
ENTRYPOINT ["/bin/echo"]
CMD ["hello"]
That says the entrypoint is the echo command, and the default argument is hello.
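Assuming the image has been built and tagged as temp, for example with:
> docker build -t temp .
you can run a container with no arguments: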
> docker run temp
hello
Run with arguments and they all get passed to the entrypoint command:
> docker run temp -s stackoverflow
-s stackoverflow
With docker run, the options for the run command (like -i and -t) should come at the beginning of your command line, and the image name should come at the very end.
The image name should always be at the end of the docker run command, i.e. as the last parameter to the command. Alternatively, are you fine with passing the values as environment variables, using a command like the one below?
docker run -e "ENV_VAR_NAME=VALUE" -it image_name

how to pass command line arguments to a python script running in docker

I have a python file called perf_alarm_checker.py. This python file requires two command line arguments: python perf_alarm_checker.py -t something -d something. The Dockerfile looks like this:
# Base image
FROM some base image
ADD perf_alarm_checker.py /perf-test/
CMD python perf_alarm_checker.py
How do I pass the two command line arguments, -t and -d, to docker run? I tried docker run -w /perf-test alarm-checker -t something -d something but it doesn't work.
Use an ENTRYPOINT instead of CMD, and then you can pass command line options in the docker run command as in your example.
ENTRYPOINT ["python", "perf_alarm_checker.py"]
You cannot use -t and -d as you intend, as those are options for docker run:
-t allocates a pseudo-terminal.
-d runs the container detached, as a daemon.
For setting environment variables in your Dockerfile use the ENV command.
ENV <key>=<value>
See the Dockerfile reference.
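For example, defaults for the two flags could be declared like this (ENV1 and ENV2 are illustrative names, matching the CMD further below):
ENV ENV1=default-t-value
ENV ENV2=default-d-value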
Another option is to pass environment variables through docker run:
docker run ... -e "key=value" ...
See the docker run reference.
Those environment variables can be accessed from the CMD.
CMD python perf_alarm_checker.py -t $ENV1 -d $ENV2
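Putting it together, a run using the alarm-checker image name from the question might look like:
docker run -e "ENV1=something" -e "ENV2=something" alarm-checker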
