Docker: pass arguments to docker run without replacing the default CMD

I am writing a Dockerfile that runs an app that will use the same argument most of the time:
ENTRYPOINT [ "myapp" ]
CMD [ "defaultarg" ]
But every now and then it needs to take a different argument. Since the users will mostly be on Windows, I want to do this without many scripts or extra arguments:
docker run my/app somearg
The issue is that they need to use a bind volume. However, when I add a -v switch, Docker takes it as an override of the CMD:
docker run my/app -v datadir:/app/datadir
This replaces 'defaultarg' with '-v'.
Is there a way to stop this from happening without moving the CMD arg to an env/ARG?
I have used the build-args switch because I couldn't find any other info. However, resolving this without build-args would be nicer, since it means fewer switches when running the container.

Parameters on docker run come in two flavours: parameters for Docker are placed before the image name, and parameters for the container are placed after the image name. As you've discovered, parameters for the container replace any CMD statement in the image.
The -v parameter is for Docker, so it should go before the image name, like this:
docker run -v datadir:/app/datadir my/app
That will leave your CMD intact, so it'll work as before.
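A minimal sketch of the whole pattern, using the names from the question (the base image is a placeholder):

```dockerfile
# Dockerfile for my/app
FROM alpine              # placeholder base image
ENTRYPOINT ["myapp"]     # always runs
CMD ["defaultarg"]       # default argument; anything after the image name replaces it

# docker run -v datadir:/app/datadir my/app          -> runs: myapp defaultarg
# docker run -v datadir:/app/datadir my/app somearg  -> runs: myapp somearg
```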

Related

How to run two components using the same image, passing two different arguments to docker-compose

I would like to launch two services from the same image using docker-compose, passing the image two different parameters, with eth and btc values. The jar expects one of those two values as an argument.
The services are demo-quartz-btc and demo-quartz-eth.
When I launch docker-compose, I can see in the log the following messages:
demo-quartz-btc_1 | Error: Unable to access jarfile /opt/app/demo-quartz.jar $CRYPT_TYPE
demo-quartz-eth_1 | Error: Unable to access jarfile /opt/app/demo-quartz.jar $CRYPT_TYPE
This is the link that shows the docker-compose.yml.
This is the link that shows the Dockerfile.
I followed the solution in this link, but it doesn't work for me. I also followed the official help, but that does not work either.
Docker engine version is 18.09.2, compose version is 1.23.2.
Can someone help me? What am I doing wrong?
EDIT
I can see in the logs that no argument is being applied when using docker-compose up. Is there a recommended way to run two services that depend on other services, like zookeeper, kafka, Eureka or others?
EDIT 1
Ok, now I know that I can run a container with the service using an entry-point.sh script file, passing the argument to the container. Now, how can I do the same using the docker-compose up command? I need all services/containers running in the same process.
EDIT 2
Do I have to put the necessary docker run -it commands in a script file to have everything up and running?
// 31 July 2019: new Dockerfile. I can run this container in isolation,
// without kafka, zookeeper and the Eureka server.
// I can pass arguments to the Dockerfile by running the following commands:
// mvn clean package -DskipTests
// docker build -t aironman/demo-quartz .
// docker run -it aironman/demo-quartz:latest eth
// where eth is the argument.
~/D/demo-quartz> cat Dockerfile
FROM adoptopenjdk/openjdk12:latest
ARG JAR_FILE
ARG CRYPT_TYPE
RUN mkdir /opt/app
COPY target/demo-quartz-0.0.2-SNAPSHOT.jar /opt/app/demo-quartz.jar
COPY entry-point.sh /
RUN chmod +x entry-point.sh
ENTRYPOINT ["/entry-point.sh"]
The solution is to have two Dockerfiles, each with its own entry-point.sh file containing the parameters you need. I realize that I have to adapt to the way Docker/Kubernetes wants you to work, and not the other way around.
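The unexpanded $CRYPT_TYPE in the error log suggests the variable was passed to java literally (exec-form CMD/ENTRYPOINT does no shell expansion). A minimal entry-point.sh sketch that instead forwards the container's arguments to the jar (the jar path is taken from the Dockerfile above):

```shell
#!/bin/sh
# entry-point.sh: forward whatever arguments the container was started with
# (e.g. "eth" or "btc") to the application jar. exec replaces the shell so
# the JVM becomes PID 1 and receives signals directly.
exec java -jar /opt/app/demo-quartz.jar "$@"
```

With this in place, docker run aironman/demo-quartz:latest eth runs the jar with eth; under compose, setting command: ["eth"] on one service and command: ["btc"] on the other should achieve the two-service setup from a single image.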

Difference between Docker Build and Docker Run

If I want to run a python script in my container what is the point in having the RUN command, if I can pass in an argument at build along with running the script?
Each time I run the container I want x.py to be run on an ENV variable passed in during the build stage.
If I were to use Swarm, and the only goal was to run the x.py script, Swarm would only be building nodes rather than building and eventually running them, since the CMD and ENTRYPOINT instructions only take effect at run time.
Am I missing something?
The docker build command creates an immutable image. The docker run command creates a container that uses the image as a base filesystem, and other metadata from the image is used as defaults to run that image.
Each RUN line in a Dockerfile adds a layer to the image filesystem in Docker. Docker actually performs that task in a temporary container, hence the choice of the confusing "run" term. The only thing preserved from a RUN command is the filesystem changes; running processes, changes to environment variables, and shell settings like the current working directory are all lost when the temporary container is cleaned up at the completion of the RUN command.
The ENTRYPOINT and CMD values are used to specify the default command to run when the container is started. When both are defined, the result is that the value of the entrypoint is run with the value of the cmd appended as command line arguments. The value of CMD is easily overridden at the end of the docker run command line, so by using both you get easy-to-reconfigure containers that run the same command with different user input parameters.
If the command you are trying to run needs to be performed every time the container starts, rather than being stored in the immutable image, then you need to perform that command in your ENTRYPOINT or CMD. This will add to the container startup time, so if the result of that command can be stored as a filesystem change and cached for all future containers being run, you want to make that setting in a RUN line.
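A sketch of the distinction (the script name x.py is from the question; the base image and the installed package are illustrative):

```dockerfile
FROM python:3-slim          # illustrative base image
COPY x.py /app/x.py
RUN pip install requests    # build time: runs once, and only the resulting
                            # filesystem changes are kept in the image layer
ENV MODE=default            # baked-in default, overridable with -e at run time
CMD ["python", "/app/x.py"] # run time: executed every time a container starts
```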

Is CMD in parent docker overriden by CMD/ENTRYPOINT in child docker image?

I am trying to get my hands dirty with Docker. I know that CMD or ENTRYPOINT is used to specify the start/run command for a Docker image, and that CMD is overridden by ENTRYPOINT. But I don't know how it works when the parent Docker image also has CMD or ENTRYPOINT, or both.
Does the child image inherit those values from the parent Docker image? If so, does an ENTRYPOINT in the parent image override a CMD in the child image?
I know that this question is already discussed at https://github.com/docker/compose/issues/3140. But the discussion is quite old (2-3 years) and it doesn't answer my question clearly.
Thanks in advance.
If you define an ENTRYPOINT in a child image, it will null out the value of CMD, as identified in this issue. The goal is to avoid the confusing situation where an entrypoint is passed, as arguments, a command you no longer want to run.
Other than this specific situation, the value of ENTRYPOINT and CMD are inherited and can be individually overridden by a child image or even a later step of the same Dockerfile. Only one value for each of these will ever exist in an image with the last defined value having precedence.
ENTRYPOINT doesn't override CMD; they are just two parts of the resulting command, and exist to make life easier. Whenever a container is started, the command for process 1 is determined as ENTRYPOINT + CMD, so usually ENTRYPOINT is just the path to the binary and CMD is a list of arguments for that binary. CMD can also be easily overridden from the command line.
So, again, it's just a thing to make life easier and make containers behave like regular binaries: if you have a man container, you can set the entrypoint to /usr/bin/man and the cmd to man. If you just start the container, Docker will execute /usr/bin/man man, but if you run something like docker run man docker, the resulting container command will be /usr/bin/man docker. The entrypoint stays the same, the cmd changes, and the command used to start the container is just a simple merging of the two.
ENTRYPOINT and CMD are both inherited from parent layers (images) unless overridden, so if you inherit from image X and redefine CMD, you will still have the very same ENTRYPOINT, and vice versa. However, as @BMitch mentioned below, changing ENTRYPOINT in a child image will effectively reset CMD.
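A sketch of the inheritance rules (all image names here are illustrative):

```dockerfile
# parent/Dockerfile -> built as my/parent
FROM alpine
ENTRYPOINT ["echo"]
CMD ["from-parent"]
# docker run my/parent   -> echo from-parent

# child/Dockerfile -> built as my/child
FROM my/parent
CMD ["from-child"]               # ENTRYPOINT is inherited unchanged
# docker run my/child    -> echo from-child

# child2/Dockerfile -> built as my/child2
FROM my/parent
ENTRYPOINT ["printf", "%s\n"]    # redefining ENTRYPOINT resets the inherited CMD
# docker run my/child2   -> printf "%s\n" (with no CMD arguments)
```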

Run commands on create a new Docker container

Is it possible to add instructions like RUN in the Dockerfile that, instead of executing during docker build, execute when a new container is created with docker run? I think this could be useful for initializing a volume attached to the host file system.
Take a look at the ENTRYPOINT command. This specifies a command to run when the container starts, regardless of what someone provides as a command on the docker run command line. In fact, it is the job of the ENTRYPOINT script to interpret any command passed to docker run.
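A common shape for such an entrypoint is a script that does the per-container initialization and then hands off to whatever command was passed (the /data and /app/seed-data paths are illustrative):

```shell
#!/bin/sh
# entrypoint.sh: run per-container initialization, then execute the CMD.
# Seed the attached volume only if it is empty (illustrative check).
if [ -z "$(ls -A /data 2>/dev/null)" ]; then
    cp -r /app/seed-data/. /data/   # hypothetical seed directory baked into the image
fi
exec "$@"   # hand off to the CMD so it runs as PID 1
```

Pair it in the Dockerfile with ENTRYPOINT ["/entrypoint.sh"] and CMD ["myapp"], so the initialization runs on every container start regardless of the command given to docker run.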
I think you are looking for the CMD
https://docs.docker.com/reference/builder/#cmd
The main purpose of a CMD is to provide defaults for an executing
container. These defaults can include an executable, or they can omit
the executable, in which case you must specify an ENTRYPOINT
instruction as well.
Note: don't confuse RUN with CMD. RUN actually runs a command and
commits the result; CMD does not execute anything at build time, but
specifies the intended command for the image.
You should also look into using data containers; see this excellent blog post:
Persistent volumes with Docker - Data-only container pattern
http://container42.com/2013/12/16/persistent-volumes-with-docker-container-as-volume-pattern/

passing parameters to app on docker image

I was looking around and I saw some simple examples of HelloWorld running in a Docker container, like this one:
http://dotnet.dzone.com/articles/docker-%E2%80%98hello-world-mono
at the end of the Dockerfile, the author calls:
CMD ["mono", "/src/hello.exe"]
What I want to do is have a reusable image as we build our Console App. Put that on a docker image using a Dockerfile. That part makes sense to me. But then I want to be able to pass the ConsoleApp parameters. Is that possible?
for example,
sudo docker run crystaltwix/helloworld -n "crystal twix"
where -n was a parameter I defined in my helloworld app.
You can use ENTRYPOINT foo rather than CMD foo to achieve this. All arguments after the image name in docker run are then passed to foo.
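A sketch of that Dockerfile change, using the mono example from the question (the base image is a placeholder, and the -n flag is handled by the app itself):

```dockerfile
FROM mono:latest                       # placeholder base image
COPY hello.exe /src/hello.exe
ENTRYPOINT ["mono", "/src/hello.exe"]  # the fixed part of the command

# docker run crystaltwix/helloworld -n "crystal twix"
#   -> mono /src/hello.exe -n "crystal twix"
```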
@seanmcl's answer is the simplest. If you have to pass secret values like application keys, you may also have to worry about exposing them in process lists. In that case, you could use environment variables that your app looks for during startup:
SECRET_KEY="crystal twix"
docker run -e APP_KEY=$SECRET_KEY crystaltwix/helloworld
