Dockerfile: is it possible to reference overridden entrypoint and cmd?

I'm trying to build an image using mysql 5.6 from here as a base image. I need to do some initialization before the database starts up, so I need to override the entrypoint:
# Stuff in my Dockerfile
...
COPY my-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["my-entrypoint.sh"]
My entrypoint is fairly simple, too:
#!/bin/bash
echo "Running my-entrypoint.sh"
# My initialization stuff here
...
# Call mysql entrypoint
/usr/local/bin/docker-entrypoint.sh mysqld
This seems to work, but I'd rather not have to hard-code the mysql entrypoint in my script (or my Dockerfile). Is there a way to reference the overridden entrypoint in my Dockerfile, so that it is available to my entrypoint script? Something like this, perhaps?
COPY my-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["my-entrypoint.sh", BASE_ENTRYPOINT, BASE_CMD]

It has to appear somewhere, in some way; otherwise you can't get that information.
Option 1: use an ENV for the previous entrypoint in the Dockerfile, then refer to it in your own entrypoint.sh:
Dockerfile:
FROM alpine:3.3
ENV MYSQL_ENTRYPOINT "/usr/bin/mysql mysqld"
ADD entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
echo $MYSQL_ENTRYPOINT
Option 2: just pass the previous entrypoint command as a parameter to your entrypoint:
Dockerfile:
FROM alpine:3.3
ADD entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/bin/mysql mysqld"]
entrypoint.sh:
#!/bin/sh
echo $1
Personally I prefer option #1.
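Applied back to the original question, a minimal sketch of option #1 might look like this (assuming a mysql:5.6 base and the /usr/local/bin/docker-entrypoint.sh mysqld entrypoint mentioned in the question):
Dockerfile:
FROM mysql:5.6
# Record the base image's entrypoint and default command so scripts need not hard-code them
ENV BASE_ENTRYPOINT="/usr/local/bin/docker-entrypoint.sh"
ENV BASE_CMD="mysqld"
COPY my-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["my-entrypoint.sh"]
my-entrypoint.sh:
#!/bin/bash
echo "Running my-entrypoint.sh"
# ... initialization ...
# Hand off to the base image's entrypoint recorded above
exec "$BASE_ENTRYPOINT" "$BASE_CMD"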

2 ways:
Just pass it in after you specify your image; everything after the image name becomes the CMD and is appended to your ENTRYPOINT. So...
# Stuff in my Dockerfile
...
COPY my-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["my-entrypoint.sh"]
Then run docker run ... image <your-entrypoint-etc>, and have your custom entrypoint pick up its first argument and use it however you need.
Second way: just pass it as an environment variable at runtime.
docker run ... -e MYSQL_ENTRYPOINT=<something> ...
And in your entrypoint script refer to the env variable ... $MYSQL_ENTRYPOINT ...
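A minimal sketch of an entrypoint combining both ideas (the MYSQL_ENTRYPOINT variable and the fallback logic are illustrative):
my-entrypoint.sh:
#!/bin/sh
# ... initialization ...
# Prefer the first argument; otherwise fall back to the env variable
BASE="${1:-$MYSQL_ENTRYPOINT}"
# Left unquoted on purpose so "entrypoint command" splits into words
exec $BASE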

Related

Run mvn commands as part of the docker file using Entrypoint and/or CMD

How can we run mvn commands from a Dockerfile?
Here is my Dockerfile:
FROM maven:3.3.9-jdk-8-alpine
WORKDIR /app
COPY code /app
WORKDIR /app
ENTRYPOINT ["mvn"]
CMD ["clean test -Dsurefire.suiteXmlFiles=/app/abc.xml"]
I tried to build and run the above image and it fails (abc.xml is under the /app directory).
Is there a way to get this to work?
According to the documentation:
"If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format."
As such you should rewrite CMD as follows:
CMD ["clean","test","-Dsurefire.suiteXmlFiles=/app/abc.xml"]
You can also parameterize the entrypoint as a JSON array, as per the documentation:
ENTRYPOINT ["mvn", "clean", "test", "-Dsurefire.suiteXmlFiles=/app/abc.xml"]
But I suggest you follow best practice and use an entrypoint shell script (ash/sh on Alpine). This ensures that changing these parameters does not require rewriting the Dockerfile:
Create an entrypoint.sh file in the code directory and make it executable. It should read like this:
#!/bin/sh
if [ "$#" -ne 1 ]
then
FILE="abc.xml"
else
FILE=$1
fi
mvn clean test -Dsurefire.suiteXmlFiles="/app/$FILE"
Replace your entrypoint with ENTRYPOINT ["./entrypoint.sh"]
Replace your command with CMD ["abc.xml"]
PS: you have WORKDIR /app twice. This isn't what fails you, but it is redundant; you can get rid of one.
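Putting it together, a minimal sketch of the resulting Dockerfile (paths as in the question; entrypoint.sh is the script above, shipped inside the code directory):
FROM maven:3.3.9-jdk-8-alpine
WORKDIR /app
COPY code /app
# entrypoint.sh is part of the copied code and must be executable
ENTRYPOINT ["./entrypoint.sh"]
CMD ["abc.xml"]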

How to access build args in ENTRYPOINT dockerfile

I am trying to deploy an app in Payara Micro based on the Payara Docker image, and I need to pass one argument, snapshotversion, to the ENTRYPOINT (basically I want to access the build args in the ENTRYPOINT) in exec form, as the exec form of ENTRYPOINT is preferred. My Dockerfile is as follows:
FROM payara/micro:5.193.1
ARG snapshotversion
ENV snapshotvs=$snapshotversion
RUN jar xf payara-micro.jar
COPY /service/war/target/app-emailverification-service-war-${snapshotversion}.war ${DEPLOY_DIR}/
COPY ojdbc6.jar ${PAYARA_HOME}/
COPY --chown=payara domain.xml /opt/payara/MICRO-INF/domain/domain.xml
RUN cd /opt/payara/MICRO-INF/domain && ls -lrt
#ENTRYPOINT ["java", "-jar", "/opt/payara/payara-micro.jar", "--deploy", "/opt/payara/deployments/app-service-war-$snapshotvs.war", "--domainConfig", "/opt/payara/MICRO-INF/domain/domain.xml","--addLibs", "/opt/payara/ojdbc6.jar"]
ENTRYPOINT java -jar /opt/payara/payara-micro.jar --deploy /opt/payara/deployments/app-service-war-$snapshotvs.war --domainConfig /opt/payara/MICRO-INF/domain/domain.xml --addLibs /opt/payara/ojdbc6.jar
The commented ENTRYPOINT does not work; the container logs say "invalid deployment". What am I missing here? Also, how can I use CMD with this? Can someone post an example?
The commented line doesn't work because it is the exec form of ENTRYPOINT, which doesn't invoke a shell (/bin/sh -c), so variable substitution doesn't happen.
If you want to use the exec form and environment variables, you need to invoke the shell explicitly:
ENTRYPOINT ["sh", "-c", "your command with env variable"]
To your question about how can you use CMD with this, for example like this:
ENTRYPOINT ["sh", "-c"]
CMD ["your command with env variable"]
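Applied to the Dockerfile in the question, a sketch might look like this (using the snapshotvs ENV that was set from the build arg; exec keeps the Java process as PID 1):
ENTRYPOINT ["sh", "-c", "exec java -jar /opt/payara/payara-micro.jar --deploy /opt/payara/deployments/app-service-war-$snapshotvs.war --domainConfig /opt/payara/MICRO-INF/domain/domain.xml --addLibs /opt/payara/ojdbc6.jar"]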
You mentioned that you want to use build args in the ENTRYPOINT instruction. That's not really possible, because neither ARG nor ENV is expanded by Docker in ENTRYPOINT or CMD: https://docs.docker.com/engine/reference/builder/#environment-replacement, https://docs.docker.com/engine/reference/builder/#scope
Also, you could take a look at the great page with best practices for writing Dockerfiles, and ENTRYPOINT instructions specifically.
Two suggestions that complement each other:
If you're COPYing a file into the image, you can give it a fixed name inside the image. That avoids this problem.
WORKDIR /opt/payara
COPY service/war/target/app-emailverification-service-war-${snapshotversion}.war deployments/app-service.war
If you have a particularly long or involved command that you're trying to make the main container process, wrap it in a shell script. You want to make sure to exec the main container process to avoid trouble around signal handling (which results in docker stop pausing for 10 seconds and then hard-killing your actual process).
#!/bin/sh
exec java \
-jar /opt/payara/payara-micro.jar \
--deploy /opt/payara/deployments/app-service.war \
--domainConfig /opt/payara/MICRO-INF/domain/domain.xml \
--addLibs /opt/payara/ojdbc6.jar
COPY launch.sh ./
RUN chmod +x launch.sh
CMD ["/opt/payara/launch.sh"]
In this second case, it's a shell script, so you can have ordinary shell variable substitutions.
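A minimal sketch of how those pieces might sit at the end of the Dockerfile (launch.sh is the script above; the WAR gets a fixed name inside the image):
WORKDIR /opt/payara
COPY service/war/target/app-emailverification-service-war-${snapshotversion}.war deployments/app-service.war
COPY launch.sh ./
RUN chmod +x launch.sh
CMD ["/opt/payara/launch.sh"]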

entrypoint: "entrypoint.sh" - docker compose

There is no such file by name entrypoint.sh in my workspace.
But the instruction below in docker-compose.yml refers to it:
builder:
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes:
    - ../../target:/wheelhouse
  volumes_from:
    - cache
  entrypoint: "entrypoint.sh"
  command: ["pip", "wheel", "--non-index", "-f /build", "."]
where ../docker/dev/Dockerfile has
# Set defaults for entrypoint and command string
ENTRYPOINT ["test.sh"]
CMD ["python", "manage.py", "test", "--noinput"]
What does entrypoint: "entrypoint.sh" actually do?
entrypoint: "entrypoint.sh" overrides ENTRYPOINT ["test.sh"] from Dockerfile.
From the docs:
Setting entrypoint both overrides any default entrypoint set on the service's image with the ENTRYPOINT Dockerfile instruction, and clears out any default command on the image - meaning that if there's a CMD instruction in the Dockerfile, it is ignored.
ENTRYPOINT ["test.sh"] is set in the Dockerfile describing the Docker image.
entrypoint: "entrypoint.sh" is set in the docker-compose file, which describes a multi-container environment while referencing that Dockerfile.
docker-compose build builder will build the image and set the entrypoint to ENTRYPOINT ["test.sh"] as specified in the Dockerfile.
docker-compose up builder will start the container with the entrypoint and command entrypoint.sh pip wheel --no-index '-f /build' . set in the docker-compose file.
ENTRYPOINT is a command or script that is executed when you run the docker container.
If you specify entrypoint in the docker-compose.yaml, it overrides ENTRYPOINT from the specified Dockerfile.
CMD is something that is passed as parameters to the ENTRYPOINT.
So if you just ran the image built from dev/Dockerfile, it would execute
test.sh python manage.py test --noinput
If you had overridden only CMD in docker-compose.yaml, it would execute
test.sh pip wheel --non-index -f /build .
But because you also overrode ENTRYPOINT in your docker-compose.yaml, it is going to execute
entrypoint.sh pip wheel --non-index -f /build .
So basically, entrypoint.sh is a script that will run inside your builder container when you execute the docker-compose up command.
Also, you can check this answer for more info: What is the difference between CMD and ENTRYPOINT in a Dockerfile?
Update:
If the base image has an entrypoint.sh, it will run that, but if you override it with your own entrypoint then the container will run the overriding entrypoint.
If you want to override the default behaviour of the base image then you can change it; otherwise you do not need to override it from docker-compose.
What does entrypoint: "entrypoint.sh" actually do?
It totally depends on the script or command inside entrypoint.sh, but a few things can be considered.
ENTRYPOINT instruction allows you to configure a container that will run as an executable. It looks similar to CMD, because it also allows you to specify a command with parameters. The difference is ENTRYPOINT command and parameters are not ignored when a Docker container runs with command line parameters. (There is a way to ignore ENTRYPOINT, but it is unlikely that you will do it.)
In simple words, an entrypoint can be a complex bash script; for example, the mysql entrypoint is more than 200 lines of code and does the following tasks:
start the MySQL server
wait for the MySQL server to come up
create the DB
perform DB migration or DB initialization
Such a complex task is not practical with CMD alone; you can run bash from CMD, but it is more of a headache to make it work. Keeping the complexity in the entrypoint also keeps the Dockerfile simple.
When there is an entrypoint, anything that is passed to CMD will be considered as arguments for the entrypoint.
In your case, CMD is CMD ["python", "manage.py", "test", "--noinput"]; it will be passed as arguments, and the best way to run this is to use
# ... set of commands ...
# start the long-running process passed in from CMD at the end
exec "$@"
Finally, the exec shell construct is invoked, so that the final command given becomes the container's PID 1. "$@" is a shell variable that means "all the arguments".
use-a-script-to-initialize-stateful-container-data
cmd-vs-entrypoint
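For the compose file in the question, a minimal sketch of what such an entrypoint.sh might contain (the WHEEL_DIR variable is purely illustrative; the important part is the final exec "$@", which hands off to the pip wheel ... command from docker-compose):
#!/bin/sh
# Process whatever Docker environment variables the build needs (illustrative)
: "${WHEEL_DIR:=/wheelhouse}"
echo "Building wheels into $WHEEL_DIR"
# Hand off to the command passed by docker-compose, making it PID 1
exec "$@"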

Docker CMD when base image has a CMD?

I have a Dockerfile which starts with:
FROM puppet/puppetserver
When I look at the source of that image, it is built from another image:
FROM puppet/puppetserver-standalone:5.0.0
The second image contains an ENTRYPOINT and a CMD:
ENTRYPOINT ["dumb-init", "/docker-entrypoint.sh"]
CMD ["foreground" ]
In my own container I end with:
COPY start.sh /
CMD /start.sh
The CMD runs, but with unexpected results:
puppetserver: '/bin/sh' is not a puppetserver command. See 'puppetserver --help'.
I know that I have bash available because I'm using RUN commands.sh before CMD in the same Dockerfile.
How do CMD commands stack when inheriting from base images?
Is my CMD not run as a normal bash command and instead run in conjunction with the CMD of the base image?
You need to reset the ENTRYPOINT from the parent image
COPY start.sh /
ENTRYPOINT []
CMD /start.sh
See https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact
CMD should be used as a way of defining default arguments for an ENTRYPOINT command or for executing an ad-hoc command in a container.
and https://docs.docker.com/engine/reference/builder/#cmd
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
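To see why the original error appears: the parent's ENTRYPOINT is inherited, and the shell-form CMD /start.sh becomes /bin/sh -c /start.sh, so the container presumably ends up running something like the first line below, where docker-entrypoint.sh treats /bin/sh as a puppetserver subcommand (sketch based on the Dockerfiles shown above):
# Effective command with the inherited ENTRYPOINT
dumb-init /docker-entrypoint.sh /bin/sh -c /start.sh
# Effective command after ENTRYPOINT []
/bin/sh -c /start.sh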

Execute a script before CMD

As per Docker documentation:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
I wish to execute a simple bash script (which processes a Docker environment variable) before the CMD command (which is init in my case).
Is there any way to do this?
Use a custom entrypoint
Make a custom entrypoint which does what you want, and then exec's your CMD at the end.
NOTE: if your image already defines a custom entrypoint, you may need to extend it rather than replace it, or you may change behavior you need.
entrypoint.sh:
#!/bin/sh
## Do whatever you need with env vars here ...
# Hand off to the CMD
exec "$@"
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Docker will run your entrypoint, using CMD as arguments. If your CMD is init, then:
/entrypoint.sh init
The exec at the end of the entrypoint script takes care of handing off to CMD when the entrypoint is done with what it needed to do.
Why this works
The use of ENTRYPOINT and CMD frequently confuses people new to Docker. In comments, you expressed confusion about it. Here is how it works and why.
The ENTRYPOINT is the initial thing run inside the container. It takes the CMD as an argument list. Therefore, in this example, what is run in the container is this argument list:
# ENTRYPOINT = /entrypoint.sh
# CMD = init
["/entrypoint.sh", "init"]
# or shown in a simpler form:
/entrypoint.sh init
It is not required that an image have an ENTRYPOINT. If you don't define one, Docker has a default: /bin/sh -c.
So with your original situation, no ENTRYPOINT, and using a CMD of init, Docker would have run this:
/bin/sh -c 'init'
^--------^ ^--^
    |        \------ CMD
    \--------------- ENTRYPOINT
In the beginning, Docker offered only CMD, and /bin/sh -c was hard-coded as the ENTRYPOINT (you could not change it). At some point along the way, people had use cases where they had to do more custom things, and Docker exposed ENTRYPOINT so you could change it to anything you want.
In the example I show above, the ENTRYPOINT is replaced with a custom script. (Though it is still ultimately being run by sh, because it starts with #!/bin/sh.)
That ENTRYPOINT takes the CMD as its arguments. At the end of the entrypoint.sh script is exec "$@". Since "$@" expands to the list of arguments given to the script, this is turned into
exec "init"
And therefore, when the script is finished, it goes away and is replaced by init as PID 1. (That's what exec does - it replaces the current process with a different command.)
How to include CMD
In the comments, you asked about adding CMD in the Dockerfile. Yes, you can do that.
Dockerfile:
CMD ["init"]
Or if there is more to your command, e.g. arguments like init -a -b, it would look like this:
CMD ["init", "-a", "-b"]
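As a quick usage sketch (the image name myimage is hypothetical): with the entrypoint and CMD above, the default run and an ad-hoc override look like this:
# Runs /entrypoint.sh init (CMD supplies "init")
docker run myimage
# Runs /entrypoint.sh init -a -b (arguments after the image name replace CMD)
docker run myimage init -a -b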
Dan's answer was correct, but I found it rather confusing to implement. For those in the same situation, here are code examples of how I implemented his explanation of the use of ENTRYPOINT instead of CMD.
Here are the last few lines in my Dockerfile:
#change directory where the mergeandlaunch script is located.
WORKDIR /home/connextcms
ENTRYPOINT ["./mergeandlaunch", "node", "keystone.js"]
Here are the contents of the mergeandlaunch bash shell script:
#!/bin/bash
#This script should be edited to execute any merge scripts needed to
#merge plugins and theme files before starting ConnextCMS/KeystoneJS.
echo Running mergeandlaunch script
#Execute merge scripts. Put in path to each merge script you want to run here.
cd ~/theme/rtb4/
./merge-plugin
#Launch KeystoneJS and ConnextCMS
cd ~/myCMS
exec "$@"
Here is how the code gets executed:
The ENTRYPOINT command kicks off the mergeandlaunch shell script
The two arguments 'node' and 'keystone.js' are passed along to the shell script.
At the end of the script, the arguments are passed on to the exec command.
The exec command then launches my node program the same way the Docker CMD instruction would.
Thanks to Dan for his answer.
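A quick usage sketch (the image tag mycms is hypothetical): because the script ends with exec "$@", the merge scripts run first and node keystone.js then takes over as PID 1.
docker build -t mycms .
# Runs mergeandlaunch, which finishes with: exec node keystone.js
docker run -d mycms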
Although I found I had to do something like this within the Dockerfile:
WORKDIR /
COPY startup.sh /
RUN chmod 755 /startup.sh
ENTRYPOINT sh /startup.sh /usr/sbin/init
NOTE: I named the script startup.sh as opposed to entrypoint.sh
The key here was that I needed to provide 'sh'; otherwise I kept getting "no such file..." errors coming out of docker logs -f container_name.
See:
https://github.com/docker/compose/issues/3876
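For reference, an exec-form sketch of the ENTRYPOINT above; the shell form wraps everything in /bin/sh -c and ignores CMD or docker run arguments, while the exec form would append them:
ENTRYPOINT ["sh", "/startup.sh", "/usr/sbin/init"]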
