Run java program and shell script on entrypoint in Dockerfile

I have a base Docker image whose Dockerfile contains an entry like the following:
CMD ["/bootstrap.sh"]
In my child Docker image, which references this parent image, I have a separate entrypoint that starts my container as a Java program, like below:
ENTRYPOINT exec java -Duser.timezone=GMT -Djava.security.egd=file:/dev/./urandom $JAVA_OPTS -jar /app.jar
When I build this child image and run it, only the Java program comes up; bootstrap.sh never runs.
After going through some blogs and Stack Overflow posts, I came to know that in the child image the CMD from the parent image is lost, overwritten by the child image's entrypoint. However, I would like to run both the shell script and the Java program on the entrypoint. My shell script references a few executables that I need to run; they keep running in the background, and my Java code calls into these shell programs to do its processing.
I also tried combining both steps into one entrypoint, like below, but then only one process starts: with &&, the Java program would start only after bootstrap.sh exits, so whichever is specified first is the only one running.
ENTRYPOINT /bootstrap.sh && exec java -Duser.timezone=GMT -Djava.security.egd=file:/dev/./urandom $JAVA_OPTS -jar /app.jar
How can I specify in the Dockerfile that both processes are executed on container startup? bootstrap.sh can also be run in the background.
Please help me on this one.

For those who are struggling: I was able to resolve this issue. I don't know if this is the perfect solution, but it worked for me.
What I did was create another shell script that calls bootstrap.sh and then starts the Java program.
The script looks like this:
#!/bin/bash
echo "[server-startup] Executing bootstrap script"
# Run bootstrap.sh in the background so it doesn't block the Java process
/bootstrap.sh &
echo "[server-startup] Starting java application"
# exec replaces this shell with the JVM so it becomes the container's main process
exec java -Duser.timezone=GMT -Djava.security.egd=file:/dev/./urandom $JAVA_OPTS -jar /app.jar
And my Dockerfile entry was modified as below to call this new shell script:
ENTRYPOINT ["/server-start.sh"]
Hope this helps whoever is looking for the answer.
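For completeness, here is a minimal sketch of how the child Dockerfile could wire this up; the parent image name is a placeholder, and the COPY/chmod lines are assumptions about where server-start.sh lives:
[Dockerfile]
FROM my-parent-image            # hypothetical parent image that ships /bootstrap.sh
COPY server-start.sh /server-start.sh
RUN chmod +x /server-start.sh   # the script must be executable to serve as the entrypoint
ENTRYPOINT ["/server-start.sh"]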

Related

Date is not being read in Dockerfile ENTRYPOINT

The problem: the first time it works fine, but on subsequent restarts it fails to write a file with the same name. So I tried adding the pod ID to make the name unique, but the pod ID stays the same when the pod restarts after failing. Then I tried adding a timestamp to the filename, but the date function is treated as a literal string, so the file name stays the same on every restart and the heap dump file doesn't get generated the next time.
ENTRYPOINT ["java","-XX:+HeapDumpOnOutOfMemoryError","-XX:HeapDumpPath=mountpath-heapdump/$MY_POD_ID$(date +'%Y-%m-%d_%H-%M-%S')_heapdump.bin","-jar", "/app.jar"])
I also tried it without the double quotes:
ENTRYPOINT java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=mountpath-heapdump/$MY_POD_ID$(date +'%Y-%m-%d_%H-%M-%S')_heapdump.bin -jar app.jar
This works fine in a POC, but in my project application environment variables are not reflected: the Spring profile was not picked up, and other environment variables weren't either. So I am sticking with the first approach; I just need help appending the timestamp to the filename.
Even though the exec form is recommended, the command here is complicated, so I created a separate shell script and executed that instead.
My test environment does not include Java, so the script just echoes the command; the values are passed in as environment variables when running the container.
[Dockerfile]
FROM ubuntu
COPY ./entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
[entrypoint.sh]
#!/bin/bash
# java is not installed in this test image, so echo the command to show the variable expansion
echo java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=mountpath-heapdump/$POD_ID$(date +$DATE_FORMAT)_heapdump.bin -jar /app.jar
[docker build & run]
docker build -t test .
docker run --rm -e POD_ID='POD_ID' -e DATE_FORMAT='%Y-%m-%d_%H-%M-%S' test
The output is:
root@DESKTOP-6Q87NVH:/tmp/test# docker run --rm -e POD_ID='POD_ID' -e DATE_FORMAT='%Y-%m-%d_%H-%M-%S' test
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=mountpath-heapdump/POD_ID2021-11-03_15-02-22_heapdump.bin -jar /app.jar
Answer provided by @rzlvmp worked for me.
You may try to run java inside a shell:
ENTRYPOINT ["bash", "-c", "java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=mountpath-heapdump/$MY_POD_ID$(date +'%Y-%m-%d_%H-%M-%S')_heapdump.bin -jar /app.jar"]
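For the real image, a minimal sketch of an entrypoint script that actually starts the JVM instead of echoing it; the variable and path names are taken from the question, everything else is an assumption:
[entrypoint.sh]
#!/bin/bash
# Shell expansion happens here, so $MY_POD_ID and $(date ...) are resolved at container start
exec java -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=mountpath-heapdump/${MY_POD_ID}$(date +'%Y-%m-%d_%H-%M-%S')_heapdump.bin \
  -jar /app.jar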

How to run shell file as entrypoint when using docker image with tomcat:9.0.45-jdk8-adoptopenjdk-hotspot in a dockerfile?

I am creating a Dockerfile based on the image tomcat:9.0.45-jdk8-adoptopenjdk-hotspot. I run it with the command docker run -it -p 8888:8080 tomcatcustom, which starts the Tomcat server.
I would like to run another custom .sh file alongside the Tomcat server when the image runs. How can I define the ENTRYPOINT in my Dockerfile so that both Tomcat and my .sh file are executed?
Or is there any other option?
A Dockerfile can only have one entrypoint that is executed, but nothing stops you from making that entrypoint a script which runs both your custom script and Tomcat.
E.g.
if you have a directory with
Dockerfile
custom.sh // your custom script
start.sh
where start.sh is
#!/usr/bin/env bash
# Run the custom script in the background, then start Tomcat in the foreground
./custom.sh &
catalina.sh run
and Dockerfile is
FROM tomcat:9.0.45-jdk8-adoptopenjdk-hotspot
COPY start.sh .
COPY custom.sh .
CMD "./start.sh"
In that case custom.sh is executed in the background (the &), so that Tomcat is started in parallel.
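One caveat: depending on how the files were created, the copied scripts may not carry the executable bit inside the image. A defensive variant of the same Dockerfile, assuming the scripts sit next to it in the build context (the chmod line is the only real addition, and CMD is switched to exec form):
FROM tomcat:9.0.45-jdk8-adoptopenjdk-hotspot
COPY start.sh custom.sh ./
RUN chmod +x start.sh custom.sh
CMD ["./start.sh"]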

Docker entrypoint not working as expected

I have a Docker container that executes a bash script as its ENTRYPOINT. This script does lots of things that rely on environment variables being configured.
The strangest thing happens: when I run the container, the entrypoint script is executed and, for lack of a better word, it eventually fails.
Now, if I enter the container manually with docker exec -it <id> bash and then run the SAME script by hand, it works!
What's going on here? Why does Docker executing the script differ from myself manually executing the script?
UPDATE for more context
Dockerfile
FROM cuda:torch:cudnn # Not real source, but these are what are in play
# setup lua and python
COPY . /app
WORKDIR /app
ENTRYPOINT ["./entrypoint.py"]
CMD ["start"]
entrypoint.py
#!/usr/bin/env python
import subprocess

class DoSomething:
    def methods_that_work(self):
        ...

    def run_torch(self):
        """
        I do NOT work if called from the Dockerfile's ENTRYPOINT.
        I DO work if I manually run ./entrypoint.py start from within the container.
        """
        cmd = ['th', ...]
        subprocess.run(cmd)
Torch and Lua need to know where CUDA and cuDNN are located. I can confirm all the ENV vars are set. When run via the Docker ENTRYPOINT, torch just kind of hangs: no errors, no output, it just hangs.
When I bash into the container and manually run ./entrypoint.py, it works.
For anyone who runs into this situation: this was explicitly an issue with Lua.
Lua paths are delimited with ; rather than : (unlike $PATH, for example):
LUA_PATH='/some/path;/some/other/path'
Now, why did this work in an interactive bash shell but not via Docker? Inside the .bashrc there was an "activate torch" function that essentially did a find-and-replace of : to ;.
At the end of the day, this was not a Docker issue, but simply incorrectly formatted Lua environment variables.
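A minimal sketch of the fix applied at the image level, so the variables are correct for any process rather than only for interactive shells; the FROM line is the question's placeholder and the Lua paths are illustrative, not the real ones:
FROM cuda:torch:cudnn # Not real source, placeholder from the question
# Lua uses ';' as its path separator and '?' as the module name placeholder
ENV LUA_PATH="/app/?.lua;/usr/local/share/lua/5.1/?.lua"
COPY . /app
WORKDIR /app
ENTRYPOINT ["./entrypoint.py"]
CMD ["start"]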

Why is docker still running CMD when overridden by docker run?

I have a Dockerfile with the following CMD as the last line
CMD ["/usr/local/myapp/bin/startup.sh", "docker"]
Part of a script that is executed against the docker image during startup is as follows
# find directory of cacerts file in DOCKER_JAVA_HOME of container
DOCKER_CACERTS_DIR=$(dirname "$(docker run "$DOCKER_IMAGE_ID" find "$DOCKER_JAVA_HOME" -name cacerts)")
However, this still executes the CMD line from my Dockerfile.
I have found that I can alter this behaviour by changing the line in the script as follows.
# find directory of cacerts file in DOCKER_JAVA_HOME of container
DOCKER_CACERTS_DIR=$(dirname "$(docker run --entrypoint find "$DOCKER_IMAGE_ID" "$DOCKER_JAVA_HOME" -name cacerts)")
However, I didn't think this would be necessary. Is it normal for Docker to execute the CMD when it has been overridden on the docker run command line? I thought that was supposed to be one of the differences between CMD and ENTRYPOINT: that you can easily override CMD without using the --entrypoint flag.
In case it's important, this is using docker version 17.03.0-ce
The image being run has an ENTRYPOINT defined somewhere, probably in the image you are building FROM if there isn't one in your Dockerfile.
When both ENTRYPOINT and CMD are defined, Docker passes the CMD to the ENTRYPOINT as arguments. From there, it's up to the entrypoint executable to decide what to do.
The arguments could be ignored completely, modified as the entrypoint sees fit, or passed on as the complete command to run. That behaviour is image specific.
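A tiny sketch of the mechanics (the entrypoint script name is hypothetical):
ENTRYPOINT ["/entrypoint.sh"]
CMD ["default-arg"]
With such an image, docker run "$DOCKER_IMAGE_ID" find "$DOCKER_JAVA_HOME" -name cacerts does not bypass the entrypoint; it runs /entrypoint.sh find ... -name cacerts. Only --entrypoint replaces the executable itself, which is why the second form of the script works.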

Dockerfile run entrypoint before shell entrypoint

I want to do some last-minute setup on run before passing arguments to the shell entrypoint. To accomplish this I have the following in mind:
ENTRYPOINT ./run_binary ${args}; /bin/sh -c
CMD ./run_binary
However, by doing this, it doesn't seem that any CMD arguments make it to the shell entrypoint. Is there a way around this? I just want to run a setup step on a binary before handing control back to the shell entrypoint (and then to the user via CMD).
When both are specified, CMD becomes a list of arguments to send to ENTRYPOINT (see the manual), so that's not the way to go.
But you could use a .sh script as the ENTRYPOINT that first executes your binary and then forwards the received arguments to a shell.
I haven't tried it, but something along the lines of:
#!/bin/sh
# Last-minute setup step
./run_binary
# Forward the received arguments (the CMD, or docker run overrides) to a shell
/bin/sh -c "$@"
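A more conventional variant of the same idea, assuming the goal is simply to run setup and then hand over to whatever command was supplied (the script name docker-entrypoint.sh is hypothetical):
#!/bin/sh
./run_binary   # setup step
exec "$@"      # replace this shell with the CMD (or docker run arguments)
paired with exec-form instructions in the Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["./run_binary"]
Using exec means the final process replaces the wrapper shell, so it receives signals directly.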
You could use an intermediate build image that triggers an ONBUILD statement from your original Dockerfile, see: https://docs.docker.com/engine/reference/builder/#/onbuild
The ONBUILD instruction adds to the image a trigger instruction to be
executed at a later time, when the image is used as the base for
another build. The trigger will be executed in the context of the
downstream build, as if it had been inserted immediately after the
FROM instruction in the downstream Dockerfile.
This is useful if you are building an image which will be used as a
base to build other images, for example an application build
environment or a daemon which may be customized with user-specific
configuration.
Regarding CMD and ENTRYPOINT, see: https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact
Dockerfile should specify at least one of CMD or ENTRYPOINT commands.
ENTRYPOINT should be defined when using the container as an executable.
CMD should be used as a way of defining default arguments for an ENTRYPOINT command or for executing an ad-hoc command in a container.
CMD will be overridden when running the container with alternative arguments.
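To make that interaction concrete, a tiny sketch (the image tag img is hypothetical):
FROM alpine
ENTRYPOINT ["echo", "prefix:"]
CMD ["default"]
docker run img prints "prefix: default", while docker run img custom prints "prefix: custom": the trailing arguments replace CMD but still flow into the ENTRYPOINT.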
