adding startup script to dockerfile

I have built my docker image using openjdk.
# config Dockerfile
FROM openjdk:8
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
# build image
docker build -t shantanuo/dbt .
It is working as expected using this command...
docker run -p 8081:8080 -it shantanuo/dbt
Once I log in, I have to run this command...
sh bin/startup.sh
My Question: Is it possible to add the startup command to the Dockerfile? I tried adding this line to my Dockerfile:
CMD ["sh", "bin/startup.sh"]
But after building the image, I cannot use the -d parameter to start the container.

You can use the entrypoint to run the startup script. In the entrypoint you can specify your custom script, and then run catalina.sh.
Example:
ENTRYPOINT "bin/startup.sh && catalina.sh run"
This will run your startup script and then start your tomcat server, and it won't exit the container.
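A variant that keeps signal handling intact is a small wrapper script that ends with exec, so Tomcat replaces the shell as the container's main process. A minimal sketch (the start.sh name is mine; like the answer above, it assumes catalina.sh is on the PATH and bin/startup.sh only does one-time setup):
#!/bin/sh
# start.sh: do one-time setup, then hand the process over to Tomcat
bin/startup.sh
# exec replaces this shell, so Tomcat receives stop signals directly
exec catalina.sh run
Then point the Dockerfile at it with ENTRYPOINT ["sh", "start.sh"].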

This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/
If your main process depends on the helper process, then start the helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.
#!/bin/bash
# turn on bash's job control
set -m
# Start the primary process and put it in the background
./my_main_process &
# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns
# now we bring the primary process back into the foreground
# and leave it there
fg %1
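In the Dockerfile, that wrapper script then becomes the default command, along these lines (a sketch reusing the script name from the example above):
COPY my_wrapper_script.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/my_wrapper_script.sh
# the wrapper is now the container's main process
CMD ["/usr/local/bin/my_wrapper_script.sh"]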

A Docker container must have a dedicated task. It is important that this task/startup script does not terminate: when it exits, the task is done and, as far as Docker is concerned, the container is done too.
It makes no sense to start a container with only the JDK; you have to put your application in it.
I think it would help if you posted exactly what you want to do.
The Docker reference is always a good place to look at: https://docs.docker.com/engine/reference/builder/#entrypoint

Related

Any way to set the Docker image entrypoint when building the image, or some equivalent?

I have a Dockerfile that I use to build the same image but for slightly different purposes. Most of the time I want it to just be an "environment" without a specific entrypoint, so that the user just specifies the command on the docker run line:
docker run --rm -it --name ${CONTAINER} ${IMAGE} any_command parameters
But for some applications I want users to download the container and run it without having to set a command.
docker build -t ${IMAGE}:demo (--entrypoint ./demo.sh) <== would be nice to have
Yes, I can have a different Dockerfile for that, or append an entrypoint to the basic Dockerfile during builds, or various other mickey-mouse hacks, but those are all just one more thing that can go wrong, adding complexity, and are workarounds for the essential requirement.
Any ideas? staged builds?
The Dockerfile CMD directive sets the default command. So if your Dockerfile ends with
CMD default_command
then you can run the image in multiple ways
docker run "$IMAGE"
# runs default_command
docker run "$IMAGE" any_command parameters
# runs any_command instead
A container must run a command; you can't "just run a container" with no process in it.
You do not want ENTRYPOINT here since its syntax is noticeably harder to work with at the command line. Your any_command would be passed as arguments to the entrypoint process, rather than replacing the built-in default.
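If you do want to bake a different default in at build time without a second Dockerfile, one option is a build argument feeding a shell-form CMD. A sketch for the end of the Dockerfile (the DEFAULT_CMD name is mine; the shell form means /bin/sh expands the variable when the container starts):
ARG DEFAULT_CMD=bash
ENV DEFAULT_CMD=${DEFAULT_CMD}
# shell form: /bin/sh -c expands $DEFAULT_CMD at container start
CMD $DEFAULT_CMD
Then docker build --build-arg DEFAULT_CMD=./demo.sh -t ${IMAGE}:demo . produces the demo image, a plain build keeps the interactive default, and docker run "$IMAGE" any_command still overrides it.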

Run a command without entry point

How can I run a command against a container and tell docker not to run the entry point? e.g.
docker-compose run foo bash
The above will run the entry point of the foo service. How can I prevent it temporarily, without modifying the Dockerfile?
docker-compose run --entrypoint=bash foo bash
It'll run a nested bash, a bit useless, but you'll have your prompt.
If you control the image, consider moving the entire default command line into the CMD instruction. Docker concatenates the ENTRYPOINT and CMD together when running a container, so you can do this yourself at build time.
# Bad: prevents operators from running any non-Python command
ENTRYPOINT ["python"]
CMD ["myapp.py"]
# Better: allows overriding command at runtime
CMD ["python", "myapp.py"]
This is technically "modifying the Dockerfile" but it won't change the default operation of your container: if you don't specify entrypoint: or command: in the docker-compose.yml then it will run the exact same command, but it also allows running things like debug shells in the way you're trying to.
I tend to reserve ENTRYPOINT for two cases. There's a common pattern of using an ENTRYPOINT to do some first-time setup (e.g., running database migrations) and then exec "$@" to run whatever was passed as CMD. This preserves the semantics of CMD (your docker-compose run bash will still work, but migrations will happen first). If I'm building a FROM scratch or other "distroless" image where it's actually impossible to run other commands (there isn't a /bin/sh at all), then making the single thing in the image the ENTRYPOINT makes sense.
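A minimal sketch of that first-time-setup pattern (the migration command is a placeholder for whatever setup you need):
#!/bin/sh
# entrypoint.sh: one-time setup before the main command
./manage.py migrate
# replace this shell with whatever CMD (or docker run arguments) supplied
exec "$@"
paired in the Dockerfile with ENTRYPOINT ["/entrypoint.sh"] and CMD ["python", "myapp.py"].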

Getting a log of execution of current instructions when doing a 'docker run'

I am a newbie to Docker and I am trying to see if I can get logs for the following instructions when they are executed.
FROM ubuntu:16.04
ENV name John
ENTRYPOINT echo "Hello, $name"
My aim here is to check how (the path, etc.) the shell form works in executing the ENV here, and also how the ENTRYPOINT is executed.
I can imagine these kinds of logs could be useful for debugging purposes, so probably I am missing something obvious. Any ideas, please?
The Dockerfile instructions don't do much; they record some state in fields in the built Docker image. As @BrayanCaldera's answer indicates, you'll see these instructions go by in the docker build output, but nothing from ENV or ENTRYPOINT actually executes at image build time.
If the ENTRYPOINT is a full-blown script then you can use usual script debugging techniques on it to see what happens when the container starts up. For instance:
#!/bin/sh
# Print out every command as it executes
set -x
# Print out the current environment
env
# Run the actual command
exec "$#"
To tell the Docker image what to do by default when you docker run it, you should usually use the CMD directive. If you need to do pre-launch setup, then an ENTRYPOINT script that uses exec "$@" to run the CMD is a typical path.
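Wiring such a script into the image from this question might look like this (a sketch; the debug-entrypoint.sh name is assumed):
COPY debug-entrypoint.sh /
RUN chmod +x /debug-entrypoint.sh
ENTRYPOINT ["/debug-entrypoint.sh"]
# default command; the script's exec "$@" runs this after the set -x / env output
CMD ["sh", "-c", "echo Hello, $name"]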
To see the logs in the terminal, you only need to execute this line:
docker build --rm -f dockerfile -t log:latest .
If you need to store these logs in a file, you only need to execute this line:
docker build -t logs . > image.log
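Note that with BuildKit enabled, build progress goes to stderr rather than stdout, so a stdout redirect may capture nothing; a hedged variant:
DOCKER_BUILDKIT=1 docker build --progress=plain -t logs . 2> image.log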

Delay Docker Container RUN until tox environment is built

I am trying to find a way to delay the Docker container being up until the task in the ENTRYPOINT is completed. To explain further, I have a Dockerfile which has the entrypoint
ENTRYPOINT ["bash", "-c", "tox", "-e", "docker-server"]
When I run the container using
docker run -d -t -p 127.0.0.1:8882:8882 datawarehouse
it immediately brings the container up while the tox command is still building the environment. The problem with this is that if I trigger a cron job or run Python code immediately, it will fail because the tox environment is still in the build phase. I want to avoid running anything until the ENTRYPOINT task is complete; can this be achieved in the Dockerfile or in the run command?
Yes, in the docker-compose file you can set it to sleep, or you can define dependencies:
https://docs.docker.com/compose/startup-order/
https://8thlight.com/blog/dariusz-pasciak/2016/10/17/docker-compose-wait-for-dependencies.html
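The startup-order docs come down to a healthcheck on the slow service plus a condition on depends_on. A minimal sketch, where the service names and the health command are assumptions (and condition support depends on your Compose version):
services:
  warehouse:
    image: datawarehouse
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8882/"]
      interval: 10s
      retries: 30
  job:
    image: my-cron-job
    depends_on:
      warehouse:
        condition: service_healthy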
I don't have an elegant solution, but here is what I did.
RUN <your dependencies>
# Then add a second RUN command with a sleep at the beginning:
RUN sleep 400 && gcloud dataproc jobs submit xxxxxx
Each RUN command runs in a separate container layer on a clean slate, hence the sleep && the actual entry-point command go together as one logical command.
But as you can see, this is hard-coded; change the sleep duration accordingly.
I think that this is an incorrect approach. When a container starts, we need to avoid installing dependencies, libraries, etc. The image build is the moment to do that: we ensure that an image "AAAA" always "works" by installing any dependencies and building any code during the build process. When a container runs, it should do only that: just run.
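Applied to this question, that means creating the tox environment during the image build so the ENTRYPOINT only has to start it. A sketch (it assumes pip is available in the base image, that the project is copied into /app, and uses tox's --notest flag, which creates environments without running the commands):
COPY . /app
WORKDIR /app
RUN pip install tox
# build the docker-server environment at image build time
RUN tox -e docker-server --notest
# at run time the environment already exists, so startup is immediate
ENTRYPOINT ["tox", "-e", "docker-server"]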

How does ENTRYPOINT Docker directive react when extending images

I want to know how the ENTRYPOINT instruction from a Dockerfile reacts when inheritance happens:
Let's say for example I have an image called : jenkins
FROM java:8-jdk
RUN ...
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
When I run this image, the ENTRYPOINT directive starts and installs the application as expected.
Now let's say I want to extend this image with a new Dockerfile, which I call jenkins-custom:
FROM jenkins
# enable start tls
RUN echo "JENKINS_JAVA_OPTIONS=\"-Dmail.smtp.starttls.enable=true\"" >> /etc/default/jenkins
RUN chown jenkins:docker /etc/default/jenkins
Should I consider that:
the jenkins entrypoint is triggered after my new lines,
the entrypoint will be triggered before my new lines, or
the entrypoint will not be triggered?
In my example, I am trying to activate STARTTLS in the default Jenkins Docker image; should I just restart the process in the second image?
The commands in ENTRYPOINT run when you execute docker run. However, commands in RUN are executed when you run docker build.
In your case, what's going to happen is that when you docker build the image, a new Jenkins configuration file is generated, and then when you docker run it, tini is launched and in turn executes /usr/local/bin/jenkins.sh.
If what you're trying to do is change the Jenkins configuration and nothing else, what you have here is good.
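You can confirm that the child image inherits the parent's ENTRYPOINT without redeclaring it:
docker inspect --format '{{.Config.Entrypoint}}' jenkins-custom
# prints the entrypoint inherited from the jenkins base image:
# [/bin/tini -- /usr/local/bin/jenkins.sh]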
