How can I change how a docker image starts - docker

I have a Solr image that starts with a default configuration when I start my container.
I want to change the way Solr starts in my container by referencing a different configuration file. Of course, I still want to use the original image I had from the beginning.
What is the best practice to do so?
If I use a Dockerfile referencing my original image, it will still start with the default values, since no startup script has been modified.
I also thought about committing my new configuration file to my image; that works, but it still does not change the startup script.
Can someone guide me on the best practice to do that?
Thanks in advance for your help.

Startup of a container is always controlled by ENTRYPOINT and CMD. If you want to override how the container starts, you can create your own script and set it as the CMD. Here ENTRYPOINT ["/usr/bin/env"] provides a neutral environment in which to execute the CMD; note that it also overwrites the ENTRYPOINT of the base image (you can provide a different ENTRYPOINT script as well). You can do it as below in a Dockerfile:
FROM solr:latest
...................
...................
COPY your-data /container-data
COPY run.sh /run.sh
ENTRYPOINT ["/usr/bin/env"]
CMD ["/run.sh"]
You can copy your data into the container using COPY and define the operations to be performed in run.sh; run.sh is your own script that you want executed on container startup.
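For illustration, here is a minimal sketch of what run.sh might look like for the Solr case; the config paths and the core name (mycore) are assumptions, not taken from the question:
#!/bin/sh
# Hypothetical run.sh: install the custom config, then start Solr in the
# foreground so it remains the container's main process.
set -e
cp /container-data/solrconfig.xml /opt/solr/server/solr/mycore/conf/solrconfig.xml
exec solr-foreground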

Related

Named container shared between different docker-compose files

I've seen some similar questions but found no solution for myself.
I have 2 docker-compose files; I have created a named volume, and I'm currently using it like this:
app:
  ...
  volumes:
    - volume_static:/path/to/container
  ...
...
volumes:
  ...
  volume_static:
    external:
      name: static
  ...
...
During the build process, the script adds some new files to this volume, but then the second docker-compose file, which mounts the volume in exactly the same way, has no access to the new data; I need to restart the service to make it work.
Is this the right approach?
I just need to push some new files into the volume from one docker-compose file and see them directly in the second one (yeah I know, Docker, but specifying Compose gives a better idea of what my problem is), without restarting and rebuilding the service.
Is this possible?
Thanks!
Docker believes named volumes are there to hold user data, and other things that aren't part of the normal container lifecycle.
If you start a container with an empty volume, only the very first time you run it, Docker will load content from the image into the volume. Docker does not have an update mechanism for this: since the volume presumably holds user data, Docker can't risk corrupting it by overwriting files with content from the updated image.
The best approach here is to avoid sharing files at all. If the files are something like static assets for a backend application, you can COPY --from those files from the backend image into a proxy image, using the image name and tag of your backend application (COPY --from=my/backend ...). That avoids the need for the volume altogether.
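As a hedged sketch of that approach (the nginx base, the my/backend tag, and both paths are example names, not from the question):
FROM nginx:alpine
# Copy the static assets straight out of the backend image at build time,
# so no shared volume is needed at all.
COPY --from=my/backend /app/static /usr/share/nginx/html/static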
If you really must share files in a volume, then the container providing the files needs to take responsibility for copying in the files itself when it starts up. An entrypoint script is the easiest place to do this; it gives you a hook to run things as the container starts (and volumes exist and are mounted) but before running the main container process.
#!/bin/sh
set -e
# Populate (or update) the shared static tree
cp -r ./app/assets /static
# Now run the image CMD
exec "$@"
Make this script the ENTRYPOINT in your Dockerfile; it must use the JSON-array syntax. You can leave your CMD unchanged. If you've split an interpreter and a filename into separate ENTRYPOINT and CMD lines, you can combine those into a single CMD line (and probably should anyway).
...
ENTRYPOINT ["entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
In terms of the build lifecycle, images are built without any of the surrounding Compose ecosystem; they are not aware of the network environment, volumes, environment variables, bind mounts, etc. So when you rebuild, you build a new, changed image, but you don't modify the volume at all. The very first time you run the whole file, since the named volume is empty, it is populated with content from the image, but this only happens that very first time.
Rebuilding images and restarting containers is extremely routine in Docker and I wouldn't try to avoid that. (It's so routine that re-running docker-compose up -d will delete and recreate an existing container if it needs to in order to change settings.)
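For example, assuming the service is named app as in the snippet above, rebuilding and recreating it is a single command:
docker-compose up -d --build app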

Avoid overriding the ENTRYPOINT of a base docker image

I have a base Docker image pointing to daggerok/jboss-eap-7.1:7.1.0-alpine, and it executes an ENTRYPOINT that I don't want to override. But I also need to execute another command after the base image executes its own, so my Dockerfile looks like this:
FROM daggerok/jboss-eap-7.1:7.1.0-alpine
#SOME CODE HERE
ENTRYPOINT ["mybash.sh"]
I think this code overrides the ENTRYPOINT in the base image, and I need to avoid that. My script needs to be executed after all commands in the base image.
Any tips to solve this?
There are some problems with achieving what you want:
You cannot find out the ENTRYPOINT of the base image at runtime from within a .sh script, so you cannot execute it without copying it explicitly into your mybash.sh.
The ENTRYPOINT of the base image you mention is /bin/bash ${JBOSS_HOME}/bin/standalone.sh, which launches the main process (PID 1) of your Docker container. You should not alter that or, for example, start it in the background. Read further here.
I would advise rewriting mybash.sh:
First execute whatever you would like before starting JBoss. Then finish your script with a last line that starts JBoss:
exec /bin/bash "${JBOSS_HOME}/bin/standalone.sh" (adapted from here)
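Put together, a minimal sketch of mybash.sh (only the final exec line comes from the answer above; the echo stands in for your own setup commands):
#!/bin/bash
# Hypothetical mybash.sh: run custom setup first, then hand PID 1 over to the
# base image's original standalone.sh command via exec.
set -e
echo "running custom setup before JBoss starts"
exec /bin/bash "${JBOSS_HOME}/bin/standalone.sh"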

Deploying Spring Boot App in Docker container without auto starting Tomcat

I was under the impression that including the line
CMD ["catalina.sh", "run"]
in the Dockerfile would be what triggers the Tomcat server to start.
I've removed that line, but it still starts the server on deployment. I basically want to run catalina.sh run and include CATALINA_OPTS in a Kubernetes deployment so that it handles this, but Tomcat still auto-starts when I deploy to a container.
A Docker image usually has an entrypoint or a command already.
If you create your custom image based on another image from a registry, which is a common case, you can override the base image's entrypoint by specifying the ENTRYPOINT or CMD directive.
If you omit the entrypoint in your Dockerfile, it will be inherited from the base image.
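As a hedged illustration (the tomcat:9 base and the idle command are assumptions, not taken from the question), overriding the inherited CMD so Tomcat does not auto-start might look like:
FROM tomcat:9
# Replace the inherited CMD ("catalina.sh run") with an idle process;
# a Kubernetes deployment can then supply command:/args: to start Tomcat explicitly.
CMD ["tail", "-f", "/dev/null"]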
Consider reading the Dockerfile: ENTRYPOINT vs CMD article for a better understanding of how this works.

Is CMD in a parent docker image overridden by CMD/ENTRYPOINT in a child docker image?

I am trying to get my hands dirty with Docker. I know that CMD or ENTRYPOINT is used to specify the start/run command for a Docker image, and that CMD is overridden by ENTRYPOINT. But I don't know how it works when the parent image also has CMD or ENTRYPOINT, or both.
Does the child image inherit those values from the parent image? If so, does ENTRYPOINT in the parent image override CMD in the child image?
I know that such a question is already discussed at https://github.com/docker/compose/issues/3140, but the discussion is quite old (2-3 years) and it doesn't answer my question clearly.
Thanks in advance.
If you define an ENTRYPOINT in a child image, it will null out the value of CMD, as identified in this issue. The goal is to avoid the confusing situation where an entrypoint receives, as its arguments, a command you no longer want to run.
Other than that specific situation, the values of ENTRYPOINT and CMD are inherited and can be individually overridden by a child image, or even by a later step of the same Dockerfile. Only one value for each will ever exist in an image, with the last defined value taking precedence.
ENTRYPOINT doesn't override CMD; they just represent two parts of the resulting command and exist to make life easier. Whenever a container is started, the command for process 1 is determined as ENTRYPOINT + CMD, so usually ENTRYPOINT is just the path to a binary and CMD is a list of arguments for that binary. CMD can also easily be overridden from the command line.
So, again, it's just a thing to make life easier and make containers behave like regular binaries: if you have a man container, you can set the entrypoint to /usr/bin/man and the cmd to man. If you just start the container, Docker will execute /usr/bin/man man, but if you run something like docker run man docker, the resulting container command will be /usr/bin/man docker. The entrypoint stays the same, the cmd changes, and the command that starts the container is just a simple merging of the two.
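To make that concrete, here is a minimal sketch of such a man image (the Debian base and the package names are assumptions for illustration, not from the answer):
FROM debian:bookworm
# man-db provides /usr/bin/man; manpages provides pages for it to display.
RUN apt-get update && apt-get install -y man-db manpages && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["/usr/bin/man"]
CMD ["man"]
With this image, docker run <image> executes /usr/bin/man man, and docker run <image> docker executes /usr/bin/man docker, exactly as described above.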
ENTRYPOINT and CMD are both inherited from parent layers (images) unless overridden, so if you inherit from image X and redefine CMD, you will still have the very same ENTRYPOINT, and vice versa. However, as @BMitch mentioned above, changing ENTRYPOINT in a child image will effectively reset CMD.

How to ensure certain scripts on the host system are present inside the Docker container when the container starts?

I wish certain scripts that are present on the host machine to also be present inside the Docker container when the container is created. How do I ensure this? Thanks
You can use a COPY or an ADD statement in your Dockerfile.
COPY <src> <dest>
Docker will error when the file isn't present on the host.
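For instance, a minimal hedged example (the script name and destination path are made up for illustration):
COPY scripts/setup.sh /usr/local/bin/setup.sh
RUN chmod +x /usr/local/bin/setup.sh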
See also:
Stackoverflow: Docker copy VS add
Dockerfile best practices
Docker documentation on COPY
Create a customized image for your container, using a COPY or ADD statement in that image's Dockerfile to add the scripts to it. Once you have the image, use it to start a container; that container will then have the scripts you added.
If you can't, for any reason, add the scripts to the image at build time with COPY or ADD, the only solution, imho, would be to mount the folder on the host machine into the container at runtime with the -v option. But in that case you will still need some kind of mechanism built into the image to trigger the scripts, via cron or something similar. Maybe have a look at the Phusion baseimage, as it has cron built in and an option to run scripts at container startup; see here.
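As a quick sketch of the -v approach (the host path and image name are placeholders):
# Mount the host's scripts directory read-only at /scripts inside the container.
docker run -v /path/on/host/scripts:/scripts:ro myimage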
