Best way to update config file in Docker with environment variables

I'm unable to find an easy solution, but probably I'm just searching for the wrong things:
I have a docker-compose.yml which contains a Tomcat that is built from the contents of the /tomcat folder. In /tomcat there is a Dockerfile, a .war and a server.xml.
The Dockerfile is based on tomcat:9, and copies the server.xml and .war files into the right directories.
If I do docker-compose up, everything is running fine. But I would love to find a way to update the connectors within the server.xml without pruning the image, adjusting the server.xml and starting it again.
It would be perfect to put a $CONNECTOR_CONFIG in the server.xml, and provide a variables.env to docker-compose where the $CONNECTOR_CONFIG variable is set to like ""
I know I could adjust the server.xml within the Dockerfile with sed, but this way the image must be pruned every time I want to change something, right?
Is there a way that I can later just edit the variables.env and docker-compose down/up?
Regards,
EdFred

A useful pattern here is to use the image's ENTRYPOINT as a wrapper script that does first-time setup. If that script ends with exec "$@" then it will execute the image's CMD as normal. You can use this to do things like rewrite configuration files based on environment variables.
#!/bin/sh
# docker-entrypoint.sh
# Replace any environment variable references in server.xml.tmpl.
# (Assumes the image has the full GNU tool set.)
envsubst <"$CATALINA_BASE/conf/server.xml.tmpl" >"$CATALINA_BASE/conf/server.xml"
# Run the standard container command.
exec "$@"
Normally in a tomcat image you wouldn't include a CMD, since the base image knows how to start Tomcat. The Docker Hub tomcat image page mentions the command, or you can click through to find the original Dockerfile. You need to know it because specifying an ENTRYPOINT in a derived Dockerfile resets the CMD.
Your Dockerfile then needs to COPY this script in and set up the ENTRYPOINT and CMD.
# Dockerfile
FROM tomcat:9
COPY myapp.war /usr/local/tomcat/webapps/
COPY server.xml.tmpl /usr/local/tomcat/conf/
COPY docker-entrypoint.sh /usr/local/tomcat/bin/
# ENTRYPOINT _MUST_ be JSON-array form
ENTRYPOINT ["docker-entrypoint.sh"]
# Duplicate from base image
CMD ["catalina.sh", "run"]
You can verify this by hand using a docker run command. Any command you specify after the image name gets run instead of the CMD; but the main container command is still constructed by passing that command as arguments to the alternate ENTRYPOINT and so your wrapper script will run.
docker run --rm \
  -e CONNECTOR_CONFIG=test-connector-config \
  my-image \
  cat /usr/local/tomcat/conf/server.xml
In your final Compose setup, you can include the configuration as an environment: variable.
version: '3.8'
services:
  myapp:
    build: .
    ports: ['8080:8080']
    environment:
      CONNECTOR_CONFIG: ...
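For reference, server.xml.tmpl is just your existing server.xml with a variable reference where the dynamic part goes. A minimal sketch (the surrounding XML here is illustrative, not the real stock file):
<!-- server.xml.tmpl (sketch) -->
<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- envsubst replaces ${CONNECTOR_CONFIG} with the value from the environment -->
    <Connector port="8080" protocol="HTTP/1.1" ${CONNECTOR_CONFIG} />
  </Service>
</Server>
Two notes: if you'd rather keep the value in a variables.env file as in the question, pointing env_file: at it instead of the inline environment: block gets the same variable into the container; and since envsubst rewrites every variable reference it can find, if your server.xml uses other ${...} placeholders you can pass envsubst an argument naming only the variables to substitute, like envsubst '$CONNECTOR_CONFIG'.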
envsubst is a small tool from GNU gettext that replaces $ENVIRONMENT_VARIABLE references in text files. It's very useful for this specific case, but you can do the same work with sed or another text-processing tool, especially if you don't have the GNU tools available (in particular if you have an Alpine-based image).
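Conversely, if you do want envsubst on an Alpine-based image, it's packaged with gettext; a sketch of the extra Dockerfile line:
# Alpine only: envsubst ships in the gettext package
RUN apk add --no-cache gettext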

Related

Should Dockerfiles specify an entry point when using docker compose?

I am migrating some web apps to be managed via Docker Compose.
It seems the docker-compose.yaml has a section for the container entry point.
However, my individual Dockerfiles have an ENTRYPOINT themselves... should I remove this from the Dockerfiles? Does the entry point in docker-compose override the Docker one?
You usually shouldn't specify entrypoint: or command: in a Compose file. Prefer specifying these in a Dockerfile. The one big exception is if you have a container that can do multiple things (for example, it can be both a Web server and a queue worker, with the same code) and you need to tell it with a command: to do not-the-default thing.
I'd suggest a typical setup like:
# docker-compose.yml
version: '3.8'
services:
  app:
    build: .
    # with neither entrypoint: nor command:
# Dockerfile
FROM ...
WORKDIR /app
COPY ...
RUN ...
# ENTRYPOINT ["./entrypoint-wrapper.sh"]
CMD ["./my_app"]
Compose entrypoint: overrides the Dockerfile ENTRYPOINT and resets the CMD. Compose command: overrides the Dockerfile CMD.
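As a sketch of the "multiple things" exception above, a hypothetical queue-worker service could reuse the same image and override only the command:
services:
  worker:
    build: .
    command: ["./my_worker"]  # hypothetical worker binary; replaces the Dockerfile CMD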
In the Dockerfile both ENTRYPOINT and CMD are optional. If your base image already includes a correct command setup (nginx, php:fpm) then you can safely skip both.
It's otherwise somewhat a matter of style whether to use CMD or ENTRYPOINT in your Dockerfile. I prefer CMD for two reasons: it's easier to replace in a docker run ... image-name alternate command invocation, and there's a pattern of using ENTRYPOINT as a wrapper script to do first-time setup and then launch the CMD with exec "$@". If you have a JSON-array-syntax ENTRYPOINT then you can pass additional command-line arguments to it as docker run ... image-name --option. Both setups are commonplace.
The thing you shouldn't do is put an interpreter in ENTRYPOINT and a script name in CMD. I only ever see this in Python, but ENTRYPOINT ["python3"] is wrong. On the one hand it's hard to override, the same way any ENTRYPOINT is; on the other, neither normal command-override form works well (you still have to repeat the script name if you want to run the same script with different options).
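To see that concretely (image and script names hypothetical):
# With ENTRYPOINT ["python3"] and CMD ["main.py"]:
docker run my-image main.py --debug    # must repeat the script name
docker run my-image bash               # actually runs "python3 bash" and fails
# With only CMD ["./main.py"]:
docker run my-image ./main.py --debug  # same shape as the normal override
docker run my-image bash               # gets you a debugging shell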

Dockerfile Entrypoint no such file or directory: OCI not found

I have a Dockerfile with an ENTRYPOINT where I specify the config-file variable and the executable file, but it looks like Docker or the ENTRYPOINT doesn't recognize it. My main.py has to be executed with the config file.
ENTRYPOINT ["CONFIG_FILE=path/to/config.file ./main.py"]
always reproduces no such file or directory: OCI not found
Note: I have copied all the files into the current work directory already. main.py is an executable file. So I guess the problem is the config variable prepended to the executable file. Does anyone know what is going on there? Changing from ENTRYPOINT to CMD does not help either.
Dockerfile
FROM registry.fedoraproject.org/fedora:34
WORKDIR /home
COPY . /home
ENTRYPOINT ["CONFIG_FILE=path/to/config.file ./main.py"]
If you just need to set an environment variable to a static string, use the Dockerfile ENV directive.
ENV CONFIG_FILE=/path/to/config.file
CMD ["./main.py"]
The Dockerfile ENTRYPOINT and CMD directives (and also RUN) have two forms. You've used the JSON-array form; in that form there is no shell involved, and you have to split the command into words yourself. (As written, you are asking Docker to run a command whose executable is literally named CONFIG_FILE=path/to/config.file ./main.py, = and space included.) If you use the shell form instead, Docker wraps the command in /bin/sh -c, so the environment-assignment syntax works:
CMD CONFIG_FILE=/path/to/config.file ./main.py
In general you should prefer CMD to ENTRYPOINT. There's a fairly standard pattern of using ENTRYPOINT to do first-time setup and then execute the CMD. For example, if you expected the configuration file to be bind-mounted in, but want to set the variable only if it exists, you could write a shell script:
#!/bin/sh
# entrypoint.sh
#
# If the config file exists, set it as an environment variable.
CONFIG_FILE=/path/to/config.file
if [ -f "$CONFIG_FILE" ]; then
  export CONFIG_FILE
else
  unset CONFIG_FILE
fi
# Run the main container CMD.
exec "$@"
Then you can specify both the ENTRYPOINT (which sets up the environment variables) and the CMD (which says what to actually do):
# ENTRYPOINT must be JSON-array form for this to work
ENTRYPOINT ["./entrypoint.sh"]
# Any valid CMD syntax is fine
CMD ["./main.py"]
You can double-check the environment variable setting by providing an alternate docker run command:
# (Make sure to quote things so the host shell doesn't expand them first)
docker run --rm my-image sh -c 'echo $CONFIG_FILE'
docker run --rm -v "$PWD:/path/to" my-image sh -c 'echo $CONFIG_FILE'
If having the same environment in one-off debugging shells launched by docker exec is important to you, of these approaches, only Dockerfile ENV will make the variable visible there. In the other cases the environment variable is only visible in the main container process and its children, but the docker exec process isn't a child of the main process.
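A quick way to see the difference (container name hypothetical):
# Prints the value with the ENV approach; prints an empty line with the
# entrypoint approach, because docker exec starts from the image's environment.
docker exec my-container sh -c 'echo $CONFIG_FILE'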

docker-compose and listing volume contents

Maybe I'm just not understanding correctly but I'm trying to visually verify that I have used volumes properly.
In my docker-compose I'd have something like
some-project:
  volumes:
    - /some-local-path/some-folder:/v-test
I can verify it's contents via "ls -la /some-local-path/some-folder"
In some-project's Dockerfile I'd have something like
RUN ls -la /v-test
which returns 'No such file or directory"
Is this the correct way to use it? If so, why can't I view the contents from inside the container?
Everything in the Dockerfile runs before anything outside the build: block in the docker-compose.yml file is considered. The image build doesn't see volumes or environment variables that get declared only in docker-compose.yml, and it can't access other services.
In your example, first the Dockerfile tries to ls the directory, then Compose will start the container with the bind mount.
If you're just doing this for verification, you can docker-compose run a container with most of its settings from the docker-compose.yml file, but an alternate command:
docker-compose run some-project \
  ls -la /v-test
(Doing this requires that the image's CMD is a well-formed shell command; either it has no ENTRYPOINT or the ENTRYPOINT is a wrapper script that ends in exec "$@" to run the CMD. If you only have an ENTRYPOINT, change it to CMD; if you've split the command across both directives, consolidate it into a single CMD line.)
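If you can't change the image, docker-compose run can also override the entrypoint for the one-off container; a sketch:
docker-compose run --entrypoint /bin/sh some-project -c 'ls -la /v-test'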

Recommended way to handle empty vs existing DB in Dockerfile

I want to run M/Monit (https://mmonit.com/) in a docker container and found this Dockerfile: https://github.com/mlebee/docker-mmonit/blob/master/Dockerfile
I'm using it with a simple docker-compose.yml in my test environment:
version: '3'
services:
  mmonit:
    build: .
    ports:
      - "8080:8080"
    # volumes:
    #   - ./db/:/opt/mmonit/db/
It does work, but I want to extend the Dockerfile so that the path /opt/mmonit/db/ is exported as a volume. I'm struggling to implement the following behaviour:
When the volume mapped to /opt/mmonit/db/ is empty (for example on first setup) the files from the install archive should be written to the volume. The db folder is part of the archive.
When the database file /opt/mmonit/db/mmonit.db already exists in the volume, it should not be overwritten in any circumstances.
I do have an idea how to script the required operations / checks in bash, but I'm not even sure if it would be better to replace the ENTRYPOINT with a custom start script or if it should be done by modifying the Dockerfile only.
That's why I ask for the recommended way.
In general the strategy you lay out is the correct path; it's essentially what the standard Docker Hub database images do.
The image you link to is a community image, so you shouldn't feel particularly bound to that image's decisions. Given the lack of any sort of license file in the GitHub repository you may not be able to copy it as-is, but it's also not especially complex.
Docker supports two "halves" of the command to run, the ENTRYPOINT and CMD. CMD is easy to provide on the Docker command line, and if you have both, Docker combines them into a single command. So a very typical pattern is to put the actual command to run (mmonit -i) as the CMD, and have the ENTRYPOINT be a wrapper script that does the required setup and then runs exec "$@".
#!/bin/sh
# I am the Docker entrypoint script
# Create the database, but only if it does not already exist:
if ! test -f /opt/mmonit/db/mmonit.db; then
  # Copy the contents of the backup, not the directory itself, in case
  # /opt/mmonit/db already exists as an (empty) mount point
  cp -a /opt/mmonit/db_base/. /opt/mmonit/db/
fi
# Replace this script with the CMD
exec "$@"
In your Dockerfile, then, you'd specify both the CMD and ENTRYPOINT:
# ... do all of the installation ...
# Make a backup copy of the preinstalled data
RUN cp -a db db_base
# Install the custom entrypoint script
COPY entrypoint.sh /opt/mmonit/bin/
RUN chmod +x /opt/mmonit/bin/entrypoint.sh
# Standard runtime metadata
USER monit
EXPOSE 8080
# Important: this must use JSON-array syntax
ENTRYPOINT ["/opt/mmonit/bin/entrypoint.sh"]
# Can be either JSON-array or bare-string syntax
CMD /opt/mmonit/bin/mmonit -i
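As in the earlier answers, you can sanity-check the seeding logic with a one-off container (assuming the Compose service is named mmonit as above):
# With an empty volume mounted, this should list the freshly seeded files
docker-compose run --rm mmonit ls -l /opt/mmonit/db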
I would definitely make these kinds of changes in a Dockerfile, either starting FROM that community image or building your own.

How to create a properties file via Dockerfile with dynamic values passed in docker run?

I am relatively new to Docker. Maybe this is a silly question.
My goal is to create an image which has a system.properties file, which as the name says, is a properties file with key value pairs.
I want to fill the values in this file dynamically. So I think the values need to be passed as environment variables to the Docker run command.
For example, if this is what I want in my system.properties file:
buildMode=true
source=/path1
I want to provide the values to this file dynamically, something like:
$ docker run -e BUILD_MODE=true -e SOURCE='/path1' my_image
But I'm stuck at how I can copy the values into the file. Any help will be appreciated.
Note: Base image is linux centos.
As you suspect, you need to create the actual file at runtime. One pattern that’s useful in Docker is to write a dedicated entrypoint script that does any required setup, then launches the main container command.
If you’re using a “bigger” Linux distribution base, envsubst is a useful tool for this. (It’s part of the GNU toolset and isn’t available by default on Alpine base images, but on CentOS it should be.) You might write a template file:
buildMode=${BUILD_MODE}
source=${SOURCE}
Then you can copy that template into your image:
...
WORKDIR /app
COPY ...
COPY system.properties.tmpl ./
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["java", "-jar", "app.jar"]
The entrypoint script needs to run envsubst, then go on to run the command:
#!/bin/sh
envsubst <system.properties.tmpl >system.properties
exec "$@"
You can do similar tricks just using sed(1), which is more universally available, but requires potentially trickier regular expressions.
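A rough sed equivalent of the envsubst line, assuming the values never contain the | delimiter used in the substitution:
#!/bin/sh
# entrypoint.sh variant using sed instead of envsubst
sed -e "s|\${BUILD_MODE}|$BUILD_MODE|g" \
    -e "s|\${SOURCE}|$SOURCE|g" \
    system.properties.tmpl >system.properties
exec "$@"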
