Add config.txt to docker container - docker

We have a WAR file which needs a configuration file to work.
We want to dockerize it. At the moment we're doing the following:
FROM tomcat:8.0
COPY apps/test.war /usr/local/tomcat/webapps/
COPY conf/ /config/
Our container is losing the advantages of Docker because it depends on the config file. So when we want to run the WAR for other purposes, we have to rebuild the image, which isn't a good approach.
Is it possible to give a config file as a parameter without mounting it as a volume? We don't want the config on our local machine. What could be a solution?

You can pass it as an ENV, but I don't see how you're losing the advantages of Docker. Docker is essentially all about temporary, disposable containers. Ideally you want to build a new image for every new app version.
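For example, one possible sketch of the ENV approach (the variable name, script, and image name are made up for illustration): pass the file's contents in an environment variable and have a small entrypoint script write it out before Tomcat starts.
#!/bin/sh
# entrypoint.sh (illustrative): materialize the config from $APP_CONFIG,
# then hand off to the stock Tomcat startup command.
set -e
if [ -n "$APP_CONFIG" ]; then
  mkdir -p /config
  printf '%s\n' "$APP_CONFIG" > /config/config.txt
fi
exec catalina.sh run
In the Dockerfile you would copy the script and set it as the entrypoint:
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Then run it with something like:
docker run -e APP_CONFIG="$(cat conf/config.txt)" my-tomcat-app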

Related

Docker "share" Container

I'd like to share some files via a Docker container, but I'm not sure how. I have a project that has several scripts in it. I also have several VMs that need access to those scripts, and especially the latest versions. I'd like to build a docker container that has those scripts inside of it, and then have my VMs be able to mount the container and access the scripts. I tried https://hub.docker.com/r/erichough/nfs-server/ and "baking" the files in, but I don't think that does what I want it to do. Here is the docker file:
FROM erichough/nfs-server:latest
COPY ./Scripts /etc/exports/
EXPOSE 2049
It fails, saying that I need to define /etc/exports. Looking at the entrypoint.sh, it wants exports to be a file, so I'm guessing it contains paths. So I tried creating an exports.txt file that holds the path of my files:
exports.txt:
./Scripts
Dockerfile:
FROM erichough/nfs-server:latest
ADD ./exports.txt /etc/exports
EXPOSE 2049
No bueno. Is there a way to accomplish this? My end goal is a docker container in my registry that I can run in my stack. Whenever I update the scripts I push a new container.
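For context, the standard NFS /etc/exports format is one export per line, each naming a path inside the container plus the allowed clients and options, so the scripts also need to be copied to that in-container path. A sketch only (the client pattern and options are illustrative, not tested against this image):
# exports.txt -> copied to /etc/exports in the image
/Scripts *(ro,no_subtree_check,fsid=0)
# Dockerfile sketch
FROM erichough/nfs-server:latest
COPY ./Scripts /Scripts
COPY ./exports.txt /etc/exports
EXPOSE 2049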

Named container shared between different docker-compose files

I've seen some similar questions but found no solution for myself.
I have 2 docker-compose files. I have created a named volume and I'm currently using it like this:
app:
  ...
  volumes:
    - volume_static:/path/to/container
  ...
...
volumes:
  ...
  volume_static:
    external:
      name: static
  ...
...
During the build process, it happens that a script adds some new files to this volume, but then the second docker-compose file, which mounts the volume in exactly the same manner, has no access to the new data; I need to restart it to make it work.
Is this the right approach?
I just need to push some new files into the volume from one docker-compose file and see them directly in the second one (yeah I know, it's all Docker, but specifying compose gives a better idea of what my problem is), without restarting and rebuilding the service.
Is this possible?
Thanks!
Docker believes named volumes are there to hold user data, and other things that aren't part of the normal container lifecycle.
If you start a container with an empty volume, only the very first time you run it, Docker will load content from the image into the volume. Docker does not have an update mechanism for this: since the volume presumably holds user data, Docker can't risk corrupting it by overwriting files with content from the updated image.
The best approach here is to avoid sharing files at all. If the files are something like static assets for a backend application, you can COPY --from those files from the backend image into a proxy image, using the image name and tag of your backend application (COPY --from=my/backend ...). That avoids the need for the volume altogether.
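A sketch of that multi-stage idea (the image names and paths are placeholders):
# Proxy image that pulls the built assets straight out of the backend image
FROM nginx:alpine
COPY --from=my/backend:latest /app/static /usr/share/nginx/html/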
If you really must share files in a volume, then the container providing the files needs to take responsibility for copying in the files itself when it starts up. An entrypoint script is the easiest place to do this; it gives you a hook to run things as the container starts (and volumes exist and are mounted) but before running the main container process.
#!/bin/sh
set -e
# Populate (or update) the shared static tree
cp -r ./app/assets /static
# Now run the image CMD
exec "$@"
Make this script the ENTRYPOINT in your Dockerfile; it must use the JSON-array syntax. You can leave your CMD unchanged. If you've split an interpreter and filename into separate ENTRYPOINT and CMD lines, you can combine them into a single CMD line (and probably should anyway).
...
ENTRYPOINT ["./entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
In terms of the build lifecycle, images are built without any of the surrounding Compose ecosystem; they are not aware of the network environment, volumes, environment variables, bind mounts, etc. So when you rebuild the image, you build a new, changed image but don't modify the volume at all. The very first time you run the whole file, since the named volume is empty, it is populated with content from the image, but this only happens that very first time.
Rebuilding images and restarting containers is extremely routine in Docker and I wouldn't try to avoid that. (It's so routine that re-running docker-compose up -d will delete and recreate an existing container if it needs to in order to change settings.)
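For example, rebuilding and restarting just the service that provides the files is a single command (assuming the service is named app, as in the question's Compose file):
docker-compose up -d --build app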

Dealing with data in Docker Containers with Gitlab-Ci

So I am using GitLab CI to deploy my websites in Docker containers. Because the GitLab CI Docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have 2 volumes in my container: ./:/var/www/html/ (the content of my git repo, i.e. the files I want to replace on build) and a mount that is "inside" of this mount, /srv/data:/var/www/html/software/permdata (which is a persistent mount on my server).
When the gitlab-ci runner starts, it tries to remove all the files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to manually stop and remove the container before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not possible)
add the git data to my docker container and only mount my permdata (seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has gone through the same problem and could give me some advice.
seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy in the git files.
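A minimal sketch of that approach, assuming an Apache/PHP site (the base image and paths are guesses): copy the repository contents into the image and keep only the persistent data as a volume in the Compose file.
# Dockerfile: bake the repo into the image instead of bind-mounting ./
FROM php:7.4-apache
COPY . /var/www/html/
# docker-compose.yml: only the persistent mount remains
services:
  web:
    build: .
    volumes:
      - /srv/data:/var/www/html/software/permdata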

How to ensure certain scripts on the host system are present inside the Docker container when the container starts?

I wish to have certain scripts that are present on the host machine also be present inside the Docker container when the container is created. How do I ensure this? Thanks
You can use a COPY or an ADD statement in your Dockerfile.
COPY <src> <dest>
Docker will error when the file isn't present on the host.
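For example (the base image and paths are placeholders), to bake a host-side scripts directory into the image at build time:
FROM ubuntu:22.04
# ./scripts on the host (relative to the build context) ends up inside the image
COPY ./scripts/ /usr/local/bin/scripts/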
See also:
Stackoverflow: Docker copy VS add
Dockerfile best practices
Docker documentation on COPY
Create a customized image for your container, using a COPY or ADD statement in that image's Dockerfile to add the scripts to the customized image. Once you have the image, use it to start the container; that container will then have the scripts you added.
If you can't, for any reason, add the scripts to the image at build time with COPY or ADD, the only solution imho would be to mount the folder on the host machine into the container at runtime with the -v option. But in this case you will still need some kind of mechanism built into the image to trigger the scripts to execute, via cron or something similar. Maybe have a look at the Phusion Baseimage, as it has cron built in and an option to run scripts at container startup; see here
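For instance, the runtime-mount variant might look like this (paths and image name are placeholders); note the container still needs something inside it, such as cron, to actually execute the scripts:
docker run -d -v /opt/host-scripts:/scripts:ro my-image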

Taking files off a docker container

Is it possible to pull files off a docker container onto the local host?
I want to take certain directories off the docker containers I have worked on and move them to the local host on a daily basis.
Is this possible and how can it be done?
Yes you can; simply use the docker cp command.
An example from the official CLI documentation:
sudo docker cp <container-id>:/etc/hosts .
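Since you want to do this daily: docker cp also works on whole directories (and on stopped containers), so a host-side cron job could drive it. The container name and paths below are placeholders.
# Copy a directory out of the container
docker cp mycontainer:/var/www/reports ./reports
# Example crontab entry on the host: copy it out every night at 02:00
0 2 * * * docker cp mycontainer:/var/www/reports /backups/reports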
May I ask what's the reason you want to copy these files out of the container? If it's a one-off thing, then you're better off copying them as @abronan suggested. But if these directories are something that you'll be copying out and back into another container (or the same container) next time, you might want to look at volumes, which enable you to have persistent data in your container as well as between containers.
