Inject configuration into volume before Docker container starts

I am looking for a way to create a Docker volume and put some data on it just before a specific container is started - which needs the configuration on startup.
I do not want to modify the container. I would like to use a vanilla container straight from the Docker Hub.
Any ideas?
Update
I did not mention that all of this has to be done in a compose file. If I were doing it manually, I could simply wait for the configuration-injecting container to finish.

Absolutely! Just create your volume beforehand, attach it to any container (a base OS image like Ubuntu works great), add your data, and you're good to go!
Create the volume:
docker volume create test_volume
Attach it to an instance where you can add data:
docker run --rm -it --name ubuntu_1 -v test_volume:/app ubuntu /bin/sh
Add some data:
Do this inside the container you entered with the previous command.
touch /app/my_file
Exit the container:
exit
Attach the volume to your new container:
Of course, replace ubuntu with your real image name.
docker run --rm -it --name ubuntu_2 -v test_volume:/app ubuntu /bin/sh
Verify the data is there:
~> ls /app/
my_file
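Regarding the update: the same idea can be expressed in a compose file by running a short-lived "seed" service first and making the real service wait for it. This is only a sketch; the service names and the seeding command are placeholders, and the depends_on condition requires a Compose version that supports service_completed_successfully:
services:
  seed:
    image: ubuntu
    volumes:
      - test_volume:/app
    command: sh -c "touch /app/my_file"    # put your real configuration files here
  app:
    image: ubuntu                          # replace with the vanilla image from Docker Hub
    volumes:
      - test_volume:/app
    depends_on:
      seed:
        condition: service_completed_successfully
volumes:
  test_volume: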

Related

Docker different volume path for specific container

By default docker uses /var/lib/docker/volumes/ for any started container.
Is there any way to launch a new container and have it consume all the required disk on a different specified path on the host?
Basically have the root volume different.
For a specific container only, the simplest way, I think, is to use Docker volumes: create a Docker volume and then attach it to the container. The process running in the container then writes to that volume, so it uses the disk you want it to use.
More information is available on the following page:
https://docs.docker.com/storage/volumes/
You can define the volume path:
docker run -it --rm -v $PWD:/MyVolume ubuntu bash
This command will use the current folder where you execute the command from.
In the container you'll find your file under /MyVolume.
jens@DESKTOP:~$ docker run -it --rm -v $PWD:/MyVolume ubuntu bash
root@71969d68099e:/# cd /MyVolume/
root@71969d68099e:/MyVolume# ls
But you can define any path:
docker run -it --rm -v /home/someuser/somevolumepath:/MyVolume ubuntu bash
Almost the same is available in Docker Compose:
ports:
- "80:8080"
- "443:443"
volumes:
- $HOME/userhome/https_cert:/etc/nginx/certs
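For context, that fragment lives under a service definition; a minimal, hypothetical compose file around it (the service name and image are placeholders) might look like:
services:
  web:
    image: nginx
    ports:
      - "80:8080"
      - "443:443"
    volumes:
      - $HOME/userhome/https_cert:/etc/nginx/certs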
Jens

Docker Volume point to host Directory in Dockerfile

I have the following Dockerfile :
FROM jboss/wildfly
USER jboss
RUN mkdir -p /opt/jboss/wildfly/standalone/log
VOLUME /opt/jboss/wildfly/standalone/log
CMD /bin/bash
# CMD true
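For completeness, this image would be built with something like the following (the tag is taken from the run command mentioned below, so treat it as an assumption):
docker build -t data/volume .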
The resulting image is started with docker run -ti --name=data_volume data/volume. The next Dockerfile
FROM jboss/wildfly
RUN sed -i 's|<file relative-to="jboss.server.log.dir" path="server.log"/>|<file relative-to="jboss.server.log.dir" path="${jboss.host.name}-server.log"/>|' \
    /opt/jboss/wildfly/standalone/configuration/standalone.xml
overrides the logging configuration so that the resulting JBoss instance logs to "servername"-server.log in the logging dir. When I start the resulting image with docker run -ti --name=wild-01 --volumes-from=data_volume my/wildfly and docker run -ti --name=wild-02 --volumes-from=data_volume my/wildfly, I have two log files in my data_volume container. So far so good.
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
How can I achieve this in the Dockerfiles and not with the -v parameter when running data/volume?
Thanks a lot in advance
Inside Dockerfiles you can only define volumes that Docker itself manages under /var/lib/docker/volumes. This is because every host can be different from the next.
Docker uses /var/lib/docker as its "docker area" where it stores all Docker-related data. It's the one directory that's guaranteed to exist on every host, because it gets created on installation.
If you were to point a volume in the Dockerfile at, say, /home/mbieren/docker_vol, the image would fail with errors when run on a different host, because that directory does not exist there and the user probably has insufficient permissions to create it.
Docker avoids that problem by not allowing custom mount paths to be set in the Dockerfile.
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
Remove all mention of volumes from your Dockerfile ... then launch your container using
docker run -d -v /var/log/wildfly:/var/log/wildfly your-image-name
then in your code just reference the normal path
/var/log/wildfly
Your syntax to launch the container, docker run -ti, makes the container shell interactive, whereas -d is the normal mode to spin it up as a daemon running in the background.
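If the goal is specifically the two-container setup from the question, a sketch could also bind-mount WildFly's default log directory directly (paths and image name are taken from the question, so treat them as assumptions; you may need to adjust ownership so the jboss user inside the container can write to the host directory):
sudo mkdir -p /var/log/wildfly     # host directory that will collect the logs
docker run -d --name=wild-01 -v /var/log/wildfly:/opt/jboss/wildfly/standalone/log my/wildfly
docker run -d --name=wild-02 -v /var/log/wildfly:/opt/jboss/wildfly/standalone/log my/wildfly
ls /var/log/wildfly                # should show one ${jboss.host.name}-server.log per container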

How to share a folder between the host OS and a Docker container

I have created a container from a Docker image. The Docker image is:
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/tensorflow/tensorflow latest-gpu 7f09e75cdc12 4 months ago 1.289 GB
And the running container is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
e99c80d2d53e gcr.io/tensorflow/tensorflow:latest-gpu "/run_jupyter.sh" 21 hours ago Up 11 minutes 6006/tcp, 0.0.0.0:8888->8888/tcp deep
I need to share a folder between the host Ubuntu 16.04 OS and the docker container.
I ran this command for doing this:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
This didn't lead to the folder being loaded into the container deep. I don't know what to do after this and am really new to containers in Docker. Please explain your answer a bit, too.
EDIT:
I deleted the container and then ran these commands:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker run -p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker exec -it deep bash
There is no folder called deep-learning in the /home/ folder in the container. What have I done wrong here?
There's no API I'm aware of to change the mounted volumes on a running container. You have to destroy the existing container (docker stop and docker rm) and create a new one with the proper configuration (docker run). If you find yourself trying to maintain a single container, upgrading apps or keeping data inside it, odds are good that you're trying to recreate a VM rather than isolate a process, which is an anti-pattern.
From your edit, you didn't mount to /home/deep-learning, you mounted to /home. You also appear to be creating a second container named deep without any volume mounts and exec'ing into that one. To make a container with the /home/deep-learning volume mount and the name deep, run it like:
docker run -v /home/cortana/deep-learning:/home/deep-learning \
-p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
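If a container named deep already exists, remove it first (as noted above) and then verify the mount; a quick sketch, assuming the names from the question:
docker stop deep && docker rm deep              # remove the old container first
# ...recreate it with the docker run command shown above, then:
docker exec -it deep ls /home/deep-learning     # should list the host folder's contents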

How to run a docker container linked to a previously created volume

I have a named volume with stuff in it.
I would like to provide this volume as I provide a path: docker run -v /host:/path.in.docker.container - this works for paths. I'd like to do the same with a volume I manually created and filled.
I know about --volumes-from, but how do I first connect the volume to the empty container?
You can create a volume with docker volume create (see the documentation), then mount this named volume with the --volume (-v) option of docker run, as in docker run -v volumename:/data -it my_image.
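A minimal sketch of the full round trip, assuming a volume named mydata and a placeholder image my_image:
docker volume create mydata                                                   # create the named volume
docker run --rm -v mydata:/data ubuntu sh -c 'echo hello > /data/seed.txt'    # pre-fill it with a throwaway container
docker run --rm -v mydata:/data -it my_image                                  # /data now contains the seeded files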

Share and update docker data containers across containers

I have the following containers:
Data container, which is built directly on quay.io from a GitHub repo; basically it is a website.
FPM container
NGINX container
The three of them are linked together and work just fine. BUT the problem is that every time I change something in the website (the data container), it is rebuilt (of course) and I have to remove that container, and also the FPM and NGINX ones, and recreate them all to be able to see the new content.
I started with a "backup approach" where I copy the data from the container to a host directory and mount that into the FPM and NGINX containers; this way I can update the data without restarting/removing any service.
But I really don't like the idea of moving the data from the data container onto the host. So I'm wondering if there is a "Docker way", or a better way, of doing this.
Thanks!
UPDATE: Adding more context
Dockerfile data container definition:
FROM debian
ADD data/* /home/mustela/
VOLUME /home/mustela/
Where data only has 2 files: hello.1 and hello.2
Compiling the image:
docker build -t="mustela/data" .
Running the data container:
docker run --name mustela-data mustela/data
Creating another container to link to the previous one:
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
Listing the mounted files:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
Now, let's rebuild the data container image, but first add some new files, so that inside data we now have hello.1 hello.2 hello.3 hello.4:
docker rm mustela-data
docker build -t="mustela/data" .
docker run --name mustela-data mustela/data
If I ls /home/mustela from the running container, the files aren't being updated:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
But if I run a new container I can see the files
docker run -it --name nginx2 --volumes-from mustela-data ubuntu ls /home/mustela
Result: hello.1 hello.2 hello.3 hello.4
