I have the following containers:
Data container, which is built directly on quay.io from a GitHub repo; it's basically a website.
FPM container
NGINX container
The three of them are linked together and working just fine. BUT the problem is that every time I change something in the website (the data container) it gets rebuilt (of course), and I have to remove that container as well as the FPM and NGINX ones and recreate them all to be able to read the new content.
I started with a "backup approach" for what I'm copying the data from the container to a host directory and mounting that into the FPM and NGINX containers, this way I can update the data without restarting/removing any service.
But I don't really like the idea of moving the data from the data container onto the host. So I'm wondering if there is a "Docker way", or a better way, of doing it.
Thanks!
UPDATE: Adding more context
Dockerfile data container definition:
FROM debian
ADD data/* /home/mustela/
VOLUME /home/mustela/
Where data only has 2 files: hello.1 and hello.2
Building the image:
docker build -t="mustela/data" .
Running the data container:
docker run --name mustela-data mustela/data
Creating another container to link to the previous one:
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
Listing the mounted files:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
Now, let's rebuild the data container image, but first add some new files, so that inside data we now have hello.1 hello.2 hello.3 hello.4
docker rm mustela-data
docker build -t="mustela/data" .
docker run --name mustela-data mustela/data
If I ls /home/mustela from the running container, the files aren't being updated:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
But if I run a new container, I can see the files:
docker run -it --name nginx2 --volumes-from mustela-data ubuntu ls /home/mustela
Result: hello.1 hello.2 hello.3 hello.4
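In other words, only containers created after the new mustela-data container see the fresh volume; the already-running nginx container keeps the old one. A quick check (just re-creating the consuming container):
docker rm -f nginx
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
docker exec -it nginx ls /home/mustela
# hello.1 hello.2 hello.3 hello.4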
Related
By default docker uses /var/lib/docker/volumes/ for any started container.
Is there any way to launch a new container and have it consume all the required disk on a different specified path on the host?
Basically have the root volume different.
For a specific container only, the simplest way, I think, would be to use Docker volumes: create a Docker volume and then attach it to the container. The process running in the container then writes into that volume, so it uses the disk you want it to use.
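A minimal sketch of that, assuming the data should live on a specific host path such as /mnt/bigdisk/mydata (which must already exist):
# create a named volume whose data is stored on a chosen host path (local driver, bind option)
docker volume create --driver local --opt type=none --opt o=bind --opt device=/mnt/bigdisk/mydata mydata
# attach it to the container; whatever the process writes under /data now lands on that disk
docker run -d --name myapp -v mydata:/data ubuntu sleep infinity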
More information is available on the following page: https://docs.docker.com/storage/volumes/
You can define the volume path:
docker run -it --rm -v $PWD:/MyVolume ubuntu bash
This command mounts the current folder you execute the command from. In the container you'll find your files under /MyVolume.
jens@DESKTOP:~$ docker run -it --rm -v $PWD:/MyVolume ubuntu bash
root@71969d68099e:/# cd /MyVolume/
root@71969d68099e:/MyVolume# ls
But you can define any path:
docker run -it --rm -v /home/someuser/somevolumepath:/MyVolume ubuntu bash
Almost the same is available in Docker Compose:
ports:
  - "80:8080"
  - "443:443"
volumes:
  - $HOME/userhome/https_cert:/etc/nginx/certs
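For reference, that volumes entry corresponds roughly to this docker run flag (the image name and host path are just examples):
docker run -d -p 80:8080 -p 443:443 -v $HOME/userhome/https_cert:/etc/nginx/certs nginx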
I started using Docker only recently. It is my understanding that mounting a local folder into the container C1, based on the image image_name, can be done by running the following code:
var=$(pwd)
docker run -d --name=C1 -v $var:/host image_name
However, because I am detaching the container, I am not able to see it among the created containers when running docker ps or docker container ls.
However, if I run docker volume list and then docker volume rm VOLUMEID, I get the error volume is in use - [CONTAINER_C1_ID].
Any idea how can I see where C1 is?
Where am I going wrong?
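One thing worth checking, assuming C1's main process may simply have exited: stopped containers don't show up in docker ps, but they do with -a, and they keep their volumes until they are removed:
docker ps -a --filter name=C1
# lists C1 even if it has exited; remove it with docker rm C1 before removing the volume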
I am running a Docker container with the Docker socket mounted inside, using:
docker run -v /Path/to/service:/src/service -v /var/run/docker.sock:/var/run/docker.sock --net=host image-name python run.py
This runs a python script that creates a data folder in /src and fills it. When printing os.listdir('/src/data'), I get a list of files.
I then run a container from within this container, mounting the data folder, using docker-py.
volumes = {'/src/data': {'bind': '/src', 'mode': 'rw'}}
client.containers.run(image, command='ls data', name=container_key, network='host', volumes=volumes)
And it prints:
Starting with UID: 0 and HOME: /src\n0\n'
Which means data is mounted, but empty. What am I doing wrong?
So: mounting the Docker socket inside the container means that containers started from in there actually run on your HOST machine.
The end result is that you have two containers on the host: one with
/Path/to/service:/src/service
and one with
/src/data:/src
If you want to share a volume between two containers, you should usually use a "named" volume, like:
docker run -v sharedvolume:/src/data and docker run -v sharedvolume:/src
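A rough sketch of that, reusing the image names from the question:
# create a named volume once
docker volume create sharedvolume
# the outer container writes its data into the volume
docker run -v sharedvolume:/src/data -v /var/run/docker.sock:/var/run/docker.sock --net=host image-name python run.py
# the inner container (started via docker-py or the CLI) mounts the same volume
docker run -v sharedvolume:/src image ls /src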
I am looking for a way to create a Docker volume and put some data on it just before a specific container is started - which needs the configuration on startup.
I do not want to modify the container. I would like to use a vanilla container straight from the Docker Hub.
Any ideas?
Update
I did not mention that all of this has to be done in a compose file. If I were doing it manually, I could wait for the configuration-injecting container to finish.
Absolutely! Just create your volume beforehand, attach it to any container (a base OS image like Ubuntu would work great), add your data, and you're good to go!
Create the volume:
docker volume create test_volume
Attach it to an instance where you can add data:
docker run --rm -it --name ubuntu_1 -v test_volume:/app ubuntu /bin/sh
Add some data:
Do this within the container you are in from the previous command.
touch /app/my_file
Exit the container:
exit
Attach the volume to your new container:
Of course, replace ubuntu with your real image name.
docker run --rm -it --name ubuntu_2 -v test_volume:/app ubuntu /bin/sh
Verify the data is there:
~> ls /app/
my_file
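If this has to happen from a script rather than interactively (for example right before docker-compose up), the same idea works non-interactively; /path/to/seed-data here is a placeholder:
docker volume create test_volume
docker run --rm -v test_volume:/app -v /path/to/seed-data:/seed ubuntu cp -a /seed/. /app/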
I have created a container from a Docker image. The Docker image is:
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/tensorflow/tensorflow latest-gpu 7f09e75cdc12 4 months ago 1.289 GB
And the running container is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e99c80d2d53e gcr.io/tensorflow/tensorflow:latest-gpu "/run_jupyter.sh" 21 hours ago Up 11 minutes 6006/tcp, 0.0.0.0:8888->8888/tcp deep
I need to share a folder between the host Ubuntu 16.04 OS and the docker container.
I ran this command for doing this:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
This didn't lead to the folder being loaded into the container deep. I don't know what to do after this and am really new to containers in Docker. Please explain your answer a bit too.
EDIT:
I deleted the container and then ran these commands:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker run -p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker exec -it deep bash
There is no folder called deep-learning in the /home/ folder in the container. What have I done wrong here?
There's no API I'm aware of to change the mounted volumes on a running container. You destroy the existing container (docker stop and docker rm) and create a new one with the proper configuration (docker run). If you find yourself trying to maintain a single container, upgrading apps or keeping data inside it, odds are good that you're trying to recreate a VM rather than isolate a process, which is an anti-pattern.
From your edit, you didn't mount the volume at /home/deep-learning; you mounted it at /home. You also appear to be creating a second container named deep without any volume mounts and exec'ing into that one. To make a container with the /home/deep-learning volume mount and the name deep, run it like:
docker run -v /home/cortana/deep-learning:/home/deep-learning \
-p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
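Assuming the host folder exists, you can then verify the mount from inside the running container:
docker exec -it deep ls /home/deep-learning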