Moving a volume between containers with docker-compose

I have image A (some_laravel_project) and image B (laravel_module). Image A is a Laravel project whose directory layout looks like this:
app
modules
core
(volume from image B here)
config
As the layout above suggests, I want to share a volume from image B into image A using docker-compose, so that I can access the files from container B.
This is the docker-compose file I tried. Building the images in GitLab CI produced no errors, and I checked that the volume and its files are stored in the module_user:latest container.
I think I made a mistake mounting the volume into some_laravel_project.
version: '3'
services:
  laravel:
    image: some_laravel_project
    working_dir: /var/www
    volumes:
      - /var/www/storage
      - userdata:/var/www/Modules
  user:
    image: laravel_module
    volumes:
      - userdata:/user
volumes:
  userdata:
  webroot:

The method you used to share volumes across containers in docker-compose is the correct one. It is documented under docker-compose volumes:
If you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key, and use that named volume in the services.
In your case, the directory /var/www/Modules in the laravel service will have the same content as /user inside the user service. You can verify that by going into each container and checking the directory, by running:
docker exec -it <container-name> bash
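For example, assuming hypothetical container names (substitute the names docker-compose actually assigned, as shown by docker ps), you can compare the two directories:
docker exec -it <laravel-container> ls /var/www/Modules
docker exec -it <user-container> ls /user
Both listings should show the same files.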


define volumes in docker-compose.yaml

I am writing a docker-compose.yaml file for my project. I have checked the volumes documentation here.
I also understand the concept of a volume in Docker: I can mount a volume, e.g. -v my-data/:/var/lib/db, where my-data/ is a directory on my host machine and /var/lib/db is the path inside the database container.
My confusion is with the link above. It has the following sample:
version: "3.9"
services:
db:
image: db
volumes:
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
- data-volume:/var/lib/backup/data
volumes:
data-volume:
I wonder, does it mean that I have to create a directory named data-volume on my host machine? What if I have a directory on my machine at temp/my-data/ and I want to mount that path into the database container at /var/lib/db? Should I do something like the below?
version: "3.9"
services:
db:
image: db
volumes:
- temp/my-data/:/var/lib/db
volumes:
temp/my-data/:
My main confusion is with the volumes: section at the bottom. I am not sure whether the volume name should be the path of my directory or just a name I give it, and if it is the latter, how is that name mapped to temp/my-data/ on my machine? The sample doesn't indicate this and is ambiguous on that point.
Could someone please clarify it for me?
P.S. I tried the docker-compose file I guessed above and ended up with this error:
ERROR: The Compose file './docker-compose.yaml' is invalid because:
volumes value 'temp/my-data/' does not match any of the regexes: '^[a-zA-Z0-9._-]+$'
Mapped volumes can either be files/directories on the host machine (sometimes called bind mounts in the documentation) or they can be docker volumes that can be managed using docker volume commands.
The volumes: section in a docker-compose file specifies docker volumes, i.e. not files/directories. The first docker-compose file in your post uses such a volume.
If you want to map a file or directory (like in your last docker-compose file), you don't need to specify anything in the volumes: section.
Docker volumes (the ones specified in the volumes: section or created using docker volume create) are of course also stored somewhere on your host computer, but docker manages that and you shouldn't normally need to know where or what the format is.
This part of the documentation is pretty good at explaining it, I think: https://docs.docker.com/storage/volumes/
As #HansKilian mentions, you don't need both volumes and services.volumes. To use services.volumes, map the host directory to the container directory like this:
services:
  db:
    image: db
    volumes:
      - /host/path/lib/db:/container/path/lib/db
With that, the directory /host/path/lib/db on the host machine will be used by the container and available at /container/path/lib/db.
Now, if you're like me, you get really confused by fake examples, so let's say the real directory on your host machine is /var/lib/db and you just want to see it at /db when you run a shell in the container (i.e., docker exec -it container-id /bin/bash).
docker-compose.yaml would look like this:
services:
  db:
    image: db
    volumes:
      - /var/lib/db:/db
Now when you run the shell and cd /db and ls, you'll see the same results as if you'd run cd /var/lib/db on the host.
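As a quick sketch (the container name is a placeholder; use the one docker ps shows for the db service):
docker exec -it <db-container> /bin/bash
cd /db
ls
The listing matches what you'd see in /var/lib/db on the host.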
If you want to use the volumes section to indicate a global volume to use, you first have to create that volume using docker volume create. The documentation Hans linked includes steps to do this. The syntax of /host/path:/container/path is replaced by volume-name:/container/path. Then, once defined, you'd alter your docker-compose.yaml to be more like this:
services:
  db:
    image: db
    volumes:
      - your-global-volume-name:/db
volumes:
  your-global-volume-name:
    external: true
Note that I have not tested or used this configuration. I'm assuming it's correct based on the other method working and the few changes I can identify in the docs.
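As a sketch of the external-volume workflow: because external: true tells Compose not to create the volume itself, the volume has to exist before you bring the stack up:
docker volume create your-global-volume-name
docker-compose up -d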

How is the Docker Mount Point decided?

I pull an image from Docker Hub, say Ghost CMS, and after reading the documentation I see that the default mount point is /var/lib/ghost/content.
Now, when I make my own application with Ghost as the base image, I map some folder, say CMS-Content, and mount it on /var/lib/ghost/content, written like this:
volumes:
  - CMS-Content:/var/lib/ghost/content
The path /var/lib/ghost/content is a system-level path, whereas CMS-Content is a folder I created to host my files (persistent data).
Finally, I decide to publish my application as an image in Docker Hub, so what will be the mount point now?
If you want to make the container's data persistent:
Using the command line:
docker run -it --name <WHATEVER> -p <LOCAL_PORT>:<CONTAINER_PORT> -v <LOCAL_PATH>:<CONTAINER_PATH> -d <IMAGE>:<TAG>
Using docker-compose.yaml:
version: '2'
services:
  cms:
    image: <IMAGE>:<TAG>
    ports:
      - <LOCAL_PORT>:<CONTAINER_PORT>
    volumes:
      - <LOCAL_PATH>:<CONTAINER_PATH>
Assume:
IMAGE: ghost-cms
TAG: latest
LOCAL_PORT: 8080
CONTAINER_PORT: 8080
LOCAL_PATH: /persistent-volume
CONTAINER_PATH: /var/lib/ghost/content
Example:
First create /persistent-volume:
$ mkdir -p /persistent-volume
Then use this docker-compose.yaml:
version: '2'
services:
  cms:
    image: ghost-cms:latest
    ports:
      - 8080:8080
    volumes:
      - /persistent-volume:/var/lib/ghost/content
And start it with:
docker-compose -f docker-compose.yaml up -d
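To confirm where the data is mounted, you can inspect the running container (the container name is a placeholder for whatever docker-compose assigned):
docker inspect -f '{{ json .Mounts }}' <cms-container-name>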
Each container has its own isolated filesystem. Whoever writes an image's Dockerfile gets to decide what filesystem paths it uses, but since it's isolated from other containers and the host it's very normal to use "system" paths. As another example, the standard Docker Hub database images use /var/lib/mysql and /var/lib/postgresql/data. For a custom application you might choose to use /app/data or even just /data, if that makes sense for you.
If you're creating an image FROM a pre-existing image like this, you'll usually inherit its filesystem layout, so the mount point for your custom image would be the same as in the base image.
Flipping through the Ghost Tutorials, it looks like most things you could want to do either involve using the admin UI or making manual changes in the content directory. The only files that change are in the CMS-Content named volume in your example (and even if you didn't name a volume, the Docker Hub ghost image specifies an anonymous volume there). That means you can't create a derived image with a standard theme or other similar setup: you can't change the content directory in a derived image, and if you experiment with docker commit (not recommended) the image it produces won't have the content from the volume.
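As an illustrative sketch (not from the original answer): a derived image inherits the base image's layout and its declared volume, so configuration files outside the content path can be baked in, while anything copied into the content path is shadowed by the volume at run time.
FROM ghost:latest
# files outside the content volume can be added to the image
# (config.production.json is an assumed example path for the official image)
COPY config.production.json /var/lib/ghost/config.production.json
# anything copied into /var/lib/ghost/content here would be hidden by the volume at run time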

Re-using existing volume with docker compose

I have set up two standalone Docker containers: one runs a webserver and the other runs a MySQL database for it.
Right now I am attempting to get this working with docker-compose. All is nice and it runs well, but I was wondering how I could re-use the existing volumes from the standalone containers I previously created (since I want to retain their data).
I saw people suggesting the external: true option for this, but I could not get the syntax right so far.
Is external: true the correct approach for this, or should I approach it differently?
Or can I just specify the path to the volume within docker-compose.yml and make it use the old existing volume?
Yes, you can do that. Here is an example:
Set external to true and set name to the name of the volume you want to mount.
version: "3.5"
services:
transmission:
image: linuxserver/transmission
container_name: transmission
volumes:
- transmission-config:/config
- /path/to/downloads:/downloads
ports:
- 51413:51413
- 51413:51413/udp
networks:
- rede
restart: always
networks:
rede:
external: true
name: rede
volumes:
transmission-config:
external: true
name: transmission-config
Per the documentation, using the external flag allows you to use volumes created outside the scope of the docker-compose file.
However, it is advisable to create a fresh volume via the docker-compose file and copy the existing data from the old volumes to the new ones.
You can create a volume explicitly using the docker volume create command, or Docker can create a volume during container or service creation. When you create a volume, it is stored within a directory on the Docker host. When you mount the volume into a container, this directory is what is mounted into the container.
If your system is running, you can also copy the data out of the mysql container to somewhere on the host:
docker cp "${container_id}":/path_to_folder /path_to_server
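If you go the fresh-volume route mentioned above, a common pattern is a throwaway container that mounts both volumes and copies the data across (volume names here are placeholders):
docker volume create new_db_data
docker run --rm -v old_db_data:/from -v new_db_data:/to alpine sh -c "cp -a /from/. /to/"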

use volume defined in Dockerfile from docker-compose

I have, for example, this service and volume defined in my docker-compose file:
postgres:
  image: postgres:9.4
  volumes:
    - db_data:/var/lib/postgresql/data

volumes:
  blue_prod_db:
    driver: rancher-nfs
Then, if you define a volume inside a Dockerfile like this:
RUN mkdir /stuff
COPY ./stuff/* /stuff/
VOLUME /stuff
How can you later access it through the docker-compose configuration and add it to a container?
When a volume is configured in the Dockerfile, any container started from that image, including the temporary containers created by later RUN steps in the build process, will have a volume defined at the specified location, e.g. /stuff. If you do not define a source for that volume at run time, you will get an anonymous volume that docker creates for you at that location. However, you can always define a volume with a source at run time (even without the volume being defined in the Dockerfile) by specifying the location in your compose file:
version: "3"
services:
app:
image: your_image
volumes:
- data:/stuff
volumes:
data:
Note that there are two volumes sections, one for a specific service that specifies where the volume is mounted inside the container, and another at the top level where you can specify the source of the volume. Without specifying a source, you'll get a local volume driver with a directory under /var/lib/docker bind mounted into the container.
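If you do want a named volume to point at a specific host directory, one way (a sketch, assuming /host/path/stuff exists on the host) is the local driver with bind options:
volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /host/path/stuff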
I do not recommend specifying volumes inside the Dockerfile in general: it breaks the ability to extend the image in later steps for child images, and it clutters the filesystem with anonymous volumes that are not easy to trace back to their origin. It's best to define them at run time with something like a compose file.

How to sync code between container and host using docker-compose?

Until now, I have used a local LAMP stack to develop my web projects and deploy them manually to the server. For the next project I want to use Docker and docker-compose to create a MariaDB container, an NGINX container and a project container, for easy development and deployment.
When developing I want my code directory on the host machine to be synchronised with the docker container. I know that could be achieved by running
docker run -dt --name containerName -v /path/on/host:/path/in/container <image>
in the cli as stated here, but I want to do that within a docker-compose v2 file.
So far I have a docker-compose.yml file looking like this:
version: '2'
services:
  db:
    #[...]
  myProj:
    build: ./myProj
    image: myProj
    depends_on:
      - db
    volumes:
      - myCodeVolume:/var/www
volumes:
  myCodeVolume:
How can I synchronise my /var/www directory in the container with my host machine (Ubuntu desktop, macOS or Windows machine)?
Thank you for your help.
It is done in pretty much the same way: you do the host:container mapping directly under the services.myProj.volumes key in your compose file:
version: '2'
services:
  ...
  myProj:
    ...
    volumes:
      - /path/to/file/on/host:/var/www
Note that the top-level volumes key is removed.
This file could be translated into:
docker create --link db -v /path/to/file/on/host:/var/www myProj
When docker-compose finds the top-level volumes section, it tries to docker volume create the keys under it before creating any container. Those volumes can then be used to hold the data you want to persist across containers.
So, if I take your file for an example, it would translate into something like this:
docker volume create myCodeVolume
docker create --link db -v myCodeVolume:/var/www myProj
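You can then confirm that the named volume exists and see where Docker stores it:
docker volume ls
docker volume inspect myCodeVolume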
