Force creation of bind mount in docker-compose - docker

I am trying to make a neo4j container use host system directories for data and logs using docker-compose. My compose file looks like this
neo4j:
  image: neo4j:3.5.6
  ports:
    - "127.0.0.1:7474:7474"
    - "127.0.0.1:7473:7473"
    - "127.0.0.1:7687:7687"
  environment:
    NEO4J_AUTH: "none"
  volumes:
    - "~/neo4j/data:/data"
    - "~/neo4j/logs:/logs"
However, this only works for the logs directory; for the data directory, the container keeps its own volume. The Binds section of docker inspect looks like this:
"Binds": [
"/home/rbusche/neo4j/logs:/logs:rw",
"6f989b981c12a252776404343044b6678e0fac48f927e80964bcef409ab53eef:/data:rw"
],
Peculiarly enough, it works when I use docker run and specify the volume there. The neo4j Dockerfile declares both data and logs as container volumes. Is there any way to force docker-compose to override those?

After removing the volume 6f989b981c12a252776404343044b6678e0fac48f927e80964bcef409ab53eef and the container associated with it, it works as expected. It seems the container was clinging to an anonymous volume it had created on a previous start.
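For anyone hitting the same issue, something along these lines (the service name neo4j matches the compose file above) removes the stale container and its anonymous volume so that the bind mount takes effect on the next start:

docker-compose rm -sf neo4j    # stop and remove the old container
docker volume rm 6f989b981c12a252776404343044b6678e0fac48f927e80964bcef409ab53eef
docker-compose up -d neo4j     # the recreated container now uses the host bind mounts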

Related

docker-compose: volume problem: path on host created but not populated by container

I have the following docker-compose:
version: '3.7'
services:
  db:
    image: bitnami/mongodb:5.0.6
    volumes:
      - "/app/local-data:/data/db"
    env_file: ./db/.env
The problem is that data does not persist between docker-compose up/down, and Docker does not seem to use /app/local-data even though it creates it.
When I run docker-compose, the container starts and works normally. The directory /app/local-data is created by Docker, but MongoDB does not populate it, and no read/write error is shown on the console. This makes me think a temporary volume is assigned to the container instead. But if that is true, why does Docker still create /app/local-data without using it?
Any ideas how I can debug this?
Docker directives like volumes: don't know anything about what's actually running in the image. That directive creates the specified host and container paths if required, and bind-mounts the host path into the container path. It's up to the application code to use that directory (or not).
If you look at the bitnami/mongodb Docker Hub page under "Persisting your database", the database is configured to store data in the /bitnami/mongodb directory inside the container, and that directory needs to be the second volumes: path. Also note the requirement that the data directory needs to be writable by user ID 1001, which may or may not exist on your host (there's no specific requirement to create it).
volumes:
  - "/app/local-data:/bitnami/mongodb"
  #                   ^^^^^^^^^^^^^^^^

sudo chown -R 1001 /app/local-data
sudo docker-compose up -d
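As a quick sanity check (just a sketch; db is the service name from the compose file above), you can confirm the bind mount is actually attached and watch MongoDB populate the host directory:

sudo docker inspect --format '{{ json .Mounts }}' "$(sudo docker-compose ps -q db)"
ls /app/local-data    # MongoDB's data files should start appearing here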

Re-using existing volume with docker compose

I have set up two standalone Docker containers: one runs a webserver and the other runs a MySQL database for it.
Right now I am attempting to get this working with docker-compose. All is nice and it runs well, but I was wondering how I could re-use the existing volumes from the standalone containers I previously created (since I want to retain their data).
I saw people suggesting the external: true option for this, but I could not get the syntax right so far.
Is external: true the correct approach for this, or should I approach this differently?
Or can I just specify the path to the volume within docker-compose.yml and make it use the old existing volume?
Yes, you can do that; here is an example:
Set external to true and set name to the name of the volume you want to mount.
version: "3.5"
services:
transmission:
image: linuxserver/transmission
container_name: transmission
volumes:
- transmission-config:/config
- /path/to/downloads:/downloads
ports:
- 51413:51413
- 51413:51413/udp
networks:
- rede
restart: always
networks:
rede:
external: true
name: rede
volumes:
transmission-config:
external: true
name: transmission-config
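One caveat: Compose will not create a volume that is marked external; it has to exist already, or docker-compose up fails. If your old standalone container used an anonymous volume, you can look up its name first (old-mysql below is just a placeholder container name):

docker volume ls                                         # list existing volumes
docker inspect --format '{{ json .Mounts }}' old-mysql   # see which volume(s) the old container uses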
Per the documentation, using the external flag allows you to use volumes created outside the scope of the docker-compose file.
However, it is advisable to create a fresh volume via the docker-compose file and copy the existing data from the old volumes to the new ones.
You can create a volume explicitly using the docker volume create command, or Docker can create a volume during container or service creation. When you create a volume, it is stored within a directory on the Docker host. When you mount the volume into a container, this directory is what is mounted into the container.
If your system is running, you can also exec into the mysql container and copy the data directory out of it:
docker cp "${container_id}":/path_to_folder /path_to_server
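If you instead create a fresh Compose-managed volume and want to copy the old data into it, a throwaway container can do the copy. This is only a sketch; old-mysql-data and myproject_mysql-data are placeholder volume names:

docker run --rm \
  -v old-mysql-data:/from \
  -v myproject_mysql-data:/to \
  alpine sh -c 'cp -a /from/. /to/'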

use volume defined in Dockerfile from docker-compose

I have, for example, this service and volume defined in my docker-compose file:
postgres:
  image: postgres:9.4
  volumes:
    - db_data:/var/lib/postgresql/data

volumes:
  blue_prod_db:
    driver: rancher-nfs
Then, if you define a volume inside a Dockerfile like this:
RUN mkdir /stuff
COPY ./stuff/* /stuff/
VOLUME /stuff
How can you later access it through the docker-compose configuration and add it to a container?
When a volume is configured in the Dockerfile, any container started from that image, including the temporary containers created later in the build process by RUN commands, will have a volume defined at the specified location, e.g. /stuff. If you do not define a source for that volume at run time, you will get an anonymous volume created by Docker for you at that location. However, you can always define a volume with a source at run time (even without the volume being declared in the Dockerfile) by specifying the location in your compose file:
version: "3"
services:
app:
image: your_image
volumes:
- data:/stuff
volumes:
data:
Note that there are two volumes sections, one for a specific service that specifies where the volume is mounted inside the container, and another at the top level where you can specify the source of the volume. Without specifying a source, you'll get a local volume driver with a directory under /var/lib/docker bind mounted into the container.
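If you want to see where that local volume actually ends up on the host, you can inspect it (yourproject_data is a placeholder; Compose prefixes the volume name with the project name):

docker volume inspect --format '{{ .Mountpoint }}' yourproject_data
# typically prints a path like /var/lib/docker/volumes/yourproject_data/_data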
I do not recommend specifying volumes inside the Dockerfile in general: it breaks the ability to extend the image in later steps for child images, and it clutters the filesystem with anonymous volumes that are not easy to trace back to their origin. It's best to define them at run time with something like a compose file.
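As an aside, the anonymous volumes that such images leave behind can be listed and cleaned up with the standard volume commands, for example:

docker volume ls --filter dangling=true   # list unused (including anonymous) volumes
docker volume prune                       # remove unused volumes (asks for confirmation first)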

Moving a volume between containers with docker-compose

I have image A (some_laravel_project) and B (laravel_module). Image A is a Laravel project that looks like this.
app
modules
  core
  (volume from Image B here)
config
As the list above suggests, I want to share a volume from Image B into Image A using docker-compose; I want to access the files from container B.
This is the docker-compose file I tried, and I didn't receive any errors creating those images in GitLab CI. I checked, and the volume and its files are stored in the module_user:latest container.
I think I made a mistake mounting the volume to some_laravel_project.
version: '3'
services:
  laravel:
    image: some_laravel_project
    working_dir: /var/www
    volumes:
      - /var/www/storage
      - userdata:/var/www/Modules
  user:
    image: laravel_module
    volumes:
      - userdata:/user
volumes:
  userdata:
  webroot:
The method you used to share volumes across containers in docker-compose is the correct one. You can find this documented under docker-compose volumes:
if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key. Use named volumes with services, …
In your case, the directory /var/www/Modules in the laravel service will have the same content as /user inside the user service. You can verify that by going into each container and checking the directory by running:
docker exec -it <container-name> bash
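Concretely, a quick check might look like this (the container names are placeholders; get the real ones from docker-compose ps):

docker exec -it <laravel-container> ls /var/www/Modules
docker exec -it <user-container> ls /user
# both listings should show the same files, since both paths are backed by the "userdata" volume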

Unable to understand a Docker-compose service property

I am new to Docker and stumbled upon a docker-compose file. I get the gist of all the other properties, but I have no idea what the lines below are doing:
volumes:
  - ./data:/data/db
Can anyone please help me with this?
Multiple volumes can be attached to your container; each one is defined as a pair:
volumes:
  - /parent/host/path01:/inside/container/path_one
  - /parent/host/path02:/inside/container/path_another
In each pair, the left side is a pre-existing path on the host, reachable before the container is created; the right side is the path under which the freshly launched container sees that content from inside.
In your example, there evidently exists a directory called data in the same directory you launch docker-compose from; ./data reaches it with a relative path. The right side, /data/db, is what the code in your container calls that same directory.
/full/path/to/reach/data:/data/db
uses the absolute path to reach that same ./data directory, which lives on the host that docker-compose is executed on.
This volume mapping allows permanent storage on the host to become visible (readable and writable) to the container. Since the container filesystem is ephemeral and goes away when the container exits, the mapping gives the container access to permanent storage for the specified paths, which must appear in your YAML file. This is especially important for database containers like mongo: all files used in your container that are not mapped in the volumes YAML disappear once the container exits.
Here is a typical YAML snippet for mongo, where it gains access to permanent storage on the host:
loudmongo:
  image: mongo
  container_name: loud_mongo
  restart: always
  ports:
    - 127.0.0.1:27017:27017
  volumes:
    - /cryptdata7/var/data/db:/data/db
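To see the persistence in action (a sketch using the loudmongo service from the snippet above), destroy and recreate the container; the data on the host survives:

docker-compose up -d loudmongo
docker-compose rm -sf loudmongo   # stop and remove the container
docker-compose up -d loudmongo    # the new container still finds its data under /cryptdata7/var/data/db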
The dash symbol is probably what is throwing you off; it is simply (perhaps poorly formatted) YAML syntax for a list element.
The volume syntax after the dash is just following the so-called "short" syntax for a host-to-container bind-mounted volume mapping.
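For comparison, the same mapping can also be written in Compose's long volume syntax (supported from file format version 3.2 onward), which spells out the type explicitly:

volumes:
  - type: bind
    source: ./data
    target: /data/db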