I have a docker-compose file to build a web server with Django and a Postgres database. It basically looks like this:
version: '3'
services:
  server:
    build:
      context: .
      dockerfile: ./docker/server/Dockerfile
    image: backend
    volumes:
      - ./api:/app
    ports:
      - 8000:8000
    depends_on:
      - postgres
      - redis
    environment:
      - PYTHONUNBUFFERED=1
  postgres:
    image: kartoza/postgis:11.0-2.5
    volumes:
      - pg_data:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_DB: "gis,backend"
      POSTGRES_PORT: "5432"
      POSTGRES_USER: "user"
      POSTGRES_PASS: "pass"
      POSTGRES_MULTIPLE_EXTENSIONS: "postgis,postgis_topology"
    ports:
      - 5432:5432
  redis:
    image: "redis:alpine"
volumes:
  pg_data:
I'm using a volume to make my data persistent.
I managed to run my containers and add data to the database. A volume has successfully been created, as docker volume ls shows:
DRIVER VOLUME NAME
local server_pg_data
But this volume is empty, as the output of docker system df -v shows:
Local Volumes space usage:
VOLUME NAME LINKS SIZE
server_pg_data 1 0B
Also, if I need to rebuild the containers using docker-compose down and docker-compose up, the data is purged from my database. Yet I thought that volumes were used to make data persist on disk…
I must be missing something in the way I'm using Docker and volumes, but I don't get what:
why does my volume appear empty while there is some data in my postgres container?
why does my volume not persist after doing docker-compose down?
This thread (How to persist data in a dockerized postgres database using volumes) looked similar, but the solution does not seem to apply.
The kartoza/postgis image isn't configured the same way as the standard postgres image. Its documentation notes (under "Cluster Initializations"):
By default, DATADIR will point to /var/lib/postgresql/{major-version}. You can instead mount the parent location like this: -v data-volume:/var/lib/postgresql
If you look at the Dockerfile in GitHub, you will also see that parent directory named as a VOLUME, which has some interesting semantics here.
With the setting you show, the actual data will be stored in /var/lib/postgresql/11.0; you're mounting the named volume on a different directory, /var/lib/postgresql/data, which is why it stays empty. Changing the volume mount to just /var/lib/postgresql should address this:
volumes:
  - pg_data:/var/lib/postgresql:rw   # not .../data
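After changing the mount, a quick sanity check (assuming the compose project is named server, as in the question) is to recreate the stack and look at the volume size again:

docker-compose down        # without -v, so the named volume survives
docker-compose up -d
docker system df -v        # server_pg_data should now report a non-zero SIZE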
Related
I'm using Windows with Linux containers. I have a docker-compose file for an API and an MS SQL database. I'm trying to use volumes with the database so that my data will persist even if my container is deleted. My docker-compose file looks like this:
version: '3'
services:
  api:
    image: myimage/myimagename:myimagetag
    environment:
      - SQL_CONNECTION=myserverconnection
    ports:
      - 44384:80
    depends_on:
      - mydatabase
  mydatabase:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=mypassword
    volumes:
      - ./data:/data
    ports:
      - 1433:1433
volumes:
  sssvolume:
Everything spins up fine when I do docker-compose up. I enter data into the database and my API is able to access it. The issue I'm having is when I stop everything, delete my database container, and then do docker-compose up again: the data is no longer there. I've tried creating an external volume first and adding
external: true
to the volumes section, but that hasn't worked. I've also messed around with the path of the volume; instead of ./data:/data I've had
sssvolume:/var/lib/docker/volumes/sssvolume/_data
but still the same thing happens. It was my understanding that if you name a volume and then reference it by name in a different container, it will use that volume.
I'm not sure if my config is wrong or if I'm misunderstanding the use case for volumes and they aren't able to do what I want them to do.
MSSQL stores its data under /var/opt/mssql, so you should change the volume definition in your docker-compose file to:
volumes:
  - ./data:/var/opt/mssql
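If you'd rather keep the named volume you already declared (sssvolume), the same idea applies; a minimal sketch based on the compose file above:

mydatabase:
  image: mcr.microsoft.com/mssql/server:2019-latest
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=mypassword
  volumes:
    - sssvolume:/var/opt/mssql   # named volume mounted at MSSQL's data root
  ports:
    - 1433:1433
volumes:
  sssvolume: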
I have been trying to install Drupal using the official image from Docker Hub. I created a new folder in my D directory for my Drupal project and created a docker-compose.yml file.
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
#   (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres
version: '3.1'
services:
  drupal:
    image: drupal:8-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always
  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran the docker-compose up -d command in a terminal from within the folder which contained the docker-compose.yml file, my Drupal container and its database were successfully installed and running, and I was able to access the site from http://localhost:8080, but I couldn't find their core files in the folder. It was just the docker-compose.yml file in the folder.
I then removed the whole Docker container and began a fresh installation, editing the volumes section in the docker-compose.yml file to point to the directory where I want the core files of Drupal to be populated.
Example: D:/My Project/Drupal Project.
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
#   (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres
version: '3.1'
services:
  drupal:
    image: drupal:latest
    ports:
      - 8080:80
    volumes:
      - d:\projects\drupalsite/var/www/html/modules
      - d:\projects\drupalsite/var/www/html/profiles
      - d:\projects\drupal/var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - d:\projects\drupalsite/var/www/html/sites
    restart: always
  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran docker-compose up I received the error shown below.
Container drupalsite_postgres_1 Created 3.2s
- Container drupalsite_drupal_1 Creating 3.2s
Error response from daemon: invalid mount config for type "volume": invalid mount path: 'z:/projects/drupalsite/var/www/html/sites' mount path must be absolute
PS Z:\Projects\drupalsite>
Please help me find a solution to this.
If these directories contain your application, they probably shouldn't be in volumes: at all. Create a file named Dockerfile that initializes your custom application:
FROM drupal:8-apache
COPY modules/ /var/www/html/modules/
COPY profiles/ /var/www/html/profiles/
COPY themes/ /var/www/html/themes/
COPY sites/ /var/www/html/sites/
# EXPOSE, CMD, etc. come from the base image
Then reference this in your docker-compose.yml file:
version: '3.8'
services:
  drupal:
    build: .   # instead of image:
    ports:
      - 8080:80
    restart: always
    # no volumes:
  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
If you really want to use volumes: here, there are three forms of that directive. The form you have in the question with just a path creates an anonymous volume: it causes Compose to persist that directory, initialized from what's in the image, but disconnected from your host system. With a bare name and a path, it creates a named volume, which is similar but can be explicitly managed. With two paths, it creates a bind mount, which unconditionally replaces the container content with the host-system content (there is no initialization).
version: '3.8'
services:
  something:
    volumes:
      - /path1              # anonymous volume
      - named:/path2        # named volume
      - /host/path:/path3   # bind mount
volumes:   # named volumes referenced in containers only
  named:   # usually do not need any settings
So if you do want to replace the image's contents with host directories, you need to use the bind-mount syntax. Relative paths here are interpreted relative to the location of the docker-compose.yml file.
version: '3.8'
services:
  drupal:
    image: drupal:8-apache
    volumes:
      - ./modules:/var/www/html/modules
      # etc.
A final comment on named volume initialization: your file has a comment about initializing anonymous volumes. There are two major problems with this approach, though. First, the second time you start the container, the content of the volume takes precedence, and any changes in the underlying images will be ignored. Second, this setup only works for Docker named and anonymous volumes, but not Docker bind mounts, volume mounts in Kubernetes, or other types of mount. I'd generally avoid relying on this "feature".
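If you do rely on this initialization behavior and later need the image content to win again, the volumes have to be removed first so they are re-created from the image; for example:

docker-compose down -v    # also removes this project's named and anonymous volumes
docker-compose up --build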
I have this docker-compose.yml, with a Postgres database and Grafana running over it to make queries on the data.
version: "3"
services:
db:
image: postgres
container_name: db
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my_secret_password
grafana:
image: grafana/grafana
container_name: grafana
depends_on:
- db
ports:
- "3000:3000"
I start this compose with the command docker-compose up, but then, if I don't want to lose any data, I must run docker-compose stop instead of docker-compose down.
I also read about docker commit, but "the commit operation will not include any data contained in volumes mounted inside the container", so I guess it's no use for my needs.
What's the proper way to store the created volumes and reuse them with the up/down commands, even when recreating the containers? Must I use some backup method provided by each image (so, for example, a DB export for Postgres, and some other type of export for Grafana), or is there a way to do this inside docker-compose.yml?
EDIT:
I also read about volumes, but is there a standard way to store everything?
In the link provided by @DannyB, setting volumes to ./postgres-data:/var/lib/postgresql instead of ./postgres-data:/var/lib/postgresql/data caused the container to not store the actual folder.
My question is: must every image follow a particular pattern like the one above? Is the path to the data underlying the volume documented in every Docker image's README? Or is there something like:
volumes:
  - ./my_image_root:/
Docker provides volumes as the way to persist data between container invocations and to share data between containers.
They are quite simple to declare and use in compose files:
volumes:
  postgres:
  grafana:

services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
    volumes:
      - postgres:/var/lib/postgresql/data
  grafana:
    image: grafana/grafana
    depends_on:
      - db
    volumes:
      - grafana:/var/lib/grafana
    ports:
      - "3000:3000"
Optionally, you can also set a local directory as your container volume, with the added convenience of having the files easily accessible, not only from inside the container. This is especially helpful for mounting specific config files to their location in the container: you can edit the file locally like any other file and restart the container with the updated configuration (certificates and other similar files also make good use of this option). You do that like so:
volumes:
  - /home/myusername/postgres_data/:/var/lib/postgresql/data/
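For instance, a small sketch of mounting a local Grafana config file (assuming a grafana.ini next to the compose file; /etc/grafana/grafana.ini is where the Grafana image reads its configuration):

grafana:
  image: grafana/grafana
  volumes:
    - ./grafana.ini:/etc/grafana/grafana.ini   # edit locally, restart to apply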
PS: I have omitted the container_name and version directives from this compose.yml because (as of Docker 20.10) the Docker Compose spec determines the version automatically, and docker compose exposes enough functionality that accessing the containers directly by short names is usually unnecessary.
I am using MariaDB as a MySQL Docker container and am having trouble loading the data from the Docker volume.
My database dockerfile is similar to the one posted at this link. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_containers/install_and_deploy_a_mariadb_container
Instead of importing the data as shown in the example in the above link, I would like to import it from a docker volume.
I did try using the docker-entrypoint.sh example where I loop through the files in docker-entrypoint-initdb.d, but that gets me a mysql.sock error, probably because the database is already shut down by the Dockerfile RUN command.
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
I think your problem is this:
You want to get your schema through a mounted volume, but you don't see it in the database, right?
First of all, if your intention is to use MariaDB, you can use the official MariaDB Docker image.
This way, you avoid Red Hat images or custom builds.
Second, you're copying in this SQL file, but you never run a mysql import or dump or anything like that. So what you can do is put a command in your docker-compose file (for example):
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
  command: sh -c "mysql -u username -p database_name < file.sql"   # needs a shell for the < redirection
But this is not the best way.
On the other hand, you can follow the MariaDB documentation, set up a mounted volume, and import the data on the first run. With the volume in place, you don't have to run the import again.
This is the mount you have to use:
/my/own/datadir:/var/lib/mysql
Obviously, you can also mount your SQL file into a /tmp/ folder and, with docker exec, open a shell and run the import from inside the container.
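Putting that together, a minimal sketch using the official image (it runs any *.sql files found in /docker-entrypoint-initdb.d, but only when the data directory is empty, i.e. on the first start):

database:
  image: mariadb:10
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql   # imported on first init only
    - ./datadir:/var/lib/mysql                               # persists the data between runs
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret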
I have a docker-compose file in a local folder on my Mac. I also have another folder, /src, which should act as the root element. The docker-compose file looks like this:
version: '2'
services:
  fpm:
    image: sbusso/php-fpm-ion
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    links:
      - fpm
      - db
  db:
    image: orchardup/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myproject
I understand what we are doing here, but I am missing the part where /src is taken as the root, and I think I need to set up an lsync service which syncs between my local folder and my Docker container. So I found this one, but it is not working properly: the root /src is not taken into account. I just want to type localhost in my browser and it should serve the /src folder.
version: '2'
services:
  fpm:
    image: sbusso/php-fpm-ion
    links:
      - sync
    volumes_from:
      - sync
  db:
    image: orchardup/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myproject
    links:
      - sync
    volumes_from:
      - sync
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    links:
      - sync
    volumes_from:
      - sync
  sync:
    image: zeroboh/lsyncd
    volumes:
      - /var/www/html
      - ./src:/src:Z
      - ./docker-config/nginx:/etc/nginx/conf.d
      - /var/lib/php/session
      - ./docker-config/lrsync/lrsync.lua:/etc/lrsync/lrsync.lua
      - ./sync:/sync
What I do understand is that every service links the sync service into it. What I do not understand is why every service needs a volumes_from, and what the syntax in sync explicitly means. Can somebody help me set this up correctly?
Thanks
volumes_from imports volumes from another container
By default, each container has no volumes. You can define local volumes using the volumes attribute, but the volumes are only used in that container. In order for other containers to make use of them, those containers must import the volumes using volumes_from, pointing to the name of one or more containers. All volumes in those named containers are then made available in the current container.
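A minimal sketch of the pattern, with hypothetical service names (note that volumes_from only exists in Compose file format 2; it was removed in version 3):

version: '2'
services:
  provider:
    image: busybox
    volumes:
      - /shared          # anonymous volume defined in this container
  consumer:
    image: busybox
    volumes_from:
      - provider         # /shared becomes visible here as well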
The Z volume label indicates a private volume
You are mounting the /src volume using this:
volumes:
  - ./src:/src:Z
That's fine, except you are also using volumes_from, and your question indicates that you specifically wanted to share /src. But by using the Z label, you have told Docker to make this a private volume.
From the documentation:
Volume labels
Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, Docker does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume.
In this case, "current container" is sync, so only that container may use the volume. The others may not use it.
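So if /src really does need to be shared with the other containers, the lowercase z label is the likely fix; a sketch of the change in the sync service:

sync:
  image: zeroboh/lsyncd
  volumes:
    - ./src:/src:z    # shared SELinux label instead of the private Z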