Data not persisting in Docker volume

I'm using Windows with Linux containers. I have a docker-compose file for an API and an MS SQL database. I'm trying to use volumes with the database so that my data will persist even if my container is deleted. My docker-compose file looks like this:
version: '3'
services:
  api:
    image: myimage/myimagename:myimagetag
    environment:
      - SQL_CONNECTION=myserverconnection
    ports:
      - 44384:80
    depends_on:
      - mydatabase
  mydatabase:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=mypassword
    volumes:
      - ./data:/data
    ports:
      - 1433:1433
volumes:
  sssvolume:
Everything spins up fine when I do docker-compose up. I enter data into the database and my API is able to access it. The issue comes when I stop everything, delete my database container, and do docker-compose up again: the data is no longer there. I've tried creating an external volume first and adding
external: true
to the volumes section, but that hasn't worked. I've also messed around with the path of the volume; instead of ./data:/data I've had
sssvolume:/var/lib/docker/volumes/sssvolume/_data
but still the same thing happens. It was my understanding that if you name a volume and then reference it by name in a different container, it will use that volume.
I'm not sure if my config is wrong or if I'm misunderstanding the use case for volumes and they aren't able to do what I want them to do.

MSSQL stores data under /var/opt/mssql, so you should change your volume definition in your docker-compose file to
volumes:
  - ./data:/var/opt/mssql
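If you'd rather use the named volume you already declare, a minimal sketch of the corrected service would look like this (image, credentials, and ports copied from your file; only the mount target changes):

mydatabase:
  image: mcr.microsoft.com/mssql/server:2019-latest
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=mypassword
  volumes:
    - sssvolume:/var/opt/mssql   # named volume mounted where MSSQL actually writes its data
  ports:
    - 1433:1433

volumes:
  sssvolume:

Either way, the key is the mount target: a volume mounted at /data persists fine, but MSSQL never writes anything into it.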

Related

docker-compose not loading definitions.json for RabbitMQ

I am experimenting with Docker to create a container for RabbitMQ on my Windows 11 laptop. Doing the basics, I can get it to run without error. From there I tried to expand it by adding the definitions.json to the compose YAML file. For the definitions.json, I simply downloaded the definitions straight from the UI.
My docker-compose.yml looks like this:
version: "3.8"
services:
rabbitmq:
image: rabbitmq:3-management
container_name: 'rabbitmq'
ports:
- 5672:5672
- 15672:15672
volumes:
- ./definitions.json:/etc/rabbitmq/definitions.json
- ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
- ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
networks:
- rabbitmq_go_net
networks:
rabbitmq_go_net:
driver: bridge
Now, when I run the compose file, it runs without any error at all, but none of the queues are visible in the UI. I have tried various things, but it appears as though the definitions.json is being ignored. As a further check, I reloaded the definitions through the UI and the queues reappeared.
So, how do you configure the docker compose file to load the definitions.json when creating a container from docker compose up?
Actually, the problem was the location where the definitions.json is meant to be stored. Some websites I have read have it located in the rabbitmq folder. However, I followed this link https://thomasdecaux.medium.com/deploy-rabbitmq-with-docker-static-configuration-23ad39cdbf39 and it worked. The other point to make is to ensure there is a rabbitmq.conf file that loads the definitions.json file; this is critical.
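A minimal sketch of that setup, assuming rabbitmq.conf and definitions.json sit next to the compose file (load_definitions is the rabbitmq.conf key that triggers the import at boot):

# rabbitmq.conf -- tell RabbitMQ to import the definitions on startup
load_definitions = /etc/rabbitmq/definitions.json

and mount both files into the container in the service's volumes:

volumes:
  - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro
  - ./definitions.json:/etc/rabbitmq/definitions.json:ro

Without the rabbitmq.conf entry, the mounted definitions.json is just an unused file inside the container, which matches the behavior described above.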

Grafana on Docker

I am using Docker to run Prometheus, Grafana and Node Exporter. I am trying to use named volumes and I am having some issues with that. My docker-compose code is:
version: "3.7"
volumes:
grafana_ini:
prometheus_data:
grafana_data:
dashboards_data:
services:
grafana:
build: ./grafana
volumes:
- grafana_ini:/etc/grafana/grafana.ini
- grafana_data:/etc/grafana/provisioning/datasources/datasource.yml
- dashboards_data:/etc/grafana/provisioning/dashboards
- ./dashboards/linux_dashboard.json:/etc/grafana/provisioning/dashboards/linux_dashboard.json
ports:
- 3000:3000
links:
- prometheus
prometheus:
build: ./prometheus
volumes:
- prometheus_data:/etc/prometheus/prometheus.yml
ports:
- 9090:9090
node-exporter:
image: prom/node-exporter:latest
container_name: node_exporter
restart: unless-stopped
expose:
- 9100
and my Dockerfile for Grafana is:
FROM grafana/grafana:latest
COPY ./Ini/grafana.ini /etc/grafana/grafana.ini
COPY datasource.yml /etc/grafana/provisioning/datasources/datasource.yml
COPY ./dashboards/dashboard.yml /etc/grafana/provisioning/dashboards
COPY ./dashboards/server/linux_dashboard.json /etc/grafana/provisioning/dashboards
COPY ./dashboards/server/windows_dashboard.json /etc/grafana/provisioning/dashboards
EXPOSE 3000:3000
and I am getting this error while building it
ERROR: for 2022_grafana_1 Cannot create container for service grafana: source /var/lib/docker/overlay2/4ac5b487fd7fd52491b250c4afaa433801420cd907ac4a70ddb4589fdb99368b/merged/etc/grafana/grafana.ini is not directory
ERROR: for grafana Cannot create container for service grafana: source /var/lib/docker/overlay2/4ac5b487fd7fd52491b250c4afaa433801420cd907ac4a70ddb4589fdb99368b/merged/etc/grafana/grafana.ini is not directory
Can anybody please help me.
It looks like there are some problems with the volume configuration in your Grafana container:
First, I think this was simply a typo in your question:
- grafana_ini:/etc/grafana/grafana.inianticipated location in container
I suspect that you were actually intending this:
- grafana_ini:/etc/grafana/grafana.ini
Which doesn't make any sense: grafana.ini is a file, but a volume is
a directory. Docker won't allow you to mount a directory on top of a
file, hence the error:
ERROR: .../etc/grafana/grafana.ini is not directory
You have the same problem with the grafana_data volume, which you're
attempting to mount on top of datasource.yml:
- grafana_data:/etc/grafana/provisioning/datasources/datasource.yml
I think you may be approaching this configuration in the wrong way;
you may want to read through these documents:
https://grafana.com/docs/grafana/latest/installation/docker/
https://grafana.com/docs/grafana/latest/administration/configure-docker/
https://grafana.com/docs/grafana/latest/administration/provisioning/
It is possible to configure Grafana (and Prometheus!) using only bind
mounts and environment variables (this includes installing plugins,
data sources, and dashboards), so you don't need to build your own
custom images.
Unrelated to this particular problem, there are some other things in
your docker-compose.yml that are worth changing. You should no
longer be using the links directive...
links:
  - prometheus
...because Docker maintains DNS for you automatically; your containers
can refer to each other by name with no additional configuration.
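To make that concrete, here is a rough sketch of the grafana service rewritten with bind mounts for the config files and a single named volume for Grafana's own state (the host paths are assumptions based on the Dockerfile above; adjust them to your layout):

services:
  grafana:
    image: grafana/grafana:latest
    volumes:
      - ./Ini/grafana.ini:/etc/grafana/grafana.ini                      # file onto file: OK
      - ./datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
      - ./dashboards:/etc/grafana/provisioning/dashboards               # directory onto directory
      - grafana_data:/var/lib/grafana                                   # Grafana's database and plugins
    ports:
      - 3000:3000

volumes:
  grafana_data:

Note there is no links: section; a datasource can point at http://prometheus:9090 because the services resolve each other by name on the default compose network.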

How can I store data with Docker Compose containers?

I have this docker-compose.yml with a Postgres database and Grafana running on top of it to make queries on the data.
version: "3"
services:
db:
image: postgres
container_name: db
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my_secret_password
grafana:
image: grafana/grafana
container_name: grafana
depends_on:
- db
ports:
- "3000:3000"
I start this compose with the command docker-compose up, but then, if I don't want to lose any data, I must run docker-compose stop instead of docker-compose down.
I also read about docker commit, but "the commit operation will not include any data contained in volumes mounted inside the container", so I guess it's no use for my needs.
What's the proper way to store the created volumes and reuse them with the up/down commands, even when the containers are recreated? Must I use some sort of backup method provided by each image (so, for example, a DB export for Postgres, and some other kind of export for Grafana), or is there a way to do this inside docker-compose.yml?
EDIT:
I also read about volumes, but is there a standard way to store everything?
In the link provided by @DannyB, setting volumes to ./postgres-data:/var/lib/postgresql instead of ./postgres-data:/var/lib/postgresql/data caused the container not to store the actual folder.
My question is: must every image follow a particular pattern like the one above? Is the path to the data that needs persisting documented in every Docker image's README? Or is there something like:
volumes:
  - ./my_image_root:/
Docker provides volumes as the way to persist data between container invocations and to share data between containers.
They are quite simple to declare and use in compose files:
volumes:
  postgres:
  grafana:

services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
    volumes:
      - postgres:/var/lib/postgresql/data
  grafana:
    image: grafana/grafana
    depends_on:
      - db
    volumes:
      - grafana:/var/lib/grafana
    ports:
      - "3000:3000"
Optionally, you can also set a local directory as your container volume,
with the added convenience of having the files easily accessible, not only from inside the container. This is especially helpful for mounting specific config files to their location in the container: you can edit the file locally like any other file and restart the container with the updated configuration (certificates and other similar files also make good use of this option). You do that like so:
volumes:
  - /home/myusername/postgres_data/:/var/lib/postgresql/data/
PS. I have omitted the container_name and version directives from this compose.yml because (as of Docker 20.10) the Docker Compose spec determines the version automatically, and docker compose exposes enough functionality that accessing the containers directly by short names usually isn't necessary.
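As a quick sanity check, the named volumes really do survive a full down/up cycle (illustrative session; service and volume names follow the compose file above):

docker-compose up -d     # creates containers, networks, and volumes; write some data
docker-compose down      # removes containers and networks, but NOT named volumes
docker volume ls         # the postgres and grafana volumes are still listed
docker-compose up -d     # new containers reattach to the existing volumes
docker-compose down -v   # only the -v/--volumes flag actually deletes them

So up/down is safe by default; it's down -v (or docker volume rm) that throws data away.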

Docker volume associated with Postgres image empty and not persistent

I have a docker-compose file to build a web server with Django and a Postgres database. It basically looks like this:
version: '3'
services:
  server:
    build:
      context: .
      dockerfile: ./docker/server/Dockerfile
    image: backend
    volumes:
      - ./api:/app
    ports:
      - 8000:8000
    depends_on:
      - postgres
      - redis
    environment:
      - PYTHONUNBUFFERED=1
  postgres:
    image: kartoza/postgis:11.0-2.5
    volumes:
      - pg_data:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_DB: "gis,backend"
      POSTGRES_PORT: "5432"
      POSTGRES_USER: "user"
      POSTGRES_PASS: "pass"
      POSTGRES_MULTIPLE_EXTENSIONS: "postgis,postgis_topology"
    ports:
      - 5432:5432
  redis:
    image: "redis:alpine"

volumes:
  pg_data:
I'm using a volume to make my data persistent
I managed to run my containers and add data to the database. A volume has successfully been created: docker volume ls
DRIVER    VOLUME NAME
local     server_pg_data
But this volume is empty as the output of docker system df -v shows:
Local Volumes space usage:

VOLUME NAME        LINKS     SIZE
server_pg_data     1         0B
Also, if I want or need to build the containers once again using docker-compose down and docker-compose up, data has been purged from my database. Yet, I thought that volumes were used to make data persistent on disk…
I must be missing something in the way I'm using Docker and volumes, but I don't get what:
why does my volume appear empty while there is some data in my Postgres container?
why doesn't my volume persist after doing docker-compose down?
This thread (How to persist data in a dockerized postgres database using volumes) looked similar but the solution does not seem to apply.
The kartoza/postgis image isn't configured the same way as the standard postgres image. Its documentation notes (under "Cluster Initializations"):
By default, DATADIR will point to /var/lib/postgresql/{major-version}. You can instead mount the parent location like this: -v data-volume:/var/lib/postgresql
If you look at the Dockerfile in GitHub, you will also see that parent directory named as a VOLUME, which has some interesting semantics here.
With the setting you show, the actual data will be stored in /var/lib/postgresql/11.0; you're mounting the named volume on a different directory, /var/lib/postgresql/data, which is why it stays empty. Changing the volume mount to just /var/lib/postgresql should address this:
volumes:
  - pg_data:/var/lib/postgresql:rw   # not .../data
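You can verify this from the running container before and after the change (illustrative commands; the service name postgres comes from the compose file above):

# Where does the data really live? The kartoza image writes to the versioned dir:
docker-compose exec postgres ls /var/lib/postgresql
# -> 11.0   (while /var/lib/postgresql/data stays empty)

# After remounting pg_data at /var/lib/postgresql, the volume is no longer 0B:
docker system df -v | grep pg_data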

Unable to upload data to mariadb container from docker volume

I am using MariaDB as a MySQL Docker container and am having trouble uploading the data from the Docker volume.
My database Dockerfile is similar to the one posted at this link: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_containers/install_and_deploy_a_mariadb_container
Instead of importing the data as shown in the example in the above link, I would like to import it from a docker volume.
I did try using the docker-entrypoint.sh example where I loop through the files in docker-entrypoint-initdb.d, but that gets me a mysql.sock error, probably because the database has already been shut down by the Dockerfile RUN command.
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
I think your problem is this: you want to get your schema in through a mounted volume, but you don't see it in the database, right?
First of all, if your intention is to use MariaDB, you can use the MariaDB Official Docker Image. That way you avoid Red Hat images and custom builds.
Second, you're copying the SQL file in, but you never run a mysql import or dump or anything like that. So what you can do is put a command in your docker-compose (for example):
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
  command: "mysql -u username -p database_name < file.sql"
But this is not the best way.
On the other hand, you can follow the MariaDB documentation: set up a mounted volume and import the data on the first run. With the volume in place, you don't have to run the import again.
This is the mount you have to use:
/my/own/datadir:/var/lib/mysql
Obviously, you can also mount your SQL file in a /tmp/ folder, then use docker exec to get a shell and run the import from inside.
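A rough sketch of the official-image approach (the image tag and volume name here are assumptions; the official mariadb image executes any .sql file in /docker-entrypoint-initdb.d exactly once, on the first start with an empty data directory):

database:
  image: mariadb:10.6                            # official image instead of a custom build
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql:ro   # imported on first init
    - mariadb_data:/var/lib/mysql                # keeps the database between runs
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret

volumes:
  mariadb_data:

Because the init scripts only run when /var/lib/mysql is empty, remove the volume (docker-compose down -v) if you need the import to run again.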
