How to share a Docker container volume with other team members - docker

I have a container running Postgres (for dev purposes). In that container I made some manual changes to the database that I wish to share with other team members.
The following yml file is committed to git. I would like to commit the volume itself as well, so that all other team members can compose the container using the data I manually added, i.e. each team member brings up the container with the data I pre-created.
Is there a clean way to share the volume with other team members?
We all use macOS.
Ideally I want to commit the volume to git and define a fixed volume file, but I'm not sure how to do that.
version: "3.7"
services:
postgres-server:
image: postgres:13
restart: always
env_file:
- .env.dev
environment:
PGDATA: /var/lib/postgresql/data
volumes:
- postgres-server:/var/lib/postgres-server/data
ports:
- "5432:5432"
pgadmin:
image: dpage/pgadmin4:6.5
restart: always
env_file:
- .env.dev
environment:
PGADMIN_LISTEN_PORT: 80
ports:
- "8080:80"
volumes:
- pgadmin:/var/lib/pgadmin
volumes:
postgres-server:
pgadmin:

That's not really how volumes work; they are not meant to be source-controlled in git. You could make a database dump and source-control that instead.
For example, you could use pg_dump, but there are other tools that can do this too.
pg_dump --schema-only mydb > db.sql
You can then configure your compose file to mount this dump file at startup. If the data volume is empty, Postgres will run the dump to initialize the database.
services:
  postgres-server:
    volumes:
      - ./db.sql:/docker-entrypoint-initdb.d/db.sql
      - postgres-server:/var/lib/postgres-server/data
But be careful that there is no confidential data inside this dump file. That's why, in the example above, I dumped only the schema, so no actual data is source-controlled. Data itself isn't meant for git either.
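As an illustration of this workflow, here is a minimal sketch; the service and volume names are taken from the question, while the -U postgres user and the database name mydb are assumptions you would adjust:

# Dump the schema from the running dev container (pg_dump runs inside it).
docker-compose exec postgres-server pg_dump -U postgres --schema-only mydb > db.sql

# Commit the dump next to the compose file.
git add db.sql docker-compose.yml
git commit -m "Add DB init script"

# A teammate recreates the stack from scratch; the init script only runs
# when the data directory is empty, so remove any old volume first.
docker-compose down --volumes
docker-compose up -d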

You could use tar to back up the volume contents, and from that tar archive you could then populate a new container's volume.
If you want to do so, have a look at
https://stackoverflow.com/a/68230268/4222206
That answer tars/untars the data; you would just need an extra step in between to share the tarball with your team members.
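For reference, a minimal sketch of that approach; the volume name myproject_postgres-server is an assumption (compose prefixes volume names with the project name, so check docker volume ls for the real name):

# On your machine: archive the volume contents using a throwaway container.
docker run --rm -v myproject_postgres-server:/data -v "$PWD":/backup alpine \
  tar czf /backup/postgres-volume.tar.gz -C /data .

# Share postgres-volume.tar.gz with your team (shared drive, object storage, etc. - not git).

# On a teammate's machine: create the volume and restore the archive into it.
docker volume create myproject_postgres-server
docker run --rm -v myproject_postgres-server:/data -v "$PWD":/backup alpine \
  tar xzf /backup/postgres-volume.tar.gz -C /data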

Related

How do I access odoo files (such as installed add-ons folder)?

I've set up Odoo 15 in a container with docker compose, and I'm accessing the container through the Remote - Containers extension in VS Code. I've looked everywhere, but I can't figure out how to access the Odoo files, such as the installed add-ons folder.
I've set up my volumes in the docker-compose file pretty much this way:
version: '3'
services:
  odoo:
    image: odoo:15.0
    env_file: .env
    depends_on:
      - postgres
    ports:
      - "127.0.0.1:8069:8069"
    volumes:
      - data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./extra-addons:/mnt/extra-addons
But since I would like to change the HTML/CSS of non-custom add-ons that are already present in Odoo, I would have to access the Odoo source code that lives inside the container (if that is doable).
For example, the volume odoo-addons:/mnt/extra-addons would be a directory where I could add my custom modules, but what I want is to find the source code of the add-ons that already ship with Odoo.
Use named volumes - when first mounted, a named volume is populated with the existing data from the container image at that path. In docker-compose you can do it by defining a volume:
version: '2'
volumes:
  data:
services:
  odoo:
    image: odoo:15.0
    env_file: .env
    depends_on:
      - postgres
    ports:
      - "127.0.0.1:8069:8069"
    volumes:
      - data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./extra-addons:/mnt/extra-addons
If your files reside in the /var/lib/odoo folder, you will be able to view/edit them by accessing the volume's data on the host under /var/lib/docker/volumes/{someName}_data/_data.
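To find that path, a quick sketch; the volume name odoo_data and the container name odoo are assumptions, so check docker volume ls and docker ps for the real names:

# Show where Docker stores the named volume on the host.
docker volume inspect odoo_data --format '{{ .Mountpoint }}'

# Alternatively, copy the built-in add-ons out of the running container to
# browse them locally (the path inside the official image is an assumption).
docker cp odoo:/usr/lib/python3/dist-packages/odoo/addons ./odoo-core-addons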

Data not persisting in docker volume

I'm using Windows with Linux containers. I have a docker-compose file for an API and an MS SQL database. I'm trying to use volumes with the database so that my data will persist even if my container is deleted. My docker-compose file looks like this:
version: '3'
services:
  api:
    image: myimage/myimagename:myimagetag
    environment:
      - SQL_CONNECTION=myserverconnection
    ports:
      - 44384:80
    depends_on:
      - mydatabase
  mydatabase:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=mypassword
    volumes:
      - ./data:/data
    ports:
      - 1433:1433
volumes:
  sssvolume:
Everything spins up fine when I do docker-compose up. I enter data into the database and my API is able to access it. The issue I'm having is when I stop everything, delete my database container, and then do docker-compose up again: the data is no longer there. I've tried creating an external volume first and adding
  external: true
to the volumes section, but that hasn't worked. I've also played around with the path of the volume; instead of ./data:/data I've had
  sssvolume:/var/lib/docker/volumes/sssvolume/_data
but the same thing still happens. It was my understanding that if you name a volume and then reference it by name in a different container, it will use that volume.
I'm not sure if my config is wrong, or if I'm misunderstanding the use case for volumes and they aren't able to do what I want them to do.
MSSQL stores data under /var/opt/mssql, so you should change your volume definition in your docker-compose file to
volumes:
  - ./data:/var/opt/mssql
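As a quick way to check where an image expects its data, a sketch (it prints the volume paths declared in the image, if any):

# Show the volume mount points declared by the image's Dockerfile.
docker inspect mcr.microsoft.com/mssql/server:2019-latest --format '{{ json .Config.Volumes }}'

# Verify persistence: add some data, tear down the containers (without -v,
# so volumes and bind-mounted directories survive), then bring it back up.
docker-compose down
docker-compose up -d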

How can I store data with Docker Compose containers?

I have this docker-compose.yml with a Postgres database and Grafana running on top of it to run queries on the data.
version: "3"
services:
db:
image: postgres
container_name: db
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my_secret_password
grafana:
image: grafana/grafana
container_name: grafana
depends_on:
- db
ports:
- "3000:3000"
I start this compose with the command docker-compose up, but then, if I don't want to lose any data, I must run docker-compose stop instead of docker-compose down.
I also read about docker commit, but "the commit operation will not include any data contained in volumes mounted inside the container", so I guess it's no use for my needs.
What's the proper way to store the created volumes and reuse them with the up/down commands, even when recreating the containers? Must I use some sort of backup method provided by each image (so, for example, a DB export for Postgres and some other type of export for Grafana), or is there a way to do this inside docker-compose.yml?
EDIT:
I also read about volumes, but is there a standard way to store everything?
In the link provided by @DannyB, setting volumes to ./postgres-data:/var/lib/postgresql instead of ./postgres-data:/var/lib/postgresql/data caused the container not to store the actual folder.
My question is: must every image follow a particular pattern like the one above? Is the path where the data should be stored documented in every Docker image's README? Or is there something like:
volumes:
  - ./my_image_root:/
Docker provides volumes as the way to persist data between container invocations and to share data between containers.
They are quite simple to declare and use in compose files:
volumes:
  postgres:
  grafana:
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
    volumes:
      - postgres:/var/lib/postgresql/data
  grafana:
    image: grafana/grafana
    depends_on:
      - db
    volumes:
      - grafana:/var/lib/grafana
    ports:
      - "3000:3000"
Optionally, you can also use a local directory as your container volume,
with the added convenience of having the files easily accessible not only from inside the container. This is especially helpful for mounting specific config files to their locations in the container: you can edit the file locally like any other file and restart the container with the updated configuration (certificates and similar files also make good use of this option). You do that like so:
volumes:
  - /home/myusername/postgres_data/:/var/lib/postgresql/data/
PS: I have omitted the container_name and version directives from this compose.yml because (as of Docker 20.10) the compose spec determines the version automatically, and docker compose exposes enough functionality that accessing the containers directly by short names is rarely necessary.
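To tie this back to the up/down question, a short sketch of the relevant commands, assuming the named-volume compose file above:

# Named volumes survive a normal "down"; only containers and the network are removed.
docker-compose up -d
docker-compose down
docker-compose up -d    # the postgres and grafana data is still there

# Only "down -v" (or an explicit "docker volume rm") deletes the named volumes.
docker-compose down -v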

Docker compose - save configuration

Here's my docker-compose.yml file, adapted from here:
version: '3.1'
services:
  mysql:
    image: mariadb
    environment:
      MYSQL_DATABASE: drupal8
      MYSQL_USER: drupal8
      MYSQL_PASSWORD: drupal8
      MYSQL_ROOT_PASSWORD: admin
    volumes:
      - /var/lib/mysql
    restart: always
  drupal:
    image: drupal:8.2-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always
    links:
      - mysql
Now on running this and opening up localhost:8080 in my browser, I'm presented with Drupal's configuration setup, which I duly follow and presto, my first Drupal page is created. What I ultimately need to do is:
1. Save the configuration somehow, so that the settings persist
2. Be able to push these two containers to a single repository on Docker Hub
The end goal is to be able to issue docker run myDockerHubUsername/myRepo, which would pull these two containers and Drupal would be preconfigured.
Your docker-compose setup is already saving all the data/configuration you created. Even if you destroy the containers, the data persists.
You need to keep your mounted volumes!
If you want to run this somewhere else, you always need to carry your data/volumes with you. Remember to check or change the paths.
For the second point, it is not advisable to pack multiple services into one image. If you still want to, you need to prepare a Dockerfile and build a single image from it.
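Since the compose file above uses anonymous volumes, it helps to know which volume ID backs which path before you try to carry the data anywhere. A small sketch; the container names drupal_drupal_1 and drupal_mysql_1 are assumptions, so use docker ps to find yours:

# Map each mount destination in the Drupal container to its volume name/ID.
docker inspect drupal_drupal_1 \
  --format '{{ range .Mounts }}{{ .Destination }} -> {{ .Name }}{{ "\n" }}{{ end }}'

# The same works for the mariadb container and its /var/lib/mysql volume.
docker inspect drupal_mysql_1 \
  --format '{{ range .Mounts }}{{ .Destination }} -> {{ .Name }}{{ "\n" }}{{ end }}'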

What is the standard way to move docker volume around servers?

I have a docker volume defined in my docker-compose.yml
version: "2"
services:
postgres:
image: my_image/postgresql:9.3
volumes:
- test_volume:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
test_volume:
I want to know: what is the standard way of backing up this data from the server?
Ideally I would like to just move the docker volume between servers, e.g. from my production server to my sandbox server.
Or do people usually just dump a backup SQL file and move it somewhere else with an automation tool?
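Combining the approaches from the answers above, a rough sketch of moving the volume between servers over SSH; note that the real volume name will be prefixed with the compose project name, and user@sandbox-host is a placeholder:

# On the production server: archive the volume and stream it to the sandbox server.
docker run --rm -v test_volume:/data alpine tar czf - -C /data . \
  | ssh user@sandbox-host 'cat > /tmp/test_volume.tar.gz'

# On the sandbox server: recreate the volume and restore the archive into it.
docker volume create test_volume
docker run --rm -v test_volume:/data -v /tmp:/backup alpine \
  tar xzf /backup/test_volume.tar.gz -C /data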
