I am trying to understand (maybe I already do, maybe not) the difference between volumes_from and volumes in a docker-compose.yml file. I have already read the docs, but they are not entirely clear to me, so I am working through a real exercise.
I have the following setup:
a root directory
a directory named php-apache with a Dockerfile under root
a directory named mongo with a Dockerfile under root
a docker-compose.yml file under root
Note: If it's not clear to you, take a look here; everything shown below is there as well (mongodb-test branch).
At php-apache/Dockerfile I have the following entry:
VOLUME /data /data
At mongo/Dockerfile I have the following entry:
VOLUME /data/db /data/configdb
At docker-compose.yml I have the following:
version: '2'
services:
  php-apache:
    container_name: "php55-dev"
    image: reynierpm/php55-dev
    ports:
      - "80:80"
    environment:
      PHP_ERROR_REPORTING: 'E_ALL & ~E_DEPRECATED & ~E_NOTICE'
    volumes:
      - ~/mmi:/var/www
    volumes_from:
      - volumes_data
  mongo:
    container_name: "mongodb"
    image: reynierpm/mongodb
    ports:
      - "27017:27017"
    volumes_from:
      - volumes_data
  volumes_data:
    image: tianon/true
    volumes:
      - ~/data/mongo:/data/db
      - ~/data:/data
This is what I am understanding from that setup:
The image reynierpm/php55-dev will expose a /data directory and this will be mapped to ~/data:/data in the tianon/true image
The image reynierpm/mongodb will expose /data/db to the outside and map it to /data/configdb internally; then /data/db is mapped to ~/data/mongo:/data/db in the tianon/true image.
It's a mess in my head right now, because what I want to achieve is the following:
Keep the code on the host mapped into the container (the <path_on_host>:/var/www line in docker-compose.yml)
Keep data stored in a local directory on the host
So, is what I am doing fine? Feel free to make any modification to this setup, since I am still learning.
The image reynierpm/php55-dev will expose a /data directory and this will be mapped to ~/data:/data in the tianon/true image
It's better to say it will be mapped to your ~/data on the Docker host. Please note that there will be a /data/db from the second volume too.
The image reynierpm/mongodb will expose /data/db to the outside and map it to /data/configdb internally; then /data/db is mapped to ~/data/mongo:/data/db in the tianon/true image.
This container will be the same as php-apache in terms of the volumes taken from the volumes_data container.
Regarding your objectives:
If your code is in ~/mmi/ you are fine. You are also mounting the MongoDB database directory into the php-apache container; I don't think you need that.
You need to create a user-defined network for your container connectivity, or link containers (legacy). To create a user-defined network:
docker network create --driver bridge <your network name>
You don't need a data-only container (DOC). That's why I removed the third container. I also fixed the unnecessary volume mappings.
Updated docker-compose.yml:
version: '2'
services:
  php-apache:
    container_name: "php55-dev"
    image: reynierpm/php55-dev
    ports:
      - "80:80"
    environment:
      PHP_ERROR_REPORTING: 'E_ALL & ~E_DEPRECATED & ~E_NOTICE'
    volumes:
      - ~/mmi:/var/www
      - ~/data:/data
  mongo:
    container_name: "mongodb"
    image: reynierpm/mongodb
    ports:
      - "27017:27017"
    volumes:
      - ~/data/mongo:/data/db
networks:
  default:
    external:
      name: <your network name>
Please note you have to call your mongo container from your web application by its name, in your case mongodb.
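For instance, here is a minimal sketch of what I mean (the MONGO_HOST and MONGO_PORT variable names are only assumptions for illustration; use whatever your application actually reads). You could pass the service name to the web container through its environment:
  php-apache:
    environment:
      MONGO_HOST: mongodb    # the container name resolves via DNS on the user-defined network
      MONGO_PORT: "27017"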
Related
I am trying to learn Kong using docker-compose. I am able to run kong + konga and create services, but whenever I do docker-compose down and then up again I lose all my data:
kong:
  container_name: kong
  image: kong:2.1.4-alpine
  restart: unless-stopped
  networks:
    kong-net:
      ipv4_address: 172.1.1.40
  volumes:
    - kong_data:/usr/local/kong/declarative
  environment:
    KONG_DATABASE: postgres
    KONG_PG_HOST: kong-database
    KONG_PG_USER: kong
    KONG_PG_PASSWORD: password
    KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
    KONG_DB_UPDATE_FREQUENCY: 1m
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    KONG_ADMIN_ERROR_LOG: /dev/stderr
  depends_on:
    - kong-migration
  ports:
    - "8001:8001"
    - "8444:8444"
    - "8000:8000"
    - "8443:8443"
It looks like the volume mapping is not working. Please help.
If you want to keep data after your Kong docker-compose stack is brought down, it is better to run Kong in database mode.
Then you will create a persistent volume for your database, and it will keep your changes.
In the Kong manual you will find that two types of database are supported: PostgreSQL and Cassandra.
PostgreSQL is my choice for small projects, as I'm not planning for huge horizontal scale with a Cassandra database.
As you will find in the manual, starting your project with Docker and a database is very simple.
But remember to add a volume to your database service, because the sample shown in the manual has no volume.
For PostgreSQL you can add -v /custom/mount:/var/lib/postgresql/data to the docker run command
or
volumes:
  postgres-data:
    driver: local

services:
  postgres:
    restart: unless-stopped
    image: postgres:latest
    environment:
      - POSTGRES_USER=your_db_user
      - POSTGRES_DB=kong
      - POSTGRES_PASSWORD=your_db_password
    volumes:
      - postgres-data:/var/lib/postgresql/data
Answer: You should use a Docker volume to have persistent data.
As the reference says:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers
The first step is to create a volume that your host and Docker container will use to share data:
docker volume create new-volume
The second step is to use that volume in docker-compose (in your case).
A single Docker Compose service with a volume looks like this:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
On the first invocation of docker-compose up the volume will be created. The same volume will be reused on following invocations.
A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
external: true
Hello guys, I am facing a problem with volumes_from in a docker-compose file.
I have 3 services: the first one has my app files, the second is php-fpm, which takes its volumes from the data service, and the third is nginx.
My file is like this:
version: '2'
services:
  cms_data:
    image: "image from private repository containing application files"
    container_name: "cms-data"
  php-fpm:
    image: "image from private repository containing php configuration"
    container_name: "php-fpm"
    env_file:
      - ../.env.production
    volumes_from:
      - cms_data
    working_dir: /iprice/octobercms
    expose:
      - 9000
    depends_on:
      - cms_data
    restart: "always"
  nginx:
    image: "image from private repository containing nginx configuration"
    container_name: "nginx"
    ports:
      - "80:80"
      - "443:443"
    links:
      - php-fpm
    volumes_from:
      - cms_data
    depends_on:
      - cms_data
    restart: "always"
The cms-data image has the files, which is correct, but the php-fpm container doesn't. Please help.
volumes_from mounts the volumes present on other containers. It does not create new volumes.
The cms-data container does not have any volumes associated with it, so volumes_from can't do anything. If you want to share a particular folder inside cms-data, first create a volume linking that folder.
NOTE: creating a volume will overwrite the contents of the container folder with the /path/on/host folder. So first copy the contents of the container folder to this host folder.
Run the current docker-compose as is so the containers start.
Copy the contents from the cms-data container to the host folder:
docker cp cms-data:/path/to/shared/folder /path/on/host
Make the following changes to the docker-compose file and restart.
services:
  cms_data:
    image: "image from private repository containing application files"
    container_name: "cms-data"
    volumes:
      - /path/on/host:/path/to/shared/folder
  ...
I'm trying to share code from the host to the php and nginx containers. Here is my docker-compose.yml:
version: "3"
services:
php:
image: php:fpm
container_name: php
ports:
- 9000:9000
volumes:
- php_data:/var/www/html/
# configs
- ./php/config:/usr/local/etc/php
nginx:
image: nginx
ports:
- 80:80
volumes:
- php_data:/var/www/html
# logs
- ./nginx/logs:/var/log/nginx
# configs
- ./nginx/config/default.conf:/etc/nginx/conf.d/default.conf:ro
- ./nginx/config/nginx.conf:/etc/nginx/nginx.conf:ro
volumes:
php_data:
./code
The error when running docker-compose up is:
ERROR: In file './docker-compose.yml', volume 'php_data' must be a
mapping not a string.
How do I make docker-compose know that I need ./code shared with both the nginx and php containers?
Docker is saying it needs a mapping, so you need to set something like ./code:{map} in your volume.
I think you have 2 options here, since you share the code mapped to /var/www/html on both containers.
Option 1
Set the /var/www/html map in your volume:
volumes:
  php_data:
    ./code:/var/www/html
then change /var/www/html to / in your containers.
Option 2
volumes:
  php_data:
    ./code:/
I am new to Docker and I am developing a project using docker-compose. From the documentation I have learned that I should be using data-only containers to keep data persistent, but I am unable to do so using docker-compose.
Whenever I do docker-compose down it removes the data from the db, but with docker-compose stop the data is not removed. Maybe this is because I am not creating a named data volume, and docker-compose down removes all the containers. So I tried naming the volume, but it threw me errors.
Please have a look at my yml file:
version: '2'
services:
  data_container:
    build: ./data
    #volumes:
    #  - dataVolume:/data
  db:
    build: ./db
    ports:
      - "5445:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
      # - PGDATA=/var/lib/postgresql/data/pgdata
    volumes_from:
      # - container:db_bus
      - data_container
  geoserver:
    build: ./geoserver
    depends_on:
      - db
    ports:
      - "8004:8080"
    volumes:
      - ./geoserver/data:/opt/geoserverdata_dir
  web:
    build: ./web
    volumes:
      - ./web:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    command: python manage.py runserver 0.0.0.0:8000
  nginx:
    build: ./nginx
    ports:
      - "83:80"
    depends_on:
      - web
The Dockerfile for the data_container is:
FROM stackbrew/busybox:latest
MAINTAINER Tom Offermann <tom@offermann.us>
# Create data directory
RUN mkdir /data
# Create /data volume
VOLUME /data
I tried this, but with docker-compose down the data is lost. I tried naming the data_container's volume (as you can see from the commented lines), and it threw me this error:
ERROR: Named volume "dataVolume:/data:rw" is used in service "data_container" but no declaration was found in the volumes section.
So right now what I am doing is: I created a standalone, data-only, named container and put it in the volumes_from value of the db. It worked fine and didn't remove any data, even after doing docker-compose down.
My queries:
What is the best approach to make containers that can store a database's data using docker-compose, and to use them properly?
My conscience is not agreeing with me on the approach I have opted for, the one of creating a standalone data container. Any thoughts?
docker-compose down
does the following
Stops containers and removes containers, networks, volumes, and images
created by up
So the behaviour you are experiencing is expected.
Use docker-compose stop to shut down containers created with the docker-compose file without removing their volumes.
Secondly, you don't need the data-container pattern in version 2 of Docker Compose. So remove that and just use:
db:
  ...
  volumes:
    - /var/lib/postgresql/data
docker-compose down stops containers but also removes them (with everything: networks, ...).
Use docker-compose stop instead.
I think the best approach to make containers that can store a database's data with docker-compose is to use named volumes:
version: '2'
services:
  db: # https://hub.docker.com/_/mysql/
    image: mysql
    volumes:
      - "wp-db:/var/lib/mysql:rw"
    env_file:
      - "./conf/db/mysql.env"
volumes:
  wp-db: {}
Here, it will create a named volume called "wp-db" (if it doesn't exist) and mount it in /var/lib/mysql (in read-write mode, the default). This is where the database stores its data (for the mysql image).
If the named volume already exists, it will be used without creating it.
When starting, the mysql image looks for databases in /var/lib/mysql (your volume) in order to use them.
You can have more information with the docker-compose file reference here:
https://docs.docker.com/compose/compose-file/#/volumes-volume-driver
To store database data, make sure your docker-compose.yml looks like this if you want to use a Dockerfile:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
And your docker-compose.yml will look like this if you want to use an image instead of a Dockerfile:
version: '3.1'
services:
  php:
    image: php:7.4-apache
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
If you want to store or preserve the MySQL data, you must remember to add these two lines to your docker-compose.yml: the mount inside the db service,
    volumes:
      - mysql-data:/var/lib/mysql
and the top-level declaration
volumes:
  mysql-data:
After that, use this command:
docker-compose up -d
Now your data will be persistent and will not be deleted, even after using this command:
docker-compose down
Extra: but if you want to delete all data, then use
docker-compose down -v
To verify or check the database data, list the volumes using this command:
docker volume ls
DRIVER VOLUME NAME
local 35c819179d883cf8a4355ae2ce391844fcaa534cb71dc9a3fd5c6a4ed862b0d4
local 133db2cc48919575fc35457d104cb126b1e7eb3792b8e69249c1cfd20826aac4
local 483d7b8fe09d9e96b483295c6e7e4a9d58443b2321e0862818159ba8cf0e1d39
local 725aa19ad0e864688788576c5f46e1f62dfc8cdf154f243d68fa186da04bc5ec
local de265ce8fc271fc0ae49850650f9d3bf0492b6f58162698c26fce35694e6231c
local phphelloworld_mysql-data
I am looking for guidance on the cleanest way to write a docker-compose.yml version 2 file that:
Has container state clearly separated from the container.
Has container state mounted to the host for simplicity (single data point: simply back up /data on the host and you're done; I'm open to being wrong about this, see the questions below).
The app is a classic web app with a mysql & redis database for the backend, and with a webserver that is behind a proxy that serves static assets directly. Some details like depends_on, environment variables and the networks are intentionally left out.
Here is what I use at the moment:
version: "2"
services:
proxy:
build:
context: ./apps/nginx
ports:
- "80:80"
- "443:443"
volumes:
- /etc/localtime:/etc/localtime:ro
- ./data/web/assets:/var/www/assets:ro
- ./data/web/puma:/var/run/puma
web:
build:
context: ./apps/rails
volumes:
- /etc/localtime:/etc/localtime:ro
- ./data/web/assets:/srv/app/public/assets
- ./data/web/puma:/var/run/puma
db:
image: mysql:5.7
volumes:
- /etc/localtime:/etc/localtime:ro
- ./data/mysql:/var/lib/mysql
redis:
image: redis
volumes:
- /etc/localtime:/etc/localtime:ro
- ./data/redis:/data
Here is what I plan to use for the next release:
version: "2"
services:
proxy:
build:
context: ./apps/nginx
ports:
- "80:80"
- "443:443"
volumes_from:
- localtime
- web-assets-data:ro
- web-puma-data
web:
build:
context: ./apps/rails
volumes_from:
- localtime
- web-assets-data
- web-puma-data
db:
image: mysql:5.7
volumes_from:
- localtime
- db-data
redis:
image: redis
volumes_from:
- localtime
- redis-data
web-assets-data:
image: ubuntu:14.04
volumes:
- ./data/web/assets:/srv/app/public/assets
web-puma-data:
image: ubuntu:14.04
volumes:
- ./data/web/puma:/var/run/puma
db-data:
image: ubuntu:14.04
volumes:
- ./data/mysql:/var/lib/mysql
redis-data:
image: ubuntu:14.04
volumes:
- ./data/redis:/data
localtime:
image: ubuntu:14.04
volumes:
- /etc/localtime:/etc/localtime:ro
I think the benefits of the new version are:
It's more clear where the data is.
It's easier to share data among multiple containers (no need to remember the exact paths like in the current version).
So, my questions are:
Is it problematic to use different images between the container and its container-data? For example, should db-data use mysql:5.7 instead of ubuntu:14.04?
Is it correct to say that there's no way of having "data stored at a specific path on the host" with a top level volumes: key?
What are the advantages and inconveniences of using a named volume (with a top-level "volumes" key)? Should I prefer using a named volume over a host mount? Workflow comparisons would be nice.
Is it problematic to use different images between the container and its container-data?
Not at all, this is normal.
Is it correct to say that there's no way of having "data stored at a specific path on the host" with a top level volumes: key?
Correct. The top level volumes key is for named volumes, but you can't name host volumes.
What are the advantages and inconveniences of using a named volume (with a top-level "volumes" key)? Should I prefer using a named volume over a host mount? Workflow comparisons would be nice.
Named volumes let you use volume drivers, so you could have the data stored somewhere other than the local filesystem. However named volumes need to be initialized with data, so you might have to add a script or something to do so.
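As a rough sketch of the trade-off (the mysql-named-data volume name is only an assumption for illustration), a named volume is declared under the top-level volumes key, where it can also specify a driver, while a host mount is just a path written in the service and never appears at the top level:
version: "2"
services:
  db:
    image: mysql:5.7
    volumes:
      - mysql-named-data:/var/lib/mysql   # named volume, declared below and managed by Docker
      # - ./data/mysql:/var/lib/mysql     # host-mount alternative: a bare path, not declared at the top level
volumes:
  mysql-named-data:
    driver: local   # the driver key is what lets the data live somewhere other than the local filesystem
With the host mount you know exactly where the files live on the host; with the named volume, Docker manages the location, which is why it may need to be initialized or pre-populated before first use.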