I'm trying to persist data using a volume, but my data isn't restored when I redeploy with docker stack deploy. How do I set up the volumes correctly?
I'm deploying with docker stack deploy -c compose-db.yml db.
This is my compose file.
compose-db.yml
version: '3'
services:
  redis:
    image: 172.16.12.154:5000/redis
    networks:
      pitbull-overlay:
        aliases:
          - redis
    volumes:
      - redis-volume:/data
    ports:
      - 6379:6379
  mongodb:
    image: 172.16.12.154:5000/mongodb
    networks:
      pitbull-overlay:
        aliases:
          - mongodb
    volumes:
      - mongodb-volume:/data/db
    ports:
      - 27017:27017
networks:
  pitbull-overlay:
    external:
      name: pitbull-overlay
volumes:
  mongodb-volume:
  redis-volume:
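One thing worth knowing here (an explanation of stack behavior, not something visible in the compose file itself): docker stack deploy prefixes resources with the stack name, so the volumes above are created as db_redis-volume and db_mongodb-volume, and a named volume is local to the node where the task runs; if the task is rescheduled to another node, the data appears to be gone. A hedged sketch of one workaround is to pre-create the volumes on the target node and declare them external, so redeploys reuse the same volumes:

```yaml
# Sketch, assuming a single-node swarm or a placement constraint.
# Pre-create the volumes once on the node that will run the tasks:
#   docker volume create redis-volume
#   docker volume create mongodb-volume
# Then declare them external so `docker stack deploy` reuses them
# instead of creating stack-prefixed ones (db_redis-volume, ...).
volumes:
  mongodb-volume:
    external: true
  redis-volume:
    external: true
```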
I have this docker-compose.yml file:
version: '3.3'
services:
  redis:
    container_name: redis
    image: 'redis:latest'
    environment:
      X_REDIS_PORT: ${REDIS_PORT}
    ports:
      - ${REDIS_PORT}:${REDIS_PORT}
    volumes:
      - redis:/data
networks:
  app_network:
    external: true
    driver: bridge
volumes:
  postgres:
  pgadmin:
  supertoken:
  redis:
I want the cached data to be saved inside the redis container, but instead it is being saved on my local machine.
How do I change this behaviour?
Inspect your volume:
docker volume inspect redis
The mountpoint is under /var/lib/docker/volumes/; check the redis volume's data there.
I use a folder (bind mount) instead; there is a demo in my GitHub repo:
volumes:
  - ./data:/data
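For contrast, a minimal sketch showing both forms side by side (the host path is an example, not taken from the question):

```yaml
services:
  redis:
    image: 'redis:latest'
    volumes:
      # named volume: Docker manages the data under
      # /var/lib/docker/volumes/<name>/_data on the host
      - redis:/data
      # bind mount alternative: an explicit host folder next to
      # the compose file (example path; use one form or the other)
      # - ./data:/data
volumes:
  redis:
```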
Using Docker Toolbox on Windows 10 Home (Docker version 19.03), we created a docker-compose.yml and added a secrets file as JSON. It runs fine on a Mac system, but we are unable to run the same setup on Windows 10 Home.
Error after running docker-compose up:
ERROR: for orthancserver Cannot create container for service orthanc: invalid mount config for type
"bind": invalid mount path: 'C:/Users/ABC/Desktop/Project/orthanc.json' mount path must be absolute
docker-compose.yml:
version: "3.7"
services:
  orthanc:
    image: jodogne/orthanc-plugins:1.6.1
    command: /run/secrets/
    container_name: orthancserver
    restart: always
    ports:
      - "4242:4242"
      - "8042:8042"
    networks:
      - mynetwork
    volumes:
      - /tmp/orthanc-db/:/var/lib/orthanc/db/
    secrets:
      - orthanc.json
  dcserver:
    build: ./dc_node_server
    depends_on:
      - orthanc
    container_name: dcserver
    restart: always
    ports:
      - "5001:5001"
    networks:
      - mynetwork
    volumes:
      - localdb:/database
volumes:
  localdb:
    external: true
networks:
  mynetwork:
    external: true
secrets:
  orthanc.json:
    file: orthanc.json
The orthanc.json file is kept next to docker-compose.yml.
I found an alternative solution for Windows 10 Home with Docker Toolbox. As commented by @Schwarz54, file sharing works well with a Docker volume for the Dockerized Orthanc server.
Add shared folder:
Open Oracle VM manager
Go to setting of default VM
Click Shared Folders
Add C:\ drive to the list
Edit docker-compose.yml to pass the config file to Orthanc via a volume:
version: "3.7"
services:
  orthanc:
    image: jodogne/orthanc-plugins:1.6.1
    command: /run/secrets/
    container_name: orthancserver
    restart: always
    ports:
      - "4242:4242"
      - "8042:8042"
    networks:
      - mynetwork
    volumes:
      - /tmp/orthanc-db/:/var/lib/orthanc/db/
      - /c/Users/ABCUser/Desktop/Project/orthanc.json:/etc/orthanc/orthanc.json:ro
  dcserver:
    build: ./dc_node_server
    depends_on:
      - orthanc
    container_name: dcserver
    restart: always
    ports:
      - "5001:5001"
    networks:
      - mynetwork
    volumes:
      - localdb:/database
volumes:
  localdb:
    external: true
networks:
  mynetwork:
    external: true
I have several PostgreSQL services, plus some other services that are useful in my case (for creating an HA PostgreSQL cluster). The cluster is described in the docker-compose file below:
version: '3.3'
services:
  haproxy:
    image: haproxy:alpine
    ports:
      - "5000:5000"
      - "5001:5001"
      - "8008:8008"
    configs:
      - haproxy_cfg
    networks:
      - dbs
    command: haproxy -f /haproxy_cfg
  etcd:
    image: quay.io/coreos/etcd:v3.1.2
    configs:
      - etcd_cfg
    networks:
      - dbs
    command: /bin/sh /etcd_cfg
  dbnode1:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode1
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode1
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode1:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode1:8008
    env_file:
      - test.env
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
  dbnode2:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode2
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode2
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode2:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode2:8008
    env_file:
      - test.env
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
  dbnode3:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode3
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode3
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode3:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode3:8008
    env_file:
      - test.env
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
networks:
  dbs:
    external: true
configs:
  haproxy_cfg:
    file: config/haproxy.cfg
  etcd_cfg:
    file: config/etcd.sh
secrets:
  patroni.yml:
    file: patroni.test.yml
I took this YAML from https://github.com/seocahill/ha-postgres-docker-stack.git. I deploy these services in Docker Swarm with docker network create -d overlay --attachable dbs && docker stack deploy -c docker-stack.test.yml test_pg_cluster. But if I create some databases, insert some data, and then restart the services, my data is lost.
I know that I need to use a volume to save the data on the host.
I created a volume with docker volume create pgdata (using the default Docker volume directory) and mounted it like this:
dbnode1:
  image: seocahill/patroni:1.2.5
  secrets:
    - patroni.yml
  environment:
    - PATRONI_NAME=dbnode1
    - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode1
    - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode1:5432
    - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode1:8008
  env_file:
    - test.env
  volumes:
    - pgdata:/data/dbnode1
  networks:
    - dbs
  entrypoint: patroni
  command: /run/secrets/patroni.yml
volumes:
  pgdata:
When the container starts, it has its own config files in the data directory data/dbnode1 inside the container. If I mount the volume pgdata to store the data on the host, I can no longer connect to the DB, and the container directory data/dbnode1 is empty. How can I create a persistent data volume that saves the changed data in PostgreSQL?
It is much easier to create volumes by adding the host path directly. Check this example:
dbnode1:
  image: seocahill/patroni:1.2.5
  secrets:
    - patroni.yml
  environment:
    - PATRONI_NAME=dbnode1
    - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode1
    - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode1:5432
    - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode1:8008
  env_file:
    - test.env
  volumes:
    - /opt/dbnode1/:/data/dbnode1
  networks:
    - dbs
  entrypoint: patroni
  command: /run/secrets/patroni.yml
Note the lines
volumes:
  - /opt/dbnode1/:/data/dbnode1
where the host path /opt/dbnode1/ is used to store the container's filesystem from /data/dbnode1.
It is also important to note that Docker Swarm does not create host folders for you, so you have to create the folder before starting the service. Run mkdir -p /opt/dbnode1 to do so.
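A related caveat (an assumption based on how Swarm schedules tasks, not something stated in the answer above): a bind mount like /opt/dbnode1 is node-local, so if the task can land on any node, each node needs that folder, and the data does not follow the task around. A placement constraint can pin the service to one node; the hostname below is hypothetical:

```yaml
# Sketch: pin dbnode1 to the one node that has /opt/dbnode1 created.
dbnode1:
  image: seocahill/patroni:1.2.5
  volumes:
    - /opt/dbnode1/:/data/dbnode1
  deploy:
    placement:
      constraints:
        # hypothetical hostname; substitute the output of
        # `docker node ls` for your target node
        - node.hostname == swarm-node-1
```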
I have a Tomcat docker container and a Filebeat docker container; both are up and running.
My objective: I need to collect Tomcat logs from the running Tomcat container into the Filebeat container.
Issue: I have no idea how to get the collected log files out of the Tomcat container.
What I have tried so far: I tried to create a docker volume, add the Tomcat logs to that volume, and access that volume from the Filebeat container, but ended with no success.
Structure: I wrote a docker-compose.yml file under the project Logstash (root directory of that project); here I want to bring up Elasticsearch, Logstash, Filebeat and Kibana containers from one configuration file. There is also docker-containers (root directory of the other project); here I want to bring up Tomcat, Nginx and Postgres containers from one configuration file.
Logstash: contains 4 main subdirectories (Filebeat, Logstash, Elasticsearch and Kibana), an ENV file and a docker-compose.yml file. Each subdirectory contains a Dockerfile to pull the image and build the container.
docker-containers: contains 3 main subdirectories (Tomcat, Nginx and Postgres), an ENV file and a docker-compose.yml file. Each subdirectory contains a separate Dockerfile to pull the image and build the container.
Note: I think this basic structure may be helpful to understand my requirements.
docker-compose.yml files
Logstash docker-compose.yml file:
version: '2'
services:
  elasticsearch:
    container_name: OTP-Elasticsearch
    build:
      context: ./elasticsearch
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  filebeat:
    container_name: OTP-Filebeat
    command:
      - "-e"
      - "--strict.perms=false"
    user: root
    build:
      context: ./filebeat
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
      - logstash
  logstash:
    container_name: OTP-Logstash
    build:
      context: ./logstash
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    expose:
      - 5044/tcp
    ports:
      - "9600:9600"
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    container_name: OTP-Kibana
    build:
      context: ./kibana
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - logstash
      - filebeat
networks:
  elk:
    driver: bridge
docker-containers docker-compose.yml file:
version: '2'
services:
  # Nginx
  nginx:
    container_name: OTP-Nginx
    restart: always
    build:
      context: ./nginx
      args:
        - comapanycode=${COMPANY_CODE}
        - dbtype=${DB_TYPE}
        - dbip=${DB_IP}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
        - webdirectory=${WEB_DIRECTORY}
    ports:
      - "80:80"
    links:
      - db:db
    volumes:
      - ./log/nginx:/var/log/nginx
    depends_on:
      - db
  # Postgres
  db:
    container_name: OTP-Postgres
    restart: always
    ports:
      - "5430:5430"
    build:
      context: ./postgres
      args:
        - food_db_version=${FOOD_DB_VERSION}
        - dbtype=${DB_TYPE}
        - retail_db_version=${RETAIL_DB_VERSION}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    volumes:
      - .data/db:/octopus_docker/postgresql/data
  # Tomcat
  tomcat:
    container_name: OTP-Tomcat
    restart: always
    build:
      context: ./tomcat
      args:
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    links:
      - db:db
    volumes:
      - ./tomcat/${WARNAME}.war:/usr/local/tomcat/webapps/${WARNAME}.war
    ports:
      - "8080:8080"
    depends_on:
      - db
      - nginx
Additional files:
filebeat.yml (configuration file inside Logstash/Filebeat/config/)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/local/tomcat/logs/.*log

output.logstash:
  hosts: ["logstash:5044"]
Additional Info:
The system I am using is Ubuntu 18.04.
My goal is to collect Tomcat logs from the running Tomcat container, forward them to Logstash for filtering, then on to Elasticsearch, and finally to Kibana for visualization.
For now I can collect local machine (host) logs (/var/log/) and visualize them in Kibana.
My problem:
I need to know the proper way to get the collected Tomcat logs out of the Tomcat container and forward them to the Logstash container via the Filebeat container.
Any discussion, answer or help to understand a way to do this is highly appreciated.
Thanks.
Create a shared volume among all containers and set up your Tomcat to write its log files into that folder. If you can put all services into one docker-compose.yml, just define the volume internally:
docker-compose.yml
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
If you need several docker-compose.yml files, create volume globally in advance with docker volume create logs and map it into both compose files:
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
    external: true
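Applied to the Tomcat/Filebeat case from the question, a minimal sketch might look like this (the image tags are assumptions; the stock Tomcat image writes its logs to /usr/local/tomcat/logs):

```yaml
version: '3'
services:
  tomcat:
    image: tomcat:9
    volumes:
      # Tomcat writes catalina.out, access logs, etc. here
      - tomcat-logs:/usr/local/tomcat/logs
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.6.2
    volumes:
      # same volume, mounted read-only for the log shipper
      - tomcat-logs:/usr/local/tomcat/logs:ro
volumes:
  tomcat-logs:
```

Filebeat's input paths would then point at /usr/local/tomcat/logs/ inside its own container.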
Today I switched from "Docker Toolbox" to "Docker for Mac", because Docker now finally has write access to my user directory (which didn't work with "Docker Toolbox") - yay!
But this change also means that all containers now run under my localhost and not under Docker's IP as before (e.g. 192.168.99.100).
Since my localhost already listens on various ports by default (80, 443, ...), and I don't want to keep appending newly created, non-conflicting ports to my local dev domains (e.g. example.dev:8443), I wonder how to run my containers as before.
I read about network configs and tried a lot of things (creating a new host network, exposing ports with an IP in front, ...), but didn't get it working.
What kind of config do I need to run my app container with the IP 192.168.99.100? That's my docker-compose.yml so far:
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - mysql
      - redis
      - memcached
    ports:
      - 80:80
      - 443:443
      - 22:22
      - 3000:3000
      - 3001:3001
    volumes:
      - ./app/:/app/
      - /tmp/debug/:/tmp/debug/
      - ./:/docker/
    volumes_from:
      - storage
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  mysql:
    build:
      context: docker/mysql/
      dockerfile: MariaDB-10
    ports:
      - 3306:3306
    volumes_from:
      - storage
    volumes:
      - ./data/mysql:/var/lib/mysql
      - /tmp/debug/:/tmp/debug/
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  redis:
    build: docker/redis/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  memcached:
    build: docker/memcached/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  storage:
    build: docker/storage/
    volumes:
      - /storage
You need to declare "networks:" for each of your services, e.g.:
version: '2'
services:
  app:
    image: xxxx:xxx
    ports:
      - "80:80"
    networks:
      - my-network
  mysql:
    image: xxxx:xxx
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Then, from your app's configuration, you can use "mysql" as the hostname of the database server.
You can define a network in your compose file, then add any services to that network.
https://docs.docker.com/compose/networking/
But I would suggest you just use different host ports now that you are running natively, i.e. 8080:80.
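Following that suggestion, a minimal sketch of the remapped ports (the host ports here are arbitrary examples; pick any that are free on your machine):

```yaml
services:
  app:
    ports:
      - "8080:80"    # host 8080 -> container 80
      - "8443:443"   # host 8443 -> container 443
```

The container still listens on its standard ports internally, so nothing inside the compose network changes; only the host-side mapping does.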