How to keep the data after docker down and up?

I run this command to pull images and bring the services up:
docker-compose -f dc-all.yml up
But I noticed that data kept inside the containers (e.g. the database data) is gone once I take Docker down and up again.
This is the command I use to take it down:
docker-compose -f dc-all.yml down
What is the best practice to keep the data? Or how can I keep Docker running without a restart, e.g. so that a Windows restart does not wipe it?
Sample yml file:
networks:
  test:
services:
  db:
    networks:
      - pm
    image: microsoft/mssql-server-linux:2017-latest
    container_name: mssql
    hostname: mssql
    volumes:
      - ./.db:/var/opt/mssql/
      - /var/opt/mssql/data
      - ./sqlinit.sql:/scripts/sqlinit.sql
    ports:
      - 8010:1433
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_SA_PASSWORD=Test123!
    command:
      - /bin/bash
      - -c
      - |
        # Launch MSSQL and send to background
        /opt/mssql/bin/sqlservr &
        # Wait for it to be available
        echo "Waiting for MS SQL to be available"
        /opt/mssql-tools/bin/sqlcmd -l 30 -S mssql -h-1 -V1 -U sa -P Test123! -Q "SET NOCOUNT ON SELECT \"YAY WE ARE UP\" , @@servername"
        is_up=$$?
        while [ $$is_up -ne 0 ] ; do
          echo -e $$(date)
          /opt/mssql-tools/bin/sqlcmd -l 30 -S mssql -h-1 -V1 -U sa -P Test123! -Q "SET NOCOUNT ON SELECT \"YAY WE ARE UP\" , @@servername"
          is_up=$$?
          sleep 5
        done
        # Run every script in /scripts
        # TODO set a flag so that this is only done once on creation,
        # and not every time the container runs
        #for foo in /scripts/*.sql
        /opt/mssql-tools/bin/sqlcmd -S mssql -U sa -P Test123! -l 30 -e -i /scripts/sqlinit.sql
        #done
        # So that the container doesn't shut down, sleep this thread
        sleep infinity
  zookeeper:
    networks:
      - pm
    image: wurstmeister/zookeeper
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ALLOW_ANONYMOUS_LOGIN: 1
  kafka:
    networks:
      - pm
    image: wurstmeister/kafka
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_ADVERTISED_HOST_NAME: kafka
  schema-registry:
    networks:
      - pm
    image: confluentinc/cp-schema-registry:5.2.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: 'schema-registry'
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  rest-proxy:
    networks:
      - pm
    image: confluentinc/cp-kafka-rest:5.2.1
    depends_on:
      - zookeeper
      - kafka
      - schema-registry
    ports:
      - "8082:8082"
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: 'rest-proxy'
      KAFKA_REST_BOOTSTRAP_SERVERS: 'kafka:9092'
      KAFKA_REST_LISTENERS: "http://rest-proxy:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      KAFKA_REST_ZOOKEEPER_CONNECT: 'zookeeper:2181'
  katalon:
    networks:
      - pm
    image: katalonstudio/katalon:latest
    container_name: katalon
    hostname: katalon
    depends_on:
      - db
      - zookeeper
      - kafka
      - schema-registry
      - rest-proxy
    volumes:
      - ../katalon-service:/katalon/katalon/source
    entrypoint: katalon-execute.sh
    command:
      - -browserType=Web Service
      - -retry=0
      - -statusDelay=15
      - -testSuitePath=Test Suites/TS_IntegrationTestSuites_SQL

You can either mount a docker host directory, as below in compose -
volumes:
  - /data:/app
Using the above, all data generated inside the container's /app directory will show up in /data on your Docker host.
OR
Use Docker named volumes -
volumes:
  - mydata:/data
volumes:
  mydata:
The above creates a named volume which can be shared with other services and is not destroyed when you run docker-compose down (it is only removed if you add the -v flag). The data on this volume stays on your host itself. You can get the directory details using the command below -
docker inspect mydata
Sample output -
[
    {
        "CreatedAt": "2018-09-24T05:40:37Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "mydata",
            "com.docker.compose.version": "1.22.0",
            "com.docker.compose.volume": "data"
        },
        "Mountpoint": "/var/lib/docker/volumes/mydata/_data",
        "Name": "mydata",
        "Options": null,
        "Scope": "local"
    }
]
Mountpoint is where your data exists on the host.
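Applied to the db service from the question, a minimal sketch could look like this (the volume name mssqldata is an assumption):
services:
  db:
    image: microsoft/mssql-server-linux:2017-latest
    volumes:
      - mssqldata:/var/opt/mssql   # named volume, survives docker-compose down
volumes:
  mssqldata: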
Ref - https://docs.docker.com/compose/compose-file/#volume-configuration-reference

You use volumes just like you do in Docker generally. See the full docs for all the details, but basically you want:
services:
  some_service:
    volumes:
      - $PWD/data:/path/to/data
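With either approach the data survives a plain docker-compose down; just be aware that the -v flag also removes named volumes, so for the setup in the question:
docker-compose -f dc-all.yml down      # containers removed, volumes kept
docker-compose -f dc-all.yml down -v   # would also delete named volumes - avoid this if you want to keep the data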

Related

Wait to start Docker Stack until Filesystem is mounted

I have a problem with my Nextcloud Docker stack.
I run fsck on every boot of my system, so the volume used in the stack is not yet mounted when the stack starts.
Is there a simple way to delay starting the stack until /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/ is mounted?
My stack looks like this...
version: "2"
services:
  nextcloud:
    image: linuxserver/nextcloud
    container_name: nextcloud
    networks:
      - homeserver
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
    volumes:
      - /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/nextcloud/config:/config
      - /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/nextcloud/data:/data
    depends_on:
      - mariadb
    restart: unless-stopped
  mariadb:
    image: yobasystems/alpine-mariadb:armhf
    container_name: mariadb
    networks:
      - homeserver
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=xxx
      - MYSQL_ROOT_PASSWORD=
    volumes:
      - /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/mariadb/logs:/var/lib/mysql/mysql-bin
      - /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/mariadb/mysql/_data:/var/lib/mysql
    restart: unless-stopped
  phpmyadmin:
    container_name: phpmyadmin-nextcloud
    image: phpmyadmin
    networks:
      - homeserver
    restart: unless-stopped
    environment:
      - PMA_HOST=172.18.0.2
      - PMA_PORT=3306
    ports:
      - 8182:80
networks:
  homeserver:
    external:
      name: homeserver
Thanks a lot for your help!!
AFAIK there is no native solution for this; I would recommend taking a look at the workaround in https://github.com/docker/compose/issues/374#issuecomment-310266246
Copy-paste from the link:
# start.sh
#!/bin/sh
set -eu
docker volume create --name=gql-sync
echo "Building docker containers"
docker-compose build
echo "Running tests inside docker container"
docker-compose up -d pubsub
docker-compose up -d mongo
docker-compose up -d botms
docker-compose up -d events
docker-compose up -d identity
docker-compose up -d importer
docker-compose run status
docker-compose run testing
exit $?
# status.sh
#!/bin/sh
set -eu
echo "Attempting to connect to bots"
until nc -zv botms 3000; do
  printf '.'
  sleep 5
done
echo "Attempting to connect to events"
until nc -zv events 3000; do
  printf '.'
  sleep 5
done
echo "Attempting to connect to identity"
until nc -zv identity 3000; do
  printf '.'
  sleep 5
done
echo "Attempting to connect to importer"
until nc -zv importer 8080; do
  printf '.'
  sleep 5
done
echo "Was able to connect to all"
exit 0
# in my docker-compose file
status:
  image: yikaus/alpine-bash
  volumes:
    - "./internals/scripts:/scripts"
  command: "sh /scripts/status.sh"
  depends_on:
    - "mongo"
    - "importer"
    - "events"
    - "identity"
    - "botms"

Change container port in docker compose

I am trying to get a docker-compose file working with both Airflow and Spark. Airflow typically runs on 8080:8080, which is needed by Spark as well. I have the following docker-compose file:
version: '3.7'
services:
  master:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.master.Master -h master
    hostname: master
    environment:
      MASTER: spark://master:7077
      SPARK_CONF_DIR: /conf
      SPARK_PUBLIC_DNS: localhost
    expose:
      - 7001
      - 7002
      - 7003
      - 7004
      - 7005
      - 7077
      - 6066
    ports:
      - 4040:4040
      - 6066:6066
      - 7077:7077
      - 8080:8080
    volumes:
      - ./conf/master:/conf
      - ./data:/tmp/data
  worker:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077
    hostname: worker
    environment:
      SPARK_CONF_DIR: /conf
      SPARK_WORKER_CORES: 2
      SPARK_WORKER_MEMORY: 1g
      SPARK_WORKER_PORT: 8881
      SPARK_WORKER_WEBUI_PORT: 8081
      SPARK_PUBLIC_DNS: localhost
    links:
      - master
    expose:
      - 7012
      - 7013
      - 7014
      - 7015
      - 8881
    ports:
      - 8081:8081
    volumes:
      - ./conf/worker:/conf
      - ./data:/tmp/data
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    logging:
      options:
        max-size: 10m
        max-file: "3"
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=y
      - EXECUTOR=Local
    logging:
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - ./dags:/usr/local/airflow/dags
      # Add this to have third party packages
      - ./requirements.txt:/requirements.txt
      # - ./plugins:/usr/local/airflow/plugins
    ports:
      - "8082:8080" # NEED TO CHANGE THIS LINE
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
but I specifically need to change the lines:
ports:
  - "8082:8080" # NEED TO CHANGE THIS LINE
under webserver so there's no port conflict. However, when I change the container port to something other than 8080:8080 it doesn't work (cannot connect / find the server). How do I change the container port successfully?
When you specify a port mapping in Docker you give two ports, for instance: 8082:8080.
The port on the right is the one the application listens on inside the container.
Several containers can listen internally on the same port; they are still not reachable from your localhost - that's what the ports section is for.
On your localhost, however, you cannot bind the same port more than once. That's why Docker fails when you put 8080 on the left side more than one time.
In your current compose file, the spark service is mapped to host port 8080 (left side of 8080:8080) and the webserver service to host port 8082 (left side of 8082:8080).
If you want to access Spark, go to http://localhost:8080, and for the webserver go to http://localhost:8082.
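In other words, keep the container port fixed at 8080 and only vary the host side; a minimal sketch of the relevant lines:
services:
  master:
    ports:
      - 8080:8080   # Spark UI: http://localhost:8080
  webserver:
    ports:
      - 8082:8080   # Airflow still listens on 8080 inside the container: http://localhost:8082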

Docker Compose isn't writing files or directories to host

I am following the DigitalOcean tutorial to install WordPress via Docker:
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-docker-compose
It says if the certbot exit state is anything other than 0 I get the following error, and there are no log files where it says to look. Newish to Docker, thanks for helping all!
Edit: I'm noting that none of the volumes from this docker-compose file were created on the host.
Name        Command                          State   Ports
-----------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 1
db          docker-entrypoint.sh --def ...   Up      3306/tcp, 33060/tcp
webserver   nginx -g daemon off;             Up      0.0.0.0:80->80/tcp
wordpress   docker-entrypoint.sh php-fpm     Up      9000/tcp
docker-compose.yml here:
version: '3'
services:
  db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress
    volumes:
      - dbdata:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
    networks:
      - app-network
  wordpress:
    depends_on:
      - db
    image: wordpress:5.1.1-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - wordpress:/var/www/html
    networks:
      - app-network
  webserver:
    depends_on:
      - wordpress
    image: nginx:1.15.12-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - wordpress:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - app-network
  certbot:
    depends_on:
      - webserver
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - wordpress:/var/www/html
    command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com
volumes:
  certbot-etc:
  wordpress:
  dbdata:
networks:
  app-network:
    driver: bridge
The volumes being created here are named volumes.
To check named volumes, run:
docker volume ls
Also, per the comment above, you can check the certbot logs with:
docker-compose logs certbot
Note that Compose prefixes named volumes and containers with the project name (the directory name by default), so with plain docker commands you need those prefixed names, which you can find with docker volume ls and docker ps.
Or use the docker-compose variants above.
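For example (the wordpress_ project prefix below is an assumption - it depends on the directory holding the compose file):
docker volume ls                        # lists e.g. wordpress_dbdata, wordpress_certbot-etc
docker volume inspect wordpress_dbdata  # prints the Mountpoint on the host
docker-compose logs certbot             # shows the actual certbot error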

Jupyter fails to open a directory when running a docker container

Docker is running and I want to run a docker container in Windows 10. When I run docker-compose from Windows PowerShell, some downloading jobs complete, then an error occurs and the docker container cannot run. It seems that jupyter fails to build or open a directory. Could anyone help me with this problem? The command line and the error are the following:
PS C:\Users\mmva> cd C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose
PS C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose> docker-compose up
Building jupyter
Step 1/19 : FROM jupyter/jupyterhub
latest: Pulling from jupyter/jupyterhub
efd26ecc9548: Extracting [==================================================>] 51.34MB/51.34MB
a3ed95caeb02: Download complete
298ffe4c3e52: Download complete
758b472747c8: Download complete
8b9809a68afc: Download complete
93b253b5483d: Download complete
ef8136abb53c: Download complete
ERROR: Service 'jupyter' failed to build: failed to register layer: re-exec error: exit status 1: output: Failed to OpenForBackup failed in Win32: open \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz: The filename, directory name, or volume label syntax is incorrect. (0x1f) \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz
My docker version is 17.09.0-ce-win33 (13620).
I think the docker-compose file version is 3.
The content of the docker-compose file:
version: '3'

# IPTABLES RULES IF NECESSARY
#-A INPUT -i br+ -j ACCEPT
#-A INPUT -i docker0 -j ACCEPT
#-A OUTPUT -o br+ -j ACCEPT
#-A OUTPUT -o docker0 -j ACCEPT

# The .env file is for production use with server-specific configurations

services:
  # Frontend web proxy for accessing services and providing TLS encryption
  nginx:
    build: ./nginx
    container_name: md2k-nginx
    restart: always
    volumes:
      - ./nginx/site:/var/www
      - ./nginx/nginx-selfsigned.crt:/etc/ssh/certs/ssl-cert.crt
      - ./nginx/nginx-selfsigned.key:/etc/ssh/certs/ssl-cert.key
    ports:
      - "443:443"
      - "80:80"
    links:
      - apiserver
      - grafana
      - jupyter
  apiserver:
    build: ../CerebralCortex-APIServer
    container_name: md2k-api-server
    restart: always
    expose:
      - 80
    links:
      - mysql
      - kafka
      - minio
    depends_on:
      - mysql
    environment:
      - MINIO_HOST=${MINIO_HOST:-minio}
      - MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
      - MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
      - MYSQL_HOST=${MYSQL:-mysql}
      - MYSQL_DB_USER=${MYSQL_ROOT_USER:-root}
      - MYSQL_DB_PASS=${MYSQL_ROOT_PASSWORD:-random_root_password}
      - KAFKA_HOST=${KAFKA_HOST:-kafka}
      - JWT_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
      - FLASK_HOST=${FLASK_HOST:-0.0.0.0}
      - FLASK_PORT=${FLASK_PORT:-80}
      - FLASK_DEBUG=${FLASK_DEBUG:-False}
    volumes:
      - ./data:/data
  # Data vizualizations
  grafana:
    image: "grafana/grafana"
    container_name: md2k-grafana
    restart: always
    ports:
      - "3000:3000"
    links:
      - influxdb
    environment:
      - GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s:%(http_port)s/grafana/
      # - GF_INSTALL_PLUGINS=raintank-worldping-app,grafana-clock-panel,grafana-simple-json-datasource
    volumes:
      - timeseries-storage:/var/lib/grafana
      # - timeseries-storage:/etc/grafana
  influxdb:
    image: "influxdb:alpine"
    container_name: md2k-influxdb
    restart: always
    ports:
      - "8086:8086"
    volumes:
      - timeseries-storage:/var/lib/influxdb
  # Data Science Dashboard Interface
  jupyter:
    build: ./jupyterhub
    container_name: md2k-jupyterhub
    ports:
      - 8000
    restart: always
    network_mode: "host"
    pid: "host"
    environment:
      TINI_SUBREAPER: 'true'
    volumes:
      - ./jupyterhub/conf:/srv/jupyterhub/conf
    command: jupyterhub --no-ssl --config /srv/jupyterhub/conf/jupyterhub_config.py
  # Cerebral Cortex backend
  kafka:
    image: wurstmeister/kafka:0.10.2.0
    container_name: md2k-kafka
    restart: always
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: ${MACHINE_IP:-10.0.0.1}
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_MESSAGE_MAX_BYTES: 2000000
      KAFKA_CREATE_TOPICS: "filequeue:4:1,processed_stream:16:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - data-storage:/kafka
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: md2k-zookeeper
    restart: always
    ports:
      - "2181:2181"
  mysql:
    image: "mysql:5.7"
    container_name: md2k-mysql
    restart: always
    ports:
      - 3306:3306 # Default mysql port
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-random_root_password}
      - MYSQL_DATABASE=${MYSQL_DATABASE:-cerebralcortex}
      - MYSQL_USER=${MYSQL_USER:-cerebralcortex}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD:-cerebralcortex_pass}
    volumes:
      - ./mysql/initdb.d:/docker-entrypoint-initdb.d
      - metadata-storage:/var/lib/mysql
  minio:
    image: "minio/minio"
    container_name: md2k-minio
    restart: always
    ports:
      - 9000:9000 # Default minio port
    environment:
      - MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
      - MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
    command: server /export
    volumes:
      - object-storage:/export
  cassandra:
    build: ./cassandra
    container_name: md2k-cassandra
    restart: always
    ports:
      - 9160:9160 # Thrift client API
      - 9042:9042 # CQL native transport
    environment:
      - CASSANDRA_CLUSTER_NAME=cerebralcortex
    volumes:
      - data-storage:/var/lib/cassandra
volumes:
  object-storage:
  metadata-storage:
  data-storage:
  temp-storage:
  timeseries-storage:
  user-storage:
  log-storage:

docker-compose - networks - /etc/hosts is not updated

I am using Docker version 1.12.3 and docker-compose version 1.8.1. I have some services, containing for example elasticsearch, rabbitmq and a webapp.
My problem is that a service cannot access another service by its hostname, because docker-compose does not put all the service hosts into the /etc/hosts file. I don't know their IPs because they are assigned during the docker-compose up phase.
I use the networks feature as described at https://docs.docker.com/compose/networking/ instead of links, because I have circular references and links don't support that. But using networks does not put all the services' hostnames into each node's /etc/hosts file. I set container_name, I set hostname, but nothing happened. What am I missing?
Here is my docker-compose.yml:
version: '2'
services:
  elasticsearch1:
    image: elasticsearch:5.0
    container_name: "elasticsearch1"
    hostname: "elasticsearch1"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Ned Stark' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - webapp
  elasticsearch2:
    image: elasticsearch:5.0
    container_name: "elasticsearch2"
    hostname: "elasticsearch2"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Daenerys Targaryen' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  elasticsearch3:
    image: elasticsearch:5.0
    container_name: "elasticsearch3"
    hostname: "elasticsearch3"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='John Snow' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  rabbit1:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit1"
    hostname: "rabbit1"
    environment:
      - ERLANG_COOKIE=abcdefg
    networks:
      - webapp
  rabbit2:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit2"
    hostname: "rabbit2"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
      - ENABLE_RAM=true
    networks:
      - webapp
  rabbit3:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit3"
    hostname: "rabbit3"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
    networks:
      - webapp
  my_webapp:
    image: my_webapp:0.2.0
    container_name: "my_webapp"
    hostname: "my_webapp"
    command: "supervisord -c /etc/supervisor/supervisord.conf -n"
    environment:
      - DYNACONF_SETTINGS=settings.prod
    ports:
      - "8000:8000"
    tty: true
    networks:
      - webapp
networks:
  webapp:
    driver: bridge
This is how I understand they can't communicate with each other; I get this error during elasticsearch cluster initialization:
Caused by: java.net.UnknownHostException: elasticsearch3
And this is how I run docker-compose:
docker-compose up
On a user-defined network, name resolution is handled by Docker's embedded DNS server rather than by writing entries into /etc/hosts, so an unchanged hosts file is expected. If the container expects the hostname to be resolvable immediately when it starts, that is likely why it's failing: the hostname isn't going to exist until the other containers start. You can use an entrypoint script to wait until all the hostnames are available, then exec elasticsearch ...
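A minimal sketch of such an entrypoint - the script name and the use of getent for the lookup are assumptions; any DNS lookup tool available in the image would do:
#!/bin/sh
# wait-for-peers.sh: block until the other nodes' hostnames resolve, then start ES
set -eu
for host in elasticsearch1 elasticsearch2 elasticsearch3; do
  until getent hosts "$host" >/dev/null 2>&1; do
    echo "Waiting for $host to resolve..."
    sleep 2
  done
done
exec elasticsearch "$@"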
