I have a problem with my Nextcloud Docker stack.
I run fsck on every boot of my system, so the volume used by the stack is not yet mounted when the stack starts.
Is there a simple way to delay starting the stack until /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/ is mounted?
My stack looks like this:
version: "2"
services:
nextcloud:
image: linuxserver/nextcloud
container_name: nextcloud
networks:
- homeserver
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Berlin
volumes:
- /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/nextcloud/config:/config
- /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/nextcloud/data:/data
depends_on:
- mariadb
restart: unless-stopped
mariadb:
image: yobasystems/alpine-mariadb:armhf
container_name: mariadb
networks:
- homeserver
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Berlin
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_PASSWORD=xxx
- MYSQL_ROOT_PASSWORD=
volumes:
- /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/mariadb/logs:/var/lib/mysql/mysql-bin
- /srv/dev-disk-by-uuid-77365390-c57e-4b8a-846f-42fa099bf411/docker/appdata/mariadb/mysql/_data:/var/lib/mysql
restart: unless-stopped
phpmyadmin:
container_name: phpmyadmin-nextcloud
image: phpmyadmin
networks:
- homeserver
restart: unless-stopped
environment:
- PMA_HOST=172.18.0.2
- PMA_PORT=3306
ports:
- 8182:80
networks:
homeserver:
external:
name: homeserver
Thanks a lot for your help!!
AFAIK there is no native solution for this; I'd recommend taking a look at the solution in https://github.com/docker/compose/issues/374#issuecomment-310266246
Copy-pasted from the link:
# start.sh
#!/bin/sh
set -eu
docker volume create --name=gql-sync
echo "Building docker containers"
docker-compose build
echo "Running tests inside docker container"
docker-compose up -d pubsub
docker-compose up -d mongo
docker-compose up -d botms
docker-compose up -d events
docker-compose up -d identity
docker-compose up -d importer
docker-compose run status
docker-compose run testing
exit $?
# status.sh
#!/bin/sh
set -eu
echo "Attempting to connect to bots"
until $(nc -zv botms 3000); do
printf '.'
sleep 5
done
echo "Attempting to connect to events"
until $(nc -zv events 3000); do
printf '.'
sleep 5
done
echo "Attempting to connect to identity"
until $(nc -zv identity 3000); do
printf '.'
sleep 5
done
echo "Attempting to connect to importer"
until $(nc -zv importer 8080); do
printf '.'
sleep 5
done
echo "Was able to connect to all"
exit 0
# in my docker-compose file
status:
  image: yikaus/alpine-bash
  volumes:
    - "./internals/scripts:/scripts"
  command: "sh /scripts/status.sh"
  depends_on:
    - "mongo"
    - "importer"
    - "events"
    - "identity"
    - "botms"
I have the following docker-compose.yml.
version: "3.1"
services:
db:
container_name: ${MYSQL_CONTAINER}
image: mysql:5.7.30
volumes:
- ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
- ./slow_log.cnf:/etc/mysql/my.cnf
- ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_USER=${MYSQL_USER}
ports:
- ${MYSQL_PORT}:3306
entrypoint: ""
command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
restart: on-failure
backend:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command A
&& command B
&& ... "
restart: unless-stopped
I am scaling the backend service, so my startup command is sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=10.
The problem I am facing is that command A and command B in the backend service run on every container startup (i.e. they are run 10 times).
But I want command A to run only once for all the backend-related containers, while command B should run for every container.
Any suggestions on how to accomplish this?
I'm not entirely sure there is an out-of-the-box solution for your requirement.
However, I can suggest a workaround. You can duplicate your backend service in docker-compose and run one backend service with both command A and command B, while the other backend runs only command B.
Then, when you want to scale, you scale the backend that has only command B.
version: "3.1"
services:
db:
container_name: ${MYSQL_CONTAINER}
image: mysql:5.7.30
volumes:
- ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
- ./slow_log.cnf:/etc/mysql/my.cnf
- ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_USER=${MYSQL_USER}
ports:
- ${MYSQL_PORT}:3306
entrypoint: ""
command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
restart: on-failure
backend_default:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command A
&& command B
&& ... "
restart: unless-stopped
backend:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command B
&& ... "
restart: unless-stopped
Now you can use the scale option like below:
sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=9
Now, if there happens to be a scenario where you need only one backend to run, you can use profiles in docker-compose to run backend only when a specific profile is given with the docker-compose command. That means only backend_default will run if that profile is not given, and hence the scale is 1.
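A minimal sketch of that profiles idea (the profile name "scaled" is my own; profiles require a recent docker-compose, 1.28+):

services:
  backend:
    profiles:
      - scaled
    # ...rest of the backend service as above

# without the profile, only backend_default starts:
docker-compose up -d
# with the profile, scale the profiled backend as usual:
docker-compose --profile scaled up -d --scale backend=9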
Hope this helps you. Cheers 🍻 !!!
If BACKEND_IMAGE is built by you, you should do RUN command A in your Dockerfile. The RUN line is executed only once, at build time (so you will need to make sure that meshes with your needs), while the ENTRYPOINT and CMD lines are only run upon execution of the container. The command in the docker-compose file overrides the CMD line.
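A tiny sketch of that split (command-a/command-b stand in for the asker's placeholders; the base image is hypothetical):

FROM your-base-image          # hypothetical base
RUN command-a                 # build time: runs once per image build
CMD ["command-b"]             # run time: runs in every container

Note that RUN executes before any volumes are mounted or the compose network exists, so whatever command A produces must be baked into the image itself.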
I am dockerizing my existing application, but there's a strange issue. When I start my application with
docker-compose up
each service in the docker-compose file runs successfully with no issues. But there are some services which I don't want to run sometimes (celery, celerybeat, etc.). For those cases I run
docker-compose run nginx
The above command should run the nginx, web, and db services as configured in docker-compose.yml, but it only runs web and db, not nginx.
Here's my yml file
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:12
    env_file: .env
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - "5431:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
  nginx:
    image: nginx:1.14
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - ./config/nginx/:/etc/nginx/conf.d
      - ./MyAPP/static:/var/www/MyAPP.me/static/
    depends_on:
      - web
  web:
    restart: always
    build: ./MyAPP
    command: bash -c "
      python manage.py collectstatic --noinput
      && python manage.py makemigrations
      && python manage.py migrate
      && gunicorn --certfile=/etc/certs/localhost.crt --keyfile=/etc/certs/localhost.key MyAPP.wsgi:application --bind 0.0.0.0:443 --reload"
    expose:
      - "443"
    depends_on:
      - db
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
      - ./config/nginx/certs/:/etc/certs
      - ./MyAPP/static:/var/www/MyAPP.me/static/
  broker:
    image: redis:alpine
    expose:
      - "6379"
  celery:
    build: ./MyAPP
    command: celery -A MyAPP worker -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  celery-beat:
    build: ./MyAPP
    command: celery -A MyAPP beat -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  comment-classifier:
    image: codait/max-toxic-comment-classifier
volumes:
  dbdata:
TL;DR: docker-compose up nginx
There's a distinct difference between docker-compose up and docker-compose run. The first builds, (re)creates, starts, and attaches to containers for a service. The second runs a one-off command against a service. When you do docker-compose run, it starts db and web because nginx depends on them, then it runs a single command in an nginx container and exits. So you have to use docker-compose up nginx in order to get what you want.
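For comparison, roughly how the two behave here (--service-ports is a standard docker-compose run flag):

# start nginx plus its whole depends_on chain (web, db), detached
docker-compose up -d nginx

# one-off container for the nginx service; ports from the compose file
# are NOT published unless you pass --service-ports
docker-compose run --service-ports nginx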
I'm trying to set up a Shopware Docker container for development. I set up a Dockerfile for the Shopware initialization process, but every time I run the build process Shopware returns this error message:
mysql -u 'root' -p'root' -h 'dbs' --port='3306' -e "DROP DATABASE IF EXISTS `shopware6dev`"
ERROR 2005 (HY000): Unknown MySQL server host 'dbs' (-2)
I think Docker sets up the default network only after all build processes are done, but I need to connect before all containers are ready. The depends_on option doesn't help here. I hope someone has an idea how to solve this problem.
This is my docker-compose file:
version: '3'
services:
  shopwaredev:
    build:
      context: ./docker/web
      dockerfile: Dockerfile
    volumes:
      - ./log:/var/log/apache2
    environment:
      - VIRTUAL_HOST=shopware6dev.test,www.shopware6dev.test
      - HTTPS_METHOD=noredirect
    restart: on-failure:10
    depends_on:
      - dbs
  adminer:
    image: adminer
    restart: on-failure:10
    ports:
      - 8080:8080
  dbs:
    image: "mysql:5.7"
    volumes:
      - ./mysql57:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=shopware6dev
    restart: on-failure:10
  nginx-proxy:
    image: solution360/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./ssl:/etc/nginx/certs
    restart: on-failure:10
And this is my Dockerfile for the shopwaredev web container:
FROM solution360/apache24-php74-shopware6
WORKDIR /var/www/html
RUN rm index.html
RUN git clone https://github.com/shopware/development.git .
RUN cp .psh.yaml.dist .psh.yaml
RUN sed -i 's|DB_USER: "app"|DB_USER: "root"|g' .psh.yaml
RUN sed -i 's|DB_PASSWORD: "app"|DB_PASSWORD: "root"|g' .psh.yaml
RUN sed -i 's|DB_HOST: "mysql"|DB_HOST: "dbs"|g' .psh.yaml
RUN sed -i 's|DB_NAME: "shopware"|DB_NAME: "shopware6dev"|g' .psh.yaml
RUN sed -i 's|APP_URL: "http://localhost:8000"|APP_URL: "http://shopware6dev.test"|g' .psh.yaml
RUN ./psh.phar install
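The usual way around this (a sketch, not from the original thread): RUN steps execute at image build time, when the compose network and the dbs container do not yet exist, so the hostname dbs cannot resolve. Deferring the install from the Dockerfile to an entrypoint moves it to container start, when the network is up. The entrypoint.sh below is my own, and assumes the mysql client is available in the image:

#!/bin/sh
# entrypoint.sh -- wait for the dbs container, then run the Shopware install
until mysql -u root -proot -h dbs --port 3306 -e 'SELECT 1' >/dev/null 2>&1; do
  echo "Waiting for dbs to accept connections..."
  sleep 5
done
./psh.phar install
exec "$@"   # hand over to the image's original CMD

In the Dockerfile you would drop the RUN ./psh.phar install line and add something like COPY entrypoint.sh / plus ENTRYPOINT ["/entrypoint.sh"].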
I run this command to pull the images and bring the services up:
docker-compose -f dc-all.yml up
But I noticed the data only lives in the containers; e.g. the database data is gone once I take the stack down and bring it up again.
The command I use to take it down:
docker-compose -f dc-all.yml down
What is the best practice to keep the data?
Or how can I keep Docker running without a restart, e.g. so that a Windows restart does not…
Sample YML file:
networks:
  test:
services:
  db:
    networks:
      - pm
    image: microsoft/mssql-server-linux:2017-latest
    container_name: mssql
    hostname: mssql
    volumes:
      - ./.db:/var/opt/mssql/
      - /var/opt/mssql/data
      - ./sqlinit.sql:/scripts/sqlinit.sql
    ports:
      - 8010:1433
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_SA_PASSWORD=Test123!
    command:
      - /bin/bash
      - -c
      - |
        # Launch MSSQL and send to background
        /opt/mssql/bin/sqlservr &
        # Wait for it to be available
        echo "Waiting for MS SQL to be available"
        /opt/mssql-tools/bin/sqlcmd -l 30 -S mssql -h-1 -V1 -U sa -P Test123! -Q "SET NOCOUNT ON SELECT \"YAY WE ARE UP\", @@servername"
        is_up=$$?
        while [ $$is_up -ne 0 ] ; do
          echo -e $$(date)
          /opt/mssql-tools/bin/sqlcmd -l 30 -S mssql -h-1 -V1 -U sa -P Test123! -Q "SET NOCOUNT ON SELECT \"YAY WE ARE UP\", @@servername"
          is_up=$$?
          sleep 5
        done
        # Run every script in /scripts
        # TODO set a flag so that this is only done once on creation,
        # and not every time the container runs
        #for foo in /scripts/*.sql
        /opt/mssql-tools/bin/sqlcmd -S mssql -U sa -P Test123! -l 30 -e -i /scripts/sqlinit.sql
        #done
        # So that the container doesn't shut down, sleep this thread
        sleep infinity
  zookeeper:
    networks:
      - pm
    image: wurstmeister/zookeeper
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ALLOW_ANONYMOUS_LOGIN: 1
  kafka:
    networks:
      - pm
    image: wurstmeister/kafka
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_ADVERTISED_HOST_NAME: kafka
  schema-registry:
    networks:
      - pm
    image: confluentinc/cp-schema-registry:5.2.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: 'schema-registry'
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  rest-proxy:
    networks:
      - pm
    image: confluentinc/cp-kafka-rest:5.2.1
    depends_on:
      - zookeeper
      - kafka
      - schema-registry
    ports:
      - "8082:8082"
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: 'rest-proxy'
      KAFKA_REST_BOOTSTRAP_SERVERS: 'kafka:9092'
      KAFKA_REST_LISTENERS: "http://rest-proxy:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      KAFKA_REST_ZOOKEEPER_CONNECT: 'zookeeper:2181'
  katalon:
    networks:
      - pm
    image: katalonstudio/katalon:latest
    container_name: katalon
    hostname: katalon
    depends_on:
      - db
      - zookeeper
      - kafka
      - schema-registry
      - rest-proxy
    volumes:
      - ../katalon-service:/katalon/katalon/source
    entrypoint: katalon-execute.sh
    command:
      - -browserType=Web Service
      - -retry=0
      - -statusDelay=15
      - -testSuitePath=Test Suites/TS_IntegrationTestSuites_SQL
You can either bind-mount a docker host directory in your compose file:
volumes:
  - /data:/app
With the above, all the data generated inside your /app directory will show up in /data on your docker host.
OR
Or use Docker named volumes:
volumes:
  - mydata:/data    # under the service
volumes:
  mydata:           # at the top level of the compose file
The above creates a named volume which can be shared with other services and is not destroyed when you do a docker-compose down. The data in this volume stays on your host itself. You can get the directory details using the command below:
docker inspect mydata
Sample output:
[
    {
        "CreatedAt": "2018-09-24T05:40:37Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "mydata",
            "com.docker.compose.version": "1.22.0",
            "com.docker.compose.volume": "data"
        },
        "Mountpoint": "/var/lib/docker/volumes/mydata/_data",
        "Name": "mydata",
        "Options": null,
        "Scope": "local"
    }
]
Mountpoint is where your data exists on the host.
Ref - https://docs.docker.com/compose/compose-file/#volume-configuration-reference
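Applied to the mssql service from the question above, a sketch might look like this (the volume name mssqldata is my own):

services:
  db:
    image: microsoft/mssql-server-linux:2017-latest
    volumes:
      - mssqldata:/var/opt/mssql   # named volume, survives docker-compose down
volumes:
  mssqldata: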
You use volumes like you do in Docker; see the full documentation for all the details. But basically you want:
services:
  some_service:
    volumes:
      - $PWD/data:/path/to/data
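One related detail: named volumes survive a plain docker-compose down, but not down with the -v flag:

docker-compose down      # removes containers, keeps named volumes
docker-compose down -v   # also removes the named volumes declared in the file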
I've got a problem with my docker-compose setup. I'm trying to create multiple containers with one spring-cloud-config-server, and I'm trying to use the spring-cloud server from my webapps container.
This is the list of my containers:
1 for the database (db)
1 for the spring cloud config server (configproperties)
1 for some webapps (webapps)
1 nginx for reverse proxying (cnginx)
I already tried using localhost, and my config-server name, as the host in the environment variable SPRING_CLOUD_CONFIG_URI and in bootstrap.properties.
My 'configproperties' container uses 8085 in its server.xml.
This is my docker-compose.yml:
version: '3'
services:
  cnginx:
    image: cnginx
    ports:
      - 80:80
    restart: always
    depends_on:
      - configproperties
      - cprocess
      - webapps
  db:
    image: postgres:9.3
    environment:
      POSTGRES_USER: xxx
      POSTGRES_PASSWORD: xxx
    restart: always
    command:
      - -c
      - max_prepared_transactions=100
  configproperties:
    restart: on-failure:2
    image: configproperties
    depends_on:
      - db
    expose:
      - "8085"
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/mydbName?currentSchema=public
  webapps:
    restart: on-failure:2
    image: webapps
    links:
      - "configproperties"
    environment:
      - SPRING_CLOUD_CONFIG_URI=http://configproperties:8085/config-server
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - configproperties
    expose:
      - "8085"
When I run my docker-compose, my webapps deploy with some errors like:
- Could not resolve placeholder 'repository.temps' in value "${repository.temps}"
But when I use Postman and send a request like this:
http://localhost/config-server/myApplication/docker/latest
it works properly; the request returns all the configuration for 'myApplication'.
I think I've missed something, but I can't find it...
Can anyone help me?
Regards,
To use the Spring Cloud Config server, we just need to start our webapps after the config server has finished initializing.
#!/bin/bash
CONFIG_SERVER_URL=$SPRING_CLOUD_CONFIG_URI/myApp/production/latest
# Check that the config server is up
echo 'Waiting for tomcat to be available'
attempt_counter=0
max_attempts=10
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' $CONFIG_SERVER_URL)" != "200" ]]; do
  if [ ${attempt_counter} -eq ${max_attempts} ]; then
    echo "Max attempts reached on $CONFIG_SERVER_URL"
    exit 1
  fi
  attempt_counter=$(($attempt_counter+1))
  echo "Return code: $(curl -s -o /dev/null -w ''%{http_code}'' $CONFIG_SERVER_URL)"
  echo "Attempt to connect to $CONFIG_SERVER_URL: $attempt_counter/$max_attempts"
  sleep 15
done
echo 'Tomcat is available'
mv /usr/local/tomcat/webappsTmp/* /usr/local/tomcat/webapps/
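A sketch of wiring that script into the webapps service (the script path and entrypoint wiring are my own assumptions; the script would also need to end by actually starting Tomcat, e.g. exec catalina.sh run, after moving the webapps into place):

webapps:
  image: webapps
  volumes:
    - ./wait-for-config.sh:/usr/local/bin/wait-for-config.sh
  entrypoint: ["/bin/bash", "/usr/local/bin/wait-for-config.sh"]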