We discovered that some of our containers write so many log messages that it will become a problem in the future if we don't limit the size. I found an article that seems to be the perfect solution to our problem.
So I took this docker-compose file:
version: "3.7"
services:
rabbit:
image: rabbitmq:1.0.4
container_name: rabbitmq
volumes:
- mysql-designer-rabbitmq:/var/lib/rabbitmq/mnesia
environment:
- "RABBITMQ_DEFAULT_USER=user"
- "RABBITMQ_DEFAULT_PASS=pw"
ports:
- "15674:15674"
- "15672:15672"
- "5672:5672"
- "1883:1883"
restart: unless-stopped
mysql:
image: mysql:1.0.5
container_name: mysql
volumes:
- mysql-designer-db:/var/lib/mysql
environment:
- "MYSQL_ALLOW_EMPTY_PASSWORD=true"
- "MYSQL_DATABASE=db"
- "MYSQL_USER=user"
- "MYSQL_PASSWORD=pw"
- "MYSQL_ROOT_PASSWORD="
depends_on:
- rabbit
restart: unless-stopped
ports:
- "3306:3306"
sitestructure:
image: sitestructure:319
container_name: sitestructure
volumes:
- ./.docker/sitestructure/appsettings.json:/app/appsettings.json
depends_on:
- mysql
- rabbit
links:
- mysql
- rabbit
ports:
- "5000:5000"
restart: unless-stopped
deploy:
restart_policy:
condition: on-failure
max_attempts: 10
and edited the sitestructure service, adding these lines to it:
logging:
  driver: "json-file"
  options:
    max-file: 5
    max-size: 10m
Now when I try to update the containers, the command line just says
Recreating sitestructure ...
and this seems to never end. Only if I remove those lines from the compose file can I use it again.
I got it working, though not per container but for all containers at the same time. I edited my daemon.json file and added
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
After restarting the Docker daemon and removing and re-adding all containers, it now works.
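For what it's worth, the per-service logging options in a compose file are expected to be YAML strings, so the unquoted values in the snippet above may be what tripped up docker-compose. A minimal sketch of the per-service variant with quoted values (untested against this exact setup):

  sitestructure:
    # ... image, volumes, ports as before ...
    logging:
      driver: "json-file"
      options:
        # json-file options must be strings, not bare numbers
        max-size: "10m"
        max-file: "5"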
I'm having a problem persisting data with docker-compose.
I want my chatmysql service to persist the data I put inside a database, but every time I run docker-compose down it all vanishes.
I checked the directory /var/lib/docker/volumes to see if it stores data there while the containers are running, and the volume was completely empty.
I didn't have that issue when I was running containers with the docker run command, so I guess it's the fault of my docker-compose.yaml file. Can someone help me?
I'm running this on Ubuntu 20.04.
version: '3'
services:
  chatmysql:
    image: mysql/mysql-server
    container_name: chatmysql
    hostname: db
    user: root
    networks:
      - chatnet
    ports:
      - 3307:3306
    volumes:
      - chatmysqlvolume:/lib/var/mysql
  chatbackend:
    depends_on:
      - chatmysql
    build:
      context: backend/src
    container_name: chatbackend
    hostname: backend
    networks:
      - chatnet
    ports:
      - 8080:8080
    environment:
      - MYSQLUSERNAME=${MYSQLUSERNAME:-user}
      - MYSQLPASSWORD=${MYSQLPASSWORD:?database password not set}
      - MYSQLHOST=${MYSQLHOST:-db}
      - MYSQLPORT=${MYSQLPORT:-3306}
      - MYSQLDBNAME=${MYSQLDBNAME:-test}
    restart: always
    deploy:
      restart_policy:
        condition: on-failure
  chatfrontend:
    build: frontend
    container_name: chatfrontend
    hostname: front
    networks:
      - chatnet
    ports:
      - 3000:3000
volumes:
  chatmysqlvolume:
networks:
  chatnet:
    driver: bridge
You need to change the mounted volume path: MySQL stores its data in /var/lib/mysql, not /lib/var/mysql. Try this:
version: '3.7'
services:
  chatmysql:
    image: mysql/mysql-server
    container_name: chatmysql
    hostname: db
    user: root
    networks:
      - chatnet
    ports:
      - 3307:3306
    volumes:
      - chatmysqlvolume:/var/lib/mysql
  chatbackend:
    depends_on:
      - chatmysql
    build:
      context: backend/src
    container_name: chatbackend
    hostname: backend
    networks:
      - chatnet
    ports:
      - 8080:8080
    environment:
      - MYSQLUSERNAME=${MYSQLUSERNAME:-user}
      - MYSQLPASSWORD=${MYSQLPASSWORD:?database password not set}
      - MYSQLHOST=${MYSQLHOST:-db}
      - MYSQLPORT=${MYSQLPORT:-3306}
      - MYSQLDBNAME=${MYSQLDBNAME:-test}
    restart: always
    deploy:
      restart_policy:
        condition: on-failure
  chatfrontend:
    build: frontend
    container_name: chatfrontend
    hostname: front
    networks:
      - chatnet
    ports:
      - 3000:3000
volumes:
  chatmysqlvolume:
networks:
  chatnet:
    driver: bridge
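To confirm the data is now being persisted, you can inspect the named volume on the host (a quick sanity check; the actual volume name is prefixed with your compose project name, which defaults to the directory name, so adjust accordingly):

docker volume ls
docker volume inspect <project>_chatmysqlvolume

Note that docker-compose down leaves named volumes in place by design; they are only deleted if you pass the -v/--volumes flag.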
OnlyOffice is not opening previously saved documents after a docker-compose down. I needed to increase the memory of my Nextcloud instance (docker container), so I stopped all the containers, modified the docker-compose file, and set everything up again.
There are no issues with new documents so far, but when editing previously saved ones, OnlyOffice opens a blank document, even though the file sizes are intact (no errors in the console) and Nextcloud still shows their sizes in KB.
version: "2.3"
services:
nextcloud:
container_name: nextcloud
image: nextcloud:latest
hostname: MYDOMAIN
stdin_open: true
tty: true
restart: always
expose:
- "80"
networks:
- cloud_network
volumes:
- /mnt/apps/nextcloud/data:/var/www/html
environment:
- MYSQL_HOST=mariadb
- PHP_MEMORY_LIMIT=-1
env_file:
- db.env
mem_limit: 8g
depends_on:
- mariadb
mariadb:
container_name: mariadb
image: mariadb
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-file-per-table=1 --skip-innodb-read-only-compressed
restart: always
networks:
- cloud_network
volumes:
- mariadb_volume:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=SOMEPASSWORD
env_file:
- db.env
onlyoffice:
container_name: onlyoffice
image: onlyoffice/documentserver:latest
stdin_open: true
tty: true
restart: always
networks:
- cloud_network
expose:
- "80"
volumes:
#- /mnt/apps/onlyoffice/data:/var/www/onlyoffice/Data
- office_data_volume:/var/www/onlyoffice/Data
#- onlyoffice_log_volume:/var/log/onlyoffice
- office_db_volume:/var/lib/postgresql
caddy:
container_name: caddy
image: abiosoft/caddy:no-stats
stdin_open: true
tty: true
restart: always
ports:
- 80:80
- 443:443
networks:
- cloud_network
environment:
- CADDYPATH=/certs
- ACME_AGREE=true
# CHANGE THESE OR THE CONTAINER WILL FAIL TO RUN
- CADDY_LETSENCRYPT_EMAIL=MYEMAIL
- CADDY_EXTERNAL_DOMAIN=MYDOMAIN
volumes:
- /mnt/apps/caddy/certs:/certs:rw
- /mnt/apps/caddy/Caddyfile:/etc/Caddyfile:ro
networks:
cloud_network:
driver: "bridge"
volumes:
office_data_volume:
office_db_volume:
mariadb_volume:
Please also note that you must ALWAYS disconnect your users before stopping or restarting the container. See https://github.com/ONLYOFFICE/Docker-DocumentServer#document-server-usage-issues
sudo docker exec onlyoffice documentserver-prepare4shutdown.sh
It seems that every time the containers are recreated in a Nextcloud + OnlyOffice setup, new tokens are generated to authorize access to the documents through headers.
I solved it by adding a third docker volume to preserve the Document Server configuration files. Fortunately I had a backup of my files; I removed the containers, added them again, and everything is working now.
- office_config_volume:/etc/onlyoffice/documentserver

onlyoffice:
  container_name: onlyoffice
  image: onlyoffice/documentserver:latest
  stdin_open: true
  tty: true
  restart: unless-stopped
  networks:
    - cloud_network
  expose:
    - "80"
  volumes:
    - office_data_volume:/var/www/onlyoffice/Data
    - office_db_volume:/var/lib/postgresql
    - office_config_volume:/etc/onlyoffice/documentserver
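Remember that the new named volume also has to be declared under the top-level volumes key, alongside the existing ones (a sketch matching the compose file above):

volumes:
  office_data_volume:
  office_db_volume:
  office_config_volume:
  mariadb_volume: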
I am struggling to get a Docker swarm stack set up using Traefik. I decided to try Traefik as an alternative to jwilder/nginx-proxy, since unfortunately the latter does not seem to support Docker swarm mode. But I'm finding Traefik to be a problem (probably my fault!).
I have a WordPress container (replicated) and a MySQL container, alongside the Traefik container. All of the containers in the swarm are created and start, and docker logs <container_id> reveals no errors, but when I visit 'example.org' (not the real domain) I just see 404 page not found. So it must be a communication issue between Traefik and the containers I wish to proxy. However, I also don't see the Traefik dashboard, so perhaps something else is going on.
Here is my docker-compose file:
version: '3'
services:
  traefik:
    image: traefik:latest
    command: --api.insecure=true \
      --providers.docker=true \
      --providers.docker.exposedbydefault=false \
      --providers.docker.swarmmode=true \
      --providers.docker.watch=true \
      --logLevel=DEBUG
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
  db:
    image: mysql:5.7
    volumes:
      - ./db/initdb.d:/docker-entrypoint-initdb.d
    networks:
      - traefik
    environment:
      MYSQL_ROOT_PASSWORD: <root_password>
      MYSQL_DATABASE: <db_name>
      MYSQL_USER: <db_user>
      MYSQL_PASSWORD: <user_password>
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
  app:
    image: my-repo/wordpress:latest
    depends_on:
      - db
    networks:
      - traefik
    environment:
      - VIRTUAL_PORT=80
      - VIRTUAL_HOST=example.org
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      labels:
        - "traefik.enable=true"
        - "traefik.frontend.rule=Host:example.org"
networks:
  traefik:
The original nginx-proxy setup works nicely but, as I say, won't allow me to run a swarm. I have been experimenting with Traefik for only a day, so it's probably a schoolboy error of some kind.
N.B.: I am aliasing my actual .org domain to 127.0.0.1 in my /etc/hosts. Perhaps that's an issue? I can't imagine it would be; I've been running Docker containers with that setup for ages without a problem.
OK, so I got it to work in non-swarm mode with the following docker-compose file:
version: '3'
services:
  traefik:
    image: "traefik:v2.0.0-rc3"
    container_name: "traefik"
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
  db:
    image: mysql:5.7
    volumes:
      - ./db/initdb.d:/docker-entrypoint-initdb.d
    networks:
      - traefik
    environment:
      MYSQL_ROOT_PASSWORD: <root_password>
      MYSQL_DATABASE: <db_name>
      MYSQL_USER: <db_user>
      MYSQL_PASSWORD: <user_password>
  app:
    image: my-repo/wordpress:latest
    depends_on:
      - db
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`example.org`)"
      - "traefik.http.routers.app.entrypoints=web"
networks:
  traefik:
And then I tried the following swarm configuration, which worked:
version: '3'
services:
  traefik:
    image: "traefik:v2.0.0-rc3"
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.swarmmode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
    deploy:
      mode: global
      placement:
        constraints: [node.role==manager]
  db:
    image: mysql:5.7
    volumes:
      - ./db/initdb.d:/docker-entrypoint-initdb.d
    networks:
      - traefik
    environment:
      MYSQL_ROOT_PASSWORD: <root_password>
      MYSQL_DATABASE: <db_name>
      MYSQL_USER: <db_user>
      MYSQL_PASSWORD: <user_password>
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
  app:
    image: my-repo/wordpress:latest
    networks:
      - traefik
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.app.rule=Host(`example.org`)"
        - "traefik.http.routers.app.entrypoints=web"
        - "traefik.http.services.app.loadbalancer.server.port=80"
networks:
  traefik:
More specifically, I got it to work only after adding the command
- "--providers.docker.endpoint=unix:///var/run/docker.sock"
and the proxied container label
- "traefik.http.services.app.loadbalancer.server.port=80"
... so I'm not really sure which of those did the trick. I'd be grateful for any light that could be shed on that.
It's working now, though, at least.
UPDATE: The Traefik docs state that the label
traefik.http.services.<service_name>.loadbalancer.server.port
is mandatory for Docker swarm mode (look under Services on that page). So it seems as if I was just missing that.
For the past 3 days I have been trying to connect two docker containers generated by two separate docker-compose files.
I have tried a lot of approaches, but none seem to work. Currently, after adding a network to the compose files, the service with the added network doesn't start. My goal is to access an endpoint from another container.
Endpoint-API compose file:
version: "3.1"
networks:
test:
services:
mariadb:
image: mariadb:10.1
container_name: list-rest-api-mariadb
working_dir: /application
volumes:
- .:/application
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=list-api
- MYSQL_USER=list-api
- MYSQL_PASSWORD=root
ports:
- "8003:3306"
webserver:
image: nginx:stable
container_name: list-rest-api-webserver
working_dir: /application
volumes:
- .:/application
- ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8005:80"
networks:
- test
php-fpm:
build: docker/php-fpm
container_name: list-rest-api-php-fpm
working_dir: /application
volumes:
- .:/application
- ./docker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
Consumer compose file:
version: "3.1"
networks:
test:
services:
consumer-mariadb:
image: mariadb:10.1
container_name: consumer-service-mariadb
working_dir: /consumer
volumes:
- .:/consumer
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=consumer
- MYSQL_USER=consumer
- MYSQL_PASSWORD=root
ports:
- "8006:3306"
consumer-webserver:
image: nginx:stable
container_name: consumer-service-webserver
working_dir: /consumer
volumes:
- .:/consumer
- ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8011:80"
networks:
- test
consumer-php-fpm:
build: docker/php-fpm
container_name: consumer-service-php-fpm
working_dir: /consumer
volumes:
- .:/consumer
- ./docker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
Could someone tell me the best way of accessing the API container from within the consumer? I'm losing my mind on this one.
Your two

networks:
  test:

declarations don't refer to the same network; each compose project creates its own, so they are independent and don't talk to each other.
What you can do is have an external, pre-existing network that both compose files can reach and share. Create it with docker network create, and then refer to it in your compose files with

networks:
  default:
    external:
      name: some-external-network-here
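Concretely, that would look something like this (a sketch; the network name and file paths are illustrative). Once both projects are attached to the shared network, the consumer containers can reach the API by container name, e.g. http://list-rest-api-webserver:

# create the shared network once, outside of either compose project
docker network create some-external-network-here

# then bring up both projects; each one joins the existing network
docker-compose -f endpoint-api/docker-compose.yml up -d
docker-compose -f consumer/docker-compose.yml up -d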
I am setting up the following docker containers with the following 2 docker-compose files:
version: '3.7'
services:
  mysql:
    image: mysql:5.7
    restart: on-failure
    environment:
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    volumes:
      - db_data:/var/lib/mysql:rw
    ports:
      - '${MYSQL_PORT}:3306'
    networks:
      - shared_mysql
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    restart: on-failure
    volumes:
      - '../:/usr/src/app'
    user: ${LOCAL_USER}
    networks:
      - shared_mysql
  api_nginx:
    image: nginx:1.15.3-alpine
    restart: on-failure
    volumes:
      - '../public/:/usr/src/app'
      - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
    ports:
      - '21180:80'
    depends_on:
      - php
    networks:
      - shared_mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: on-failure
    ports:
      - '${PHPMYADMIN_PORT}:80'
    environment:
      PMA_HOST: mysql
      MYSQL_USERNAME: ${MYSQL_USER}
      MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
    networks:
      - shared_mysql
volumes:
  db_data:
networks:
  shared_mysql:
version: '3.7'
services:
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    restart: on-failure
    volumes:
      - '../:/usr/src/app'
    user: ${LOCAL_USER}
    networks:
      - api_21s_shared_mysql
  auth_nginx:
    image: nginx:1.15.3-alpine
    restart: on-failure
    volumes:
      - '../public/:/usr/src/app'
      - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
    ports:
      - '21181:80'
    depends_on:
      - php
    networks:
      - api_21s_shared_mysql
volumes:
  db_data:
networks:
  api_21s_shared_mysql:
    external: true
When I visit http://localhost:21180/, I always get the correct website. But when I visit http://localhost:21181/, I randomly get the content of one site or the other.
I tried to set the networks up separately.
I'd like it to work with the port numbers, but I don't want the sites to get mixed up.
I am hoping someone can help me. Thank you in advance.
When services are started with docker-compose, they are discoverable within the docker network in several ways: by service name, by container name, by IP, etc.
In your case you are using discovery by service name, since your nginx configuration references "php:9000".
At this point docker looks for a service named "php" and finds two. It treats them as replicas of the same service and sends traffic to them in a round-robin pattern (first request to the first instance, second to the second instance, third to the first again, and so on).
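You can see this for yourself from inside one of the containers on the shared network (a quick check; nslookup is available in the busybox-based nginx:alpine image, other images may need getent or dig instead):

# resolve the ambiguous service name from within the first project;
# two A records come back, one per "php" service
docker-compose exec api_nginx nslookup php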
Solution
Name the services differently, just like you already do with your nginx services (auth_nginx and api_nginx).
Then, in the default.conf of each project, change the line referring to php:9000 accordingly.
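For instance (a sketch; the new service name api_php is illustrative), rename the service in the first project:

# first project's docker-compose.yml
services:
  api_php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    restart: on-failure
    networks:
      - shared_mysql

and point that project's nginx config at the renamed service:

# first project's docker/nginx/default.conf
fastcgi_pass api_php:9000;

With unique service names in each project, the DNS lookup returns a single match and the round-robin mixing stops.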