Docker multi-container logging cap

I'm trying to cap the maximum size of Docker's log files: each container's log file should max out at 100 MB. So each container, such as the edge, worker, etc., should only be allowed a log file of 100 MB.
I tried to insert:
  log_opt:
    max-size: 100m
at the end of my docker-compose.yml file below, but I'm getting an error.
Where should I place it? When I place it inside each container definition I also get an error. I read the Docker docs, but nowhere do they say exactly where to place the option.
This is my docker-compose.yml file:
version: '2.0'
services:
  ubuntu:
    image: ubuntu
    volumes:
      - box:/box
  cache:
    image: redis:3.0
  rabbitmq:
    image: rabbitmq:3-management
    volumes:
      - ${DATA}/rabbitmq:/var/lib/rabbitmq
    ports:
      - "15672:15672"
      - "5672:5672"
  placements-store:
    image: redis:3.0
    command: redis-server ${REDIS_OPTIONS}
    ports:
      - "6379:6379"
  api:
    image: ruby:2.3
    command: bundle exec puma -C config/puma.rb
    env_file:
      - ./.env
    working_dir: /app
    volumes:
      - .:/app/
      - box:/box
    expose:
      - 3000
    depends_on:
      - cache
      - placements-store
  worker:
    image: ruby:2.3
    command: bundle exec sidekiq -C ./config/schedule.yml -q default -q high_priority,5 -c 10
    env_file:
      - ./.env
    working_dir: /app
    environment:
      INSTANCE_TYPE: worker
    volumes:
      - .:/app/
      - box:/box
    depends_on:
      - cache
      - placements-store
  sidekiq-monitor:
    image: ruby:2.3
    command: bundle exec thin start -R sidekiq.ru -p 9494
    env_file:
      - ./.env
    working_dir: /app
    volumes:
      - .:/app/
      - box:/box
    depends_on:
      - cache
    expose:
      - 9494
  sneakers:
    image: ruby:2.3
    command: bundle exec rails sneakers:run
    env_file:
      - ./.env
    working_dir: /app
    environment:
      INSTANCE_TYPE: worker
    volumes:
      - .:/app/
      - box:/box
    depends_on:
      - cache
      - placements-store
      - rabbitmq
  edge:
    image: ruby:2.3
    command: bundle exec thin start -R config.ru -p 3000
    environment:
      REDIS_URL: redis://placements-store
      RACK_ENV: development
      BUNDLE_PATH: /box
      RABBITMQ_HOST: rabbitmq
    working_dir: /app
    volumes:
      - ./edge:/app/
      - box:/box
    depends_on:
      - placements-store
      - rabbitmq
    expose:
      - 3000
  proxy:
    image: openresty/openresty:latest-xenial
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./config/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
volumes:
  box:
  # node_modules:
  # bower_components:
  # client_dist:
This is what I tried, for example inserting under the rabbitmq container:
version: '2.0'
services:
  ubuntu:
    image: ubuntu
    volumes:
      - box:/box
  cache:
    image: redis:3.0
  rabbitmq:
    image: rabbitmq:3-management
    #volumes:
    #  - ${DATA}/rabbitmq:/var/lib/rabbitmq
    ports:
      - "15672:15672"
      - "5672:5672"
    log_opt:
      max-size: 50m
  placements-store:
    image: redis:3.0
    command: redis-server ${REDIS_OPTIONS}
    ports:
      - "6379:6379"
This is the error I get:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.rabbitmq: 'log_opt'
I tried changing log_opt: to options: and got the same error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.rabbitmq: 'options'
Also, the Docker and Compose versions:
docker --version && docker-compose --version
Docker version 1.11.2, build b9f10c9/1.11.2
docker-compose version 1.9.0, build 2585387
UPDATE:
I tried using the logging option as the docs describe (for compose file version 2.0):
version: '2.0'
services:
  ubuntu:
    image: ubuntu
    volumes:
      - box:/box
  cache:
    image: redis:3.0
  rabbitmq:
    image: rabbitmq:3-management
    #volumes:
    #  - ${DATA}/rabbitmq:/var/lib/rabbitmq
    ports:
      - "15672:15672"
      - "5672:5672"
    logging:
      driver: "json-file"
      options:
        max-size: 100m
        max-file: 1
  placements-store:
    image: redis:3.0
    command: redis-server ${REDIS_OPTIONS}
    ports:
      - "6379:6379"
Getting the error:
ERROR: for rabbitmq Cannot create container for service rabbitmq: json: cannot unmarshal number into Go value of type string
ERROR: Encountered errors while bringing up the project.
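That unmarshal error comes from max-file: 1 being parsed as a YAML number, while Docker's json-file driver expects every logging option value as a string. Quoting the values should get past it; a minimal sketch of just the rabbitmq service, with the rest of the file unchanged:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
      - "5672:5672"
    logging:
      driver: "json-file"
      options:
        # quoted so YAML hands strings, not numbers, to the driver
        max-size: "100m"
        max-file: "1"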

Related

Celery worker doesn't pick up tasks from RabbitMQ after restarting all containers in docker-compose

I migrated from Redis to RabbitMQ.
I start my project with docker-compose, and after migrating from the Redis broker to RabbitMQ I have a problem with Celery tasks: Celery doesn't pick up the old tasks when I restart all the containers. It just prints its plain startup logs without fetching the old tasks from RabbitMQ:
[2022-10-27 21:44:39,263: INFO/MainProcess] Connected to amqp://admin:**@rabbitmq:5672//
[2022-10-27 21:44:39,293: INFO/MainProcess] mingle: searching for neighbors
[2022-10-27 21:44:40,349: INFO/MainProcess] mingle: all alone
[2022-10-27 21:44:40,414: INFO/MainProcess] celery@29441ac7ffed ready.
docker-compose.yaml
version: "2.2"
services:
nginx:
build:
context: ./nginx
dockerfile: Dockerfile
container_name: nginx
restart: always
ports:
- ${PUB_PORT}:80
volumes:
- static_volume:/var/www/static
- ./backend/mediafiles:/var/www/media
depends_on:
- django
django:
build:
context: ./backend
dockerfile: Dockerfile.prod
container_name: backend
restart: always
env_file:
- ./.env.prod
environment:
- IS_DOCKER=True
- DJANGO_SETTINGS_MODULE=core.settings.production
volumes:
- static_volume:/django/staticfiles
- ./backend/mediafiles:/django/mediafiles
- ./backend:/django # only for local development
depends_on:
postgres:
condition: service_healthy
aiogram:
build:
context: ./telegram_bot
dockerfile: Dockerfile
container_name: telegram_bot
restart: always # crash: not found token
command: ["python", "main.py"]
volumes:
- ./backend/mediafiles:/bot/mediafiles
env_file:
- ./.env.prod
environment:
- IS_DOCKER=True
depends_on:
- django
postgres:
image: postgres:13.0-alpine
container_name: project_db
restart: always
volumes:
- postgres_volume:/var/lib/postgresql/data
depends_on:
- redis
ports:
- 54321:5432
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
healthcheck:
test: ["CMD","pg_isready", "--username=${POSTGRES_USER}","-d", "{POSTGRES_DB}"]
redis:
build: ./redis
ports:
- ${REDIS_PORT}:6379
environment:
- REDIS_PASSWORD=${REDIS_PASSWORD}
volumes:
- ./redis/redis.conf/:/usr/local/etc/redis.conf
- ./redis/data:/usr/local/redis/data
- ./redis/redis.log:/usr/local/redis/redis.log
restart: always
container_name: redis
# celery worker
celery:
container_name: celery
restart: always
build:
context: ./backend
dockerfile: Dockerfile.celery.prod
command: celery -A core worker -l info
environment:
- DJANGO_SETTINGS_MODULE=core.settings.production
env_file:
- ./.env.prod
depends_on:
- django
- redis
- postgres
- rabbitmq
# message broker for celery
rabbitmq:
container_name: rabbitmq
restart: always
image: rabbitmq:3.9-alpine
volumes:
- "./rabbitmq-data:/var/lib/rabbitmq"
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=mypass
ports:
- "5672:5672"
- "15672:15672"
volumes:
postgres_volume:
static_volume:
redis_data:
Dockerfile.celery.prod
FROM python:3.8.5
WORKDIR /django
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Copy only requirements to cache them in docker layer
RUN pip install --upgrade pip
COPY ./requirements.txt /django/
RUN pip install -r requirements.txt
COPY . .
I tried to run the delayed tasks with the Celery worker after restarting all the containers in docker-compose, including RabbitMQ.
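One commonly suggested cause for losing queued messages across restarts (an assumption here, not something the logs above confirm) is that RabbitMQ stores its data under the node name, which includes the container hostname; Docker assigns a fresh random hostname on each run, so the broker starts with an empty store even though ./rabbitmq-data is mounted. Pinning the hostname keeps the node name, and thus the persisted queues, stable; a sketch of just the rabbitmq service:
  rabbitmq:
    container_name: rabbitmq
    hostname: rabbitmq   # stable node name (rabbit@rabbitmq) so mounted data survives restarts
    restart: always
    image: rabbitmq:3.9-alpine
    volumes:
      - "./rabbitmq-data:/var/lib/rabbitmq"
On the Celery side, queues and messages must also be durable and persistent (Celery's defaults) for tasks to survive a broker restart.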

Pimcore Unsupported config option for services

The docker-compose.yaml file below comes from the official Pimcore documentation, but when I run it I get an error.
ERROR: The Compose file './docker-compose.yaml' is invalid because:
Unsupported config option for services: 'nginx'
Unsupported config option for volumes: 'pimcore-database'
I checked all the indentation and it is correct. Can anyone help?
Here is the official installation guide:
https://github.com/pimcore/demo
services:
  redis:
    image: redis:alpine
    command: [ redis-server, --maxmemory 128mb, --maxmemory-policy volatile-lru, --save "" ]
  db:
    image: mariadb:10.7
    working_dir: /application
    command: [mysqld, --character-set-server=utf8mb4, --collation-server=utf8mb4_unicode_ci, --innodb-file-per-table=1]
    volumes:
      - pimcore-database:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=ROOT
      - MYSQL_DATABASE=pimcore
      - MYSQL_USER=pimcore
      - MYSQL_PASSWORD=pimcore
  nginx:
    image: nginx:stable-alpine
    ports:
      - "8080:80"
    volumes:
      - .:/var/www/html:ro
      - ./.docker/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php-fpm
      - php-fpm-debug
  php-fpm:
    user: '1000:1000' # set to your uid:gid
    image: pimcore/pimcore:PHP8.1-fpm
    environment:
      COMPOSER_HOME: /var/www/html
    depends_on:
      - db
    volumes:
      - .:/var/www/html
      - pimcore-tmp-storage:/tmp
  php-fpm-debug:
    user: '1000:1000' # set to your uid:gid
    image: pimcore/pimcore:PHP8.1-fpm-debug
    depends_on:
      - db
    volumes:
      - .:/var/www/html
      - pimcore-tmp-storage:/tmp
    environment:
      PHP_IDE_CONFIG: serverName=localhost
      COMPOSER_HOME: /var/www/html
  supervisord:
    user: '1000:1000' # set to your uid:gid
    image: pimcore/pimcore:PHP8.1-supervisord
    depends_on:
      - db
    volumes:
      - .:/var/www/html
      - ./.docker/supervisord.conf:/etc/supervisor/conf.d/pimcore.conf:ro
volumes:
  pimcore-database:
  pimcore-tmp-storage:
David Maze's comment was on point. You must also take care to specify a supported version: 3.8 is the latest version right now, but it may not work in your case. For example, in my case (the docker-compose.yaml for the Pimcore demo project) the version is 3.3.
The solution I found was adding the version tag to the YML file:
version: '3'
services:
  redis:
    image: redis:alpine
    command: [ redis-server, --maxmemory 128mb, --maxmemory-policy volatile-lru, --save "" ]
  ...

Docker Compose container fails to reach the application running in a different container on the same network

I have a docker-compose.yml file which contains frontend, backend, testing, postgres and pgadmin containers. All containers except testing are able to communicate with each other, but the testing container fails to reach the backend and frontend containers in docker-compose.
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.local
    ports:
      - '3000:3000'
    networks:
      - test-network
    environment:
      # For the frontend can be applied only during the build!
      # (while it's applied when TS is compiled)
      # You have to build manually without cache if one of those are changed at least for the prod mode.
      - REACT_APP_BACKEND_API=http://localhost:8000/api/v1
      - REACT_APP_GOOGLE_CLIENT_ID=1234567dfghjjnfd
      - CI=true
      - CHOKIDAR_USEPOLLING=true
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - test-network
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: "dev@dev.com"
      PGADMIN_DEFAULT_PASSWORD: dev
    volumes:
      - pgadmin:/root/.pgadmin
      - ./pgadmin-config/servers.json:/pgadmin4/servers.json
    ports:
      - "5050:80"
    networks:
      - test-network
    restart: unless-stopped
  backend:
    container_name: test-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.local
    ports:
      - '8000:80'
    volumes:
      - ./backend:/app
    command: >
      bash -c "alembic upgrade head
      && exec /start-reload.sh"
    networks:
      - test-network
    depends_on:
      - postgres
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/app/.secret/secret.json
      - APP_DB_CONNECTION_STRING=postgresql+psycopg2://dev:dev@postgres:5432/postgres
      - LOG_LEVEL=debug
      - SQLALCHEMY_ECHO=True
      - AUTH_ENABLED=True
      - CORS=*
      - GCP_ALLOWED_DOMAINS=*
  testing:
    container_name: test-testing
    build:
      context: ./testing
      dockerfile: Dockerfile
    volumes:
      - ./testing:/isp-app
    command: >
      bash -c "/wait
      && robot ."
    networks:
      - test-network
    depends_on:
      - backend
      - frontend
    environment:
      - WAIT_HOSTS= frontend:3000, backend:8000
      - WAIT_TIMEOUT= 3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300
volumes:
  postgres:
  pgadmin:
networks:
  test-network:
    driver: bridge
All the containers are attached to test-network. When the testing container tries to connect to frontend:3000 or backend:8000, it throws "Host [ backend:8000] not yet available".
How can I fix it?
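Two details in the file above are worth checking. First, the error prints the host as [ backend:8000] with a leading space, which suggests the whitespace in the WAIT_HOSTS value is being kept as part of the host name. Second, backend publishes '8000:80', so inside the Compose network the service listens on port 80; the 8000 side of the mapping only exists on the Docker host. A sketch of a corrected environment block for the testing service (assuming /wait is the docker-compose-wait tool):
  testing:
    environment:
      - WAIT_HOSTS=frontend:3000,backend:80   # no spaces; use the container-side port
      - WAIT_TIMEOUT=3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300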

Docker Volume is Empty after Mounting

I'm trying to set up a Docker Compose file for my application(s), including a service based on the nginx image. I want to be able to simply access the config from my host. But when I mount the volume with
  volumes:
    - ./nginxConf:/etc/nginx
the volume is empty and the container crashes.
Full docker-compose.yml
version: '3'
services:
frontend:
image: myFrontend
restart: always
environment:
- API_URL=http://localhost:3000/api/v1
ports:
- "80:80"
- "443:443"
depends_on:
- "api"
volumes:
- ./nginxConf:/etc/nginx
api:
image: myApi
restart: always
command: bash -c "npm run build && npm run start"
ports:
- "3000:3000"
links:
- mongo
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
ports:
- "27017:27017"

Cannot launch django with celery in Docker Compose v3

Here is my docker-compose.yml:
version: '3.4'
services:
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
  web:
    restart: always
    image: celery-with-docker-compose:latest
    build: .
    command: bash -c "python /code/manage.py collectstatic --noinput && python /code/manage.py migrate && /code/run_gunicorn.sh"
    volumes:
      - /static:/data/web/static
      - /media:/data/web/media
      - .:/code
    env_file:
      - ./.env
    depends_on:
      - db
    volumes:
      - ./app:/deploy/app
  worker:
    image: celery-with-docker-compose:latest
    restart: always
    build:
      context: .
    command: bash -c "pip install -r /code/requirements.txt && /code/run_celery.sh"
    volumes:
      - .:/code
    env_file:
      - ./.env
    depends_on:
      - redis
      - web
  db:
    restart: always
    image: postgres
    env_file:
      - ./.env
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    restart: always
    image: redis:latest
    privileged: true
    command: bash -c "sysctl vm.overcommit_memory=1 && redis-server"
    ports:
      - "6379:6379"
volumes:
  pgdata:
When I run docker stack deploy -c docker-compose.yml cryptex I get:
Non-string key at top level: true
And docker-compose -f docker-compose.yml config gives me:
ERROR: In file './docker-compose.yml', the service name True must be a quoted string, i.e. 'True'.
I'm using the latest versions of Docker and Compose. I'm also new to Compose v3; I started using it to get access to the docker stack command. If you see any mistakes or redundancies in the config file, please let me know. Thanks.
You have to validate your docker-compose file; most likely it contains a bad value somewhere. Validating your file is as simple as docker-compose -f docker-compose.yml config. As always, you can omit the -f docker-compose.yml part when running it in the same folder as the file itself.
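One concrete issue is visible in the file itself: the web service declares volumes twice, and a YAML mapping cannot repeat a key, so one list silently overrides the other depending on the parser. Whether this is the root cause of the boolean-key error is an assumption, but merging the two lists into one is a safe first fix:
  web:
    volumes:
      # single merged list replacing the two duplicate keys
      - /static:/data/web/static
      - /media:/data/web/media
      - .:/code
      - ./app:/deploy/app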
