How do I create a rabbitmq.conf volume in a Docker container?

I've been working on a Spring Boot project that uses RabbitMQ in development to publish domain events. To let the people who download my code use the same tools I used, I decided to use Docker Compose.
The problem I have is with my rabbitmq.conf file: when I run docker compose up, Docker Desktop shows the following:
touch: /etc/rabbitmq/rabbitmq.conf: I/O error
touch: /etc/rabbitmq/rabbitmq.conf: I/O error
touch: /etc/rabbitmq/rabbitmq.conf: I/O error
touch: /etc/rabbitmq/rabbitmq.conf: I/O error
touch: /etc/rabbitmq/rabbitmq.conf: I/O error
touch: /etc/rabbitmq/rabbitmq.conf: I/O error
touch: /etc/rabbitmq/rabbitmq.conf: I/O error
Here's my docker-compose.yml:
version: "2.7"
services:
database:
image: mariadb/server
container_name: tour_system_db
restart: always
ports:
- "${DB_PORT}:3306"
environment:
- MARIADB_ROOT_PASSWORD=${DB_PASSWORD}
- MARIADB_PASSWORD=${DB_PASSWORD}
- MARIADB_USER=${DB_USERNAME}
- MARIADB_DATABASE=${DB_DATABASE}
volumes:
- ./volumes/mariadb:/var/lib/mysql
queue_management:
image: rabbitmq:3.8-management-alpine
container_name: tour_system_mq
restart: always
ports:
- "${RABBITMQ_PORT}:5672"
- "${RABBITMQ_GUI_PORT}:15672"
volumes:
- ./volumes/rabbitmq:/var/lib/rabbitmq/mnesia
- ./volumes/rabbitmq/definitions.json:/etc/rabbitmq/definitions.json
- ./volumes/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
And this is what I have inside my local rabbitmq.conf:
loopback_users.guest = false
rabbitmq_management.load_definitions = "/etc/rabbitmq/definitions.json"
Am I doing something wrong when I create my volumes, or is this an error from Docker?
Thanks.
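Not an authoritative answer, but one thing worth ruling out: with single-file bind mounts, Docker creates any host path that does not already exist as a directory, so if ./volumes/rabbitmq/rabbitmq.conf or definitions.json was missing the first time the stack came up, the container ends up with a directory where the entrypoint expects a writable file, and repeated touch errors like the ones above are a typical symptom. A minimal sketch of the same volumes section with that assumption spelled out:
volumes:
  # make sure these two exist as regular files on the host *before* docker compose up;
  # if Docker already created directories with these names, delete them and recreate the files
  - ./volumes/rabbitmq:/var/lib/rabbitmq/mnesia
  - ./volumes/rabbitmq/definitions.json:/etc/rabbitmq/definitions.json
  - ./volumes/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf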

Related

Disable IPv6 in docker compose

I have a docker-compose project that I am trying to run on my server, which does not have IPv6 enabled. Whenever I try to run the container, I get the following error message:
nginx: [emerg] socket() [::]:80 failed (97: Address family not supported by protocol)
I figured that is because IPv6 is not enabled on my server (it is managed by a third party, so I can't touch that), so I tried disabling IPv6 for docker-compose, so far without any luck.
I tried adding
sysctls:
  net.ipv6.conf.all.disable_ipv6: 1
to my config file, but then I received the following error:
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: open /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory: unknown
How can I disable IPv6 in docker-compose, either for this particular container or system-wide to not have issues like this?
This is my current config
container_name: cont-nginx
networks:
  - cont
image: nginx:latest
depends_on:
  - cont-app
restart: always
ports:
  - "880:880"
  - "4443:4443"
sysctls:
  - net.ipv6.conf.all.disable_ipv6=1
volumes:
  - ./nginx.conf:/etc/nginx/nginx.conf
networks:
  cont:
    driver: bridge
Disabling IPv6 for the Docker network should do the job:
networks:
  cont:
    driver: bridge
    enable_ipv6: false
Also, maybe you should consider removing this from your nginx conf
listen [::]:80;
because [::] is for IPv6.
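For reference, a rough sketch of how those two changes could fit into the compose file from the question (the sysctls entry removed, enable_ipv6: false on the network, and the nginx server block keeping only listen 80;). The service name cont-nginx below is a guess, since the original snippet does not show the service key:
services:
  cont-nginx:
    container_name: cont-nginx
    image: nginx:latest
    depends_on:
      - cont-app
    restart: always
    networks:
      - cont
    ports:
      - "880:880"
      - "4443:4443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
networks:
  cont:
    driver: bridge
    enable_ipv6: false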

Docker-Compose Failed to register symlink

I'm trying to run a docker-compose file, but I always get the error message:
failed to register layer: symlink
../118db2348300daaa2443c22d8bd790d2985a25b5e42f49404e9f3b4333e776dd/diff/mnt/usb1/docker/fuse-overlayfs/l/NLEONPNG5QHTMTGHCWRQLIQ2DG: operation not permitted
It has something to do with the fuse-overlayfs storage driver, which I use because I changed the data-root to my external HDD, which is formatted as vfat.
Any hints?
Edit 1: The docker-compose file in question:
prometheus:
  image: ajeetraina/prometheus-armv7
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
  command:
    - "-config.file=/etc/prometheus/prometheus.yml"
  ports:
    - "9090:9090"
grafana:
  image: fg2it/grafana-armhf:v3.1.1
  ports:
    - "3000:3000"
The FAT file system does not support symlinks; reformatting the HDD to ext4 solved the problem.
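In case it helps anyone, a rough sketch of the steps involved, assuming the external HDD is /dev/sda1 and stays mounted at /mnt/usb1 as in the error above (both are assumptions to adapt to your setup, and reformatting erases everything on the partition):
# back up the drive first - mkfs erases it
sudo umount /mnt/usb1
sudo mkfs.ext4 /dev/sda1
sudo mount /dev/sda1 /mnt/usb1
# keep Docker's data-root pointing at the drive, e.g. in /etc/docker/daemon.json:
#   { "data-root": "/mnt/usb1/docker" }
sudo systemctl restart docker
With ext4 underneath, the fuse-overlayfs workaround should no longer be needed, since the default overlay2 driver can be used directly.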

Azure Blob Storage error in Django - Failed to establish a new connection: [Errno 111]

I am setting up local Azure Blob Storage using a Docker container and Docker Compose.
However, when I start creating blob containers and uploading files, it throws the error below.
azure.common.AzureException: HTTPConnectionPool(host='127.0.0.1', port=10000): Max retries exceeded with url: /devstoreaccount1/quickstartblobs?restype=container (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1068d0f748>: Failed to establish a new connection: [Errno 111] Connection refused',))
Here is my docker-compose:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- DEBUG=FALSE
- AZURE_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- 8000:8000
- 5678:5678
depends_on:
- db
azurite:
image: mcr.microsoft.com/azure-storage/azurite
ports:
- "127.0.0.1:10000:10000"
Requirements.txt
djangorestframework==3.11.2
Django==3.1.8
Pygments==2.7.4
Markdown==3.2.1
coreapi==2.3.3
psycopg2-binary==2.8.4
dj-database-url==0.5.0
gunicorn==20.0.4
whitenoise==5.0.1
PyYAML==5.4
azure-storage-blob==2.1.0
ptvsd==4.3.2
azure-common==1.1.23
azure-storage-common==2.1.0
requests==2.25.1
six==1.11.0
urllib3==1.26.3
Code:
from azure.storage.blob import BlockBlobService  # azure-storage-blob 2.x

blob_service_client = BlockBlobService(
    account_name='devstoreaccount1',
    account_key='Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==',
    is_emulated=True)

# Create a container called 'quickstartblobs'.
container_name = 'quickstartblobs'
blob_service_client.create_container(container_name)
You can remove the ports section for the azurite service in your compose file, and in your application provide a connection string whose blob endpoint (as described here: https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azurite#connection-strings) is set to BlobEndpoint=http://azurite:10000.
When you use Docker's local bridge network (created automatically for services deployed with Compose), the container name, if provided explicitly, or otherwise the service name can be used to reach the service.
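A rough sketch of what that could look like in the compose file above, moving the connection string onto the web service (presumably the one running the Django code; that placement is an assumption) and pointing it at the azurite service name with the well-known devstoreaccount1 key:
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  environment:
    - AZURE_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;
  depends_on:
    - db
    - azurite
On the Python side, BlockBlobService would then be created from that connection string (the 2.x SDK accepts a connection_string argument) rather than with is_emulated=True, which points at 127.0.0.1.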

Docker Compose up command times out frequently

I am using Docker for Windows and have a set of images that are already built on my Docker host. When I try to use the docker-compose up -d command to start my environment, I frequently face i/o timeout errors for different containers. Usually retrying helps, but I am trying to automate this and can't re-run the whole pipeline every time.
Versions:
Docker version 18.09.0, build 4d60db4
docker-compose version 1.23.1, build b02f1306
Command line logs:
C:\workspace\AK_DOCKER_RISK\docker-compose>docker-compose up -d
Creating risk-svc ...
Creating risk-svc ... done
Creating risk-prc ...
Creating risk-web ...
Creating risk-web ... error
ERROR: for risk-web b'i/o timeout'
Creating risk-prc ... done
ERROR: for web b'i/o timeout'
Encountered errors while bringing up the project.
The error occurs randomly, sometimes for svc, sometimes for web or prc.
Can someone please explain why this error is occurring and more importantly - how to solve this issue?
This is my docker-compose.yml file:
version: '3'
services:
  web:
    image: iis-core-web:1910.252
    build:
      context: .
      dockerfile: ./web/Dockerfile
    container_name: risk-web
    ports:
      - "9111:8080"
    tty: true
    links:
      - svc
    volumes:
      - ../RiskLogs/web:c:/RiskLogs
  svc:
    image: iis-core-svc:1910.252
    build:
      context: .
      dockerfile: ./svc/Dockerfile
    container_name: risk-svc
    ports:
      - "9112:8080"
    tty: true
    volumes:
      - ../RiskLogs/svc:c:/RiskLogs
  prc:
    image: iis-core-prc:1910.252
    build:
      context: .
      dockerfile: ./prc/Dockerfile
    container_name: risk-prc
    tty: true
    links:
      - svc
    volumes:
      - ../RiskLogs/prc:c:/RiskLogs
# prevent creation of new network and use existing nat
networks:
  default:
    external:
      name: nat
links is a legacy feature. Try user-defined networks instead, as per the docs:
https://docs.docker.com/compose/compose-file/#links
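A rough sketch of what that could look like for the compose file above, with the links entries dropped and the services attached to a dedicated network (risk-net is a made-up name, and driver: nat is an assumption based on these being Windows containers; unchanged keys such as build, tty and volumes are omitted for brevity). Services on a shared user-defined network can reach each other by service name, e.g. svc:
version: '3'
services:
  web:
    image: iis-core-web:1910.252
    container_name: risk-web
    ports:
      - "9111:8080"
    networks:
      - risk-net
  svc:
    image: iis-core-svc:1910.252
    container_name: risk-svc
    ports:
      - "9112:8080"
    networks:
      - risk-net
  prc:
    image: iis-core-prc:1910.252
    container_name: risk-prc
    networks:
      - risk-net
networks:
  risk-net:
    driver: nat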

Docker unable to mount nginx

I'm having issues setting up Docker for the first time on Windows using the Docker Toolbox. Everything works except nginx at the moment.
Error message:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/c/wamp64/www/cathaypacific_career/ops/nginx/default.conf\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/aufs/mnt/ff9b27a89b26b0e9091264d04d3a475f18469db3cf3be473c005e2d4c7d4b5ef\\\" at \\\"/mnt/sda1/var/lib/docker/aufs/mnt/ff9b27a89b26b0e9091264d04d3a475f18469db3cf3be473c005e2d4c7d4b5ef/etc/nginx/conf.d/default.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
Docker-compose config:
version: '3'
services:
  web:
    container_name: web
    image: nginx:1.13.3-alpine
    networks:
      - web_tier
    ports:
      - 80:80
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ../:/code
      - /code/ops/
    depends_on:
      - app
  app:
    container_name: app
    build: ./php/
    networks:
      - web_tier
      - app_tier
    expose:
      - '9000'
    volumes:
      - ./php/settings.conf:/usr/local/etc/php-fpm.d/settings.conf
      - ../:/code
      - /code/ops/
    working_dir: /code
    entrypoint: "/bin/sh -c"
    command:
      - "php-fpm"
    env_file: ../.env
    depends_on:
      - db
  db:
    container_name: db
    image: mysql:5.6.39
    networks:
      - app_tier
      - db_tier
    expose:
      - '3306'
    ports:
      - 3306:3306
    volumes:
      - db_data:/var/lib/mysql
      - ./db:/etc/mysql/conf.d
    restart: always
    env_file: ../.env
networks:
  web_tier:
    driver: bridge
  app_tier:
    driver: bridge
  db_tier:
    driver: bridge
volumes:
  db_data:
The issue seems to be that default.conf is not accessible to nginx, or that Docker thinks it's a folder and not a file.
I checked the issue online and people suggest mounting the C: folder, so I tried to mount it in Oracle VirtualBox and re-ran the docker-compose up command, but it didn't solve the issue.
Any idea?
I solved the same problem by sharing the project folder with the Oracle VirtualBox VM default.
Share your project folder and restart your VM.
You can even do it from the command line:
docker-machine stop default && docker-machine start default
Now, you need to use the share name (project) instead of . in your compose file (docker-compose.yml).
For your case,
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
should be changed to
- /sharename/nginx/default.conf:/etc/nginx/conf.d/default.conf
Now try with docker-compose up.
It worked for me.
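For completeness, a rough sketch of how the shared folder can be set up from the host command line, assuming the VM is the default docker-machine VM and the project lives under C:\wamp64\www as in the error message (the share name project is arbitrary):
docker-machine stop default
VBoxManage sharedfolder add default --name "project" --hostpath "C:\wamp64\www" --automount
docker-machine start default
After that, the bind mounts in docker-compose.yml reference paths under the share name, as shown in the answer above.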
