docker-compose not loading definitions.json for RabbitMQ - docker

I am experimenting with Docker to create a container for RabbitMQ on my Windows 11 laptop. Doing the basics, I can get it to run without error. From there, I tried to expand it by adding the definitions.json to the compose YAML file. For the definitions.json, I simply downloaded the definitions straight from the management UI.
My docker-compose.yml looks like this:
version: "3.8"
services:
rabbitmq:
image: rabbitmq:3-management
container_name: 'rabbitmq'
ports:
- 5672:5672
- 15672:15672
volumes:
- ./definitions.json:/etc/rabbitmq/definitions.json
- ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
- ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
networks:
- rabbitmq_go_net
networks:
rabbitmq_go_net:
driver: bridge
Now, when I run the compose file, it runs without any error at all, but none of the queues are visible in the UI. I have tried various things, but it appears as though the definitions.json is being ignored. As a further check, I reloaded the definitions through the UI and the queues reappeared.
So, how do you configure the docker-compose file to load the definitions.json when creating a container with docker compose up?

Actually, the problem was the location where the definitions.json is meant to be stored. Some websites I have read place it in the rabbitmq folder. However, I followed this link https://thomasdecaux.medium.com/deploy-rabbitmq-with-docker-static-configuration-23ad39cdbf39 and it worked. The other point to make is that there must be a rabbitmq.conf file that tells RabbitMQ to load the definitions.json - this is critical.
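For illustration, here is the shape of the two pieces that make this work; load_definitions is RabbitMQ's standard mechanism for this, while the exact file names and mount paths follow the linked article:

rabbitmq.conf:

# import the definitions exported from the management UI at boot
load_definitions = /etc/rabbitmq/definitions.json

and the corresponding mounts in docker-compose.yml:

    volumes:
      - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
      - ./definitions.json:/etc/rabbitmq/definitions.json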

Related

symfony docker-compose output log to host

I am setting up a Symfony 6/Next.js app with docker-compose and would like the Monolog logs that Symfony writes to /var/log/ to show up in a host file within the app code, so that I can easily review them with VSCode instead of the awkward process of opening a shell.
Note that the logs are correctly written inside the container, which is fine.
So I have attempted to create a bind mount, but either it just does not work or it throws an error with docker-compose up.
Something like:
services:
  php:
    build:
      context: ./api
      target: api_platform_php
    depends_on:
      - database
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
      - ./api/var/log:/srv/api/var/log # ***HERE***
In this specific instance it will error: setfacl: var/log: Not supported
I did find a number of related questions on SO, including this one, but they either concern overall docker logs or are not directly applicable.
Not what you asked, but it's common practice to pipe the logs to the container output instead:
Monolog config:
path: "php://stderr"
then you can use
docker-compose logs -f --tail="all" php
from outside the container
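For reference, a minimal sketch of where that setting lives, assuming the default Symfony config/packages/monolog.yaml layout:

# config/packages/monolog.yaml
monolog:
    handlers:
        main:
            type: stream
            path: "php://stderr"
            level: debug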

Grafana on Docker

I am using docker to run prometheus, grafana and node exporter. I am trying to use named volumes and I am having some issues with that. My docker-compose code is:
version: "3.7"
volumes:
grafana_ini:
prometheus_data:
grafana_data:
dashboards_data:
services:
grafana:
build: ./grafana
volumes:
- grafana_ini:/etc/grafana/grafana.ini
- grafana_data:/etc/grafana/provisioning/datasources/datasource.yml
- dashboards_data:/etc/grafana/provisioning/dashboards
- ./dashboards/linux_dashboard.json:/etc/grafana/provisioning/dashboards/linux_dashboard.json
ports:
- 3000:3000
links:
- prometheus
prometheus:
build: ./prometheus
volumes:
- prometheus_data:/etc/prometheus/prometheus.yml
ports:
- 9090:9090
node-exporter:
image: prom/node-exporter:latest
container_name: node_exporter
restart: unless-stopped
expose:
- 9100
and my Dockerfile for Grafana is:
FROM grafana/grafana:latest
COPY ./Ini/grafana.ini /etc/grafana/grafana.ini
COPY datasource.yml /etc/grafana/provisioning/datasources/datasource.yml
COPY ./dashboards/dashboard.yml /etc/grafana/provisioning/dashboards
COPY ./dashboards/server/linux_dashboard.json /etc/grafana/provisioning/dashboards
COPY ./dashboards/server/windows_dashboard.json /etc/grafana/provisioning/dashboards
EXPOSE 3000:3000
and I am getting this error while building it
ERROR: for 2022_grafana_1 Cannot create container for service grafana: source /var/lib/docker/overlay2/4ac5b487fd7fd52491b250c4afaa433801420cd907ac4a70ddb4589fdb99368b/merged/etc/grafana/grafana.ini is not directory
ERROR: for grafana Cannot create container for service grafana: source /var/lib/docker/overlay2/4ac5b487fd7fd52491b250c4afaa433801420cd907ac4a70ddb4589fdb99368b/merged/etc/grafana/grafana.ini is not directory
Can anybody please help me?
It looks like there are some problems with the volume configuration in your Grafana container:
First, I think this was simply a typo in your question:
- grafana_ini:/etc/grafana/grafana.inianticipated location in container
I suspect that you were actually intending this:
- grafana_ini:/etc/grafana/grafana.ini
Which doesn't make any sense: grafana.ini is a file, but a volume is
a directory. Docker won't allow you to mount a directory on top of a
file, hence the error:
ERROR: .../etc/grafana/grafana.ini is not directory
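(A bind mount of a host file onto a container file does work, so if the goal was just to replace grafana.ini, something like the line below would be fine; it is specifically a named volume mounted at a file path that fails:)

- ./Ini/grafana.ini:/etc/grafana/grafana.ini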
You have the same problem with the grafana_data volume, which you're
attempting to mount on top of datasource.yml:
- grafana_data:/etc/grafana/provisioning/datasources/datasource.yml
I think you may be approaching this configuration in the wrong way;
you may want to read through these documents:
https://grafana.com/docs/grafana/latest/installation/docker/
https://grafana.com/docs/grafana/latest/administration/configure-docker/
https://grafana.com/docs/grafana/latest/administration/provisioning/
It is possible to configure Grafana (and Prometheus!) using only bind
mounts and environment variables (this includes installing plugins,
data sources, and dashboards), so you don't need to build your own
custom images.
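For example, a minimal sketch of that approach; the GF_* environment variable pattern and the container paths are standard Grafana conventions, while the specific values here are illustrative:

services:
  grafana:
    image: grafana/grafana:latest
    environment:
      # any grafana.ini option can be set as GF_<SECTION>_<KEY>
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      # datasource/dashboard provisioning files, editable on the host
      - ./grafana/provisioning:/etc/grafana/provisioning
      # named volume for Grafana's internal database
      - grafana_data:/var/lib/grafana
    ports:
      - 3000:3000
volumes:
  grafana_data: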
Unrelated to this particular problem, there are some other things in
your docker-compose.yml that are worth changing. You should no
longer be using the links directive...
links:
  - prometheus
...because Docker maintains DNS for you automatically; your containers
can refer to each other by name with no additional configuration.
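For example, a provisioned Grafana data source can reach Prometheus simply by its compose service name (a sketch of a provisioning file; the file name is an assumption):

# provisioning/datasources/datasource.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090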

Docker-Compose binding volumes, file sharing issue

I am new to Docker and have a problem that I hope you could help with.
I have defined multiple services (HTTPSERV, IMED, etc...) in my docker-compose file; each service has Python code inside and a Dockerfile for running it. The Dockerfile also copies the required files into a host path defined in docker-compose. HTTPSERV and IMED must share a text file and expose it to an external user sending a GET request to HTTPSERV.
In docker-compose I have defined a local host directory and bound it to a named volume. The services and Dockerfiles are meant to share each service's files and run.
As soon as I run docker-compose, the first service copies its files into the path directory ("src") and changes the permissions of the "src" folder, preventing the other services from copying their files. This causes the next services to fail to find the appropriate files, and the whole orchestration process fails.
version: "3.9"
networks:
default:
ipam:
config:
- subnet: 172.28.0.2/20
services:
httpserv:
user: root
container_name: httpserver
build: ./HTTPSERV
volumes:
- myapp:/httpApp:rw
networks:
default:
ipv4_address: 172.28.0.5
ports:
- "8080:3000"
rabitQ:
user: root
container_name: rabitQ
image: rabbitmq:3.8-management
networks:
default:
ipv4_address: 172.28.0.4
ports:
- "9000:15672"
imed:
user: root
container_name: IMED-Serv
build: ./IMED
volumes:
- myapp:/imed:rw
networks:
- default
# restart: on-failure
orig:
user: root
container_name: ORIG-Serv
build: ./ORIG
volumes:
- myapp:/orig:rw
networks:
- default
# restart: on-failure
obse:
container_name: OBSE-Serv
build: ./OBSE
volumes:
- myapp:/obse:rw
networks:
- default
# restart: on-failure
depends_on:
- "httpserv"
links:
- httpserv
volumes:
myapp:
driver: local
driver_opts:
type: none
o: bind
device: /home/dockerfiles/hj/a3/src
The content of the Dockerfile is similar for most of the services and is as follows:
FROM python:3.8-slim-buster
WORKDIR /imed
COPY . .
RUN pip install --no-cache-dir -r imed-requirements.txt
RUN chmod 777 ./imed.sh
CMD ["./imed.sh"]
The code has root access, and the UserID and GroupID are set.
I also tried anonymous volumes, but the same issue happened.
In Docker it's often better to avoid "sharing files" as a first-class concept. Imagine running this in a clustered system like Kubernetes; if you have three copies of each service, and they're running in a cloud of a hundred systems, "sharing files" suddenly becomes difficult.
Given the infrastructure you've shown here, you have a couple of straightforward options:
Add an HTTP endpoint to update the file. You'd need to protect this endpoint in some way, since you don't want external callers accessing it; maybe filter it in an Nginx reverse proxy, or use a second HTTP server in the same process. The service that has the updated content would then call something like
r = requests.post('http://webserv/internal/file.txt', data=contents)
r.raise_for_status()
Add an HTTP endpoint to the service that owns the file. When the Web server service starts up, and periodically after that, it makes a request
r = requests.get('http://imed/file.txt')
You already have RabbitMQ in this stack; add a RabbitMQ consumer in a separate thread, and post the updated file content to a RabbitMQ topic.
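For the RabbitMQ option, a sketch of the consumer side, run in a background thread; the pika client and the "file-updates" queue name are illustrative assumptions, not part of the original stack:

import threading
import pika

def consume_file_updates():
    # "rabitQ" is the compose service name of the broker in this stack
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabitQ"))
    channel = connection.channel()
    channel.queue_declare(queue="file-updates")

    def on_message(ch, method, properties, body):
        # persist the latest content wherever the web server reads it from
        with open("file.txt", "wb") as f:
            f.write(body)

    channel.basic_consume(queue="file-updates",
                          on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()

threading.Thread(target=consume_file_updates, daemon=True).start()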
There's some potential trouble with the "push" models if the Web server service restarts, since it won't be functional until the other service sends it the current version of the file.
If you really do want to do this with a shared filesystem, I'd recommend creating a dedicated directory and dedicated volume to do this. Do not mount a volume over your entire application code. (It will prevent docker build from updating the application, and in the setup you've shown, you're mounting the same application-code volume over every service, so you're running four copies of the same service instead of four different services.)
Adding in the shared volume, but removing a number of other unnecessary options, I'd rewrite the docker-compose.yml file as:
version: '3.9'
services:
  httpserv:
    build: ./HTTPSERV
    ports:
      - "8080:3000"
    volumes:
      - shared:/shared
  rabitQ:
    image: rabbitmq:3.8-management
    hostname: rabitQ # (RabbitMQ specifically needs this setting)
    ports:
      - "9000:15672"
  imed:
    build: ./IMED
    volumes:
      - shared:/shared
  orig:
    build: ./ORIG
  obse:
    build: ./OBSE
    depends_on:
      - httpserv
volumes:
  shared:
I'd probably have the producing service unconditionally copy the file into the volume on startup, and have the consuming service block on it being present. (Don't depend on Docker to initialize the volume; it doesn't work on Docker bind mounts, or on Kubernetes, or if the underlying image has changed.) So long as the files are world-readable the two services' user IDs don't need to match.
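A sketch of that startup handshake, assuming the shared volume is mounted at /shared and the file is named file.txt:

# producer side (e.g. imed's startup): unconditionally refresh the shared copy
import shutil
shutil.copy("file.txt", "/shared/file.txt")

# consumer side (e.g. httpserv's startup): block until the file appears
import os
import time
while not os.path.exists("/shared/file.txt"):
    time.sleep(1)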

How can I store data with Docker Compose containers?

I have this docker-compose.yml, and I have a Postgres database and Grafana running over it to make queries on data.
version: "3"
services:
db:
image: postgres
container_name: db
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my_secret_password
grafana:
image: grafana/grafana
container_name: grafana
depends_on:
- db
ports:
- "3000:3000"
I start this compose with the command docker-compose up, but then, if I don't want to lose any data, I must run docker-compose stop instead of docker-compose down.
I also read about docker commit, but "the commit operation will not include any data contained in volumes mounted inside the container", so I guess it's no use for my needs.
What's the proper way to store the created volumes and reuse them across up/down commands, even when recreating the containers? Must I use some sort of backup method provided by each image (so, for example, a DB export for Postgres, and some other type of export for Grafana), or is there a way to do this inside docker-compose.yml?
EDIT:
I also read about volumes, but is there a standard way to store everything?
In the link provided by @DannyB, setting volumes to ./postgres-data:/var/lib/postgresql instead of ./postgres-data:/var/lib/postgresql/data caused the container not to store the actual data.
My question is: must every image follow a particular pattern like the one above? Is the path to the data that needs persisting documented in every Docker image's README? Or is there something like:
volumes:
  - ./my_image_root:/
Docker provides volumes as the way to persist data between container invocations and to share data between containers.
They are quite simple to declare and use in compose files:
volumes:
  postgres:
  grafana:

services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
    volumes:
      - postgres:/var/lib/postgresql/data
  grafana:
    image: grafana/grafana
    depends_on:
      - db
    volumes:
      - grafana:/var/lib/grafana
    ports:
      - "3000:3000"
Optionally, you can also set a local directory as your container volume,
with the added convenience of having the files easily accessible, not only from inside the container. This is especially helpful for mounting specific config files to their location in the container: you can edit the file locally like any other file, then restart the container with the updated configuration (certificates and other similar files also make good use of this option). You do that like so:
volumes:
  - /home/myusername/postgres_data/:/var/lib/postgresql/data/
PS. I have omitted the container_name and version directives from this compose.yml, because (as of Docker 20.10) the Docker Compose spec determines the version automatically, and Docker Compose exposes enough functionality that accessing the containers directly by short names isn't usually necessary.

Docker for symfony and nginx without mounting the source code

We have a develop and a production system that use symfony 5 + nginx + MySQL services running in a docker environment.
At the moment the nginx webserver runs in the same container as the symfony service because of this issue:
On our develop environment we are able to mount the symfony sourcecode into the docker container (by a docker-compose file).
In our production environment we need to deliver containers that contains all the source code inside because we must not put our source code on the server. So there is no folder on the server from which we can mount our source code.
Unfortunately nginx needs the source code as well to make its routing decisions, so we decided to put the symfony and the nginx services together in one container.
Now we want to clean this up and get a better solution by running every service in its own container:
version: '3.5'
services:
  php:
    image: docker_sandbox
    build: ../.
    ...
    volumes:
      - docker_sandbox_src:/var/www/docker_sandbox # <== VOLUME
    networks:
      - docker_sandbox_net
    ...
  nginx:
    image: nginx:1.19.0-alpine
    ...
    volumes:
      - ./nginx/server.conf:/etc/nginx/conf.d/default.conf:ro
      - docker_sandbox_src:/var/www/docker_sandbox # <== VOLUME
    ...
    networks:
      - docker_sandbox_net
    depends_on:
      - php
  mysql:
    ...
volumes:
  docker_sandbox_src:
networks:
  docker_sandbox_net:
    driver: bridge
One possible solution is to use a named volume that connects the nginx service with the symfony service. The problem with that is that, on an update of our symfony image, the volume keeps the old content, so there is no update until we manually delete the volume.
Is there a better way to handle this issue? Maybe a volume that is able to overwrite its content when a new image is deployed, or an nginx config that does not require the symfony source code in its own container.
