How to configure rabbitmq.config inside Docker containers?

I'm using the official RabbitMQ Docker image (https://hub.docker.com/_/rabbitmq/)
I've tried editing the rabbitmq.config file inside the container after running
docker exec -it <container-id> /bin/bash
However, this seems to have no effect on the rabbitmq server running in the container. Restarting the container obviously didn't help either since Docker starts a completely new instance.
So I assumed that the only way to configure rabbitmq.config for a Docker container was to set it up before the container starts running, which I was able to partly do using the image's supported environment variables.
Unfortunately, not all configuration options are supported by environment variables. For instance, I want to set {auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']} in rabbitmq.config.
I then found the RABBITMQ_CONFIG_FILE environment variable, which should allow me to point to the file I want to use as my config file. However, I've tried the following with no luck:
docker service create --name rabbitmq --network rabbitnet \
-e RABBITMQ_ERLANG_COOKIE='mycookie' --hostname="{{.Service.Name}}{{.Task.Slot}}" \
--mount type=bind,source=/root/mounted,destination=/root \
-e RABBITMQ_CONFIG_FILE=/root/rabbitmq.config rabbitmq
The default rabbitmq.config file containing:
[ { rabbit, [ { loopback_users, [ ] } ] } ]
is what's in the container once it starts.
What's the best way to configure rabbitmq.config inside Docker containers?

The config file lives at /etc/rabbitmq/rabbitmq.config, so if you mount your own config file over it with something like this (I'm using docker-compose here to set up the image):
volumes:
- ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
that should do it.
In case you run into the issue that the configuration file gets created as a directory, try absolute paths.
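The same idea as a minimal sketch with plain docker run (the paths, container name, and published ports here are examples, not from the original post):
# Mount a custom rabbitmq.config over the default location, read-only
docker run -d --name rabbitmq \
  -v "$PWD/conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config:ro" \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3-management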

I'm able to run RabbitMQ with a mounted config using the following bash script:
#RabbitMQ props
env=dev
rabbitmq_name=dev_rabbitmq
rabbitmq_port=5672
#RabbitMQ container
if [ "$(docker ps -aq -f name=${rabbitmq_name})" ]; then
  echo Cleaning up the existing ${rabbitmq_name} container
  docker stop ${rabbitmq_name} && docker rm ${rabbitmq_name}
fi
echo Create and start new ${rabbitmq_name} container
# Publishes the management UI (container port 15672) on host port ${rabbitmq_port}
docker run --name ${rabbitmq_name} -d -p ${rabbitmq_port}:15672 \
  -v $PWD/rabbitmq/${env}/data:/var/lib/rabbitmq:rw \
  -v $PWD/rabbitmq/${env}/definitions.json:/opt/definitions.json:ro \
  -v $PWD/rabbitmq/${env}/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:ro \
  rabbitmq:3-management
I also have the following config files in my rabbitmq/dev dir
definitions.json
{
"rabbit_version": "3.7.3",
"users": [{
"name": "welib",
"password_hash": "su55YoHBYdenGuMVUvMERIyUAqJoBKeknxYsGcixXf/C4rMp",
"hashing_algorithm": "rabbit_password_hashing_sha256",
"tags": ""
}, {
"name": "admin",
"password_hash": "x5RW/n1lq35QfY7jbJaUI+lgJsZp2Ioh6P8CGkPgW3sM2/86",
"hashing_algorithm": "rabbit_password_hashing_sha256",
"tags": "administrator"
}],
"vhosts": [{
"name": "/"
}, {
"name": "dev"
}],
"permissions": [{
"user": "welib",
"vhost": "dev",
"configure": ".*",
"write": ".*",
"read": ".*"
}, {
"user": "admin",
"vhost": "/",
"configure": ".*",
"write": ".*",
"read": ".*"
}],
"topic_permissions": [],
"parameters": [],
"global_parameters": [{
"name": "cluster_name",
"value": "rabbit#98c821300e49"
}],
"policies": [],
"queues": [],
"exchanges": [],
"bindings": []
}
rabbitmq.config
[
{rabbit, [
{loopback_users, []},
{vm_memory_high_watermark, 0.7},
{vm_memory_high_watermark_paging_ratio, 0.8},
{log_levels, [{channel, warning}, {connection, warning}, {federation, warning}, {mirroring, info}]},
{heartbeat, 10}
]},
{rabbitmq_management, [
{load_definitions, "/opt/definitions.json"}
]}
].
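To verify that the mounted config was actually picked up, something like this should work (using the container name from the script above; the exact log wording varies by RabbitMQ version):
# Show the effective runtime configuration
docker exec dev_rabbitmq rabbitmqctl environment
# The startup log also names the config file(s) the broker read
docker logs dev_rabbitmq | grep -i 'config file'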

Related

AWS ECS EC2 ECR not updating files after deployment with docker volume nginx

I am stuck on an issue with my volume and ECS.
I would like to attach a volume so I can store .env files etc. there, so I don't have to recreate them manually after every deployment.
The problem is, the way I have it set up, it does not update (or overwrite) the files that are pushed to ECR. So if I make a code change and push it to git, it does the following:
Creates a new image and pushes it to ECR
Creates new containers with the image pushed to ECR (it dynamically assigns a tag to the image)
When I do docker ps on EC2 I see the new containers, and the container with the code changes is built from the correct image which has just been pushed to ECR. So it seems all is working fine up to this point.
But the code changes don't appear when I refresh the browser, nor after clearing caches.
I am attaching the volume to the folder /var/www/html where my app sits, so from my understanding this code should get replaced during deployment. But the problem is, it does not replace the code.
When I remove the volume, I can see the code changes every time a deployment finishes, but then I always have to create the .env file manually and run a couple of commands.
PS: I have another container (mysql) which sets up its volume in exactly the same way, and the changes I make in the database are persistent even after a new container is created.
Please see my Dockerfile and taskDefinition.json below to see how I deal with volumes.
Dockerfile:
ARG ALPINE_VERSION
FROM alpine:${ALPINE_VERSION}
# Setup document root
WORKDIR /var/www/html
# Install packages and remove default server definition
RUN apk add --no-cache \
curl \
nginx \
php8 \
php8-ctype \
php8-curl \
php8-dom \
php8-fpm \
php8-gd \
php8-intl \
php8-json \
php8-mbstring \
php8-mysqli \
php8-pdo \
php8-opcache \
php8-openssl \
php8-phar \
php8-session \
php8-xml \
php8-xmlreader \
php8-zlib \
php8-tokenizer \
php8-fileinfo \
php8-xmlwriter \
php8-simplexml \
php8-pdo_mysql \
php8-pdo_sqlite \
php8-pecl-redis \
php8-bcmath \
php8-exif \
supervisor \
nano \
sudo
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY tools/docker/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY tools/docker/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY tools/docker/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY tools/docker/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody.nobody /var/www/html /run /var/lib/nginx /var/log/nginx
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apk update && apk add bash
# Install node npm
RUN apk add --update nodejs npm \
&& npm config set --global loglevel warn \
&& npm install --global marked \
&& npm install --global node-gyp \
&& npm install --global yarn \
# Install node-sass's linux bindings
&& npm rebuild node-sass
# Switch to use a non-root user from here on
USER nobody
# Add application
COPY --chown=nobody ./ /var/www/html/
RUN cat /var/www/html/resources/js/Components/Sections/About.vue
RUN composer install --optimize-autoloader --no-interaction --no-progress --ignore-platform-req=ext-zip
USER root
RUN yarn && yarn run production
USER nobody
VOLUME /var/www/html
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
taskDefinition.json
{
"containerDefinitions": [
{
"name": "fooweb-nginx-php",
"cpu": 100,
"memory": 512,
"links": [
"mysql"
],
"portMappings": [
{
"containerPort": 8080,
"hostPort": 80,
"protocol": "tcp"
}
],
"essential": true,
"environment": [],
"mountPoints": [
{
"sourceVolume": "fooweb-storage-web",
"containerPath": "/var/www/html"
}
]
},
{
"name": "mysql",
"image": "mysql",
"cpu": 50,
"memory": 512,
"portMappings": [
{
"containerPort": 3306,
"hostPort": 4306,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "MYSQL_DATABASE",
"value": "123"
},
{
"name": "MYSQL_PASSWORD",
"value": "123"
},
{
"name": "MYSQL_USER",
"value": "123"
},
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "123"
}
],
"mountPoints": [
{
"sourceVolume": "fooweb-storage-mysql",
"containerPath": "/var/lib/mysql"
}
]
}
],
"family": "art_web_task_definition",
"taskRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
"executionRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
"networkMode": "bridge",
"volumes": [
{
"name": "fooweb-storage-mysql",
"dockerVolumeConfiguration": {
"scope": "shared",
"autoprovision": true,
"driver": "local"
}
},
{
"name": "fooweb-storage-web",
"dockerVolumeConfiguration": {
"scope": "shared",
"autoprovision": true,
"driver": "local"
}
}
],
"placementConstraints": [],
"requiresCompatibilities": [
"EC2"
],
"cpu": "1536",
"memory": "1536",
"tags": []
}
So I believe there is some problem with the way I have set up the volume, or maybe there could be some permission issue?
Many thanks!
"I am attaching volume to the folder /var/www/html where sits my app,
so from my understanding this code should get replaced during
deployment."
That's the opposite of how docker volumes work.
It is going to ignore anything in /var/www/html inside the docker image, and instead reuse whatever you have in the mounted volume. Mounted volumes are primarily for persisting files between container restarts and image changes. If there is updated code in /var/www/html inside the image you are building, and you want that updated code to be active when your application is deployed, then you can't mount that as a volume.
If you specify a VOLUME instruction in your Dockerfile, then the very first time you run your container, Docker "initializes" the volume with the files that are inside the image, as part of the process of creating the volume. After that, the files in the volume on the host server are persisted across container restarts/deployments, and any new updates to that path inside new docker images are ignored.
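A quick way to observe this initialization behavior locally; the image tags and volume name below are made up for the sketch:
# First run: the empty named volume is seeded from the image's /var/www/html
docker volume create webdemo
docker run --rm -v webdemo:/var/www/html myapp:v1 ls /var/www/html
# A later run with a newer image: the volume's existing contents win, and
# the files baked into myapp:v2 at that path are not visible
docker run --rm -v webdemo:/var/www/html myapp:v2 ls /var/www/html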

How to add a RabbitMQ user while RabbitMQ is not live

I run a RabbitMQ service and Celery in a Docker container for my server. Workers are celery instances which connect to the server via RabbitMQ.
I set up and run RabbitMQ like this:
sudo service rabbitmq-server start
rabbitmqctl add_user bunny password
rabbitmqctl add_vhost bunny_host
rabbitmqctl set_permissions -p bunny_host bunny ".*" ".*" ".*"
This has a problem: if a worker tries to connect between the service being started and the bunny user being created and given permissions, then the worker's celery instance will terminate.
I tried adding this to the Dockerfile for my server to add the user before "live" startup:
RUN sudo service rabbitmq-server start && \
rabbitmqctl add_user bunny password && \
rabbitmqctl add_vhost bunny_host && \
rabbitmqctl set_permissions -p bunny_host bunny ".*" ".*" ".*" && \
sudo service rabbitmq-server stop
But when I restarted the rabbitmq-server service within the container, the user bunny did not exist.
(If I try to use rabbitmqctl to add a user when the service is not running, it errors out.)
Any help would be much appreciated.
You cannot run rabbitmqctl at build time. Instead, you can achieve the same thing by using bootstrap files. You need to update your files as in the snippets below.
Dockerfile
FROM rabbitmq:3.6.11-management-alpine
ADD rabbitmq.config /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
RUN chmod 666 /etc/rabbitmq/*
rabbitmq.config
[
{
rabbit,
[
{ loopback_users, [] }
]
},
{
rabbitmq_management,
[
{ load_definitions, "/etc/rabbitmq/definitions.json" }
]
}
].
definitions.json
{
"rabbit_version": "3.6.14",
"users": [
{
"name": "user",
"password_hash": "0xZBvBD2JOGWrVO84nZ62EJuQIRehcILEiPVFB9mD4zhFcAo",
"hashing_algorithm": "rabbit_password_hashing_sha256",
"tags": "administrator"
}
],
"vhosts": [
{
"name": "/"
}
],
"permissions": [
{
"user": "community",
"vhost": "/",
"configure": ".*",
"write": ".*",
"read": ".*"
}
],
"parameters": [],
"global_parameters": [
{
"name": "cluster_name",
"value": "rabbit#rabbitmq"
}
],
"policies": [],
"queues": [],
"exchanges": [],
"bindings": []
}
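Assuming the Dockerfile and both files above sit in the same directory, a quick smoke test could look like this (the myrabbit tag is just an example):
# Build the image with the baked-in definitions and start it
docker build -t myrabbit .
docker run -d --name myrabbit -p 15672:15672 myrabbit
# The definitions are loaded during boot, so the user exists before any client connects
docker exec myrabbit rabbitmqctl list_users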

vscode -- How to run `docker` in a task? -- Docker-Build-Task does not work

Situation and Problem
I am running macOS Mojave 10.14.5, upgraded bash as described here, and have a TeXlive docker container (basically that one) that I want to call to typeset LaTeX files. This works very well, and execution with the following tasks.json also worked flawlessly up until some recent update (which I cannot pin down, as I am not using this daily).
tasks.json
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"type": "shell",
"label": "runit",
"group": {
"kind": "build",
"isDefault": true
},
"command": "docker",
"args": [
"run",
"-v",
"${fileDirname}:/doc/",
"-t",
"-i",
"mytexlive",
"pdflatex",
"${fileBasename}"
],
"problemMatcher": []
},
{
"type": "shell",
"label": "test",
"command": "echo",
"args": [
"run",
"-v",
"${fileDirname}:/doc/",
"-t",
"-i",
"mytexlive",
"pdflatex",
"${fileBasename}"
],
}
]
}
Trying to run docker yields a "command not found" :
> Executing task: docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
/usr/local/bin/bash: docker: command not found
The terminal process command '/usr/local/bin/bash -c 'docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex'' failed to launch (exit code: 127)
... while trying to echo, works just fine.
> Executing task: echo run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex
Even though it once worked just as described above, and the very same command works in the terminal, it now fails when I execute it as a build task. Hence, my
Question
How to use docker in a build task?
or fix the problem in the above setup.
additional notes
Trying the following yielded the same "command not found":
{
"type": "shell", "label": "test",
"command": "which", "args": ["docker"]
}
... even though this works:
bash$ /usr/local/bin/bash -c 'which docker'
/usr/local/bin/docker
bash$ echo $PATH
/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
edit: One more note:
I am using a context entry to start vscode, with an automator script that runs the following bash command with the right-clicked element passed as the variable:
#!/bin/sh
/usr/local/bin/code -n "$1"
Since there hasn't been any progress here and I got help on GitHub, I will just answer myself so that others led here searching for a solution won't be let down.
Please give all the acknowledgement to joaomoreno for his answer here.
It turns out that when starting vscode via a context entry, there is an issue with an environment variable. Starting it like this has fixed the problem thus far:
#!/bin/sh
VSCODE_FORCE_USER_ENV=1 /usr/local/bin/code -n "$1"

Permission errors running jenkins inside docker using persistent volumes with marathon and mesos

I am trying to get jenkins running inside docker, using marathon and mesos to launch a jenkins docker image.
I used the create application button, which produces the following JSON:
{
"type": "DOCKER",
"volumes": [
{
"containerPath": "/var/jenkins_home",
"hostPath": "jenkins_home",
"mode": "RW"
},
{
"containerPath": "jenkins_home",
"mode": "RW",
"persistent": {
"size": 200
}
}
],
"docker": {
"image": "jenkins",
"network": "HOST",
"privileged": false,
"parameters": [],
"forcePullImage": false
}
}
stdout shows
--container="mesos-c8bd5b26-6e71-4e18-b490-821dbf7edd9d-S0.ac0b4dbb-10e4-4684-a4df-9539258d77ee" --docker="docker" --docker_socket="/var/run/docker.sock" --help="false" --initialize_driver_logging="true" --launcher_dir="/home/ajazam/mesos-0.28.0/build/src" --logbufsecs="0" --logging_level="INFO" --mapped_directory="/mnt/mesos/sandbox" --quiet="false" --sandbox_directory="/var/lib/mesos/data/slaves/c8bd5b26-6e71-4e18-b490-821dbf7edd9d-S0/frameworks/6079a596-90a8-4fa5-9c92-9215558737d1-0000/executors/jenkins-t7.9be44260-f99c-11e5-b0ac-e4115bb26fcc/runs/ac0b4dbb-10e4-4684-a4df-9539258d77ee" --stop_timeout="0ns"
Registered docker executor on slave4
Starting task jenkins-t7.9be44260-f99c-11e5-b0ac-e4115bb26fcc
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
stderr shows
I0403 14:04:51.026866 6569 exec.cpp:143] Version: 0.28.0
I0403 14:04:51.032097 6585 exec.cpp:217] Executor registered on slave c8bd5b26-6e71-4e18-b490-821dbf7edd9d-S0
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
touch: cannot touch ‘/var/jenkins_home/copy_reference_file.log’: Permission denied
I am using
marathon 1.0.0 RC1
mesos 0.28.0
docker 1.10.3
OS is ubuntu 14.04.4 LTS
Does anybody have any pointers to where I'm going wrong? My feeling is that the problem is to do with the persistent volume and the mapping of it into the jenkins container.
I got it working:
git clone https://github.com/jenkinsci/docker.git onto your agent nodes (I've done it on all of mine).
Insert # in front of lines 16 and 17 in the Dockerfile, e.g.
# RUN groupadd -g ${gid} ${group} \
# && useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
Run sudo docker build .
Use sudo docker tag xyz jenkins to rename the repo to jenkins, and then create an application using docker, jenkins and persistent volumes (a consolidated sketch follows below).
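A rough consolidation of those steps as shell commands (the xyz tag is a placeholder, as above; adjust the sed line numbers if the upstream Dockerfile has changed):
git clone https://github.com/jenkinsci/docker.git && cd docker
# Comment out the groupadd/useradd lines (16-17 at the time of writing)
sed -i '16,17 s/^/# /' Dockerfile
sudo docker build -t xyz .
sudo docker tag xyz jenkins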

Docker Compose does not bind ports

I have the following Dockerfile for my container:
FROM centos:centos7
# Install software
RUN yum -y update && yum clean all
RUN yum install -y tar gzip wget && yum clean all
# Install io.js
RUN mkdir /root/iojs
RUN wget https://iojs.org/dist/v1.1.0/iojs-v1.1.0-linux-x64.tar.gz
RUN tar -zxvf iojs-v1.1.0-linux-x64.tar.gz -C /root/iojs
RUN rm -f iojs-v1.1.0-linux-x64.tar.gz
# add io.js to path
RUN echo "PATH=$PATH:/root/iojs/iojs-v1.1.0-linux-x64/bin" >> /root/.bashrc
# go to /src
WORKDIR /src
CMD /bin/bash
I build this container and start the image with docker run -i -t -p 8080:8080 -v /srv/source:/usr/src/app -w /usr/src/app --rm iojs-dev bash. Docker binds container port 8080 to host port 8080, so that I can access the iojs application from my client. Everything works fine.
Now I want to start my container with docker-compose, using the following docker-compose.yml
webfrontend:
image: iojs-dev
links:
- db
command: bash -c "iojs test.js"
ports:
- "127.0.0.1:8080:8080"
volumes:
- /srv/source:/usr/src/app
- /logs:/logs
db:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: 12345
When I now run docker-compose run webfrontend bash, I cannot access port 8080 on my host. No port was bound. The output of docker port is empty, and the port settings in the output of docker inspect are empty as well:
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.51",
"IPPrefixLen": 16,
"MacAddress": "02:42:ac:11:00:33",
"PortMapping": null,
"Ports": {
"8080/tcp": null
}
},
"HostConfig": {
"Binds": [
"/srv/source:/usr/src/app:rw",
"/logs:/logs:rw"
],
"CapAdd": null,
"CapDrop": null,
"ContainerIDFile": "",
"Devices": null,
"Dns": null,
"DnsSearch": null,
"ExtraHosts": null,
"Links": [
"/docker_db_1:/docker_webfrontend_run_34/db",
"/docker_db_1:/docker_webfrontend_run_34/db_1",
"/docker_db_1:/docker_webfrontend_run_34/docker_db_1"
],
"LxcConf": null,
"NetworkMode": "bridge",
"PortBindings": null,
"Privileged": false,
"PublishAllPorts": false,
"RestartPolicy": {
"MaximumRetryCount": 0,
"Name": ""
},
"SecurityOpt": null,
"VolumesFrom": []
},
This is intentional behavior for docker-compose run, as per the documentation:
When using run, there are two differences from bringing up a container normally:
...
by default no ports will be created in case they collide with already opened ports.
One way to overcome this is to use up instead of run, which:
Builds, (re)creates, starts, and attaches to containers for a service.
Another way, if you're using version 1.1.0 or newer, is to pass the --service-ports option:
Run command with the service's ports enabled and mapped to the host.
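Concretely, either of these should publish the declared ports (service name taken from the docker-compose.yml above):
# (Re)create and start the service with its port mappings
docker-compose up webfrontend
# Or run a one-off container with the service's ports enabled (Compose >= 1.1.0)
docker-compose run --service-ports webfrontend bash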
P.S. Tried editing the original answer, got rejected, twice. Stay classy, SO.
This is intentional behavior for fig run.
Run a one-off command on a service.
One-off commands are started in new containers with the same config as a normal container for that service, so volumes, links, etc will all be created as expected. The only thing different to a normal container is the command will be overridden with the one specified and no ports will be created in case they collide.
source.
fig up is probably the command you're looking for; it will (re)create all containers based on your fig.yml and start them.
