Export RabbitMQ Docker image with vhost and queues - docker

I have a RabbitMQ Docker container that I started using the following command:
docker run -d --name myrabbit1 -p 15672:15672 rabbitmq:3-management
I then log in to the management plugin and create users, vhosts, queues, etc.
I want to save all those settings so they can be loaded up again. To do that I tried committing to a new image:
docker commit myrabbit1 vbrabbit:withVhostAndQueues
I then start up my new container (after stopping the old one):
docker run -d --name vbrabbit2 -p 15672:15672 -p 5672:5672 vbrabbit:withVhostAndQueues
I expect that all the queues, vhosts, etc would be saved, but they are not.
What am I missing?
Result from docker ps -a:

I want to save all those settings so they can be loaded up again
Are you needing to create a copy of the container, with the same settings?
Or are you just looking to docker stop myrabbit1 and then later docker start myrabbit1 to run the same container again?

TL;DR
The RabbitMQ instance within the container is looking for its data in a different place. The default configuration derives the data storage/load location from the hostname, which changes on every container creation, so the OP's data exists in the committed "final" image but RabbitMQ isn't loading it.
To fix it, statically set RABBITMQ_NODENAME, which may also require adding a line to /etc/hosts so RabbitMQ can confirm the node is active.
Details
This happened to me with the rabbitmq:3.8.12-management Docker image.
This is caused by RabbitMQ's default configuration, which affects where it stores data. By default, RabbitMQ starts a node on a UNIX system with a name of rabbit@$HOSTNAME (see RABBITMQ_NODENAME in the configuration docs). In Docker, $HOSTNAME changes per container run; it defaults to the container id (e.g. something like dd84759287560).
In @jhilden's case, when the vbrabbit:withVhostAndQueues image is booted as a new container, the RABBITMQ_NODENAME becomes a different value than the one used to create and store the original vhosts, users, queues, etc. Because RabbitMQ stores data inside a directory named after the RABBITMQ_NODENAME, and the RABBITMQ_NODENAME changes whenever $HOSTNAME changes, the booting RabbitMQ instance cannot find any existing data: the data is there in the image, but under a different RABBITMQ_NODENAME, so it isn't loaded.
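For illustration, the per-node data directories involved might look something like this (the mnesia base path is the one the official image typically uses; the container ids reuse the examples from this answer):

/var/lib/rabbitmq/mnesia/rabbit@dd84759287560   # where the original vhosts/queues were written
/var/lib/rabbitmq/mnesia/rabbit@036834720485    # where the newly booted node looks instead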
Note: I've only looked into solving this for a local-development, single-instance cluster. If you're using RabbitMQ in Docker for a production deployment, you'd probably need to look into customized hostnames.
To fix this issue we set a static RABBITMQ_NODENAME for the container.
In our docker-compose v3 file, we updated from:
# Before fix
rabbitmq:
  image: "rabbitmq:$RABBITMQ_VERSION"
  container_name: rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
    - "61613:61613"
  volumes:
    - "./etc/rabbit-plugins:/etc/rabbitmq/enabled_plugins"
    - type: volume
      source: rabbitmq-data
      target: /var/lib/rabbitmq
to this, after the fix:
rabbitmq:
  image: "rabbitmq:$RABBITMQ_VERSION"
  container_name: rabbitmq
  # Why do we set NODENAME and extra_hosts?
  #
  # It codifies that we're using the same RabbitMQ instance between container rebuilds.
  # If NODENAME is not set it defaults to "rabbit@$HOST", and because $HOST is dynamically
  # created in Docker it changes per container deployment. Why is a changing host an issue?
  # Because under the hood RabbitMQ stores data on a per-node basis. Thus without a
  # static RABBITMQ_NODENAME the directory the data is stored in changes per restart,
  # going from "rabbit@7745942c559e" one deployment to "rabbit@036834720485" the next.
  # Okay, but why do we need extra_hosts? Well, RabbitMQ wants to resolve itself to affirm
  # its management UI is functioning post deployment, and it does that with an HTTP call.
  # So to resolve the static host from RABBITMQ_NODENAME we need to add it to the
  # container's /etc/hosts file.
  environment:
    RABBITMQ_NODENAME: "rabbit@staticrabbit"
  extra_hosts:
    - "staticrabbit:127.0.0.1"
  ports:
    - "5672:5672"
    - "15672:15672"
    - "61613:61613"
  volumes:
    - "./etc/rabbit-plugins:/etc/rabbitmq/enabled_plugins"
    - type: volume
      source: rabbitmq-data
      target: /var/lib/rabbitmq
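If you're not using Compose, the same fix can be sketched with plain docker run flags; this is only an illustration (the container name, the rabbitmq-data volume name, and the staticrabbit alias mirror the Compose example above rather than anything from the original question):

docker run -d --name myrabbit \
  -e RABBITMQ_NODENAME=rabbit@staticrabbit \
  --add-host staticrabbit:127.0.0.1 \
  -p 5672:5672 -p 15672:15672 \
  -v rabbitmq-data:/var/lib/rabbitmq \
  rabbitmq:3-management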

Related

How to configure RabbitMQ for message persistence in Docker swarm?

How can I configure RabbitMQ to retain messages on node restart in docker swarm?
I've marked the queues as durable and I'm setting the message's delivery mode to 2. I'm mounting /var/lib/rabbitmq/mnesia to a persistent volume. I've docker exec'd to verify that rabbitmq is indeed creating files in said folder, and all seems well. Everything works in my local machine using docker-compose.
However, when the container crashes, docker swarm creates a new one, and this one seems to initialize a new Mnesia database instead of using the old one. The database's name seems to be related to the container's id. It's just a single node, I'm not configuring any clustering.
I haven't changed anything in rabbitmq.conf, except for the cluster_name, since it seemed to be related to the folder created, but that didn't solve it.
Relevant section of the docker swarm configuration:
rabbitmq:
  image: rabbitmq:3.9.11-management-alpine
  networks:
    - default
  environment:
    - RABBITMQ_DEFAULT_PASS=password
    - RABBITMQ_ERLANG_COOKIE=cookie
    - RABBITMQ_NODENAME=rabbit
  volumes:
    - rabbitmq:/var/lib/rabbitmq/mnesia
    - rabbitmq-conf:/etc/rabbitmq
  deploy:
    placement:
      constraints:
        - node.hostname==foomachine

Docker-compose redis: start with fixture?

I am using docker-compose to create a redis container. However, I need it to start with some default key values. Is this possible?
You need to modify your docker-compose file. You could also load the keys from a file, but here is the simplest example that sets and gets a key in the docker-compose file.
version: '2'
services:
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    command:
      - /bin/sh
      - -c
      - |
        nohup redis-server &
        sleep 5
        echo "adding some default key value"
        redis-cli SET docker awesome
        echo "Get docker key value"
        redis-cli GET docker
        # this will keep container running
        tail -f /dev/null
There are several approaches, but be aware that, by default, services start in an arbitrary order with Docker Compose, and even if you use depends_on, that only checks that containers are running (e.g. redis), not that they've completed some initialization process.
1. Easiest: Pre-create
See the option to run the redis image with persistent storage:
https://hub.docker.com/_/redis/
Using this approach, you'd either mount a local directory into the container's /data directory or create a (data) volume and use that. Then, you'd pre-populate the redis server by running the redis-cli against it.
One hack to doing this is to use your planned docker-compose.yml file, but bring up only the redis service: docker-compose --file=/path/to/docker-compose.yml up redis, where redis is the name of the redis service. You'll need to ensure the redis service is accessible from the host (ports: 6379:6379, perhaps) so that the external redis-cli can access it.
This approach works well for local-only use but does not facilitate deploying the solution elsewhere.
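A minimal sketch of that pre-populate step, assuming the compose service is named redis, it publishes 6379:6379 as mentioned, and redis-cli is installed on the host (the key names are just examples):

# Bring up only the redis service from the compose file
docker-compose --file=./docker-compose.yml up -d redis
# Seed it from the host
redis-cli -h 127.0.0.1 -p 6379 SET docker awesome
redis-cli -h 127.0.0.1 -p 6379 SET greeting "hello"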
2. Resilient: Test for keys
Docker Compose -- to my knowledge -- doesn't offer an elegant equivalent to Kubernetes' init containers which are run before the dependent container.
With Docker Compose, you could include an initialization (run-once) redis-cli step to populate the server, but you must then augment any clients to check that this has completed, or that the data exists, before starting (successfully).
The simplest solution for this is for the redis clients to fail (and be configured with restart: always) if the redis keys aren't present.
A more advanced solution would be to define a healthcheck for the existence of the redis keys and then use depends_on: ... condition: service_healthy (see link).
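A sketch of what that healthcheck approach could look like (the key name docker follows the earlier example; the app service and its image are placeholders, not from the question):

services:
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    healthcheck:
      # healthy only once the seed key exists
      test: ["CMD-SHELL", "redis-cli EXISTS docker | grep -q 1"]
      interval: 5s
      timeout: 3s
      retries: 30
  app:
    image: my-app:latest   # placeholder client image
    depends_on:
      redis:
        condition: service_healthy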
See also startup order in Docker Compose described here

How to implement changes made to docker-compose.yml to detached running containers

The project is currently running in the background from this command:
docker-compose up -d
I need to make two changes to their docker-compose.yml:
Add a new container
Update a previous container to have a link to the new container
After changes are made:
NOTE the "<--" arrows for my changes
web:
  build: .
  restart: always
  command: ['tini', '--', 'rails', 's']
  environment:
    RAILS_ENV: production
    HOST: example.com
    EMAIL: admin@example.com
  links:
    - db:mongo
    - exim4:exim4.docker # <-- Add link
  ports:
    - 3000:3000
  volumes:
    - .:/usr/src/app
db:
  image: mongo
  restart: always
exim4: # <-------------------------------- Add new container
  image: exim4
  restart: always
  ports:
    - 25:25
  environment:
    EMAIL_USER: user@example.com
    EMAIL_PASSWORD: abcdabcdabcdabcd
After making the changes, how do I apply them? (without destroying anything)
I tried docker-compose down && docker-compose up -d but this destroyed the Mongo DB container... I cannot do that... again... :sob:
docker-compose restart says it won't recognize any changes made to docker-compose.yml
(Source: https://docs.docker.com/compose/reference/restart/)
docker-compose stop && docker-compose start sounds like it'll just start up the old containers without my changes?
Test server:
Docker version: 1.11.2, build b9f10c9/1.11.2
docker-compose version: 1.8.0, build f3628c7
Production server is likely using older versions, unsure if that will be an issue?
If you just run docker-compose up -d again, it will notice the new container and the changed configuration and apply them.
But:
(without destroying anything)
There are a number of settings that can only be set at container startup time. If you change these, Docker Compose will delete and recreate the affected container. For example, links are a startup-only option, so re-running docker-compose up -d will delete and recreate the web container.
this destroyed the Mongo DB container... I cannot do that... again...
db:
  image: mongo
  restart: always
Add a volumes: option to this so that data is stored outside the container. You can keep it in a named volume, possibly managed by Docker Compose, which has some advantages, but a host-system directory is probably harder to accidentally destroy. You will have to delete and restart the container to change this option. But note that you will also have to delete and restart the container if, for example, there is a security update in MongoDB and you need a new image.
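For example, a bind mount to a host directory could look like this (a sketch; ./mongo-data is an arbitrary host path, and /data/db is where the official mongo image keeps its data):

db:
  image: mongo
  restart: always
  volumes:
    # host directory holding the database files, surviving container recreation
    - ./mongo-data:/data/db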
Your ideal state here is:
Actual databases (like your MongoDB container) store data in named volumes or host directories
Applications (like your Rails container) store nothing locally, and can be freely destroyed and recreated
All code is in Docker images, which can always be rebuilt from source control
Use volumes as necessary to inject config files and extract logs
If you lose your entire /var/lib/docker directory (which happens!) you shouldn't actually lose any state, though you will probably wind up with some application downtime.
Just docker-compose up -d will do the job.
Output should be like
> docker-compose up -d
Starting container1 ... done
> docker-compose up -d
container1 is up-to-date
Creating container2 ... done
As a side note, docker-compose is not really for production. You may want to consider docker swarm.
The key here is that up is idempotent.
If you update the configuration in docker-compose.yaml, run:
docker compose up -d
If Compose is building images before running them, and you want to rebuild them:
docker compose up -d --build

docker rabbitmq how to expose port and reuse container with a docker file

Hi, I am finding it very confusing how to create a Dockerfile that would run a RabbitMQ container where I can expose the port, so I can navigate to the management console via localhost and a port number.
I see someone has provided this dockerfile example, but I'm unsure how to run it:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
I have got Rabbit working locally fine, but everyone tells me Docker is the future; at this rate I don't get it.
Does the above look like a valid way to run a RabbitMQ container? Where can I find a full, understandable example?
Do I need a Dockerfile or am I misunderstanding it?
How can I specify the port? In the example above, what are the first numbers in 5672:5672 and what are the last ones?
How can I be sure that when I run the container again, say after a machine restart, I get the same container?
Many thanks
Andrew
Docker-compose
What you posted is not a Dockerfile. It is a docker-compose file.
To run that, you need to
1) Create a file called docker-compose.yml and paste the following inside:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
2) Download docker-compose (https://docs.docker.com/compose/install/)
3) (Re-)start Docker.
4) On a console run:
cd <location of docker-compose.yml>
docker-compose up
Do I need a docker file or am I misunderstanding it?
You have a docker-compose file. rabbitmq:3-management is the Docker image built from the RabbitMQ Dockerfile (which you don't need). The image will be downloaded the first time you run docker-compose up.
How can I specify the port? In the example above what are the first numbers 5672:5672 and what are the last ones?
"5672:5672" specifies the port of the queue.
"15672:15672" specifies the port of the management plugin.
The numbers on the left-hand side are the ports you can access from outside of the container. So, if you want to work with different ports, change the ones on the left. The right ones are defined internally.
This means you can access the management plugin at http://localhost:15672 (or, more generically, http://<host-ip>:<host port mapped to 15672>).
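For instance, if host port 15672 is already taken, you could publish the management UI on a different host port (a sketch; 15673 is an arbitrary choice):

ports:
  - "5672:5672"
  - "15673:15672"   # host port 15673 -> container port 15672 (management UI)

The management UI would then be at http://localhost:15673.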
You can see more info on the RabbitMQ Image on the Docker Hub.
How can I be sure that when I rerun the container, say after a machine restart that I get the same container?
I assume you want the same container because you want to persist the data. You can run docker-compose stop, restart your machine, then run docker-compose start; the same container is used. However, if the container is ever deleted, you lose the data inside it.
That is why you are using volumes. The data collected in your container also gets stored on your host machine. So, if you remove your container and start a new one, the data is still there because it was stored on the host machine.

How to configure a dockerfile and docker-compose for Jenkins

I'm absolutely new to Docker and Jenkins as well. I have a question about the configuration of the Dockerfile and docker-compose.yml file. I tried to use the simplest configuration to be able to set up these files correctly. Building and pushing is done correctly, but the Jenkins application is not running on my localhost (127.0.0.1).
If I understand it correctly, it should by default be running on port 50000 (ARG agent_port=50000 in the Jenkins "official" Dockerfile). I tried to use 50000, 8080 and 80 as well; nothing is working. Do you have any advice, please? I'm using these files: https://github.com/fdolsky321/Jenkins_Docker
The second question is, what's the best way to handle crashes of the container? Let's say that if the container crashes, I want to recreate a new container with the same settings. Is the best way just to create a new shell file like "crash.sh" and provide there the information that I want to create a new container with the same settings? Like is mentioned here: https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/
Thank you for any advice.
docker-compose for Jenkins
docker-compose.yml
version: '2'
services:
  jenkins:
    image: jenkins:latest
    ports:
      - 8080:8080
      - 50000:50000
    # uncomment for docker in docker
    privileged: true
    volumes:
      # enable persistent volume (warning: make sure that the local jenkins_home folder is created)
      - /var/wisestep/data/jenkins_home:/var/jenkins_home
      # mount docker sock and binary for docker in docker (only works on linux)
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
Replace ports 8080 and 50000 as needed for your host.
To recreate a new container with the same settings
The volume-mounted jenkins_home is the place where you store all your jobs, settings, etc.
Take a backup of the mounted jenkins_home volume after creating every job, or however you prefer.
Whenever there is a crash, run Jenkins with the same docker-compose file and replace the jenkins_home folder with the backup.
Rerun/restart Jenkins again:
List the container
docker ps -a
Restart container
docker restart <Required_Container_ID_To_Restart>
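A minimal sketch of such a backup and restore, assuming the bind-mounted host path /var/wisestep/data/jenkins_home from the compose file above (the backup location and file name are just examples):

# Back up the bind-mounted jenkins_home
tar -czf /backups/jenkins_home-$(date +%F).tar.gz -C /var/wisestep/data jenkins_home
# Restore after a crash, then bring Jenkins back up with the same compose file
tar -xzf /backups/jenkins_home-2024-01-01.tar.gz -C /var/wisestep/data
docker-compose up -d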
I've been using a docker-compose.yml that looks like the following:
version: '3.2'
volumes:
  jenkins-home:
services:
  jenkins:
    image: jenkins-docker
    build: .
    restart: unless-stopped
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host
    volumes:
      - jenkins-home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: jenkins-docker
My image is a locally built Jenkins image, based off of jenkins/jenkins:lts, that adds in some other components like docker itself, and I'm mounting the docker socket to allow me to run commands on the docker host. This may not be needed for your use case. The important parts for you are the ports being published, which for me is only 8080, and the volume for /var/jenkins_home to preserve the Jenkins configuration between image updates.
To recover from errors, I have restart: unless-stopped inside the docker-compose.yml to configure the container to automatically restart. If you're running this in swarm mode, that would be automatic.
I typically avoid defining a container name, but in this scenario, there will only ever be one jenkins-docker container, and I like to be able to view the logs with docker logs jenkins-docker to gather things like the initial administrator login token.
My Dockerfile and other dependencies for this image are available at: https://github.com/bmitch3020/jenkins-docker
Hyper-V with Docker for Windows:
In that case, you must make sure you port-forward any published port (like 5000).
Open the Hyper-V manager and right-click on the machine defined there: you will be able to add port-forwarding rules so that localhost:5000 reaches your VM:5000.
