I have a RabbitMQ broker with some exchanges and queues already defined. I know I can export and import these definitions via the HTTP API. I want to Dockerize it, and have all the broker definitions imported when it starts.
Ideally, it would be as easy as it is via the API. I could write a bunch of rabbitmqctl commands, but with a lot of definitions that could take quite some time. Also, every change somebody else makes through the web interface would have to be added by hand.
I have managed to do what I want by writing a script that runs a delayed curl request and starts the server, but this seems error prone and really not elegant. Are there any better ways to do definition importing/exporting, or is this the best that can be done?
My Dockerfile:
FROM rabbitmq:management
LABEL description="Rabbit image" version="0.0.1"
ADD init.sh /init.sh
ADD rabbit_e6f2965776b0_2015-7-14.json /rabbit_config.json
CMD ["/init.sh"]
My init.sh:
#!/bin/sh
sleep 10 && curl -i -u guest:guest -d @/rabbit_config.json -H "content-type:application/json" -X POST http://localhost:15672/api/definitions &
rabbitmq-server "$@"
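For what it's worth, a slightly more robust variant of the same idea polls the management API until it answers instead of sleeping for a fixed time. This is only a sketch, using the same endpoint and credentials as the script above:

#!/bin/sh
# Start the broker in the background first.
rabbitmq-server "$@" &

# Poll the management API until it responds instead of sleeping blindly.
until curl -sf -u guest:guest http://localhost:15672/api/overview > /dev/null; do
    echo "waiting for management API..."
    sleep 2
done

# Import the definitions, then keep the server process in the foreground.
curl -i -u guest:guest -H "content-type:application/json" \
    -d @/rabbit_config.json -X POST http://localhost:15672/api/definitions
wait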
Export definitions using rabbitmqadmin export rabbit.definitions.json.
Add them inside the image using your Dockerfile: ADD rabbit.definitions.json /tmp/rabbit.definitions.json
Add an environment variable when starting the container, for example, using docker-compose.yml:
environment:
  - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"
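Putting those steps together, a minimal docker-compose.yml sketch could look like this (the service name is illustrative, and the build context is assumed to contain the Dockerfile with the ADD line above):

version: '2'
services:
  rabbitmq:
    build: .
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"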
There is a simple way to load definitions into a Docker container.
Use a preconfigured node to export the definitions to a JSON file.
Then move this file to the folder containing the Dockerfile, and create a rabbitmq.config in the same folder too. Here is the content of rabbitmq.config:
[
  { rabbit, [
    { loopback_users, [ ] },
    { tcp_listeners, [ 5672 ] },
    { ssl_listeners, [ ] },
    { hipe_compile, false }
  ] },
  { rabbitmq_management, [
    { listener, [
      { port, 15672 },
      { ssl, false }
    ] },
    { load_definitions, "/etc/rabbitmq/definitions.json" }
  ] }
].
Then prepare an appropriate Dockerfile:
FROM rabbitmq:3.6.14-management-alpine
ADD definitions.json /etc/rabbitmq
ADD rabbitmq.config /etc/rabbitmq
EXPOSE 4369 5672 25672 15672
The definitions file is baked into the image at build time and loaded when the node boots, so when you run the container all definitions will already be applied.
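For reference, building and running that image might look like this (the image and container names are made up):

docker build -t my-rabbitmq .
docker run -d --name my-rabbit -p 5672:5672 -p 15672:15672 my-rabbitmq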
You could start your container with RabbitMQ, configure the resources (queues, exchanges, bindings), and then commit your configured container as a new image. This image can be used to start new containers.
More details at https://docs.docker.com/articles/basics/#committing-saving-a-container-state
I am not sure that this is an option, but the absolute easiest way to handle this situation is to periodically create a new, empty RabbitMQ container and have it join the first container as part of the RabbitMQ cluster. The configuration of the queues will be copied to the second container.
Then you can stop the container and use docker commit to create a versioned image of it in your Docker repository. This process only saves the changes you have made to the image, so you no longer have to worry about re-importing the configuration each time: just pull the latest image to get the latest configuration!
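As a sketch, that commit flow could look like the following (the container and image names are placeholders):

# after the new node has joined the cluster and synced the definitions
docker stop rabbit-node2
docker commit rabbit-node2 myregistry/rabbitmq-configured:1.1
docker push myregistry/rabbitmq-configured:1.1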
Modern releases support definition import directly in the core, without the need to preconfigure the management plugin.
# New in RabbitMQ 3.8.2.
# Does not require management plugin to be enabled.
load_definitions = /path/to/definitions/file.json
From the Schema Definition Export and Import section of the RabbitMQ documentation.
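With this newer mechanism, a minimal setup might be (a sketch; the image tag and file names are illustrative):

# rabbitmq.conf
load_definitions = /etc/rabbitmq/definitions.json

# Dockerfile
FROM rabbitmq:3.8-management
COPY rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
COPY definitions.json /etc/rabbitmq/definitions.json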
If you use the official rabbitmq image, you can mount the definitions file into /etc/rabbitmq as shown below, and RabbitMQ will load these definitions when the daemon starts (note that docker run -v needs an absolute host path, hence $(pwd)):
docker run -v $(pwd)/your_local_definitions_file.json:/etc/rabbitmq/definitions.json ......
I am looking for a docker-compose file that will include PostgreSQL, Cordite, Corda nodes, and the respective backend server (Spring Boot). I can run PostgreSQL, Cordite, and Corda nodes individually, but I want to group them together in a single file; I think that will make deployment easier.
Cordite provides a good open-source example of this. This is the docker-compose.yml file: https://gitlab.com/cordite/cordite/blob/master/test/docker-compose.yml
This is the spin up script:
https://gitlab.com/cordite/cordite/-/blob/master/test/build_env.sh
This is an example of the "spin up a service and wait for it to be ready" pattern:
docker-compose -p ${ENVIRONMENT_SLUG} up -d network-map
until docker-compose -p ${ENVIRONMENT_SLUG} logs network-map | grep -q "io.cordite.networkmap.NetworkMapApp - started"
do
echo -e "waiting for network-map to start"
sleep 5
done
So the up command starts the service, and then the bash script reads the logs from the container every 5 seconds, checking for io.cordite.networkmap.NetworkMapApp - started. Obviously you'll need to update what it looks for depending on which Docker image you are spinning up; the loop also generalizes into a small helper, as sketched below.
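A hypothetical reusable version of the same loop (the function name is mine; the service name and log pattern are whatever your images print):

wait_for_log() {
  service="$1"; pattern="$2"
  # Start the service detached, then poll its logs until the pattern appears.
  docker-compose -p ${ENVIRONMENT_SLUG} up -d "$service"
  until docker-compose -p ${ENVIRONMENT_SLUG} logs "$service" | grep -q "$pattern"
  do
    echo "waiting for $service to start"
    sleep 5
  done
}

wait_for_log network-map "io.cordite.networkmap.NetworkMapApp - started"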
Edit:
Some questions raised offline:
"How to place the cordapp or mount the cordapp?"
You have two options: either create your own Docker image that extends the Cordite one and place your CorDapps in the CorDapps folder, or use a volume mount to allow the running container to see your CorDapps (which exist on the host). The details are at https://hub.docker.com/r/cordite/cordite/; specifically, you can mount into /opt/corda/cordapps to override the CorDapps.
"How to add my springboot service for each node?"
Just add new services to the docker-compose.yml file that use your Spring Boot server's Docker image. There are three "normal" nodes in the example docker-compose.yml: emea, apac, and amer. I guess you want to create a new Spring Boot service for each one, something like the sketch below.
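For instance, a hypothetical Spring Boot service paired with the emea node might look like this in docker-compose.yml (the image name, port, and environment variable are placeholders):

emea-api:
  image: your-org/your-springboot-server:latest
  depends_on:
    - emea
  environment:
    - CORDA_RPC_HOST=emea
  ports:
    - "8080:8080"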
I have a Spring Boot application with some properties in my application.properties, as below.
server.ssl.keyStore=/users/admin/certs/appcert.jks
server.ssl.keyStorePassword=certpwd
server.ssl.trustStore=/users/admin/certs/cacerts
server.ssl.trustStorePassword=trustpwd
Here the cert paths are hardcoded. But I don't want to hardcode them, as the path will not be known in the Mesos or Kubernetes world.
I have a docker file as follows.
FROM docker.com/base/jdk1.8:latest
MAINTAINER Application Engineering [ https://docker.com/ ]
RUN mkdir -p /opt/docker/svc
COPY application/weather-service.war /opt/docker/svc/
CMD java -Dlogging.config=/conf/logback.xml -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties
Here, I can use the volume mount option in Kubernetes to put application.properties in place.
How can I achieve the same thing for the cert files referenced in application.properties?
The cert props are optional for some applications and mandatory for others.
I need options both for bundling the cert files inside the Docker image and for keeping them outside the image.
Approach 1. Within the Docker image
Remove the property "server.ssl.keyStore" from application.properties and pass it as a JVM system property, like the one below (note the -D flags must come before -jar, or the JVM treats them as program arguments).
CMD java -Dserver.ssl.keyStore=/certs/appcert.jks -Dlogging.config=/conf/logback.xml -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties
Now the cert should be placed in a Secret and mounted into the pod using the Kubernetes volume mount options.
Approach 2. No need to have -Dserver.ssl.keyStore=/certs/appcert.jks in the Dockerfile, but still remove the property "server.ssl.keyStore" from application.properties and do as follows.
a. Create secret
kubectl create secret generic svc-truststore-cert \
  --from-file=./cacerts
b. Create one env variable as below.
{
  "name": "JAVA_OPTS",
  "value": "-Dserver.ssl.trustStore=/certs/truststore/cacerts"
}
c. Create volume mounts under the pod's container spec.
"volumeMounts": [
{
"name": "truststore-cert",
"mountPath": "/certs/truststore"
}
]
d. Create a volume under spec.
{
  "name": "truststore-cert",
  "secret": {
    "secretName": "svc-truststore-cert",
    "items": [
      {
        "key": "cacerts",
        "path": "cacerts"
      }
    ]
  }
}
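For readability, here is approach 2 assembled into one pod spec in YAML form. This is only a sketch: the names match the JSON fragments above, the image is a placeholder, and it assumes the image's startup script honours JAVA_OPTS.

apiVersion: v1
kind: Pod
metadata:
  name: weather-service
spec:
  containers:
    - name: weather-service
      image: your-registry/weather-service:latest   # placeholder image
      env:
        - name: JAVA_OPTS
          value: "-Dserver.ssl.trustStore=/certs/truststore/cacerts"
      volumeMounts:
        - name: truststore-cert
          mountPath: /certs/truststore
  volumes:
    - name: truststore-cert
      secret:
        secretName: svc-truststore-cert
        items:
          - key: cacerts
            path: cacerts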
Approach 3.
Using a Kubernetes persistent volume.
Create a persistent volume on Kubernetes.
Mount the volume into the pod of each microservice (changes in the pod spec file). The mounted file system is then accessible via the '/shared/folder/certs' path.
CMD java -Dserver.ssl.keyStore=/certs/appcert.jks -Dlogging.config=/conf/logback.xml -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties
I have taken the second approach. Is this correct? Is there any better approach?
Thanks
Yes, the second approach is the best one, and it is the only way if you are storing some sensitive data like certificates, keys, etc. That topic is covered in the documentation.
Moreover, you can encrypt your secrets to add another level of protection.
I am having some trouble with my Docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (directly via the SSH command line) on server A, it takes the environment variables from server A and puts them in my container, so I have the host's values of ENVA and ENVB in my container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a Docker container on server B.
I can deploy the service via Jenkins (Job DSL, temporarily setting DOCKER_HOST="tcp://serverA:2375"), so it runs all docker (compose) commands on server A from the Jenkins container on server B. The service comes up, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on server B itself, but neither worked. It only works when I deploy manually directly on server A.
When I use docker inspect on the running container, I get the following output for the Env block:
"Env": [
"PROFILE=acc",
"affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
"TZ=Europe/Berlin",
"SERVICE_NAME=some-service-acc",
"ENVA",
"ENVB",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_VERSION=8",
"JAVA_UPDATE=121",
"JAVA_BUILD=13",
"JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
"JAVA_HOME=/usr/lib/jvm/default-jvm",
"JAVA_OPTS="
]
Where do I need to set the environment variables so that they are passed to the container? I would prefer to store the variables on server A, but if this is not possible, can someone explain how it could be done? Hardcoding the values in the compose file or anywhere else in the source is not an option, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your Groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that would allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool inside of docker. Hashicorp has their Vault product which implements an encrypted K/V store where values are accessed with a time limited token and offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure this countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a rest protocol that you can add to your entrypoint).
The latest option from Docker itself is secrets management, which requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file, using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement; a sketch follows.
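A minimal version 3 compose sketch of that Swarm secrets approach (the secret name is illustrative; the secret shows up inside the container as the file /run/secrets/enva):

# first: docker secret create enva ./enva.txt
version: '3.1'
services:
  some-service-acc:
    image: image/replacedvalues
    secrets:
      - enva
secrets:
  enva:
    external: true

# deploy with: docker stack deploy -c docker-compose.yml mystack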
I'm trying to make a Dockerfile based on the RabbitMQ repository with a customized policy set. The problem is that I can't use CMD or ENTRYPOINT, since that would override the base Dockerfile's, and then I would have to come up with my own, which I don't want to do. Let alone the fact that if I don't use RUN, it will be part of the runtime commands, and I want this included in the image, not just the container.
The other thing I could do is use the RUN command, but the problem with that is that the RabbitMQ server is not running at build time, and there is also no --offline flag for the set_policy command of the rabbitmqctl program.
When I use docker's RUN command to set the policy, here's the error I face:
Error: unable to connect to node rabbit#e06f5a03fe1f: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit#e06f5a03fe1f]
rabbit#e06f5a03fe1f:
* connected to epmd (port 4369) on e06f5a03fe1f
* epmd reports: node 'rabbit' not running at all
no other nodes on e06f5a03fe1f
* suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-136#e06f5a03fe1f'
- home dir: /var/lib/rabbitmq
- cookie hash: /Rw7u05NmU/ZMNV+F856Fg==
So is there any way I can set a policy for RabbitMQ without writing my own version of CMD and/or ENTRYPOINT?
You're in a slightly tricky situation with RabbitMQ, as its Mnesia data path is based on the host name of the container.
root#bf97c82990aa:/# ls -1 /var/lib/rabbitmq/mnesia
rabbit#bf97c82990aa
rabbit#bf97c82990aa-plugins-expand
rabbit#bf97c82990aa.pid
For other image builds you could seed the data files, or write a script that RUN calls to launch the application or database and configure it. With RabbitMQ, the container host name will change between image build and runtime so the image's config won't be picked up.
I think you are stuck with doing the config on container creation or at startup time.
Options
Creating a wrapper CMD script to do the policy after startup is a bit complex as /usr/lib/rabbitmq/bin/rabbitmq-server runs rabbit in the foreground, which means you don't have access to an "after startup" point. Docker doesn't really do background processes so rabbitmq-server -detached isn't much help.
If you were to use something like Ansible, Chef, or Puppet to set up the containers, you could configure a fixed hostname for the container's startup, then start it up and configure the policy as the next step. This only needs to be done once, as long as the hostname is fixed and you are not using the --rm flag.
At runtime, systemd could complete the config of a service with ExecStartPost; I'm sure most service managers have the same feature. I guess you could end up dropping messages, or at least causing errors at every startup, if anything came in before the configuration was finished.
You can configure the policy as described here.
Docker compose:
rabbitmq:
  image: rabbitmq:3.7.8-management
  container_name: rabbitmq
  volumes:
    - ~/rabbitmq/data:/var/lib/rabbitmq:rw
    - ./rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    - ./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json
  ports:
    - "5672:5672"
    - "15672:15672"
I have four containers: node, redis, mysql, and data. When I run docker-compose rm, it removes all of my containers, including the data container. My MySQL data lives in the data container, and I don't want to remove it.
Why must I remove those containers?
Sometimes I have to change some configuration files of node and mysql and rebuild, so I must remove the containers and start again.
I have searched with Google over and over and found nothing.
As things stand, you need to keep your data containers outside of Docker Compose for this reason. A data container shouldn't be running anyway, so this makes sense.
So, to create your data-container do something like:
docker run --name data mysql echo "App Data Container"
The echo command will complete and the container will exit immediately, but as long as you don't docker rm the container you will still be able to use it in --volumes-from commands, so you can do the following in Compose:
db:
  image: mysql
  volumes_from:
    - data
And just remove any code in docker-compose.yml that starts up the data container.
An alternative to docker-compose, written in Go (https://github.com/michaelsauter/crane), lets you create container groups -- including overriding the default group, so that you can ignore your data containers when rebuilding your app.
Given you have a "crane.yaml" with the following containers and groups:
containers:
  my-app:
    ...
  my-data1:
    ...
  my-data2:
    ...
groups:
  default:
    - "my-app"
  data:
    - "my-data1"
    - "my-data2"
You can build your data containers once:
# create your data-only containers (safe to run several times)
crane provision data # needed when building from Dockerfile
crane create data
# build/start your app.
crane lift -r # similar to docker-compose build && docker-compose up
# Force re-creation of your data-only containers...
crane create --recreate data
PS! Unlike docker-compose, even if building from Dockerfile, you MUST specify an "image" -- when not pulling, this is the name docker will give the image locally! Also note that the container names are global, and not prefixed by the folder name the way they are in docker-compose.
Note that there is at least one major pitfall with crane: it simply ignores misplaced or wrongly spelled fields! This makes it harder to debug than docker-compose YAML.
@AdrianMouat Now I can specify a *.yml file when starting all containers with the new 1.2 RC version of docker-compose (https://github.com/docker/compose/releases), like the following:
file: data.yml
data:
  image: ubuntu
  volumes:
    - "/var/lib/mysql"
Thanks for your very useful answer.