RabbitMQ cluster with docker-compose on different hosts and different projects

I have 3 projects that are deployed on different hosts. Every project has its own RabbitMQ container, but I need to create a cluster from these 3 hosts, using the same vhost but a different user/login pair for each project.
I tried Swarm and overlay networks, but Swarm is aimed at running standalone containers and doesn't work with Compose. I also tried docker-compose bundle, but that doesn't work as expected :(
I assumed that it would work something like this:
1) On the manager node I create an overlay network.
2) In every compose file I extend the networks config of the rabbitmq container with my overlay network.
3) They work as expected and I don't have to publish the RabbitMQ port to the Internet.
Any idea how I can do this?
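To illustrate step 2, the rabbitmq service's compose config would look roughly like this (a sketch using the version 2 file format; the network name is a placeholder):
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3
    networks:
      - rabbit-overlay
networks:
  rabbit-overlay:
    external: true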

Your approach is right, but Docker Compose doesn't work with Swarm Mode at the moment. Compose just runs docker commands, so you could script up what you want instead. For each project you'd have a script like this:
docker network create -d overlay app1-net
docker service create --network app1-net --name rabbit-app1 rabbitmq:3
docker service create --network app1-net --name app1 your-app-1-image
...
When you run all three scripts on the manager, you'll have three networks, and each network will have its own RabbitMQ service (just 1 container by default; use --replicas to run more than one). Within each network, the other services can reach the message queue by the DNS name rabbit-appX. You don't need to publish any ports, so Rabbit is not accessible outside of the Docker network.
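For instance, inside app1 the application would connect with an ordinary AMQP URL pointing at that DNS name; the credentials and vhost below are hypothetical placeholders, not values from the question:
amqp://user1:pass1@rabbit-app1:5672/shared_vhost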

Related

Why does a docker container name have a random suffix at the end after creating a docker service?

After I create a docker service using the command below
docker service create --name db --network era-networkkk -p 3306:3306 --mount type=bind,source=$(pwd)/data/mysql,destination=/var/lib/mysql schema
and then check the services using
docker service ls
it shows the name as db,
but when I use the command
docker ps
the container name has some randomly generated characters appended after the name.
How can I solve this problem?
I think that behaviour is absolutely intended. What if your swarm is configured to start multiple containers of the same image on a single swarm node? These containers can't have the same name, so there has to be a suffix on the container names to avoid name collisions. Why would you want to influence the containers' names? Normally, when working with clusters, you work at the service level instead of the container level.
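For illustration, Swarm names each task's container <service-name>.<slot>.<task-id>, so for a one-replica service called db, docker ps would show something like the following (the task ID here is made up):
db.1.7artkxq0m1hheel3ft0dkzvub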
I think the reason for this is that when you create a service you don't necessarily care what the container names are. You would usually create a service when docker is in swarm mode. With swarm mode you set up a cluster of nodes; I guess you only have one node for dev purposes. However, when you have more than one node, the service will create as many containers as you specify with the --replicas option. Any requests to your application are then load balanced across the containers in your cluster via the service.
Have a look at the docker documentation https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/ it may help to clarify how all of this works.

Choosing range of ports in spark

From the Spark documentation I know that the ports that executors, i.e. workers (because by default there is just one executor per worker), use to establish a connection with the master are randomly determined, but how could I set up their range in order to publish those ports in Docker? Also, if a worker establishes a connection with another container (which is not part of the distributed system), do I need to publish the port on which the worker would receive the returned data from that container (e.g. via an HTTPS request)?
Just to note, I do not use a docker-compose.yml because I do not need the containers to be set up as services, and I want to add/remove containers as needed when the number of customers increases/decreases.
You should use the same docker network for all containers which need to communicate with each other. Containers can reach each other by container name (on all ports), just as if they were different hosts on a network.
Create a network (needed only once)
docker network create <network_name>
When you launch a container, use --network to connect the container to the network:
docker run --network=<network_name> --name <container_name> <image>
You can also connect existing containers to networks:
docker network connect <network_name> <container_name>
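Putting this together for the Spark case, a minimal sketch might look like this (the network and container names are hypothetical, and <spark-image> stands for whatever image you use):
docker network create spark-net
docker run -d --network=spark-net --name spark-master <spark-image>
docker run -d --network=spark-net --name spark-worker1 <spark-image>
The worker can then reach the master at the hostname spark-master on any port, so the randomly chosen executor ports never need to be published to the host.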
Reference:
https://docs.docker.com/engine/reference/commandline/network_create/
https://docs.docker.com/engine/reference/run/

Adding new containers to an existing cluster (swarm)

I am having a problem trying to work out the best way to add a new container to an existing cluster while all containers run in docker.
Assume I have a docker swarm, and whenever a container stops/fails for some reason, the swarm brings up a new container and expects it to add itself to the cluster.
How can I make any container be able to add itself to a cluster?
I mean, for example, if I want to create a RabbitMQ HA cluster, I need to create a master and then create slaves. Assuming every instance of RabbitMQ (master or slave) is a container, let's now assume that one of them fails. We have 2 options:
1) slave container has failed.
2) master container has failed.
Usually, a service which has the ability to run as a cluster also has the ability to elect a new leader to be the master. So, assuming this scenario works seamlessly without any intervention, how would a new container added to the swarm (using docker swarm) be able to add itself to the cluster?
The problem here is that the new container is not created with new arguments every time; the container is always created as it was first deployed, which means I can't just change its command line arguments, and this is a cloud, so I can't hard-code an IP to use.
Something here is missing.
Maybe declaring a "Service" at the "docker Swarm" level would actually give the new container the ability to add itself to the cluster without really knowing anything about the other machines in the cluster...
There are quite a few options for scaling out containers with Swarm. It can range from being as simple as passing in the information via a container environment variable to something as extensive as service discovery.
Here are a few options:
Pass in the IP as a container environment variable, e.g. docker run -td -e HOST_IP=$(ifconfig wlan0 | awk '/t addr:/{gsub(/.*:/,"",$2);print$2}') somecontainer:latest
This would set the container environment variable HOST_IP to the IP of the machine it was started on (the awk snippet assumes the old ifconfig "inet addr:" output format).
Service Discovery. Querying a known point of entry to determine the information about any required services, such as IP, port, etc.
This is the most common type of scale-out option. You can read more about it in the official Docker docs. The high-level overview is that you set up a service like Consul on the masters, which your services query to find the information of other relevant services. Example: a web server requires a DB. The DB would add itself to Consul, and the web server would start up and query Consul for the database's IP and port (a minimal query sketch follows this list).
Network Overlay. Creating a network in swarm for your services to communicate with each other.
Example:
$ docker network create -d overlay mynet
$ docker service create --name frontend --replicas 5 -p 80:80/tcp --network mynet mywebapp
$ docker service create --name redis --network mynet redis:latest
This allows the web app to communicate with redis by placing them on the same network.
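As a minimal sketch of the service-discovery query mentioned above (assuming a Consul agent reachable at consul:8500 and a service registered under the hypothetical name db):
curl http://consul:8500/v1/catalog/service/db
The JSON response lists the address and port of each registered db instance, which the web server can then use to connect.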
Lastly, in your example above it would be best to deploy it as 2 separate services which you scale individually, e.g. deploy one MASTER and one SLAVE service. Then you would scale each depending on the number you needed, e.g. to scale to 3 slaves you would run docker service scale <SERVICE-ID>=<NUMBER-OF-TASKS>, which would start the additional slaves. In this scenario, if one of the scaled slaves fails, swarm starts a new one to bring the number of tasks back to 3.
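A hypothetical concrete run of that approach (service and network names made up for illustration):
docker service create --name rabbit-master --network mynet rabbitmq:3
docker service create --name rabbit-slave --network mynet rabbitmq:3
docker service scale rabbit-slave=3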
https://docs.docker.com/engine/reference/builder/#healthcheck
Docker images have a HEALTHCHECK instruction for this (new in Docker 1.12).
Use a health check in your image, for example:
HEALTHCHECK CMD ./anyscript.sh || exit 1
HEALTHCHECK runs the command and uses its exit code (0 = success, non-zero = failure) to report the container status as one of:
1. healthy
2. unhealthy
3. starting
Docker Swarm automatically restarts unhealthy containers in the swarm cluster.
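As a slightly fuller sketch for the RabbitMQ case (assuming an image recent enough to ship rabbitmq-diagnostics; the interval values are arbitrary):
FROM rabbitmq:3
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD rabbitmq-diagnostics -q ping || exit 1
A task whose health check keeps failing is marked unhealthy, and swarm schedules a replacement.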

docker-compose swarm without docker-machine

After looking through the official Docker swarm explanations, GitHub issues and Stack Overflow answers, I'm still at a loss as to why I am having the problem that I have.
Issue at hand: docker-compose up starts services outside the swarm even though the swarm is active and has 2 nodes.
I'm using Docker version 1.12.1.
Following the swarm tutorial, I was able to start and scale my swarm using docker service create without any issues.
Running docker-compose up with a version 2 docker-compose.yml results in services starting outside of the swarm; I can see them through docker ps but not docker service ls.
I can see docker-machine presented as the tool that solves this problem, but then again it needs VirtualBox to be installed.
So my questions would be:
Can I use docker-compose with docker-swarm (NOT docker-engine) without docker-machine and without the experimental bundle functionality?
If docker service create can start a service on any node, is that an indication that the network configuration of the swarm is correct?
What are the advantages/disadvantages of docker-machine versus the experimental bundle functionality?
1) No. Docker Compose isn't integrated with the new Swarm Mode yet. Issue 3656 in GitHub is tracking that. If you start containers on a swarm with Docker Compose at the moment, it uses docker run to start containers, which is why you see them all on one node.
2) Yes. Actually, you can use docker node ls on the manager to confirm all the nodes are up and active, and docker node inspect to check a particular node; you don't need to create a service to validate the swarm (see the commands sketched after this list).
3) Docker Machine is also behind the 1.12 release, so if you start a swarm with Docker Machine it will be the 'old' type of swarm. The old Docker Swarm product needed a whole lot of extra setup for a key-value store, TLS etc. which Swarm Mode does for free.
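A minimal sketch of that validation, run on the manager (node-1 is a hypothetical node name):
docker node ls
docker node inspect --pretty node-1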
1) You can't start services using docker-compose on the new Docker "Swarm Mode". There's a feature to convert a docker-compose file to the new DAB format, which is understood by the new swarm mode, but that's incomplete and experimental at this point. You basically need to use bash scripts to start services at the moment.
2) The nodes in a swarm (swarm mode) interact using their own overlay network. It's the one named ingress when you run docker network ls. You need to set up your own overlay network to run services in, e.g.:
docker network create -d overlay mynet
docker service create --name serv1 --network mynet nginx
3) I'm not sure what feature you mean by "experimental build". docker-machine is just a way to create hosts (the nodes). It facilitates setting up the docker daemon on each host and the certificates, and allows some basic maintenance (renewing the certs, stopping/starting a host if you're the one who created it). It doesn't create services, volumes or networks, or manage them. That's the job of the docker API.

Creating multiple Docker containers

I have to create a huge number of Docker containers on different hosts (e.g. 50 containers each on 3 hosts). These containers all have the same image, configuration etc., and only the network address and ID of each container should be different (so basically I want to create a huge virtual container network).
Is there a way to achieve this?
I have looked at technologies like Helios and Kubernetes, but they seem to only deploy one container on each agent. I thought about just creating a lot of different jobs in Helios and then deploying each one of them to its agent, but that seems a little dirty to me.
This is exactly the type of use case that Kubernetes is well suited for.
You should use a Replica Set. When creating your Replica Set, you specify a template that tells the system how to create each container instance, along with a desired number of replicas. It will then create that number of replicas across the available nodes in your cluster.
One caveat is that by default Kubernetes will only allow you to have ~100 pods per node, but you can change this number with a command line flag if you need more.
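A minimal sketch of such a Replica Set manifest (the names and image are hypothetical; this uses the apps/v1 form of the API):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app
spec:
  replicas: 150
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app
Applying it with kubectl apply -f replicaset.yaml asks the scheduler to keep 150 identical pods running, spread across the cluster's nodes.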
For the Docker specific solution, you can use Swarm and Compose.
Create your Docker Swarm cluster of 3 nodes and point your environment at that Swarm. (The below assumes each host is listening on 2375, which is OK for a private network, but you'll want TLS set up, and to switch over to 2376, for more security.)
cat >cluster.txt <<EOF
node1:2375
node2:2375
node3:2375
EOF
docker run -d -P --restart=always --name swarm-manager \
  -v $(pwd)/cluster.txt:/cluster.txt \
  swarm manage file:///cluster.txt
export DOCKER_HOST=$(docker port swarm-manager 2375)
Define your service inside a docker-compose.yml, then run docker-compose scale my-app=150. If your Swarm is set up with the default spread strategy, it will distribute the containers across the 3 hosts based on the number of containers running (or stopped) on each.
cat >docker-compose.yml <<EOF
my-app:
  image: my-app
EOF
docker-compose scale my-app=150
Note that the downside of docker-compose compared to the other tools out there is that it doesn't correct for outages until you rerun it.
