I am running Docker Swarm 1.12 on 3 CoreOS machines via Vagrant.
What is the best way to start a Solr cloud on the cluster?
Do I need Zookeeper?
I have gotten as far as this:
docker service create --mode=global --name solr -p 8983:8983 solr:5.3.1 bash -c "/opt/solr/bin/solr start -f -c"
But then the cloud is empty because it does not know about the other 2 machines. How can I use Swarm's power here?
Documentation
The container image documentation describes how to interconnect ZooKeeper as a backing store for a distributed Solr setup:
You can also run a distributed Solr configuration, with Solr nodes in
separate containers, sharing a single ZooKeeper server:
Run ZooKeeper, and define a name so we can link to it:
$ docker run --name zookeeper -d -p 2181:2181 -p 2888:2888 -p 3888:3888 jplock/zookeeper
Run two Solr nodes, linked to the zookeeper container:
$ docker run --name solr1 --link zookeeper:ZK -d -p 8983:8983 \
solr \
bash -c '/opt/solr/bin/solr start -f -z $ZK_PORT_2181_TCP_ADDR:$ZK_PORT_2181_TCP_PORT'
$ docker run --name solr2 --link zookeeper:ZK -d -p 8984:8983 \
solr \
bash -c '/opt/solr/bin/solr start -f -z $ZK_PORT_2181_TCP_ADDR:$ZK_PORT_2181_TCP_PORT'
Running distributed Solr on Docker Swarm
Starting with a 3 node swarm:
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
2wkpmybf4wni8wia1s46lm2ml * node1 Ready Active Leader
ajrnf5oibgm7b12ayy0hg5i32 node3 Ready Active
bbe8n1hybhruhhrhmswn7fjmd node2 Ready Active
Create a network
$ docker network create --driver overlay my-net
Start a zookeeper service and wait for it to start
$ docker service create --name zookeeper --replicas 1 --network my-net jplock/zookeeper
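One simple way to wait is to poll the task state until it reports Running (a sketch, using the service name from the command above):
$ until docker service ps zookeeper | grep -q Running; do sleep 2; done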
Start 2 solr instances configured to connect to the DNS address "zookeeper". For more information on swarm mode networking, you can read the documentation.
$ docker service create --name solr --replicas 2 --network my-net -p 8983:8983 \
solr \
bash -c '/opt/solr/bin/solr start -f -z zookeeper:2181'
The web UI will be available on any node in the cluster
http://192.168.99.100:8983/
http://192.168.99.101:8983/
http://192.168.99.102:8983/
If you check the services, you'll notice the containers are spread across all 3 nodes in the cluster. This is the default scheduling behaviour.
$ docker service ps zookeeper
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
3fhipbsd4jdazmx8d7zum0ohp zookeeper.1 jplock/zookeeper node1 Running Running 7 minutes ago
$ docker service ps solr
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
bikggwpyz5q6vdxrpqwevlwsr solr.1 solr node2 Running Running 43 seconds ago
cutbmjsmcxrmi1ld75eox0s9m solr.2 solr node3 Running Running 43 seconds ago
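As a quick sanity check (the collection name test and the shard count here are just examples), you can create a collection from inside the solr task on one of the nodes; it should then show up as a 2-shard collection in the Cloud view:
$ docker exec -it $(docker ps -q -f name=solr) \
    /opt/solr/bin/solr create_collection -c test -shards 2 -p 8983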
Related
Consider the following scenario: I have 2 containers on 2 different networks with the same alias app.
$ docker network create net1
$ docker network create net2
$ docker run --name app1 -d --rm --network net1 --network-alias app traefik/whoami
$ docker run --name app2 -d --rm --network net2 --network-alias app traefik/whoami
Now I run a third container that is connected to both networks and query the DNS resolver for the alias app.
$ docker run --name dig --rm -d tutum/dnsutils sleep infinity
$ docker network connect net1 dig
$ docker network connect net2 dig
$ docker exec dig bash -c 'for _ in {1..5}; do dig +short app; done'
172.26.0.2
172.26.0.2
172.26.0.2
172.26.0.2
172.26.0.2
I expected to get DNS round-robin from this kind of setup, meaning dig should get 2 A records back, since the container is connected to both networks and can resolve both apps by their names.
$ docker exec dig dig +short app1 app2
172.26.0.2
172.27.0.2
The above output shows that both containers can be resolved by name, yet when using the alias for the query, only app1 is returned, seemingly because net1 was the network the dig container was connected to first.
Is there a way to solve this, or at least specify in which network I am looking for the alias? And is the above behaviour deterministic?
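One workaround sketch (the extra per-network aliases app-net1 and app-net2 below are hypothetical additions, not part of the setup above) would be to also give each container a network-specific alias and query that instead of the shared one:
$ docker run --name app1 -d --rm --network net1 --network-alias app --network-alias app-net1 traefik/whoami
$ docker run --name app2 -d --rm --network net2 --network-alias app --network-alias app-net2 traefik/whoami
$ docker exec dig dig +short app-net1
$ docker exec dig dig +short app-net2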
The docker daemon is running on an Ubuntu machine. I'm trying to start up a zookeeper ensemble in a swarm. The zookeeper nodes themselves can talk to each other. However, from the host machine, I don't seem to be able to access the published ports.
If I start the container with -
docker run \
-p 2181:2181 \
--env ZOO_MY_ID=1 \
--env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
zookeeper
It works like a charm. On my host machine I can say echo conf | nc localhost 2181 and zookeeper says something back.
However if I do,
docker service create \
-p 2181:2181 \
--env ZOO_MY_ID=1 \
--env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
zookeeper
and run the same command echo conf | nc localhost 2181,
it just gets stuck. I don't even get a new prompt on my terminal.
This works just as expected on the Docker Playground on the official Zookeeper Docker Hub page, so I expect it should work for me too.
But... If I docker exec -it $container sh and then try the command in there, it works again.
Aren't published ports supposed to be accessible even by the host machine for a service?
Is there some trick I'm missing about working with overlay networks?
Try using docker service create --publish 2181:2181 instead.
I believe the container backing the service is not directly exposed and has to go through the Swarm networking.
Otherwise, inspect your service to check which ports are published: docker service inspect <service_name>
Source: documentation
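For example, assuming the service ended up named zookeeper (check with docker service ls, since no --name was given above), you can list exactly what was published:
docker service inspect zookeeper --format '{{json .Endpoint.Ports}}'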
I am new to the Docker world and am trying to run the Elasticsearch stack on Docker. I am able to start ELK as a container and it works perfectly.
docker run -v /var/lib/docker/volumes/elk-data:/var/lib/elasticsearch \
-v /var/lib/docker/volumes/elk-data:/var/log/elasticsearch \
-p 5601:5601 -p 9200:9200 -p 5044:5044 \
--name elk sebp/elk
I am using journalbeat to forward the metrics to the Elasticsearch service and do visualization in Kibana.
I was able to run journalbeat as a service using the following command:
sudo docker service create --replicas 2 \
  --mount type=bind,source=/opt/apps/shared/dev/docker/volumes/journalbeat/config/journalbeat.yml,target=/journalbeat.yml \
  --mount type=bind,source=/run/log/journal,target=/run/log/journal \
  --mount type=bind,source=/etc/machine-id,target=/etc/machine-id \
  --constraint node.labels.nodename==devlabel \
  --name journalbeat-svc mheese/journalbeat:v5.5.2
Is there a way we can run ELK as a service, so that we can start 2 containers: one on the Swarm manager and the other on a worker node?
An example of running the full ELK stack as separate docker containers is available here: https://github.com/elastic/examples/tree/master/Miscellaneous/docker/full_stack_example
This uses docker-compose so you can easily bring the containers up and down.
ELK means Elasticsearch, Logstash, and Kibana, so there are 3 services that must be running. In Docker Swarm a service has zero or more instances, but every instance is a container based on the same image.
So, in order to run ELK as a service you would have to start Elasticsearch, Logstash, and Kibana in the same container. Although theoretically it is possible, this is not recommended (there should be one process per container).
Instead, you should create 3 services: one each for Elasticsearch, Logstash, and Kibana.
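A minimal sketch of that approach using docker service create (the image tags, overlay network name, and the ELASTICSEARCH_URL setting are assumptions you would adapt to your versions):
docker network create --driver overlay elk-net
docker service create --name elasticsearch --network elk-net -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:5.5.2
docker service create --name logstash --network elk-net -p 5044:5044 \
  docker.elastic.co/logstash/logstash:5.5.2
docker service create --name kibana --network elk-net -p 5601:5601 \
  -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
  docker.elastic.co/kibana/kibana:5.5.2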
I have a Docker Swarm setup with 4 nodes and have deployed Pumba in global mode on it. After that, I ran my application containers on the swarm with replicas on different nodes. I want to send kill or netem commands to all the Pumba containers on all the nodes.
Right now the only way I am able to do it is either by specifying the command when creating the service:
docker service create --name PUMBA --mode=global \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  gaiaadm/pumba:master pumba --random --interval 10s kill re2:"^customer-api*" --signal SIGTERM
Or by going onto each host, doing an exec into the container, and running the command:
docker exec -i $(docker ps| grep pumba|cut -d 'g' -f1 ) pumba netem --duration 60s delay --time 3000 --jitter 40 --distribution normal re2:"^${name[i]}*" > /dev/null 2>&1 &
I am creating a bash script for this. Is there a way I could pass the command to the service so that it's reflected in all of its global replicas?
I am able to send a new command to the Pumba container using the following command:
docker service update PUMBA --args "pumba --random --interval 5s kill re2:"^customer*" --signal SIGTERM"
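The same mechanism also works for netem; for example, reusing the parameters from the exec command above (the container name pattern is just that example's):
docker service update PUMBA --args "pumba netem --duration 60s delay --time 3000 --jitter 40 --distribution normal re2:^customer-api*"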
I finished this documentation:
https://docs.docker.com/swarm/install-w-machine/
It works fine.
Now I tried to setup this EC2 instances by following this documentation:
https://docs.docker.com/swarm/install-manual/
I am in Step 4. Set up a discovery backend
I cannot understand what steps I need to do next.
I created 5 nodes in EC2: manager0, manager1, consul0, node0, node1. Now I need to know how to set up service discovery with swarm.
In that document they ask us to connect to manager0 and consul0 and run ifconfig, and then they refer to an eth0 interface; I don't know where that is coming from.
Ultimately I need to know where (on which node) to run this command:
$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
Any suggestions on how to get past this step?
Consul will run on the consul0 server you created. So basically you first need to be able to run docker on worker0 and worker1 remotely; this is step 3. A better way of doing this is editing the daemon options directly with the command:
echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"' | sudo tee /etc/default/docker
Then restart docker. Afterwards you will find that you can run docker remotely from master0, master1 or any other instance behind your firewall with docker commands that start with:
docker -H $WORKER0_IPADDRESS:2375
For example, if your worker's IP address was 1.2.3.4, this would run the docker ps command remotely:
docker -H 1.2.3.4:2375 ps
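Equivalently, you can export DOCKER_HOST once instead of passing -H every time (using the same example address):
export DOCKER_HOST=tcp://1.2.3.4:2375
docker ps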
This is what swarm runs on. Then start up your consul server with the command you want to run (you got that one right), and that's it: you won't do anything else with the consul0 server except use its IP address when you run your swarm commands.
So if $CONSUL0 represents the IP address of your consul server, this is how you would set up the rest of swarm, running each command on the local machine of the respective node:
On consul0:
docker run -d -p 8500:8500 --restart=unless-stopped --name=consul progrium/consul -server -bootstrap
On master0 and master1:
docker run --name=master -d -p 4000:4000 swarm manage -H :4000 --replication --advertise $(hostname -i):4000 consul://$CONSUL0:8500
On worker0 and worker1:
docker run -d --name=worker swarm join --advertise=$(hostname -i):2375 consul://$CONSUL0:8500/
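Once everything is up, you can verify the cluster from any machine that can reach a manager (substituting your manager's IP for $MASTER0); the info output should list both workers:
docker -H $MASTER0:4000 info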