Elasticsearch cluster doesn't work on Docker Swarm

The docker-compose YAML file below brings up a 3-node Elasticsearch cluster when used with the docker compose command. That is fine for debugging, but I want to move to deployment, so I want to deploy on a swarm where the containers can run on different hosts.
So
docker compose up
works, but
docker stack deploy -c docker-compose.yml p3es
creates the same containers (although on different hosts) and the overlay network, but the Elasticsearch instances are not able to talk to each other via port 9300. So a master never gets elected, and although Elasticsearch responds to HTTP requests, the requests just error out.
In the logs, the following exception/stack trace appears in each container:
p3es_es01.1.sv26uqp4i4s3#carbon | "stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es03][10.0.12.9:9300][internal:cluster/coordination/join]",
p3es_es01.1.sv26uqp4i4s3#carbon | "Caused by: org.elasticsearch.transport.ConnectTransportException: [es01][10.0.0.53:9300] connect_exception",
(etc)
The cause of the exception turns out to be:
p3es_es01.1.sv26uqp4i4s3#carbon | "Caused by: java.io.IOException: connection timed out: 10.0.0.53/10.0.0.53:9300",
So here are some things I have tried:
I invoke a shell on one of the containers. I can ping each of the other containers. I can also do a curl -XGET on each of the containers and get a response from port 9200.
If I do a curl -XGET on port 9300 on one of the containers I get a "Not an HTTP Port" message. But at least it was able to resolve the address.
Docker stack likes to put prefixes on object names: if you name a network xyz, the network actually gets named stackname_xyz. So I changed the environment variables that tell Elasticsearch which nodes are part of the cluster to include the stack name prefix (sketched below). No luck.
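That attempt looked roughly like this (a sketch reconstructing the change, with the stack deployed as p3es, not the exact lines):

environment:
  - discovery.seed_hosts=p3es_es02,p3es_es03
  - cluster.initial_master_nodes=p3es_es01,p3es_es02,p3es_es03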
I've run out of ideas. Any suggestions?
version: '3.9'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - HOSTNAME=es01
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - es9300
    volumes:
      - nfs-es01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - HOSTNAME=es02
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - es9300
    volumes:
      - nfs-es02:/usr/share/elasticsearch/data
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - HOSTNAME=es03
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - es9300
    volumes:
      - nfs-es03:/usr/share/elasticsearch/data
volumes:
  nfs-es01:
    driver_opts:
      type: nfs
      o: addr=10.2.0.1,rw,nfsvers=4,local_lock=all
      device: :/sbn/process3/elasticsearch01
  nfs-es02:
    driver_opts:
      type: nfs
      o: addr=10.2.0.1,rw,nfsvers=4,local_lock=all
      device: :/sbn/process3/elasticsearch02
  nfs-es03:
    driver_opts:
      type: nfs
      o: addr=10.2.0.1,rw,nfsvers=4,local_lock=all
      device: :/sbn/process3/elasticsearch03
networks:
  es9300:
    driver: overlay
    attachable: true

As it turns out, Elasticsearch discovery gets confused when Docker provides it with multiple overlay networks. The directive:

ports:
  - 9200:9200

causes each es0* service to be attached to the ingress overlay network in addition to the overlay network specified (in this case es9300). For some reason, when Elasticsearch is running in the containers, it gets the wrong IP address when resolving the service name es01.
I haven't determined why that is, but removing the ports directive that publishes port 9200 resolves the issue.
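If you still need port 9200 reachable from outside the swarm, one workaround worth trying is host-mode publishing with the long ports syntax, which bypasses the ingress routing mesh (a sketch; I have not verified it on this exact setup):

ports:
  - target: 9200      # container port
    published: 9200   # port on the host
    protocol: tcp
    mode: host        # publish directly on the node running the task, no ingress network

With mode: host the port is only available on the node where the task actually runs, so this trades the routing mesh for keeping the service off the ingress network.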
Hopefully this posting will help someone encountering the same issue.

Related

Dockerized elasticsearch and fscrawler: failed to create elasticsearch client, disabling crawler… Connection refused

I received the following error when attempting to connect Dockerized fscrawler to Dockerized elasticsearch:
[f.p.e.c.f.c.ElasticsearchClientManager] failed to create elasticsearch client, disabling crawler…
[f.p.e.c.f.FsCrawler] Fatal error received while running the crawler: [Connection refused]
When fscrawler is run for the first time (i.e., docker-compose run fscrawler), it creates /config/{fscrawler_job}/_settings.yml with the following default setting:
elasticsearch:
  nodes:
    - url: "http://127.0.0.1:9200"
This causes fscrawler to attempt to connect to localhost (i.e., 127.0.0.1). However, this fails when fscrawler is located within a Docker container, because it is attempting to connect to the localhost of the CONTAINER. This was particularly confusing in my case because elasticsearch WAS accessible as localhost, but on the localhost of my physical computer (and NOT the localhost of the container). Changing the url allowed fscrawler to connect to the network address where elasticsearch actually resides.
elasticsearch:
  nodes:
    - url: "http://elasticsearch:9200"
I used the following docker image: https://hub.docker.com/r/toto1310/fscrawler
# FILE: docker-compose.yml
version: '2.2'
services:
  # FSCrawler
  fscrawler:
    image: toto1310/fscrawler
    container_name: fscrawler
    volumes:
      - ${PWD}/config:/root/.fscrawler
      - ${PWD}/data:/tmp/es
    networks:
      - esnet
    command: fscrawler job_name
  # Elasticsearch Cluster
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch2
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
Ran docker-compose up elasticsearch elasticsearch2 to bring up the elasticsearch nodes.
Ran docker-compose run fscrawler to create _settings.yml
Edited _settings.yml to
elasticsearch:
  nodes:
    - url: "http://elasticsearch:9200"
Started fscrawler with docker-compose up fscrawler.
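As a sanity check, the elasticsearch hostname can be verified from inside the compose network with a throwaway curl container (a sketch; the network name carries the compose project prefix, so substitute your project name for <project>):

docker run --rm --network <project>_esnet curlimages/curl -s http://elasticsearch:9200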

How to create an ES cluster and store data if Elasticsearch and Kibana are installed using Docker

I've installed Elasticsearch and Kibana using Docker images on an Ubuntu machine, and the commands used to run them are:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.5.1
docker run --link 36c0ea06f9e3:elasticsearch -p 5601:5601 docker.elastic.co/kibana/kibana:7.5.1
and they run successfully on my local machine.
Now the major concern for me: to create a cluster with one master node and one slave node, the user normally needs to edit /etc/elasticsearch/elasticsearch.yml.
But in this case (installed using a Docker image on Ubuntu), no such file or folder is created. Do let me know how I can create a cluster and store data in this case.
You already have part of the solution in your first docker run statement. The Elasticsearch Docker image is configured using environment variables; for example, your run statement sets the node to single-node mode using the -e "discovery.type=single-node" flag.
A full list of configuration options can be found here.
You may also share the config directory with the host using a host-shared volume:
docker run -p 9200:9200 -p 9300:9300 -v <your-dir>/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml docker.elastic.co/elasticsearch/elasticsearch:7.5.1
This allows you to edit the config at <your-dir>/elasticsearch.yml directly from your host's filesystem. Just make sure the config file exists in the specified directory before attempting to start the container.
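For reference, a minimal elasticsearch.yml to start from could look like this (illustrative values only; this sketch assumes a single-node setup like the docker run examples above):

# <your-dir>/elasticsearch.yml — minimal single-node sketch
cluster.name: my-cluster
node.name: node-1
network.host: 0.0.0.0
discovery.type: single-node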
Not sure if this is useful but here you go
https://blog.patricktriest.com/text-search-docker-elasticsearch/
From the official guide: you can use environment variables to pass options to /etc/elasticsearch/elasticsearch.yml.
This is a sample docker-compose.yml:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
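After docker-compose up, the cluster can be checked with the standard health endpoint:

curl http://localhost:9200/_cluster/health?pretty

The response should report "number_of_nodes" : 3 once all nodes have joined.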

Cannot set up docker-compose file to launch kibana at version 7.3.2

I am looking for a working version of a docker-compose file that starts up kibana and elasticsearch together on docker for mac at version 7.3.2. I've followed the most recent instructions on kibana and elasticsearch's 7.3.2 documentation and my docker-compose.yml file below is the union of what I gathered from both docs. (The kibana doc was the most vague with respect to the docker compose config). I've also tried following other stack overflow articles (written for older versions) but they don't seem to work with the latest versions. I now suspect I'm missing something version specific. 7.3.1 didn't work with the same config either.
I should note that the elasticsearch portion of the file works fine; I can hit http://localhost:9200 and get a json response. However, Kibana's url (http://localhost:5601) returns "Kibana server is not ready yet" with this error:
kibana | {"type":"log","#timestamp":"2019-09-12T21:45:04Z","tags":["warning","elasticsearch","admin"],"pid":7,"message":"Unable to revive connection: http://elasticsearch:9200/"}
This is my best attempt so far:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.3.2
    ports:
      - 5601:5601
    networks:
      - esnet
    environment:
      SERVER_NAME: kibana.example.org
      ELASTICSEARCH_URL: http://elasticsearch:9200
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
Docker Compose automatically creates a private Docker network for you, and within that, the names of the service: blocks are valid hostnames.
When you set
ELASTICSEARCH_URL: http://elasticsearch:9200
none of your containers are named elasticsearch, so the hostname lookup fails; but if you pick either node, es01 or es02, it will work:
ELASTICSEARCH_URL: http://es01:9200
(Note that you don't explicitly need a networks: definition for this to work; Compose will create a network named default for you. You also don't need to explicitly set container_name: unless you're planning to manage the same containers with non-Compose tooling.)
Use ELASTICSEARCH_HOSTS: http://es01:9200 instead of ELASTICSEARCH_URL to update the environment from the docker-compose.yml file. Here is the Kibana documentation about environment-variable configuration: https://www.elastic.co/guide/en/kibana/current/docker.html#environment-variable-config.
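Applied to the compose file in the question, the kibana service would then look something like this (a sketch combining the original service definition with the corrected variable):

kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:7.3.2
  ports:
    - 5601:5601
  networks:
    - esnet
  environment:
    SERVER_NAME: kibana.example.org
    ELASTICSEARCH_HOSTS: http://es01:9200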
You also need to add the network configuration to the kibana service:

networks:
  - esnet

and set ELASTICSEARCH_HOSTS: http://es01:9200 (note that es01 is your container name).

How to setup a 3-node Elasticsearch cluster on a single AWS EC2 instance?

I am currently trying to deploy a 3-node Elasticsearch cluster on a single EC2 instance (i.e. using ONE instance only) using a docker-compose file. The problem is I could not get the 3 nodes to communicate with each other to form the cluster.
On my Windows 10 machine I used the official Elasticsearch:6.4.3 image, while for AWS EC2 I am using a custom Elasticsearch:6.4.3 image with the ec2-discovery plugin installed, which I built using the "docker build -t mdasri/eswithec2disc ." command. Refer to the dockerfile below.
The dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.4.3
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
I was successful in setting up the 3-node Elasticsearch cluster locally using docker-compose on my Windows 10 machine. In my docker-compose file, I have 3 different Elasticsearch services to make up the 3 nodes: es01, es02, es03. I was hoping to use the same docker-compose file to set up the cluster on an AWS EC2 instance, but I was hit with an error.
I am using the "ecs-cli compose -f docker-compose.yml up" command to deploy to AWS EC2. The status of the ecs-cli compose was: "Started container...".
So to check the cluster status, I typed x.x.x.x/_cluster/health?pretty, but was hit with this error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
When I inspect each Docker container's logs on the EC2 instance after SSHing in, this is the error I see in ALL 3 containers:
[2019-06-24T06:19:43,880][WARN ][o.e.d.z.UnicastZenPing ] [es01] failed to resolve host [es02]
This is my docker-compose file for the respective AWS EC2 service:
version: '2'
services:
  es01:
    image: mdasri/eswithec2disc
    container_name: es01
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es01"
      - "node.master=true"
      - "node.data=false"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01, es02"
      - "discovery.zen.minimum_master_nodes=2"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
  es02:
    image: mdasri/eswithec2disc
    container_name: es02
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es02"
      - "node.master=true"
      - "node.data=false"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01, es02"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
  es03:
    image: mdasri/eswithec2disc
    container_name: es03
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es03"
      - "node.master=false"
      - "node.data=true"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01,es02"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
networks:
  esnet:
Please help me as I've been stuck on this problem for the past 1-2 weeks.
P.S.: Please let me know what other information you guys need. Thanks!
You need to configure links in your docker-compose file so that the hostnames are resolvable.
From the docker-compose docs:
Link to containers in another service. Either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name.
web:
  links:
    - db
    - db:database
    - redis
Also see the comment from @Mishi.Srivastava.
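Applied to the compose file in the question, that would look roughly like this (a sketch; each service would need links for the names in its discovery.zen.ping.unicast.hosts list, and note that classic docker-compose may reject fully circular links):

es03:
  links:
    - es01
    - es02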

Using Docker to set up an Elasticsearch cluster across multiple machines

I have currently tried to use docker-compose to spin up multiple elasticsearch containers but these are all deployed on my local machine. What would be the process for deploying multiple containers to several different machines using Docker?
I know that docker swarm exists, but I am not sure how to utilize it along with docker compose so that I can spin up multiple elasticsearch nodes located in the different virtual machines that I have spun up.
So far I know that in the elasticsearch.yml config file I need to specify the virtual machine addresses with
discovery.zen.ping.unicast.hosts: ["vm_ip1:9200", "vm_ip2:9200"]
but I am not sure how to enforce this change in the docker containers that are created. To spin up the containers I used the sample docker-compose.yml file that is shown in the elasticsearch docs:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - cluster.name=elasticsearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch2
    environment:
      - cluster.name=elasticsearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - "index.number_of_shards=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
Please let me know if any more details are needed, thanks!
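In case it clarifies what I'm after: what I think each node would ultimately need to receive is something like the following, sketched after the pattern used for the other settings above (vm_ip1 and vm_ip2 are placeholders for the virtual machine addresses):

environment:
  - "discovery.zen.ping.unicast.hosts=vm_ip1,vm_ip2"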
