Kibana Open Distro can't connect to Elasticsearch Open Distro container on Docker

I am trying to run Kibana Open Distro against Elasticsearch Open Distro through docker-compose on an Azure virtual machine. When I run docker-compose, I can access Kibana in the browser at http://myipaddress:5601/app/kibana, but I can't reach Elasticsearch.
My docker-compose:
version: '3'
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.7.0
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node1
      - discovery.seed_hosts=odfe-node1
      - cluster.initial_master_nodes=odfe-node1
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size; recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - odfe-elasticdata:/usr/share/elasticsearch/data
      - odfe-elasticconfig:/usr/share/elasticsearch/config
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.7.0
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    volumes:
      - odfe-kibanaconfig:/usr/share/kibana/config
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    networks:
      - odfe-net
volumes:
  odfe-elasticdata:
  odfe-elasticconfig:
  odfe-kibanaconfig:
networks:
  odfe-net:
Error messages:
odfe-kibana | {"type":"log","@timestamp":"2020-05-28T18:23:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nGET https://odfe-node1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 172.22.0.3:9200"}
odfe-kibana | {"type":"log","@timestamp":"2020-05-28T18:32:24Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","@timestamp":"2020-05-28T18:32:24Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","@timestamp":"2020-05-28T18:32:24Z","tags":["error","elasticsearch-service"],"pid":1,"message":"Unable to retrieve version information from Elasticsearch nodes."}
If I run docker ps and some curl tests, I get the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
41ded49c03e5 amazon/opendistro-for-elasticsearch:1.7.0 "/usr/local/bin/dock…" 48 minutes ago Up 2 seconds 0.0.0.0:9200->9200/tcp, 0.0.0.0:9600->9600/tcp, 9300/tcp odfe-node1
84bed086ab5c amazon/opendistro-for-elasticsearch-kibana:1.7.0 "/usr/local/bin/kiba…" 48 minutes ago Up 2 seconds 0.0.0.0:5601->5601/tcp odfe-kibana
-------------------------------
[root@ServerEFK _data]# curl -XGET https://localhost:9200 -u admin:admin --insecure
{
  "name" : "odfe-node1",
  "cluster_name" : "odfe-cluster",
  "cluster_uuid" : "Ax2q2FrEQgCQHKZoDT7C0Q",
  "version" : {
    "number" : "7.6.1",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
    "build_date" : "2020-02-29T00:15:25.529771Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
--------------------------------------
[root@ServerEFK _data]# curl -XGET https://localhost:9200/_cat/nodes?v -u admin:admin --insecure
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.22.0.3 22 72 4 0.16 0.81 0.86 dim * odfe-node1
--------------------------------------
[root@ServerEFK _data]# curl -XGET https://localhost:9200/_cat/plugins?v -u admin:admin --insecure
name component version
odfe-node1 opendistro-anomaly-detection 1.7.0.0
odfe-node1 opendistro-job-scheduler 1.7.0.0
odfe-node1 opendistro-knn 1.7.0.0
odfe-node1 opendistro_alerting 1.7.0.0
odfe-node1 opendistro_index_management 1.7.0.0
odfe-node1 opendistro_performance_analyzer 1.7.0.0
odfe-node1 opendistro_security 1.7.0.0
odfe-node1 opendistro_sql 1.7.0.0
---------------------------------------
[root@ServerEFK _data]# curl -XGET https://localhost:9200/_cat/indices?pretty -u admin:admin --insecure
yellow open security-auditlog-2020.05.28 6xPW0yPyRGKG1owKbBl-Gw 1 1 18 0 144.6kb 144.6kb
green open .kibana_92668751_admin_1 mgAiKHNKQJ-sgFDXw7Iwyw 1 0 1 0 3.7kb 3.7kb
green open .kibana_92668751_admin_2 VvRiV16jRlualCWJvyYFTA 1 0 1 0 3.7kb 3.7kb
green open .opendistro_security NHxbWWv0RJu8kScOtsejTw 1 0 7 0 36.3kb 36.3kb
green open .kibana_1 s2DBw7Y_SUS9Go-u5qOrjg 1 0 1 0 4.1kb 4.1kb
green open .tasks 0kVxFOcqQzOxyAYTGUIWDw 1 0 1 0 6.3kb 6.3kb
Can anyone help, please?

Ok, I was able to get a single-node Elasticsearch & Kibana working with this docker-compose.yml:
version: '3'
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.8.0
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size; recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - odfe-net
volumes:
  odfe-data1:
networks:
  odfe-net:
I started with this YAML file and changed the Elasticsearch environment variables to:
- cluster.name=odfe-cluster
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
I also overrode the kibana.yml file:
volumes:
  - ./kibana.yml:/usr/share/kibana/config/kibana.yml
with this:
server.name: kibana
server.host: "0"
elasticsearch.hosts: https://odfe-node1:9200
elasticsearch.ssl.verificationMode: none
elasticsearch.username: admin
elasticsearch.password: admin
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
newsfeed.enabled: false
telemetry.optIn: false
telemetry.enabled: false
I extracted the default kibana.yml and changed:
elasticsearch.hosts: https://odfe-node1:9200
elasticsearch.username: admin
elasticsearch.password: admin
But the 2-node example in the documentation still doesn't work for me.
Hope that helps
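If Kibana still can't reach the node, a quick way to separate networking problems from TLS or credential problems is to curl Elasticsearch from inside the Docker network. This is only a sketch, assuming the container names from the compose file above:

```shell
# The ODFE Elasticsearch image ships with curl; -k skips certificate
# verification because the bundled demo certs are self-signed.
docker exec odfe-node1 curl -sk -u admin:admin https://odfe-node1:9200

# ECONNREFUSED from Kibana usually means the node is still starting or
# has crashed, so also watch the node's own logs while Kibana retries.
docker logs --tail 50 odfe-node1
```

If the in-network curl succeeds but Kibana still logs "No living connections", the problem is more likely the Kibana-side TLS/auth settings than Docker networking.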

ElasticSearch - cannot run two es docker containers at the same time
I'm trying to run two Elasticsearch services using docker-compose.yaml. Every time I run docker-compose up -d, only one service is running at a time. When I start the stopped service, it runs, but the one that was running before stops immediately.
This is what my docker-compose.yaml looks like:
version: '3.1'
services:
  es-write:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-write
    environment:
      - discovery.type=single-node
      - TAKE_FILE_OWNERSHIP=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
  es-read:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-read
    environment:
      - discovery.type=single-node
      - TAKE_FILE_OWNERSHIP=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9201:9200
  sqs:
    image: "roribio16/alpine-sqs:latest"
    container_name: "sqs"
    ports:
      - "9324:9324"
      - "9325:9325"
    volumes:
      - "./.docker-configuration:/opt/custom"
    stdin_open: true
    tty: true
Tl;dr: I believe you are getting the well-known "<container name> exited with code 137" error, which is Docker's way of telling you the container was killed because it ran out of memory (OOM).
To solve it, define the maximum amount of RAM each container is allowed to use. I allowed 4 GB, but choose what suits you.
version: '3.1'
services:
  es-write:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-write
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    deploy:
      resources:
        limits:
          memory: 4GB # use at most 4 GB of RAM
  es-read:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-read
    environment:
      - discovery.type=single-node
    ports:
      - 9201:9200
    deploy:
      resources:
        limits:
          memory: 4GB # use at most 4 GB of RAM
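One caveat worth noting: with the classic docker-compose CLI outside swarm mode, the deploy.resources section is ignored unless you pass --compatibility, which translates those limits into container-level settings. You can also confirm that an exit code 137 really was an OOM kill:

```shell
# Honor deploy.resources limits with the classic docker-compose CLI
docker-compose --compatibility up -d

# Check whether the container was killed by the kernel OOM killer
# (prints "true 137" for an OOM-killed container)
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' es-read
```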

How to connect distro elasticsearch service to another service defined in docker compose

Hi, I want to connect to Elasticsearch inside my app, which is defined as the "cog-app" service in docker-compose.yml along with Open Distro Elasticsearch and Kibana.
I am not able to connect to Elasticsearch when I run the Docker file. Can you please tell me how I can connect the Elasticsearch service to the app service?
I have referenced Elasticsearch in the cog-app service, and I'm getting a connection failure with Elasticsearch.
version: "3"
services:
  cog-app:
    image: app:2.0
    build:
      context: .
      dockerfile: ./Dockerfile
    stdin_open: true
    tty: true
    ports:
      - "7111:7111"
    environment:
      - LANG=C.UTF-8
      - LC_ALL=C.UTF-8
      - CONTAINER_NAME=app
    volumes:
      - /home/developer/app:/app
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.13.2
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node1
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g" # minimum and maximum Java heap size; recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
  odfe-node2:
    image: amazon/opendistro-for-elasticsearch:1.13.2
    container_name: odfe-node2
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node2
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - odfe-data2:/usr/share/elasticsearch/data
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.13.2
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    networks:
      - odfe-net
volumes:
  odfe-data1:
  odfe-data2:
networks:
  odfe-net:
Please tell me how the two services can communicate with each other.
As the Elasticsearch service is running in another container, localhost is not valid. You should use https://odfe-node1:9200 as the URL.
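For that hostname to resolve, cog-app also has to share a network with odfe-node1. In the compose file above, odfe-node1 and cog-app have no networks: entry (so they sit on the default network) while odfe-node2 and kibana are on odfe-net; attaching every service to odfe-net makes them all resolvable by service name. A minimal sketch (ELASTICSEARCH_HOST is a hypothetical variable name — use whatever your app actually reads):

```yaml
services:
  cog-app:
    # ...existing cog-app config...
    environment:
      - ELASTICSEARCH_HOST=https://odfe-node1:9200  # hypothetical variable name
    networks:
      - odfe-net
  odfe-node1:
    # ...existing odfe-node1 config...
    networks:
      - odfe-net
```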

Elastic Search with Docker

I ran the following docker-compose script and I am expecting two nodes to be up; however, there is only one. There seems to be some obvious error.
Taken from the documentation:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
http://127.0.0.1:9200/_cat/health
1598033352 18:09:12 docker-cluster green 1 1 0 0 0 0 0 0 - 100.0%
docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
elasticsearch /usr/local/bin/docker-entr ... Up 0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch2 /usr/local/bin/docker-entr ... Up 9200/tcp, 9300/tcp
Ok, I found the issue. It seems the two containers are somehow not aware of each other, and each tries to create a cluster on its own.
I introduced a delay with depends_on, and it works fine now. Thanks.
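For reference, the delay described here is a depends_on entry on the second node — a sketch assuming the service names from the compose file above. Note that depends_on only orders container startup; it does not wait for Elasticsearch to actually be ready to accept connections:

```yaml
  elasticsearch2:
    # ...existing elasticsearch2 config...
    depends_on:
      - elasticsearch
```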
It is up, but you need to map its port to an unused local port, like you mapped 9200:9200 for the first one. Do the same for the other one too, e.g. 8200:9200, then try hitting the second one locally on port 8200. It should work.
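In compose terms, that second mapping would look like this (host port 8200 is arbitrary — any free local port works):

```yaml
  elasticsearch2:
    # ...existing elasticsearch2 config...
    ports:
      - 8200:9200  # host:container — reach this node at http://localhost:8200
```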

How to setup a 3-node Elasticsearch cluster on a single AWS EC2 instance?

I am currently trying to deploy a 3-node Elasticsearch cluster on a single EC2 instance (i.e. using ONE instance only) using a docker-compose file. The problem is I could not get the 3 nodes to communicate with each other to form the cluster.
On my Windows 10 machine, I used the official Elasticsearch 6.4.3 image, while for AWS EC2 I am using a custom Elasticsearch 6.4.3 image with the discovery-ec2 plugin installed, which I build with the "docker build -t mdasri/eswithec2disc ." command. Refer to the Dockerfile below.
The dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.4.3
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
I was successful in setting up the 3-node Elasticsearch cluster locally using docker-compose on my Windows 10 machine. In my docker-compose file, I have 3 different Elasticsearch services to make up the 3 nodes: es01, es02, es03. I was hoping to use the same docker-compose file to set up the cluster on the AWS EC2 instance, but I was hit with an error.
I am using the "ecs-cli compose -f docker-compose.yml up" command to deploy to AWS EC2. The status of the ecs-cli compose was "Started container...".
So to check the cluster status, I typed x.x.x.x/_cluster/health?pretty, but was hit with this error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
When I check each Docker container's logs in the EC2 instance after I SSH in, this is the error I see in ALL 3 containers:
[2019-06-24T06:19:43,880][WARN ][o.e.d.z.UnicastZenPing ] [es01] failed to resolve host [es02]
This is my docker-compose file for the respective AWS EC2 service:
version: '2'
services:
  es01:
    image: mdasri/eswithec2disc
    container_name: es01
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es01"
      - "node.master=true"
      - "node.data=false"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01, es02"
      - "discovery.zen.minimum_master_nodes=2"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
  es02:
    image: mdasri/eswithec2disc
    container_name: es02
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es02"
      - "node.master=true"
      - "node.data=false"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01, es02"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
  es03:
    image: mdasri/eswithec2disc
    container_name: es03
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es03"
      - "node.master=false"
      - "node.data=true"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01,es02"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
networks:
  esnet:
Please help me, as I've been stuck on this problem for the past 1-2 weeks.
P.S.: Please let me know what other information you need. Thanks!
You need to configure links in your docker-compose for the container names to be resolvable.
From the docker-compose docs:
Link to containers in another service. Either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name.
web:
  links:
    - db
    - db:database
    - redis
And see also the comment from @Mishi.Srivastava.
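Applied to the compose file in the question, that advice would look something like the following sketch — only the links entries are new, and everything else in each service stays as-is:

```yaml
  es02:
    # ...existing es02 config...
    links:
      - es01
  es03:
    # ...existing es03 config...
    links:
      - es01
      - es02
```

Note that for plain docker-compose a shared user-defined network already provides name resolution; links matters mainly for deployment tools (such as ecs-cli here) that don't translate compose networks the same way.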

Elasticsearch service from docker image does not connect to webapp or kibana

I have a docker-compose.yml file which declares a webapp, a Postgres database, a two-node Elasticsearch cluster, and a Kibana container.
version: '3'
services:
  webapp:
    build:
      context: ../../../
      dockerfile: config/docker/dev/Dockerfile-dev
    container_name: MyWebApp-dev
    image: 'localhost:443/123'
    ports:
      - "4000:4000"
      - "3000:3000"
    depends_on:
      - db
      - elasticsearch
      - kibana
    links:
      - db
      - elasticsearch
      - kibana
  db:
    image: postgres:10
    container_name: db
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=mine_dev
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    environment:
      - node.name=es02
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:7.0.1
    ports:
      - "5601:5601"
    container_name: kibana
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
They all build successfully, but Kibana cannot get a live connection to Elasticsearch:
kibana | {"type":"log","@timestamp":"2019-05-08T23:36:13Z","tags":["status","plugin:searchprofiler@7.0.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
kibana | {"type":"log","@timestamp":"2019-05-09T00:02:46Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
and the "products" index cannot be created with the Elixir/Ecto mix task:
MyWebApp-dev | (elixir) lib/calendar/datetime.ex:537: DateTime.to_unix/2
MyWebApp-dev | (elasticsearch) lib/elasticsearch/indexing/index.ex:287: Elasticsearch.Index.build_name/1
MyWebApp-dev | (elasticsearch) lib/elasticsearch/indexing/index.ex:31: Elasticsearch.Index.hot_swap/2
MyWebApp-dev | (elasticsearch) lib/mix/elasticsearch.build.ex:86: Mix.Tasks.Elasticsearch.Build.build/3
MyWebApp-dev |
MyWebApp-dev | ** (Mix) Index products could not be created.
MyWebApp-dev |
MyWebApp-dev | %HTTPoison.Error{id: nil, reason: :econnrefused}
All the while, I can connect to the elasticsearch server:
A68MD-PRO:~# curl http://localhost:9200/_cat/health
1557359160 23:46:00 docker-cluster green 2 2 2 1 0 0 0 0 - 100.0%
Even curling from inside the container yields:
A68MD-PRO:~# docker exec elasticsearch curl http://elasticsearch:9200/_cat/health
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 66 100 66 0 0 6969 0 --:--:-- --:--:-- --:--:-- 7333
1557373042 03:37:22 docker-cluster green 2 2 2 1 0 0 0 0 - 100.0%
Does anyone know what this problem is about and how to solve it?
Update: If I do
docker exec -it MyWebApp-dev curl -XPUT 'http://elasticsch:9200/something/example/1' -d ' { "type": "example", "quantity": 2 }' -H'Content-Type: application/json'
it works perfectly well. So it must have something to do with HTTPoison, I think.
The Elasticsearch containers are on a different Docker network than the Kibana container.
Please verify this network configuration:
networks:
  - esnet
Remove it from the Elastic nodes, or apply the very same network config to Kibana.
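Concretely, the second option would look like this — a sketch keeping the rest of the kibana service from the question unchanged:

```yaml
  kibana:
    image: docker.elastic.co/kibana/kibana:7.0.1
    container_name: kibana
    ports:
      - "5601:5601"
    environment:
      # service name of the Elasticsearch node on the shared network
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    networks:
      - esnet
```

The webapp service likely needs the same networks: [esnet] entry for the same reason, which would also explain the HTTPoison :econnrefused error.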
