Elasticsearch with Docker - docker

I ran the following Docker Compose file and expected two nodes to be up, however only one is. There seems to be some obvious error I am missing.
It is taken from the documentation:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
http://127.0.0.1:9200/_cat/health
1598033352 18:09:12 docker-cluster green 1 1 0 0 0 0 0 0 - 100.0%
docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
elasticsearch /usr/local/bin/docker-entr ... Up 0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch2 /usr/local/bin/docker-entr ... Up 9200/tcp, 9300/tcp

OK, I found the issue. The two containers were somehow not aware of each other and were each trying to form their own cluster.
I introduced a delay with depends_on, and it works fine now. Thanks.
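For reference, a minimal sketch of what that might look like on the second service (note that depends_on only waits for the first container to start, not for Elasticsearch inside it to be ready):

  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
    depends_on:
      - elasticsearch   # start this node only after the elasticsearch container is started
    environment:
      - "discovery.zen.ping.unicast.hosts=elasticsearch"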

It is up, but you need to map its port to an unused local port, just as you mapped 9200:9200 for the first node. Add a mapping such as 8200:9200 for the second node, then try hitting it locally on port 8200. It should work.
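A sketch of that mapping on the second node (8200 is just an example of an unused local port):

  elasticsearch2:
    ports:
      - 8200:9200   # publish the second node's HTTP port on localhost:8200

Then curl http://127.0.0.1:8200/_cat/health should reach the second node.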

Related

How to connect an Open Distro Elasticsearch service to another service defined in docker-compose

Hi, I want to connect to Elasticsearch from my app, which is defined as the "cog-app" service in docker-compose.yml along with Open Distro Elasticsearch and Kibana.
I am not able to connect to Elasticsearch when I run the Dockerfile. Can you please tell me how I can connect the Elasticsearch service to the app service?
I have defined the Elasticsearch settings in the cog-app service, and I am getting a connection failure with Elasticsearch.
version: "3"
services:
cog-app:
image: app:2.0
build:
context: .
dockerfile: ./Dockerfile
stdin_open: true
tty: true
ports:
- "7111:7111"
environment:
- LANG=C.UTF-8
- LC_ALL=C.UTF-8
- CONTAINER_NAME=app
volumes:
- /home/developer/app:/app
odfe-node1:
image: amazon/opendistro-for-elasticsearch:1.13.2
container_name: odfe-node1
environment:
- cluster.name=odfe-cluster
- node.name=odfe-node1
- discovery.seed_hosts=odfe-node1,odfe-node2
- cluster.initial_master_nodes=odfe-node1,odfe-node2
- bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
- "ES_JAVA_OPTS=-Xms2g -Xmx2g" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
hard: 65536
volumes:
- odfe-data1:/usr/share/elasticsearch/data
ports:
- 9200:9200
- 9600:9600 # required for Performance Analyzer
odfe-node2:
image: amazon/opendistro-for-elasticsearch:1.13.2
container_name: odfe-node2
environment:
- cluster.name=odfe-cluster
- node.name=odfe-node2
- discovery.seed_hosts=odfe-node1,odfe-node2
- cluster.initial_master_nodes=odfe-node1,odfe-node2
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms2g -Xmx2g"
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
volumes:
- odfe-data2:/usr/share/elasticsearch/data
networks:
- odfe-net
kibana:
image: amazon/opendistro-for-elasticsearch-kibana:1.13.2
container_name: odfe-kibana
ports:
- 5601:5601
expose:
- "5601"
environment:
ELASTICSEARCH_URL: https://odfe-node1:9200
ELASTICSEARCH_HOSTS: https://odfe-node1:9200
networks:
- odfe-net
volumes:
odfe-data1:
odfe-data2:
networks:
odfe-net:
Please tell me how the two services can communicate with each other.
As the Elasticsearch service is running in another container, localhost is not valid. You should use odfe-node1:9200 as the URL.
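As a rough sketch (ELASTICSEARCH_HOST is a hypothetical name for whatever setting your app reads; the point is that the host is the service name, not localhost, and since neither cog-app nor odfe-node1 declares a networks key, both sit on the Compose default network and can already resolve each other by name):

  cog-app:
    image: app:2.0
    environment:
      # hypothetical setting; the key part is the host being the service name
      - ELASTICSEARCH_HOST=https://odfe-node1:9200

The kibana service in the same file already points at https://odfe-node1:9200 in exactly this way.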

Docker containers exit when running docker-compose up

I'm trying to use Elasticsearch with Docker, following the guide here: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
My docker-compose.yml is below:
version: '2.2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch1
    environment:
      - node.name=master-node
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data01:/usr/share/elasticsearch/data
    ports:
      - 127.0.0.1:9200:9200
      - 127.0.0.1:9300:9300
    networks:
      - elastic
    stdin_open: true
    tty: true
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch2
    environment:
      - node.name=data-node1
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
    ports:
      - 127.0.0.1:9301:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data02:/usr/share/elasticsearch/data
    networks:
      - elastic
    stdin_open: true
    tty: true
volumes:
  es-data01:
    driver: local
  es-data02:
    driver: local
networks:
  elastic:
    # driver: bridge
The problems are:
I cannot connect with curl -XGET localhost:9200
The Docker containers exit automatically after a few seconds.
Can you help me?
PS: when I use docker run it works. What is the difference between the two? The command is:
docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch -it --rm -v els:/usr/share/elasticsearch/data -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.7.0
Please check the container logs using docker logs <your stopped container-id>; you can get the container ID with the docker ps -a command.
Also, please follow this SO answer and set the memory requirements, which should help you run Elasticsearch in Docker. If that doesn't help, provide the logs, which you can obtain as explained above.
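For example, to find out why the container stopped (a minimal sketch):

docker ps -a                              # list all containers, including the ones that exited
docker logs <your stopped container-id>   # show the Elasticsearch logs from the stopped container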
Based on the comments, here is the updated docker-compose:
version: '2.2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch1
    environment:
      - node.name=master-node
      - node.master=true
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.initial_master_nodes=master-node"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data01:/usr/share/elasticsearch/data
    ports:
      - 127.0.0.1:9200:9200
      - 127.0.0.1:9300:9300
    networks:
      - elastic
    stdin_open: true
    tty: true
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch2
    environment:
      - node.name=data-node1
      - node.master=false
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.initial_master_nodes=master-node"
    ports:
      - 127.0.0.1:9301:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data02:/usr/share/elasticsearch/data
    networks:
      - elastic
    stdin_open: true
    tty: true
volumes:
  es-data01:
    driver: local
  es-data02:
    driver: local
networks:
  elastic:
    # driver: bridge
As you are following this article, https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html, it is worth checking the second section on ulimits and memory resources, as the containers in docker-compose are exiting due to low resources.
The "Exited with code 137" error is caused by a resource limitation (usually RAM) on the host machine. You can resolve this problem by adding this line to the environment variables in your docker-compose file:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
You can read more about heap size settings in the official Elasticsearch documentation, in this link.
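In context, the heap setting sits under the service's environment section, roughly like this (the service name is just an example; mem_limit is an optional extra cap shown for illustration):

  elasticsearch1:
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"   # cap the JVM heap so the container fits in available RAM
    mem_limit: 1g                          # optional hard memory limit on the container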

Dockerized Elasticsearch nodes unavailable for Liferay 7.1

Liferay is not able to recognize my Elasticsearch cluster when starting. Here is my docker-compose configuration:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=liferay-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - "9299:9200"
      - "9399:9300"
    expose:
      - "9299"
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=liferay-cluster2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9298:9200"
      - "9398:9300"
    expose:
      - "9298"
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config file content:
transportAddresses="127.0.0.1:9299"
logExceptionsOnly="false"
operationMode="REMOTE"
indexNamePrefix="myprefix-"
clusterName="liferay-cluster"
When starting docker-compose, I'm able to access my two ES clusters at http://127.0.0.1:9299/ and http://127.0.0.1:9298/.
However, when Liferay starts, it is unable to access the ES nodes:
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{vUNCF_HNRtu_tYUjkqhXvg}{127.0.0.1}{127.0.0.1:9299}]]
Has anyone tried this configuration? Any help would be appreciated. Thanks :-)
I've found a solution; it could help if someone is trying to do the same.
As I said in my comment to @ibexit, I'm running two dockerized ES clusters and two separate Liferay portals (not in containers) on the same machine (development mode).
I changed the transport address in the Liferay OSGi config file, since it must match the transport TCP port where ES is running:
transportAddresses="127.0.0.1:9301"
logExceptionsOnly="false"
operationMode="REMOTE"
indexNamePrefix="myprefix-"
clusterName="liferay-cluster"
I also added the property network.publish_host=127.0.0.1 to my ES clusters (without this property, Liferay was not able to detect the ES nodes).
Here is my docker-compose.yml (using ES 6.1.4):
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=liferay-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - transport.tcp.port=9301
      - network.publish_host=127.0.0.1
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - "9201:9200"
      - "9301:9301"
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=liferay-cluster2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - transport.tcp.port=9302
      - network.publish_host=127.0.0.1
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9202:9200"
      - "9302:9302"
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
network.publish_host did the trick!
By default, Elasticsearch binds the transport and HTTP ports to localhost (local) only, so the ports exposed by Docker do not work. You need to bind to a specific IP, or use 0.0.0.0 for all interfaces, or the special value _site_, as explained here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#network-interface-values
Please keep in mind that enabling this starts the node in production mode, which runs several bootstrap checks. Please see the docs if you need more information on this topic, or search SO.
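A minimal sketch of that idea in the compose environment (network.host=0.0.0.0 is an illustration of the binding described above, not taken from the question's file, and it triggers the production-mode bootstrap checks just mentioned):

  es01:
    environment:
      - network.host=0.0.0.0            # bind HTTP and transport to all interfaces
      - network.publish_host=127.0.0.1  # address advertised to clients outside Docker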
My working local setup: Docker Compose with two Elasticsearch nodes in containers and Liferay running on the host. I am using Elasticsearch 6.8.2 images modified according to the Liferay docs, available from Docker Hub: https://hub.docker.com/repository/docker/ktorek/liferay7-elasticsearch.
I am using a Gradle workspace, so I've configured configs/local/osgi/configs/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config:
operationMode=REMOTE
clusterName=docker-cluster
transportAddresses=127.0.0.1:9300,127.0.0.1:9301
I killed a lot of time on the transportAddresses configuration: it is documented with square brackets and double quotes, transportAddresses=["192.168.1.1:9300","192.168.1.2:9300"], but that does not work. The configuration listed above shows the actual working syntax.
My docker-compose.yml:
version: '3.7'
services:
  es01:
    container_name: "es01"
    image: ktorek/liferay7-elasticsearch:latest
    environment:
      - node.name=es01
      - node.data=true
      - cluster.name=docker-cluster
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es02"
    ports:
      - "9300:9300"
      - "9200:9200"
    networks:
      - mynetwork
    volumes:
      - es01-data:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
  es02:
    container_name: "es02"
    image: ktorek/liferay7-elasticsearch:latest
    environment:
      - node.name=es02
      - node.data=true
      - cluster.name=docker-cluster
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es01"
    ports:
      - "9301:9300"
      - "9201:9200"
    networks:
      - mynetwork
    volumes:
      - es02-data:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
volumes:
  # the named volumes used above must be declared at the top level
  es01-data:
  es02-data:
networks:
  mynetwork:
    name: mynetwork
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.29.0/24

Elasticsearch connection error: Elasticsearch 6 + Kibana + Docker Compose

Below is my docker-compose.yml.
After executing it, it shows:
kibana | {"type":"log","@timestamp":"2018-04-24T18:27:43Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://localhost:9200/"}
kibana | {"type":"log","@timestamp":"2018-04-24T18:27:43Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
I believe the problem might be related to the Platinum release of Elasticsearch I am running, and to the fact that I might not be setting the right parameters to use it with X-Pack.
Am I forgetting to set up anything to make Platinum work?
I even tried going to Kitematic and manually linking the Kibana container to the Elasticsearch container, but the same problem continues.
Nothing that I tried worked. How can I fix this?
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4
    container_name: elasticsearch
    environment:
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=MagicWord
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4
    container_name: elasticsearch2
    environment:
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=MagicWord
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL="http://localhost:9200"
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=MagicWord
      - "xpack.monitoring.ui.container.elasticsearch.enabled=true"
    ports:
      - 5601:5601
    networks:
      - esnet
    depends_on:
      - elasticsearch
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
Can anyone help?

How to change default elasticsearch password in docker-compose?

Elasticsearch's official docker image documentation provides this docker-compose.yml example:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: elasticsearch1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
However, it doesn't explain how to customize the password. It does direct us to an X-Pack documentation page, but I refuse to believe I have to go through all that trouble just to change a password. Is there a simpler, canonical way of configuring a custom password for Elasticsearch in a Docker Compose file?
Starting from 6.0, the Elasticsearch Docker images let you configure the password with the ELASTIC_PASSWORD environment variable.
For example:
docker run -e ELASTIC_PASSWORD=MagicWord docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.3
See:
https://www.elastic.co/guide/en/elasticsearch/reference/6.1/docker.html
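In a Compose file, that translates to something like this (a sketch based on the docker run example above; the image tag and password are just the ones from that example):

  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.3
    environment:
      - ELASTIC_PASSWORD=MagicWord   # sets the bootstrap password for the built-in elastic user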
