Elasticsearch Cluster deployed with docker, ansible, and authentication - docker

This is going to be an odd post given how basic it is, but I'm stuck. I have turned this docker-compose file into an Ansible job that spins up the cluster:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
The problem is that when I add security to the cluster, it returns a master_not_discovered_exception. All I'm adding to es01 is xpack.security.enabled: true, ELASTIC_PASSWORD: "password",
and xpack.security.transport.ssl.enabled: true.
Any idea where to go from here?

Adding
xpack.security.transport.ssl.enabled: true
requires you to first generate certificates and add them to the services so that inter-node communication can be encrypted. There are several steps to go through (a rough sketch follows below):
Get certificates, or at the very least generate your own SSL certificates
Distribute the certificates to all nodes via volumes
Configure the certificate paths and passwords via the keystore
Please have a look at the general documentation on encrypting communications and then the Docker-specific steps you need to follow.
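As a rough sketch only (the ./certs host directory, the file names, and the empty keystore passwords are assumptions, not taken from the question), the certificates could be generated once with elasticsearch-certutil and then mounted into every node:
# run once on the Docker host to create a CA and a shared node certificate (PKCS#12)
docker run --rm -v "$(pwd)/certs:/certs" docker.elastic.co/elasticsearch/elasticsearch:7.9.2 \
  bash -c 'bin/elasticsearch-certutil ca --out /certs/elastic-stack-ca.p12 --pass "" && \
           bin/elasticsearch-certutil cert --ca /certs/elastic-stack-ca.p12 --ca-pass "" \
             --out /certs/elastic-certificates.p12 --pass ""'
Each of es01, es02 and es03 would then mount that file and get the same security settings added, roughly:
    volumes:
      - ./certs/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    environment:
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
Note that these settings have to go on all three nodes, not only es01; with security enabled on just one node, the other nodes reject its transport connections, which would explain the master_not_discovered_exception.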
In case you don't want to encrypt the inter-node communication but rather the HTTP endpoint (REST API), that's what
xpack.security.http.ssl.enabled: true
is made for. The procedure to get this working is very similar to the previous one and is covered by the docs mentioned above.
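If it is only the HTTP layer you want, a minimal sketch of the extra environment entries (the http.p12 keystore name is an assumption; it would be generated and mounted the same way as above) is:
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.keystore.path=http.p12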

Related

Start up elastic search on multiple hosts using docker

Following this guide, Install ES using Docker, I was able to start up an ES cluster with three instances. Unfortunately, these containers are all running on the same host. How could I distribute the cluster instances to different hosts (with different IP addresses) and also enable communication between these ES instances? Any suggestions? Thanks in advance!
Here is the docker-compose.yaml file.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
distribute cluster instances to different hosts (with different IP addresses) and also enable communication between these ES instances?
This is exactly the use case for Kubernetes or Docker Swarm (if you must have containers).
https://www.elastic.co/blog/introducing-elastic-cloud-on-kubernetes-the-elasticsearch-operator-and-beyond
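If Kubernetes is an option, a minimal sketch of an ECK Elasticsearch resource (the quickstart name and the three-node count are illustrative, and the operator has to be installed first) looks roughly like this; the operator then schedules the nodes across your hosts:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.15.0
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false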

volumes value ' do not match any of the regexes: u'^[a-zA-Z0-9._-]+$' - Error in elasticsearch docker-compose

Trying to start an Elasticsearch cluster using the docker-compose file given here.
I am not sure what the issue is. I am running this docker-compose.yaml file on Ubuntu 18.04 LTS. I tried to find this error online, but with no luck.
docker-compose.yaml
##############################################################################################
# LINKS - https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
##############################################################################################
version: "2.2"
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
container_name: es01_dev
environment:
- node.name=es01
- cluster.name="es-docker-cluster"
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /data/var/lib/elasticsearch-server_01:/usr/share/elasticsearch/data
# - /data/var/elasticsearch-server-backup:/var/elasticsearch-backup
# - /opt/elasticsearch-server/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
ports:
- 9200:9200
- 9200:9200
networks:
- elastic
es02:
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
container_name: es02_dev
environment:
- node.name=es02
- cluster.name="es-docker-cluster"
- discovery.seed_hosts=es01,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /data/var/lib/elasticsearch-server_02:/usr/share/elasticsearch/data
# - /data/var/elasticsearch-server-backup:/var/elasticsearch-backup
# - /opt/elasticsearch-server/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
networks:
- elastic
es03:
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
container_name: es03
environment:
- node.name=es03
- cluster.name="es-docker-cluster"
- discovery.seed_hosts=es01,es02
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock:=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /data/var/lib/elasticsearch-server_03:/usr/share/elasticsearch/data
# - /data/var/elasticsearch-server-backup:/var/elasticsearch-backup
# - /opt/elasticsearch-server/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
networks:
- elastic
volumes:
/data/var/lib/elasticsearch-server_01:
driver: local
/data/var/lib/elasticsearch-server_02:
driver: local
/data/var/lib/elasticsearch-server_03:
driver: local
networks:
elastic:
driver: bridge
But docker-compose up gives me this error:
visionary#instance-2:/opt/elasticsearch-server$ docker-compose up
ERROR: The Compose file './docker-compose.yml' is invalid because:
volumes value '/data/var/lib/elasticsearch-server_01', '/data/var/lib/elasticsearch-server_02', '/data/var/lib/elasticsearch-server_03' do not match any of the regexes: u'^[a-zA-Z0-9._-]+$'
services.es01.ports value ['9200:9200', '9200:9200'] has non-unique elements
Not sure what the issue is here. Any help is appreciated.
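Going only by the two validation messages, a hedged sketch of a fix (not tested against this exact file): the duplicated 9200:9200 entry under es01 has to be removed, and because the services use host-path bind mounts, the top-level volumes: section can simply be dropped, since top-level volume names must match ^[a-zA-Z0-9._-]+$ and absolute paths never will:
  es01:
    # ...
    ports:
      - 9200:9200          # mapped once only
    volumes:
      - /data/var/lib/elasticsearch-server_01:/usr/share/elasticsearch/data  # bind mount, no top-level declaration needed
# ... es02 / es03 unchanged apart from their own bind mounts ...
# no top-level volumes: section is needed for bind mounts
networks:
  elastic:
    driver: bridge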

Elasticsearch cluster with docker / docker-compose on multiple hosts

I'd like to run a Docker cluster on multiple hosts using docker / docker-compose.
I could define three containers es1, es2, es3 and then run each container on its own host, I guess
(I'm not sure how I'd make them discover each other).
Of course, it'll be a huge pain to start/restart the cluster by sshing onto 3 machines.
Is it possible to manage multi-host Docker somehow?
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-compose-file gives an example, but I believe it runs on a single host:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
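Purely as a hedged sketch of the manual alternative (the 10.0.0.x addresses are placeholders, not from the question): if each node runs via its own compose file on its own host, every node needs to publish the host's routable address and seed discovery with the other hosts, and port 9300 has to be reachable between the machines:
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - network.publish_host=10.0.0.1              # this host's IP
      - discovery.seed_hosts=10.0.0.2,10.0.0.3     # the other hosts
      - cluster.initial_master_nodes=es01,es02,es03
    ports:
      - 9200:9200
      - 9300:9300                                  # transport port must be open between hosts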

Dockerized Elasticsearch nodes unavailable for Liferay 7.1

Liferay is not able to recognize my Elasticsearch cluster when starting. Here is my docker-compose configuration:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=liferay-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - "9299:9200"
      - "9399:9300"
    expose:
      - "9299"
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=liferay-cluster2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9298:9200"
      - "9398:9300"
    expose:
      - "9298"
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
Content of the com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config file:
transportAddresses="127.0.0.1:9299"
logExceptionsOnly="false"
operationMode="REMOTE"
indexNamePrefix="myprefix-"
clusterName="liferay-cluster"
When starting docker-compose, I'm able to access my two ES clusters at http://127.0.0.1:9299/ and http://127.0.0.1:9298/.
However, when Liferay starts, it is unable to access the ES nodes:
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{vUNCF_HNRtu_tYUjkqhXvg}{127.0.0.1}{127.0.0.1:9299}]]
Has anyone tried this configuration? Any help would be appreciated. Thanks :-)
I've found a solution. It could help if someone is trying to do the same.
As I said in my comment to @ibexit, I'm running two dockerized ES clusters and two separate Liferay portals (not in containers) on the same machine (development mode).
I changed the transport address in the Liferay OSGi config file, since it must match the transport TCP port where ES is running:
transportAddresses="127.0.0.1:9301"
logExceptionsOnly="false"
operationMode="REMOTE"
indexNamePrefix="myprefix-"
clusterName="liferay-cluster"
I also added the property network.publish_host=127.0.0.1 to my ES clusters (without this property, Liferay was not able to detect the ES nodes).
Here is my docker-compose.yml:
Using ES 6.1.4
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=liferay-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - transport.tcp.port=9301
      - network.publish_host=127.0.0.1
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - "9201:9200"
      - "9301:9301"
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=liferay-cluster2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - transport.tcp.port=9302
      - network.publish_host=127.0.0.1
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9202:9200"
      - "9302:9302"
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
network.publish_host did the trick!
By default, Elasticsearch binds the transport and HTTP ports to localhost (local) only, so the ports exposed by Docker are not reachable. You need to bind to a specific IP, to 0.0.0.0 for all interfaces, or to _site_, as explained here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#network-interface-values
Please keep in mind that enabling this will start the node in production mode, which triggers several bootstrap checks. Please see the docs if you need more information on these topics, or search SO.
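A minimal sketch of the corresponding setting, added to each node's environment (assuming you really do want to listen on all interfaces):
    environment:
      - network.host=0.0.0.0   # bind HTTP and transport to all interfaces; this switches on the production bootstrap checks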
Here is my working local setup with Docker Compose, two Elasticsearch nodes in Docker containers, and Liferay running on the host. I am using Elasticsearch 6.8.2 images modified according to the Liferay docs, available from Docker Hub at: https://hub.docker.com/repository/docker/ktorek/liferay7-elasticsearch.
I am using a Gradle workspace, so I've configured configs/local/osgi/configs/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config:
operationMode=REMOTE
clusterName=docker-cluster
transportAddresses=127.0.0.1:9300,127.0.0.1:9301
I wasted a lot of time on the transportAddresses configuration, since it is documented with square brackets and quotes, transportAddresses=["192.168.1.1:9300","192.168.1.2:9300"], but that does not work. The configuration listed above contains the actual working syntax.
My docker-compose.yml:
version: '3.7'
services:
  es01:
    container_name: "es01"
    image: ktorek/liferay7-elasticsearch:latest
    environment:
      - node.name=es01
      - node.data=true
      - cluster.name=docker-cluster
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es02"
    ports:
      - "9300:9300"
      - "9200:9200"
    networks:
      - mynetwork
    volumes:
      - es01-data:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
  es02:
    container_name: "es02"
    image: ktorek/liferay7-elasticsearch:latest
    environment:
      - node.name=es02
      - node.data=true
      - cluster.name=docker-cluster
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es01"
    ports:
      - "9301:9300"
      - "9201:9200"
    networks:
      - mynetwork
    volumes:
      - es02-data:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
networks:
  mynetwork:
    name: mynetwork
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.29.0/24

How to change default elasticsearch password in docker-compose?

Elasticsearch's official docker image documentation provides this docker-compose.yml example:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: elasticsearch1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
However, it doesn't explain how to customize the password. It does direct us to an X-Pack documentation page, but I refuse to believe I have to go through all that trouble just to change a password. Is there any simpler, canonical way of configuring a custom password for Elasticsearch in a Docker Compose file?
Starting from 6.0, the Elasticsearch Docker images support configuring the password via the ELASTIC_PASSWORD environment variable.
For example:
docker run -e ELASTIC_PASSWORD=MagicWord docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.3
See:
https://www.elastic.co/guide/en/elasticsearch/reference/6.1/docker.html
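The same thing expressed in a Compose file would be roughly (the password value is just an example):
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.3
    environment:
      - ELASTIC_PASSWORD=MagicWord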
