Kibana can't connect to Elasticsearch using Let's Encrypt signed certs
I am trying to run a 3-node Elasticsearch cluster with Kibana using Let's Encrypt certs. I copy-pasted the standard docker-compose.yml and environment file from the official Elasticsearch documentation here, and successfully got it working with self-signed certs without error. When I swap out the self-signed certs for Let's Encrypt signed certs, the Elasticsearch cluster works but Kibana stops working with this error:
kibana_1 | [2022-09-12T18:52:55.669+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. unable to get issuer certificate
The Let's Encrypt signed certs are all properly mounted into the container with the same owner/group and permissions as the self-signed certs, for example:
-rw-r----- 1 root root 991 Sep 12 14:03 bundle.zip
drwxr-x--- 2 root root 4096 Sep 12 13:38 ca
-rw-r----- 1 root root 2512 Sep 12 13:38 ca.zip
-rw-r----- 1 root root 1899 Sep 12 14:03 cert.pem
-rw-r----- 1 root root 7610 Sep 12 13:38 certs.zip
-rw-r----- 1 root root 3749 Sep 12 14:03 chain.pem
drwxr-x--- 2 root root 4096 Sep 12 13:38 es01
drwxr-x--- 2 root root 4096 Sep 12 13:38 es02
drwxr-x--- 2 root root 4096 Sep 12 13:38 es03
-rw-r----- 1 root root 5648 Sep 12 14:03 fullchain.pem
-rw-r----- 1 root root 272 Sep 12 13:38 instances.yml
-rw-r----- 1 root root 1826 Sep 12 14:03 intermediary.pem
-rw-r----- 1 root root 1704 Sep 12 14:03 privkey.pem
-rw-r----- 1 root root 1923 Sep 12 14:03 root.pem
I achieved this by running these commands, as suggested in the official docs:
sudo find certs/ -type f -exec chmod 640 "{}" \;
sudo find certs -type d -exec chmod 750 "{}" \;
The intermediary.pem and root.pem certs were split out of the fullchain.pem cert and tried as part of the CA cert bundle, as suggested in this other SO question. I have tried many different combinations of these certs in both the Elasticsearch and Kibana configs, and although more than one combination works for the Elasticsearch nodes, none of them work with Kibana.
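For reference, this is roughly how the bundle can be split and identified (a sketch; the chunk-N.pem names are arbitrary):
# split fullchain.pem into one file per certificate
awk '/-----BEGIN CERTIFICATE-----/{n++} {print > ("chunk-" n ".pem")}' fullchain.pem
# print each certificate's subject and issuer to see which is which
for f in chunk-*.pem; do echo "== $f"; openssl x509 -in "$f" -noout -subject -issuer; done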
This is the final attempt I made, using privkey.pem as the key, fullchain.pem as the cert, and chain.pem as the CA, as suggested here in the Elasticsearch docs. The file below omits the "setup" container that you will see in the official docs.
docker-compose.yml
version: "2.2"
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- esdata01:/usr/share/elasticsearch/data
- ./certs:/usr/share/elasticsearch/config/certs
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es02,es03
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.http.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.http.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.transport.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.transport.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert /usr/share/elasticsearch/config/certs/chain.pem https://dev.mysite.com:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
es02:
depends_on:
- es01
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- esdata02:/usr/share/elasticsearch/data
- ./certs:/usr/share/elasticsearch/config/certs
environment:
- node.name=es02
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es03
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.http.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.http.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.transport.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.transport.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert /usr/share/elasticsearch/config/certs/chain.pem https://dev.mysite.com:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
es03:
depends_on:
- es02
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- esdata03:/usr/share/elasticsearch/data
- ./certs:/usr/share/elasticsearch/config/certs
environment:
- node.name=es03
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es02
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.http.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.http.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.transport.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.transport.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert /usr/share/elasticsearch/config/certs/chain.pem https://dev.mysite.com:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
kibana:
depends_on:
es01:
condition: service_healthy
es02:
condition: service_healthy
es03:
condition: service_healthy
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
volumes:
- kibanadata:/usr/share/kibana/data
- ./certs:/usr/share/kibana/config/certs
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVER_HOST=0.0.0.0
- SERVERNAME=dev.mysite.com
- ELASTICSEARCH_HOSTS=https://dev.mysite.com:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=/usr/share/kibana/config/certs/chain.pem
mem_limit: ${MEM_LIMIT}
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://dev.mysite.com:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
volumes:
esdata01:
driver: local
esdata02:
driver: local
esdata03:
driver: local
kibanadata:
driver: local
.env
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=asdf1234
# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=asdf1234
# Version of Elastic products
STACK_VERSION=8.4.1
# Set the cluster name
CLUSTER_NAME=docker-cluster
# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial
# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200
# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80
# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824
# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
I have verified that Elasticsearch is working, both by the lack of errors in the container output and by running:
curl -u elastic:asdf1234 https://dev.mysite.com:9200/_cluster/health
which gives me the output
{"cluster_name":"docker-cluster","status":"green","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":11,"active_shards":22,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}
Unlike Elasticsearch, for some reason Kibana cannot use any of the certs that Let's Encrypt issues you as the CA. You must use the public root CA cert of Let's Encrypt itself, isrgrootx1.pem, which can be downloaded from letsencrypt.org/certs/isrgrootx1.pem. Unfortunately none of this is clear in any documentation, and after days of fruitless searching I stumbled upon it in another SO question!
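For example, to fetch that root and confirm the leaf certificate chains up to it (a sketch; paths assume the certs/ directory shown earlier):
# download Let's Encrypt's self-signed root CA
curl -sSo certs/isrgrootx1.pem https://letsencrypt.org/certs/isrgrootx1.pem
# verify cert.pem against it, using chain.pem as the untrusted intermediates
openssl verify -CAfile certs/isrgrootx1.pem -untrusted certs/chain.pem certs/cert.pem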
Once you have that cert, you can bind-mount it into the container and update the config to look like this:
version: "2.2"
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- esdata01:/usr/share/elasticsearch/data
- ./certs:/usr/share/elasticsearch/config/certs
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es02,es03
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.http.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.http.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.transport.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.transport.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert /usr/share/elasticsearch/config/certs/chain.pem https://dev.mysite.com:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
es02:
depends_on:
- es01
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- esdata02:/usr/share/elasticsearch/data
- ./certs:/usr/share/elasticsearch/config/certs
environment:
- node.name=es02
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es03
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.http.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.http.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.transport.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.transport.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert /usr/share/elasticsearch/config/certs/chain.pem https://dev.mysite.com:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
es03:
depends_on:
- es02
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- esdata03:/usr/share/elasticsearch/data
- ./certs:/usr/share/elasticsearch/config/certs
environment:
- node.name=es03
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es02
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.http.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.http.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=/usr/share/elasticsearch/config/certs/privkey.pem
- xpack.security.transport.ssl.certificate=/usr/share/elasticsearch/config/certs/fullchain.pem
- xpack.security.transport.ssl.certificate_authorities=/usr/share/elasticsearch/config/certs/chain.pem
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert /usr/share/elasticsearch/config/certs/chain.pem https://dev.mysite.com:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
kibana:
depends_on:
es01:
condition: service_healthy
es02:
condition: service_healthy
es03:
condition: service_healthy
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
volumes:
- kibanadata:/usr/share/kibana/data
- ./certs:/usr/share/kibana/config/certs
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVER_HOST=0.0.0.0
- SERVERNAME=dev.mysite.com
- ELASTICSEARCH_HOSTS=https://dev.mysite.com:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=/usr/share/kibana/config/certs/isrgrootx1.pem
- SERVER_SSL_ENABLED="true"
- SERVER_SSL_KEY=/usr/share/kibana/config/certs/privkey.pem
- SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/certs/fullchain.pem
- SERVER_SSL_CERTIFICATEAUTHORITIES=/usr/share/kibana/config/certs/chain.pem
mem_limit: ${MEM_LIMIT}
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://dev.mysite.com:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
volumes:
esdata01:
driver: local
esdata02:
driver: local
esdata03:
driver: local
kibanadata:
driver: local
Now everything should work fine!
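As a final sanity check (a hedged example; any client that trusts the ISRG Root X1 certificate, which most modern systems do, should need no -k flag):
# Kibana now serves the Let's Encrypt cert over TLS
curl -I https://dev.mysite.com:5601
# Elasticsearch still answers with basic auth
curl -u elastic:asdf1234 https://dev.mysite.com:9200/_cluster/health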
Related
I'm trying to replicate https://www.elastic.co/guide/en/elasticsearch/reference/7.x/configuring-tls-docker.html
The example shows how to turn on SSL for an ES cluster with Docker, but it runs all the instances on one machine.
I am running the Docker containers on multiple hosts and am having trouble sharing the volume for the certificates.
The relevant parts are below.
create-certs.yml (creates the certificate files and saves them in the certs volume):
services:
create_certs:
image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
container_name: create_certs
command: >
bash -c '
yum install -y -q -e 0 unzip;
if [[ ! -f /certs/bundle.zip ]]; then
bin/elasticsearch-certutil cert --silent --pem --in config/certificates/instances.yml -out /certs/bundle.zip;
unzip /certs/bundle.zip -d /certs;
fi;
chown -R 1000:0 /certs
'
working_dir: /usr/share/elasticsearch
volumes:
- certs:/certs
- .:/usr/share/elasticsearch/config/certificates
# networks:
# - elastic
volumes:
certs:
driver: local
# networks:
# elastic:
# driver: bridge
docker-compose.yml
version: '2.2'
services:
es0001:
image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
container_name: es0001
environment:
- node.name=es0001
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es0002,es0003
- cluster.initial_master_nodes=es0001,es0002,es0003
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- ELASTIC_PASSWORD=$ELASTIC_PASSWORD
- xpack.license.self_generated.type=trial # <1>
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true # <2>
- xpack.security.http.ssl.key=$CERTS_DIR/es0001/es0001.key
- xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.http.ssl.certificate=$CERTS_DIR/es0001/es0001.crt
- xpack.security.transport.ssl.enabled=true # <3>
- xpack.security.transport.ssl.verification_mode=certificate # <4>
- xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.transport.ssl.certificate=$CERTS_DIR/es0001/es0001.crt
- xpack.security.transport.ssl.key=$CERTS_DIR/es0001/es0001.key
- http.port=9500
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
- certs:$CERTS_DIR
ports:
- 9500:9500
networks:
- elastic
healthcheck:
test: curl --cacert $CERTS_DIR/ca/ca.crt -s https://localhost:9500 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
interval: 30s
timeout: 10s
retries: 5
es0002:
image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
container_name: es0002
environment:
- node.name=es0002
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es0001,es0003
- cluster.initial_master_nodes=es0001,es0002,es0003
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- ELASTIC_PASSWORD=$ELASTIC_PASSWORD
- xpack.license.self_generated.type=trial
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=$CERTS_DIR/es0002/es0002.key
- xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.http.ssl.certificate=$CERTS_DIR/es0002/es0002.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.transport.ssl.certificate=$CERTS_DIR/es0002/es0002.crt
- xpack.security.transport.ssl.key=$CERTS_DIR/es0002/es0002.key
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data02:/usr/share/elasticsearch/data
- certs:$CERTS_DIR
networks:
- elastic
es0003:
image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
container_name: es0003
environment:
- node.name=es0003
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es0001,es0002
- cluster.initial_master_nodes=es0001,es0002,es0003
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- ELASTIC_PASSWORD=$ELASTIC_PASSWORD
- xpack.license.self_generated.type=trial
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=$CERTS_DIR/es0003/es0003.key
- xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.http.ssl.certificate=$CERTS_DIR/es0003/es0003.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.transport.ssl.certificate=$CERTS_DIR/es0003/es0003.crt
- xpack.security.transport.ssl.key=$CERTS_DIR/es0003/es0003.key
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data03:/usr/share/elasticsearch/data
- certs:$CERTS_DIR
networks:
- elastic
kib01:
image: docker.elastic.co/kibana/kibana:${VERSION}
container_name: kib01
depends_on: {"es0001": {"condition": "service_healthy"}}
ports:
- 5601:5601
environment:
SERVERNAME: localhost
ELASTICSEARCH_URL: https://es0001:9500
ELASTICSEARCH_HOSTS: https://es0001:9500
ELASTICSEARCH_USERNAME: kibana
ELASTICSEARCH_PASSWORD: $ELASTIC_PASSWORD
ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES: $CERTS_DIR/ca/ca.crt
SERVER_SSL_ENABLED: "true"
SERVER_SSL_KEY: $CERTS_DIR/kib01/kib01.key
SERVER_SSL_CERTIFICATE: $CERTS_DIR/kib01/kib01.crt
volumes:
- certs:$CERTS_DIR
networks:
- elastic
volumes:
data01:
driver: local
data02:
driver: local
data03:
driver: local
certs:
driver: local
networks:
elastic:
driver: bridge
I get the feeling that driver: local for certs: means the volume exists locally and can't be shared among containers across multiple hosts.
Please correct me if I'm wrong.
The volumes are indeed local (one local volume on each node that has a container mounting that volume).
One option is to create an NFS share that is reachable by all your nodes and declare the volume with type: nfs. This way, each node will still create a local volume, but all the local volumes will read/write to the same location:
volumes:
certs:
driver: local
driver_opts:
type: nfs
o: nfsvers=4,addr=<NFS-ServerIpAddress>,rw
device: ":/directory-on-nfs"
I am trying to run Open Distro for Elasticsearch and its Kibana through docker-compose on a virtual machine in Azure. When I run docker-compose I can access Kibana in the browser at http://myipadress:5601/app/kibana, but I can't reach Elasticsearch.
My docker-compose:
version: '3'
services:
odfe-node1:
image: amazon/opendistro-for-elasticsearch:1.7.0
container_name: odfe-node1
environment:
- cluster.name=odfe-cluster
- node.name=odfe-node1
- discovery.seed_hosts=odfe-node1
- cluster.initial_master_nodes=odfe-node1
- bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
- "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
hard: 65536
volumes:
- odfe-elasticdata:/usr/share/elasticsearch/data
- odfe-elasticconfig:/usr/share/elasticsearch/config
ports:
- 9200:9200
- 9600:9600 # required for Performance Analyzer
networks:
- odfe-net
kibana:
image: amazon/opendistro-for-elasticsearch-kibana:1.7.0
container_name: odfe-kibana
ports:
- 5601:5601
expose:
- "5601"
volumes:
- odfe-kibanaconfig:/usr/share/kibana/config
environment:
ELASTICSEARCH_URL: https://odfe-node1:9200
ELASTICSEARCH_HOSTS: https://odfe-node1:9200
networks:
- odfe-net
volumes:
odfe-elasticdata:
odfe-elasticconfig:
odfe-kibanaconfig:
networks:
odfe-net:
Error messages:
odfe-kibana | {"type":"log","#timestamp":"2020-05-28T18:23:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nGET https://odfe-node1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 172.22.0.3:9200"}
odfe-kibana | {"type":"log","#timestamp":"2020-05-28T18:32:24Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2020-05-28T18:32:24Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2020-05-28T18:32:24Z","tags":["error","elasticsearch-service"],"pid":1,"message":"Unable to retrieve version information from Elasticsearch nodes."}
If I run docker ps and a few curl tests, I get the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
41ded49c03e5 amazon/opendistro-for-elasticsearch:1.7.0 "/usr/local/bin/dock…" 48 minutes ago Up 2 seconds 0.0.0.0:9200->9200/tcp, 0.0.0.0:9600->9600/tcp, 9300/tcp odfe-node1
84bed086ab5c amazon/opendistro-for-elasticsearch-kibana:1.7.0 "/usr/local/bin/kiba…" 48 minutes ago Up 2 seconds 0.0.0.0:5601->5601/tcp odfe-kibana
-------------------------------
[root@ServerEFK _data]# curl -XGET https://localhost:9200 -u admin:admin --insecure
{
"name" : "odfe-node1",
"cluster_name" : "odfe-cluster",
"cluster_uuid" : "Ax2q2FrEQgCQHKZoDT7C0Q",
"version" : {
"number" : "7.6.1",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
"build_date" : "2020-02-29T00:15:25.529771Z",
"build_snapshot" : false,
"lucene_version" : "8.4.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
--------------------------------------
[root@ServerEFK _data]# curl -XGET https://localhost:9200/_cat/nodes?v -u admin:admin --insecure
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.22.0.3 22 72 4 0.16 0.81 0.86 dim * odfe-node1
--------------------------------------
[root@ServerEFK _data]# curl -XGET https://localhost:9200/_cat/plugins?v -u admin:admin --insecure
name component version
odfe-node1 opendistro-anomaly-detection 1.7.0.0
odfe-node1 opendistro-job-scheduler 1.7.0.0
odfe-node1 opendistro-knn 1.7.0.0
odfe-node1 opendistro_alerting 1.7.0.0
odfe-node1 opendistro_index_management 1.7.0.0
odfe-node1 opendistro_performance_analyzer 1.7.0.0
odfe-node1 opendistro_security 1.7.0.0
odfe-node1 opendistro_sql 1.7.0.0
---------------------------------------
[root@ServerEFK _data]# curl -XGET https://localhost:9200/_cat/indices?pretty -u admin:admin --insecure
yellow open security-auditlog-2020.05.28 6xPW0yPyRGKG1owKbBl-Gw 1 1 18 0 144.6kb 144.6kb
green open .kibana_92668751_admin_1 mgAiKHNKQJ-sgFDXw7Iwyw 1 0 1 0 3.7kb 3.7kb
green open .kibana_92668751_admin_2 VvRiV16jRlualCWJvyYFTA 1 0 1 0 3.7kb 3.7kb
green open .opendistro_security NHxbWWv0RJu8kScOtsejTw 1 0 7 0 36.3kb 36.3kb
green open .kibana_1 s2DBw7Y_SUS9Go-u5qOrjg 1 0 1 0 4.1kb 4.1kb
green open .tasks 0kVxFOcqQzOxyAYTGUIWDw 1 0 1 0 6.3kb 6.3kb
Can anyone help, please?
OK, I was able to get a single-node Elasticsearch & Kibana setup working with this docker-compose.yml:
version: '3'
services:
odfe-node1:
image: amazon/opendistro-for-elasticsearch:1.8.0
container_name: odfe-node1
environment:
- cluster.name=odfe-cluster
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
hard: 65536
volumes:
- odfe-data1:/usr/share/elasticsearch/data
ports:
- 9200:9200
- 9600:9600 # required for Performance Analyzer
networks:
- odfe-net
kibana:
image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
container_name: odfe-kibana
ports:
- 5601:5601
expose:
- "5601"
environment:
ELASTICSEARCH_URL: https://odfe-node1:9200
ELASTICSEARCH_HOSTS: https://odfe-node1:9200
volumes:
- ./kibana.yml:/usr/share/kibana/config/kibana.yml
networks:
- odfe-net
volumes:
odfe-data1:
networks:
odfe-net:
I started with this YAML file and changed the Elasticsearch environment variables to:
- cluster.name=odfe-cluster
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
I also overrode the kibana.yml file:
volumes:
- ./kibana.yml:/usr/share/kibana/config/kibana.yml
with this:
server.name: kibana
server.host: "0"
elasticsearch.hosts: https://odfe-node1:9200
elasticsearch.ssl.verificationMode: none
elasticsearch.username: admin
elasticsearch.password: admin
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
newsfeed.enabled: false
telemetry.optIn: false
telemetry.enabled: false
I extracted the default kibana.yml and changed:
elasticsearch.hosts: https://odfe-node1:9200
elasticsearch.username: admin
elasticsearch.password: admin
But the two-node example in the documentation still doesn't work for me.
Hope that helps
I am following the official ES documentation here for configuring a basic dev 3-node cluster with TLS using Docker Compose, but am stuck at the 5th step: creating user passwords with the elasticsearch-setup-passwords tool.
So far I have been able to get a 3-node cluster working without TLS. I have also torn it down and restarted it with certs created and TLS enabled, as the docs describe, and the output of the various containers looks fine. But any attempt to run
docker exec es01 /bin/bash -c "bin/elasticsearch-setup-passwords auto --batch -Expack.security.http.ssl.certificate=certificates/es01/es01.crt -Expack.security.http.ssl.certificate_authorities=certificates/ca/ca.crt -Expack.security.http.ssl.key=certificates/es01/es01.key --url https://es01:9200"
as mentioned in the docs always returns
Sets the passwords for reserved users
Non-option arguments:
command
Option Description
------ -----------
-E <KeyValuePair> Configure a setting
-h, --help Show help
-s, --silent Show minimal output
-v, --verbose Show verbose output
ERROR: setting [xpack.security.http.ssl.certificate_authorities] already set, saw [certificates/ca/ca.crt] and [/usr/share/elasticsearch/config/certificates/ca/ca.crt]
Each of the -E settings in the command above throws this "already set" error, yet the command is listed verbatim in the official documentation. If I instead run just
docker exec es01 /bin/bash -c "bin/elasticsearch-setup-passwords auto --batch --url https://es01:9200"
It will generate the passwords as expected.
If I do not specify the xpack security settings, will the proper certificates be used? There are other default certificates in the containers that I do not want used for this; is there a way I can verify that the correct set was used? Is there a way to override the "already set" settings?
The official documentation does not explain this clearly and I have not been able to find anything for this specifically on SO or the web.
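For what it's worth, one way to check which certificate a node actually serves (a sketch, assuming the 9200 port mapping from the compose file below) is to compare TLS fingerprints:
# what es01 actually presents on its HTTP port (run on the docker host)
openssl s_client -connect localhost:9200 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256
# what it is configured to present (path taken from the error message above)
docker exec es01 openssl x509 -noout -fingerprint -sha256 -in /usr/share/elasticsearch/config/certificates/es01/es01.crt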
I am using Docker Desktop version 2.2.0.5 on macOS Catalina 10.15.4, with Docker Compose version 1.25.4 and ES version 7.7.0.
My docker compose file looks like:
version: '2.2'
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
container_name: es01
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.license.self_generated.type=basic
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=$CERTS_DIR/es01/es01.key
- xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.http.ssl.certificate=$CERTS_DIR/es01/es01.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.transport.ssl.certificate=$CERTS_DIR/es01/es01.crt
- xpack.security.transport.ssl.key=$CERTS_DIR/es01/es01.key
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
- certs:$CERTS_DIR
ports:
- 9200:9200
networks:
- elastic
healthcheck:
test: curl --cacert $CERTS_DIR/ca/ca.crt -s https://localhost:9200 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
interval: 30s
timeout: 10s
retries: 5
es02:
image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
container_name: es02
environment:
- node.name=es02
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es01,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.license.self_generated.type=basic
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=$CERTS_DIR/es02/es02.key
- xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.http.ssl.certificate=$CERTS_DIR/es02/es02.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.transport.ssl.certificate=$CERTS_DIR/es02/es02.crt
- xpack.security.transport.ssl.key=$CERTS_DIR/es02/es02.key
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data02:/usr/share/elasticsearch/data
- certs:$CERTS_DIR
networks:
- elastic
es03:
image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
container_name: es03
environment:
- node.name=es03
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es01,es02
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.license.self_generated.type=basic
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=$CERTS_DIR/es03/es03.key
- xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.http.ssl.certificate=$CERTS_DIR/es02/es02.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
- xpack.security.transport.ssl.certificate=$CERTS_DIR/es03/es03.crt
- xpack.security.transport.ssl.key=$CERTS_DIR/es03/es03.key
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data03:/usr/share/elasticsearch/data
- certs:$CERTS_DIR
networks:
- elastic
kib01:
image: docker.elastic.co/kibana/kibana:${VERSION}
container_name: kib01
depends_on: {"es01": {"condition": "service_healthy"}}
ports:
- 5601:5601
environment:
SERVERNAME: localhost
ELASTICSEARCH_URL: https://es01:9200
ELASTICSEARCH_HOSTS: https://es01:9200
ELASTICSEARCH_USERNAME: kibana
ELASTICSEARCH_PASSWORD: CHANGEME
ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES: $CERTS_DIR/ca/ca.crt
SERVER_SSL_ENABLED: "true"
SERVER_SSL_KEY: $CERTS_DIR/kib01/kib01.key
SERVER_SSL_CERTIFICATE: $CERTS_DIR/kib01/kib01.crt
volumes:
- certs:$CERTS_DIR
networks:
- elastic
volumes:
data01:
driver: local
name: data01
data02:
driver: local
name: data02
data03:
driver: local
name: data03
certs:
driver: local
name: certs
networks:
elastic:
driver: bridge
name: elastic
There is nothing wrong with your method; just run the commands below in the given sequence:
docker exec es01 /bin/bash -c "bin/elasticsearch-setup-passwords \
auto --batch \
--url https://localhost:9200"
docker-compose down
docker-compose -f elastic-docker-tls.yml up -d
Restart your browser and wait for it to reload.
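To confirm the generated passwords took effect, a hedged check (substitute the password the tool printed; the CA path matches the one from the error message above):
docker exec es01 curl -s --cacert /usr/share/elasticsearch/config/certificates/ca/ca.crt \
  -u elastic:<password-from-tool> https://localhost:9200/_security/_authenticate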
I want to run Elasticsearch on Docker and connect Rails to it.
This is my docker-compose.yml file:
version: '3'
services:
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: P#ssw0rd
MYSQL_DATABASE: chatsystem
MYSQL_USER: root
MYSQL_PASSWORD: P#ssw0rd
ports:
- "3307:3306"
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
container_name: es01
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
es02:
image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
container_name: es02
environment:
- node.name=es02
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es01,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data02:/usr/share/elasticsearch/data
networks:
- elastic
es03:
image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
container_name: es03
environment:
- node.name=es03
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es01,es02
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data03:/usr/share/elasticsearch/data
networks:
- elastic
app:
build: .
command: bundle exec rails s -p 3000 -b '0.0.0.0'
volumes:
- ".:/ChatSystem"
depends_on:
- db
- es01
- es02
- es03
ports:
- "3000:3000"
links:
- db
- es01
- es02
- es03
environment:
DB_USER: root
DB_NAME: chatsystem
DB_PASSWORD: P#ssw0rd
DB_HOST: db
volumes:
data01:
driver: local
data02:
driver: local
data03:
driver: local
networks:
elastic:
driver: bridge
When I run docker-compose up and access localhost:9200 in the browser, I get this response:
{
"name": "es01",
"cluster_name": "es-docker-cluster",
"cluster_uuid": "fNDAvsI6QUyHkzy919PHhw",
"version": {
"number": "7.4.2",
"build_flavor": "default",
"build_type": "docker",
"build_hash": "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
"build_date": "2019-10-28T20:40:44.881551Z",
"build_snapshot": false,
"lucene_version": "8.2.0",
"minimum_wire_compatibility_version": "6.8.0",
"minimum_index_compatibility_version": "6.0.0-beta1"
},
"tagline": "You Know, for Search"
}
When I try to create a new "message" I get this error:
Errno::EADDRNOTAVAIL (Failed to open TCP connection to localhost:9200 (Cannot assign requested address - connect(2) for "localhost" port 9200))
message.rb file
class Message < ApplicationRecord
include Elasticsearch::Model
include Elasticsearch::Model::Callbacks
settings do
mappings dynamic: false do
indexes :message, type: :text
end
end
end
Message.import force: true
I'm using these gems:
gem 'elasticsearch-model', git: 'git://github.com/elastic/elasticsearch-rails.git', branch: 'master'
gem 'elasticsearch-rails', git: 'git://github.com/elastic/elasticsearch-rails.git', branch: 'master'
In your environment you must add ELASTICSEARCH_URL pointing at the Elasticsearch service, e.g. ELASTICSEARCH_URL=http://es01:9200 (the hostname is the compose service name), and then rebuild your Docker Compose stack. You can take this article as a reference.
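Concretely, that means something like this in the app service (a sketch based on the compose file above; note it also joins the app to the elastic network, an assumption made because the es nodes are only reachable there):
app:
  # ...build, command, volumes, depends_on as above...
  networks:
    - elastic             # assumption: app must share a network with es01
  environment:
    DB_USER: root
    DB_NAME: chatsystem
    DB_PASSWORD: P#ssw0rd
    DB_HOST: db
    ELASTICSEARCH_URL: http://es01:9200   # host = compose service name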
I have a docker-compose.yml file which declares a webapp, a Postgres database, a two-node Elasticsearch cluster, and a Kibana container.
version: '3'
services:
webapp:
build:
context: ../../../
dockerfile: config/docker/dev/Dockerfile-dev
container_name: MyWebApp-dev
image: 'localhost:443/123'
ports:
- "4000:4000"
- "3000:3000"
depends_on:
- db
- elasticsearch
- kibana
links:
- db
- elasticsearch
- kibana
db:
image: postgres:10
container_name: db
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
- POSTGRES_DB=mine_dev
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- discovery.seed_hosts=es02
- cluster.initial_master_nodes=elasticsearch,es02
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- esdata01:/usr/share/elasticsearch/data
ports:
- 9200:9200
- 9300:9300
networks:
- esnet
es02:
image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
environment:
- node.name=es02
- discovery.seed_hosts=elasticsearch
- cluster.initial_master_nodes=elasticsearch,es02
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- esdata02:/usr/share/elasticsearch/data
networks:
- esnet
kibana:
image: docker.elastic.co/kibana/kibana:7.0.1
ports:
- "5601:5601"
container_name: kibana
volumes:
esdata01:
driver: local
esdata02:
driver: local
networks:
esnet:
They all build successfully, but Kibana cannot get a live connection to Elasticsearch:
kibana | {"type":"log","#timestamp":"2019-05-08T23:36:13Z","tags":["status","plugin:searchprofiler#7.0.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
kibana | {"type":"log","#timestamp":"2019-05-09T00:02:46Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
and the index "products" cannot be created with elixir/ecto mix
MyWebApp-dev | (elixir) lib/calendar/datetime.ex:537: DateTime.to_unix/2
MyWebApp-dev | (elasticsearch) lib/elasticsearch/indexing/index.ex:287: Elasticsearch.Index.build_name/1
MyWebApp-dev | (elasticsearch) lib/elasticsearch/indexing/index.ex:31: Elasticsearch.Index.hot_swap/2
MyWebApp-dev | (elasticsearch) lib/mix/elasticsearch.build.ex:86: Mix.Tasks.Elasticsearch.Build.build/3
MyWebApp-dev |
MyWebApp-dev | ** (Mix) Index products could not be created.
MyWebApp-dev |
MyWebApp-dev | %HTTPoison.Error{id: nil, reason: :econnrefused}
All the while, I can connect to the elasticsearch server:
A68MD-PRO:~# curl http://localhost:9200/_cat/health
1557359160 23:46:00 docker-cluster green 2 2 2 1 0 0 0 0 - 100.0%
Even from inside the container, curl yields:
A68MD-PRO:~# docker exec elasticsearch curl http://elasticsearch:9200/_cat/health
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 66 100 66 0 0 6969 0 --:--:-- --:--:-- --:--:-- 7333
1557373042 03:37:22 docker-cluster green 2 2 2 1 0 0 0 0 - 100.0%
Does anyone know what this problem is about and how to solve it?
Update: If I do
docker exec -it MyWebApp-dev curl -XPUT 'http://elasticsearch:9200/something/example/1' -d ' { "type": "example", "quantity": 2 }' -H'Content-Type: application/json'
it works perfectly well. So it must have something to do with httpoison, I think.
The Elasticsearch containers are on a different Docker network than the Kibana container.
Please verify this network configuration:
networks:
- esnet
Remove it from the Elasticsearch nodes, or apply the very same network config to Kibana.
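For the second option, the Kibana service would look something like this (a sketch; only the networks and environment lines are added relative to the original):
kibana:
  image: docker.elastic.co/kibana/kibana:7.0.1
  container_name: kibana
  ports:
    - "5601:5601"
  environment:
    ELASTICSEARCH_HOSTS: http://elasticsearch:9200   # first node's service name
  networks:
    - esnet   # same network as the elasticsearch nodes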