I have an Elasticsearch cluster running somewhere, and I thought I'd spin up a Kibana container on my local machine and connect it to the cluster, but it's not working. It looks like Kibana is looking for a local Elasticsearch.
kibana_1 | {"type":"log","@timestamp":"2022-08-31T09:06:05Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2022-08-31T09:06:05Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
This is the docker-compose.yml I'm using:
version: "3"
services:
kibana:
image: kibana:7.0.1
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_URL=https://esinstance.us-east-1.es.amazonaws.com/
- ELASTICSEARCH_USERNAME=admin
- ELASTICSEARCH_PASSWORD=pass123
You need to change the ELASTICSEARCH_URL environment variable to ELASTICSEARCH_HOSTS.
The docker-compose.yml file will then look like this:
version: "3"
services:
kibana:
image: kibana:7.0.1
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_HOSTS='["https://esinstance.us-east-1.es.amazonaws.com"]'
- ELASTICSEARCH_USERNAME=admin
- ELASTICSEARCH_PASSWORD=pass123
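One thing to hedge: in the list form of environment, Compose treats the single quotes as part of the value, so if Kibana rejects the quoted array, note that ELASTICSEARCH_HOSTS also accepts a single plain URL. A minimal alternative sketch with the same endpoint:
environment:
  # single-URL form; no JSON array or quoting needed
  - ELASTICSEARCH_HOSTS=https://esinstance.us-east-1.es.amazonaws.com
  - ELASTICSEARCH_USERNAME=admin
  - ELASTICSEARCH_PASSWORD=pass123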
I ran a docker-compose file to set up Elasticsearch and Kibana on Ubuntu 18.04 LTS. The Kibana container is up and running just fine, but Elasticsearch goes down after about 10 seconds. I have restarted the containers and the Docker service several times and still get the same result. I've been on this all day and am hoping to get some help.
Docker Compose file:
version: "3.0"
services:
elasticsearch:
container_name: es-container
image: docker.elastic.co/elasticsearch/elasticsearch:7.16.3
environment:
- xpack.security.enabled=true
- xpack.security.audit.enabled=true
- "discovery.type=single-node"
- ELASTIC_PASSWORD=secretpassword
networks:
- es-net
ports:
- 9200:9200
kibana:
container_name: kb-container
image: docker.elastic.co/kibana/kibana:7.16.3
environment:
- ELASTICSEARCH_HOSTS=http://es-container:9200
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD=secretpassword
networks:
- es-net
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
es-net:
driver: bridge
I also checked the logs on the es-container, and it displayed:
Created elasticsearch keystore in
/usr/share/elasticsearch/config/elasticsearch.keystore
Audit logging can only be enabled with a paid Elasticsearch subscription, and you don't provide any license information to your container, so the node shuts down.
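If you just want the node to stay up, here is a minimal sketch of the corrected environment block, assuming the default basic license (the commented trial line is an optional alternative if you do want audit logging for the 30-day trial period):
environment:
  - xpack.security.enabled=true
  # removed: xpack.security.audit.enabled=true (needs a paid or trial license)
  # optional alternative: - xpack.license.self_generated.type=trial
  - "discovery.type=single-node"
  - ELASTIC_PASSWORD=secretpassword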
I have the following Docker Compose file. I'm trying to connect Elasticsearch running on another machine to Kibana.
version: '3.3'
services:
  kibana_ci:
    image: docker.elastic.co/kibana/kibana:6.3.2
    environment:
      ELASTICSEARCH_URL: http://my_domain:9200
    container_name: kibana_ci
    command: kibana
    ports:
      - "5601:5601"
But Kibana keeps trying to connect to the http://elasticsearch:9200/ URL. I've also tried the following options, which didn't work:
environment:
  - "ELASTICSEARCH_URL=http://my_domain:9200"

environment:
  - "KIBANA_ELASTICSEARCH_URL=http://my_domain:9200"

environment:
  KIBANA_ELASTICSEARCH_URL: http://my_domain:9200

environment:
  elasticsearch.url: http://my_domain:9200
How can I change the URL in the Docker Compose file (without overriding the kibana.yml file)?
This compose file works for me:
version: '3.3'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    environment:
      SERVER_NAME: kibana.example.org
      ELASTICSEARCH_URL: http://my_domain
You don't need to specify the default port 9200.
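For background, hedged against the Kibana Docker documentation: the official image maps environment variables to kibana.yml settings by taking the setting name, uppercasing it, and replacing dots with underscores. That is why ELASTICSEARCH_URL and SERVER_NAME work, while KIBANA_ELASTICSEARCH_URL and dotted keys like elasticsearch.url in the environment block are never picked up:
environment:
  # becomes the kibana.yml setting elasticsearch.url
  ELASTICSEARCH_URL: http://my_domain:9200
  # becomes the kibana.yml setting server.name
  SERVER_NAME: kibana.example.org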
kibana_1 | {"type":"log","@timestamp":"2018-09-20T16:58:31Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://my_domain:9200/"}
kibana_1 | {"type":"log","@timestamp":"2018-09-20T16:58:31Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2018-09-20T16:58:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://my_domain:9200/"}
kibana_1 | {"type":"log","@timestamp":"2018-09-20T16:58:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
For those facing the same issue in Kibana 7.5: you have to use the ELASTICSEARCH_HOSTS environment variable instead of ELASTICSEARCH_URL, like below:
kibana:
  image: docker.elastic.co/kibana/kibana:7.5.2
  container_name: kibana
  environment:
    ELASTICSEARCH_HOSTS: http://es01:9200
  ports:
    - 5601:5601
  depends_on:
    - es01
  networks:
    - elastic
You can also consult the following link for the list of all available environment variables and how to set them up in a Docker environment:
https://www.elastic.co/guide/en/kibana/7.5/docker.html
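As a side note, ELASTICSEARCH_HOSTS in 7.x also accepts a JSON array when Kibana should talk to several nodes. A minimal sketch (es01 and es02 are hypothetical service names):
environment:
  # map form: YAML strips the outer quotes, Kibana receives the JSON array
  ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200"]'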
I'm trying to configure docker-compose with Kibana and Elasticsearch, and I would like to know: do I need Logstash as well?
No. If you don't need the Logstash functionality, you don't need it.
A simple example with Elasticsearch and Kibana would be:
---
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:$ELASTIC_VERSION
    volumes:
      - /usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:$ELASTIC_VERSION
    links:
      - elasticsearch
    ports:
      - 5601:5601
Kibana credentials (if you are using version 5):
login: elastic
password: changeme
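A hedged side note on the file above: links is a legacy Compose option. On current Compose versions both services land on a default network where they can reach each other by service name, so the kibana service could be written without it:
kibana:
  image: docker.elastic.co/kibana/kibana:$ELASTIC_VERSION
  depends_on:
    - elasticsearch
  ports:
    - 5601:5601
  # no links needed: Compose's default network already resolves the service
  # name "elasticsearch", which matches Kibana's default elasticsearch URL
  # of http://elasticsearch:9200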
Using docker-compose v3 and deploying to a swarm:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
    deploy:
      replicas: 1
    ports:
      - "9200:9200"
    tty: true
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.1
    deploy:
      mode: global
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    tty: true
I see this in the kibana service log:
Unable to revive connection: http://elasticsearch:9200/
Elasticsearch service is running and can be reached.
Swarm consists of 3 nodes.
What am I missing?
Update:
It turns out that if I try to access Kibana on the same swarm node where Elasticsearch is running, it works. All other nodes either have a network problem or cannot resolve the elasticsearch name.
I found the reason, and the solution.
My swarm is running on AWS. All nodes are placed in the same security group, and I assumed all ports were open internally within that security group. That's not the case.
I explicitly configured the security group to allow the inbound traffic required by Docker's routing mesh (TCP/UDP port 7946 for container network discovery and UDP port 4789 for the overlay ingress network), as per the spec here: https://docs.docker.com/engine/swarm/ingress/
By default, Docker Compose generates a network and puts all services in it, but I do not know whether that changes in Docker Swarm. To define it explicitly, you can do this:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
    deploy:
      replicas: 1
    ports:
      - "9200:9200"
    tty: true
    networks:
      - some-name
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.1
    deploy:
      mode: global
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    tty: true
    networks:
      - some-name
networks:
  some-name:
    driver: overlay
I hope this helps; let me know how it goes.
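A caveat worth hedging against the Docker docs: docker stack deploy ignores links (and depends_on), so in swarm mode the overlay network alone handles service discovery, and the kibana service above could be trimmed to:
kibana:
  image: docker.elastic.co/kibana/kibana:5.4.1
  deploy:
    mode: global
  ports:
    - "5601:5601"
  tty: true
  networks:
    # the shared overlay network is what actually lets kibana resolve
    # the "elasticsearch" service name across swarm nodes
    - some-name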
Docker version 17.03.1-ce, build c6d412e
OS: Ubuntu
I am trying to connect to the host's MySQL from a Docker container, but I am getting this error:
Error: connect ECONNREFUSED 0.0.0.0:3306
I get the same error for MySQL if I use a MySQL container. I tried 127.0.0.1 and localhost as well.
version: '2'
services:

  ### Applications Code Container #############################

  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html

  apache2:
    build:
      context: ./apache2
    volumes_from:
      - applications
    volumes:
      - ${APACHE_HOST_LOG_PATH}:/var/log/apache2
      - ./apache2/sites:/etc/apache2/sites-available
    ports:
      - "${APACHE_HOST_HTTP_PORT}:80"
      - "${APACHE_HOST_HTTPS_PORT}:443"
    networks:
      - frontend
      - backend

  node:
    build:
      context: ./node
    volumes_from:
      - applications
    ports:
      - "4000:30001"
    networks:
      - frontend
      - backend

  ### MySQL Container #########################################

  mysql:
    build:
      context: ./mysql
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    volumes_from:
      - applications
    volumes:
      - ${DATA_SAVE_PATH}/mysql:/var/lib/mysql
      - ./mysql/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
    ports:
      - "${MYSQL_PORT}:3306"
    networks:
      - backend

### Networks Setup ############################################

networks:
  frontend:
    driver: "bridge"
  backend:
    driver: "bridge"

### Volumes Setup #############################################

volumes:
  mysql:
    driver: "local"
  mongo:
    driver: "local"
  node:
    driver: "local"
  apache2:
    driver: "local"
Instead of using 0.0.0.0, 127.0.0.1, or localhost, you should use your host machine's IP, because each container is an individual node on the network.
Alternatively, you can inspect your MySQL container, get its IP, and use that, since the containers are on the same network.
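Two hedged sketches against the Compose file above: since the node and mysql services share the backend network, the app can reach the database at mysql:3306 by service name instead of any IP; and on Docker 20.10+ the host machine itself can be exposed to a container via the special host-gateway mapping:
node:
  build:
    context: ./node
  networks:
    - backend
  extra_hosts:
    # makes the Docker host reachable from inside the container as
    # host.docker.internal (host-gateway requires Docker 20.10+)
    - "host.docker.internal:host-gateway"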