I've set up Elasticsearch and Kibana with Docker Compose. Elasticsearch is deployed on localhost:9200 and Kibana on localhost:5601.
When trying to deploy Metricbeat with docker run, I got the following errors:
$ docker run docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://localhost:9200: Get http://localhost:9200: dial tcp [::1]:9200: connect: cannot assign requested address]
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elasticsearch on 192.168.65.1:53: no such host]
My docker-compose.yml:
# ./docker-compose.yml
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    environment:
      # - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elkdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    restart: always
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    volumes:
      - kibana:/usr/share/kibana/config
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    restart: always
volumes:
  elkdata:
  kibana:
First, edit your docker-compose file by adding a name for the default Docker network:
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    environment:
      # - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elkdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    networks:
      - my-network
    restart: always
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    volumes:
      - kibana:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - my-network
    depends_on:
      - elasticsearch
    restart: always
volumes:
  elkdata:
  kibana:
networks:
  my-network:
    name: awesome-name
Execute docker-compose up and then start Metricbeat with the command below. Note that --network must come before the image name, otherwise Docker passes it as an argument to Metricbeat instead of using it as a run option:
$ docker run --network=awesome-name docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=kibana:5601 -E 'output.elasticsearch.hosts=["elasticsearch:9200"]'
Explanation:
When you try to deploy Metricbeat, you provide the following settings:
setup.kibana.host=kibana:5601
output.elasticsearch.hosts=["localhost:9200"]
I will start with the second one. With the docker run command you are telling the Metricbeat container that it can reach Elasticsearch on localhost:9200. So when the container starts, it will try to access localhost on port 9200, expecting to find Elasticsearch there. But since a container is a process isolated from the host with its own network stack, localhost resolves to the container itself, not to your Docker host machine as you were expecting.
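You can see this for yourself with a throwaway container (using the public curlimages/curl image purely as an example):

# On the host this works, because port 9200 is mapped to the host:
$ curl -s http://localhost:9200

# Inside a container, localhost is the container itself, so this fails:
$ docker run --rm curlimages/curl -s http://localhost:9200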
Regarding the Kibana host setting, you should first understand how docker-compose works. By default, when you execute docker-compose up, a Docker network is created and all services defined in the yml file are attached to it. Only inside this network are services reachable through their service names. In your case, as defined in the yml file, those names are elasticsearch and kibana.
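You can list the networks compose created; by default the network is named after the directory containing the yml file, so the exact name depends on your setup:

$ docker network ls
$ docker network inspect <project-directory>_default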
So for the Metricbeat container to be able to communicate with the elasticsearch and kibana containers, it has to be attached to the same Docker network. This is achieved by setting the --network flag on the docker run command.
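After starting Metricbeat with --network=awesome-name, you can verify that all three containers sit on the same network:

$ docker network inspect awesome-name
# the "Containers" section of the output should list elasticsearch, kibana and the metricbeat container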
Another approach would be to share the Docker host's network with your containers by using network mode host, but I would not recommend that.
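For completeness, host networking would look like the line below; on Linux the container then shares the host's network stack, so localhost addresses work, but you lose network isolation:

$ docker run --network=host docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=localhost:5601 -E 'output.elasticsearch.hosts=["localhost:9200"]'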
References:
Docker compose
docker run
Related
I am trying to deploy a stack with Docker Swarm, using the docker-compose.yaml file below, via the command:
docker stack deploy --with-registry-auth -c docker-compose.yaml project
version: "3.9"
services:
  mysql:
    image: mysql:8.0
    deploy:
      replicas: 1
    volumes:
      - mysql_data:/var/lib/mysql
    networks:
      - internal
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_HOST: '%'
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: project_production
      MYSQL_USER: username
      MYSQL_PASSWORD: password
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - internal
  website:
    image: registry.gitlab.com/project/project-website:latest
    networks:
      - internal
    deploy:
      replicas: 1
    ports:
      - 3000:3000
    environment:
      - RAILS_ENV=production
      - MYSQL_HOST=mysql
      - ES_HOST=http://es01
      - project_DATABASE_USERNAME=root
      - project_DATABASE_PASSWORD=root
    depends_on:
      - es01
      - mysql
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
  mysql_data:
networks:
  internal:
    external: true
    name: project
Before deploying the stack I also created the network for the project via the following command:
docker network create -d overlay project
But when I check the logs for the project using the docker logs command, I see the following error that stops my project from starting:
Mysql2::Error: Host '10.0.2.202' is not allowed to connect to this MySQL server
I followed the documentation exactly, so I am not sure what is wrong with the settings I came up with.
Question:
How can I connect from project to mysql container in docker swarm?
Based on the documentation, Docker Swarm automatically creates the overlay network for you, so you don't need to create an external network yourself unless you have specific needs:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
an overlay network called ingress, which handles the control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
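You can confirm both networks exist on any node that has joined the swarm:

$ docker network ls --filter driver=overlay
$ docker network ls --filter name=docker_gwbridge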
As Chris mentioned in the comments, the DB credentials also don't match: the website service connects as root/root while the mysql service defines the user username/password.
OPTIONAL: MYSQL_ROOT_HOST is only necessary if you want to connect as the root user, which is not recommended in production environments. There is also no need to publish the port to the host machine, since the database service will only be used from inside the cluster. So if you still want to use the root user, you can set the variable to allow connections only from inside the cluster, e.g. MYSQL_ROOT_HOST=10.*.*.*.
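For the credential mismatch, a minimal sketch of the two relevant environment blocks, assuming the Rails app is supposed to connect as the dedicated user defined on the mysql service rather than as root (names taken from the compose file above):

# mysql service
environment:
  MYSQL_DATABASE: project_production
  MYSQL_USER: username
  MYSQL_PASSWORD: password

# website service: use the same credentials, not root/root
environment:
  - MYSQL_HOST=mysql
  - project_DATABASE_USERNAME=username
  - project_DATABASE_PASSWORD=password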
How do I enable basic authentication for Kibana and Elasticsearch in Docker containers?
I want to have authentication enabled in Kibana. With a regular installation we can simply set the flag xpack.security.enabled=true and generate the passwords, but since I am running Elasticsearch and Kibana in Docker, how do I do it?
This is my current docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.9.2
    ports:
      - '9200:9200'
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: kibana:7.9.2
    ports:
      - '5601:5601'
You can pass it as an environment variable in the docker run command for Elasticsearch.
Something like this:
docker run -p 9200:9200 -p 9300:9300 -e "xpack.security.enabled=true" docker.elastic.co/elasticsearch/elasticsearch:7.14.0
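Since the question uses docker-compose, here is a minimal compose sketch of the same idea, based on the file above. Note that after enabling security you still have to create passwords (for example by running bin/elasticsearch-setup-passwords inside the elasticsearch container) and hand Kibana its credentials; the password value below is a placeholder:

version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.9.2
    ports:
      - '9200:9200'
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
  kibana:
    image: kibana:7.9.2
    ports:
      - '5601:5601'
    environment:
      # set after running elasticsearch-setup-passwords (placeholder value)
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=changeme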
I'm trying to run Elasticsearch and Kibana with docker-compose.
When I bring up the containers using docker-compose up, Elasticsearch loads up fine. After it loads, the Kibana container starts, but once it is up it cannot see or connect to the Elasticsearch container, producing these messages:
Kibana docker Log:
{"type":"log","#timestamp":"2020-01-22T19:57:27Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch01:9200/"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
I am not able to see the Elasticsearch host from the Kibana container:
curl -X GET http://elasticsearch01:9200
throws the error quoted below:
curl: (7) Failed connect to elasticsearch01:9200; No route to host
After digging deep I found out this is happening only on CentOS 8.
On the same CentOS 8 machine I am able to bring up and use standalone Elasticsearch and Kibana instances via systemctl services.
Am I missing something here?
Can anyone help?
docker-compose.yml:
networks:
  docker-elk:
    driver: bridge
services:
  elasticsearch01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: elasticsearch01
    secrets:
      - source: elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    restart: always
    environment:
      - node.name=elasticsearch01
      - cluster.name=es-docker-cluster
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticdata:/usr/share/elasticsearch/data
    ports:
      - "9200"
    expose:
      - "9200"
      - "9300"
    networks:
      - docker-elk
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    container_name: kibana
    depends_on: ['elasticsearch01']
    environment:
      - SERVER_NAME=kibanaServer
    restart: always
    secrets:
      - source: kibana.yml
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - docker-elk
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports: ['5601:5601']
    links:
      - elasticsearch01
volumes:
  elasticdata:
    driver: local
  kibanadata:
    driver: local
secrets:
  elasticsearch.yml:
    file: ./ELK_Config/elastic/elasticsearch.yml
  kibana.yml:
    file: ./ELK_Config/kibana/kibana.yml
System/Docker Info
OS: CentOS 8
ELK versions 7.4.0
Docker version 19.03.4, build 9013bf583a
Docker-compose:docker-compose version 1.25.0, build 0a186604
I have a setup where I build 2 Docker containers with docker-compose.
One container is a web application, which I can access on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access Elasticsearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 on localhost is unreachable for the server application.
How can I fix this?
It's never safe to use localhost here, since localhost means something different for your host system, for elasticsearch and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. To fix this:
- put the containers in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to access the elasticsearch service, i.e. http://elasticsearch:9200.
Your serverapplication and elasticsearch are running in different containers. The localhost of serverapplication is different from localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be accessed with their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
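A quick way to verify this from inside the running setup (assuming curl, or a similar tool, is available in the serverapplication image):

$ docker-compose exec serverapplication curl -s http://elasticsearch:9200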
I am trying to create a local kibana/elastic stack while developing a spring-boot application. I can successfully connect my application to elastic when I launch it as a single container:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.3
My application can connect on 9300, and my web browser can see that it's up on localhost:9200
So... I tried launching the provided stack-docker docker-compose file found here: https://github.com/elastic/stack-docker
Everything seems to set up fine, and I can connect to Kibana on localhost:5601, but neither my browser nor my application can connect to Elasticsearch on 9200 and 9300 respectively.
The only difference between what's checked into GitHub and what I ran is that I added port 9300 to the elasticsearch definition.
Any idea what changes I can make to make elastic accessible to my app/browser when running in docker-compose?
Try the following docker-compose file:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch
    environment:
      - cluster.name=elasticsearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.2
    container_name: kibana
    environment:
      - SERVER_NAME=localhost
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - XPACK_MONITORING_COLLECTION_ENABLED=true
    ports:
      - 5601:5601
volumes:
  esdata1:
    driver: local
After running docker-compose up, the Kibana UI will be available at http://localhost:5601 and Elasticsearch at http://localhost:9200/.
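As a quick sanity check from the host, both endpoints should respond once the containers are up:

$ curl -s http://localhost:9200
$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601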