How to connect my Kibana to ElasticSearch in the docker run command?

Just trying to learn to set up Kibana and Elasticsearch using native docker commands (i.e. not using Docker Compose).
Below are the commands I run
docker network create es-net
docker run -d --name es-cluster \
--net es-net -p 9200:9200 \
-e "xpack.security.enabled=false" \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:7.2.0
docker run -d --net es-net -p 5601:5601 \
-e ELASTICSEARCH_URL=http://es-cluster:9200 \
docker.elastic.co/kibana/kibana:7.2.0
Somehow Kibana does not connect to Elasticsearch when I open http://localhost:5601/; it always shows the message Kibana server is not ready yet.
I followed the answer to Kibana on Docker cannot connect to Elasticsearch to make sure ELASTICSEARCH_URL is set correctly, but it still doesn't come up. Am I missing anything?
Note: tested with curl 0.0.0.0:9200; Elasticsearch itself is already running.

It looks like since I'm on version 7.2.0 of Kibana, the environment variable has changed from ELASTICSEARCH_URL to ELASTICSEARCH_HOSTS,
as per https://www.elastic.co/guide/en/kibana/current/docker.html
docker run -d --net es-net -p 5601:5601 \
-e ELASTICSEARCH_HOSTS=http://es-cluster:9200 \
docker.elastic.co/kibana/kibana:7.2.0
With this in place, everything should work.
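As a quick sanity check (a minimal sketch assuming the default ports above; the Kibana container was not given a name here, so substitute its container id), you can verify both sides from the host:
curl http://localhost:9200                # Elasticsearch should answer with its cluster info
curl http://localhost:5601/api/status     # Kibana reports its own status once it is ready
docker logs <kibana-container-id>         # watch for connection errors while it is still starting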

Related

run elasticsearch and kibana with docker bootstrap checks failed ERROR

This is my first time using Elasticsearch. I followed this link, but when I run this command
docker run --name es-node01 --net elastic -p 9200:9200 -p 9300:9300 -t docker.elastic.co/elasticsearch/elasticsearch:8.6.1
I got this error:
ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
Try disabling swapping by adding:
-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_disable_swapping
EDIT:
To use it:
docker run --name es-node01 --net elastic -p 9200:9200 -p 9300:9300 -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1 -t docker.elastic.co/elasticsearch/elasticsearch:8.6.1
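If you want to confirm the ulimit actually took effect (a quick check, not from the original answer), you can inspect it inside the running container; the linked Elastic page also describes checking mlockall via the _nodes API:
docker exec es-node01 bash -c 'ulimit -l'   # should print "unlimited" when started with --ulimit memlock=-1:-1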

How to set "xpack.security.enrollment.enabled" to "true for elasticsearch in Docker

This is how I start elasticsearch with Kibana in "Docker for Windows":
docker network create --driver bridge elastic
docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 --name elasticsearch -v elasticsearch-data:/usr/share/elasticsearch/data -e "discovery.type=single-node" -e ELASTIC_USER=Andreas -e ELASTIC_PASSWORD=Hirsebrei docker.elastic.co/elasticsearch/elasticsearch:8.5.2
docker run --name kib-01 -p 5601:5601 docker.elastic.co/kibana/kibana:8.5.3
This all runs fine and I can open the Kibana page in the browser, which requests an enrollment token.
I use the following from a command line to generate the enrollment token:
docker exec -it elasticsearch /bin/sh
then in the shell I do this:
cd /usr/share/elasticsearch/bin/
./elasticsearch-create-enrollment-token --scope kibana
which results in the following error message:
ERROR: [xpack.security.enrollment.enabled] must be set to `true` to create an enrollment token
Now I am lost.
Can someone please help me out and explain to me how to set [xpack.security.enrollment.enabled] to true?
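No answer is included in the thread. One untested sketch: since Elasticsearch settings such as discovery.type can be passed to the container as environment variables, the same approach may work for the enrollment setting (remove the old container first, e.g. docker rm -f elasticsearch):
docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 --name elasticsearch -v elasticsearch-data:/usr/share/elasticsearch/data -e "discovery.type=single-node" -e "xpack.security.enrollment.enabled=true" -e ELASTIC_PASSWORD=Hirsebrei docker.elastic.co/elasticsearch/elasticsearch:8.5.2
After that, the elasticsearch-create-enrollment-token --scope kibana command from above should be able to run.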

How to set kibana ELASTICSEARCH_URL parameter

Docker for windows: 2.0.0.3(31259)
I run Elasticsearch and Kibana in Docker. Elasticsearch runs, but Kibana does not. It always tries to connect to http://elasticsearch:9200. I set ELASTICSEARCH_URL in the Kibana command, but it does not work.
Requesting http://localhost:5601/ returns: Kibana server is not ready yet
docker run -d --name d-elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.1.1
docker run -d --name d-kibana -e ELASTICSEARCH_URL=http://192.168.0.73:9200 -p 5601:5601 docker.elastic.co/kibana/kibana:7.1.1
kibana docker log:
{"type":"log","#timestamp":"2019-06-05T08:21:26Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
In version 7, Kibana uses a different environment variable: ELASTICSEARCH_HOSTS
docker run -d --name d-kibana -e "ELASTICSEARCH_HOSTS=http://192.168.0.73:9200" -p 5601:5601 docker.elastic.co/kibana/kibana:7.1.1

Running a local kibana in a container

I am trying to use the kibana console with my local elasticsearch (container).
In the ElasticSearch documentation I see
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.2
which lets me run the community edition in a quick one-liner.
Looking at the kibana documentation I see only
docker pull docker.elastic.co/kibana/kibana:6.2.2
Replacing pull with run, it looks for the x-pack (I think that means not community) and fails to find the ES instance:
Unable to revive connection: http://elasticsearch:9200/
Is there a one-liner that could easily set up kibana locally in a container?
All I need is to work with the console (Sense replacement).
If you want to use kibana with elasticsearch locally with docker, they have to communicate with each other. To do so, according to the doc, you need to link the containers.
You can give a name to the elasticsearch container with --name:
docker run \
--name elasticsearch_container \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
And then link this container to kibana:
docker run \
--name kibana \
--publish 5601:5601 \
--link elasticsearch_container:elasticsearch_alias \
--env "ELASTICSEARCH_URL=http://elasticsearch_alias:9200" \
docker.elastic.co/kibana/kibana:6.2.2
Port 5601 is published so you can access Kibana from your browser. You can check in the monitoring section that Elasticsearch's health is green.
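You can also see the cluster health from the host (a quick check, assuming security is not enabled on this image; otherwise add credentials):
curl "http://localhost:9200/_cluster/health?pretty"   # expect "status" : "green" (or "yellow" on a single node with replicated indices)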
EDIT (24/03/2020):
The option --link may eventually be removed and is now a legacy feature of docker.
The idiomatic way to reproduce the same setup is to first create a user-defined bridge network:
docker network create elasticsearch-kibana
And then create the containers inside it:
Version 6
docker run \
--name elasticsearch_container \
--network elasticsearch-kibana \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
docker run \
--name kibana \
--publish 5601:5601 \
--network elasticsearch-kibana \
--env "ELASTICSEARCH_URL=http://elasticsearch_container:9200" \
docker.elastic.co/kibana/kibana:6.2.2
Version 7
As was pointed out, the environment variable changed in version 7. It is now ELASTICSEARCH_HOSTS.
docker run \
--name elasticsearch_container \
--network elasticsearch-kibana \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker run \
--name kibana \
--publish 5601:5601 \
--network elasticsearch-kibana \
--env "ELASTICSEARCH_HOSTS=http://elasticsearch_container:9200" \
docker.elastic.co/kibana/kibana:7.6.2
User-defined bridges provide automatic DNS resolution between containers, which means containers can reach each other by their container names.
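You can confirm that both containers are attached to the bridge (and therefore resolvable by name) with:
docker network inspect elasticsearch-kibana   # both elasticsearch_container and kibana should appear under "Containers"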
It is convenient to use docker-compose as well.
For instance, the file below, stored in the home directory, allows you to start Kibana with one command:
docker-compose up -d
# docker-compose.yml
version: "2"
services:
  kibana:
    image: "docker.elastic.co/kibana/kibana:6.2.2"
    container_name: "kibana"
    environment:
      - "ELASTICSEARCH_URL=http://<elasticsearch-endpoint>:9200"
      - "XPACK_GRAPH_ENABLED=false"
      - "XPACK_ML_ENABLED=false"
      - "XPACK_REPORTING_ENABLED=false"
      - "XPACK_SECURITY_ENABLED=false"
      - "XPACK_WATCHER_ENABLED=false"
    ports:
      - "5601:5601"
    restart: "unless-stopped"
In addition, the Kibana service might be part of your project in a development environment (in case docker-compose is used).

Wordpress Access denied for user root with MySQL container

I'm trying to make a MySQL instance available to other containers. I'm following this mysql documentation and this wordpress official documentation, but I get this error:
MySQL Connection Error: (1045) Access denied for user 'root'@'172.17.0.3' (using password: YES)
Code for MySQL instance
docker run -d --restart on-failure -v hatchery:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=Kerrigan \
-e MYSQL_DATABASE=zerglings --name spawning-pool mysql
Code for WordPress instance
docker run -d --name lair -p 8080:80 --link spawning-pool:mysql wordpress
How can I successfully link the WordPress and MySQL containers?
You need to pass your database connection credentials to the wordpress container via environment variables:
docker run -d --name lair -p 8080:80 --link spawning-pool:mysql \
-e WORDPRESS_DB_HOST=mysql \
-e WORDPRESS_DB_NAME=zerglings \
-e WORDPRESS_DB_PASSWORD=Kerrigan wordpress
I solved it by deleting everything and starting it up again.
docker rm -v spawning-pool # -v Remove the volumes associated with the container
Remove the volume too
docker volume rm hatchery
Then I created the containers again
# create the volume
docker volume create hatchery
# MySQL instance
docker run -it -d --restart on-failure -v hatchery:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=Kerrigan \
-e MYSQL_DATABASE=zerglings --name spawning-pool mysql
# creating wordpress
docker run -d --name lair -p 8080:80 --link spawning-pool:mysql \
-e WORDPRESS_DB_HOST=mysql -e WORDPRESS_DB_NAME=zerglings \
-e WORDPRESS_DB_PASSWORD=Kerrigan wordpress
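Since --link is a legacy feature (as noted in the Kibana answer above), the same setup can be reproduced with a user-defined network instead; an untested sketch, using a hypothetical network name wp-net:
docker network create wp-net
docker run -d --restart on-failure --network wp-net -v hatchery:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=Kerrigan \
-e MYSQL_DATABASE=zerglings --name spawning-pool mysql
docker run -d --name lair --network wp-net -p 8080:80 \
-e WORDPRESS_DB_HOST=spawning-pool \
-e WORDPRESS_DB_NAME=zerglings \
-e WORDPRESS_DB_PASSWORD=Kerrigan wordpress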
