I'm trying to run Elasticsearch and Kibana in Docker containers, but I keep getting the message "Kibana server is not ready yet" when I open the web UI (http://localhost:5601/).
My system:
macOS (Apple M1)
Docker version: 20.10.22, build 3a2c30b
Below is my YAML file:
version: '3.6'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
    platform: linux/amd64
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kibana:
    image: docker.elastic.co/kibana/kibana:7.16.1
    platform: linux/amd64
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_HOSTS=["http://es01:9200"]
    depends_on:
      - es01
    networks:
      - elastic
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge
docker ps -a looks fine. I have no idea what's wrong; please let me know.
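A few things in the compose file above stand out: the Kibana image (7.16.1) doesn't match the Elasticsearch image (7.16.2), and the single es01 node is configured with discovery.seed_hosts=es02,es03 and cluster.initial_master_nodes=es01,es02,es03 even though no es02/es03 services exist, which can keep the cluster from ever forming (Kibana stays on "not ready" while it waits for a reachable cluster). A minimal single-node sketch, assuming you really only want es01:

```yaml
# Hypothetical single-node variant of the file above.
# With one Elasticsearch container, drop the multi-node discovery
# settings and use discovery.type=single-node instead.
version: '3.6'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
    platform: linux/amd64
    container_name: es01
    environment:
      - node.name=es01
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
  kibana:
    # keep Kibana on the same version as Elasticsearch
    image: docker.elastic.co/kibana/kibana:7.16.2
    platform: linux/amd64
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_HOSTS=http://es01:9200
    depends_on:
      - es01
```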
Related
Need to upgrade Elasticsearch and Kibana, installed with Docker Compose as a 3-node cluster on Linux, from 7.10 to 7.17.
This document shares other methods, but not containers installed/started with Docker Compose or Swarm.
Is there step-by-step documentation for this?
I upgraded my Elasticsearch from 7.10 to 7.17.6 and didn't face any issues; I just used Docker Compose in this scenario. In your case, can you try renaming your Elasticsearch container? It seems your older Elasticsearch container is still up and its name is conflicting. If this is not a production setup, let me know and we could try a few more things as well.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.6
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - cluster.initial_master_nodes=es01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:7.17.6
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: '["http://es01:9200"]'
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
I created an Elasticsearch domain in LocalStack and can access the endpoint (from Postman and a non-dockerized application). But when I call the same ES URL from the SAM Lambda application using Axios:
http://host.docker.internal:4566/es/us-east-1/idx/_all/_search
it returns:
"code":"ERR_BAD_REQUEST","status":404
But when I check LocalStack's health using
http://host.docker.internal:4566/health
it returns the running AWS services.
Sharing my docker-compose file for LocalStack:
version: "3.9"
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    network_mode: bridge
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack
    network_mode: bridge
    ports:
      - "4566:4566"
    depends_on:
      - elasticsearch
    environment:
      - ES_CUSTOM_BACKEND=http://elasticsearch:9200
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
      - HOST_TMP_FOLDER=${TMPDIR}
      - ES_ENDPOINT_STRATEGY=path
      # - LOCALSTACK_HOSTNAME=localstack
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    links:
      - elasticsearch
volumes:
  data01:
    driver: local
Do I need to modify the Docker network? Please help me resolve this error.
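One thing worth double-checking: with ES_ENDPOINT_STRATEGY=path, LocalStack exposes the domain under a path like /es/&lt;region&gt;/&lt;domain-name&gt;, so the URL http://host.docker.internal:4566/es/us-east-1/idx/_all/_search is treated as domain "idx" and index "_all"; a 404 can simply mean the domain segment doesn't match the domain you created. A small sketch of how the URL could be assembled (the domain and index names here are made up, substitute your own):

```javascript
// Builds a path-style LocalStack Elasticsearch search URL.
// "my-domain" and "my-index" below are placeholder names, not
// values from the question.
function esSearchUrl(host, region, domain, index) {
  return `http://${host}:4566/es/${region}/${domain}/${index}/_search`;
}

const url = esSearchUrl('host.docker.internal', 'us-east-1', 'my-domain', 'my-index');
console.log(url);
// e.g. pass this to axios.get(url)
```

If the non-dockerized call from Postman works, comparing its exact path against the one the Lambda builds usually pinpoints the mismatch.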
I am attempting to run an elasticsearch cluster with Kibana and Logstash using docker-compose.
The problem I'm running into is that Logstash keeps looking for the Elasticsearch hostname as http://elasticsearch:9200. Here's an example of the Logstash output.
logstash | [2021-08-23T15:30:03,534][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash | [2021-08-23T15:30:03,540][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
I'm also attaching my docker-compose.yml file.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:7.14.0
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
    networks:
      - elastic
  logstash:
    image: logstash:7.14.0
    container_name: logstash
    hostname: localhost
    ports:
      - 9600:9600
      - 8089:8089
    volumes:
      - ./logstash/logstash.yml
      - ./logstash/pipelines.yml
      - ./logstash/data
    command: --config.reload.automatic
    environment:
      ELASTICSEARCH_HOST: localhost
      LS_JAVA_OPTS: "-Xmx1g -Xms1g"
    links:
      - es01:es01
    depends_on:
      - es01
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
For some reason, putting the host into the docker-compose YAML file doesn't seem to work. Where should I go to point Logstash to localhost rather than 'elasticsearch'?
Thanks
I don't think Logstash has any such variable ELASTICSEARCH_HOST... Plus, localhost would refer to the Logstash container itself, not something else. And don't set hostname: localhost for a container...
You have no container/service named elasticsearch; you have es01-es03, which is why it's unable to connect (notice that Kibana uses the correct addresses). You'd modify that address in the Logstash config/pipeline files.
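To expand on that: the address Logstash dials lives in the pipeline's elasticsearch output block, not in a compose environment variable. Assuming a pipeline file along the lines of the ./logstash/pipelines.yml mount above (the exact path and input are illustrative), the relevant part would point at the es01 service name:

```
# Illustrative Logstash pipeline output section. The key part is
# hosts => the compose service name (es01), not the default
# "elasticsearch" hostname that doesn't exist in this stack.
output {
  elasticsearch {
    hosts => ["http://es01:9200"]
  }
}
```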
I have a docker-compose file and I am trying to run Elasticsearch and Kibana with it.
The problem: I am setting ELASTIC_PASSWORD in the environment, but the authentication system is failing and my Elasticsearch is exposed.
What is wrong?
version: '3.7'
services:
  elasticsearch:
    container_name: 'elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.7.1'
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - network.host=0.0.0.0
      - ELASTIC_PASSWORD=mySuperPassword
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    privileged: true
    volumes:
      - api_esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: docker.elastic.co/kibana/kibana:6.7.1
    container_name: kibana
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_URL: http://127.0.0.1:9200/
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
volumes:
  api_esdata1:
    external: true
As @leandrojmp said, basic authentication in Elasticsearch is only available starting from version 6.8. I upgraded the version and now it works well.
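For anyone hitting the same thing: besides moving to 6.8+, note that ELASTICSEARCH_URL: http://127.0.0.1:9200/ points Kibana at its own container; it should use the compose service name instead. A sketch of the relevant parts after such an upgrade (6.8.23 is just an example 6.8.x tag, and the credentials are the ones from the compose file above):

```yaml
# Sketch only: 6.8.x images so the basic license includes security,
# and Kibana pointed at the elasticsearch service name rather than
# 127.0.0.1 (which resolves to Kibana's own container).
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.23
    environment:
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=mySuperPassword
  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.23
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: mySuperPassword
```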
Hi, I'm using Docker Compose to handle all of my configuration.
I have Mongo, Node, Redis, and the Elastic stack.
But I can't get Redis to connect to my Node app.
Here is my docker-compose.yml:
version: '2'
services:
  mongo:
    image: mongo:3.6
    container_name: "backend-mongo"
    ports:
      - "27017:27017"
    volumes:
      - "./data/db:/data/db"
  redis:
    image: redis:4.0.7
    ports:
      - "6379:6379"
    user: redis
  adminmongo:
    container_name: "backend-adminmongo"
    image: "mrvautin/adminmongo"
    ports:
      - "1234:1234"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1
    container_name: "backend-elastic"
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  web:
    container_name: "backend-web"
    build: .
    ports:
      - "8888:8888"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/backend
    restart: always
    depends_on:
      - mongo
      - elasticsearch
      - redis
    volumes:
      - .:/backend
      - /backend/node_modules
volumes:
  esdata1:
    driver: local
networks:
  esnet:
Things to notice:
Redis is already running (I can ping it)
I don't have any services running on my host, only in the containers
The other containers (except Redis) work well
I've tried the methods below:
const redisClient = redis.createClient({host: 'redis'});
const redisClient = redis.createClient(6379, '127.0.0.1');
const redisClient = redis.createClient(6379, 'redis');
I'm using:
Docker 17.12
Xubuntu 16.04
How can I connect my app to my Redis container?
Adding
hostname: redis
under the redis section fixes this issue.
So it will be something like this:
redis:
  image: redis:4.0.7
  ports:
    - "6379:6379"
  command: ["redis-server", "--appendonly", "yes"]
  hostname: redis