I have a docker-compose file with Elasticsearch and Kibana in it.
The problem: I am setting ELASTIC_PASSWORD in the environment, but the authentication system is failing and my Elasticsearch is exposed without credentials.
What is wrong?
version: '3.7'
services:
  elasticsearch:
    container_name: 'elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.7.1'
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - network.host=0.0.0.0
      - ELASTIC_PASSWORD=mySuperPassword
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    privileged: true
    volumes:
      - api_esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: docker.elastic.co/kibana/kibana:6.7.1
    container_name: kibana
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_URL: http://127.0.0.1:9200/
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
volumes:
  api_esdata1:
    external: true
As @leandrojmp said, basic authentication only became part of Elasticsearch's free Basic license starting with version 6.8, so on 6.7.1 the xpack.security.enabled setting has no effect under the default license. I upgraded the version and now it works well.
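For reference, here is a minimal sketch of the relevant part of the upgraded service (the 6.8.23 tag is just an example I picked; any 6.8.x or later image includes basic authentication in the free Basic license):

services:
  elasticsearch:
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.8.23'
    environment:
      # with 6.8+, the free license actually enforces this setting
      - xpack.security.enabled=true
      # bootstrap password for the built-in 'elastic' user
      - ELASTIC_PASSWORD=mySuperPassword

Remember to move the Kibana image to the same version tag.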
I am trying to use Docker containers for Elasticsearch and Kibana, but I keep getting the message "Kibana server is not ready yet" when I open the web UI (http://localhost:5601/).
My system:
Mac M1 (macOS)
Docker version: 20.10.22, build 3a2c30b
Below is my YAML file:
version: '3.6'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
    platform: linux/amd64
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kibana:
    image: docker.elastic.co/kibana/kibana:7.16.1
    platform: linux/amd64
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_HOSTS=["http://es01:9200"]
    depends_on:
      - es01
    networks:
      - elastic
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge
docker ps -a looks fine.
I have no idea what is wrong. Please let me know.
I created an Elasticsearch domain in LocalStack and can access the endpoint (via Postman, from a non-dockerized application). But when I call the same ES URL from the SAM Lambda application using Axios:
http://host.docker.internal:4566/es/us-east-1/idx/_all/_search
it returns:
"code":"ERR_BAD_REQUEST","status":404
Yet when I check the LocalStack health endpoint:
http://host.docker.internal:4566/health
it returns the running AWS services.
Here is my docker-compose file for LocalStack:
version: "3.9"
services:
elasticsearch:
container_name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
network_mode: bridge
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ports:
- "9200:9200"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
network_mode: bridge
ports:
- "4566:4566"
depends_on:
- elasticsearch
environment:
- ES_CUSTOM_BACKEND=http://elasticsearch:9200
- DEBUG=${DEBUG- }
- DATA_DIR=${DATA_DIR- }
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR}
- ES_ENDPOINT_STRATEGY=path
# - LOCALSTACK_HOSTNAME=localstack
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
links:
- elasticsearch
volumes:
data01:
driver: local
Do I need to modify the Docker network? Please help me resolve this error.
I am attempting to run an Elasticsearch cluster with Kibana and Logstash using docker-compose.
The problem I'm running into is that Logstash keeps looking for the Elasticsearch host as http://elasticsearch:9200. Here's an example of the Logstash output.
logstash | [2021-08-23T15:30:03,534][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash | [2021-08-23T15:30:03,540][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
I'm also attaching my docker-compose.yml file.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:7.14.0
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
    networks:
      - elastic
  logstash:
    image: logstash:7.14.0
    environment:
      ELASTICSEARCH_HOST: localhost
    container_name: logstash
    hostname: localhost
    ports:
      - 9600:9600
      - 8089:8089
    volumes:
      - ./logstash/logstash.yml
      - ./logstash/pipelines.yml
      - ./logstash/data
    command: --config.reload.automatic
    environment:
      LS_JAVA_OPTS: "-Xmx1g -Xms1g"
    links:
      - es01:es01
    depends_on:
      - es01
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
For some reason, putting the host into the docker-compose YAML file doesn't seem to work. Where should I go to point Logstash to localhost rather than 'elasticsearch'?
Thanks
I don't think Logstash has any such variable as ELASTICSEARCH_HOST. Plus, localhost would refer to the Logstash container itself, not something else. And don't set hostname: localhost for a container.
You have no container/service named elasticsearch; you have es01-3, which is why it's unable to connect (notice that Kibana uses the correct addresses). You'd modify that address in the Logstash config/pipeline files.
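To illustrate where that address lives: the license warning in your log comes from Logstash's X-Pack monitoring, which defaults to http://elasticsearch:9200. The official Logstash Docker image maps environment variables onto logstash.yml settings, so a minimal sketch of the service pointing monitoring at es01 could look like the snippet below; the elasticsearch output in your own pipeline files needs its hosts changed to http://es01:9200 as well.

  logstash:
    image: logstash:7.14.0
    container_name: logstash
    environment:
      # the image maps this variable to xpack.monitoring.elasticsearch.hosts
      # in logstash.yml, which is the URL the license checker polls
      XPACK_MONITORING_ELASTICSEARCH_HOSTS: http://es01:9200
    networks:
      - elastic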
When trying to bring up a project with a webapp (using Elixir/Ecto as the backend), a Postgres database, Elasticsearch, and Kibana using the following docker-compose.yaml file:
version: '3'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 443:443
    volumes:
      - /path/data:/var/lib/registry
      - /path/certs:/registry/certs
      - /path/auth:/registry/auth
  webapp:
    build:
      context: ../../../
      dockerfile: config/docker/dev/Dockerfile-dev
    container_name: MyWebApp-dev
    image: 'localhost:443/123'
    environment:
      - ELASTICSEARCH_URL=http://localhost:9200
      - ELASTICSEARCH_HOST=localhost
    ports:
      - "4000:4000"
      - "3000:3000"
    depends_on:
      - db
      - elasticsearch
      - kibana
    networks:
      - esnet
  db:
    image: postgres:10
    container_name: db
    environment:
      - POSTGRES_USER=paul
      - POSTGRES_PASSWORD=SilviaZita1
      - POSTGRES_DB=snitch_dev
    networks:
      - esnet
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:7.0.1
    ports:
      - "5601:5601"
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOST=elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
I am getting the following error:
** (Mix) Index products could not be created.
MyWebApp-dev |
MyWebApp-dev | %HTTPoison.Error{id: nil, reason: :econnrefused}
Does anyone know how to solve this problem?
I believe you are getting this connection refused error when you are trying to access Elasticsearch from your webapp.
Using localhost inside a container refers to the container itself. In docker-compose, if you want to access another service that is listening on a particular port, you have to frame your URL as http://<service-name>:<port>.
In your case:
If you want to access the elasticsearch service, which is listening on 9200, from the webapp container, then your URL should be http://elasticsearch:9200.
In your webapp service definition, use elasticsearch instead of localhost for ELASTICSEARCH_URL and ELASTICSEARCH_HOST.
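Concretely, the only lines that change are these two in the webapp service's environment (shown in isolation here; the full file follows):

  webapp:
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOST=elasticsearch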
Use the below compose file:
version: '3'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 443:443
    volumes:
      - /path/data:/var/lib/registry
      - /path/certs:/registry/certs
      - /path/auth:/registry/auth
  webapp:
    build:
      context: ../../../
      dockerfile: config/docker/dev/Dockerfile-dev
    container_name: MyWebApp-dev
    image: 'localhost:443/123'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOST=elasticsearch
    ports:
      - "4000:4000"
      - "3000:3000"
    depends_on:
      - db
      - elasticsearch
      - kibana
    networks:
      - esnet
  db:
    image: postgres:10
    container_name: db
    environment:
      - POSTGRES_USER=paul
      - POSTGRES_PASSWORD=SilviaZita1
      - POSTGRES_DB=snitch_dev
    networks:
      - esnet
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:7.0.1
    ports:
      - "5601:5601"
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOST=elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
I am getting an error attempting to connect Kibana to ES using Docker containers:
kibana-products-624 | {"type":"log","@timestamp":"2018-05-25T14:56:36Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana-products-624 | {"type":"log","@timestamp":"2018-05-25T14:56:36Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
I have tried a number of variations in the environment settings and other configuration in the YAML, but I continue to get this error.
I have verified that Elasticsearch is running and available on port 9200 using curl and a browser.
What is wrong with this configuration?
Here is the docker-compose.yml:
version: "3"
volumes:
elasticsearch-products-624-vol:
networks:
elasticsearch-net-624:
services:
elasticsearch-products-624-service:
image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
container_name: elasticsearch-products-624
restart: always
networks:
- elasticsearch-net-624
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.security.enabled=true
ulimits:
memlock:
soft: -1
hard: -1
ports:
- "9200:9200"
expose:
- "9200"
volumes:
- elasticsearch-products-624-vol:/usr/share/elasticsearch/data
kibana-products-624-service:
image: docker.elastic.co/kibana/kibana:6.2.4
container_name: kibana-products-624
hostname: kibana
restart: always
networks:
- elasticsearch-net-624
environment:
- SERVER_NAME=kibana.localhost
- ELASTICSEARCH_URL=http://elasticsearch:9200
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_HOST=elasticsearch
- ELASTICSEARCH_PORT=9200
- ELASTIC_PWD=changeme
- KIBANA_PWD=changeme
ports:
- "5601:5601"
expose:
- "5601"
links:
- elasticsearch-products-624-service
depends_on:
- elasticsearch-products-624-service
ELASTICSEARCH_URL=http://elasticsearch:9200
should be changed to:
ELASTICSEARCH_URL=http://elasticsearch-products-624:9200
to refer to the container_name given to the Elasticsearch container above (the service name elasticsearch-products-624-service would also resolve on the shared network).
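A minimal sketch of the corrected Kibana service with just that line changed:

  kibana-products-624-service:
    image: docker.elastic.co/kibana/kibana:6.2.4
    environment:
      - SERVER_NAME=kibana.localhost
      # container_name of the Elasticsearch service on elasticsearch-net-624
      - ELASTICSEARCH_URL=http://elasticsearch-products-624:9200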