I'm trying to run Elasticsearch and Kibana inside docker-compose.
When I bring up the containers using docker-compose up, Elasticsearch loads up fine. After it loads, the Kibana container starts up, but it is not able to see or connect to the Elasticsearch container, producing these messages:
Kibana docker Log:
{"type":"log","#timestamp":"2020-01-22T19:57:27Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch01:9200/"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
I am not able to reach the Elasticsearch host from the Kibana container:
curl -X GET http://elasticsearch01:9200
throws the error below:
curl: (7) Failed connect to elasticsearch01:9200; No route to host
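(For reference, the curl above was run from inside the Kibana container, e.g. via docker exec -it kibana curl -X GET http://elasticsearch01:9200.)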
After digging deeper, I found out that this happens only on CentOS 8.
Also, on the same CentOS 8 host I am able to run and use standalone Elasticsearch and Kibana instances via systemctl services.
Am I missing something here? Can anyone help?
docker-compose.yml:
networks:
docker-elk:
driver: bridge
services:
elasticsearch01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch01
secrets:
- source: elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
restart: always
environment:
- node.name=elasticsearch01
- cluster.name=es-docker-cluster
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- elasticdata:/usr/share/elasticsearch/data
ports:
- "9200"
expose:
- "9200"
- "9300"
networks:
- docker-elk
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
container_name: kibana
depends_on: ['elasticsearch01']
environment:
- SERVER_NAME=kibanaServer
restart: always
secrets:
- source: kibana.yml
target: /usr/share/kibana/config/kibana.yml
networks:
- docker-elk
volumes:
- kibanadata:/usr/share/kibana/data
ports: ['5601:5601']
links:
- elasticsearch01
volumes:
elasticdata:
driver: local
kibanadata:
driver: local
secrets:
elasticsearch.yml:
file: ./ELK_Config/elastic/elasticsearch.yml
kibana.yml:
file: ./ELK_Config/kibana/kibana.yml
System/Docker Info
OS: CentOS 8
ELK version: 7.5.1
Docker version 19.03.4, build 9013bf583a
Docker Compose: docker-compose version 1.25.0, build 0a186604
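One CentOS 8-specific cause worth checking, given that the same setup works elsewhere: CentOS 8 switched firewalld to an nftables backend, which is widely reported to drop traffic between containers on a Docker bridge network and produce exactly this "No route to host". A hedged workaround to try, assuming firewalld is active:
firewall-cmd --permanent --zone=public --add-masquerade
firewall-cmd --reload
systemctl restart docker
If that is not it, temporarily stopping firewalld (systemctl stop firewalld) at least confirms or rules out the firewall as the culprit.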
Related
I'm trying to run elasticsearch 8.3.3 using docker-compose, and I'm getting an error. Below is the docker-compose.yml:
version: '3.1'
services:
elasticsearch:
container_name: els
image: docker.elastic.co/elasticsearch/elasticsearch:8.3.3-arm64
ports:
- 9200:9200
volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
environment:
- xpack.monitoring.enabled=true
- xpack.watcher.enabled=false
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- discovery.type=single-node
networks:
- elastcinetwork
kibana:
container_name: kibana
image: docker.elastic.co/kibana/kibana:8.3.3-arm64
ports:
- 5601:5601
depends_on:
- els
environment:
- ELASTICSEARCH_URL=http://localhost:9200
networks:
- elastcinetwork
networks:
elastcinetwork:
driver: bridge
volumes:
elasticsearch-data:
Error:
Error: Process 'docker compose -f "docker-compose.yml" config --s...' exited with code 15
Error: service "kibana" depends on undefined service els: invalid compose project
You should depend on the service name rather than the container name:
depends_on:
- elasticsearch
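Note that once the depends_on reference is fixed, ELASTICSEARCH_URL=http://localhost:9200 will still fail, because localhost inside the kibana container is the container itself (the same localhost-vs-container-name issue that comes up further down). Recent Kibana images also read ELASTICSEARCH_HOSTS rather than ELASTICSEARCH_URL. A sketch of the corrected service, pointing at the elasticsearch service name:
kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:8.3.3-arm64
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  networks:
    - elastcinetwork
(With 8.x defaults, security/TLS is enabled out of the box, so a quick local test may also need further configuration such as xpack.security.enabled=false on the elasticsearch side.)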
I need to upgrade Elasticsearch and Kibana, installed with docker compose as a 3-node cluster on Linux, from 7.10 to 7.17.
This document covers other methods, but not containers installed/started with docker compose or swarm.
Is there step-by-step documentation for the same?
I have upgraded my Elastic from 7.10 to 7.17.6 and have not faced any issues; I just used docker compose in this scenario. In your case, can you try renaming your Elasticsearch container? It seems your older Elasticsearch container is still up and is conflicting on the name. If this is not a production setup, let me know and we could try a few more things as well.
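For the mechanics, the upgrade amounted to editing the image tags in docker-compose.yml from 7.10 to 7.17.6 and recreating the containers. A minimal sketch, assuming the tags in the file below have already been bumped:
docker-compose pull
docker-compose up -d
Compose only recreates the containers whose configuration changed, and the named data volumes survive the recreation. Here is the compose file I am running: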
version: '2.2'
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.6
container_name: es01
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- cluster.initial_master_nodes=es01
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kib01:
image: docker.elastic.co/kibana/kibana:7.17.6
container_name: kib01
ports:
- 5601:5601
environment:
ELASTICSEARCH_URL: http://es01:9200
ELASTICSEARCH_HOSTS: '["http://es01:9200"]'
networks:
- elastic
volumes:
data01:
driver: local
data02:
driver: local
data03:
driver: local
networks:
elastic:
driver: bridge
While accessing the DB, it threw me this error:
MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
How do I fix it so my application can make a connection to the database? As you can see in the code, my application relies on multiple databases. How can I make sure that all of the database containers have started before the application starts?
version: '3.8'
networks:
appnetwork:
driver: bridge
services:
mysql:
image: mysql:8.0.27
restart: always
command: --init-file /data/application/init.sql
environment:
- MYSQL_ROOT_PASSWORD=11999966
- MYSQL_DATABASE=interview
- MYSQL_USER=interviewuser
- MYSQL_PASSWORD=11999966
ports:
- 3306:3306
volumes:
- db:/var/lib/mysql
- ./migration/init.sql:/data/application/init.sql
networks:
- appnetwork
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
restart: always
ports:
- 9200:9200
environment:
- xpack.security.enabled=false
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
volumes:
- elastic:/usr/share/elasticsearch/data
networks:
- appnetwork
redis:
image: redis
restart: always
ports:
- 6379:6379
volumes:
- cache:/var/lib/redis
networks:
- appnetwork
mongodb:
image: mongo
restart: always
ports:
- 27017:27017
volumes:
- mongo:/var/lib/mongo
networks:
- appnetwork
app:
depends_on:
- mysql
- elasticsearch
- redis
- mongodb
build: .
restart: always
ports:
- 3000:3000
networks:
- appnetwork
stdin_open: true
tty: true
command: npm start
volumes:
db:
elastic:
cache:
mongo:
The container (probably app) tries to connect to a mongodb instance running on localhost (i.e. the container itself). Since there is nothing listening on port 27017 of this container, we get the error.
We can fix the problem by reconfiguring the application running in the container to use the name of the mongodb container (which, in the given docker-compose.yml, is also mongodb) instead of 127.0.0.1 or localhost.
If we have designed our app according to the 12 factors, it should be as simple as setting an environment variable for the container.
Use mongodb://mongodb:27017 as the connection string instead.
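Following the 12-factor point above, a minimal sketch of wiring that through the compose file (MONGO_URL is a made-up variable name; the app would read it from the environment instead of hard-coding 127.0.0.1):
app:
  environment:
    - MONGO_URL=mongodb://mongodb:27017
As for the second half of the question (making sure the databases are actually ready, not merely created): plain depends_on only waits for containers to be started. If your compose version supports the long depends_on form, a healthcheck can gate startup; a hedged sketch:
mongodb:
  image: mongo
  healthcheck:
    # mongosh ships in recent mongo images; older ones use the mongo shell
    test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
    interval: 10s
    retries: 5
app:
  depends_on:
    mongodb:
      condition: service_healthy
(Note that docker-compose 1.x rejects condition: under a version: '3.x' file; Docker Compose v2 accepts it.)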
I received the following error when attempting to connect Dockerized fscrawler to Dockerized elasticsearch:
[f.p.e.c.f.c.ElasticsearchClientManager] failed to create
elasticsearch client, disabling crawler… [f.p.e.c.f.FsCrawler] Fatal
error received while running the crawler: [Connection refused]
When fscrawler is run for the first time (i.e., docker-compose run fscrawler) it creates /config/{fscrawler_job}/_settings.yml with the following default setting:
elasticsearch:
nodes:
- url: "http://127.0.0.1:9200"
This causes fscrawler to attempt to connect to localhost (i.e., 127.0.0.1). However, that fails when fscrawler is located within a docker container, because it is attempting to connect to the localhost of the CONTAINER. This was particularly confusing in my case because elasticsearch WAS accessible as localhost, but on the localhost of my physical computer (and NOT the localhost of the container). Changing the url allowed fscrawler to connect to the network address where elasticsearch actually resides.
elasticsearch:
nodes:
- url: "http://elasticsearch:9200"
I used the following docker image: https://hub.docker.com/r/toto1310/fscrawler
# FILE: docker-compose.yml
version: '2.2'
services:
# FSCrawler
fscrawler:
image: toto1310/fscrawler
container_name: fscrawler
volumes:
- ${PWD}/config:/root/.fscrawler
- ${PWD}/data:/tmp/es
networks:
- esnet
command: fscrawler job_name
# Elasticsearch Cluster
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
container_name: elasticsearch
environment:
- node.name=elasticsearch
- discovery.seed_hosts=elasticsearch2
- cluster.initial_master_nodes=elasticsearch,elasticsearch2
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- esdata01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- esnet
elasticsearch2:
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
container_name: elasticsearch2
environment:
- node.name=elasticsearch2
- discovery.seed_hosts=elasticsearch
- cluster.initial_master_nodes=elasticsearch,elasticsearch2
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- esdata02:/usr/share/elasticsearch/data
networks:
- esnet
volumes:
esdata01:
driver: local
esdata02:
driver: local
networks:
esnet:
Ran docker-compose up elasticsearch elasticsearch2 to bring up the elasticsearch nodes.
Ran docker-compose run fscrawler to create _settings.yml
Edited _settings.yml to
elasticsearch:
nodes:
- url: "http://elasticsearch:9200"
Started fscrawler with docker-compose up fscrawler.
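To sanity-check the cluster between steps, curl http://localhost:9200/_cat/nodes?v (port 9200 is published to the host in the compose file above) should list both elasticsearch and elasticsearch2 before fscrawler is started.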
Rancher v 1.6.10, Docker v 17.06.2-ce
I'm deploying a stack via the Rancher UI that contains docker containers, one of which runs an app that connects to Dropbox via the internet. But the app isn't able to access the internet.
However, if I don't use Rancher and simply run docker-compose up natively, then it all works fine.
The networking that Rancher creates appears to be the problem, I guess.
Can anyone advise, please?
My docker compose file:
version: '2'
services:
elasticsearch1:
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
container_name: es1
environment:
- cluster.name=idc-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.security.enabled=false
ulimits:
memlock:
soft: -1
hard: -1
mem_limit: 1g
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- docker-elk
idcdb:
image: postgres:9.6
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=DriveMe
- POSTGRES_USER=idc
- POSTGRES_DB=idc
volumes:
- pgdata:/var/lib/db
idcredis:
image: redis:4.0
environment:
- ALLOW_EMPTY_PASSWORD=yes
ports:
- '6379:6379'
volumes:
- redisdata:/var/lib/redis
booking-service:
environment:
- PORT=8085
- PROFILE=integration
ports:
- 8085:8085
image: idc/idc-booking-service
depends_on:
- idcdb
- idcredis
notification-service:
environment:
- PORT=8087
- PROFILE=integration
ports:
- 8087:8087
image: idc/idc-notification-service
depends_on:
- idcredis
analytics-service:
environment:
- PORT=8088
- PROFILE=integration
ports:
- 8088:8088
image: idc/idc-analytics-service
depends_on:
- idcredis
- elasticsearch1
kibana:
image: docker.elastic.co/kibana/kibana:5.6.3
environment:
- "ELASTICSEARCH_URL=http://elasticsearch1:9200"
networks:
- docker-elk
volumes:
pgdata: {}
redisdata: {}
esdata1:
driver: local
networks:
docker-elk:
driver: bridge
You should specify the network when starting Docker:
--net=host
If this does not solve your problem:
sudo gedit /etc/NetworkManager/NetworkManager.conf
comment out the following line:
#dns=dnsmasq
then
sudo restart network-manager
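If the stack is started with docker-compose rather than docker run, the --net=host flag corresponds to network_mode (a sketch only; host networking replaces the bridge network, so it conflicts with the networks: key and published ports: are ignored):
services:
  elasticsearch1:
    network_mode: host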
You could use a Rancher LB and add it to your application as follows:
In the stack where you application is you will have to click on Add Service button and select Add a Load Balancer
Then make sure that where it says Access, it is set to Public.
In the Request Host you will have to add the desired URL such as: mylocal.dev
Then add port 80 so it will be accessible from the outside world on port 80.
Select the service you want the LB to apply to and the internal application port.
That's all :) Now you should be able to connect to mylocal.dev from the outside world.