ElasticSearch Unable to revive connection: http://elasticsearch:9200/ - docker

I tried to run ELK on CentOS 8 with docker-compose. Here is my docker-compose.yml:
version: '3.1'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    hostname: elasticsearch
    ports:
      - "9200:9200"
    expose:
      - "9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - docker-network
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    container_name: kibana
    ports:
      - "5601:5601"
    expose:
      - "5601"
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_HOST=elasticsearch
      - ELASTICSEARCH_PORT=9200
      - ELASTIC_PWD=changeme
      - KIBANA_PWD=changeme
    depends_on:
      - elasticsearch
    networks:
      - docker-network
networks:
  docker-network:
    driver: bridge
volumes:
  elasticsearch-data:
but I'm facing this error:
{"type":"log","@timestamp":"2020-03-03T22:53:19Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
But I have checked the following:
- elasticsearch is running fine.
- docker exec kibana ping elasticsearch works fine.
- both kibana and elasticsearch are on the same network, as you can see in docker-compose.yml.
- I checked docker exec kibana curl http://elasticsearch:9200 and the result is:
Failed connect to elasticsearch:9200; No route to host
I have also checked other similar questions and their solutions, but none of them worked.
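Since "No route to host" is a routing/firewall failure rather than a DNS failure (the ping resolves), it can be worth ruling out host-level packet filtering; on CentOS 8, firewalld's nftables backend is a known source of blocked inter-container traffic. A hedged diagnostic sketch (standard docker and firewall-cmd invocations; whether firewalld is the culprit on this host is an assumption, and the network/interface names are placeholders):

# confirm both containers are attached to the compose network (name is usually <project>_docker-network)
docker network inspect <project>_docker-network --format '{{range .Containers}}{{.Name}} {{end}}'
# if firewalld is active, try trusting the compose bridge interface (it shows up as br-<id> in `ip link`)
sudo firewall-cmd --permanent --zone=trusted --add-interface=br-<id>
sudo firewall-cmd --reload
sudo systemctl restart docker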

If you are running Elasticsearch inside Docker, check whether you have allocated a sufficient memory limit to Docker. Too little memory can cause Elasticsearch to slow down and even crash.
By default, Docker Desktop allows 2 GB of RAM for containers; in my own project I found that 4 GB prevented crashing, and 5 GB produced an additional performance speedup. Your mileage may vary depending on the amount of data you are ingesting.
Docker Desktop memory settings can be set via:
Docker Desktop -> Preferences -> Resources -> Memory
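Independently of the Docker Desktop limit, you can also pin Elasticsearch's JVM heap explicitly through the ES_JAVA_OPTS environment variable, which the official image honors; a minimal sketch (the 2g value is illustrative, and the heap should stay well below the memory given to Docker):

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
  environment:
    - "ES_JAVA_OPTS=-Xms2g -Xmx2g"  # fixed heap; leave headroom for off-heap usage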
To inspect memory usage within the Docker container:
# grabs the ID of the last container listed by `docker ps`; adjust if that isn't the Elasticsearch container
DOCKER_ID=`docker ps | tail -n1 | awk '{ print $1 }'`; docker exec -it $DOCKER_ID /bin/bash
free -h  # run repeatedly to inspect changes over time
Note that Elasticsearch memory usage peaks during ingest and indexing and eventually settles down to a slightly lower level once indexing and consolidation are complete. So peak memory usage should ideally be measured during ingest.
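To watch heap pressure from Elasticsearch's own side during ingest, the standard _cat/nodes API can complement free -h (assumes the default 9200 port mapping):

# heap.percent pinned near 100 during ingest suggests the memory limit is too low
curl 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent'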

Related

How to change command-line-flags of prometheus in a docker container

I followed this guide (https://www.jeffgeerling.com/blog/2021/monitor-your-internet-raspberry-pi) by Jeff Geerling to install an internet monitoring dashboard using Prometheus and Grafana running in Docker containers.
Everything works great, but I noticed that the data gets deleted after 15 days. After a quick search I found out that this is the default storage retention setting in Prometheus.
I have tried a lot by myself but cannot find a way to change this setting.
I also found this tutorial (https://mkezz.wordpress.com/2017/11/13/prometheus-command-line-flags-in-docker-service/), which as far as I can tell should tackle exactly my problem, but it doesn't work: when running the first command mentioned, I get Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
I also found this question (Increasing Prometheus storage retention), but I cannot use the top answer because my Prometheus is running in a Docker container.
Is there an easy way to set a command-line flag for Prometheus, something like --storage.tsdb.retention.time=30d?
This is the README file I downloaded when I first installed it:
# Internet Monitoring Docker Stack with Prometheus + Grafana
> This repository is a fork from [maxandersen/internet-monitoring](https://github.com/maxandersen/internet-monitoring), tailored for use on a Raspberry Pi. It has only been tested on a Raspberry Pi 4 running Pi OS 64-bit beta.
Stand-up a Docker [Prometheus](http://prometheus.io/) stack containing Prometheus, Grafana with [blackbox-exporter](https://github.com/prometheus/blackbox_exporter), and [speedtest-exporter](https://github.com/MiguelNdeCarvalho/speedtest-exporter) to collect and graph home Internet reliability and throughput.
## Pre-requisites
Make sure Docker and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your Docker host machine.
## Quick Start
```
git clone https://github.com/geerlingguy/internet-monitoring
cd internet-monitoring
docker-compose up -d
```
Go to [http://localhost:3030/d/o9mIe_Aik/internet-connection](http://localhost:3030/d/o9mIe_Aik/internet-connection) (change `localhost` to your docker host ip/name).
## Configuration
To change which hosts you ping, edit the `targets` section in the [/prometheus/pinghosts.yaml](./prometheus/pinghosts.yaml) file.
For the speedtest, the only relevant configuration is how often you want the check to run. The default is every 30 minutes, which might be too often if you have a download limit. This is changed by editing `scrape_interval` under `speedtest` in [/prometheus/prometheus.yml](./prometheus/prometheus.yml), as in the sketch below.
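For example, a speedtest scrape job with an hourly interval might look like this (illustrative values, not the repo's exact file):

```
scrape_configs:
  - job_name: 'speedtest'
    scrape_interval: 60m  # default in this repo is 30m
```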
Once configurations are done, run the following command:
$ docker-compose up -d
That's it. docker-compose builds the entire Grafana and Prometheus stack automagically.
The Grafana Dashboard is now accessible via: `http://<Host IP Address>:3030` for example http://localhost:3030
username - admin
password - wonka (Password is stored in the `config.monitoring` env file)
The DataSource and Dashboard for Grafana are automatically provisioned.
If all works, it should be available at http://localhost:3030/d/o9mIe_Aik/internet-connection - if no data shows up, try changing the time duration to something smaller.
<center><img src="images/dashboard.png" width="4600" height="500"></center>
## Interesting urls
http://localhost:9090/targets shows the status of monitored targets as seen from Prometheus - in this case, which hosts are being pinged and the speedtest. Note: speedtest will take a while before it shows as UP, as it takes about 30s to respond.
http://localhost:9090/graph?g0.expr=probe_http_status_code&g0.tab=1 shows the Prometheus value for `probe_http_status_code` for each host. You can edit/play with additional values. Useful to check everything is okay in Prometheus (in case Grafana is not showing the data you expect).
http://localhost:9115 blackbox exporter endpoint. Lets you see what has failed/succeeded.
http://localhost:9798/metrics speedtest exporter endpoint. Takes about 30 seconds to show its result, as it runs an actual speedtest when requested.
## Thanks and a disclaimer
Thanks to @maxandersen for making the original project this fork is based on.
Thanks to @vegasbrianc for his work on making a [super easy docker](https://github.com/vegasbrianc/github-monitoring) stack for running Prometheus and Grafana.
This setup is not secured in any way, so please only use on non-public networks, or find a way to secure it on your own.
After further tinkering, I found the docker-compose.yml file and simply added --storage.tsdb.retention.time=30d under the command section of prometheus, as shown here:
version: "3.1"
volumes:
prometheus_data: {}
grafana_data: {}
networks:
front-tier:
back-tier:
services:
prometheus:
image: prom/prometheus:v2.25.2
restart: always
volumes:
- ./prometheus/:/etc/prometheus/
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
- '--storage.tsdb.retention.time=30d'
ports:
- 9090:9090
links:
- ping:ping
- speedtest:speedtest
networks:
- back-tier
grafana:
image: grafana/grafana
restart: always
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/:/etc/grafana/provisioning/
depends_on:
- prometheus
ports:
- 3030:3000
env_file:
- ./grafana/config.monitoring
networks:
- back-tier
- front-tier
ping:
tty: true
stdin_open: true
expose:
- 9115
ports:
- 9115:9115
image: prom/blackbox-exporter
restart: always
volumes:
- ./blackbox/config:/config
command:
- '--config.file=/config/blackbox.yml'
networks:
- back-tier
speedtest:
tty: true
stdin_open: true
expose:
- 9798
ports:
- 9798:9798
image: miguelndecarvalho/speedtest-exporter
restart: always
networks:
- back-tier
nodeexp:
privileged: true
image: prom/node-exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
ports:
- 9100:9100
restart: always
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
networks:
- back-tier
Then, after running docker-compose create followed by docker start internet-monitoring_prometheus_1, I can see Storage Retention: 30 days on [Hostname of Server]:9090/status.
Is that the way it should be done? I think I found my solution.
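For what it's worth, a plain docker-compose up -d should achieve the same thing, since Compose recreates any container whose configuration has changed:

docker-compose up -d            # recreates services whose config changed
docker-compose up -d prometheus # or limit the recreation to one service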

Downgrading elasticsearch in docker-compose.yml leads to license error

I have the following docker-compose.yml file:
...
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
  container_name: elasticsearch-cust-comp
...
I've previously run it with another elasticsearch version:
...
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
  container_name: elasticsearch-cust-comp
...
Since changing the version to 5.4.3, I'm getting this error in docker logs for the Elasticsearch container:
Unknown license version found, please upgrade all nodes to the latest elasticsearch-license plugin
My guess is that version 6.5.4 of Elasticsearch is still running somewhere and that it creates issues when I'm trying to run 5.4.3. But as far as I know, I've shut down all Elasticsearch containers currently running with docker-compose down, and docker ps shows no processes after this. Still, when I run docker-compose up -d with version 5.4.3, it gives me this error. Running 6.5.4 works fine. What do I need to do to be able to run version 5.4.3?
EDIT:
This is the whole part regarding elasticsearch in the docker-compose.yml. As you can see, xpack is already disabled:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
  container_name: elasticsearch-cust-comp
  ports:
    - 9200:9200
    - 9300:9300
  volumes:
    - cust-comp-elastic:/usr/share/elasticsearch/data
    - ./cust/externalConfig/elasticsearch/config/hunspell/:/usr/share/elasticsearch/config/hunspell/
    # - ./config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    # - ./fwhome/elasticsearch/cust-comp:/usr/share/elasticsearch/config/cust-comp
  networks:
    - cust-comp
  environment:
    - cluster.name=i3-elasticsearch
    - xpack.security.enabled=false
    - xpack.monitoring.enabled=false
    - xpack.ml.enabled=false
    - xpack.graph.enabled=false
    - xpack.watcher.enabled=false
  restart: unless-stopped
When in doubt, try:
docker system prune
This will clean up unused containers and images.
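If pruning doesn't help: the named volume cust-comp-elastic still holds data (including cluster/license state) written by 6.5.4, and Elasticsearch does not support downgrading a data directory, so a 5.4.3 node will refuse it. If that data is disposable, removing the volume is one way out (a sketch; the exact volume name is prefixed with your compose project name):

docker-compose down
docker volume ls | grep cust-comp-elastic   # find the exact volume name
docker volume rm <project>_cust-comp-elastic
docker-compose up -d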

Using rabbitmq with docker in production

I currently have a small server running in a Docker container; the server uses RabbitMQ, which is run by docker-compose using the DockerHub image.
It is running nicely, but I'm worried that it may not be properly configured for production (production being a single server, without clustering or anything fancy). In particular, I'm worried about the disk space limit described in the RabbitMQ production checklist.
I'm not sure how to configure these things through docker-compose, as the environment variables defined by the image seem quite limited.
My docker-compose file:
version: '3.4'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - rabbitmq:/var/lib/rabbitmq
    restart: always
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=secretpassword
  my-server:
    # server config here
volumes:
  rabbitmq:
networks:
  server-network:
    driver: bridge
disk_free_limit is set in /etc/rabbitmq/rabbitmq.conf; there seems to be no environment variable for it.
So you just need to override rabbitmq.conf with your own file, using a Docker bind-mounted volume, to achieve your aim.
For your case, if you enter the rabbitmq container, you can see:
shubuntu1@shubuntu1:~$ docker exec some-rabbit cat /etc/rabbitmq/rabbitmq.conf
loopback_users.guest = false
listeners.tcp.default = 5672
So you just need to add disk_free_limit.absolute = 1GB to a local rabbitmq.conf and mount it into the container to override the default configuration. Full example below:
rabbitmq.conf:
loopback_users.guest = false
listeners.tcp.default = 5672
disk_free_limit.absolute = 1GB
docker-compose.yaml:
version: '3.4'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - rabbitmq:/var/lib/rabbitmq
      - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
volumes:
  rabbitmq:
networks:
  server-network:
    driver: bridge
Check that it has taken effect:
$ docker-compose up -d
$ docker-compose logs rabbitmq | grep "Disk free limit"
rabbitmq_1 | 2019-07-30 04:51:40.609 [info] <0.241.0> Disk free limit set to 1000MB
You can see the disk free limit is now set to 1GB.
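As an alternative to the absolute value, RabbitMQ also supports a relative limit scaled to the machine's RAM, which the production checklist discusses; a sketch of the same rabbitmq.conf using it:

loopback_users.guest = false
listeners.tcp.default = 5672
disk_free_limit.relative = 2.0  # keep at least 2x total RAM free on disk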

docker-compose.yml for elasticsearch 7.0.1 and kibana 7.0.1

I am using Docker Desktop with Linux containers on Windows 10 and would like to launch the latest versions of the elasticsearch and kibana containers via a Docker Compose file.
Everything works fine when using an older version like 6.2.4.
This is the working docker-compose.yml file for 6.2.4.
version: '3.1'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - docker-network
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - docker-network
networks:
  docker-network:
    driver: bridge
volumes:
  elasticsearch-data:
I deleted all installed Docker containers and adapted the docker-compose.yml file by changing 6.2.4 to 7.0.1.
When starting the new compose file, everything looks fine at first; both the elasticsearch and kibana containers are started. But after a couple of seconds the elasticsearch container exits (the kibana container keeps running). I restarted everything, attached a terminal to the elasticsearch container and saw the following error message:
...
ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
...
What must be changed in the docker-compose.yml file to get elasticsearch 7.0.1 working?
Making a few changes worked for me:
1. Add cluster.initial_master_nodes to the elasticsearch service in the compose file:
environment:
  - cluster.initial_master_nodes=elasticsearch
2. Set the vm.max_map_count kernel setting on the Linux box to at least 262144:
$ sudo sysctl -w vm.max_map_count=262144
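Note that sysctl -w only lasts until the next reboot; to make it persistent, the standard mechanism is a sysctl config file (path conventions vary slightly by distro):

echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system  # reload all sysctl configuration files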
For development mode, you can use the following setting instead:
environment:
  - discovery.type=single-node
This compose file works for me:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: es01
    environment:
      - cluster.initial_master_nodes=es01
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200
For production mode, you should consider running multiple ES nodes/containers, as suggested in the official documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/7.0/docker.html#docker-cli-run-prod-mode
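One note on the compose file above: ports: - 9200 without a host part publishes container port 9200 on a random ephemeral host port. To find where it landed (standard Docker CLI):

docker port es01 9200   # prints e.g. 0.0.0.0:32768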

Mapping ports in docker-compose file doesn't work. Network unreachable

I'm trying to map a port from my container to a port on the host, following the docs, but it doesn't appear to be working.
After I run docker-compose -f development.yml up --force-recreate I get no errors. But if I try to reach the frontend service using localhost:8081, the network is unreachable.
I used docker inspect to view the IP and tried to ping that, and still nothing.
Here is the docker-compose file I am using. Am I doing anything wrong?
development.yml
version: '3'
services:
  frontend:
    image: nginx:latest
    ports:
      - "8081:80"
    volumes:
      - ./frontend/public:/var/www/html
  api:
    image: richarvey/nginx-php-fpm:latest
    ports:
      - "8080:80"
    restart: always
    volumes:
      - ./api:/var/www/html
    environment:
      APPLICATION_ENV: development
      ERRORS: 1
      REMOVE_FILES: 0
    links:
      - db
      - mq
  db:
    image: mariadb
    restart: always
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: dEvE10pMeNtMoDeBr0
  mq:
    image: rabbitmq:latest
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: developer
      RABBITMQ_DEFAULT_PASS: dEvE10pMeNtMoDeBr0
You are using Docker Toolbox. Docker Toolbox uses Docker Machine. On Windows with Docker Toolbox, you are running under a VirtualBox VM with its own IP, so localhost is not where your containers live. You will need to go to 192.168.99.100:8081 to find your frontend.
As per the documentation on Docker Machine (https://docs.docker.com/machine/get-started/#run-containers-and-experiment-with-machine-commands):
$ docker-machine ip default
192.168.99.100
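Combining the two, you can test the frontend straight from the host shell (assuming the default machine name):

curl "http://$(docker-machine ip default):8081"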
