Kibana data is lost after docker-compose down - docker

I'm using the ELK Docker images, with the docker-compose file below to start the ELK containers and store the data in a volume.
version: '3.5'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
Kibana version 6.6.0
I used the command below to start:
docker-compose up -d
All three containers come up and run fine; I can publish my data into Kibana and visualize it.
I then had to bring this compose stack down and start it up again. But when I do that, all the earlier Kibana data is lost.
docker-compose down
Is there any way I can store those records permanently on the machine (somewhere on the Linux box) as a backup, or in some database?
Please help me with this.

You have to mount the Elasticsearch data directory (/usr/share/elasticsearch/data) into a persistent Docker volume, like this:
elasticsearch:
  build:
    context: elasticsearch/
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    - ./es_data:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
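If bind-mounting ./es_data runs into host permission problems (the elasticsearch user inside the container, typically uid 1000, needs write access to that directory), a named volume managed by Docker is a common alternative. A minimal sketch, where es_data is an arbitrary volume name:

elasticsearch:
  build:
    context: elasticsearch/
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    - es_data:/usr/share/elasticsearch/data  # named volume, survives docker-compose down

# at the top level of the compose file
volumes:
  es_data:

Either way the indices survive a plain docker-compose down; note that docker-compose down -v would delete named volumes along with the containers.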

Related

Logging to Elasticsearch is not working when pointing to the Docker URL in the API Docker image

Logging to Elasticsearch works fine when debugging locally, with the app connecting to the Elasticsearch URL at https://localhost:9200, but the dotnet.monitoring.api Docker image fails to connect to the Elasticsearch Docker image at http://elasticsearch:9200.
Below is the docker-compose file.
version: '3.4'
services:
  productdb:
    container_name: productdb
    environment:
      SA_PASSWORD: "SwN12345678"
      ACCEPT_EULA: "Y"
    restart: always
    ports:
      - "1433:1433"
  elasticsearch:
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - xpack.security.enabled=false
      - "discovery.type=single-node"
    networks:
      - es-net
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - es-net
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
  dotnet.monitoring.api:
    container_name: dotnet.monitoring.api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "ElasticConfiguration:Url=http://elasticsearch:9200/"
      - "ConnectionStrings:Product=server=productdb;Database=ProductDb;User Id=sa;Password=SampleP#&&W0rd;TrustServerCertificate=True;"
    depends_on:
      - productdb
      - kibana
    ports:
      - "8001:80"
volumes:
  data01:
    driver: local
networks:
  es-net:
    driver: bridge
Your API image is not in the same network. Make es-net the default:
networks:
  default:
    name: es-net
    external: true
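Alternatively, if you'd rather keep the default network untouched, attach the API service to es-net explicitly. A sketch showing only the keys to add, with the rest of the service definition unchanged:

dotnet.monitoring.api:
  # ...existing settings...
  networks:
    - es-net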

Running Kibana in a Docker image gives a non-root error

I'm having issues attempting to set up the ELK stack (v7.6.0) in Docker using docker-compose.
Elasticsearch & Logstash start up fine, but Kibana exits instantly; the Docker logs for that container report:
Kibana should not be run as root. Use --allow-root to continue.
The docker-compose for those elements looks like this:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
  environment:
    - discovery.type=single-node
  ports:
    - 9200:9200
  mem_limit: 2gb
kibana:
  image: docker.elastic.co/kibana/kibana:7.6.0
  environment:
    - discovery.type=single-node
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
logstash:
  image: docker.elastic.co/logstash/logstash:7.6.0
  ports:
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  mem_limit: 2gb
  depends_on:
    - elasticsearch
How do I either disable the run-as-root error, or set the application to not run as root?
In case you don't have a separate Dockerfile for Kibana and you just want to set the startup arg in docker-compose, the syntax there is as follows:
kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:7.9.2
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  environment:
    - ELASTICSEARCH_URL=http://localhost:9200
  networks:
    - elastic
  entrypoint: ["./bin/kibana", "--allow-root"]
That works around the problem of running it on a Windows Container.
That being said, I don't know why Kibana should not be executed as root.
I've just run this Docker image and everything works perfectly; I'm sharing my docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - node.name=node
      - cluster.name=elasticsearch-default
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ports:
      - "9200:9200"
    expose:
      - "9200"
    networks:
      - "esnet"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    container_name: kibana
    ports:
      - "5601:5601"
    expose:
      - "5601"
    networks:
      - "esnet"
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.0
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    depends_on:
      - elasticsearch
    networks:
      - "esnet"
networks:
  esnet:
I had the same problem. I got it running by adding an ENTRYPOINT at the end of the Kibana Dockerfile.
ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
ENTRYPOINT ["./bin/kibana", "--allow-root"]
The docker-compose.yml
version: '3.2'

# Elastic stack (ELK) on Docker https://github.com/deviantony/docker-elk
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5002:5002/tcp"
      - "5002:5002/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/  # the Dockerfile shown above
      args:
        ELK_VERSION: $ELK_VERSION
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: nat
volumes:
  elasticsearch:
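Note: $ELK_VERSION in the build args is resolved by Compose from the shell environment or from an .env file sitting next to the docker-compose.yml (for example, a line ELK_VERSION=7.6.0).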

How to control the service start-up sequence in a docker-compose.yml file?

Here is my docker-compose.yml file
version: '3'
services:
  mongodb:
    image: hedge-mongo:1.0
    container_name: mongodb
    ports:
      - "27017:27017"
    networks:
      - appNetwork
  service-A:
    image: service-A:1.0
    container_name: service-A
    ports:
      - "3030:3030"
    networks:
      - appNetwork
    depends_on:
      - mongodb
  service-B:
    image: service-B:1.0
    container_name: service-B
    ports:
      - "4200:4200"
    networks:
      - appNetwork
  service-C:
    image: service-C:1.0
    container_name: service-C
    ports:
      - "8082:8082"
    networks:
      - appNetwork
networks:
  appNetwork:
    external: true
In the above file, service-A and service-C try to connect to MongoDB.
Issue:
service-A and service-C attempt to connect to MongoDB before MongoDB is up,
so I want to run service-A and service-C only after MongoDB is completely up.
I tried adding depends_on for service-A and service-C:
service-A:
  image: service-A:1.0
  container_name: service-A
  ports:
    - "3030:3030"
  networks:
    - appNetwork
  depends_on:
    - mongodb
service-C:
  image: service-C:1.0
  container_name: service-C
  ports:
    - "8082:8082"
  networks:
    - appNetwork
  depends_on:
    - mongodb
Still, it did not work out: sometimes it works and sometimes it doesn't, so it's not reliable.
I want to make sure my other services are up only after MongoDB is up.
That is a race condition: your services are all trying to start at the same time, which is why it sometimes succeeds and sometimes fails. depends_on only waits for the container to start, not for the application inside it to become ready. Here are examples of how to solve the race condition (see also the healthcheck sketch below):
example 1
example 2
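One widely used approach is to give mongodb a healthcheck and have the dependents wait for it to report healthy. A minimal sketch; note that the long-form depends_on condition is honored by compose file format 2.1 and by the newer Compose Specification (docker compose v2), but not by classic version: '3' files, and the mongo shell command is an assumption about what is inside the hedge-mongo image (recent MongoDB images ship mongosh instead):

services:
  mongodb:
    image: hedge-mongo:1.0
    healthcheck:
      # succeeds only once mongod answers queries (assumes the mongo shell exists in the image)
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  service-A:
    image: service-A:1.0
    depends_on:
      mongodb:
        condition: service_healthy  # wait for the healthcheck, not just container start

Another option is a wait script baked into the dependent images (e.g. wait-for-it.sh) that polls mongodb:27017 before launching the application.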

Docker Elastic stack: "unable to revive connection" error

I have a docker-compose file that looks like this:
version: '3'
services:
  redis:
    build: ./docker/redis
  postgresql:
    build: ./docker/postgresql
    ports:
      - "5433:5432"
    env_file:
      - .env
  graphql:
    build: .
    command: npm run start
    volumes:
      - ./logs/:/usr/app/logs/
    ports:
      - "3000:3000"
    env_file:
      - .env
    depends_on:
      - "redis"
      - "postgresql"
    links:
      - "redis"
      - "postgresql"
  elasticsearch:
    build: ./docker/elasticsearch
    container_name: elasticsearch
    ports:
      - "9200:9200"
    depends_on:
      - "graphql"
    links:
      - "kibana"
  kibana:
    build: ./docker/kibana
    ports:
      - "5601:5601"
    depends_on:
      - "graphql"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  metricbeat:
    build: ./docker/metricbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  packetbeat:
    build: ./docker/packetbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  logstash:
    build: ./docker/logstash
    ports:
      - "9600:9600"
    volumes:
      - ./logs:/usr/logs
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
networks:
  elastic:
    driver: bridge
When I run docker-compose build and docker-compose up, I get "unable to revive connection: http://elasticsearch:9200" from every container. I don't think any of the containers are able to talk to each other right now. However, it really feels like everything should work, because I have exposed all the ports for the Elastic components, put them on the same network, and the URL points to the correct alias. What am I doing wrong?
The Dockerfile settings are all correct, as each container runs correctly in isolation; they are just not able to talk to each other at all.

Unable to connect a Docker container to Logstash via the gelf driver

Hi guys, I'm having trouble sending my server container logs to my ELK stack. No input reaches Logstash, so I'm unable to set up a Kibana index for collecting logs. I think my problem is in the port settings.
Here is the docker-compose yml for the LAMP stack (only the server service):
version: '3'
services:
  server:
    build: ./docker/apache
    links:
      - fpm
    ports:
      - 80:80   # HTTP
      - 443:443 # HTTPS
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://127.0.0.1:5000"
        tag: "server"
And here is the docker-compose yml for the ELK stack, based on the deviantony/docker-elk GitHub project:
version: '2'
services:
  elasticsearch:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
I've found the mistake: I have to specify the UDP protocol in the logstash service port definition.
logstash:
  build: logstash/
  volumes:
    - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    - ./logstash/pipeline:/usr/share/logstash/pipeline
  ports:
    - "5000:5000/udp"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
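For this to work, the Logstash pipeline also needs a gelf input listening on that port. A minimal sketch, where the file name gelf.conf and the elasticsearch output are illustrative assumptions rather than the project's actual pipeline:

# logstash/pipeline/gelf.conf (hypothetical file name)
input {
  gelf {
    port => 5000  # must match the UDP port published by the logstash service
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}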
You need to use the gelf input plugin. Here is an example of a functioning compose file:
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.1
    logging:
      driver: "json-file"
    networks:
      - logging
    ports:
      - "127.0.0.1:12201:12201/udp"
    entrypoint: logstash -e 'input { gelf { } } output { stdout{ } }'
You can test it by running:
docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12201 ubuntu /bin/sh -c 'while true; do date "+%d-%m-%Y %H:%M:%S:%3N"; sleep 1 ; done'
and checking docker logs on the logstash container.
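One thing worth knowing: the gelf logging driver runs inside the Docker daemon on the host, not inside the emitting container, so gelf-address must be reachable from the host itself. That is why udp://127.0.0.1:12201 works here: it targets the port published on the host's loopback interface.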
