Unable to connect Docker container to Logstash via GELF driver

Hi guys, I'm having trouble sending my server container's logs to my ELK stack. No input reaches Logstash, so I can't set up a Kibana index to collect the logs. I think the problem is in my port settings.
Here is the docker-compose.yml for the LAMP stack (only the server service):
version: '3'
services:
  server:
    build: ./docker/apache
    links:
      - fpm
    ports:
      - 80:80   # HTTP
      - 443:443 # HTTPS
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://127.0.0.1:5000"
        tag: "server"
And here is the docker-compose.yml for the ELK stack, based on the deviantony/docker-elk GitHub project:
version: '2'
services:
  elasticsearch:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

I've found the mistake: I have to specify the UDP protocol in the logstash service's port definition.
logstash:
  build: logstash/
  volumes:
    - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    - ./logstash/pipeline:/usr/share/logstash/pipeline
  ports:
    - "5000:5000/udp"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
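For the messages to actually be ingested, the Logstash pipeline also has to listen for GELF on that port. A minimal pipeline sketch, assuming the deviantony/docker-elk layout where pipeline files live under ./logstash/pipeline/ (adjust the output to your setup):
input {
  # listen for GELF datagrams from the docker gelf log driver
  gelf {
    port => 5000   # must match the published 5000/udp port above
  }
}
output {
  # "elasticsearch" resolves over the elk network defined in the compose file
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}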

You need to use the gelf input plugin. Here is an example of a working compose file:
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.1
    logging:
      driver: "json-file"
    networks:
      - logging
    ports:
      - "127.0.0.1:12201:12201/udp"
    entrypoint: logstash -e 'input { gelf { } } output { stdout{ } }'
You can test it by running:
docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12201 ubuntu /bin/sh -c 'while true; do date "+%d-%m-%Y %H:%M:%S:%3N"; sleep 1; done'
and then checking the docker logs of the Logstash container.
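For example (the container name depends on your compose project; docker ps shows the actual one):
docker logs -f $(docker ps -q --filter name=logstash)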

Related

Not receiving any data in Kibana from Elasticsearch/Logstash. ELK Stack made in docker-compose

I'm trying to build my first ELK stack in Docker for a school project.
The goal is to have an Elasticsearch container along with its volume,
a Logstash container linked to Elasticsearch along with its volume as well,
a Filebeat container to send data to Logstash,
and finally a Kibana container to visualize the logs.
The problem I'm having is that even though everything seems to be configured and working, Kibana isn't receiving anything from Elasticsearch. When I try to add a new index pattern I'm faced with this message: "No data streams, indices, or index aliases match your index pattern."
And nothing at all is visible in Index Management.
Here's the docker-compose.yml:
version: '3.6'
services:
  Elasticsearch:
    image: elasticsearch:8.5.3
    container_name: elasticsearch
    restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      ES_JAVA_OPTS: "-Xmx750m -Xms750m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  Logstash:
    image: logstash:8.5.3
    container_name: logstash
    restart: always
    volumes:
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf:ro
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    depends_on:
      - Elasticsearch
    ports:
      - '9600:9600'
    environment:
      LS_JAVA_OPTS: "-Xmx750m -Xms750m"
    networks:
      - elk
  Kibana:
    image: kibana:8.5.3
    container_name: kibana
    restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - Elasticsearch
    networks:
      - elk
  Filebeat:
    container_name: filebeat
    image: "docker.elastic.co/beats/filebeat:8.5.3"
    restart: always
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - elk

volumes:
  elastic_data:

networks:
  elk:
    driver: bridge
logstash/logstash.conf:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch"
    codec => "json"
    index => "logstash-%{indexDay}"
  }
  stdout { codec => rubydebug }
}
And finally the filebeat/filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'

processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"

output.logstash:
  hosts: ["logstash:5044"]
Any idea what I'm doing wrong?
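One way to check whether Logstash wrote anything into Elasticsearch at all (security is disabled above, so plain HTTP works) is to list the indices directly:
curl 'http://localhost:9200/_cat/indices?v'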

elasticsearch docker compose

I'm trying to run Elasticsearch 8.3.3 using docker-compose and I'm getting an error.
Below is the docker-compose.yml:
version: '3.1'
services:
  elasticsearch:
    container_name: els
    image: docker.elastic.co/elasticsearch/elasticsearch:8.3.3-arm64
    ports:
      - 9200:9200
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/datafile
    environment:
      - xpack.monitoring.enabled=true
      - xpack.watcher.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    networks:
      - elastcinetwork
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:8.3.3-arm64
    ports:
      - 5601:5601
    depends_on:
      - els
    environment:
      - ELASTICSEARCH_URL=http://localhost:9200
    networks:
      - elastcinetwork

networks:
  elastcinetwork:
    driver: bridge

volumes:
  elasticsearch-data:
Error:
Error: Process 'docker compose -f "docker-compose.yml" config --s...' exited with code 15
Error: service "kibana" depends on undefined service els: invalid compose project
You should depend on the service name rather than the container name:
depends_on:
  - elasticsearch
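After the change, the file can be validated again with the same command that produced the error:
docker compose -f "docker-compose.yml" config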

Docker-compose elastic stack no container tags

I have a setup with docker-compose and the elastic stack. My 'main' container is running a Django application (there are some more containers for metrics, certificates, and so on).
The logging itself works with this setup, but I have no container labels or tags in Kibana, so I can't differentiate between logs from different containers (except when I already know what I'm looking for).
How do I configure logstash or logspout to label or tag all logs with the container they came from? Ideally with the container image and container ID.
I tried adding a label to the container, but that didn't change anything. I also tried an explicit logging configuration with the syslog driver and a tag, but that didn't work either.
I guess I have to write a specific Logstash config and do some processing there?
Below is my current docker-compose.yml:
version: '2'
services:
  # django container
  web:
    build: .
    command: gunicorn backend.wsgi:application --bind 0.0.0.0:8001 --log-level debug
    restart: unless-stopped
    container_name: web
    depends_on:
      - logspout
    expose:
      - 8001
    env_file:
      - ./environments/web.test.env
    image: mycontainer
    labels:
      container: "web"
      com.example.service: "web"
  logspout:
    image: gliderlabs/logspout:v3.2.11
    command: 'udp://logstash:5000'
    restart: unless-stopped
    links:
      - logstash
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock'
    depends_on:
      - elasticsearch
      - logstash
      - kibana
  logstash:
    image: logstash:7.9.1
    restart: unless-stopped
    environment:
      - STDOUT=true
    links:
      - elasticsearch
    expose:
      - 5000
    depends_on:
      - elasticsearch
      - kibana
    command: 'logstash -e "input { udp { port => 5000 } } output { elasticsearch { hosts => elasticsearch } }"'
  kibana:
    image: kibana:7.9.1
    restart: unless-stopped
    links:
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.9.1
    restart: unless-stopped
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - node.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
Any help would be appreciated, thanks!
Sorry, I'm really inexperienced with the elastic stack, but I got there in the end.
Indeed, you have to provide a Logstash config with a filter; at least, that's how I got it working. Additionally, I had to switch logspout from UDP to plain syslog; I guess the UDP connection didn't forward everything it got (for example, the Docker image name).
Here are my configurations that work (there are definitely improvements to be made). Note the LOGSPOUT=ignore environment variable on the stack's own services: logspout skips containers that set it, which keeps the ELK stack from ingesting its own logs.
logstash.conf
input {
  syslog {
    port => 5000
    type => "docker"
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:ver} +(?:%{TIMESTAMP_ISO8601:ts}|-) +(?:%{HOSTNAME:service}|-) +(?:%{NOTSPACE:containerName}|-) +(?:%{NOTSPACE:proc}|-) +(?:%{WORD:msgid}|-) +(?:%{SYSLOG5424SD:sd}|-|) +%{GREEDYDATA:msg}" }
  }
  syslog_pri { }
}

output {
  elasticsearch { hosts => "elasticsearch" }
  stdout { codec => rubydebug }
}
docker-compose.yml
version: '2'
services:
  web:
    build: .
    command: gunicorn backend.wsgi:application --bind 0.0.0.0:8001 --log-level debug
    restart: unless-stopped
    container_name: web
    depends_on:
      - logspout
    image: myimage
    expose:
      - 8001
    env_file:
      - ./environments/web.test.env
    labels:
      container: "web"
      com.example.service: "web"
  logspout:
    image: gliderlabs/logspout:v3.2.11
    command: 'syslog://logstash:5000'
    restart: unless-stopped
    links:
      - logstash
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock'
    depends_on:
      - elasticsearch
      - logstash
      - kibana
  logstash:
    image: logstash:7.9.1
    restart: unless-stopped
    environment:
      - LOGSPOUT=ignore
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - kibana
    volumes:
      - ./containers/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
  kibana:
    image: kibana:7.9.1
    restart: unless-stopped
    links:
      - elasticsearch
    environment:
      - LOGSPOUT=ignore
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.9.1
    restart: unless-stopped
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - LOGSPOUT=ignore
      - node.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch

Running Kibana in Docker image gives Non-Root error

I'm having issues attempting to set up the ELK stack (v7.6.0) in Docker using docker-compose.
Elasticsearch and Logstash start up fine, but Kibana instantly exits; the docker logs for that container report:
Kibana should not be run as root. Use --allow-root to continue.
The docker-compose for those elements looks like this:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
  environment:
    - discovery.type=single-node
  ports:
    - 9200:9200
  mem_limit: 2gb
kibana:
  image: docker.elastic.co/kibana/kibana:7.6.0
  environment:
    - discovery.type=single-node
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
logstash:
  image: docker.elastic.co/logstash/logstash:7.6.0
  ports:
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  mem_limit: 2gb
  depends_on:
    - elasticsearch
How do I either disable the run-as-root error, or configure the application not to run as root?
In case you don't have a separate Dockerfile for Kibana and you just want to set the startup arg in docker-compose, the syntax there is as follows:
kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:7.9.2
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  environment:
    - ELASTICSEARCH_URL=http://localhost:9200
  networks:
    - elastic
  entrypoint: ["./bin/kibana", "--allow-root"]
That works around the problem of running it on a Windows Container.
That being said, I don't know why Kibana should not be executed as root.
I've just run this Docker image and everything works perfectly. Here is my docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - node.name=node
      - cluster.name=elasticsearch-default
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ports:
      - "9200:9200"
    expose:
      - "9200"
    networks:
      - "esnet"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    container_name: kibana
    ports:
      - "5601:5601"
    expose:
      - "5601"
    networks:
      - "esnet"
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.0
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    depends_on:
      - elasticsearch
    networks:
      - "esnet"

networks:
  esnet:
I had the same problem. I got it to run by adding the ENTRYPOINT at the end of the Dockerfile:
ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
ENTRYPOINT ["./bin/kibana", "--allow-root"]
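If you want to test the image on its own, outside compose, it can be built directly; a sketch, assuming the Dockerfile sits in the kibana/ build context from the compose file below (the tag name kibana-allow-root is arbitrary):
# pass the version the compose file would normally supply via args
docker build --build-arg ELK_VERSION=7.6.0 -t kibana-allow-root kibana/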
The docker-compose.yml:
version: '3.2'

# Elastic stack (ELK) on Docker https://github.com/deviantony/docker-elk
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5002:5002/tcp"
      - "5002:5002/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/ # the Dockerfile with the ENTRYPOINT above
      args:
        ELK_VERSION: $ELK_VERSION
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: nat

volumes:
  elasticsearch:

Kibana data is lost after taking down docker-compose

I'm using the ELK Docker images with the docker-compose file below to start the ELK containers and store the data in a volume.
version: '3.5'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge
Kibana version: 6.6.0
I used the command below to start the stack:
docker-compose up -d
All three containers come up and run, and I can publish my data into Kibana and visualize it.
But when I take the compose file down and bring it back up, all the earlier Kibana data is lost:
docker-compose down
Is there any way to permanently store those records on the machine (somewhere on the Linux box) as a backup, or in a database?
Please help me with this.
You have to mount the Elasticsearch data directory (/usr/share/elasticsearch/data) into a persistent Docker volume, like this:
elasticsearch:
  build:
    context: elasticsearch/
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    - ./es_data:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
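A named volume managed by Docker works as well, if you prefer not to bind-mount a host directory. A sketch (the volume name es_data is arbitrary):
elasticsearch:
  build:
    context: elasticsearch/
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    - es_data:/usr/share/elasticsearch/data  # named volume survives docker-compose down

# top-level declaration, alongside networks:
volumes:
  es_data: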
