Not receiving any data in Kibana from Elasticsearch/Logstash. ELK Stack made in docker-compose

I'm trying to build my first ELK stack in Docker for a school project.
The goal is to have an Elasticsearch container along with its volume, a Logstash container linked to Elasticsearch along with its volume as well, a Filebeat container to send data to Logstash, and finally a Kibana container to visualise the logs.
The problem I'm having is that even though everything seems to be configured and working, Kibana isn't receiving anything from Elasticsearch. When I try to add a new index pattern I'm faced with this message: "No data streams, indices, or index aliases match your index pattern."
And nothing at all is visible in Index Management.
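A quick way to confirm whether any indices exist at all, independently of Kibana (a generic check, assuming security is disabled as in the compose file below):

# List all indices directly on Elasticsearch; if Logstash were writing,
# a logstash-* index would show up here
curl 'http://localhost:9200/_cat/indices?v'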
Here's the docker-compose.yml:
version: '3.6'
services:
  Elasticsearch:
    image: elasticsearch:8.5.3
    container_name: elasticsearch
    restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      ES_JAVA_OPTS: "-Xmx750m -Xms750m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  Logstash:
    image: logstash:8.5.3
    container_name: logstash
    restart: always
    volumes:
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf:ro
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    depends_on:
      - Elasticsearch
    ports:
      - '9600:9600'
    environment:
      LS_JAVA_OPTS: "-Xmx750m -Xms750m"
    networks:
      - elk
  Kibana:
    image: kibana:8.5.3
    container_name: kibana
    restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - Elasticsearch
    networks:
      - elk
  Filebeat:
    container_name: filebeat
    image: "docker.elastic.co/beats/filebeat:8.5.3"
    restart: always
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - elk

volumes:
  elastic_data:

networks:
  elk:
    driver: bridge
logstash/logstash.conf:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch"
    codec => "json"
    index => "logstash-%{indexDay}"
  }
  stdout { codec => rubydebug }
}
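One thing that stands out in this pipeline: index => "logstash-%{indexDay}" interpolates a field named indexDay, but nothing in the pipeline ever sets that field, so the reference never resolves to a date. A minimal sketch of a corrected output (assuming the standard @timestamp field that Filebeat/Logstash add to every event):

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    # %{+YYYY.MM.dd} formats the event's @timestamp, giving one index per day
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}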
And finally the filebeat/filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"

output.logstash:
  hosts: ["logstash:5044"]
Any idea what I'm doing wrong?
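If the indices still don't appear, the Filebeat and Logstash container logs usually reveal where the chain breaks (generic troubleshooting commands, not from the original post):

# Check whether Filebeat managed to connect to Logstash on port 5044
docker logs filebeat 2>&1 | grep -i logstash
# Check whether the beats input actually started and whether events are flowing
docker logs logstash 2>&1 | grep -iE 'beats|5044|error'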

Related

Logging to Elasticsearch is not working when pointing to the docker URL in the API docker image

Logging to Elasticsearch works fine when debugging locally, with the app connecting to Elasticsearch at https://localhost:9200, but the docker image of dotnet.monitoring.api fails to connect to the Elasticsearch docker image at http://elasticsearch:9200.
Below is the docker compose file.
version: '3.4'
services:
  productdb:
    container_name: productdb
    environment:
      SA_PASSWORD: "SwN12345678"
      ACCEPT_EULA: "Y"
    restart: always
    ports:
      - "1433:1433"
  elasticsearch:
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - xpack.security.enabled=false
      - "discovery.type=single-node"
    networks:
      - es-net
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - es-net
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
  dotnet.monitoring.api:
    container_name: dotnet.monitoring.api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "ElasticConfiguration:Url=http://elasticsearch:9200/"
      - "ConnectionStrings:Product=server=productdb;Database=ProductDb;User Id=sa;Password=SampleP#&&W0rd;TrustServerCertificate=True;"
    depends_on:
      - productdb
      - kibana
    ports:
      - "8001:80"

volumes:
  data01:
    driver: local

networks:
  es-net:
    driver: bridge
Your API image is not on the same network, so the elasticsearch hostname doesn't resolve from it. Make es-net the default network:

networks:
  default:
    name: es-net
    external: true
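Alternatively (an equivalent sketch, not from the original answer), the API service can join es-net explicitly; note it then also needs to keep a network in common with productdb:

dotnet.monitoring.api:
  # ...existing settings unchanged...
  networks:
    - es-net
    - default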

Elasticsearch Unreachable: [http://kafka:9200/][Manticore::SocketException] Connection refused

I am trying to run a Spring application with Logstash, Elasticsearch, Kafka and Kibana.
[main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://kafka:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://kafka:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
The above error keeps repeating in the logstash container's logs.
docker-compose.yml
version: '3.7'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    #restart: always
    networks:
      - tweetapp-network
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    #restart: always
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "tweetapp-logs:1:1, Tweets:1:1"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - tweetapp-network
  mongodb:
    image: mongo
    container_name: mongodb
    # restart: always
    ports:
      - "27017:27017"
    # volumes:
    #   - mongodb-volume:/data/db
    networks:
      - tweetapp-network
  springboot:
    image: tweetapp
    #restart: always
    ports:
      - "8080:8080"
    depends_on:
      - mongodb
      - kafka
      - elasticsearch
      - logstash
      - kibana
    networks:
      - tweetapp-network
  logstash:
    image: logstash:7.7.0
    container_name: logstash
    hostname: logstash
    ports:
      - "9600:9600"
    volumes:
      - .\logstash:/usr/share/logstash/pipeline/
    links:
      - elasticsearch:elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - tweetapp-network
  elasticsearch:
    image: elasticsearch:7.7.0
    container_name: elasticsearch
    hostname: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node
    networks:
      - tweetapp-network
  kibana:
    image: kibana:7.7.0
    container_name: kibana
    hostname: kibana
    ports:
      - "5601:5601"
    links:
      - elasticsearch:elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - tweetapp-network

# Networks to be created to facilitate communication between containers
networks:
  tweetapp-network:
I made sure that Elasticsearch is working: for http://localhost:9200/ I get JSON output.
logstash.config
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["tweetapp-logs"]
  }
}
filter {
  grok {
    match => [ "message", "%{GREEDYDATA}" ]
  }
}
output {
  elasticsearch {
    hosts => ["kafka:9200"]
    index => "tweetapp"
    workers => 1
  }
}
I am new to docker-compose, Elasticsearch and Kafka. Any help will be appreciated.
The elasticsearch output points at the wrong host: use elasticsearch instead of kafka as the hostname.
Try this:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["tweetapp-logs"]
  }
}
filter {
  grok {
    match => [ "message", "%{GREEDYDATA}" ]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "tweetapp"
    workers => 1
  }
}
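Since both containers sit on tweetapp-network, the service name resolves through Docker's embedded DNS. A quick sanity check from inside the logstash container (a generic check, assuming curl is available in the image):

# Should return the Elasticsearch cluster info JSON
docker exec -it logstash curl http://elasticsearch:9200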

How to set up http input plugin with logstash

Hello, I am unable to send HTTP requests to my Logstash http input; I receive a "server returned no response" message. I am able to send requests to my TCP endpoint using cat, and I am also able to send HTTP requests to the endpoint when running them from inside the container via docker exec -it. I don't know what else to try at this point.
I am using:
opensearchproject/opensearch-dashboards:latest as kibana
opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.2 as logstash
opensearchproject/opensearch:1.2.3 as elasticsearch
My docker-compose file looks as follows:
version: '3.7'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - ./elasticsearch/config/opensearch.yml:/usr/share/elasticsearch/config/opensearch.yml:ro,z
      - elasticsearch:/usr/share/elasticsearch/data:z
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - node.name=elasticsearch
      - discovery.type=single-node
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=elasticsearch
      # - "DISABLE_SECURITY_PLUGIN=true"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro,z
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro,z
    ports:
      - "5044:5044"
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
      - "8080:8080"
    networks:
      - elk
    depends_on:
      - elasticsearch
    expose:
      - "5000"
      - "8080"
  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/opensearch_dashboards.yml:/usr/share/kibana/config/opensearch_dashboards.yml:ro,z
    ports:
      - "5601:5601"
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://elasticsearch:9200"]'
      # - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch: null
logstash/pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
  http {
    port => 8080
  }
  tcp {
    port => 5000
  }
}
output {
  opensearch {
    hosts => "https://elasticsearch:9200"
    user => "admin"
    password => "admin"
    ssl_certificate_verification => false
  }
}
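For reference, the kind of request that should reach this http input once the port mapping works (a hypothetical smoke test; the payload is illustrative):

# POST a test event to the Logstash http input published on 8080
curl -XPOST -H 'Content-Type: application/json' \
     -d '{"message": "hello from curl"}' \
     http://localhost:8080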
After almost a week, I just figured it out. When specifying the ports I must include /tcp for port 8080.
ports:
  - "5044:5044"
  - "5000:5000/tcp"
  - "5000:5000/udp"
  - "9600:9600"
  - "8080:8080/tcp"

Docker-compose elastic stack: no container tags

I have a setup with docker-compose and the elastic stack. My 'main' container is running a Django application (there are some more containers for metrics, certificates, and so on).
The logging itself works with this setup but I have no container labels or tags in Kibana. So I can't differentiate between logs from different containers (except when I know what I'm looking for).
How do I configure logstash or logspout to label or tag all logs with the container they come from, ideally with the container image and container id?
I tried adding a label to the container, but that didn't change anything. I also tried specifying logging with the syslog driver and a tag, but that didn't work either.
I guess I have to make a specific logstash config and do some stuff there?
Below is my current docker-compose.yml
version: '2'
services:
  # django container
  web:
    build: .
    command: gunicorn backend.wsgi:application --bind 0.0.0.0:8001 --log-level debug
    restart: unless-stopped
    container_name: web
    depends_on:
      - logspout
    expose:
      - 8001
    env_file:
      - ./environments/web.test.env
    image: mycontainer
    labels:
      container: "web"
      com.example.service: "web"
  logspout:
    image: gliderlabs/logspout:v3.2.11
    command: 'udp://logstash:5000'
    restart: unless-stopped
    links:
      - logstash
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock'
    depends_on:
      - elasticsearch
      - logstash
      - kibana
  logstash:
    image: logstash:7.9.1
    restart: unless-stopped
    environment:
      - STDOUT=true
    links:
      - elasticsearch
    expose:
      - 5000
    depends_on:
      - elasticsearch
      - kibana
    command: 'logstash -e "input { udp { port => 5000 } } output { elasticsearch { hosts => elasticsearch } }"'
  kibana:
    image: kibana:7.9.1
    restart: unless-stopped
    links:
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.9.1
    restart: unless-stopped
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - node.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
Any help would be appreciated, thanks!
I'm really inexperienced with the elastic stack, but I got it working.
You do indeed have to provide a logstash config with a filter; at least that's how I got it working. Additionally, I had to switch from UDP to plain syslog in logspout; I guess the UDP connection didn't forward everything it received (for example, the docker image).
Here are my configurations that work (there is definitely room for improvement).
logstash.conf
input {
  syslog {
    port => 5000
    type => "docker"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:ver} +(?:%{TIMESTAMP_ISO8601:ts}|-) +(?:%{HOSTNAME:service}|-) +(?:%{NOTSPACE:containerName}|-) +(?:%{NOTSPACE:proc}|-) +(?:%{WORD:msgid}|-) +(?:%{SYSLOG5424SD:sd}|-|) +%{GREEDYDATA:msg}" }
  }
  syslog_pri { }
}
output {
  elasticsearch { hosts => "elasticsearch" }
  stdout { codec => rubydebug }
}
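To make the grok pattern concrete: it splits an RFC 5424-style syslog line of the kind logspout emits. A hypothetical example (the sample line and values are illustrative, not from the original post):

# Hypothetical input line:
#   <14>1 2021-01-15T10:30:00Z docker-host web 1234 - - Starting gunicorn
# Fields the grok pattern above would extract:
#   ver           => "1"
#   ts            => "2021-01-15T10:30:00Z"
#   service       => "docker-host"
#   containerName => "web"
#   proc          => "1234"
#   msg           => "Starting gunicorn"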
docker-compose.yml
version: '2'
services:
  web:
    build: .
    command: gunicorn backend.wsgi:application --bind 0.0.0.0:8001 --log-level debug
    restart: unless-stopped
    container_name: web
    depends_on:
      - logspout
    image: myimage
    expose:
      - 8001
    env_file:
      - ./environments/web.test.env
    labels:
      container: "web"
      com.example.service: "web"
  logspout:
    image: gliderlabs/logspout:v3.2.11
    command: 'syslog://logstash:5000'
    restart: unless-stopped
    links:
      - logstash
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock'
    depends_on:
      - elasticsearch
      - logstash
      - kibana
  logstash:
    image: logstash:7.9.1
    restart: unless-stopped
    environment:
      # logspout skips containers that set LOGSPOUT=ignore, so the ELK
      # containers don't feed their own logs back into the pipeline
      - LOGSPOUT=ignore
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - kibana
    volumes:
      - ./containers/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
  kibana:
    image: kibana:7.9.1
    restart: unless-stopped
    links:
      - elasticsearch
    environment:
      - LOGSPOUT=ignore
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.9.1
    restart: unless-stopped
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - LOGSPOUT=ignore
      - node.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch

Unable to connect docker container to logstash via gelf driver

Hi guys, I'm having trouble sending my server container logs to my ELK stack. No input reaches Logstash, so I'm unable to set up a Kibana index for collecting logs. I think my problem is in the port settings.
Here is the docker-compose yml for the LAMP stack (only the server service):
version: '3'
services:
  server:
    build: ./docker/apache
    links:
      - fpm
    ports:
      - 80:80   # HTTP
      - 443:443 # HTTPS
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://127.0.0.1:5000"
        tag: "server"
And here is the docker-compose yml for the ELK stack, based on the deviantony/docker-elk GitHub project:
version: '2'
services:
  elasticsearch:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge
I've found the mistake: I have to specify the UDP protocol in the logstash service port definition, since Docker publishes ports as TCP by default while the gelf driver sends UDP datagrams.
logstash:
  build: logstash/
  volumes:
    - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    - ./logstash/pipeline:/usr/share/logstash/pipeline
  ports:
    - "5000:5000/udp"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
You need to use the gelf input plugin. Here is an example of a functioning compose file:
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.1
    logging:
      driver: "json-file"
    networks:
      - logging
    ports:
      - "127.0.0.1:12201:12201/udp"
    entrypoint: logstash -e 'input { gelf { } } output { stdout{ } }'
You can test it by running:
docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12201 ubuntu /bin/sh -c 'while true; do date "+%d-%m-%Y %H:%M:%S:%3N"; sleep 1 ; done'
and checking docker logs on the logstash container.
