Docker-compose elastic stack: no container tags

I have a setup with docker-compose and the Elastic Stack. My 'main' container runs a Django application (there are some more containers for metrics, certificates, and so on).
Logging itself works with this setup, but I have no container labels or tags in Kibana, so I can't tell logs from different containers apart (except when I already know what I'm looking for).
How do I configure logstash or logspout to label or tag all logs with the container they come from? Ideally tagging both the container image and the container id.
I tried adding a label to the container, but that didn't change anything. I also tried specifying the logging options explicitly, with the syslog driver and a tag, but that didn't work either.
I guess I have to write a specific logstash config and do the tagging there?
Below is my current docker-compose.yml:
version: '2'
services:
  # django container
  web:
    build: .
    command: gunicorn backend.wsgi:application --bind 0.0.0.0:8001 --log-level debug
    restart: unless-stopped
    container_name: web
    depends_on:
      - logspout
    expose:
      - 8001
    env_file:
      - ./environments/web.test.env
    image: mycontainer
    labels:
      container: "web"
      com.example.service: "web"
  logspout:
    image: gliderlabs/logspout:v3.2.11
    command: 'udp://logstash:5000'
    restart: unless-stopped
    links:
      - logstash
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock'
    depends_on:
      - elasticsearch
      - logstash
      - kibana
  logstash:
    image: logstash:7.9.1
    restart: unless-stopped
    environment:
      - STDOUT=true
    links:
      - elasticsearch
    expose:
      - 5000
    depends_on:
      - elasticsearch
      - kibana
    command: 'logstash -e "input { udp { port => 5000 } } output { elasticsearch { hosts => elasticsearch } }"'
  kibana:
    image: kibana:7.9.1
    restart: unless-stopped
    links:
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.9.1
    restart: unless-stopped
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - node.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
Any help would be appreciated, thanks!

Sorry, I'm really inexperienced with the Elastic Stack, but I got it working.
You do indeed have to provide a logstash config with a filter; at least that's how I got it working. Additionally, I had to switch logspout from udp:// to syslog://; I guess the plain UDP connection didn't forward all the metadata it had (for example, the Docker image name).
Here are my configurations that work (there is definitely room for improvement).
logstash.conf
input {
  syslog {
    port => 5000
    type => "docker"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:ver} +(?:%{TIMESTAMP_ISO8601:ts}|-) +(?:%{HOSTNAME:service}|-) +(?:%{NOTSPACE:containerName}|-) +(?:%{NOTSPACE:proc}|-) +(?:%{WORD:msgid}|-) +(?:%{SYSLOG5424SD:sd}|-|) +%{GREEDYDATA:msg}" }
  }
  syslog_pri { }
}
output {
  elasticsearch { hosts => "elasticsearch" }
  stdout { codec => rubydebug }
}
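For reference, the grok pattern above expects an RFC 5424-style syslog line. A made-up example (values are illustrative, not captured from this setup):
<14>1 2021-01-01T12:00:00Z myhost web 1234 - - Booting worker with pid: 8
Applied to that line, the pattern yields ver => "1", ts => "2021-01-01T12:00:00Z", service => "myhost", containerName => "web", proc => "1234" and msg => "Booting worker with pid: 8", so containerName becomes a searchable field in Kibana.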
docker-compose.yml
version: '2'
services:
  web:
    build: .
    command: gunicorn backend.wsgi:application --bind 0.0.0.0:8001 --log-level debug
    restart: unless-stopped
    container_name: web
    depends_on:
      - logspout
    image: myimage
    expose:
      - 8001
    env_file:
      - ./environments/web.test.env
    labels:
      container: "web"
      com.example.service: "web"
  logspout:
    image: gliderlabs/logspout:v3.2.11
    command: 'syslog://logstash:5000'
    restart: unless-stopped
    links:
      - logstash
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock'
    depends_on:
      - elasticsearch
      - logstash
      - kibana
  logstash:
    image: logstash:7.9.1
    restart: unless-stopped
    environment:
      - LOGSPOUT=ignore
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - kibana
    volumes:
      - ./containers/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
  kibana:
    image: kibana:7.9.1
    restart: unless-stopped
    links:
      - elasticsearch
    environment:
      - LOGSPOUT=ignore
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.9.1
    restart: unless-stopped
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - LOGSPOUT=ignore
      - node.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
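With those fields indexed, logs can be filtered per container in Kibana's Discover view, for example with a KQL query like the following (field name as defined in the grok pattern above):
containerName : "web"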

Related

Not receiving any data in Kibana from Elasticsearch/Logstash. ELK Stack made in docker-compose

I'm trying to build my first ELK stack in Docker for a school project.
The goal is to have an Elasticsearch container along with its volume,
a Logstash container linked to Elasticsearch along with its volume as well,
a Filebeat container to send data to Logstash,
and finally a Kibana container to visualise the logs.
The problem I'm having is that even though everything seems to be configured and working, Kibana isn't receiving anything from Elasticsearch. When I try to add a new index pattern I'm faced with this message: "No data streams, indices, or index aliases match your index pattern."
And nothing at all is visible in Index Management.
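(A quick way to check whether any indices are being created at all, independent of Kibana, is Elasticsearch's cat API, assuming the default port mapping from the compose file below:
curl -s 'http://localhost:9200/_cat/indices?v'
If nothing Logstash-related shows up there, the problem is upstream of Kibana.)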
Here's the docker-compose.yml:
version: '3.6'
services:
  Elasticsearch:
    image: elasticsearch:8.5.3
    container_name: elasticsearch
    restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      ES_JAVA_OPTS: "-Xmx750m -Xms750m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  Logstash:
    image: logstash:8.5.3
    container_name: logstash
    restart: always
    volumes:
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf:ro
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    depends_on:
      - Elasticsearch
    ports:
      - '9600:9600'
    environment:
      LS_JAVA_OPTS: "-Xmx750m -Xms750m"
    networks:
      - elk
  Kibana:
    image: kibana:8.5.3
    container_name: kibana
    restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - Elasticsearch
    networks:
      - elk
  Filebeat:
    container_name: filebeat
    image: "docker.elastic.co/beats/filebeat:8.5.3"
    restart: always
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - elk
volumes:
  elastic_data:
networks:
  elk:
    driver: bridge
logstash/logstash.conf:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch"
    codec => "json"
    index => "logstash-%{indexDay}"
  }
  stdout { codec => rubydebug }
}
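One detail worth flagging in this output block: %{indexDay} is a plain field reference, so unless every event actually carries an indexDay field, the index name is taken literally. The usual approach is Logstash's date sprintf format; a minimal sketch:
output {
  elasticsearch {
    hosts => "elasticsearch"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}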
And finally the filebeat/filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"
output.logstash:
  hosts: ["logstash:5044"]
Any idea what I'm doing wrong?
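As an aside: for Docker's JSON log files, Filebeat also ships a dedicated container input type that decodes the JSON envelope before processors such as add_docker_metadata run. A minimal sketch of the inputs section under that assumption:
filebeat.inputs:
  - type: container
    paths:
      - '/var/lib/docker/containers/*/*.log'
With the plain log input, each event's message is the raw JSON line that Docker writes, not the application log line itself.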

Logging to Elastic search is not working while pointing to docker url in Api docker image

Logging to Elasticsearch works fine when testing locally, with the app connecting to Elasticsearch at https://localhost:9200, but the dotnet.monitoring.api Docker image fails to connect to the Elasticsearch Docker image at http://elasticsearch:9200.
Below is the docker compose file.
version: '3.4'
services:
  productdb:
    container_name: productdb
    environment:
      SA_PASSWORD: "SwN12345678"
      ACCEPT_EULA: "Y"
    restart: always
    ports:
      - "1433:1433"
  elasticsearch:
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - xpack.security.enabled=false
      - "discovery.type=single-node"
    networks:
      - es-net
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - es-net
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
  dotnet.monitoring.api:
    container_name: dotnet.monitoring.api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "ElasticConfiguration:Url=http://elasticsearch:9200/"
      - "ConnectionStrings:Product=server=productdb;Database=ProductDb;User Id=sa;Password=SampleP#&&W0rd;TrustServerCertificate=True;"
    depends_on:
      - productdb
      - kibana
    ports:
      - "8001:80"
volumes:
  data01:
    driver: local
networks:
  es-net:
    driver: bridge
Your API image is not on the same network. Make es-net the default:
networks:
  default:
    name: es-net
    external: true
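Alternatively (a sketch, reusing the service from the compose file above), attach the API service to es-net explicitly:
dotnet.monitoring.api:
  networks:
    - es-net
Either way, the API container can then resolve the elasticsearch hostname.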

Elasticsearch Unreachable: [http://kafka:9200/][Manticore::SocketException] Connection refused

I am trying to run a Spring application with Logstash, Elasticsearch, Kafka and Kibana.
[main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://kafka:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://kafka:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
The above error is repeated over and over in the logstash container's logs.
docker-compose.yml
version: '3.7'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    #restart: always
    networks:
      - tweetapp-network
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    #restart: always
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "tweetapp-logs:1:1, Tweets:1:1"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - tweetapp-network
  mongodb:
    image: mongo
    container_name: mongodb
    # restart: always
    ports:
      - "27017:27017"
    # volumes:
    #   - mongodb-volume:/data/db
    networks:
      - tweetapp-network
  springboot:
    image: tweetapp
    #restart: always
    ports:
      - "8080:8080"
    depends_on:
      - mongodb
      - kafka
      - elasticsearch
      - logstash
      - kibana
    networks:
      - tweetapp-network
  logstash:
    image: logstash:7.7.0
    container_name: logstash
    hostname: logstash
    ports:
      - "9600:9600"
    volumes:
      - .\logstash:/usr/share/logstash/pipeline/
    links:
      - elasticsearch:elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - tweetapp-network
  elasticsearch:
    image: elasticsearch:7.7.0
    container_name: elasticsearch
    hostname: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node
    networks:
      - tweetapp-network
  kibana:
    image: kibana:7.7.0
    container_name: kibana
    hostname: kibana
    ports:
      - "5601:5601"
    links:
      - elasticsearch:elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - tweetapp-network
# Networks to be created to facilitate communication between containers
networks:
  tweetapp-network:
I made sure that Elasticsearch is working at http://localhost:9200/ ; for this URL I get JSON output.
logstash.config
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["tweetapp-logs"]
  }
}
filter {
  grok {
    match => [ "message", "%{GREEDYDATA}" ]
  }
}
output {
  elasticsearch {
    hosts => ["kafka:9200"]
    index => "tweetapp"
    workers => 1
  }
}
I am new to docker-compose, Elasticsearch and Kafka. Any help will be appreciated.
It looks like a simple mix-up: the elasticsearch output should point at elasticsearch, not kafka, in the hosts setting.
Try this:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["tweetapp-logs"]
  }
}
filter {
  grok {
    match => [ "message", "%{GREEDYDATA}" ]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "tweetapp"
    workers => 1
  }
}

Container won't connect to network Docker-Compose

I am trying to set up my docker-compose with Kafka and have containers communicate through it. While connecting containers over shared networks, one of my containers can't seem to connect to Kafka, while the other can.
Said container returns a NoBrokersAvailable error. I ran docker network inspect on the network and saw that it is not connected, while the other containers are.
What am I doing wrong?
The container in question is password_module.
Docker-compose.yml:
version: "3.3"
services:
controller_module:
image: controller_module:latest
networks:
- password_network
- analyze_network
restart: unless-stopped
depends_on:
- kafka
analyze_module:
image: analyze_module:latest
networks:
- analyze_network
volumes:
- /kamuti:/testdir
restart: unless-stopped
depends_on:
- kafka
password_module:
image: password_module:latest
networks:
- password_network
volumes:
- /kamuti:/testdir
restart: unless-stopped
depends_on:
- kafka
kafka:
image: wurstmeister/kafka:latest
container_name: kafka
networks:
- password_network
- analyze_network
ports:
- "9092:9092"
environment:
- KAFKA_ADVERTISED_HOST_NAME=172.17.0.1
- KAFKA_ADVERTISED_PORT=9092
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_CREATE_TOPICS= find-password:1:1, password:1:1, analyze-folder:1:1, folder-data:1:1
depends_on:
- zookeeper
zookeeper:
image: wurstmeister/zookeeper
networks:
- password_network
- analyze_network
ports:
- "2181:2181"
environment:
- KAFKA_ADVERTISED_HOST_NAME=zookeeper
#volumes:
# - #TODO
networks:
password_network:
external: true
analyze_network:
external: true
The results of docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' password_network:
homeassignment_zookeeper_1 homeassignment_password_module_1 kafka homeassignment_controller_module_1
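(Note that the inspect output above does list homeassignment_password_module_1 on password_network, so name resolution is probably not the blocker. A likely suspect is KAFKA_ADVERTISED_HOST_NAME=172.17.0.1, which advertises the default-bridge gateway address that containers on the user-defined networks may not be able to reach. One change worth trying, as a sketch:
environment:
  - KAFKA_ADVERTISED_HOST_NAME=kafka
so that the broker advertises a hostname resolvable on both networks.)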

docker-compose: zipkin cannot connect to elasticsearch

I'm trying to set up zipkin, elasticsearch, prometheus and grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
Here is my docker-compose.yml:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that elasticsearch is working fine, and zipkin is deployed on port 9411, but I get the error:
ERROR: cannot load service names: server error (Service Unavailable) (due to the network error)
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
Using mysql it works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port: I was using port 9200 (Elasticsearch's HTTP API port; 9300 is its transport port).
The error between zipkin and es disappears, but it still occurs between es and zipkin-dependencies.
The problem lies in your ES_HOSTS variable; from the docs:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the usage of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies configuration, the added entrypoint:
entrypoint: crond -f
This one is really the key to avoiding the exception when I start docker-compose.
To solve this issue, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f
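(For what it's worth: the zipkin-dependencies image is built as a batch job that aggregates one day of traces and then exits, so without a long-running entrypoint the container runs once, possibly before Elasticsearch is ready, and stops. crond -f keeps the container in the foreground and lets cron re-run the aggregation on a schedule. This is an interpretation based on the openzipkin/docker-zipkin setup rather than something confirmed in this thread.)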
