Build an EFK system for simulating a logging server on Docker

I want to simulate Laravel logging to an EFK system server.
Based on this, I built two containers: one for the Laravel project, the other for the EFK system.
But EFK's Fluentd does not catch any data or events.
My container's docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 8010:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:delegated
      - ./server:/var/www/:delegated
    depends_on:
      - php
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: fluentd:24225
        fluentd-async-connect: 'true'
        fluentd-retry-wait: '1s'
        fluentd-max-retries: '30'
        tag: fubo.logger
  php:
    container_name: php-laravel
    build: ./php
    volumes:
      - ./server:/var/www/:delegated
  db:
    build: ./mysql
    volumes:
      - ./mysql/data/:/var/lib/mysql
    ports:
      - 3306:3306
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    ports:
      - 8811:80
    depends_on:
      - db
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    ports:
      - "24225:24224"
      - "24225:24224/udp"
    networks:
      - docker-efk_efk_network
networks:
  docker-efk_efk_network:
    external: true
My container's fluent.conf:
<source>
  @type tail
  path /etc/logs/laravel.log
  pos_file /etc/logs/laravel.log.pos
  tag docker.space
  <parse>
    @type json
  </parse>
</source>
<match *.**>
  @type forward
  send_timeout 60s
  recover_wait 10s
  hard_timeout 60s
  <server>
    name dockerSpace
    host docker-efk-fluentd-1
    port 24224
    weight 60
  </server>
</match>
EFK's docker-compose.yml:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
    container_name: elasticsearch
    restart: unless-stopped
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.1
    container_name: kibana
    restart: unless-stopped
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - I18N_LOCALE=zh-tw
    ports:
      - 5601:5601
    links:
      - elasticsearch
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf/:/fluentd/etc/
    links:
      - elasticsearch
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - efk_network
networks:
  efk_network:
    driver: bridge
EFK's container fluent.conf:
<source>
  @type forward
  port 24225
  bind docker-space_fluentd_1
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
This is my container networks list:
NAME                     DRIVER    SCOPE
docker-efk_default       bridge    local
docker-efk_efk_network   bridge    local
docker-space_default     bridge    local
What's wrong with my understanding?

There are two steps to do:
First, ensure both containers are connected to each other. For more detail, see:
How to link multiple docker-compose services via network
Second, modify the EFK container's Fluentd configuration:
<source>
  @type forward
  bind 0.0.0.0
  port 24224
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
And ... it works.
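As a sketch of the first step, both compose projects can attach to one pre-created bridge network. The network and service names below mirror the files above; treat this as an illustrative fragment, not a drop-in file:

```yaml
# Create the shared network once on the host:
#   docker network create docker-efk_efk_network

# Then, in each project's docker-compose.yml, join the service to it:
services:
  fluentd:
    networks:
      - docker-efk_efk_network

networks:
  docker-efk_efk_network:
    external: true
```

With both stacks on the same network, containers can resolve each other by container name, which is what the fluent.conf `host` entries rely on.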

Related

How can I configure logging to fluentd without docker?

I made a docker-compose.yml which runs a .NET Core web server and fluentd.
docker-compose.yml:
version: "2"
services:
  webserver:
    build:
      context: ..
      dockerfile: ./MyProject/Dockerfile
    container_name: webserver
    depends_on: [ fluentd ]
    ports:
      - "8080:80"
    logging:
      driver: fluentd
      options:
        fluentd-address: fluentd:24224
        fluentd-async: "true"
        fluentd-max-retries: 30
        tag: "web.log"
  fluentd:
    build:
      context: ./fluentd
      dockerfile: Dockerfile
    container_name: fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    ports:
      - "24224:24224"
      - "24224:24224/udp"
fluent.conf:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match web.log>
  @type stdout
</match>
However, if I run my .NET server not as a docker container, how can I configure logging to fluentd (e.g. the fluentd-address)? (fluentd is still running in Docker, but the web server isn't.)
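One observation (not from the original post): because the compose file above publishes port 24224 on the host, a process outside Docker can reach fluentd at localhost:24224. As a quick sketch, assuming the fluentd gem (which ships the fluent-cat utility) is installed on the host:

```shell
# Send a test event tagged web.log to the dockerized fluentd
# through the published host port; it should show up via the
# stdout match in fluent.conf above.
echo '{"message":"hello from outside docker"}' | fluent-cat web.log --host 127.0.0.1 --port 24224
```

A .NET app would do the same thing programmatically: point its fluentd client library at localhost:24224 instead of the compose service name.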

How do I test my Fluentd config in combination with Elasticsearch?

Problem:
I have a complicated setup where I use Elasticsearch and FluentD as part of my logging stack. I would like to add a metric and test the FluentD config for that.
How can I do that?
Idea:
Use docker-compose to start an EFK stack. Create a docker container that writes to stdout and simulates logs.
Problem with my idea:
The logs written by mylogger never seem to reach Elasticsearch. What am I doing wrong? mylogger is a container that simply writes to stdout in a loop, forever, using bash.
docker-compose.yaml
version: '3'
services:
  logger:
    image: mylogger
    links:
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd.access
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    links:
      - "elasticsearch"
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    container_name: elasticsearch
    environment:
      - "discovery.type=single-node"
    expose:
      - "9200"
    ports:
      - "9200:9200"
  kibana:
    image: kibana:7.10.1
    links:
      - "elasticsearch"
    ports:
      - "5601:5601"
fluent.conf (the file does not yet contain the changes that I want to make):
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<label @FLUENT_LOG>
  <match *.**>
    @type copy
    <store>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
      logstash_prefix fluentd
      logstash_dateformat %Y%m%d
      include_tag_key true
      type_name access_log
      tag_key @log_name
      flush_interval 1s
    </store>
    <store>
      @type stdout
    </store>
  </match>
</label>

Docker Logging Efk 7.10.1 Compose ECONNREFUSED

I'm just trying to build a test app, for learning purposes, that collects Docker logs into an EFK (Elasticsearch 7.10.1 + Fluentd + Kibana 7.10.1) stack.
Elastic starts up fine and is reachable from http://localhost:5601/
But fluentd-* is not available as an index pattern, which I assume is due to the error I am getting in the logs from kibana:
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["error","elasticsearch","monitoring"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/_xpack => connect ECONNREFUSED 172.20.0.3:9200"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","elasticsearch","monitoring"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","elasticsearch","monitoring"],"pid":6,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","plugins","monitoring","monitoring"],"pid":6,"message":"X-Pack Monitoring Cluster Alerts will not be available: No Living connections"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["error","elasticsearch","data"],"pid":6,"message":"[ConnectionError]: connect ECONNREFUSED 172.20.0.3:9200"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["error","savedobjects-service"],"pid":6,"message":"Unable to retrieve version information from Elasticsearch nodes."}
172.20.0.3:9200 and http://elasticsearch:9200/ are not reachable through the browser.
http://localhost:9200/ is reachable.
What am I missing? I have been working on this for a week and don't know where to look anymore. Thanks!
Docker-compose.yml
version: '2'
services:
  web:
    image: httpd
    ports:
      - "8080:80"
    links:
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd.access
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf
    links:
      - "elasticsearch"
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - "network.host=0.0.0.0"
      - "transport.host=127.0.0.1"
    expose:
      - 9200
    ports:
      - "9200:9200"
  kibana:
    image: kibana:7.10.1
    environment:
      server.host: 0.0.0.0
      elasticsearch.hosts: http://localhost:9200
    ports:
      - "5601:5601"
Dockerfile
# fluentd/Dockerfile
FROM fluent/fluentd:v1.11.5-debian-1.0
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "4.0.4"]
fluentd.conf file
# fluentd/conf/fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
This is totally fine and the expected outcome.
In docker, if you want your service (Kibana) to be available from localhost, you should map its port to localhost. You are doing that with:
ports:
  - "5601:5601"
then you can access Kibana from your browser (localhost) at http://localhost:5601
On the other hand, internally, if you want to access one container from another, you should use the container name (rather than localhost). So to reach Kibana from within the elasticsearch container, you would exec into the elasticsearch container and call:
curl http://kibana:5601
EDIT:
one interesting case is your web container, which uses different ports internally and externally, so from localhost you would:
curl http://localhost:8080
while internally (within that docker network) you would access it at:
http://web
(you can omit the 80 since it's the default http port)
EDIT2:
As stated in the documentation, the default value for elasticsearch.hosts is http://elasticsearch:9200.
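Building on that default, here is a hedged sketch of the kibana service using the container name rather than localhost (the upper-case environment keys follow the Kibana Docker image's convention for setting kibana.yml options; adjust to your setup):

```yaml
kibana:
  image: kibana:7.10.1
  environment:
    SERVER_HOST: 0.0.0.0
    ELASTICSEARCH_HOSTS: http://elasticsearch:9200  # container name, not localhost
  ports:
    - "5601:5601"
```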

Is there a way to mount volumes to a kubernetes docker compose deployment?

I am trying to use kompose convert on my docker-compose.yaml files. However, when I run the command:
kompose convert -f docker-compose.yaml
I get the output:
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak" isn't supported - ignoring path on the host
It also prints similar warnings for the other persistent volumes.
My docker-compose file is:
version: '3'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.1
    container_name: es01
    environment:
      [env]
    ulimits:
      nproc: 3000
      nofile: 65536
      memlock: -1
    volumes:
      - /home/centos/Sprint0Demo/Servers/elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - kafka_demo
  zookeeper:
    image: confluentinc/cp-zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    volumes:
      - /home/centos/Sprint0Demo/Servers/zookeeper/zk-data:/var/lib/zookeeper/data
      - /home/centos/Sprint0Demo/Servers/zookeeper/zk-txn-logs:/var/lib/zookeeper/log
    networks:
      kafka_demo:
  kafka0:
    image: confluentinc/cp-kafka
    container_name: kafka0
    environment:
      [env]
    volumes:
      - /home/centos/Sprint0Demo/Servers/kafkaData:/var/lib/kafka/data
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
      - es01
    networks:
      kafka_demo:
  schema_registry:
    image: confluentinc/cp-schema-registry:latest
    container_name: schema_registry
    environment:
      [env]
    ports:
      - 8081:8081
    networks:
      - kafka_demo
    depends_on:
      - kafka0
      - es01
  elasticSearchConnector:
    image: confluentinc/cp-kafka-connect:latest
    container_name: elasticSearchConnector
    environment:
      [env]
    volumes:
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect:/etc/kafka-connect
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch:/etc/kafka-elasticsearch
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak:/etc/kafka
    ports:
      - "28082:28082"
    networks:
      - kafka_demo
    depends_on:
      - kafka0
      - es01
networks:
  kafka_demo:
    driver: bridge
Does anyone know how I can fix this issue? I was thinking it has to do with the warning message saying that it's a volume mount vs a host mount?
I have done some research, and there are three things to point out:
kompose does not support volume mounts on the host. You might consider using emptyDir instead.
Kubernetes makes it difficult to pass in host/root volumes. You can try hostPath.
kompose convert --volumes hostPath works for k8s.
Also, you can check out Compose on Kubernetes if you'd like to run things on a single machine.
Please let me know if that helped.
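As a sketch, with --volumes hostPath each bind mount comes out roughly like the following in the generated Deployment. The names here are illustrative, not kompose's exact output:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearchconnector
spec:
  template:
    spec:
      containers:
        - name: elasticsearchconnector
          image: confluentinc/cp-kafka-connect:latest
          volumeMounts:
            # container-side path from the compose volume entry
            - mountPath: /etc/kafka-connect
              name: kafka-connect-config
      volumes:
        # host-side path becomes a hostPath volume on the node
        - name: kafka-connect-config
          hostPath:
            path: /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect
```

Note that hostPath ties the pod to whatever node it lands on, which is fine for a single-machine setup but usually replaced by PersistentVolumeClaims in a real cluster.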

Docker-compose filebeat connection issue to logstash

I am running logstash and filebeat via separate docker-compose.yml files, but filebeat cannot connect to logstash. I can telnet into logstash from the host (telnet 127.0.0.1 5044) after I wait for the logstash pipelines to start.
Filebeat cannot create a connection; I get this error:
ERROR pipeline/output.go:74 Failed to connect: dial tcp 127.0.0.1:5044: getsockopt: connection refused
This is my docker-compose for filebeat.
version: '2'
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:6.2.3
    container_name: filebeat
    user: root
    volumes:
      - flask-sync:/home/flask/app/web:ro
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
volumes:
  flask-sync:
    external: true
This is my filebeat.yml
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /home/flask/app/web/tmp/log
output.logstash:
  hosts: ["127.0.0.1:5044"]
This is my docker-compose for logstash
version: '2'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.4
    container_name: logstash
    ports:
      - "5044:5044"
    volumes:
      - ./logstash/logstash.conf:/usr/share/logstash/logstash.conf:ro
      - ./logstash/config/:/usr/share/logstash/config/:ro
    command: bin/logstash -f logstash.conf --config.reload.automatic
This is my logstash.conf
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  stdout { codec => rubydebug }
}
I had the same problem:
Check that the 5044 port is exported in your docker-compose.yml:
build:
  context: logstash/
volumes:
  - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
  - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
  - /datos/sna/log:/datos/sna/log
ports:
  - "5000:5000"
  - "5044:5044"
environment:
  LS_JAVA_OPTS: "-Xmx256m -Xms256m"
networks:
  - elk
depends_on:
  - elasticsearch
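For completeness, a hedged sketch of the other half of the fix: filebeat runs in its own container, so 127.0.0.1:5044 there refers to the filebeat container itself, not the host. If both compose projects join a shared external network, filebeat can address logstash by container name instead:

```yaml
# filebeat.yml (sketch): target logstash by container name
# over a shared docker network instead of 127.0.0.1
output.logstash:
  hosts: ["logstash:5044"]
```

Alternatively, filebeat could target the Docker host's own IP, since the logstash compose file publishes 5044 on the host.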
