Docker Logging EFK 7.10.1 Compose ECONNREFUSED

I am just trying to build a test app, for learning purposes, to collect Docker logs into an EFK (Elasticsearch 7.10.1 + Fluentd + Kibana 7.10.1) stack.
Elastic starts up fine and is reachable from http://localhost:5601/
But fluentd-* is not available as an index pattern, I assume due to the errors I am seeing in the Kibana logs:
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["error","elasticsearch","monitoring"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/_xpack => connect ECONNREFUSED 172.20.0.3:9200"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","elasticsearch","monitoring"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","elasticsearch","monitoring"],"pid":6,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","plugins","monitoring","monitoring"],"pid":6,"message":"X-Pack Monitoring Cluster Alerts will not be available: No Living connections"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["error","elasticsearch","data"],"pid":6,"message":"[ConnectionError]: connect ECONNREFUSED 172.20.0.3:9200"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["error","savedobjects-service"],"pid":6,"message":"Unable to retrieve version information from Elasticsearch nodes."}
172.20.0.3:9200 and http://elasticsearch:9200/ are not reachable through the browser,
but http://localhost:9200/ is reachable.
What am I missing? I have been working on this for a week and don't know where to look anymore, thanks!
Docker-compose.yml
version: '2'
services:
  web:
    image: httpd
    ports:
      - "8080:80"
    links:
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd.access
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    links:
      - "elasticsearch"
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - "network.host=0.0.0.0"
      - "transport.host=127.0.0.1"
    expose:
      - 9200
    ports:
      - "9200:9200"
  kibana:
    image: kibana:7.10.1
    environment:
      server.host: 0.0.0.0
      elasticsearch.hosts: http://localhost:9200
    ports:
      - "5601:5601"
Dockerfile
# fluentd/Dockerfile
FROM fluent/fluentd:v1.11.5-debian-1.0
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "4.0.4"]
fluentd.conf file
# fluentd/conf/fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>

This is totally fine and the expected outcome.
In Docker, if you want your service (Kibana) to be available from localhost, you need to map its port to the host. You are already doing that with:
ports:
  - "5601:5601"
Then you can access Kibana from your browser on the host via http://localhost:5601.
On the other hand, if you want to access one container from another, you should use the container name rather than localhost. So if you want to reach Kibana from within the elasticsearch container, you would exec into the elasticsearch container and call:
curl http://kibana:5601
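A quick way to try this out (assuming curl is available inside the image, which it is for the official elasticsearch image) would be:
docker-compose exec elasticsearch curl http://kibana:5601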
EDIT:
one interesting case is your web container, which uses different ports internally (80) and externally (8080). From the host you would run:
curl http://localhost:8080
while internally (within the Docker network) you would access it via:
http://web
(you can omit the :80 since it's the default HTTP port)
EDIT2:
As stated in the documentation, the default value for elasticsearch.hosts is http://elasticsearch:9200.
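So a likely fix here is to drop the localhost value and point Kibana at the service name. A possible corrected kibana service could look like this (a sketch; the environment-variable form shown here is mapped to kibana.yml settings by the official image):
kibana:
  image: kibana:7.10.1
  environment:
    SERVER_HOST: 0.0.0.0
    ELASTICSEARCH_HOSTS: http://elasticsearch:9200
  ports:
    - "5601:5601"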

Related

Build the EFK system used for simulating logging server on Docker

I want to simulate Laravel logging to an EFK server.
Based on this, I built up two containers: one is the Laravel project's container, the other is the EFK system container.
But EFK's fluentd does not catch any data or events.
My container's compose:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 8010:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:delegated
      - ./server:/var/www/:delegated
    depends_on:
      - php
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: fluentd:24225
        fluentd-async-connect: 'true'
        fluentd-retry-wait: '1s'
        fluentd-max-retries: '30'
        tag: fubo.logger
  php:
    container_name: php-laravel
    build: ./php
    volumes:
      - ./server:/var/www/:delegated
  db:
    build: ./mysql
    volumes:
      - ./mysql/data/:/var/lib/mysql
    ports:
      - 3306:3306
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    ports:
      - 8811:80
    depends_on:
      - db
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    ports:
      - "24225:24224"
      - "24225:24224/udp"
    networks:
      - docker-efk_efk_network
networks:
  docker-efk_efk_network:
    external: true
my container's fluent.conf:
<source>
  @type tail
  path /etc/logs/laravel.log
  pos_file /etc/logs/laravel.log.pos
  tag docker.space
  <parse>
    @type json
  </parse>
</source>
<match *.**>
  @type forward
  send_timeout 60s
  recover_wait 10s
  hard_timeout 60s
  <server>
    name dockerSpace
    host docker-efk-fluentd-1
    port 24224
    weight 60
  </server>
</match>
EFK's container compose:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
    container_name: elasticsearch
    restart: unless-stopped
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.1
    container_name: kibana
    restart: unless-stopped
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - I18N_LOCALE=zh-tw
    ports:
      - 5601:5601
    links:
      - elasticsearch
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf/:/fluentd/etc/
    links:
      - elasticsearch
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - efk_network
networks:
  efk_network:
    driver: bridge
EFK's container fluent.conf:
<source>
  @type forward
  port 24225
  bind docker-space_fluentd_1
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
This is my container networks list:
NAME                     DRIVER    SCOPE
docker-efk_default       bridge    local
docker-efk_efk_network   bridge    local
docker-space_default     bridge    local
What's wrong with my understanding?
There are two steps to take:
First, ensure that both containers are connected to the same network. More detail can be found here:
How to link multiple docker-compose services via network
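For example (using the container name from the compose file above), a running container can be attached to the external EFK network manually:
docker network connect docker-efk_efk_network php-laravel
or, equivalently, the network can be declared as external in the application's compose file, as the fluentd service above already does.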
Second, modify the EFK container's fluentd configuration:
<source>
  @type forward
  bind 0.0.0.0
  port 24224
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
And ... it works.
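You can verify the first step by checking which containers are attached to the shared network:
docker network inspect docker-efk_efk_network
Both the application's containers and the EFK containers should show up under Containers in the output.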

How do I test my Fluentd config in combination with Elasticsearch?

Problem:
I have a complicated setup where I use Elasticsearch and FluentD as part of my logging stack. I would like to add a metric and test the FluentD config for that.
How can I do that?
Idea:
Use docker-compose to start an EFK stack. Create a docker container that writes to stdout and simulates logs.
Problem with my idea:
The logs written by mylogger never seem to reach Elasticsearch. What am I doing wrong? mylogger is a container that simply writes to stdout in a loop, forever, using bash.
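For reference, a minimal sketch of what such a logger image could look like (hypothetical; the question does not show the actual mylogger Dockerfile):
# mylogger/Dockerfile
FROM debian:buster-slim
CMD ["bash", "-c", "while true; do echo \"simulated log $(date)\"; sleep 1; done"]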
docker-compose.yaml
version: '3'
services:
  logger:
    image: mylogger
    links:
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd.access
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    links:
      - "elasticsearch"
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    container_name: elasticsearch
    environment:
      - "discovery.type=single-node"
    expose:
      - "9200"
    ports:
      - "9200:9200"
  kibana:
    image: kibana:7.10.1
    links:
      - "elasticsearch"
    ports:
      - "5601:5601"
fluent.conf (this file does not contain the changes that I want to make, yet):
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<label @FLUENT_LOG>
  <match *.**>
    @type copy
    <store>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
      logstash_prefix fluentd
      logstash_dateformat %Y%m%d
      include_tag_key true
      type_name access_log
      tag_key @log_name
      flush_interval 1s
    </store>
    <store>
      @type stdout
    </store>
  </match>
</label>

Problems running Kafka with Docker Compose in Windows

I'm trying to run Kafka locally on Windows 10 Pro with Docker Desktop (not Toolbox). Everything seems to work perfectly, but I can't reach Kafka from my application, nor can I use Kafka REST (http://localhost:8082/topics | http://127.0.0.1:8082/topics | http://192.168.1.103:8082/topics - this last one is my Docker IP in hosts).
My docker-compose file is:
version: '2'
services:
  # https://hub.docker.com/r/confluentinc/cp-zookeeper/tags
  zookeeper:
    image: confluentinc/cp-zookeeper:5.3.1
    container_name: zookeeper
    hostname: zookeeper
    network_mode: host
    ports:
      - "2181:2181"
      - "32181:32181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  # https://hub.docker.com/r/confluentinc/cp-kafka/tags
  kafka:
    image: confluentinc/cp-kafka:5.3.1
    container_name: kafka
    hostname: kafka
    network_mode: host
    ports:
      - "9092:9092"
      - "29092:29092"
    restart: always
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
  schema-registry:
    image: confluentinc/cp-schema-registry:5.3.1
    hostname: schema-registry
    container_name: schema-registry
    network_mode: host
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: localhost:2181
      SCHEMA_REGISTRY_HOST_NAME: localhost
      SCHEMA_REGISTRY_LISTENERS: http://localhost:8081
  kafka-rest:
    image: confluentinc/cp-kafka-rest:5.3.1
    hostname: kafka-rest
    container_name: kafka-rest
    network_mode: host
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8082:8082"
    environment:
      KAFKA_REST_HOST_NAME: localhost
      KAFKA_REST_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_REST_LISTENERS: http://localhost:8082
      KAFKA_REST_SCHEMA_REGISTRY_URL: http://localhost:8081
And my hosts file is:
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
192.168.1.103 host.docker.internal
192.168.1.103 gateway.docker.internal
# Added by Docker Desktop
192.168.2.236 host.docker.internal
192.168.2.236 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
127.0.0.1 kafka
In the logs I've got messages like
"kafka-rest | [2019-10-21 11:40:57,903] INFO Server started, listening for requests... (io.confluent.kafkarest.KafkaRestMain)"
I don't know what I'm doing wrong
I've tried to follow some instructions on other posts:
Kafka with Docker Problems
Confluent Kafka & docker-compose - error running example
Kafka setup with docker-compose
And many others on Google
You need to configure your Kafka listeners correctly given the networking that Docker involves. This post explains how: https://rmoff.net/2018/08/02/kafka-listeners-explained/
You can find a working Docker Compose that includes host-access here: https://github.com/confluentinc/examples/blob/5.3.1-post/cp-all-in-one/docker-compose.yml
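As a sketch of the listener pattern described in those links (not your exact compose; service names and the 29092 port are taken from the linked example), the broker advertises one listener for other containers and one for the host, which replaces network_mode: host entirely:
kafka:
  image: confluentinc/cp-kafka:5.3.1
  ports:
    - "9092:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Containers on the same network then bootstrap against kafka:29092, while applications on the Windows host use localhost:9092.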

How to run fluentd in docker within the internal network

I have the following configuration in my docker-compose file:
fluentd:
  build: ./fluentd
  container_name: fluentd
  expose:
    - 24224
    - 24224/udp
  depends_on:
    - "elasticsearch"
  networks:
    - internal
public-site:
  build: ./public-site
  container_name: public-site
  depends_on:
    - fluentd
  logging:
    driver: fluentd
    options:
      tag: public-site
  networks:
    - internal
networks:
  internal:
When I start the app using docker-compose up, the webserver exits with the error message ERROR: for public-site Cannot start service public-site: failed to initialize logging driver: dial tcp 127.0.0.1:24224: connect: connection refused.
On the other hand, when I publish the ports from fluentd (ports: 24224:24224), it works. The problem is that I don't want to publish those ports on the host, since that bypasses the Linux firewall (i.e. it exposes the fluentd port to everyone, see here).
This is confusing, since exposing a port should make it available to every container in the network. I am using an internal network between fluentd and the webserver, so I would expect the exposed ports of fluentd to be enough (which isn't the case).
When I connect to the webserver container, I can ping and resolve the fluentd container, so there is a connection. For some reason, however, at startup it won't accept a fluentd config with no published ports.
The communication to 127.0.0.1 is always problematic if you're in a container. I found this explanation in the docs that performs way better than I would do:
To use the fluentd driver as the default logging driver, set the log-driver and log-opt keys to appropriate values in the daemon.json file, which is located in /etc/docker/ on Linux hosts or C:\ProgramData\docker\config\daemon.json on Windows Server. For more about configuring Docker using daemon.json, see daemon.json.
The following example sets the log driver to fluentd and sets the fluentd-address option.
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "fluentd:24224"
  }
}
src: https://docs.docker.com/config/containers/logging/fluentd/
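Note that changes to daemon.json only take effect after the Docker daemon is restarted, e.g. on a systemd-based Linux host:
sudo systemctl restart docker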
EDIT: this works until you want to have an application on the host communicating with the dockerized fluentd (then it's a pain)
I was facing this issue too; I solved it by using a static IP address.
logging:
  driver: fluentd
  options:
    fluentd-address: 172.24.0.5:24224
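For the static address to be dependable, it helps to pin it explicitly rather than rely on Docker's dynamic allocation. A compose sketch (the subnet and network name here are examples, not taken from the answer above):
version: "3.5"
services:
  fluentd:
    build: ./fluentd
    networks:
      efk:
        ipv4_address: 172.24.0.5
networks:
  efk:
    ipam:
      config:
        - subnet: 172.24.0.0/16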
I was facing the same error as you. After checking the example config on the Fluentd official site, I was able to connect to fluentd through links.
Below is my configuration that works:
version: "3.5"
networks:
test:
services:
flog:
container_name: flog
image: mingrammer/flog:0.4.3
command: -t stdout -f apache_common -d 1s -l
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
links:
- fluentd
networks:
- test
fluentd:
container_name: fluentd
image: moonape1226/fluentd-with-loki-plugin:v1.13-1
ports:
- "24224:24224"
- "24224:24224/udp"
volumes:
- ./config/fluentd/fluent.conf:/fluentd/etc/fluent.conf
networks:
- test

Can't log from (fluentd) logdriver using service name in compose

I have the following setup in Docker:
Application (httpd)
Fluentd
Elasticsearch
Kibana
The application's log driver is configured to point at the fluentd container. The logs are saved in ES and shown in Kibana.
When the log driver is configured like this, it works:
web:
  image: httpd
  container_name: httpd
  ports:
    - "80:80"
  links:
    - fluentd
  logging:
    driver: "fluentd"
    options:
      fluentd-address: localhost:24224
      tag: httpd.access
And fluentd is mapping its exposed port 24224 on port 24224 of the host.
fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
  ports:
    - "24224:24224"
But I don't want to expose my fluentd on the host network. I want to keep it 'private' inside the Docker network (I only want to map the app and Kibana onto the host network), like this:
fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
Port 24224 is still exposed (in the Dockerfile) but it's not mapped onto the host network. Now I want to change the log driver config of my app:
logging:
  driver: "fluentd"
  options:
    fluentd-address: fluentd:24224
    tag: httpd.access
So fluentd is the name of the fluentd container, and they are in the same network, but the app is not able to make a connection with it:
failed to initialize logging driver: dial tcp: lookup fluentd
Is this maybe because the logging option is executed before the 'link'-option in the compose file?
Is there a way to make this work?
This is not currently possible. The Docker daemon, which handles the log drivers, is a process running on the host machine. It is not a service in your network and is therefore unable to resolve service names to IPs. See this GitHub issue for a more detailed explanation.
You will have to publish a port for this to work.
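If the worry is exposing fluentd to the outside world (as raised above), note that a published port can be bound to the loopback interface only, which keeps it reachable for the host's Docker daemon but not for other machines:
ports:
  - "127.0.0.1:24224:24224"
  - "127.0.0.1:24224:24224/udp"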
