How can I configure logging to fluentd without docker?

I made a docker-compose.yml which runs a .NET Core web server and fluentd.
docker-compose.yml:
version: "2"
services:
  webserver:
    build:
      context: ..
      dockerfile: ./MyProject/Dockerfile
    container_name: webserver
    depends_on: [ fluentd ]
    ports:
      - "8080:80"
    logging:
      driver: fluentd
      options:
        fluentd-address: fluentd:24224
        fluentd-async: "true"
        fluentd-max-retries: "30"
        tag: "web.log"
  fluentd:
    build:
      context: ./fluentd
      dockerfile: Dockerfile
    container_name: fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    ports:
      - "24224:24224"
      - "24224:24224/udp"
fluent.conf:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match web.log>
  @type stdout
</match>
However, if I run my .NET server not as a docker container, how can I configure logging to fluentd, i.e. the equivalent of fluentd-address? (fluentd is still running on docker, but the web server isn't.)
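Since the compose file above publishes 24224 on the host (ports: "24224:24224"), a process running outside Docker can reach fluentd at localhost:24224 over the same forward protocol the log driver speaks. A minimal sketch using Python's fluent-logger package (illustrative only; a .NET client for the forward protocol would work the same way):

# pip install fluent-logger
from fluent import sender

# fluentd's forward port 24224 is published on the host by the compose file above
logger = sender.FluentSender('web', host='localhost', port=24224)

# emits an event tagged "web.log", matching <match web.log> in fluent.conf
if not logger.emit('log', {'message': 'hello from outside docker'}):
    print(logger.last_error)
logger.close()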

Related

localhost for fluentd-address is not working when fluentd container created

I'm trying to connect my custom web server to fluentd on Docker.
My docker-compose.yml is shown below.
version: "2"
services:
  web:
    build:
      context: ..
      dockerfile: ./DockerTest/Dockerfile
    container_name: web
    depends_on: [ fluentd ]
    networks:
      test_net:
        ipv4_address: 172.20.10.1
    ports:
      - "8080:80"
    links:
      - fluentd
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        tag: "docker.{{.ID}}"
  fluentd:
    build:
      context: ./fluentd
      dockerfile: Dockerfile
    container_name: fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    networks:
      test_net:
        ipv4_address: 172.20.10.2
    ports:
      - "24224:24224"
      - "24224:24224/udp"
networks:
  test_net:
    ipam:
      config:
        - subnet: 172.20.0.0/16
When I run this for the first time, i.e. when the fluentd container is newly created, it fails with an error: Error response from daemon: failed to initialize logging driver: dial tcp [::1]:24224: connect: connection refused. In that case it works if I set fluentd-address: 172.20.10.2:24224.
But when I run it again, so that the existing fluentd container merely returns to RUNNING status, it works, and then fluentd-address: 172.20.10.2:24224 no longer works.
I wonder why the fluentd address has to change depending on whether the container is newly created, and how I can solve this problem.
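For context (an explanation not in the original question): the connection to fluentd-address is opened by the Docker daemon on the host, not from inside test_net, so the address must be reachable from the host; localhost:24224 works through the published port, and the initial connection refused is a startup race that the async option (fluentd-async-connect, renamed fluentd-async on newer Docker releases) is meant to absorb. A hedged sketch:

logging:
  driver: fluentd
  options:
    fluentd-address: localhost:24224   # dialed by the Docker daemon on the host
    fluentd-async-connect: "true"      # connect in the background instead of failing container start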

Build the EFK system used for simulating logging server on Docker

I want to simulate Laravel logging to an EFK logging server.
Based on this, I built two containers: one for the Laravel project, the other for the EFK system.
But EFK's fluentd does not catch any data or events.
My container's compose:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 8010:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:delegated
      - ./server:/var/www/:delegated
    depends_on:
      - php
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: fluentd:24225
        fluentd-async-connect: 'true'
        fluentd-retry-wait: '1s'
        fluentd-max-retries: '30'
        tag: fubo.logger
  php:
    container_name: php-laravel
    build: ./php
    volumes:
      - ./server:/var/www/:delegated
  db:
    build: ./mysql
    volumes:
      - ./mysql/data/:/var/lib/mysql
    ports:
      - 3306:3306
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    ports:
      - 8811:80
    depends_on:
      - db
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    ports:
      - "24225:24224"
      - "24225:24224/udp"
    networks:
      - docker-efk_efk_network
networks:
  docker-efk_efk_network:
    external: true
My container's fluent.conf:
<source>
  @type tail
  path /etc/logs/laravel.log
  pos_file /etc/logs/laravel.log.pos
  tag docker.space
  <parse>
    @type json
  </parse>
</source>

<match *.**>
  @type forward
  send_timeout 60s
  recover_wait 10s
  hard_timeout 60s
  <server>
    name dockerSpace
    host docker-efk-fluentd-1
    port 24224
    weight 60
  </server>
</match>
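As an aside (an assumption, not stated in the post): for the tail source above to read laravel.log, the file has to be visible inside this fluentd container at /etc/logs/, e.g. via an extra volume mount (the host path here is hypothetical):

fluentd:
  build: ./fluentd
  volumes:
    - ./fluentd/conf:/fluentd/etc
    - ./server/storage/logs:/etc/logs   # hypothetical: expose Laravel's log directory to fluentd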
EFK's container compose:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
    container_name: elasticsearch
    restart: unless-stopped
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.1
    container_name: kibana
    restart: unless-stopped
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - I18N_LOCALE=zh-tw
    ports:
      - 5601:5601
    links:
      - elasticsearch
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf/:/fluentd/etc/
    links:
      - elasticsearch
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - efk_network
networks:
  efk_network:
    driver: bridge
EFK's container fluent.conf:
<source>
  @type forward
  port 24225
  bind docker-space_fluentd_1
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
This is my container networks list:
name                     driver   scope
docker-efk_default       bridge   local
docker-efk_efk_network   bridge   local
docker-space_default     bridge   local
What's wrong with my understanding?
There are two steps to take:
First, ensure both containers are connected to each other (a sketch follows below). More detail can be found here:
How to link multiple docker-compose services via network
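For example (a sketch using the network name from the list above), create the shared network once:

docker network create docker-efk_efk_network

and then reference it as external in both compose files so the two projects join it:

networks:
  docker-efk_efk_network:
    external: true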
Second, modify the EFK container's fluentd configuration:
<source>
  @type forward
  bind 0.0.0.0
  port 24224
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
And ... it works.

How do I test my Fluentd config in combination with Elasticsearch?

Problem:
I have a complicated setup where I use Elasticsearch and FluentD as part of my logging stack. I would like to add a metric and test the FluentD config for that.
How can I do that?
Idea:
Use docker-compose to start an EFK stack. Create a docker container that writes to stdout and simulates logs.
Problem with my idea:
The logs written by mylogger never seem to reach Elasticsearch. What am I doing wrong? mylogger is a container that simply writes to stdout in a loop, forever, using bash.
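For reproducibility, a stand-in for such a logger can be sketched directly in compose (the bash image and message shape are assumptions, not from the original post):

logger:
  image: bash:5
  command: bash -c 'while true; do echo "{\"message\":\"hello\"}"; sleep 1; done'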
docker-compose.yaml
version: '3'
services:
  logger:
    image: mylogger
    links:
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd.access
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    links:
      - "elasticsearch"
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    container_name: elasticsearch
    environment:
      - "discovery.type=single-node"
    expose:
      - "9200"
    ports:
      - "9200:9200"
  kibana:
    image: kibana:7.10.1
    links:
      - "elasticsearch"
    ports:
      - "5601:5601"
fluent.conf (this file does not yet contain the changes that I want to make):
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<label @FLUENT_LOG>
  <match *.**>
    @type copy
    <store>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
      logstash_prefix fluentd
      logstash_dateformat %Y%m%d
      include_tag_key true
      type_name access_log
      tag_key @log_name
      flush_interval 1s
    </store>
    <store>
      @type stdout
    </store>
  </match>
</label>
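One detail worth checking (an observation about fluentd routing, not from the original post): events arriving via in_forward carry the tag httpd.access but no label, while a match wrapped in <label @FLUENT_LOG> only receives fluentd's own internal logs, so forwarded events never reach it. A sketch with the wrapper removed so forwarded events are routed to Elasticsearch:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match httpd.access>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
  flush_interval 1s
</match>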

How to collect logs via fluentd in swarm mode

I'm trying to run services (mongo) in swarm mode with logs collected into elasticsearch via fluentd. It worked(!) with:
docker-compose up
But when I deploy via stack, the services start but logs are not collected, and I don't know how to find out the reason.
docker stack deploy -c docker-compose.yml env_staging
docker-compose.yml:
version: "3"
services:
  mongo:
    image: mongo:3.6.3
    depends_on:
      - fluentd
    command: mongod
    networks:
      - webnet
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: mongo
  fluentd:
    image: zella/fluentd-es
    depends_on:
      - elasticsearch
    ports:
      - 24224:24224
      - 24224:24224/udp
    networks:
      - webnet
  elasticsearch:
    image: elasticsearch
    ports:
      - 9200:9200
    networks:
      - webnet
  kibana:
    image: kibana
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
    networks:
      - webnet
networks:
  webnet:
Update:
I removed fluentd-address: localhost:24224 and the problem went away. But I don't understand what "localhost" is here. Why can't we set the host to "fluentd"? If someone explains what fluentd-address is, I will accept the answer.
fluentd-address is the address where the fluentd daemon resides (the default is localhost, so you don't need to specify it in this case).
In your case (using stack), your fluentd daemon will run on a node; you should reach that service using the name of the service (in your case fluentd; have you tried?).
Remember to add fluentd-async-connect: "true" to your options.
Reference is at:
https://docs.docker.com/config/containers/logging/fluentd/#usage
You don't need to specify fluentd-address. When you set the logging driver to fluentd, Swarm automatically discovers the nearest fluentd instance and sends all stdout of the desired container there.
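A commonly used pattern that makes the default localhost:24224 work on every node (a sketch, not from the answers above; the long port syntax requires compose file format 3.2+): deploy fluentd in global mode so one instance runs per node and binds the port directly on each host.

fluentd:
  image: zella/fluentd-es
  deploy:
    mode: global              # one fluentd task per node
  ports:
    - target: 24224
      published: 24224
      mode: host              # bind on each node directly, bypassing the routing mesh
  networks:
    - webnet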

Can't log from (fluentd) logdriver using service name in compose

I have the following setup in docker:
Application (httpd)
Fluentd
ElasticSearch
Kibana
The configuration of the log driver of the application points at the fluentd container. The logs are saved in ES and shown in Kibana.
When the log driver is configured like this, it works:
web:
  image: httpd
  container_name: httpd
  ports:
    - "80:80"
  links:
    - fluentd
  logging:
    driver: "fluentd"
    options:
      fluentd-address: localhost:24224
      tag: httpd.access
And fluentd is mapping its exposed port 24224 on port 24224 of the host.
fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
  ports:
    - "24224:24224"
But I don't want to expose my fluentd on the host network. I want to keep it 'private' inside the docker network (I only want to map the app and kibana onto the host network), like this:
fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
Port 24224 is still exposed (in the Dockerfile), but it's not mapped onto the host network. Now I want to change the log driver config of my app:
logging:
  driver: "fluentd"
  options:
    fluentd-address: fluentd:24224
    tag: httpd.access
So fluentd is the name of the fluentd container, and they are in the same network, but the app is not able to make a connection with it:
failed to initialize logging driver: dial tcp: lookup fluentd
Is this maybe because the logging option is evaluated before the links option in the compose file?
Is there a way to make this work?
This is not possible currently. The Docker daemon, which handles the log drivers, is a process running on the host machine. It is not a service in your network and is therefore unable to resolve service names to IPs. See this GitHub issue for more detailed explanations.
You will have to publish a port for this to work.
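If the goal is just to avoid exposing fluentd beyond the machine, a middle ground (a sketch): publish the port on the loopback interface only. The Docker daemon connects from the host, so localhost:24224 still works, while nothing outside the host can reach fluentd.

fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
  ports:
    - "127.0.0.1:24224:24224"   # reachable from the host only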
