I need to send my application logs to a Fluentd instance that is part of an EFK stack, so I tried to configure another Fluentd to do that.
my-fluent.conf:
<source>
@type kafka_group
consumer_group cgrp
brokers "#{ENV['KAFKA_BROKERS']}"
scram_mechanism sha512
username "#{ENV['KAFKA_USERNAME']}"
password "#{ENV['KAFKA_PASSWORD']}"
ssl_ca_certs_from_system true
topics "#{ENV['KAFKA_TOPICS']}"
format json
</source>
<filter TOPIC>
@type parser
key_name log
reserve_data false
<parse>
@type json
</parse>
</filter>
<match TOPIC>
@type copy
<store>
@type stdout
</store>
<store>
@type forward
<server>
host "#{ENV['FLUENTD_HOST']}"
port "#{ENV['FLUENTD_PORT']}"
shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
</server>
</store>
</match>
I can see the stdout output correctly:
2021-07-06 07:36:54.376459650 +0000 TOPIC: {"foo":"bar", ...}
But I'm unable to see the logs in Kibana. After tracing, I figured out that the second Fluentd throws an error when receiving the data:
{"time":"2021-07-05 11:21:41 +0000","level":"error","message":"unexpected error on reading data host="X.X.X.X" port=58548 error_class=MessagePack::MalformedFormatError error="invalid byte"","worker_id":0}
{"time":"2021-07-05 11:21:41 +0000","level":"error","worker_id":0,"message":"/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin/in_forward.rb:262:in feed_each'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin/in_forward.rb:262:in block (2 levels) in read_messages'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin/in_forward.rb:271:in block in read_messages'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin_helper/server.rb:613:in on_read_without_connection'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/io.rb:123:in on_readable'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/io.rb:186:in on_readable'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/loop.rb:88:in run_once'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/loop.rb:88:in run'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin_helper/event_loop.rb:93:in block in start'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin_helper/thread.rb:78:in block in thread_create'"}
The problem was a missing <security> section in the forward output of the first Fluentd:
<match TOPIC>
@type copy
<store>
@type stdout
</store>
<store>
@type forward
<server>
host "#{ENV['FLUENTD_HOST']}"
port "#{ENV['FLUENTD_PORT']}"
shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
</server>
<security>
self_hostname HOSTNAME
shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
</security>
</store>
</match>
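For completeness: when the forward output sets a shared_key, the receiving Fluentd's in_forward input must carry a matching <security> section too, or it will reject the handshake. A sketch of the receiver side, assuming the same environment variables are available there:

```
<source>
  @type forward
  port 24224
  <security>
    self_hostname "#{ENV['HOSTNAME']}"
    shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
  </security>
</source>
```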
Related
I've been trying to send Fluentd logs to OpenSearch; the two are installed on different machines.
The match clause in fluentd.conf is the following:
<match **>
@type copy
<store>
@type forward
@id forward_output
<server>
name TisaOS
host private_ip
port 24224
</server>
<buffer tag>
flush_interval 1s
</buffer>
<secondary>
@type opensearch
host public_ip
port 5601
ssl_verify false
user admin
password admin
index_name fluentd
</secondary>
</store>
<store>
@type stdout
</store>
</match>
I can access OpenSearch in the browser at private_ip:port.
I've been trying for a while; some help would be very much appreciated!
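Two things in the config above are worth noting: <secondary> only receives chunks after the primary output has exhausted all its retries, so it is not a second destination, and port 5601 is OpenSearch Dashboards rather than the OpenSearch REST API (9200 by default). A sketch that writes to OpenSearch as its own store instead, assuming fluent-plugin-opensearch is installed and the API is reachable on 9200:

```
<match **>
  @type copy
  <store>
    @type opensearch
    host public_ip
    port 9200            # REST API port; 5601 is Dashboards
    ssl_verify false
    user admin
    password admin
    index_name fluentd
  </store>
  <store>
    @type stdout
  </store>
</match>
```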
My task is:
to collect nginx's access.log
to split the nginx logs by response code
if the response code is 1xx to 3xx, write to /tmp/1xx-3xx
if the response code is 4xx to 5xx, write to /tmp/4xx-5xx
Since I'm new to Fluentd, I can't figure out where my error is.
td-agent.conf:
<source>
@type tail
@id input_tail
<parse>
@type nginx
</parse>
path /var/log/nginx/access.log
tag nginx
</source>
<match nginx>
@type rewrite_tag_filter
<rule>
key code
pattern /([1-5][0-9]{2})/
tag nginx.$1
</rule>
</match>
<match {nginx.4**,nginx.5**}>
@type file
path /tmp/4xx-5xx
</match>
<match {nginx.1**,nginx.2**,nginx.3**}>
@type file
path /tmp/1xx-3xx
</match>
The problem turned out to be a permissions error on /var/log/nginx/access.log.
Solution:
chmod 0644 /var/log/nginx/access.log
This is the config on the central server. I need some way, together with the two app servers, to put the logs in /tmp/task/<hostname>/<file_name>, for example /tmp/task/app1/auth.log or /tmp/task/app2/auth.log.
On servers app1 and app2, all messages are tagged <hostname>.var.log.*, where * is the file name and <hostname> is the hostname of the log source.
<source>
@type forward
</source>
<match *.localfile>
@type copy
<store>
@type file
path /tmp/task/*
<buffer>
timekey 1m
</buffer>
</store>
</match>
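out_file cannot expand a bare * in path, but it can expand tag parts via ${tag[n]} placeholders when the buffer is chunked by tag. Assuming tags really look like app1.var.log.auth, a sketch like the following would write under /tmp/task/app1/auth… (the match pattern and the tag indices are assumptions based on the tag layout described above, and out_file still adds its usual time/suffix decoration to the file name):

```
<match *.var.log.*>
  @type file
  path /tmp/task/${tag[0]}/${tag[3]}
  append true
  <buffer tag,time>
    timekey 1m
  </buffer>
</match>
```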
I'm trying to extend the configuration someone else made on a server:
#input from collectd over http
<source>
@type http
port 26001
bind 127.0.0.1
</source>
# This actually does other stuff, just changed to file for debugging
# I cannot change anything here on the final result
<match td-agent.*>
@type file
path /var/log/fluent/myapp2
compress gzip
<buffer>
timekey 1d
timekey_use_utc true
timekey_wait 1m
</buffer>
</match>
My requirement is to also forward everything out, without changing the existing configuration much.
I tried something like this: send everything from the source to an intermediate label that copies everything to my BACKUP label and then re-emits the events, so that the <match td-agent.*> block (which is the entry point for much more complex logic) can still execute:
#input from collectd over http
<source>
@type http
port 26001
bind 127.0.0.1
@label @MULTIPLEX # added this label
</source>
# This label is meant to simply copy everything to BACKUP, and then remit so that original match rule can run
<label @MULTIPLEX>
<match **>
@type copy
<store>
@type relabel
@label @BACKUP
</store>
# Dummy rule that simply copies everything again
<store>
@type rewrite_tag_filter
<rule>
key plugin
pattern /.*/
tag ${tag}
</rule>
</store>
</match>
</label>
# This will actually forward everything out
<label @BACKUP>
<match **>
@type file
path /var/log/fluent/myapp
compress gzip
<buffer>
timekey 1d
timekey_use_utc true
timekey_wait 1m
</buffer>
</match>
</label>
# This Actually does other stuff, just changed to file for debugging
<match td-agent.*>
@type file
path /var/log/fluent/myapp2
compress gzip
<buffer>
timekey 1d
timekey_use_utc true
timekey_wait 1m
</buffer>
</match>
But only the output from the BACKUP label is working!
I suspect this is because my dummy rewrite_tag_filter is still emitting the events with the label attached. If so, how can I remove the label? If not, what am I missing?
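For what it's worth: events re-emitted by rewrite_tag_filter stay inside the label they were emitted in, so they are caught by the same <match **> again and never fall through to the root-level <match td-agent.*>; Fluentd has no built-in way to strip a label from an event. The usual pattern is to relabel each branch explicitly, which does mean wrapping the untouched match block in a label of its own (that wrapping is an assumption about what is allowed to change):

```
<label @MULTIPLEX>
  <match **>
    @type copy
    <store>
      @type relabel
      @label @BACKUP
    </store>
    <store>
      @type relabel
      @label @MAIN
    </store>
  </match>
</label>

<label @MAIN>
  # original <match td-agent.*> block goes here, unchanged
</label>
```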
I have a Flask app which streams some logs to stdout on localhost:5555.
I want to read these logs with a dockerized Fluentd, but I'm a bit confused about which plugin I should use: in_tcp or in_forward?
A config like this results in the error: "Address not available - bind(2) for \"my_ip\" port 5555"
<source>
@type tcp
tag "tcp.events"
format none
bind my_ip
port 5555
@log_level debug
</source>
<filter **>
@type stdout
</filter>
Config examples for in_forward always use port 24224, so it seems to listen for other Fluentd instances rather than for an application.
Could you please advise?
For those who come after:
Use the fluent-logger library for your language (e.g. fluent-logger-python) to export your logs to the Fluentd server.
Here are all the links:
https://github.com/fluent
Fluentd server config
<source>
@type forward
port 24224
host <if remote>
</source>
<filter **>
@type stdout
</filter>