Send Fluentd logs to OpenSearch installed on a different machine: "Could not communicate to OpenSearch, resetting connection and trying again. [302]" - fluentd

I've been trying to send Fluentd logs to OpenSearch; the two are installed on different machines.
The match clause in fluentd.conf is the following:
<match **>
  @type copy
  <store>
    @type forward
    @id forward_output
    <server>
      name TisaOS
      host private_ip
      port 24224
    </server>
    <buffer tag>
      flush_interval 1s
    </buffer>
    <secondary>
      @type opensearch
      host public_ip
      port 5601
      ssl_verify false
      user admin
      password admin
      index_name fluentd
    </secondary>
  </store>
  <store>
    @type stdout
  </store>
</match>
I can access OpenSearch in the browser at private_ip:port.
I've been trying for a while; some help would be very much appreciated!
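An editorial note on the quoted error: a [302] status is an HTTP redirect, which is what a browser-facing endpoint typically returns. Port 5601 is the default port of OpenSearch Dashboards (the web UI), while the OpenSearch REST API that the fluent-plugin-opensearch output talks to listens on port 9200 by default. A minimal sketch of the <secondary> block pointed at the API port, assuming default ports and the bundled security plugin (hence https and the admin credentials):
<secondary>
  @type opensearch
  host public_ip
  port 9200           # OpenSearch REST API default; 5601 is Dashboards
  scheme https        # assumption: the security plugin is enabled
  ssl_verify false
  user admin
  password admin
  index_name fluentd
</secondary>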

Related

How to use variables or tags in the fluentd config?

This is the config on the host server. I need some way, with the two servers, to put the logs in /tmp/task/<hostname>/<file_name>, for example /tmp/task/app1/auth.log or /tmp/task/app2/auth.log.
On servers app1 and app2, all messages are tagged <hostname>.var.log.*, where * is the file name and <hostname> is the host the logs came from.
<source>
  @type forward
</source>
<match *.localfile>
  @type copy
  <store>
    @type file
    path /tmp/task/*
    <buffer>
      timekey 1m
    </buffer>
  </store>
</match>
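One approach (an editorial sketch, not from the original thread) is to use buffer chunk-key placeholders: when tag is listed in the <buffer> section, the tag parts become available as ${tag[n]} in path. Assuming tags of the form app1.var.log.auth, the hostname is ${tag[0]} and the file name is ${tag[3]}; note that out_file appends its own chunk suffixes to the path:
<source>
  @type forward
</source>
<match *.var.log.*>
  @type file
  # for a tag like app1.var.log.auth this expands to /tmp/task/app1/auth.<time>
  path /tmp/task/${tag[0]}/${tag[3]}.%Y%m%d%H%M
  <buffer tag, time>
    timekey 1m
  </buffer>
</match>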

FluentD: forward logs from Kafka to another FluentD

I need to send my application logs to a FluentD that is part of an EFK stack, so I tried to configure another FluentD to do that.
my-fluent.conf:
<source>
  @type kafka_group
  consumer_group cgrp
  brokers "#{ENV['KAFKA_BROKERS']}"
  scram_mechanism sha512
  username "#{ENV['KAFKA_USERNAME']}"
  password "#{ENV['KAFKA_PASSWORD']}"
  ssl_ca_certs_from_system true
  topics "#{ENV['KAFKA_TOPICS']}"
  format json
</source>
<filter TOPIC>
  @type parser
  key_name log
  reserve_data false
  <parse>
    @type json
  </parse>
</filter>
<match TOPIC>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type forward
    <server>
      host "#{ENV['FLUENTD_HOST']}"
      port "#{ENV['FLUENTD_PORT']}"
      shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
    </server>
  </store>
</match>
I am able to see the stdout output correctly:
2021-07-06 07:36:54.376459650 +0000 TOPIC: {"foo":"bar", ...}
But I'm unable to see the logs in Kibana. After tracing, I figured out that the second FluentD throws an error when receiving data:
{"time":"2021-07-05 11:21:41 +0000","level":"error","message":"unexpected error on reading data host="X.X.X.X" port=58548 error_class=MessagePack::MalformedFormatError error="invalid byte"","worker_id":0}
{"time":"2021-07-05 11:21:41 +0000","level":"error","worker_id":0,"message":"/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin/in_forward.rb:262:in feed_each'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin/in_forward.rb:262:in block (2 levels) in read_messages'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin/in_forward.rb:271:in block in read_messages'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin_helper/server.rb:613:in on_read_without_connection'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/io.rb:123:in on_readable'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/io.rb:186:in on_readable'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/loop.rb:88:in run_once'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/loop.rb:88:in run'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin_helper/event_loop.rb:93:in block in start'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin_helper/thread.rb:78:in block in thread_create'"}
The problem was a missing <security> section in the first FluentD:
<match TOPIC>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type forward
    <server>
      host "#{ENV['FLUENTD_HOST']}"
      port "#{ENV['FLUENTD_PORT']}"
      shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
    </server>
    <security>
      self_hostname HOSTNAME
      shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
    </security>
  </store>
</match>
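For completeness (an editorial addition, not part of the original answer): when shared_key is set, the forward protocol performs a handshake, so the receiving side's in_forward must present a matching <security> section as well. Roughly:
<source>
  @type forward
  port 24224
  <security>
    self_hostname "#{Socket.gethostname}"
    shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
  </security>
</source>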

Real-time data syncing using fluentd and elasticsearch

I am syncing data from Kafka to Elasticsearch using fluentd, but fluentd takes 60 seconds to sync the data to Elasticsearch. I want real-time syncing. Is there any configuration parameter I have to include?
I have tried:
<source>
  @type kafka
  brokers localhost:9092
  topics xxx
</source>
<match xxx>
  @type elasticsearch
  scheme http
  port 9200
  <buffer tag>
    @type memory
    flush_thread_count 4
  </buffer>
</match>
We use the flush_interval parameter, like this:
<buffer>
  flush_interval 5s
  flush_thread_count 4
</buffer>
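The 60-second delay described in the question matches the default flush_interval of a buffered output, which is 60s, so lowering it is the right lever. A sketch of the full match block with the answer's buffer settings applied (host localhost assumed, as implied by the question's config):
<match xxx>
  @type elasticsearch
  host localhost
  scheme http
  port 9200
  <buffer tag>
    @type memory
    flush_interval 5s     # default is 60s, which explains the observed delay
    flush_thread_count 4
  </buffer>
</match>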

Which Fluentd plugin should I use to listen for application logs: in_tcp or in_forward?

I have a Flask app which streams some logs to stdout on localhost:5555.
I want to listen to these logs with a dockerized Fluentd, but I'm a bit confused about which plugin I should use: in_tcp or in_forward?
A config like this results in the error "Address not available - bind(2) for \"my_ip\" port 5555":
<source>
  @type tcp
  tag "tcp.events"
  format none
  bind my_ip
  port 5555
  @log_level debug
</source>
<filter **>
  @type stdout
</filter>
Config examples for in_forward always have port 24224, so they seem to listen for other fluentds, not for an application.
Could you please advise?
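An editorial aside on the quoted bind error: inside a Docker container the host machine's IP is not a local interface, so bind(2) fails with "Address not available". If the in_tcp route were taken, the source would need to bind 0.0.0.0 and have the port published to the container, roughly:
<source>
  @type tcp
  tag "tcp.events"
  bind 0.0.0.0          # listen on all interfaces inside the container
  port 5555
  <parse>
    @type none
  </parse>
</source>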
For those who follow:
Use the fluent-logger library for your language to export your logs to the Fluentd server.
Here are all the links:
https://github.com/fluent
Fluentd server config:
<source>
  @type forward
  port 24224
  bind 0.0.0.0          # in_forward uses bind, not host; set it when listening for remote senders
</source>
<filter **>
  @type stdout
</filter>

How to send a tag when using the server attribute in fluentd

I am very new to fluentd, so this may be a very basic question.
I want to send the data from one fluentd to another directly (using the <server> attribute) instead of writing to the file system, but I am not able to find a way to send the tag with the <server> attribute.
What I've tried is:
<match testString>
  type forward
  buffer_chunk_limit 1m
  buffer_queue_limit 6000
  flush_interval 5s
  flush_at_shutdown true
  heartbeat_type tcp
  heartbeat_interval 3s
  num_threads 50
  <server>
    host **.**.**.****
    port ******
    tag testTagName
  </server>
</match>
But when I run the config, it gives me:
2016-03-11 13:33:41 +0000 [warn]: parameter 'tag' in <server>
host **.**.**.***
port *****
tag testTagName
</server> is not used.
I don't think tag will work in the <server> attribute.
Instead, you can forward logs to the remote fluentd-aggregator at port 24224, and there you can use tag in the <source> section of the fluentd-aggregator's config file.
fluentd-forwarder.conf
<match testString>
  type forward
  buffer_chunk_limit 1m
  buffer_queue_limit 6000
  flush_interval 5s
  flush_at_shutdown true
  heartbeat_type tcp
  heartbeat_interval 3s
  num_threads 50
  <server>
    host **.**.**.****
    port 24224
  </server>
</match>
fluentd-aggregator.conf
<source>
  @type forward
  port 24224
  tag testTagName
</source>
<match testTagName>
  ...
</match>
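Worth noting as an editorial aside, not from the original answer: the forward protocol already carries each event's tag, so events arrive at the aggregator still tagged testString and can simply be matched on that tag. If they really must be retagged on the aggregator, the fluent-plugin-rewrite-tag-filter plugin can do it, along these lines:
<match testString>
  @type rewrite_tag_filter
  <rule>
    key message           # hypothetical field; any key present in the event works
    pattern /.*/
    tag testTagName
  </rule>
</match>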
