I am syncing data from Kafka to Elasticsearch using Fluentd, but Fluentd takes 60 seconds to sync the data to Elasticsearch. I want real-time data syncing. Is there a configuration parameter I have to include?
I have tried:
<source>
@type kafka
brokers localhost:9092
topics xxx
</source>
<match xxx>
@type elasticsearch
scheme http
port 9200
<buffer tag>
@type memory
flush_thread_count 4
</buffer>
</match>
Use the flush_interval parameter, like this:
<buffer>
flush_interval 5s
flush_thread_count 4
</buffer>
I've been trying to send Fluentd logs to OpenSearch; the two are installed on two different machines.
The match clause in fluentd.conf is the following:
<match **>
@type copy
<store>
@type forward
@id forward_output
<server>
name TisaOS
host private_ip
port 24224
</server>
<buffer tag>
flush_interval 1s
</buffer>
<secondary>
@type opensearch
host public_ip
port 5601
ssl_verify false
user admin
password admin
index_name fluentd
</secondary>
</store>
<store>
@type stdout
</store>
</match>
I can access OpenSearch in the browser at private_ip:port.
I've been trying for a while; any help would be much appreciated!
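For reference, the opensearch output plugin talks to the OpenSearch REST API, which normally listens on port 9200; 5601 is the Dashboards UI port, which the plugin cannot index into. A sketch of a direct opensearch store, assuming the API is reachable on 9200 over HTTPS (both values are assumptions, not taken from the original config):

<store>
@type opensearch
host public_ip
port 9200
scheme https
ssl_verify false
user admin
password admin
index_name fluentd
</store>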
This is the config on the host server. I need some way, across the two servers, to put the logs in /tmp/task/<hostname>/<file_name>, for example /tmp/task/app1/auth.log or /tmp/task/app2/auth.log.
On servers app1 and app2 all messages are tagged <hostname>.var.log.*, where * is the file name and <hostname> is the source of the logs.
<source>
@type forward
</source>
<match *.localfile>
@type copy
<store>
@type file
path /tmp/task/*
<buffer>
timekey 1m
</buffer>
</store>
</match>
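If the hostname really is the first tag part (e.g. app1.var.log.auth), the file output can build the directory layout from tag placeholders, provided the buffer is keyed on tag. A sketch under that assumed tag layout:

<match *.var.log.*>
@type file
path /tmp/task/${tag[0]}/${tag[3]}
append true
<buffer tag>
timekey 1m
</buffer>
</match>

Here ${tag[0]} expands to the hostname (app1, app2) and ${tag[3]} to the file name (auth), giving paths like /tmp/task/app1/auth.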
I need to send my application logs to a Fluentd instance that is part of an EFK stack, so I tried to configure another Fluentd to do that.
my-fluent.conf:
<source>
@type kafka_group
consumer_group cgrp
brokers "#{ENV['KAFKA_BROKERS']}"
scram_mechanism sha512
username "#{ENV['KAFKA_USERNAME']}"
password "#{ENV['KAFKA_PASSWORD']}"
ssl_ca_certs_from_system true
topics "#{ENV['KAFKA_TOPICS']}"
format json
</source>
<filter TOPIC>
@type parser
key_name log
reserve_data false
<parse>
@type json
</parse>
</filter>
<match TOPIC>
@type copy
<store>
@type stdout
</store>
<store>
@type forward
<server>
host "#{ENV['FLUENTD_HOST']}"
port "#{ENV['FLUENTD_PORT']}"
shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
</server>
</store>
</match>
I can see the stdout output correctly:
2021-07-06 07:36:54.376459650 +0000 TOPIC: {"foo":"bar", ...}
But I'm unable to see the logs in Kibana. After tracing, I figured out that the second Fluentd throws an error when receiving data:
{"time":"2021-07-05 11:21:41 +0000","level":"error","message":"unexpected error on reading data host=\"X.X.X.X\" port=58548 error_class=MessagePack::MalformedFormatError error=\"invalid byte\"","worker_id":0}
{"time":"2021-07-05 11:21:41 +0000","level":"error","worker_id":0,"message":"/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin/in_forward.rb:262:in feed_each'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin/in_forward.rb:262:in block (2 levels) in read_messages'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin/in_forward.rb:271:in block in read_messages'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin_helper/server.rb:613:in on_read_without_connection'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/io.rb:123:in on_readable'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/io.rb:186:in on_readable'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/loop.rb:88:in run_once'\n/usr/lib/ruby/gems/2.7.0/gems/cool.io-1.7.1/lib/cool.io/loop.rb:88:in run'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin_helper/event_loop.rb:93:in block in start'\n/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.12.2/lib/fluent/plugin_helper/thread.rb:78:in block in thread_create'"}
The problem was a missing <security> section in the first Fluentd's forward output.
<match TOPIC>
@type copy
<store>
@type stdout
</store>
<store>
@type forward
<server>
host "#{ENV['FLUENTD_HOST']}"
port "#{ENV['FLUENTD_PORT']}"
shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
</server>
<security>
self_hostname HOSTNAME
shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
</security>
</store>
</match>
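For completeness, the shared_key handshake has to be configured on both ends: the receiving Fluentd's forward input needs a matching <security> section, roughly like this (hostname resolution shown here is an illustrative choice, not from the original config):

<source>
@type forward
port 24224
<security>
self_hostname "#{Socket.gethostname}"
shared_key "#{ENV['FLUENTD_SHARED_KEY']}"
</security>
</source>

If the keys don't match, or only one side has the <security> block, in_forward rejects the handshake and you see MessagePack parse errors like the one above.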
I have Fluentd + InfluxDB + Graphite + Grafana.
I need to apply math operations to numeric data, but InfluxDB and Grafana treat my numeric data as strings, so I can't compare values in WHERE statements or apply coloring in Grafana.
How can I set the data type?
My configuration is like this:
<source>
@type http
port 12102
format tsv
keys string1,string2,number1,number2
delimiter |
</source>
<match test>
@type copy
<store>
@type graphite
tag_for prefix
name_keys number1,number2
host localhost
port 2003
</store>
<store>
@type influxdb
dbname test
flush_interval 10s
host localhost
port 8086
</store>
</match>
And the input is like this:
curl -X POST -d "text1|text2|764.2|57" "http://localhost:12102/test?time=1461940658"
On the Graphite side everything works fine.
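One way to keep the numbers numeric is to cast them at parse time: Fluentd's parser supports a types parameter that converts named fields to float/integer before the record reaches the outputs. A sketch of the idea applied to the source above (depending on your Fluentd version, this parameter may need to live inside a <parse> section instead):

<source>
@type http
port 12102
format tsv
keys string1,string2,number1,number2
types number1:float,number2:integer
delimiter |
</source>

With the cast in place, InfluxDB receives number1 and number2 as numeric field values rather than strings, so WHERE comparisons and Grafana thresholds work.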
I am very new to Fluentd, so this may be a very basic question.
I want to send data from one Fluentd directly to another (using the <server> directive) instead of writing to the file system, but I am not able to find a way to send the tag with the <server> directive.
What I've tried is:
<match testString>
type forward
buffer_chunk_limit 1m
buffer_queue_limit 6000
flush_interval 5s
flush_at_shutdown true
heartbeat_type tcp
heartbeat_interval 3s
num_threads 50
<server>
host **.**.**.****
port ******
tag testTagName
</server>
</match>
But when I run the config it gives me:
2016-03-11 13:33:41 +0000 [warn]: parameter 'tag' in <server>
host **.**.**.***
port *****
tag testTagName
</server> is not used.
I don't think tag works inside the <server> directive.
Instead, you can forward logs to the remote fluentd aggregator at port 24224 and use tag in the <source> directive of the aggregator's config file.
fluentd-forwarder.conf:
<match testString>
type forward
buffer_chunk_limit 1m
buffer_queue_limit 6000
flush_interval 5s
flush_at_shutdown true
heartbeat_type tcp
heartbeat_interval 3s
num_threads 50
<server>
host **.**.**.****
port 24224
</server>
</match>
fluentd-aggregator.conf
<source>
@type forward
port 24224
tag testTagName
</source>
<match testTagName>
...
</match>