I am adjusting our fluentd configuration to include a specific log file and send it to S3. The issue I am trying to wrap my head around is this: only some instance types in our datacenter will contain this specific log. Other instances will not (because they are not running the app that we are logging). How do you modify the configuration so that fluentd can handle the file existing or not existing?
So in the example input below, this log file will not be on every server instance -- that is expected. Do we have to configure security.conf to check for the file and skip it if it is missing, or will fluentd simply not include what it doesn't find?
## Inputs:
<source>
  @type tail
  path /var/log/myapp/myapp-scan.log.*
  pos_file /var/log/td-agent/myapp-scan.log.pos
  tag s3.system.security.myapp-scan
  format none
</source>
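For reference, the S3 side of this would be a match block along the following lines -- a rough sketch assuming the fluent-plugin-s3 output plugin bundled with td-agent, with placeholder credentials, bucket, and region:
<match s3.system.security.**>
  @type s3
  # placeholders: use your own credentials (or an IAM instance role), bucket, and region
  aws_key_id YOUR_AWS_KEY_ID
  aws_sec_key YOUR_AWS_SECRET_KEY
  s3_bucket my-log-bucket
  s3_region us-east-1
  path myapp-scan/
  buffer_path /var/log/td-agent/s3-myapp-scan
</match>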
Related
Trying to exclude logs using the grep filter's exclude directive.
<filter kubernetes.var.log.containers.**>
  @type grep
  <exclude>
    key kubernetes.pod_name
    pattern /^podname-*/
  </exclude>
</filter>
I tried different key names as well, e.g. container and namespace. I am trying to exclude logs from a certain pod using the pattern, but it's not working. The logs arrive via a forward source.
I want to exclude logs from certain pods that start with the same name under /var/log/containers; a sketch of a variant worth trying is below.
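For comparison, here is a hedged variant of that filter: a sketch assuming fluentd v1, where the Kubernetes metadata is a nested record field, so the grep filter needs record_accessor syntax to reach it, and where the trailing -* is replaced because it only matches zero or more literal hyphens:
<filter kubernetes.var.log.containers.**>
  @type grep
  <exclude>
    # $. record_accessor syntax reaches the nested kubernetes.pod_name field
    key $.kubernetes.pod_name
    # match pod names beginning with "podname-" followed by anything
    pattern /^podname-.*/
  </exclude>
</filter>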
I have a log file that is continuously deleted and re-created with the same structure but different data.
I'd like to use fluentd to export that file when a new version of the file is created. I tried various sets of options, but it looks like fluentd misses the updates unless I manually add some lines to the file.
Is this a use case that is supported by default sources/parsers?
Here is the config file I use:
<source>
  @type tail
  tag file.keepalive
  open_on_every_update true
  read_from_head true
  encoding UTF-8
  multiline_flush_interval 1
  ...
</source>
Try the tail plugin, but instead of specifying a path to a single file, specify a path to the parent directory, like dir/*: https://docs.fluentd.org/input/tail#path
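A sketch of that suggestion, reusing the options from the question (the directory path and pos_file location are assumptions, since the original path is elided above):
<source>
  @type tail
  # hypothetical parent directory; the glob picks up each newly created file
  path /var/log/myapp/*
  pos_file /var/log/td-agent/keepalive.pos
  tag file.keepalive
  open_on_every_update true
  read_from_head true
  encoding UTF-8
  multiline_flush_interval 1
</source>
Combined with the next suggestion of timestamped filenames, every re-created file appears as a brand-new path under the glob, so read_from_head reads it from the beginning.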
Try adding a datetime to the filename every time you recreate it; this forces fluentd to treat each new file as unseen and read it in full.
I have a specific need: how do I "import" log files I receive from someone into Graylog? My need is not about 'sending' logs or configuring a collector that will send logs to Graylog.
I need to know whether I can copy a TAR of logs onto the Graylog server and render its content via the Graylog web UI.
I have read many blogs, and I am having difficulty finding guidance for my specific need.
Your help is greatly appreciated.
As far as I know it is not possible to import logs directly, but you can use fluentd (http://www.fluentd.org/guides/recipes/graylog2) to read log files.
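If you go the fluentd route, the overall shape would be something like the sketch below. This assumes the third-party fluent-plugin-gelf output plugin; the plugin's exact name and option names should be verified against its README, and the file path and Graylog host are placeholders:
<source>
  @type tail
  # placeholder path to the unpacked log files
  path /var/log/imported/*.log
  pos_file /var/log/td-agent/imported.pos
  tag imported.logs
  format none
  read_from_head true
</source>
<match imported.logs>
  # assumption: fluent-plugin-gelf is installed; check its README for the exact options
  @type gelf
  host graylog.example.com
  port 12201
</match>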
BUT if you want to send log files from Apache to Graylog, try this: add the following lines to your apache2.conf:
LogFormat "{ \"version\": \"1.1\", \"host\": \"%V\", \"short_message\": \"%r\", \"timestamp\": %{%s}t, \"level\": 6, \"_user_agent\": \"%{User-Agent}i\", \"_source_ip\": \"%a\", \"_duration_usec\": %D, \"_duration_sec\": %T, \"_request_size_byte\": %O, \"_http_status\": %s, \"_http_request_path\": \"%U\", \"_http_request\": \"%U%q\", \"_http_method\": \"%m\", \"_http_referer\": \"%{Referer}i\" }" graylog2_access
and add the following line to your virtual host file:
CustomLog "|/bin/nc -u syslogserver.example.de 50520" graylog2_access
also take a look here: https://serverfault.com/questions/310695/sending-logs-to-graylog2-server
You could try the community edition of nxlog. With nxlog you can load your log files with im_file, parse them, and get them into GELF format, which should make them easier to search in Graylog2. If you set SavePos and ReadFromLast to FALSE, it will ingest the entire log file whenever you start nxlog, regardless of when the entries were logged, or even if they have already been entered into Graylog2 before.
I'm looking for a way to send source_hostname to the fluentd destination server.
I was using Logstash before; it has an agent/server setup, and variables are available in the Logstash server config file to get the source hostname.
I'm looking for a similar way to do it with fluentd, but the only thing I have found is setting the hostname in the source tag with "#{Socket.gethostname}". That way, however, I can't use the hostname in the path of the destination log file.
Based on: http://docs.fluentd.org/articles/config-file#embedded-ruby-code
On the server side, this is what I would like to do:
<source>
  type forward
  port 24224
  bind 192.168.245.100
</source>
<match apache.access.*>
  type file
  path /var/log/td-agent/apache2/#{hostname}/access
</match>
<match apache.error.*>
  type file
  path /var/log/td-agent/apache2/#{hostname}/error
</match>
Could someone help me achieve something like this, please?
Thank you in advance for your time.
You can have Ruby code evaluated with #{} inside a double-quoted string.
So you can change it to:
path "/var/log/td-agent/apache2/#{Socket.gethostname}/access"
(the whole value needs to be quoted for the embedded Ruby to be evaluated).
Refer to the docs: http://docs.fluentd.org/articles/config-file#embedded-ruby-code
You can also try the record-reformer plugin or the forest plugin; a sketch of the forest approach is below.
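The catch with "#{Socket.gethostname}" is that embedded Ruby is expanded when the config file is parsed, so on the aggregator it yields the aggregator's own hostname. The usual workaround is to put each client's hostname into the tag and expand the tag into the path on the server. This is a sketch only; the forest directives and placeholder syntax are from memory and should be checked against the fluent-plugin-forest README:
# on each client: embed the client's hostname in the tag (expanded at config load time)
<source>
  @type tail
  path /var/log/apache2/access.log
  tag "apache.access.#{Socket.gethostname}"
  format apache2
</source>
# on the server: forest creates one file output per incoming tag
<match apache.access.*>
  @type forest
  subtype file
  <template>
    # ${tag_parts[2]} is the hostname part of apache.access.<hostname>; verify the placeholder form
    path /var/log/td-agent/apache2/${tag_parts[2]}/access
  </template>
</match>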
When I run Flume using the command:
bin/flume-ng agent --conf conf --conf-file flume.conf --name agentName -Dflume.root.logger=INFO,console
it runs, printing all of its log output to the console. I would like to store all of this log data (Flume's own logs) in a file. How do I do that?
You need to make a custom build of Flume which uses log4j2.
You configure log4j2 to use a rolling file appender that rolls every minute (or whatever the latency is that you desire) to a spooling directory.
You configure Flume to use a SpoolingDirectorySource against that spooling directory.
You can't use a direct Flume appender (such as what's in log4j2) to log Flume because you will get into deadlock.
You can't use log4j1 with a rolling file appender because it has a concurrency defect which means it may write new messages to an old file and the SpoolingDirectorySource then fails.
I can't remember if I tried the Log4j appender from Flume with this setup. That appender does not have many ways to configure it and I think it will cause you problems if the subsequent agent you're trying to talk to is down.
Another approach might be to patch log4j1 and fix that concurrency defect (there's a variable that needs to be made volatile).
(Yes, setting this up is a little frustrating!)
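For the SpoolingDirectorySource half of that setup, the Flume config fragment would look roughly like this (a sketch: the agent name is taken from the question's command line, the spool and output directories are placeholders, and the log4j2 side just has to roll completed files into the same spooling directory):
# flume.conf fragment: read Flume's own rolled log files back in from a spooling directory
agentName.sources = spool
agentName.channels = mem
agentName.sinks = archive

agentName.sources.spool.type = spooldir
agentName.sources.spool.spoolDir = /var/log/flume-spool
agentName.sources.spool.channels = mem

agentName.channels.mem.type = memory
agentName.channels.mem.capacity = 10000

# placeholder sink: roll the collected logs into files; point this wherever the data should end up
agentName.sinks.archive.type = file_roll
agentName.sinks.archive.sink.directory = /var/log/flume-archive
agentName.sinks.archive.channel = mem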
Don't run with -Dflume.root.logger=INFO,console; Flume will then write its logs to ./logs.
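In other words, start it like this and the logging settings come from conf/log4j.properties, which by default writes flume.log under ./logs:
bin/flume-ng agent --conf conf --conf-file flume.conf --name agentName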