Fluentd removes log entry after applying json parser - docker

I have two Docker containers using Fluentd as the log driver. Both send valid JSON messages. Here are examples:
{"tag":"docker/article-api","log":"{\"level\":\"debug\",\"port\":\":80\",\"time\":\"2020-02-17T17:06:46Z\",\"message\":\"starting the server\"}"}
{"log":"{\"level\":\"info\",\"ts\":1581959205.461808,\"caller\":\"apiserv/main.go:69\",\"msg\":\"Service is ready to listen\"}","tag":"docker/user-api"}
They are quite different, but I am sure both are valid.
As we use Stackdriver logging, I'd like to add a "severity" field equal to the value of "level".
Here's the part of the config file that creates all the confusion:
<filter **>
  @type parser
  key_name log
  replace_invalid_sequence true
  <parse>
    @type json
  </parse>
</filter>
And here's the problem itself: after passing through the filter, the first log entry is removed completely, while the second one passes through.
I've tried to specify time_format, but it doesn't seem to work at all.
Aside from that, I've tried using <filter docker**>, but that removes all the useful entries instead. This may be unrelated, but if you have an idea of what causes it, I'd appreciate it.
Thank you in advance
P.S. I'm using the google-fluentd service, in case it makes a difference.
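No answer is recorded for this one, but for the severity half of the question, here is a minimal sketch using the stock record_transformer filter plugin, placed after the parser filter above. The catch-all match pattern is an assumption; scope it to your tags as needed.
<filter **>
  @type record_transformer
  <record>
    # copy the parsed "level" field into "severity" for Stackdriver
    severity ${record["level"]}
  </record>
</filter>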

Related

Seeing Bad parsing rule for Jenkins Log parser plugin

I am trying to use the Log Parser Plugin with Jenkins. The following is my rules file, which I have taken from the sample given at the link.
# match line starting with 'error', case-insensitive
error /(?i)^error/
# list of warnings here...
warning /[Ww]arning/
warning /WARNING/
# create a quick access link to lines in the report containing 'INFO'
info /INFO/
# each line containing 'BUILD' represents the start of a section for grouping errors and warnings found after the line.
# also creates a quick access link.
start /BUILD/
I still see the following at the end of the Parsed Console Output page:
NOTE: Some bad parsing rules have been found:
Bad parsing rule: , Error:1
Bad parsing rule: , Error:1
Bad parsing rule: , Error:1
I did come across this, but it didn't help, as I am not using a space anywhere.
Can someone help me resolve this issue?
It appears you have extra whitespace somewhere in the file that the plugin is interpreting as an attempt to define a rule. Maybe try running it with the empty lines removed. That plugin has given me quite a bit of trouble as well; it's not very well documented (as is the case with many Jenkins plugins).
I had tried no spaces in the pattern, but that did not work. It turns out that the parsing rules file does not support empty lines. Once I removed the empty lines, I no longer got the "Bad parsing rule: , Error:1" messages.
I think that since the line is empty, no rule is echoed after the colon. It would have been nice if the line number of the problem had been reported.
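For reference, here is the same rules file with every blank line stripped, which is the form the plugin accepts (assuming the original file had blank lines between the groups, since the rendering above may have collapsed them):
# match line starting with 'error', case-insensitive
error /(?i)^error/
warning /[Ww]arning/
warning /WARNING/
info /INFO/
start /BUILD/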
I posted the same to this thread too - Log parsing rules in Jenkins
Hopefully, it helps out other folks who may be wondering what is causing this.

log4j2 xml Configuration tag dest attribute example

Does anyone have a working example that uses the dest attribute to write log4j2's internal debug information to a file? I am trying to troubleshoot configuration problems. The documentation shows this attribute can be set to err, a URL, or a file path, along with setting status to the level of output to produce, but I am not able to find a working example anywhere for this particular attribute.
Presently my xml for the Configuration element is as follows:
<Configuration status="DEBUG" dest="${sys:catalina.home}/logs/log4jdebug.log">
But, alas, no log4jdebug.log file appears alongside the files my RollingFile appender creates in the same path.
Thanks!
I have not found a specific example, but you can obtain equivalent functionality by putting
dest="err"
which sends the status output to standard error. With a Tomcat web app, for example, the output from log4j ends up in catalina.out.
I was able to troubleshoot my configuration by setting status="debug" along with dest="err".
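Putting the two together, here is a minimal log4j2.xml sketch that turns on internal diagnostics and sends them to standard error; the appender, pattern, and logger levels are placeholders, not part of the original question:
<?xml version="1.0" encoding="UTF-8"?>
<!-- status="debug" enables log4j2's internal diagnostics; dest="err" sends them to stderr -->
<Configuration status="debug" dest="err">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %-5p %c - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>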

How to separate logs of uWSGI?

I want to separate uWSGI's logs (access logs, request logs, error logs) into individual files. At the moment they all end up in the same file and are not well formatted.
There are configuration directives to specify different loggers for requests and all other messages: logger and req-logger. Example:
# uwsgi.ini
req-logger = file:/var/log/uwsgi/uwsgi-req.log
logger = file:/var/log/uwsgi/uwsgi.log
If you want nondefault formatting, filtering, or a peculiar output location, you could write your own logging plugin. Here's a link to the relevant page: http://uwsgi-docs.readthedocs.org/en/latest/Logging.html
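If all you need is a different request-log layout rather than a full plugin, uWSGI also has a log-format option; a sketch, with the paths and the format string as assumptions:
# uwsgi.ini
req-logger = file:/var/log/uwsgi/uwsgi-req.log
logger = file:/var/log/uwsgi/uwsgi.log
# log-format applies to the request logger only
log-format = %(addr) [%(ltime)] "%(method) %(uri)" %(status) %(size)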

Is there a way to send files AS-IS via Fluentd?

I'm trying to use Fluentd to aggregate log files from various servers. By default it parses the log lines in various ways (and I can see the value in doing that), but in my current situation I would like to send the files AS-IS, without parsing and without changing a thing.
I'm using the in_tail plugin with the following configuration:
<source>
  type tail
  format none
  read_from_head true
  path /path/to/logs/*.log
  pos_file /path/to/logs/pos_file
  tag mylog
</source>
And even this none format parses the logs. For example
I am a line of log
gets parsed as
{"message":"hello world. I am a line of log!"}
I guess the question is: is there a way for it to send the tailed content without altering anything?
Thanks!
Well, all messages in Fluentd are handled as JSON objects internally, but what you could do is match with a file output (out_file) on the receiving end; that would essentially recreate log files there with the same content as the source.
http://docs.fluentd.org/articles/out_file
You could even "hack" it to output with the csv format and set the delimiter to a space. That could also work.
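To make the out_file suggestion concrete, here is a sketch of the receiving end in the same classic syntax the question uses; single_value is a stock formatter that writes only the message field, which recovers the original lines (the output path is an assumption):
<match mylog>
  type file
  path /path/to/output/mylog
  # single_value drops the JSON wrapping and writes just the "message" value
  format single_value
</match>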

Indexing and Parsing XML files with ElasticSearch

I need to index multiple XML files under multiple directories into Elasticsearch and parse them into JSON format, possibly adding some tags. Can this be done with Elasticsearch and Logstash, and if so, how?
Thank you!
It is possible. Point Logstash at your XML files and tag different files differently to determine how they will be handled down the road. Inside Logstash, you can set up filters to add tags and other fields, and in the output portion you can specify which events get added to which index inside Elasticsearch.
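A sketch of what such a pipeline could look like; every path, tag, and index name here is an assumption, and multi-line XML documents would additionally need a multiline codec on the input:
# logstash.conf
input {
  file {
    path => "/data/feeds/**/*.xml"
    tags => ["xml-feed"]
    start_position => "beginning"
  }
}
filter {
  # parse the raw XML in "message" into a nested field
  xml {
    source => "message"
    target => "doc"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "xml-%{+YYYY.MM.dd}"
  }
}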
