I am trying to set up json logging for log4j2 described here:
https://logging.apache.org/log4j/2.x/manual/json-template-layout.html
However, the format of the output is not matching what I expect. This is what I am getting:
{"#version":1,"source_host":"localhost","message":"hello world","thread_name":"main","#timestamp":"2021-08-17T15:44:54.948-04:00","level":"INFO","logger_name":"com.logging.test.LoggingTest"}
At first I created my own template but this wasn't working so I set it to the logstash one described in the docs:
<JsonTemplateLayout eventTemplateUri="classpath:LogstashJsonEventLayoutV1.json"/>
I am not getting the line number in the output, or many of the other fields. I know it is picking up the eventTemplateUri attribute, because if I set it to a value I know doesn't exist I get an exception on startup.
I am using log4j-slf4j-impl, does anything special need to be done to make it work with this?
Thanks
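For reference, the line number comes from the event's source location, which JsonTemplateLayout only resolves when location info is enabled (it is off by default because capturing it is expensive). A minimal sketch of the appender configuration, assuming the locationInfoEnabled attribute from the layout's documentation:

```xml
<Appenders>
  <Console name="Console" target="SYSTEM_OUT">
    <!-- Location info (class, method, file, line) is disabled by default;
         enabling it lets the Logstash template's source fields populate. -->
    <JsonTemplateLayout eventTemplateUri="classpath:LogstashJsonEventLayoutV1.json"
                        locationInfoEnabled="true"/>
  </Console>
</Appenders>
```

If you use async loggers, you may additionally need includeLocation="true" on the logger configuration for the location data to be captured at all.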
I am working on building a replacement to MIRTH and it looks like we are sending out non-standard HL7 ORU_R01 messages. OBR.5 should be just a single field but looks like we are sending a bunch of other data in this section.
<OBR.5>
<OBR.5.1>XXXX</OBR.5.1>
<OBR.5.2>XXXX</OBR.5.2>
<OBR.5.3>XXXXX</OBR.5.3>
<OBR.5.5>XXXXX</OBR.5.5>
<OBR.5.6>XXXX</OBR.5.6>
<OBR.5.7/>
<OBR.5.8>XXXXXXXXXX</OBR.5.8>
<OBR.5.10>XXXXXXX</OBR.5.10>
<OBR.5.11>X</OBR.5.11>
<OBR.5.12>X</OBR.5.12>
<OBR.5.13>XXXXX</OBR.5.13>
<OBR.5.15>XXXXXXX</OBR.5.15>
</OBR.5>
It seems like I should be able to do something like the following:
obr.getObr5_Priority().getExtraComponents().getComponent(2).setData(...)
But I am having issues trying to find the correct way to set the different segments. All the fields are Strings.
I found something that seems to have ended up working for us.
ID expirationDate = new ID(obr.getMessage(), 502);
expirationDate.setValue(format2.format(date));
obr.getObr5_Priority().getExtraComponents().getComponent(0).setData(expirationDate);
Where 502 refers to which element you want to set; in this case I am trying to set OBR-5.2. getComponent(0) is used because it's the first extra component I am adding for this particular field. I am not entirely sure my explanation here is correct, but it creates the message we need and parses as I'd expect, so it's my best guess.
Derived the answer from this old email thread: https://sourceforge.net/p/hl7api/mailman/hl7api-devel/thread/0C32A03544668145A925DD2C339F2BED017924D8%40FFX-INF-EX-V1.cgifederal.com/#msg19632481
I am writing out json structured log messages to stdout with exactly one time field, called origin_timestamp.
I collect the log messages using Fluent Bit with the tail input plugin, which uses the parser docker. The parser is configured with the Time_Key time.
The documentation about Time_Key says:
If the log entry provides a field with a timestamp, this option
specify the name of that field.
Since time != origin_timestamp, I would have thought no time fields would be added by Fluent Bit; however, the final log messages ending up in Elasticsearch have the following time fields:
origin_timestamp (also nested inside the log field that contains the original log message)
time
#timestamp (sometimes even multiple times)
The #timestamp field is probably added by the es output plugin I am using in Fluent Bit, but where the heck is the time field coming from?
I came across the following issue in the Fluent Bit issue tracker, Duplicate #timestamp fields in elasticsearch output, which sounds like it might be related to the issue in question.
I've deep linked to a particular comment from one of the contributors, which outlines two possible solutions depending on whether you are using their Kubernetes Filter plugin, or are ingesting the logs into Elasticsearch directly.
Hope this helps.
The time field is being added by the Docker json-file logging driver, which takes your container's stdout and writes it to a file in the following format by default:
{"log":"Log line is here\n","stream":"stdout","time":"2019-01-01T11:11:11.111111111Z"}
So, you might observe three timestamps in your final log:
Added by you (origin_timestamp)
Added by the Docker driver (time)
Added by the Fluent Bit es output plugin (#timestamp)
Ref - https://docs.docker.com/config/containers/logging/json-file/
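If that Docker-injected key is unwanted downstream, one option is to drop it after parsing; a sketch using Fluent Bit's standard modify filter (the match pattern is illustrative):

```
[FILTER]
    # Remove the "time" key injected by the Docker json-file driver
    Name    modify
    Match   *
    Remove  time
```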
I'm trying to understand an existing Ant script created by someone else, and I notice a lot of copy tasks that enable the filtering attribute. However, when trying to read up on the API or find info on the topic, the most I can find is:
setFiltering(boolean filtering)
Set filtering mode.
http://docs.groovy-lang.org/docs/ant/api/org/apache/tools/ant/taskdefs/Copy.html
setFiltering
public void setFiltering(boolean filtering)
Set filtering mode. Parameters: filtering - if true enable filtering;
default is false.
http://docs.groovy-lang.org/docs/ant/api/org/apache/tools/ant/taskdefs/Copy.html#setFiltering(boolean)
Without any explanation of what is actually happening, can anyone shed some light?
This page has some more details:
Indicates whether token filtering using the global build-file filters should take place during the copy. Note: Nested <filterset> elements will always be used, even if this attribute is not specified, or its value is false (no, or off).
And the docs for filters can be found here
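To make the behavior concrete, here is a minimal sketch (token and file names are made up): with a global <filter> defined in the build file, a <copy> with filtering="true" replaces @TOKEN@ occurrences in the copied text, while the same copy with filtering off would leave them untouched:

```xml
<project name="demo" default="copy-config">
  <!-- Global build-file filter; tokens are delimited by @ by default -->
  <filter token="VERSION" value="1.2.3"/>

  <target name="copy-config">
    <!-- filtering="true" applies the global filters during the copy,
         so @VERSION@ in config.template becomes 1.2.3 in config.txt -->
    <copy file="config.template" tofile="config.txt" filtering="true"/>
  </target>
</project>
```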
I'd like to read the value of a setting in the ejabberd.yml file, and was wondering if I could still use application:get_env(ejabberd, ) to do so, as I was able to with the ejabberd.cfg file, which was a collection of Erlang terms.
Any clues? I've tried searching, and when I try the application:get_env() call, I get back the value 'undefined' ...
I'm thinking that it must be a simple thing to do, and would appreciate all help!
Thanks,
Ombud.
I am trying to modify the messages.properties file for form input validated by a Command Object that is specified in the controller. The output I get from the standard error message (which I modified slightly to be sure I was hitting that specific one) is:
email cannot be empty test class com.dashboard.RegisterController$DashboardUserRegistrationCommand
but no variant of com.dashboard.RegisterController$DashboardUserRegistrationCommand.null.message works.
I am wondering what the correct specification should be.
Try putting DashboardUserRegistrationCommand outside of (below) RegisterController, but still in the same file. Then com.dashboard.DashboardUserRegistrationCommand.. should work.
i.e. com.dashboard.DashboardUserRegistrationCommand.message.nullable
The typical layout of error messages is:
${packageName}.${className}.${propertyName}.${errorCode}
So for your example it would be:
com.dashboard.DashboardUserRegistrationCommand.message.nullable
In the Grails Reference on the right hand side there is a header titled 'Constraints'. On each page of the specific constraints listed under that header the ${errorCode} value is listed at the bottom of the page.
And sometimes you have to restart a run-app to get new messages to populate in a Grails project.
Just to help others in the future: I had the same issue, and my problem was the way I was defining my key. I now use the following.
For default messages:
default.null.message=Write a value for {0}
For command error messages:
my.package.UserCommand.name.nullable=Please tell us your name
It is strange that sometimes you use nullable and sometimes you use null. The best thing is to go directly to the Grails constraints reference and check how it is done, for example:
http://grails.org/doc/latest/ref/Constraints/nullable.html
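Putting the two conventions from this thread side by side, a minimal messages.properties might look like this (the class and property names are illustrative):

```properties
# Fallback when no class-specific key matches; {0} is the property name
default.null.message=Write a value for {0}
# Class-specific override: <package>.<ClassName>.<propertyName>.<errorCode>
my.package.UserCommand.name.nullable=Please tell us your name
```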