log4j2: skip logging additional information

Log4j2 wraps the original message in a "message" attribute.
I am using the JSON layout:
{
"timeMillis": 1538154855953,
"thread": "MyThred #19",
"level": "INFO",
"loggerName": "MyLogger",
"message": "My log message",
"endOfBatch": false,
"loggerFqcn": "org.ops4j.pax.logging.slf4j.Slf4jLogger",
"threadId": 63,
"threadPriority": 5
}
I want to avoid the additional fields and just have a message like the one below:
{
"message": "My log message"
}
I just want to print the data the way a print statement does.
I don't need additional info like loggerName, thread, etc.

I think the "additional info" is the entire purpose of the JSONLayout. If you don't want to use the format that this layout provides then I can think of a few options:
Configure a different layout such that it produces JSON output. For example, you could use a PatternLayout like this: <PatternLayout pattern='{"message":"%m"}%n' /> (note the single quotes around the attribute value, since the pattern itself contains double quotes), which produces output like this: {"message":"log message"}
The disadvantage of this approach is that it only works well for very simple scenarios. If you wanted to log a more complex data structure than a simple string message, it wouldn't work well.
Serialize your messages as JSON strings before you pass them to your logger. This wouldn't require any special layout - you could use a simple pattern like pattern="%m%n" since your message would already be in JSON format. This would require you to serialize your messages every time before passing them to the logger.
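A minimal sketch of that second option, assuming Jackson as the serializer (any JSON library would do) and a layout of pattern="%m%n":
import java.util.Map;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

class JsonLogExample {
    private static final Logger LOG = LogManager.getLogger(JsonLogExample.class);
    private static final ObjectMapper MAPPER = new ObjectMapper();

    static void logAsJson(Object payload) {
        try {
            // with pattern="%m%n" the serialized string is printed as-is
            LOG.info(MAPPER.writeValueAsString(payload));
        } catch (JsonProcessingException e) {
            LOG.error("could not serialize log payload", e);
        }
    }
}
Calling logAsJson(Map.of("message", "My log message")) (Java 9+) would then print {"message":"My log message"} on its own line.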
Create a custom class that implements the log4j2 Message interface and is responsible for generating a JSON string based on the input you provide to its constructor. Then when you log you simply create an instance of your class and pass it the necessary input. With this approach you can incorporate the serialization into the message class itself rather than having to pre-serialize the data, and it's probably less work than creating a custom layout.
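Here is a rough sketch of that third option. JsonMessage and its constructor argument are illustrative names, and Jackson is again an assumption; used with a simple pattern="%m%n" layout, logging an instance emits only the JSON string:
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.logging.log4j.message.Message;

public class JsonMessage implements Message {
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private final String json;

    public JsonMessage(Map<String, Object> data) {
        String s;
        try {
            // serialize once, at construction time
            s = MAPPER.writeValueAsString(data);
        } catch (Exception e) {
            s = "{\"message\":\"serialization failed\"}";
        }
        this.json = s;
    }

    @Override public String getFormattedMessage() { return json; }
    @Override public String getFormat() { return json; }
    @Override public Object[] getParameters() { return null; }
    @Override public Throwable getThrowable() { return null; }
}
Usage would look like: logger.info(new JsonMessage(Map.of("message", "My log message")));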
I hope this points you in the right direction. Without more detailed requirements it's hard to provide an exact solution.

Related

Accept/Content-Type header based processing in Quart and Quart-Schema

Because I am rewriting a legacy app, I cannot change what the clients either send or accept. I have to accept and return JSON, HTML, and an in-house XML-like serialization.
They do, fortunately, set headers that describe what they are sending and what they accept.
So right now I have a decoder module and an encoder module whose methods are basically if/elif/else chains. When a route is ready to process or return something, I call the decoder/encoder module with the Python object and the header field, which returns the formatted object as a string, and the route processes the result or returns a Response().
I am wondering if there is a more Quart-native way of doing this.
I'm also trying to figure out how to make this work with Quart-Schema. I see from the docs that one can do app.json_encoder = <class>, and I suppose I could sub in a different processor there, but it seems to be application-global; there's no way to set it based on what the client sends. Optimally, it would be great if I could just pass the results of a dynamically chosen parser to Quart-Schema and let it do its thing on Python objects.
Thoughts and suggestions welcome. Thanks!
You can write your own decorator like quart-schema's @validate_headers(). Inside the decorator, check the Content-Type header, parse the body accordingly, and pass the parsed object to func(...).
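A minimal sketch of such a decorator; parse_xmlish is a stand-in for your in-house XML-like decoder, and the branching is only illustrative:
import functools
import json

from quart import request

def parse_by_content_type(func):
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        raw = await request.get_data()
        ctype = request.headers.get("Content-Type", "")
        if ctype.startswith("application/json"):
            parsed = json.loads(raw)
        elif ctype.startswith("text/html"):
            parsed = raw.decode()
        else:
            parsed = parse_xmlish(raw)  # hypothetical in-house decoder
        return await func(parsed, *args, **kwargs)
    return wrapper
You would stack it below the route decorator and have the handler accept the parsed object as its first argument.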

Use Annotation tool configuration / Automatic annotation service from brat

I'd like to use a personal API for named entity recognition (NER) and use brat for visualisation. It seems brat offers an Automatic annotation tool, but documentation about its configuration is sparse.
Are there any working examples of this feature?
Could someone explain what the format of the API response should be?
I finally managed to understand how it works, thanks to this topic on the brat-users Google Group mailing list:
https://groups.google.com/g/brat-users/c/shX1T2hqzgI
The text is sent to the Automatic Annotator API as a byte string in the body of a POST request, and the format BRAT requires in response from this API is a dictionary of dictionaries, namely:
{
"T1": {
"type": "WhatEverYouWantString", # must be defined in the annotation.conf file
"offsets": [(0, 2), (10, 12)], # list of tuples of integers giving the start and end positions of each annotated span in the text
"texts": ["to", "go"]
},
"T2": {
"type": "SomeString",
"offsets": [(start1, stop1), (start2, stop2), ...],
"texts": ["string[start1:stop1]", "string[start2:stop2]", ...]
},
"T3": ...
}
Then you serialize this dictionary to JSON and send it back to BRAT.
Notes:
"T1", "T2", ... are mandatory keys (and correspond to the term index in the .ann file that BRAT generates during manual annotation)
the keys "type", "offsets" and "texts" are mandatory; otherwise you get errors in the BRAT log (you can consult these logs as explained in the Google Group thread linked above)
the formats of the values are strict ("type" takes a string, "offsets" a list of tuples (or lists) of integers, "texts" a list of strings); otherwise you get BRAT errors
I suppose the strings in "texts" must correspond to the "offsets", otherwise there should be an error, or at least a problem with the display of tags (this is already the case if you generate the .ann files from an automatic detection algorithm and the start and stop offsets differ from the associated text)
I hope this helps. I managed to build the API using Flask this morning, but I needed to construct a flask.Response object to get the correct output format. Also, I could not read the incoming data from BRAT in the Flask API until I used request.get_data().
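For reference, a minimal sketch of such a Flask service based on the description above; run_my_ner is a stand-in for your own NER code and must return the dictionary-of-dictionaries format shown earlier:
import json
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/annotator", methods=["POST"])
def annotate():
    # BRAT sends the document text as a byte string in the POST body
    text = request.get_data().decode("utf-8")
    annotations = run_my_ner(text)  # hypothetical: returns {"T1": {...}, "T2": {...}, ...}
    return Response(json.dumps(annotations), mimetype="application/json")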
Also, I have to mention that I was not able to use the examples given in the BRAT GitHub:
https://github.com/nlplab/brat/blob/master/tools/tokenservice.py
https://github.com/nlplab/brat/blob/master/tools/randomtaggerservice.py
I mean I could not get them working, but I'm not familiar with the API and HTTP packages in Python. At least I figured out the correct format for the API response.
Finally, I have no idea how to format relations among entities (i.e. BRAT arrows) from the API, though
https://github.com/nlplab/brat/blob/master/tools/restoataggerservice.py
seems to work with such things.
The Google Group discussion
https://groups.google.com/g/brat-users/c/lzmd2Nyyezw/m/CMe9FenZAAAJ
seems to mention that it is not possible to send relations between entities back from the Automatic Annotation API and have them work with BRAT.
I may try it later :-)

Azure Logic Apps Problem with Liquid Transformation of SOAP XML

I'm using Azure Logic Apps to integrate with a legacy SOAP API. I would like to translate the XML (particularly the responses) into something easier to use, such as JSON.
Normally I use a Custom Connector within Logic Apps to connect to new APIs. I tried to create a Custom Connector for this SOAP API, but the WSDL contains recursive references, which apparently aren't allowed. I was able to create a managed API with our APIM container, but still could not produce anything that would let me create the custom connector. So I moved on to dealing with the transactions individually. A Liquid transformation map from XML to JSON seems ideal, but so far I haven't gotten it to work, mainly because I can't figure out the naming convention to access certain XML elements (those that happen to have the same name as their parent). For now I am using the json(xml()) function as a workaround, but it seems less ideal than a Liquid map.
As you can see below, the AgreementId is easily accessible via the normal naming conventions, but I can't seem to access any of the child elements of the 2nd RequestReportResponse node.
This is the XML I'm trying to transform:
<SOAP-ENV:Envelope>
<SOAP-ENV:Header/>
<SOAP-ENV:Body>
<RequestReportResponse>
<MessageHeader>
<AgreementId>urn:agreementId:</AgreementId>
</MessageHeader>
<RequestReportResponse>
<Extension>csv</Extension>
<FileByteArray>xyzFileBytes</FileByteArray>
<FileName>xyzFileName</FileName>
<StatusCode>200</StatusCode>
<StatusDescription>SUCCESS</StatusDescription>
</RequestReportResponse>
</RequestReportResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Here is the Liquid map I'm using:
{
"AgreementId": " {{content.Envelope.Body.RequestReportResponse.MessageHeader.AgreementId}}",
"FileByteArray": "{{content.Envelope.Body.RequestReportResponse.RequestReportResponse.FileByteArray}}",
"FileName": "{{content.Envelope.Body.RequestReportResponse.RequestReportResponse.FileName}}",
"StatusCode": "{{content.Envelope.Body.RequestReportResponse.RequestReportResponse.StatusCode}}",
"StatusDescription": "{{content.Envelope.Body.RequestReportResponse.RequestReportResponse.StatusDescription}}"
}
Expected result:
{
"AgreementId": "urn:agreementId:",
"FileByteArray": "xyzFileBytes",
"FileName": "xyzFileName",
"StatusCode": "200",
"StatusDescription": "SUCCESS"
}
Actual result:
{
"AgreementId": "urn:agreementId:",
"FileByteArray": "",
"FileName": "",
"StatusCode": "",
"StatusDescription": ""
}
It seems Liquid doesn't have good support for nested elements with the same tag name. We can use XSLT instead to operate on the XML and then transform it to the JSON we want, but it would be better to improve the format of the XML source to avoid nesting the same tag name.
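A rough sketch of what such an XSLT could look like for the XML above; matching on local-name() sidesteps the SOAP namespace prefixes, and this is untested against Logic Apps itself:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <!-- the inner RequestReportResponse is the one nested inside the outer one -->
    <xsl:variable name="inner" select="//*[local-name()='RequestReportResponse']/*[local-name()='RequestReportResponse']"/>
{
  "AgreementId": "<xsl:value-of select="//*[local-name()='AgreementId']"/>",
  "FileByteArray": "<xsl:value-of select="$inner/*[local-name()='FileByteArray']"/>",
  "FileName": "<xsl:value-of select="$inner/*[local-name()='FileName']"/>",
  "StatusCode": "<xsl:value-of select="$inner/*[local-name()='StatusCode']"/>",
  "StatusDescription": "<xsl:value-of select="$inner/*[local-name()='StatusDescription']"/>"
}
  </xsl:template>
</xsl:stylesheet>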

KSQL statement for extracting a comma-delimited message into its own fields

I have a JSON object that has a few fields.
{
"#timestamp": "2019-01-14T14:34:47.617Z",
"message": "20190114T063447-0800,dm-2,SSD2T-backarea,1.99,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00",
"node": "hostnameA",
}
What would be the proper way to dissect the message field into its own fields, while keeping node and #timestamp?
I don't think this is possible. You'd be looking for some kind of SPLIT function, which doesn't currently exist. I've logged this as a suggested enhancement here.
Where is your data coming from? Do you have the option of getting the message field as structured JSON instead? Or the entire payload as comma-delimited?

Having a POJO-like feature in Karate API?

I have been using Karate and RestAssured for some time. There are advantages and downsides to both tools, of course. Right now I have a RestAssured project where I have Request and Response objects and POJOs. My request classes wrap my endpoints and send my POJOs to those endpoints. I do all my headers and similar configuration in an abstract layer. In case I need to override them, I override them during the test. If not, it's two lines of code for me to trigger an endpoint.
My way of working with the happy path and negative path of an endpoint is that I initialize the POJO before every test with new values in the constructor. Then I override the value that I want in test scope. For example, if I want to test a negative case for the password field, I set this field to an empty string during the test. But the other fields are already set to some random values before the test.
But I don't know how to achieve this with Karate.
Karate allows me to create a JSON representation of my request body and define my parameters as seen below example.
{
"firstName": "<name>",
"lastName": "<lastName>",
"email": "<email>",
"role": <role>
}
Then in every test I have to fill all the fields with some data.
|token |value|
|name |'canberk'|
|lastName |''|
|email |'canberk@blbabla.com'|
|role |'1'|
and
|token |value|
|name |''|
|lastName |'akduygu'|
|email |'canberk@blbabla.com'|
|role |'1'|
It goes on like this.
It's OK with a 4-field JSON body, but when the body has more than 20 fields, it becomes a pain to initialize every field for every test.
Does Karate have a way of solving this problem with predefined steps, or do I need to come up with a custom solution?
There are advantages and downsides to both tools, of course.
I'm definitely biased, but IMHO the only disadvantage of Karate compared to REST-assured is that you don't get compile-time safety :) I hope that you have seen this comparison.
Karate has multiple ways to do what you want. Here's what I would do.
create a JSON file that has all your "happy path" values set
use the read() syntax to load the file (which means it is re-usable across multiple tests)
use the set keyword to update only the field for your scenario or negative test, as sketched below
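A minimal sketch of those three steps; user.json, the URL, and the expected status are illustrative:
* def payload = read('user.json')
# override only the field under test, e.g. a negative case for lastName
* set payload.lastName = ''
Given url 'https://example.com/users'
And request payload
When method post
Then status 400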
You can get even fancier if you use embedded expressions.
create a JSON file that has all your "happy path" values set, where the values you want to vary look like foo: '##(foo)'
before using read(), init some variables, e.g. * def foo = 'bar'; and if you use null, that JSON key will even be removed from the JSON
read() the JSON. It is ready for use!
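For example, with a user.json like this (file name and fields illustrative):
{
"firstName": "##(firstName)",
"lastName": "##(lastName)",
"email": "##(email)",
"role": "##(role)"
}
the scenario only has to define the variables before reading:
* def firstName = 'canberk'
* def lastName = null
* def email = 'canberk@blbabla.com'
* def role = 1
# lastName is null, so that key disappears from the payload entirely
* def payload = read('user.json')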
You can refer to this file that demonstrates some of these concepts for XML, and you may get more ideas: xml.feature
