log4j 2 maximum message size using SocketAppender - log4j2

What is the maximum size of a message supported in SocketAppender of log4j 2?
I am using SocketAppender with the TCP protocol to send log messages. I wonder whether log4j will truncate my message if it exceeds some limit, and if so, how to configure that limit.
Thanks,

The socket appender has no maximum size and will not truncate your message.
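For reference, a minimal log4j2 configuration using a TCP SocketAppender might look like the sketch below; the host, port, and choice of JsonLayout are placeholders for your own setup:

```xml
<Configuration status="warn">
  <Appenders>
    <!-- Sends log events over TCP; no message-size limit is imposed by the appender -->
    <Socket name="socket" host="localhost" port="4560" protocol="TCP">
      <JsonLayout compact="true" eventEol="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="socket"/>
    </Root>
  </Loggers>
</Configuration>
```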

Related

Restricting the number of the same log via fluentD

Use case: Set the maximum number of messages (within a timeframe) to be sent to a target service.
Example.
We collect logs from service X, which produces logs like these:
{"#timestamp":"2020-10-30T13:00:00.310Z","level":"INFO","message":"This is some event"}
{"#timestamp":"2020-10-30T13:00:00.315Z","level":"WARN","message":"This is warn abc123"}
{"#timestamp":"2020-10-30T13:00:00.325Z","level":"WARN","message":"This is warn abc123"}
{"#timestamp":"2020-10-30T13:00:00.327Z","level":"WARN","message":"This is warn abc123"}
{"#timestamp":"2020-10-30T13:00:00.335Z","level":"WARN","message":"This is warn xyz123"}
As you can see, the same warning (abc123) was logged multiple times by the service within 12 ms.
What I want is to forward only one of them.
So fluentD should forward these to the target service:
{"#timestamp":"2020-10-30T13:00:00.310Z","level":"INFO","message":"This is some event"}
{"#timestamp":"2020-10-30T13:00:00.315Z","level":"WARN","message":"This is warn abc123"}
{"#timestamp":"2020-10-30T13:00:00.335Z","level":"WARN","message":"This is warn xyz123"}
Which timestamp to use, or whether to keep a counter, doesn't matter to me.
Is there a filter or plugin for this use case? Something where I can set a regex rule for the messages (to decide whether messages should be considered equal) and a timeframe?
In fluentd you could try the throttle plugin https://github.com/rubrikinc/fluent-plugin-throttle with the message key as the group_key (not sure about performance in this case, though).
In Fluent Bit you can use the built-in SQL stream processor and write a SELECT with WINDOW and GROUP BY clauses: https://docs.fluentbit.io/stream-processing/getting_started/fluent_bit_sql#select-statement.
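A sketch of what the throttle-plugin approach could look like in the fluentd config; the tag pattern and the one-second window are assumptions, so check the plugin's README for the exact parameters:

```
<filter service.x.**>
  @type throttle
  # Group records by the full message field, so identical messages
  # (e.g. "This is warn abc123") fall into the same bucket
  group_key message
  # Allow at most 1 record per group per 1-second window
  group_bucket_period_s 1
  group_bucket_limit 1
</filter>
```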

Errors shown by k6 when reaching a bigger number of virtual users

I'm evaluating k6 for my load testing needs. I've set up a basic load test and I'm currently trying to interpret the error messages and result values I get. Maybe someone can help me interpret what I'm seeing:
If I crank up the VUs to about 300, I start seeing error messages in the console, and at 500, lots of error messages.
These mostly consist of:
dial tcp XXX:443: i/o timeout
read tcp YYY(local ip):35252->XXX(host ip):443: read: connection reset by peer
level=warning msg="Request Failed" error="unexpected EOF"
Get https://REQUEST_URL/: context deadline exceeded"
I also have problems with several checks:
check errors in which res.status === 0 and res.body === null
check errors in which res.status === 0, but the body contains the correct content
How can res.status be 0 but the body still contains the proper values?
I suspect that I'm reaching the connection limit of my load-generating machine, and that's why I get the error messages. So I'd have to set up a cluster or move to the cloud runners!?
The stats generated by k6 show long http_req_blocked values, which I interpret as the time spent waiting to get a connection. This seems to indicate that the connection pool of my test-running machine is at its limit.
http_req_blocked...........: avg=5.66s min=0s med=3.26s max=59.38s p(90)=13.12s p(95)=20.31s
http_req_connecting........: avg=1.85s min=0s med=280.16ms max=24.27s p(90)=4.2s p(95)=9.24s
http_req_duration..........: avg=2.05s min=0s med=496.24ms max=1m0s p(90)=4.7s p(95)=8.39s
http_req_receiving.........: avg=600.94ms min=0s med=82.89µs max=58.8s p(90)=436.95ms p(95)=2.67s
http_req_sending...........: avg=1.42ms min=0s med=35.8µs max=11.76s p(90)=56.22µs p(95)=62.45µs
http_req_tls_handshaking...: avg=3.85s min=0s med=1.78s max=58.49s p(90)=8.93s p(95)=15.81s
http_req_waiting...........: avg=1.45s min=0s med=399.43ms max=1m0s p(90)=3.23s p(95)=5.87s
Can anyone help me out interpret the results I'm seeing?
You are likely running out of CPU on the runner.
As explained in the HTTP-specific metrics section of the documentation, you are right about http_req_blocked: it is (mostly) the time from when we decide to make a request to when we get a socket on which to make it. This is most likely because:
the test runner is running out of CPU and can't handle both making all the other requests and starting new ones
the system under test is running out of CPU and has the same problem
You will need to monitor both (you are strongly advised to do this regardless), as tests at 100% runner CPU are probably not very representative :) and you likely don't want the system you are testing to reach 100% either.
A status of 0 means that we couldn't make the request or read the response for some reason, which is usually explained by the error and error_code fields.
As I commented, if you have status 0 and a body, this is most likely a bug; at least I don't remember a case where that combination is expected.
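For illustration, a k6 check that separates transport failures (status 0) from bad HTTP responses could look like this sketch; the URL is a placeholder, and the script only runs under the k6 runtime:

```javascript
import http from 'k6/http';
import { check } from 'k6';

export default function () {
  const res = http.get('https://example.com/'); // placeholder URL

  check(res, {
    // status 0 means the request itself failed (dial timeout, reset, EOF, ...)
    'request completed': (r) => r.status !== 0,
    'status is 200': (r) => r.status === 200,
  });

  if (res.status === 0) {
    // error and error_code narrow down the transport-level cause
    console.log(`request failed: ${res.error} (error_code=${res.error_code})`);
  }
}
```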
The errors you listed mean (most likely):
dial tcp XXX:443: i/o timeout
literally, we tried to open a TCP connection and it took too long (probably the reason for the large http_req_blocked values)
read tcp YYY(local ip):35252->XXX(host ip):443: read: connection reset by peer
the other side closed the connection, likely because some timeout was reached. For example, if we don't read from it for over 30 seconds, the server may decide that we won't read anymore and close it; when the CPU is at 100%, there is a good chance some connections won't get time to be read from.
level=warning msg="Request Failed" error="unexpected EOF"
literally what it says: the connection was closed when we totally didn't expect it, or more accurately when the Go net/http standard library didn't expect it. Likely again a timeout, just at a point in the life of the request where the other errors aren't returned.
Get https://REQUEST_URL/: context deadline exceeded"
This is because a request took longer than the timeout (60s by default); at some point this will be changed to a better error message.

Is this a wireshark bug when display information about AMQP?

I am using spring-amqp and testing RabbitListener#AcknowledgeMode.
When I set RabbitListener's AcknowledgeMode to AUTO, I triggered the nack response by throwing an exception in my RabbitListener.
When I set defaultRequeueRejected to true (meaning the message will be requeued), the packet captured by Wireshark looks like this:
It looks like the last two bits represent these two properties.
And when I set defaultRequeueRejected to false (meaning the message will not be requeued), the packet captured by Wireshark looks like this:
Requeue should be false. So is this a Wireshark bug? Or am I misunderstanding something?
It looks like a Wireshark bug to me: 0x03 vs. 0x01.
I just looked at the code in the client library; the multiple bit is the LSB and the requeue bit is the next one.
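A minimal sketch of that bit layout (class and method names invented for illustration): with multiple as the LSB and requeue as the next bit, requeue-without-multiple should encode as 0x02, which is neither the 0x03 nor the 0x01 shown in the capture:

```java
public class NackBits {
    static final int MULTIPLE = 1;      // bit 0 (LSB)
    static final int REQUEUE = 1 << 1;  // bit 1

    // Pack the two basic.nack flags into one bit field, LSB-first
    static int encode(boolean multiple, boolean requeue) {
        return (multiple ? MULTIPLE : 0) | (requeue ? REQUEUE : 0);
    }

    public static void main(String[] args) {
        System.out.println(encode(false, true));  // requeue only -> 2 (0x02)
        System.out.println(encode(false, false)); // neither bit  -> 0 (0x00)
    }
}
```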

How to change Jenkins/Jetty max header size

Running a Jenkins server with the embedded Jetty, I get errors about too-large headers in the Jenkins log:
Feb 15, 2017 3:18:15 PM org.eclipse.jetty.util.log.JavaUtilLog warn
WARNING: header full: java.lang.ArrayIndexOutOfBoundsException: 8192
I'd like to increase the Jetty max header size but can't find how to do it in the case of Jenkins. I can't find any Jetty config file, and I don't know whether I can set the limit on the Jenkins command line (or what the name of the variable to define would be).
How to achieve this?
If using the built-in Jetty found in the self-running jenkins.war, you cannot adjust that value.
You can only adjust the maximum number of parameters.
--maxParamCount=N = set the max number of parameters allowed in a form submission to protect
against hash DoS attack (oCERT #2011-003). Default is 10000.
Either deploy the war to a full-blown container where you can adjust the value, or change how you use Jenkins so it doesn't send excessive URIs or HTTP headers (for example, by using POST instead of GET).
To adjust the maximum header buffer size in Jetty 9, you'd set requestHeaderSize on the HttpConfiguration of the ServerConnector you want the new setting to apply to.
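In code, for a standalone/embedded Jetty 9 (not the Jenkins-bundled one), that adjustment might look like this sketch; the port and the 16 KiB size are arbitrary, and the Jetty 9 jars must be on the classpath:

```java
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class BiggerHeaders {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        HttpConfiguration config = new HttpConfiguration();
        config.setRequestHeaderSize(16 * 1024); // raise from the 8 KiB default

        ServerConnector connector =
                new ServerConnector(server, new HttpConnectionFactory(config));
        connector.setPort(8080);
        server.addConnector(connector);

        server.start();
        server.join();
    }
}
```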
Add this parameter to the Jenkins config:
JENKINS_ARGS="--requestHeaderSize=258140"

How to validate a message in pcap?

I have a requirement to expand the required tree in the decoded parameters of a pcap file and validate a message in it.
Example:
Open "Transmission Control Protocol" as shown in the screenshot and validate the message "This is an ACK to the segment in frame: 278".
I need to develop an automation script in Java for validating messages in pcap files. Currently I am using the jnetpcap library.
Appreciate your inputs!
You can't, without doing the protocol analysis yourself. A pcap file doesn't include such messages.
The message "This is an ACK to the segment in frame: 278" was generated by Wireshark through its own TCP session analysis. Even the frame number 278 was assigned by Wireshark. A pcap file only contains the packets' data.
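To give an idea of what "doing the analysis yourself" means, here is a simplified, single-flow sketch (all names invented) of how Wireshark-style ACK matching works: remember each transmitted segment's sequence number plus payload length, then match incoming ACK numbers against that. Real code would key this by the connection 4-tuple and extract seq/ack/length from each packet via jnetpcap:

```java
import java.util.HashMap;
import java.util.Map;

public class AckMatcher {
    // expected ACK number (seq + payload length) -> frame number that sent it
    private final Map<Long, Integer> pending = new HashMap<>();

    // Record an outgoing data segment
    void onSegment(int frame, long seq, int payloadLen) {
        if (payloadLen > 0) {
            pending.put(seq + payloadLen, frame);
        }
    }

    // Returns the frame this ACK acknowledges, or -1 if unknown
    int onAck(long ackNo) {
        Integer frame = pending.remove(ackNo);
        return frame == null ? -1 : frame;
    }

    public static void main(String[] args) {
        AckMatcher m = new AckMatcher();
        m.onSegment(278, 1000L, 512);       // frame 278 sends seq 1000, 512 bytes
        System.out.println(m.onAck(1512L)); // ACK covers it -> 278
        System.out.println(m.onAck(9999L)); // no matching segment -> -1
    }
}
```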