Does Log4j2 AsyncLogger follow insertion order?

Suppose I have a single thread continuously writing logs through a Log4j2 AsyncLogger. Will the log lines in the file follow the order of the calls? And how many threads does it use to consume the log events?

The message-order bug was fixed in version 2.10.0, so from that version on the messages should appear in insertion order.
According to this answer, there is only one background thread that writes to the file.
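For reference, here is a minimal Java sketch of enabling all-asynchronous loggers. The context-selector property and class name come from the Log4j2 documentation; the demo class itself is made up, and the LMAX Disruptor must be on the classpath for async loggers to work.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class AsyncLogDemo {
    public static void main(String[] args) {
        // Must be set before Log4j initializes (i.e. before the first getLogger call).
        System.setProperty("log4j2.contextSelector",
                "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
        Logger logger = LogManager.getLogger(AsyncLogDemo.class);
        for (int i = 0; i < 10; i++) {
            // With a single producer thread on 2.10.0+, these lines reach the
            // file in call order; one background thread drains the ring buffer.
            logger.info("event {}", i);
        }
        LogManager.shutdown(); // flush the async buffer before the JVM exits
    }
}

In practice the property is more commonly passed on the command line (-Dlog4j2.contextSelector=...), which guarantees it is set before Log4j initializes.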

Related

Esper 8.2 statement stops matching events

In my application I have about 100 continuous Esper filter queries with events being sent in. At some point, for an unknown reason, some of the statements stop matching events and never match any further event, without ever throwing an exception (nothing is logged in the default log4j logging setup). This is not reproducible in a small example, and I realize it's difficult to pinpoint a problem like this, but I'm writing in the hope that this is a known and/or fixed issue.
I would suggest reviewing your application code to make sure it is still sending events, and the listener/subscriber code to make sure it is still processing output events, i.e. that exception handling and logging are in place (a defensive sketch follows below). Or perhaps an OOM occurs and logging doesn't happen, so you may want to check heap memory use. Also look at the console output to see if the JVM has encountered an issue.
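A minimal sketch of such a defensive listener, assuming Esper 8's four-argument UpdateListener signature; the class name and the error handling are illustrative, not from the original question:

import com.espertech.esper.common.client.EventBean;
import com.espertech.esper.runtime.client.EPRuntime;
import com.espertech.esper.runtime.client.EPStatement;
import com.espertech.esper.runtime.client.UpdateListener;

public class SafeListener implements UpdateListener {
    @Override
    public void update(EventBean[] newEvents, EventBean[] oldEvents,
                       EPStatement statement, EPRuntime runtime) {
        try {
            // ... process the output events here ...
        } catch (Throwable t) {
            // Log rather than swallow, so a failing listener becomes visible.
            System.err.println("Listener failed for statement "
                    + statement.getName() + ": " + t);
        }
    }
}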

How can I save additional messages that would normally be excluded by the log level in case of errors?

I have a basic Serilog usage scenario: logging messages from a web application. In production I set the log level to Information.
Now my question: is it possible to write the last ~100 debug/trace messages to the log after an error occurs, so that I have a short history of detailed messages from before the error? This would keep my log clean and still give me enough information to track down errors.
I created such a mechanism years ago for another application and logging framework, but I'm curious whether that's already possible with Serilog.
If not, where in the pipeline would be the place to implement such logic?
This is not something that Serilog has out of the box, but it would be possible to implement by writing a custom sink that wraps all the other sinks, caches the most recent ~100 Debug messages, and forwards them to the wrapped sinks when an Error message occurs (a sketch of the idea follows below).
You might want to look at the code of Serilog.Sinks.Async for inspiration, as it shows a way of wrapping multiple sinks into one.
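Serilog sinks are written in C#, but the buffering pattern itself is language-neutral. Here is a minimal Java sketch of the idea; the Sink interface, the level strings, and the capacity handling are all invented for illustration and do not correspond to Serilog's actual API:

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sink interface, standing in for Serilog's ILogEventSink.
interface Sink {
    void emit(String level, String message);
}

// Wraps another sink, buffering the last N low-level events and
// replaying them when an Error (or worse) arrives.
class BufferingSink implements Sink {
    private final Sink inner;
    private final int capacity;
    private final Deque<String> buffer = new ArrayDeque<>();

    BufferingSink(Sink inner, int capacity) {
        this.inner = inner;
        this.capacity = capacity;
    }

    @Override
    public synchronized void emit(String level, String message) {
        if (level.equals("Debug") || level.equals("Verbose")) {
            // Below the normal threshold: keep only in the bounded buffer.
            if (buffer.size() == capacity) buffer.removeFirst();
            buffer.addLast(level + ": " + message);
        } else {
            if (level.equals("Error") || level.equals("Fatal")) {
                // Replay the recent history before the error itself.
                for (String buffered : buffer) inner.emit("Debug(history)", buffered);
                buffer.clear();
            }
            inner.emit(level, message);
        }
    }
}

A real implementation would also need to decide whether the buffer is global or scoped per request/correlation id, which the sketch ignores.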

References in Erlang receive

I heard that tagging messages with references in Erlang, as shown here (see the part about references), will prevent the process from scanning the whole message queue when using receive. Is that true?
Yes, see OTP-8623 in http://www.erlang.org/doc/apps/erts/notes.html#id65167: a receive that can only match a newly created reference is optimized to skip the messages that were already in the queue when the reference was created, so it runs in constant time regardless of the queue length.

Erlang/OTP framework's error_logger hangs under fairly high load

My application is basically a content-based router which routes MMS events.
The logger I am using is the one that comes with the OTP framework in SASL mode, error_logger.
The issue is:
I am using a client to generate MMS events with default values. This client (written in Java) can send a high load of events from multiple threads.
I am sending 100 events in 10 threads (each thread sending 10 MMS events) to my router written in Erlang/OTP.
The problem is that when such a high load is received by my router, my logger hangs, i.e. it stops updating my log file. But the router is still able to route the events.
The conclusions that I have come up with are:
A scheduling problem in Erlang when such a high load of events is received (a separate process for each event).
A very unlikely deadlock state.
It might be due to sending events in multiple threads rather than sequentially. But I guess a router will be connected to multiple service-provider boxes, so I thought of sending events in threads.
Can anybody help me demystify the problem?
You already have a good answer, but I'll add to the discussion.
By default, the error_logger uses cached write operations to disk, so one possibility is that you don't notice anything under low load, but under high load your writes get stuck in the cache for a while.
On a side note: there should be no problem having multiple threads make calls into Erlang.
Another way of testing this is to add your own handler to error_logger and see what happens, e.g. one that prints to the shell or does something else that is "fast".
Which version of Erlang are you using? Prior to R14A (R13B4 maybe?), there was a performance penalty when you invoked a selective receive while the message queue contained a lot of messages. This meant that in a process receiving lots of messages (error_logger being the canonical example), if it was barely keeping up with the load, a small spike could push the processing cost up and keep it there, since the new cost was higher than the process could bear. This problem was solved in R14A.
Secondly, why are you sending a high volume of events/calls/logs to a text logger? Formatting strings for a human-readable log file is a lot more expensive than using a binary disk_log, for instance. Reducing the cost of logging will help, but reducing the volume of logs will help even more. Maybe investigate exactly why you need to log these things and see whether you can record them in another (less expensive) way.
Problems with error_logger are often symptoms of some other overload problem. Try looking at the message queue sizes for all your processes when the problem occurs and see whether something else is backed up too. The following Erlang shell code might help:
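% For each live process, pair its pid with its message queue length: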
[ { P, element(2, process_info(P, message_queue_len)) }
|| P <- erlang:processes(), is_process_alive(P) ].

How do you do exception management with delayed job?

My application needs to parse a user-generated CSV file. Once uploaded, the application queues it in delayed_job to be processed. My question is: how do you usually handle the exceptions that might happen during the content-parsing stage? Do you store all the error messages in exception objects before displaying them to the user?
Thank you.
As the job is delayed, I would report all the errors in the CSV file at once, so that users do not end up iterating multiple times (fixing one error at a time).
One thing you can do is store all the errors in the database (in a suitable object). This would also enable you to analyze what kinds of errors users generally make and help them reduce those.
