Hi all, here is the relevant part of the log:
** Reason for termination == **
{normal,
{gen_server,call,
[<0.9723.458>,
{create_jtxn_mon,
{player,34125,0,"gulexi",
Why does it produce an error report when the reason is normal?
Thanks for your help!
It seems like you made a call to a gen_server that exited with reason normal before it sent a response to the caller.
In general, if a gen_server exits with reason ServerExitReason during a call, gen_server:call will exit with the exit reason {ServerExitReason, {gen_server, call, [...]}}, even if ServerExitReason is normal. (See the source)
That is, the exit reason is not normal but {normal, ...}, and that's why you get a log message.
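For illustration, here is a minimal sketch (the module and function names are made up) of catching that wrapped exit at the call site instead of letting it propagate and crash the caller:

-module(call_wrap_example).
-export([safe_call/2]).

%% Hypothetical wrapper: if the server exits with reason normal while the
%% call is in flight (for instance it returned {stop, normal, State} without
%% replying), gen_server:call/2 makes the caller exit with
%% {normal, {gen_server, call, [...]}}. Catching that exit keeps it from
%% turning into an error report in the caller.
safe_call(Pid, Request) ->
    try
        gen_server:call(Pid, Request)
    catch
        exit:{normal, {gen_server, call, _}} ->
            {error, server_stopped}
    end.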
I've hit an assertion in code and am wondering if there's a way to create a wrapper around the assert that would let me break out and continue execution, or some other way to suppress the assert through the lldb debugger.
assert(condition(), makeCriticalEvent().event.description, file: fileID, line: UInt(line))
This is the standard assertion in Apple's libraries. When I hit this assertion I tried continuing execution, but it stays stuck at the assertion. I'd like to silence the assertion (likely through the lldb debugger by typing some command). Anyone have an idea how to do this?
You have to do two things. First, you have to tell lldb to suppress the SIGABRT signal that the assert delivers. Do this by running:
(lldb) process handle SIGABRT -p 0
in lldb. Normally SIGABRT is not maskable, so I was a little surprised this worked. Maybe it's because this is a SIGABRT the process sends itself? I don't think there's any guarantee that suppressing SIGABRTs has to work in the debugger, so YMMV, but it seems to currently. Anyway, do this once you've hit the assert.
Then you need to forcibly unwind the assert part of the stack. You can do that using thread return, which returns to the frame above the currently selected one without executing the code in that frame or any of the frames below it. So select the frame that caused the assert, go down one frame on the stack, and do thread return.
Now when you continue you won't hit the abort and you'll be back in your code.
I have been seeing the error message below for quite some time now but could not figure out what leads to the failure.
Error:
concurrent.futures._base.CancelledError: ('sort_index-f23b0553686b95f2d91d4a3fda85f229', 7)
On restarting the Dask cluster, it runs successfully.
If you are running a dask-cloudprovider ECSCluster or FargateCluster, the concurrent.futures._base.CancelledError can result from a long-running step in the computation during which there is no output (logging or otherwise) to the Client. In these cases, due to the lack of interaction with the client, the scheduler regards itself as "idle" and times out after the configured cloudprovider.ecs.scheduler_timeout period, which defaults to 5 minutes. The CancelledError message is misleading, but if you look in the logs for the scheduler task itself, it will record the idle timeout.
The solution is to set scheduler_timeout to a higher value, either via config or by passing directly to the ECSCluster/FargateCluster constructor.
There is a gen_server that keeps some sensitive information in its state (a password and so on).
Lager is enabled,
so in case of a crash the gen_server's state is dumped to the crash log like:
yyyy-mm-dd hh:mm:ss =ERROR REPORT====
** Generic server XXX terminating
** Last message in was ...
** When Server state == {state, ...}
** Reason for termination ==
As a result, sensitive information is written to the log file.
Is there any way to prevent the state of the gen_server from being written into the log files/crash dumps?
You could implement the optional format_status callback function. That means that whenever the gen_server crashes, you get the chance to format the state data to your liking before it gets logged, for example by removing sensitive information.
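A minimal sketch of what that could look like inside your gen_server module, assuming the state is a #state{} record with a password field (both are assumptions about your code):

format_status(terminate, [_PDict, State]) ->
    %% the term returned here replaces the real state in the crash report
    State#state{password = "<redacted>"};
format_status(normal, [_PDict, State]) ->
    %% this shape is what sys:get_status/1 displays
    [{data, [{"State", State#state{password = "<redacted>"}}]}].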
You can add this into your app.config:
{lager, [{error_logger_redirect, false}]}
to prevent lager from redirecting error logs. You should also try to catch the error that causes the gen_server to crash and handle it in some graceful way. Or you can keep passwords salted and just let it crash.
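If you go the salted route, here is a small sketch (the function name is made up; crypto ships with OTP) of keeping only a salted hash in the state, so a dumped state never contains the plain-text password:

hash_password(Password) when is_binary(Password) ->
    Salt = crypto:strong_rand_bytes(16),
    {Salt, crypto:hash(sha256, <<Salt/binary, Password/binary>>)}.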
Can you help me to understand why it is recommended to use:
while ((s = sem_timedwait(&sem, &ts)) == -1 && errno == EINTR)
    continue; // Restart when interrupted by handler
(EINTR: The call was interrupted by a signal handler)
Instead of simply:
s = sem_timedwait(&sem, &ts);
In which cases do I have to handle EINTR?
The loop will cause the system call to be restarted if a signal is caught during the execution of the system call, so it will not go on to the next statement unless the system call has either succeeded or failed (with some other error). Without the loop, the thread would continue execution with the next statement as soon as the system call is interrupted by a signal handler.
For example, if you want to be able to abort this sem_timedwait() by sending a particular signal to the thread, then you would not want to unconditionally restart the system call. Instead you may want to mark that the operation was aborted and clean up. If you have multiple signal handlers, the signal handler can set a flag which can be checked when EINTR is encountered in order to determine whether to restart the system call.
This only matters if the thread catches any signals using a signal handler, and the sigaction() SA_RESTART flag was not used to automatically restart any interrupted system call. However, if you are not using any signal handlers and did not intend for your code to be affected by a signal handler, it is still good practice to use the loop so that it will continue to work as you intended even if your code is later used in the same program as other code which uses signal handlers for unrelated purposes.
I'm currently writing an error_logger handler and would like to get the stacktrace where the error happened (more precisely: at the place where error_logger:error* was called). But I cannot use the erlang:get_stacktrace() function, since I'm in a different process.
Does anyone know a way to get a stacktrace here?
Thanks
get_stacktrace() returns "stack back-trace of the last exception". Throw and catch an exception inside error_logger:error() and then you can get the stacktrace.
error() ->
    try throw(a) of
        _ -> a
    catch
        _:_ -> io:format("trace is ~p~n", [erlang:get_stacktrace()])
    end.
I have not fully debugged it, but I suppose that the error functions simply send a message (fire and forget) to the error logger process, so by the time your handler is called, after the message has been received, the sender might be doing something completely different. The message sent might contain the backtrace, but I highly doubt it.
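Given that, one workaround is to capture the trace on the caller's side and ship it inside the report. A sketch under those assumptions (the wrapper name is made up; erlang:get_stacktrace/0 is the pre-OTP-21 API used elsewhere in this thread, and Format is assumed to be a string):

log_error_with_trace(Format, Args) ->
    %% throw and immediately catch so get_stacktrace() reflects this call site
    Stack = try throw(grab_trace)
            catch throw:grab_trace -> erlang:get_stacktrace()
            end,
    error_logger:error_msg(Format ++ "~nStacktrace: ~p~n", Args ++ [Stack]).

On OTP 21 and later you would bind the stacktrace directly in the catch clause (catch throw:grab_trace:Stack -> Stack) instead of calling erlang:get_stacktrace/0.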