How to log to a file from the Erlang shell?

Keep forgetting that the Logging chapter of the Kernel User's Guide already has an answer.

Paraphrasing section 2.9, "Example: Add a handler to log info events to file", of the Logging chapter in the Kernel User's Guide:
1. Set log level (default: notice)
Globally: logger:set_primary_config/2
For certain modules only: logger:set_module_level/2
Accepted log levels (from least severe to most):
debug, info, notice, warning, error, critical, alert, emergency
Note:
The default log level in the Erlang shell is notice, so if you leave it as is but set a lower level (such as debug or info) when adding a log handler in the next step, logs at those lower levels will never get through.
Example:
logger:set_primary_config(level, debug).
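Similarly, to raise the level for specific modules only (my_mod is a placeholder module name):
logger:set_module_level(my_mod, debug).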
2. Configure and add log handler
Specify the handler configuration map, for example:
Config = #{config => #{file => "./sample.log"}, level => debug}.
And add the handler:
logger:add_handler(to_file_handler, logger_std_h, Config).
logger_std_h is the standard handler for Logger.
3. Filter out logs below a certain level in the Erlang shell
Following the examples above, logs of every level will now be printed on the shell. To restore the notice default there, while still saving all levels to the file, use logger:set_handler_config/3 on the default handler:
logger:set_handler_config(default, level, notice).
Work in progress: log each process's events into its own logfile
This section documents my (partially successful) attempts; I will revisit and expand it when time permits. My use case was that the FreeSWITCH phone server would spawn an Erlang process to handle each call, so logging each of them into its own file made sense at the time.
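A minimal sketch of one idea (untested; the function, handler, and filter names are mine): add one logger_std_h handler per process, with a filter that stops every event not originating from that process.

%% Give Pid its own logfile. pid_to_list/1 yields e.g. "<0.88.0>";
%% sanitize that string before using it in real file names.
add_process_handler(Pid) ->
    Name = list_to_atom("h_" ++ pid_to_list(Pid)),
    Filter = {fun (Event = #{meta := #{pid := P}}, _Extra) when P =:= Pid ->
                      Event;
                  (_Event, _Extra) ->
                      stop
              end, []},
    Config = #{config => #{file => "./" ++ pid_to_list(Pid) ++ ".log"},
               level => debug,
               filters => [{only_this_pid, Filter}],
               filter_default => stop},
    logger:add_handler(Name, logger_std_h, Config).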

Related

Is DRBD split-brain handler called only when after-sb-Xpri is set to disconnect?

I wanted to run a script if a split brain is detected. The documentation mentions that we can do that by providing the path of the script, like this:
resource <resource>
  handlers {
    split-brain <handler>;
    ...
  }
But, below that, for the after-sb-0pri configuration, it is mentioned that
disconnect: Do not recover automatically, simply invoke the
split-brain handler script (if configured), drop the connection and
continue in disconnected mode.
So my question is: will the configured script be run only when after-sb-0pri is set to disconnect, or will it run for any value?
Document Link: https://linbit.com/drbd-user-guide/users-guide-drbd-8-4/#s-configure-split-brain-behavior
DRBD should invoke the split-brain handler whenever a split-brain is detected, i.e., anything that would log a Split-Brain detected ... message in the system/kernel logs. The documentation attempts to explain this at the beginning of chapter 5.17.1:
DRBD invokes the split-brain handler, if configured, at any time split
brain is detected.
Additionally, disconnect is the default value for after-sb-0pri, so even if it is not explicitly set, that will still be the behavior.
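Putting the two together, a sketch of such a configuration (the resource name r0 is a placeholder; notify-split-brain.sh is the helper script shipped with DRBD):

resource r0 {
  handlers {
    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
}

The handler fires on detection regardless of the after-sb-* policies; those settings only decide how, or whether, DRBD recovers afterwards.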

Write to the system's standard error in Progress

I am writing a small program in Progress that needs to write an error message to the system's standard error. What ways, ideally simple ones, can I use to print to standard error?
I am using OpenEdge 11.3.
When on Windows (10.2B+) you can use .NET:
System.Console:Error:WriteLine ("This is an error message") .
together with
prowin32 2> stderr.out
Progress doesn't provide a way to write to stderr - the easiest way I can think of is to output-through an external program that takes stdin and echoes it to stderr.
You could look into LOG-MANAGER:WRITE-MESSAGE. It won't log to standard output or standard error, but to a client-specific log. This log should be monitored in any case (specifically if the client is an application server).
From the documentation:
For an interactive or batch client, the WRITE-MESSAGE( ) method writes the log entries to the log file specified by the LOGFILE-NAME attribute or the Client Logging (-clientlog) startup parameter. For WebSpeed agents and AppServer servers, the WRITE-MESSAGE() method writes the log entries to the server log file. For DataServers, the WRITE-MESSAGE() method writes the log entries to the log file specified by the DataServer Logging (-dslog) startup parameter.
LOG-MANAGER:WRITE-MESSAGE("Got here, x=" + STRING(x), "DEBUG1").
Will write this in the log:
[04/12/05#13:19:19.742-0500] P-003616 T-001984 1 4GL DEBUG1 Got here, x=5
There are quite a lot of options regarding the LOG-MANAGER system, what messages to display, where the file is placed, etc.
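For example, a batch client could be started with client logging enabled like this (a sketch; the procedure name, log file name, and logging level are placeholders):

_progres -b -p myapp.p -clientlog client.log -logginglevel 2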
There is no easy way, but in Unixen you can always do something like this using OUTPUT THROUGH (untested):
output through "cat >&2" no-echo unbuffered.
Alternatively -- and this is tested -- if you just want error messages from a batch-mode program to go to standard out then
output through "tee" ...
...definitely works.
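A self-contained sketch of the OUTPUT THROUGH approach above (untested, Unix only; assumes cat is on the PATH):

/* Route subsequent PUT output to standard error via the shell. */
OUTPUT THROUGH "cat >&2" NO-ECHO UNBUFFERED.
PUT UNFORMATTED "Something went wrong" SKIP.
OUTPUT CLOSE.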

Collecting log4j logs using Apache Flume

I've run into a problem. I use log4j and Apache Flume to collect logs: the application logs over the network through Flume's Log4jAppender, configured like this:
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=192.168.152.49
log4j.appender.flume.Port=44446
log4j.appender.flume.layout=org.apache.log4j.PatternLayout
while the Flume source is configured like this:
a1.sources.r1.type=avro
a1.sources.r1.bind=192.168.152.49
a1.sources.r1.port=44446
It works! But when Flume is shut down, the application that uses log4j can no longer log at all. Can anybody tell me how to fix this problem?
It depends on how you want to handle Flume being down. With the regular Log4jAppender, you can enable unsafe mode, which will log the error to log4j's internal LogLog but otherwise fail silently. To do that, set log4j.appender.flume.UnsafeMode = true. You can see an example here:
https://github.com/kite-sdk/kite-examples/blob/master/logging/src/main/resources/log4j.properties#L20
With unsafe enabled, any events you log while Flume is down will be lost.
If you want to be able to point to multiple Flume agents and have it balance the load between them as well as fail over if one of them goes down, you can use the LoadBalancingLog4jAppender instead. The docs here should help:
http://flume.apache.org/FlumeUserGuide.html#load-balancing-log4j-appender
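A sketch adapted from that guide (the host:port pairs are placeholders for your own agents):

log4j.appender.flume = org.apache.flume.clients.log4jappender.LoadBalancingLog4jAppender
log4j.appender.flume.Hosts = 192.168.152.49:44446 192.168.152.50:44446
log4j.appender.flume.Selector = ROUND_ROBIN
log4j.appender.flume.UnsafeMode = true

With ROUND_ROBIN (the default) events rotate across the listed agents; RANDOM is the other built-in selector.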

Default process flags

Is there a way to instruct the Erlang VM to apply a set of process flags to every new process that is spawned in the system?
For example in testing environment I would like every process to have save_calls flag set.
One way for doing this is to combine the Erlang tracing functionalities with a .erlang file.
Specifically, you could either use the low-level tracing capabilities provided by erlang:trace/3 or you could simply exploit the dbg:tracer/2 function to create a new tracing process which executes your custom handler function every time a tracing message is received.
To automate things a bit, you could then create an Erlang Start Up File in the directory where you're running your code or in your home directory. The Erlang Start Up File is a special file, called .erlang, which gets executed every time you start the run-time system.
Something like the following should do the job:
% -*- Erlang -*-
erlang:display("This is automatically executed.").
dbg:tracer(process, {fun ({trace, _Pid, spawn, Pid2, {_M, _F, _Args}}, Data) ->
                             process_flag(Pid2, save_calls, Data),
                             Data;
                         (_Trace, Data) ->
                             Data
                     end, 100}).
dbg:p(new, [procs, sos]).
Basically, I'm creating a new tracing process, which will trace processes (first argument). I'm specifying a handler function to get executed and some initial data. In the handler function, I'm setting the save_calls flag for newly spawned processes, while ignoring all other tracing messages. I set the save_calls option to 100, using the initial-data parameter. In the last call, I'm telling dbg that I'm interested only in newly created processes. I'm also setting the sos (set_on_spawn) option to ensure inheritance of the tracing flags.
Finally, note how you need to use a variant of the process_flag function, which takes an extra argument (the Pid of the process you want to set the flag for).
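As a quick sanity check (a sketch), spawn a process while the tracer is running and inspect its saved calls; process_info/2 returns {last_calls, false} when the flag was never set:

Pid = spawn(fun () -> timer:sleep(infinity) end).
erlang:process_info(Pid, last_calls).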

Why is my Ruby script utilizing 90% of my CPU?

I wrote an admin script that tails a Heroku log and, every n seconds, summarizes averages and notifies me if I cross a certain threshold (yes, I know and love New Relic, but I want to do custom stuff).
Here is the entire script.
I have never been a master of IO and threads, and I wonder if I am making a silly mistake. I have a couple of daemon threads that run while(true) loops, which could be the culprit. For example:
# read new lines
f = File.open(file, "r")
f.seek(0, IO::SEEK_END)
while true do
  select([f])
  line = f.gets
  parse_heroku_line(line)
end
I use one daemon to watch for new lines of a log, and the other to periodically summarize.
Does someone see a way to make it less processor-intensive?
This probably runs hot because you never really block while reading from the temporary file. IO::select is a thin layer over POSIX select(2). It looks like you're trying to block until the file is ready for reading, but select(2) considers EOF to be ready ("a file descriptor is also ready on end-of-file"), so you always return right away from select then call gets which returns nil at EOF.
You can get a truer EOF reading and nice blocking behavior by avoiding the thread which writes to the temp file and instead using IO::popen to fork the %x[heroku logs --ps router --tail --app pipewave-cedar] log tailer, connected to a ruby IO object on which you can loop over gets, exiting when gets returns nil (indicating the log tailer finished). gets on the pipe from the tailer will block when there's nothing to read and your script will only run as hot as it takes to do your line parsing and reporting.
EDIT: I'm not set up to actually try your code, but you should be able to replace the log tailer thread and your temp file read loop with this code to get the behavior described above:
IO.popen( %w{ heroku logs --ps router --tail --app my-heroku-app } ) do |logf|
  while line = logf.gets
    parse_heroku_line(line) if line =~ /^/
  end
end
I also notice your reporting thread does not do anything to synchronize access to #total_lines, #total_errors, etc. So, you have some minor race conditions where you can get inconsistent values from the instance vars that parse_heroku_line method updates.
select is about whether a read would block. f is just a plain old file, so when you get to the end, reads don't block; they just return nil instantly. As a result, select returns instantly rather than waiting for something to be appended to the file, as I assume you're expecting. Because of this you're sitting in a tight busy loop, so high CPU is to be expected.
If you are at EOF (you could either check f.eof? or whether gets returns nil), then you could either start sleeping (perhaps with some sort of backoff) or use something like listen to be notified of filesystem changes.
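A minimal sketch of the sleep-with-backoff variant (untested; file and parse_heroku_line come from the question):

f = File.open(file, "r")
f.seek(0, IO::SEEK_END)
delay = 0.1
loop do
  if line = f.gets
    parse_heroku_line(line)
    delay = 0.1                    # got data, reset the backoff
  else
    sleep(delay)                   # at EOF: block instead of spinning
    delay = [delay * 2, 5.0].min   # back off exponentially, capped at 5s
  end
end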
