Log only if logger is explicitly turned on (set at a certain level - ignoring root/parents/hierarchy) - log4j2

I've spent quite a bit more time than necessary to figure this out, and came up with zilch! This may not be the most kosher way to do what I need, but in the legacy version I found it easy enough to use in practice. Basically, I'd have a specially named logger, and I want it to be enabled (i.e. logging diagnostics) only if it's explicitly enabled (i.e. specified in the XML configuration file).
The hierarchical way logger configuration works defeats this when the ROOT logger is set at a lower level than my log lines: this particular logger's lines would be logged even when I don't want them to be (yes, I know this is not the expected usage).
In legacy log4j, there were two ways to get the "level": getLevel and getEffectiveLevel. The effective one traversed up the hierarchy, while the regular one returned the value set on my particular logger. Using this, I could check the Level, and if it was null (i.e. not explicitly specified in my configuration), I'd programmatically set the level to OFF and disable it.
There doesn't seem to be a way to access that piece of information through the straightforward part of the API. Looking through the object model in the debugger, I found the following map, which could be checked for whether it contains the name of my logger, but it seems a bit convoluted:
LoggerContext.getContext(false).getConfiguration().getLoggers()
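To illustrate, the convoluted version I have in mind would look roughly like this (the logger name is just a placeholder, and I'm assuming Configurator.setLevel is an acceptable way to force it off programmatically):

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configurator;

public final class DiagnosticLogger {
    private static final String NAME = "my.special.diagnostics"; // placeholder logger name

    public static Logger get() {
        // The logger only counts as "explicitly enabled" if a <Logger name="..."> entry
        // for it exists in the configuration; otherwise it merely inherits from ROOT.
        boolean explicitlyConfigured = LoggerContext.getContext(false)
                .getConfiguration()
                .getLoggers()
                .containsKey(NAME);
        if (!explicitlyConfigured) {
            // Not mentioned in the XML: force the level to OFF so it stops inheriting.
            Configurator.setLevel(NAME, Level.OFF);
        }
        return LogManager.getLogger(NAME);
    }
}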
Is there another/simpler way to achieve this? Or should I just take my square peg, and move away from the round hole?! :)

Related

Why use defaults/preferences/*.js vs setting Firefox Add-on defaults in JavaScript?

The more I think about it, the more I wonder why anyone should bother to use the defaults/preferences/*.js file to set defaults, versus setting the defaults in JavaScript.
I am doing work on an older XUL/Overlay based add-on created by someone else, which actually uses a prefs.js file in addition to setting all the defaults in JavaScript (in case the prefs.js failed for some inexplicable reason?), which essentially turns all the preferences into user prefs directly after installation. This confused me at first, as the defaults were showing up as modified (user-set) when looking at the prefs in about:config. Then I realized it was unconditionally setting some of the defaults (very large strings).
So I realized that not only do I maintain the same prefs in three locations (prefs.js, interface and content scripts), but the prefs.js file is largely redundant, which adds maintenance for no reason. This just seems silly, and I am looking for a better way to store and manage prefs in one location (which is probably why prefs.js should be used exclusively).
Now, I realize this question has the potential to be flagged as "opinion", and may or may not have a specific "right" answer. But I think it is a valid question, and I would like to learn more about the pros and cons of using a prefs.js file versus setting all prefs during initialization in shared JS code. Are there any performance concerns, or an objective list of criteria I could use to make this determination? Is it possible that the prefs.js mechanism would ever fail? Is it safe to assume it never will? Was it more prone to failure back in the FF 1.0-3.5 days?
Default preferences:
Can be changed in a new version of the add-on without overriding the value set explicitly by the user.
Do not clutter up the user's profile after the add-on is uninstalled
Are located in the same place, not scattered throughout the code. It's easy to see what preferences there are. They also appear in about:config, which allows you to have UI for "hidden" prefs - for the sake of power users.
It is a pity they did not make the get*Pref API return the default value in case the user-set value is invalid, and I understand not wanting to maintain the default in two locations. This whole API was "frozen" a long time ago, and will eventually go away with the rest of XPCOM, so it doesn't really matter...

Is there any easy way to add NSLog or any logging statement in all methods?

I am about to complete the project and I want to add logging to it. I know there are some good loggers available (e.g. CocoaLumberjack), but for that I would need to add a log statement to each and every method. As the project is nearly complete, there are lots of methods. So, is there any way or workaround to add logging in one central place and have it work for all methods, without adding a log statement to every one of them?
I am not sure if there is an Objective-C runtime hook that is called before every method.
This would also be helpful going forward: if I or any new developer writes a new method, there is no need to remember to add a log statement.
Edit:
This is just for debugging purposes. And I will add a way to control the logs, e.g. turning them off or printing detail only at a certain level.
Don't do this. Seriously. Users don't appreciate their log files filling up because of chatty software, and you'll also annoy every other developer by obscuring messages that are actually important.
You should only use NSLog() this way as an absolute last resort during debugging. Even then, there are better approaches (e.g. you could use dtrace; if you get it to dump all objc_msgSend() invocations, you'll see almost all of your method calls, aside from those that pass through objc_msgSendStret() and the floating point ones if applicable to your platform).
If you really must make a chatty application, create your own log file, write your own logging function (ideally using asl), and it’s a good idea even then to have a set of flags that can be controlled e.g. from user defaults to enable different kinds of debug output.
How about a category of NSObject with this override?
// Requires #import <objc/runtime.h>. Note that in an NSObject category there is no
// "super" implementation to call (NSObject is a root class), so query the runtime instead.
- (BOOL)respondsToSelector:(SEL)aSelector {
    printf("Excessive Log: %s\n", [NSStringFromSelector(aSelector) UTF8String]);
    return class_respondsToSelector(object_getClass(self), aSelector);
}

Understanding file mapping

I am trying to understand mmap and was given the following link to read:
http://duartes.org/gustavo/blog/post/page-cache-the-affair-between-memory-and-files
I understand the text in general and it makes sense to me. But at the end there is a paragraph which I don't really understand, or which doesn't fit my understanding.
The read-only page table entries shown above do not mean the mapping is read only, they’re merely a kernel trick to share physical memory until the last possible moment. You can see how ‘private’ is a bit of a misnomer until you remember it only applies to updates. A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from. Once copy-on-write is done, changes by others are no longer seen. This behavior is not guaranteed by the kernel, but it’s what you get in x86 and makes sense from an API perspective. By contrast, a shared mapping is simply mapped onto the page cache and that’s it. Updates are visible to other processes and end up in the disk. Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
The following two lines don't make sense to me:
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
It is private. So it can't see changes by others!
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I don't know what the author means by this. Is there a flag "MAP_READ_ONLY"? Until a write occurs, every pointer from the program's virtual pages to the page-table entries in the page cache is read-only.
Can you help me understand these two lines?
Thanks
Update
It seems I got it, with some help.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
Although the mapping is private, the virtual page really can see changes made by others, until the process itself modifies a page. That modification becomes private and is visible only to the virtual page of the writing program.
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I'm told that pages themselves can also have permissions (read/write/execute).
Tell me if I'm wrong.
This fragment:
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
is telling you that the kernel cheats a little bit in the name of optimization. Even though you've asked for a private mapping, the kernel will actually give you a shared one at first. Then, if you write the page, it becomes private.
Observe that this "cheating" doesn't make any difference if all processes accessing the file are doing it with MAP_PRIVATE, because no actual changes to the file will ever occur in that case. Different processes' mappings will simply be upgraded from "fake cheating MAP_PRIVATE" to true MAP_PRIVATE at different times, according to when each process first writes to the file. This is probably a common scenario. It's only if the file is being concurrently updated by other means (MAP_SHARED with PROT_WRITE, or regular non-mmap I/O operations) that it makes a difference.
I'm told that pages themselves can also have permissions (read/write/execute).
Sure, they can. You have to ask for the permissions you want when you initially map the file, in fact: the third argument to mmap, which will be a combination of PROT_READ, PROT_WRITE, PROT_EXEC, and PROT_NONE.
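This thread is about the C mmap interface, but if you want something self-contained to experiment with, Java's FileChannel.map exposes the same three flavours (READ_ONLY, READ_WRITE which behaves like MAP_SHARED, and PRIVATE which is copy-on-write), so a rough sketch of the behaviour described above might look like this (file name made up):

import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappingDemo {
    public static void main(String[] args) throws Exception {
        Path file = Path.of("demo.bin"); // assumes an existing, non-empty file

        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            long size = ch.size();

            // Copy-on-write, like MAP_PRIVATE: reads come from the shared page cache,
            // the first write copies the page, and the file on disk stays untouched.
            MappedByteBuffer priv = ch.map(MapMode.PRIVATE, 0, size);
            priv.put(0, (byte) 0x42);

            // Like MAP_SHARED with PROT_WRITE: updates reach the page cache and the file,
            // and are visible to other processes mapping the same file.
            MappedByteBuffer shared = ch.map(MapMode.READ_WRITE, 0, size);
            shared.put(0, (byte) 0x24);

            // Read-only mapping: writing is simply not allowed; here the JVM throws
            // ReadOnlyBufferException where C would give you a segmentation fault.
            MappedByteBuffer ro = ch.map(MapMode.READ_ONLY, 0, size);
            // ro.put(0, (byte) 1); // would throw ReadOnlyBufferException
        }
    }
}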

Delphi logging with multiple sinks and delayed classification?

Imagine I want to parse a binary blob of data. If all goes okay, then all the logs are INFO, and the user by default does not even see them. If there is an error, then the user is presented with the error and can view the log to see the exact reason (I don't like programs that just say "file is invalid, for some reason, and you do not want to know it").
Probably most log libraries are aimed at quickly logging, classifying and keeping many, many log lines per second, which by itself is questionable, as there is no comfortable lazy evaluation and no closures in Delphi. Envy Scala :-)
However, that requires every line to be pre-classified.
Imagine this hypothetical flow:
Got object FOO [ok]
1.1. found property BAR [ok]
1.1.1. parsed data for BAR [ok]
1.2 found property BAZ [ok]
1.2.1 successfully parsed data for BAZ [ok]
1.2.2 matching data: checked dependency between BAR and BAZ [fail]
...
So, what are the desired features?
1) Nested logging (indenting, subordination) is desired then.
Something like highlighted in TraceTool - see TraceNode.Send Method at http://www.codeproject.com/KB/trace/tracetool.aspx#premain0
2) The 1, 1.1, 1.1.1, 1.2, 1.2.1 lines are sent as they happen to an info sink (TMemo, OutputDebugString, EventLog and so on), so the user can see and report at least which steps completed before the error.
3) 1, 1.2, 1.2.2 are retroactively marked as error (or warning, or whatever), inheriting from the most specific line. Obviously, warning supersedes info, error supersedes warning and info, etc.
4) 1 + 1.2 + 1.2.2 can be easily combined, e.g. with LogMessage('1.2.2').FullText, to be shown to the user or converted to an Exception, to carry the full story to a human.
4.1) Optionally, with relevant setup, it would not only be converted to Exception, but the latter even would be auto-raised. This probably would require some kind of context with supplied exception class or supplied exception constructing callback.
5) Multisink: info can be just appended into collapsible panel with TMemo on main form or currently active form. The error state could open such panel additionally or prompt user to do it. At the same time some file or network server could for example receive warning and error grade messages and not receive info grade ones.
6) Extra associated data might be nice too. Say, if it were rendered with a TreeView rather than a TMemo, it could have a "1.1.1. parsed data for BAR [ok]" item, with a mouse tooltip like "Foo's dimensions are told to be 2x4x3.2 metres".
Being a free library is nice, especially free with sources. Tracking down and fixing a bug relying solely on DCUs is sometimes much harder.
Not requiring an extra executable: it could offer an extra, more advanced viewer, but that should not be required for basic functionality.
Not being stalled/abandoned.
The ability to work and show at least something before the GUI is initialized would be nice too. Class constructors are cool, yet they are executed as part of unit initialization, when the VCL is not booted yet. If any assertion/exception is thrown from there, the user would only see Runtime error 217, with all the detail lost. At least OutputDebugString can be used, if nothing more...
Stack tracing is not required; if needed, I can do it and add it with Jedi CodeLib. But it is rarely needed.
External configuration is not required. It might be good for big application to reconfigure on the fly, but to me simplicity is much more important and configuration in code, by calling constructors or such, is what really matters. Extra XML file, like for Log4J, would only make things more fragile and complex.
I glanced at a few of the libraries mentioned here.
TraceTool has a great presentation; the link is above. Yet it has no info grade, only three predefined grades (Debug/Error/Warning) and nothing more, though maybe Debug would work as an Info replacement... It seems like a black box, only saving data into its own file and using an external tool to view it, not giving the stream of events back to me. But its message nesting and call chaining seem cool. Also cool is attaching objects/collections to messages.
Log4D and Log4Delphi seem to be in stasis, with last releases in 2007 and 2009 and the last targeted version being Delphi 7. They lack documentation (probably okay for a log4j guy, but not for me :-) ). Log4Delphi even has a test folder - but those tests do not compile in Delphi XE2 Update 1. A pity: in another thread here, Log4Delphi was hailed for how simple it is to create a custom log appender (sink)...
BTW, the very fact that Log4J alone was forked into two independent Delphi ports raises the question of which one is better, and suggests that both lack something if they had to remain split.
The mORMot part is hardly separated from the rest of the library. The demo application required UAC elevation to use its embedded SQLite3 engine and froze (no window opened, yet the process never exits normally) when Admin rights were refused. Another demo just started an infinite stream of AV exceptions, trying to unwind the stack. So it is probably not ready yet for the latest Delphi. Though its list of message grades is extensive, maybe even a bit too long.
Thank you.
mORMot is stable, even with the latest XE2 version of Delphi.
What you tried starting were the regression tests. Among its 6,000,000 tests, it includes the HTTP/1.1 Client-Server part of the ORM. Without Admin rights, the http.sys server is not able to register the URI, so you got errors. Which makes perfect sense. It's a Vista/Seven restriction, not a mORMot restriction.
The logging part can be used completely separated from the ORM part. Logging is implemented in SynCommons.pas (and SynLZ.pas for the fast compression algorithm used for archival and .map embedding). I use the TSynLog class without any problem to log existing applications (even Delphi 5 and Delphi 6 applications), existing for years. The SQLite3 / ORM classes are implemented in other units.
It supports the nesting of events, with auto-leave feature, just as you expect. That is you can write:
procedure TMyClass.MyMethod(const Params: integer);
begin
  TSynLog.Enter;
  // ... my method code
end;
And adding this TSynLog.Enter call will be logged with indentation corresponding to the recursion level. IMHO this may meet your requirements. It will declare an ISynLog interface on the stack, which will be freed by Delphi at the "end;" code line, so it will implement an auto-leave feature. And the exact unit name, method name and source code line number will be written into the log (as MyUnit.TMyClass.MyMethod (123)) if you generated a .map file at compilation (which may be compressed and appended to the .exe, so that your customers' logs will contain the source line numbers). You have methods at the ISynLog interface level to add custom logging, including parameters and custom state (you can log object properties as JSON if you need to, or write your own custom logging data).
The exact timing of each method is tracked, so you are able to profile your application from the data supplied by your customer.
If you think the logs are too verbose, you have several levels of logging, to be customized on the client side. See the blog articles and the corresponding part of the framework documentation (in the SynCommons part). You have, for instance, "Fail" events and some custom kinds of events. And it is totally VCL-independent, so you can use it without a GUI or before any GUI is started.
You have at hand a log viewer, which allows client-side profiling and a nested Enter/Leave view (if you click on a "Leave" line, you'll go back to the corresponding "Enter").
If this log viewer is not enough, you have its source code to make it fulfill your requirements, and all the needed classes to parse and process the .log file on your own, if you wish. Logs are textual by default, but can be compressed into binary on request, to save disk space (the log viewer is able to read those compressed binary files). Stack tracing and exception interception are both implemented, and can be activated on request.
You could easily add a numeration like "1.2.1" to the logs, if you wish to. You've got the whole source code of the logging unit. Feel free to ask any question in our forum.
Log4D supports nested diagnostic contexts in the TLogNDC class; they can be used to group together all steps which are related to one compound action (instead of a 'session'-based grouping of log events). Multi-sinks are called appenders in Log4D and Log4Delphi, so you could write a TLogMemoAppender with around twenty-five lines of code, and use it at the same time as an ODSAppender, a RollingFileAppender, or a SocketAppender, configurable at run time (no external config file required).

What information should I be logging in my web app?

I'm finishing up a web application and I'm trying to implement some logging. I've never seen any good examples of what to log. Is it just exceptions? Are there other things I should be logging? What type of information do you find useful for finding and fixing bugs?
Looking for some specific guidance and best practices.
Thanks
Follow up
If I'm logging exceptions what information specifically should I be logging? Should I be doing something more than _log.Error(ex.Message, ex); ?
Here is my logical breakdown of what can be logged within an application, why you might want to, and how you might go about doing it. No matter what, I would recommend using a logging framework such as log4net when implementing it.
Exception Logging
When everything else has failed, this should not. It is a good idea to have a central means of capturing all unhandled exceptions. This shouldn't be much harder than wrapping your entire application in a giant try/catch, unless you are using more than one thread. The work doesn't end there, though, because if you wait until the exception reaches you, a lot of useful information will have gone out of scope. At the very least you should try to collect specific pieces of the application state that are likely to help with debugging as the stack unwinds. Your application should always be prepared to produce this type of log output, especially in production. Make sure to take a look at ELMAH if you haven't already. I haven't tried it, but I have heard great things.
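The exact mechanics depend on your stack; purely as an illustration, in Java a default uncaught-exception handler gives you that central catch-all even with multiple threads (a sketch using log4j2, names invented):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public final class GlobalExceptionLogging {
    private static final Logger LOG = LogManager.getLogger(GlobalExceptionLogging.class);

    public static void install() {
        // Catches exceptions that escape any thread, not just the one wrapped in try/catch.
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) ->
                LOG.fatal("Unhandled exception on thread {}", thread.getName(), throwable));
    }
}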
Application Logging
What I call application logs includes any log that captures information about what your application is doing on a conceptual level, such as "Deleted Order" or "A User Signed On". This kind of information can be useful in analyzing trends, auditing the system, locking it down, testing, security and, of course, detecting bugs. It is probably a good idea to plan on leaving these logs on in production as well, perhaps at variable levels of granularity.
Trace Logging
Trace logging, to me, represents the most granular form of logging. At this level you focus less on what the application is doing and more on how it is doing it. This is one step above actually walking through the code line by line. It is probably most helpful in dealing with concurrency issues or anything for that matter which is hard to reproduce. You wouldn't want to always have this running, probably only turning it on when needed.
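To make the application/trace distinction concrete, here is a rough sketch (Java with log4j2, method and message names invented):

import java.util.List;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class OrderProcessor {
    private static final Logger LOG = LogManager.getLogger(OrderProcessor.class);

    public void process(String orderId, List<String> lines) {
        LOG.info("Processing order {}", orderId);               // application log: what happened
        for (String line : lines) {
            LOG.trace("evaluating pricing rules for {}", line); // trace log: how it happened
        }
        LOG.info("Order {} processed, {} lines", orderId, lines.size());
    }
}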
Lastly, as with so many other things that usually only get addressed at the very end, the best time to think about logging is at the beginning of a project so that the application can be designed with it in mind. Great question though!
Some things to log:
business actions, such as adding/deleting items. Talk to your app's business owner to come up with a list of things that are useful. These should make sense to the business, not to you (for example: when a user submits a report, when a user creates a new process, etc.)
exceptions
exceptions
exceptions
Some things NOT to log:
do not log information simply to track user usage. Use an analytics tool for that (which does its tracking in JavaScript on the client, not on the server)
do not track passwords or hashes of passwords (huge security issue)
Maybe you should log page/resource accesses which are not yet defined in your application, but are requested by clients. That way, you may be able to find vulnerabilities.
It depends on the application and its audience. If you are managing sales or trading stocks, you probably should log more info than, say, a personal blog. You need the log most when an error is happening in your production environment but you can't reproduce it locally. Having log levels and a log hierarchy helps in such situations, because you can dynamically increase the log level. See log4j's documentation and log4net.
My few cents.
Besides using log severity and exceptions properly, consider structuring your log statements so that you can easily look through the log data in the future - for example, extracting meaningful info quickly, running queries, etc. Generating an ocean of log data is no problem; the problem is turning that data into information. So structuring and defining it beforehand helps later usage. If you use log4j, I would also suggest using the mapped diagnostic context (MDC) - this helps a lot for tracking session contexts. Aside from trace and info, I would also use the debug level, where I usually keep temporary items. Those can be filtered out or disabled when not needed.
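For example, an MDC-style sketch might look like this (log4j 2 exposes the MDC as the ThreadContext class; the key names are made up):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class RequestHandler {
    private static final Logger LOG = LogManager.getLogger(RequestHandler.class);

    public void handle(String sessionId, String userId) {
        // Every statement logged on this thread now carries the session context,
        // so the output can later be filtered or queried per session/user
        // (add %X{sessionId} / %X{userId} to the pattern layout to print it).
        ThreadContext.put("sessionId", sessionId);
        ThreadContext.put("userId", userId);
        try {
            LOG.info("request started");
            LOG.debug("temporary detail, filtered out unless DEBUG is enabled");
        } finally {
            ThreadContext.clearMap(); // do not leak context onto the next request on this thread
        }
    }
}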
You probably shouldn't be thinking of this only at this stage; rather, logging is helpful to consider at every stage of development, to help defuse potential bugs before they arise. Depending on your program, I would try to capture as much information as possible. Log everything. You can always stop logging certain components or processes if you don't reference that data enough. There is no such thing as too much information.
From my (limited) experience, if you don't want to make a specific error table for each possible error type, construct a generic database table that accepts general information as well as a string that you can populate with exception data, confirmation messages during successful yet important processes, etc. I've used a generic function with parameters for this.
You should also consider the ability to turn logging off if necessary.
Hope this helps.
I believe that when you log an exception you should also save the current date and time, the requested URL, the URL referrer and the user's IP address.
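How you capture those depends on your stack; purely as a hedged sketch, a Java servlet filter (class and message names invented; assumes Servlet 4.0+, where Filter's init/destroy have default implementations) could attach that request information to the exception log like this:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class ExceptionContextFilter implements Filter {
    private static final Logger LOG = LogManager.getLogger(ExceptionContextFilter.class);

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        try {
            chain.doFilter(req, res);
        } catch (RuntimeException ex) {
            // The timestamp comes from the logging framework; add the request details here.
            LOG.error("Unhandled exception for {} (referer: {}, ip: {})",
                    http.getRequestURL(), http.getHeader("Referer"), http.getRemoteAddr(), ex);
            throw ex;
        }
    }
}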
