Can I generate a log with CPU and memory usage using Log4j?

I have a web application and I need to implement a library that generates a log with information about the "health" of the application and of the server hardware.
The generated log must contain CPU and memory usage information.
Can I do this with Log4j? If not, is there a Java library that lets me get this information?

First of all, you need a library. Two options:
SIGAR (examples).
JavaSysMon (example).
After that, you can use Log4j to log that information.
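If the JVM's own view of the machine is enough, a minimal sketch needs no third-party library at all. The class name below is illustrative, and java.util.logging is used only so the example is self-contained; the logging call maps directly onto Log4j's Logger.info:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.util.logging.Logger;

// Reads CPU load and JVM heap usage via the JDK's built-in MXBeans
// and writes them to a log. SIGAR/JavaSysMon give richer, system-wide
// stats (whole-machine memory, per-process CPU); the MXBeans only
// report the load average and this JVM's own heap.
public class HealthLogger {
    private static final Logger LOG = Logger.getLogger(HealthLogger.class.getName());

    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        Runtime rt = Runtime.getRuntime();

        double load = os.getSystemLoadAverage(); // -1.0 if unavailable on this platform
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb  = rt.maxMemory() / (1024 * 1024);

        LOG.info(String.format("system load avg=%.2f, heap used=%d MB of %d MB",
                load, usedMb, maxMb));
    }
}
```

Scheduling this with a java.util.Timer or ScheduledExecutorService gives you a periodic health log line.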

Azure Data Lake Store concurrency

I've been toying with Azure Data Lake Store, and in the documentation Microsoft claims that the system is optimized for low-latency small writes to files. Testing this out, I tried to perform a large number of writes from parallel tasks to a single file, but this approach fails in most cases, returning a Bad Request. This link https://issues.apache.org/jira/secure/attachment/12445209/appendDesign3.pdf shows that HDFS isn't made to handle concurrent appends to a single file, so I tried a second time using the ConcurrentAppendAsync method found in the API, but although the method doesn't crash, my file is never modified on the store.
What you have found out is correct about how parallel writes will work. I am assuming you have already read the documentation of ConcurrentAppendAsync.
So, in your case, did you use the same file for the WebHDFS write test and for ConcurrentAppendAsync? If so, then ConcurrentAppendAsync will not work, as mentioned in the documentation. But you should have gotten an error in that case.
In any case, let us know what happened and we can investigate further.
Thanks,
Sachin Sheth
Program Manager - Azure Data Lake

Programmatically reading iOS app .crash file?

As per my understanding, when an iOS app crashes, the system creates a .crash file that can be accessed through Xcode. Is it possible to read this file programmatically from within the app itself?
My intention is to send this crash log (if exists) to a server so that we can analyse the log without user intervention. There are several crash reporting services (e.g. bugsense, crashlytics) that are able to do this, so I assume a custom implementation should be possible?
It is not possible to read an iOS system generated crash report from within your app. The reason is that your app runs in a sandbox and has no access to anything outside its sandbox. So if you try to read a file or directory from a path that is not within your sandbox, you won't get any data.
The only option for getting crash reports is to add a library to your code that can detect crashes, create reports and send them somewhere. Most libraries send them to a server, since getting thousands of crash reports via email is not very efficient, especially considering the symbolication process required to get class, method and line number information.
A popular and safe-to-use open source library is PLCrashReporter. Most services use this one; a few write their own crash-detection code. There is an open source app for comparing different crash reporting services and libraries named CrashProbe (disclaimer: I am one of its developers). The project has a website at http://crashprobe.com with a list of services already compared.
It is also important to note that having a crash reporting library is not enough for good crash reports: the symbolication process needs to be able to translate the information in the crash report into the correct class name, method name, file name and line number for each frame in the stack trace.
I'd recommend comparing the services that seem to fit your needs (also using CrashProbe) and then deciding which one is best for your use case.
I won't recommend any specific services, since that is not allowed by the Stack Overflow guidelines.

HTML5 App Storage facilities

I need to create an iPad application. I was considering building an HTML5 application based on PhoneGap. This application will be a file-synchronization application and will download PDF files and images from a central database.
Would an HTML5 application make sense here? Or do I need to go native because of the storage requirements? I would need at least 1 GB of storage. From my research it seems that when you create an HTML5 application, you have limited facilities for saving files.
I did some tests on this. My conclusion is that in this case you need to go for the native option.
This is because of a number of reasons:
The HTML 5 storage/file api has limited functionality and it is not yet fully standardized.
Furthermore, devices have different approaches for handling files especially with regards to shared storage.
Also, in some cases it is not possible to load certain file types into the browser. For example, on IE (on Windows Phone) it is not possible to launch a PDF from the local file system, and sharing the data with a PDF program is not trivial because the HTML files are coupled to the specific application.

Remote Profiling with D7 (NOT memory profiling, but timing...)

We have one machine in-house that is 20 times slower starting our Delphi 7 app than any other machine.
We would like to get a performance profile (not memory profile) to locate where it's spending its time.
We own AQTime, but we've discovered that it doesn't do remote profiling.
We'd prefer not to take the time to build up an entire D7 IDE development environment just so we can use AQTime to profile our app on this one in-house machine.
The code is a bit too complex for us to want to instrument it ourselves.
Any suggestions on a profiler that will gather high level (procedure or line number) statistics remotely?
Take a look at SamplingProfiler. It doesn't do "remote" profiling, but it also doesn't require a development environment. It just needs to be able to launch the program to be profiled (so it has to run on the same machine) and the program has to have a .MAP file generated by the linker in the same folder as the .EXE. If this is in-house, that shouldn't be a problem for you.
And if you look at the helpfile, you'll even find ways to have it only profile certain sections of your program, which AQTime can't do. That helps if you know the issue is in one specific place, such as the startup code.
Why not install AQTime on the machine and use it as a standalone profiler? No need for an "entire D7 IDE development environment".
You can also try my free/open source sampling profiler:
http://code.google.com/p/asmprofiler/wiki/AsmProfilerSamplingMode
(I get better results with it than with SamplingProfiler)
It uses all kinds of Delphi debug symbols (.map, TD32, .jdbg, etc)
You can use our open source TSynLog class to add profiling to any application, not only on the developer's computer.
It is not an automated profiler like other tools: you'll have to modify your code. But it can be run on request, remotely, and even with no communication at all, even on the end customer's side.
You add some profiling calls to a method's code, and entering and leaving the method will be logged to a text file. A log viewer is supplied; it has dedicated features for profiling and for identifying slow methods.
(log viewer screenshot omitted; source: synopse.info)
The logging mechanism can be used to trace recursive calls. It can use an interface-based mechanism to log when you enter and leave any method:
procedure TMyDB.SQLExecute(const SQL: RawUTF8);
var
  ILog: ISynLog;
begin
  ILog := TSynLogDB.Enter(self, 'SQLExecute');
  // do some stuff
  ILog.Log(sllInfo, 'SQL=%', [SQL]);
end; // when you leave the method, it will write the corresponding event to the log
It will be logged as such:
20110325 19325801 + MyDBUnit.TMyDB(004E11F4).SQLExecute
20110325 19325801 info SQL=SELECT * FROM Table;
20110325 19325801 - MyDBUnit.TMyDB(004E11F4).SQLExecute 00.000.507
Here the method name is set in the code ('SQLExecute'). But if you have an associated .map file, the logging mechanism is able to read this symbol information, and write the exact line number of the event. You can even use a highly compressed version of the .map file (900 KB .map -> 70 KB .mab, i.e. much better than zip or lzma), or embed its content to the executable at build time.
Adding profiling at method level is therefore just the matter of adding one line of code at the beginning of the method, as such:
procedure TMyDB.SQLExecute(const SQL: RawUTF8);
begin
  TSynLogDB.Enter;
  // do some stuff
end; // when you leave the method, it will write the corresponding event to the log
High-resolution timestamps are also logged on the file (here 00.000.507). With this, you'll be able to profile your application with data coming from the customer side, on its real computer. Via the Enter method (and its auto-Leave feature), you have all information needed for this.
By proceeding step by step, you'll get to your application's bottlenecks very quickly. And it is possible to do the same on the end customer's side, on request.
I have used this on several applications and found several bottlenecks very easily, even under specific hardware, software and network configurations (you never know what your customers use).

What information should I be logging in my web app?

I'm finishing up a web application and I'm trying to implement some logging. I've never seen any good examples of what to log. Is it just exceptions? Are there other things I should be logging? What type of information do you find useful for finding and fixing bugs?
Looking for some specific guidance and best practices.
Thanks
Follow up
If I'm logging exceptions, what information specifically should I be logging? Should I be doing something more than _log.Error(ex.Message, ex)?
Here is my logical breakdown of what can be logged within an application, why you might want to, and how you might go about doing it. No matter what, I would recommend using a logging framework such as log4net when implementing.
Exception Logging
When everything else has failed, this should not. It is a good idea to have a central means of capturing all unhandled exceptions. This shouldn't be much harder than wrapping your entire application in a giant try/catch, unless you are using more than one thread. The work doesn't end there, though, because if you wait until the exception reaches you, a lot of useful information will have gone out of scope. At the very least you should try to collect specific pieces of the application state that are likely to help with debugging as the stack unwinds. Your application should always be prepared to produce this type of log output, especially in production. Make sure to take a look at ELMAH if you haven't already. I haven't tried it, but I have heard great things.
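As a sketch of such a central "last resort" handler in Java (class and logger names are illustrative; in .NET the analogue is AppDomain.UnhandledException): any exception no other code catches is logged with its full stack trace before the thread dies. This also answers the follow-up question: pass the Throwable itself to the logger, not just its message, since that is what preserves the stack trace and nested causes.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Installs a process-wide handler so exceptions that escape every
// try/catch, on any thread, are still logged with a full stack trace.
public class GlobalExceptionLogging {
    private static final Logger LOG = Logger.getLogger("unhandled");

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, ex) ->
                // Logging the Throwable (not just ex.getMessage()) keeps
                // the stack trace and any chained causes in the log.
                LOG.log(Level.SEVERE,
                        "Unhandled exception in thread " + thread.getName(), ex));
    }

    public static void main(String[] args) throws InterruptedException {
        install();
        Thread t = new Thread(() -> { throw new IllegalStateException("boom"); });
        t.start();
        t.join(); // the handler above logs the failure instead of losing it
    }
}
```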
Application Logging
What I call application logs includes any log that captures information about what your application is doing on a conceptual level, such as "Deleted Order" or "A User Signed On". This kind of information can be useful for analyzing trends, auditing the system, locking it down, testing, security, and detecting bugs, of course. It is probably a good idea to plan on leaving these logs on in production as well, perhaps at variable levels of granularity.
Trace Logging
Trace logging, to me, represents the most granular form of logging. At this level you focus less on what the application is doing and more on how it is doing it. This is one step above actually walking through the code line by line. It is probably most helpful in dealing with concurrency issues or anything for that matter which is hard to reproduce. You wouldn't want to always have this running, probably only turning it on when needed.
Lastly, as with so many other things that usually only get addressed at the very end, the best time to think about logging is at the beginning of a project so that the application can be designed with it in mind. Great question though!
Some things to log:
business actions, such as adding/deleting items. Talk to your app's business owner to come up with a list of things that are useful. These should make sense to the business, not to you (for example: when a user submits a report, when a user creates a new process, etc.)
exceptions
exceptions
exceptions
Some things to NOT to log:
do not log information simply to track user usage. Use an analytics tool for that (which tracks usage client-side in JavaScript, not on the server)
do not track passwords or hashes of passwords (huge security issue)
Maybe you should log page/resource accesses which are not yet defined in your application, but are requested by clients. That way, you may be able to find vulnerabilities.
It depends on the application and its audience. If you are managing sales or trading stocks, you probably should log more info than, say, a personal blog. You need the log most when an error is happening in your production environment but can't be reproduced locally. Having log levels and a logger hierarchy helps in such situations, because you can dynamically increase the log level. See log4j's documentation and log4net.
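Dynamically raising the log level can be sketched with the JDK's own logging API (java.util.logging is used here so the snippet is self-contained; log4j and log4net offer the same through their configuration, plus per-package logger hierarchies so you can raise verbosity for one subsystem only):

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

// Shows switching a logger's verbosity at runtime: debug-level output
// stays off during normal operation and is turned on only while
// diagnosing a production issue, with no redeploy.
public class DynamicLevel {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("app.orders");
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.ALL); // let the handler pass everything through
        log.addHandler(handler);

        log.setLevel(Level.INFO);
        log.fine("not shown: logger level is INFO"); // filtered out

        log.setLevel(Level.FINE); // increase verbosity on demand
        log.fine("shown now: logger level is FINE"); // emitted
    }
}
```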
My two cents.
Besides using log severity and exceptions properly, consider structuring your log statements so that you can easily look through the log data in the future, for example to extract meaningful information quickly or to run queries. There is no problem generating an ocean of log data; the problem is converting that data into information. So structuring and defining it beforehand helps later use. If you use log4j, I would also suggest using the mapped diagnostic context (MDC); this helps a lot for tracking session contexts. Aside from trace and info, I would also use the debug level, where I usually keep temporary items; those can be filtered out or disabled when not needed.
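The idea behind an MDC can be sketched with a toy per-thread context (class and key names are illustrative; in practice you would use Log4j's own MDC/ThreadContext, which integrates with the pattern layout instead of prepending the map by hand):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.logging.Logger;

// Toy mapped diagnostic context: per-thread key/value pairs (session id,
// user id) stamped onto every log line, so entries belonging to one
// session can be pulled out of interleaved traffic from many users.
public class MiniMdc {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(TreeMap::new);
    private static final Logger LOG = Logger.getLogger(MiniMdc.class.getName());

    public static void put(String key, String value) {
        CTX.get().put(key, value);
    }

    public static void log(String message) {
        // Every line carries the current thread's context automatically.
        LOG.info(CTX.get() + " " + message);
    }

    public static void main(String[] args) {
        put("session", "a1b2c3");
        put("user", "42");
        log("order submitted"); // logged as: {session=a1b2c3, user=42} order submitted
    }
}
```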
You shouldn't be thinking about this only at this stage; rather, logging is helpful to consider at every stage of development, to help defuse potential bugs before they arise. Depending on your program, I would try to capture as much information as possible. Log everything. You can always stop logging certain components or processes if you don't reference that data enough. There is no such thing as too much information.
From my (limited) experience, if you don't want to make a specific error table for each possible error type, construct a generic database table that accepts general information as well as a string that you can populate with exception data, confirmation messages during successful yet important processes, etc. I've used a generic function with parameters for this.
You should also consider the ability to turn logging off if necessary.
Hope this helps.
I believe that when you log an exception you should also save the current date and time, the requested URL, the URL referrer and the user's IP address.
