extracting error information from rails log files - ruby-on-rails

I am developing on 5 different Rails projects and also refactoring some (moving them from older Rails versions to 2.3). What is the best way to extract error information from the log files, so I can see all the deprecation warnings, runtime errors and so on, and work on improving the codebase?
Are there any services or libraries out there you can recommend that actually help with Rails log file parsing?

Read about the grep command on Linux.
http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
I don't know what the error log format is in Rails, but I guess every line with a warning or error contains the word "warning" or "error".
Then it would be like this:
grep -E "error|warning" logfile.txt
for both errors and warnings
grep "error" logfile.txt
for errors
grep "warning" logfile.txt
for warnings
And if you want to see new errors and warnings in real time, try this:
tail -f logfile.txt | grep -E "error|warning"
tail -f logfile.txt | grep "error"
tail -f logfile.txt | grep "warning"
Hope I could help you ;) and I hope I'm not wrong about the log format in Rails.
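For Rails specifically, ActiveSupport logs deprecations with the literal string "DEPRECATION WARNING", so grepping for that works well. A minimal, self-contained sketch (the sample log lines below are invented, not from a real app):

```shell
# Invented sample of Rails-style log lines, just to exercise the patterns.
cat > sample.log <<'EOF'
Processing PostsController#index (for 127.0.0.1) [GET]
DEPRECATION WARNING: truncate takes an option hash instead of separate arguments.
Completed in 12ms (View: 5, DB: 2) | 200 OK
NoMethodError (undefined method `titel' for #<Post:0x101>):
EOF

# Case-insensitive match for the usual trouble markers; here it picks out
# the DEPRECATION WARNING line and the NoMethodError line.
grep -iE "deprecation|error|warning" sample.log
```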

I've found the request-log-analyzer project to be very useful.
You can certainly grep the log to find errors and dump them out, but this tool does an excellent job of gathering information about the different actions and how long they take.
Here's some sample output.
This is the first thing I run when I get a call saying "my site is slow and I need help fixing it."
Hoptoad and/or Exceptional are great for ongoing errors, but they don't track long-running requests. Something like New Relic is good for that.

I use Hoptoad (http://www.hoptoadapp.com/pages/home); there is a free flavor. It logs your error messages to their database, and they provide an easy-to-use interface. All you have to do is install this plugin: http://github.com/thoughtbot/hoptoad_notifier.
It won't help for past errors, but it is great for isolating problems with a currently running app.


iOS app logging somewhere other than Xcode console

There's an app I've started working on that regularly logs a lot of stuff to the console, and it's really not convenient to use the console for additional debug logs. I don't want to erase these logs, because a few people maintain them, and they might be important to them.
So I actually need to write my debug output somewhere else. One option is to write it to a file and watch it in Terminal using the tail command, but an iOS app can only write inside its own folder, which, when using a simulator, changes every time I run the app. I don't want to change the path in the tail command every time; I want a fast process.
Does anyone have an idea for such external log place that I can use easily?
Here's how to make it easier to find and tail your log file when running in Simulator (I use this myself):
1) Add the following function to the .bashrc in your home directory, then log out and back in again.
xcodelog()
{
# Print the path of the most recently modified xcode.log beneath the current
# directory (stat -f "%m %N" prints "mtime path" on macOS/BSD).
find . -name xcode.log -type f -print0 | xargs -0 stat -f "%m %N" | sort -rn | head -1 | cut -f2- -d" "
}
2) Start your app in Xcode's simulator, so that at least something gets logged to your file. Note that the file your app logs to needs to be named "xcode.log" unless you change the filename in the code above.
3) Open Terminal and switch to your ~/Library/Developer/CoreSimulator directory. Then run the following command (it displays the last 100 lines of the log along with anything new written to it):
tail -n 100 -f $(xcodelog)
So the above command hunts down the most recently written "xcode.log" file among all apps and devices in the entire CoreSimulator subdirectory tree.
To clear the most recent xcode.log file, you can do this command:
cat /dev/null > $(xcodelog)
I switched to this approach for all my logging when Xcode 8 lost support for plugins, along with the very fine XcodeColors plugin that did ANSI color logging in Xcode's console. I changed my log system to output colors that Terminal supports when tailing a file, so I can spot errors in red, warnings in orange, user-step logging in yellow, and various degrees of other important info in progressive shades of gray. :)
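One caveat: stat -f "%m %N" is BSD/macOS syntax; GNU stat on Linux spells the same thing -c "%Y %n". A quick sanity check of the pipeline on a throwaway tree (the directory names below are invented; real simulator paths are device UUIDs):

```shell
# Fake CoreSimulator-like tree with two xcode.log files, the second one newer.
mkdir -p sim/devA/app sim/devB/app
echo first > sim/devA/app/xcode.log
sleep 1
echo second > sim/devB/app/xcode.log

# Same pipeline as xcodelog(), using GNU stat flags for Linux.
find sim -name xcode.log -type f -print0 \
  | xargs -0 stat -c "%Y %n" \
  | sort -rn | head -1 | cut -f2- -d" "
# prints: sim/devB/app/xcode.log
```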

Why does grep make my terminal unreadable when searching for #?

I'm trying to grep my repo searching for # sign to find some e-mail addresses because of this git issue.
I type grep -rnw '/my/path' -e '#' into the terminal, and the output is unreadable (see the screenshot).
Why does this happen?
P.S. I think there is no sensitive information in the picture, but someone please tell me if you think there is.
If you only have issues on files within a .git folder (the binary files in there can dump control characters that garble the terminal), you might consider excluding it from your recursive grep, as in this answer.
grep --exclude-dir=".git" -nrw ...
On CentOS, the regular grep might not include that option though.
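A self-contained illustration of the fix (file names below are invented): once .git is excluded, only the regular working-tree file matches.

```shell
# A fake repo: one normal file and one .git-internal file, both containing '@'.
mkdir -p repo/.git
echo 'contact: user@example.com' > repo/notes.txt
echo 'junk user@example.com in a pack file' > repo/.git/packed-refs

# With --exclude-dir, the hit inside .git is skipped:
grep -rn --exclude-dir=".git" -e '@' repo
# prints: repo/notes.txt:1:contact: user@example.com
```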

Sysinternals ProcDump -e usage

I am rather new to the procdump.exe utility, and I am trying to find out why a process I am running crashes without generating a crash dump or writing an unhandled exception to the log. I am using the following command line:
procdump.exe -e -t pid C:\DumpFiles\Process.dmp
When I run this against the process that is having issues, I don't see any dump file generated, though I do see the following exception many times:
Exception: E0434352.CLR
According to one website I looked at, that particular exception is generated whenever there is an unhandled exception, which isn't particularly helpful to me, and I am not sure how accurate that information is. Is there a way to get procdump to write a dump file when it encounters an exception like that, so I can see what is going on?
Thanks in advance!
E0434352.CLR is an error code that represents .NET exceptions and is used by the CLR, so I assume your process is managed code. Adding the '-g' switch (run as a native debugger in a managed process) will give you the information you're looking for.
As @yonisha said, "E0434352.CLR" is a generic message covering all .NET exceptions.
But you can see the specific .NET exception if you add the value "1" to the "-e" option, as follows:
procdump -e 1 -f "" [Your Process ID]
With that option, procdump will print the underlying .NET exception instead of just E0434352.CLR, like this:
[14:54:53] Exception: E0434F4D.System.IO.DirectoryNotFoundException ("Could not find a part of the path 'c:\myfile\test.dat'.")
Once you have identified what kind of .NET exception it is, you can dump it with these options:
procdump -ma -e 1 -f "DirectoryNotFoundException" [Your Process ID] c:\temp\test.dmp

VBS printer script executing error

I have some trouble executing/using VBS scripts linked to printers. They are located in %windir%\System32\Printing_Admin_Scripts.
The objective is to schedule a weekly print task to preserve the ink cartridge.
Looking at the scripts, everything seemed to be available for me to create this task.
The main script to use is prnqctl.vbs.
Before creating my task, I tried to test the script, and this is what I got (sorry for the French version; I will try to update the screenshot in English later):
There is obviously something wrong.
I have tried to google the error code; nothing conclusive.
I have tried to run the script in admin mode and also under an admin session; same problem.
I have done some research on CIMWin32; it seems to be a DLL, and I can find it in several locations on my filesystem.
My OS is W8.1.
If anybody has a suggestion or solution, I'm interested.
==>cscript C:\Windows\System32\Printing_Admin_Scripts\en-US\prnqctl.vbs -e
Unable to get printer instance. Error 0x80041002 Not found
Operation GetObject
Provider CIMWin32
Description
Win32 error code
The culprit is clear: you should provide a valid -p argument. It's a mandatory parameter for the -e operation:
==>cscript C:\Windows\System32\Printing_Admin_Scripts\en-US\prnqctl.vbs -e -p "Fax"
Success Print Test Page Printer Fax
==>

grep command that works on Ubuntu, but not on Fedora

I clean hacked WordPress installations for clients on a regular basis, and I have a set of scripts I have written to help me track down where the sites were hacked. One code snippet that I use regularly worked fine on Ubuntu, but since switching to Fedora on Friday it has quit behaving as expected. The command is this:
grep -Iri --exclude "*.js" "eval\s*(" * | grep -rivf ~/safeevals.txt >../foundevals.txt;
What is supposed to happen (and did happen when I was using Ubuntu): grep through all non-binary files, excluding JavaScript includes, for all occurrences of the eval() function, then perform a negative match, line by line, against all known occurrences of the eval() function in a vanilla installation of WordPress (the patterns for which are in ~/safeevals.txt).
What is actually happening: the first part works fine; I ran it separately, and it did find all instances of eval() in the installation. However, instead of grepping through those results, after the pipe it re-greps through all of the files, returning a negative match against ~/safeevals.txt (i.e. pretty much every line of every file in the installation).
Any idea why the second grep isn't acting on the piped data, or what I need to do to fix it? Thanks.
-Michael
Just tested on my Debian box: apparently, grep -r likes to assume a default argument of `.` when no file operand is given. I am really wondering whether that behaviour is valid. Anyway, dropping the -r option from the second grep command should fix it.
Edit: rgrep defaulting to $PWD seems to be a recent change in grep, see this discussion on unix.stackexchange and the link there to the commit in the upstream grep code repository.
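The behaviour is easy to reproduce with a recent GNU grep (file names below are invented): with -r and no file operand, grep searches the working directory and ignores its stdin, which is exactly why the second grep in the pipeline saw the whole tree instead of the piped results.

```shell
mkdir -p demo && echo 'needle' > demo/hay.txt
cd demo

# -r with no file operand: recent GNU grep searches . and ignores the pipe,
# so this matches demo/hay.txt rather than the piped line.
printf 'piped-needle\n' | grep -r needle

# Without -r, grep filters stdin, which is what the second stage of a
# pipeline actually wants.
printf 'piped-needle\n' | grep needle
```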
