What are the numbers in GLib errors and warnings?

(myapp:11228): GLib-CRITICAL **: g_date_strftime: assertion `slen > 0' failed
What does the number after myapp mean? It doesn't correspond to any source-code line, nor does its hex or binary interpretation correspond to a relevant location in the binary. The number has always mystified me, and looking at the GLib source it appears that it is supposed to be a line number. What does the number mean, or what is it supposed to mean?

I'm pretty sure it's the pid, in case you want to track down the process that emitted the message and, say, attach a debugger.
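If you want to act on that, you can pull the number out of the message and check which process it belongs to. A minimal sketch in Python (the ps invocation assumes a Unix-like system; the message is the one from the question):

import re
import subprocess

msg = "(myapp:11228): GLib-CRITICAL **: g_date_strftime: assertion `slen > 0' failed"

# The prefix has the form "(program:pid)"; capture the pid.
pid = int(re.match(r"\(([^:]+):(\d+)\)", msg).group(2))

# Show that process's command line, e.g. before attaching gdb with `gdb -p <pid>`.
out = subprocess.run(["ps", "-p", str(pid), "-o", "pid,cmd"],
                     capture_output=True, text=True)
print(out.stdout)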

How would I use this?

I'm trying to use this 'advanced Lua obfuscator' named XFuscator to obfuscate some code I created. However, I'm not sure how to go about using it. Could you give me a brief explanation? Here's the GitHub link: https://github.com/mlnlover11/XFuscator
Thanks in advance.
1. Download the XFuscator source code to your computer.
2. Fix the error in the file XFuscator\Step2.lua (see below).
3. Open a console and cd to the XFuscator root directory (where README.txt is located).
4. Run lua XFuscator.lua "path\to\your_program.lua" (lua should be in your PATH).
5. The result (the obfuscated program) is in path\to\your_program [Obfuscated].lua.
Please note that the obfuscated program can run only on the same OS and on the same Lua version: it depends heavily on the behavior of math.random() and math.randomseed(), and these functions are OS-dependent and Lua-implementation-dependent.
You can play with the -uglify option and the levels of obfuscation (see the usage message inside XFuscator.lua).
About the error:
In the file XFuscator/Step2.lua the logic of lines #5, #6, and #12 is incorrect:
Line #12 of Step2.lua uses the number intact (a double carries about 17 significant digits), while only 14 digits (Lua's default number-to-string format) are saved into the obfuscated file on line #6. This inconsistency sometimes leads to a different pseudorandom sequence, and you see the error message attempt to call a nil value when trying to execute your obfuscated program.
Not all Lua implementations are sensitive to the fractional part of a number given as an argument to math.randomseed(); for example, PUC Lua simply ignores the fractional part, and only the low 32 bits of the integer part are accepted as the seed (unfortunately, the Lua manual keeps silent about that). So it is better for the seed to be an integer.
How to fix the error:
Replace line #5
local __X = math.random()
with the following line:
local __X = math.random(1, 9^9)
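With this change the seed is an integer from 1 to 9^9 (387420489), which has only nine digits, so it survives the 14-digit default serialization on line #6 exactly, and math.randomseed() receives the same value that was used on line #12.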

How to change the data length parameter in Maxima software?

I need to use the Maxima software to deal with data. I am trying to read data from a text file structured as
1 2 3
11 22 33
etc.
The following commands load the data successfully:
load(numericalio);
read_matrix("path to the file");
The problem arises when I apply them to a more realistic (larger) data set. In this case the message Expression longer than allowed by the configuration setting appears.
How can I overcome this problem? I cannot see any relevant option in the configuration menu. I would be grateful for advice.
I ran into the same error message today, and it seems to be related to the size of the output that wxMaxima receives from the Maxima executable.
If you wish to display the output regardless, you can change it in the configuration here:
Edit>Configure>Worksheet>Show long expressions
Note that showing a massive expression or amount of data may dramatically slow the program down, so consider suppressing the output (use a $ instead of a ; at the end of your lines) if you don't need to visualize the data.
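For example, a line such as read_matrix("path to the file")$ loads the matrix without echoing it to the worksheet, while the same command terminated with ; prints the entire result.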

SPSS INSERT treats a warning as an error

Is it true that the SPSS INSERT procedure treats warnings the same way as errors? I am running the INSERT procedure with the ERROR=STOP keyword, and execution stops after the first warning.
I would say this is strange behaviour. For example, R's source function stops execution of a script only on errors, not on warnings.
It isn't always obvious which output indicates a warning and which an error, since both warnings and errors appear in a Warnings block. For example, if you run this code using the employee data.sav file shipped with Statistics,
missing values jobcat (1 thru 10).
desc variables=jobcat.
it will generate a Warnings block that says
Warnings
No statistics are computed because there are no valid cases.
But if you retrieve the error level, which requires programmability, you will see that this is a level 3 error. Warnings are assigned error level 2. Warnings do not stop a command from running, while higher levels do.
Levels 3, 4, and 5 are considered errors, although level 5, Catastrophic error, would be hard to report, since it means that the SPSS Processor has crashed.
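A minimal sketch of that programmability check, assuming the Python spss module that ships with IBM SPSS Statistics (the Submit, GetLastErrorLevel, and SpssError names should be verified against your version's documentation):

import spss

# Open the sample file and reproduce the example (adjust the path to
# your Statistics installation's samples directory).
spss.Submit('get file="employee data.sav".')
spss.Submit("missing values jobcat (1 thru 10).")
try:
    spss.Submit("desc variables=jobcat.")
except spss.SpssError:
    pass  # an error of level 3 or higher may raise; we still want the level

# Level 2 is a warning; levels 3, 4, and 5 are errors.
print(spss.GetLastErrorLevel())  # expected: 3 for this example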

dSYM Address Lookup

I have parsed out the addresses, file names and line numbers from a dSYM file for an iOS app. I basically have a table that maps an address to a file name and line number, which is very helpful for debugging.
To get the actual lookup address, I take the stack trace address from the crash report and use the formula specified in this answer: https://stackoverflow.com/a/13576028/2758234. So, something like this:
(actual lookup address)
= (stack trace address) + (virtual memory slide) - (image load address)
I use that address and look it up in my table. The file name I get is correct, but the line number always points to the end of the function or method that was called, not to the actual line that called the next function on the stack trace.
I read somewhere (I can't remember where) that frame addresses have to be de-tagged, because they are aligned to double the system pointer size. For 32-bit systems the pointer size is 4 bytes, so we de-tag using 8 bytes, with a formula like this:
(de-tagged address) = (tagged address) & ~(sizeof(uintptr_t)*2 - 1)
where uintptr_t is the unsigned integer type wide enough to hold a pointer.
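Concretely, the mask just rounds an address down to an 8-byte boundary. A small sketch of the arithmetic in Python (the example address is made up):

POINTER_SIZE = 4                  # bytes, for a 32-bit system
MASK = ~(POINTER_SIZE * 2 - 1)    # ~7, i.e. clear the low 3 bits

def detag(addr):
    # Round the address down to the nearest 8-byte boundary.
    return addr & MASK

print(hex(detag(0x14410B)))       # -> 0x144108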
After doing this, the lookup sort of works, but I have to do something like find the closest address that is less than or equal to the de-tagged address.
Question #1:
Why do I have to de-tag a stack frame address? Why in the stack trace aren't the addresses already pointing to the right place?
Question #2:
Sometimes in the crash report there seems to be a missing frame. For example, if function1() calls function2() which calls function3() which calls function4(), in my stack trace I will see something like:
0 Exception
1 function4()
2 function3()
4 function1()
And the stack trace address for function3() (frame 2 above) doesn't even point to the right line number (though it is the right file), even after de-tagging. I see this even when I let Xcode symbolicate a crash report.
Why does this happen?
For question #1: the addresses in an iOS crash report have three components that are taken into account: the original load address of your app, the random slide value that was added to that address when your app was launched, and the offset within the binary. At the end of the crash report, there should be a line showing the actual load address of your binary.
To compute the slide, you need to take the actual load address from the crash report and subtract the original load address. This tells you the random slide value that was applied to this particular launch of the app.
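A sketch of that computation (the concrete numbers are invented for illustration; 0x140000 is just the example load address used below):

# Take the actual load address from the crash report's binary images
# section, and the original load address from the binary itself
# (e.g. the __TEXT segment's vmaddr).
original_load_address = 0x4000
actual_load_address = 0x140000
slide = actual_load_address - original_load_address   # 0x13c000

# With the slide known, one common way to un-slide a stack-trace
# address before looking it up against the symbol file:
stack_address = 0x144100
file_address = stack_address - slide                  # 0x8100
print(hex(slide), hex(file_address))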
I'm not sure how you derived your table; the problem may lie there. You may want to double-check by using lldb. You can load your app into lldb and tell lldb that it should be loaded at address 0x140000 (this would be the actual load address from your crash report; don't worry about slides and original load addresses):
% xcrun lldb
(lldb) target create -d -a armv7 /path/to/myapp.app
(lldb) target modules load -f myapp __TEXT 0x140000
Now lldb has your binary loaded at the actual load address of this crash report. You can do all the usual queries in lldb, such as
(lldb) image lookup -v -a 0x144100
to do a verbose lookup on address 0x144100 (which might appear in your crash report).
You can also do a nifty "dump your internal line table" command in lldb with target modules dump line-table. For instance, I compiled a hello-world Mac app:
(lldb) tar mod dump line-table a.c
Line table for /tmp/a.c in `a.out
0x0000000100000f20: /tmp/a.c:3
0x0000000100000f2f: /tmp/a.c:4:5
0x0000000100000f39: /tmp/a.c:5:1
0x0000000100000f44: /tmp/a.c:5:1
(lldb)
I can change the load address of my binary and try dumping the line table again:
(lldb) tar mod load -f a.out __TEXT 0x200000000
section '__TEXT' loaded at 0x200000000
(lldb) tar mod dump line-table a.c
Line table for /tmp/a.c in `a.out
0x0000000200000f20: /tmp/a.c:3
0x0000000200000f2f: /tmp/a.c:4:5
0x0000000200000f39: /tmp/a.c:5:1
0x0000000200000f44: /tmp/a.c:5:1
(lldb)
I'm not sure I understand what you're doing with the de-tagging of the addresses. The addresses on the call stack are the return addresses of these functions, not the call instructions, so they may point to the line following the actual method invocation / dispatch source line, but that's usually easy to understand when you're looking at the source code. If all of your lookups are pointing to the end of the methods, I think your lookup scheme may have a problem.
As for question #2, the unwind of frame #1 can be a little tricky at times if frame #0 (the currently executing frame) is a leaf function that doesn't set up a stack frame, or is in the process of setting up a stack frame. In those cases, frame #1 can get skipped. But once you're past frame #1, especially on arm, the unwind should not miss any frames.
There is one very edge-casey wrinkle: when a function marked noreturn calls another function, the last instruction of the function may be a call, with no function epilogue, because it knows it will never get control again. That's pretty uncommon, but in that case a simple-minded symbolication will give you a pointer to the first instruction of the next function in memory. Debuggers and other tools use a trick where they subtract 1 from the return address when doing symbol / source line lookups to sidestep this issue, but it's not something casual symbolicators usually need to worry about. And you have to be careful not to apply the decrement-pc trick to the currently executing function (frame 0), because a function may have just started executing and you don't want to back the pc up into the previous function and symbolicate incorrectly.
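A minimal sketch of that decrement-pc rule as a symbolicator might apply it (a hypothetical helper, not lldb's actual code):

def addresses_for_lookup(frame_addresses):
    # Frame 0 is the currently executing pc: look it up as-is.
    # Every later frame holds a return address: back it up by one byte
    # so the lookup lands inside the call instruction rather than on
    # whatever happens to follow it.
    return [addr if i == 0 else addr - 1
            for i, addr in enumerate(frame_addresses)]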

How much time does grid.py take to run?

I am using libsvm for binary classification. I wanted to try grid.py, as it is said to improve results. I ran this script for five files in separate terminals, and the script has been running for more than 12 hours.
This is the state of my 5 terminals now:
[root@localhost tools]# python grid.py sarts_nonarts_feat.txt>grid_arts.txt
Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".
[root@localhost tools]# python grid.py sgames_nongames_feat.txt>grid_games.txt
Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".
[root@localhost tools]# python grid.py sref_nonref_feat.txt>grid_ref.txt
Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".
[root@localhost tools]# python grid.py sbiz_nonbiz_feat.txt>grid_biz.txt
Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".
[root@localhost tools]# python grid.py snews_nonnews_feat.txt>grid_news.txt
Wrong input format at line 494
Traceback (most recent call last):
File "grid.py", line 223, in run
if rate is None: raise "get no rate"
TypeError: exceptions must be classes or instances, not str
I had redirected the outputs to files, but so far those files contain nothing.
And the following files were created:
sbiz_nonbiz_feat.txt.out
sbiz_nonbiz_feat.txt.png
sarts_nonarts_feat.txt.out
sarts_nonarts_feat.txt.png
sgames_nongames_feat.txt.out
sgames_nongames_feat.txt.png
sref_nonref_feat.txt.out
sref_nonref_feat.txt.png
snews_nonnews_feat.txt.out (empty)
There's just one line of information in the .out files.
The ".png" files are gnuplot plots.
But I don't understand what the gnuplot warnings above convey. Should I re-run the scripts?
Can anyone please tell me how much time this script might take if each input file contains about 144,000 lines?
Thanks and regards
Your data is huge: 144,000 lines, so this will take some time. I have used data as large as yours and it took up to a week to finish. If you are using images, which I suppose you are given the size of the data, try resizing your images before creating the feature data. You should get approximately the same results with the resized images.
The libSVM FAQ speaks to your question:
Q: Why grid.py/easy.py sometimes generates the following warning message?
Warning: empty z range [62.5:62.5], adjusting to [61.875:63.125]
Notice: cannot contour non grid data!
Nothing is wrong and please disregard the message. It is from gnuplot when drawing the contour.
As a side note, you can parallelize your grid.py operations. The libSVM tools directory README file has this to say on the matter:
Parallel grid search
You can conduct a parallel grid search by dispatching jobs to a cluster of computers which share the same file system. First, you add machine names in grid.py:
ssh_workers = ["linux1", "linux5", "linux5"]
and then set up your ssh so that the authentication works without asking for a password.
The same machine (e.g., linux5 here) can be listed more than once if it has multiple CPUs or more RAM. If the local machine is the best, you can also enlarge nr_local_worker. For example:
nr_local_worker = 2
In my Ubuntu 10.04 installation, grid.py is actually /usr/bin/svm-grid.py.
I guess grid.py is trying to find the optimal value for C (or nu)?
I don't have an answer for how long it will take, but you might want to try this SVM library, even though it's an R package: svmpath.
As described on that page, it computes the entire "regularization path" for a two-class SVM classifier in about as much time as it takes to train an SVM using a single value of your penalty parameter C (or nu).
So, instead of training and cross-validating an SVM with value x for your C parameter, then doing it all again for x+1, then x+2, and so on, you can train the SVM once and then query its predictive performance for different values of C after the fact, so to speak.
Change:
if rate is None: raise "get no rate"
in line 223 in grid.py to:
if rate is None: raise ValueError("get no rate")
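(Raising a plain string is no longer allowed in Python, which is exactly the TypeError: exceptions must be classes or instances, not str shown in your traceback; raising a proper exception class fixes that.)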
Also, try adding:
gnuplot.write("set dgrid3d\n")
after this line in grid.py:
gnuplot.write("set contour\n")
This should fix your warnings and errors, but I am not sure it will make the run succeed, since grid.py seems to think your data has no rate.
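For background: set dgrid3d makes gnuplot interpolate scattered points onto a regular grid, and contouring only works on gridded data, which is why the Cannot contour non grid data warnings appear without it.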
