I'm using GCC to build a powerpc64 executable, but sometimes between functions I get the following issue: Screenshot
PowerPC instructions are still 4 bytes each; I tried some GCC options (-fno-align-functions), but the compiler still fills bytes between functions.
I want my functions to start directly after the end of the previous function, without any padding/zero fill (in the case of the screenshot, the function should start at 0x124).
Thanks.
The PPC64 ABI specifies a traceback table appended to functions. The zeroes may be due to the traceback table and not related to alignment. Try using the -mtraceback=no command line option.
In addition to the traceback table issue noted in the previous answer, functions are normally aligned on a 16-byte boundary. This is important for various reasons, including so the compiler can align hot loops on a 16-byte boundary for improved icache performance. Assembly code from GCC will have a directive like:
.p2align 4,,15
before each function definition to enforce this. So even without the traceback table your function will not start at address 0x124 without more effort.
This behavior can be overridden using -fno-align-functions, or using optimization level -Os (optimize for size). I've tried both methods, and they both remove the .p2align directive. Using -fno-align-functions is preferable unless you really want smaller and potentially slower code.
(If you are compiling with -O0 or -O1, you won't see the directive either, but we do not recommend compiling at such low optimization levels for either size or speed.)
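For a quick experiment, here is a minimal sketch combining both suggestions; the file name, function names and cross-compiler prefix are mine, not from the question, so adjust them to your toolchain:

/* pad.c -- minimal example to observe the inter-function padding.
   Try, with your powerpc64 compiler (toolchain prefix is just a guess):
     powerpc64-linux-gnu-gcc -O2 -S pad.c
     powerpc64-linux-gnu-gcc -O2 -fno-align-functions -mtraceback=no -S pad.c
   In the second case the .p2align directive before g should be gone and no
   traceback table should follow f. */
int f(int x) { return x + 1; }
int g(int x) { return x * 2; }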
In my objdump -t output, I see the following two lines:
00000000000004d2 l F .text.unlikely 00000000000000ec function-signature-goes-here [clone .cold.427]
and
00000000000018e0 g F .text 0000000000000690 function-signature-goes-here
I know l means local and g means global. I also know that .text is a section, or a type of section, in an object file, containing compiled program instructions. But what is .text.unlikely? Assuming it's a different section (or type-of-section) from .text - what's the difference?
In my GCC v5.4.0 manpage, I found the following switch:
-freorder-functions
which says:
Reorder functions in the object file in order to improve code
locality. This is implemented by using special subsections
".text.hot" for most frequently executed functions and
".text.unlikely" for unlikely executed functions. Reordering is done
by the linker so object file format must support named sections and
linker must place them in a reasonable way.
Also profile feedback must be available to make this option effective.
See -fprofile-arcs for details.
Enabled at levels -O2, -O3, -Os.
It looks like the compiler was run with optimization flags (or with that switch explicitly) for this binary, and functions are organized into subsections to improve spatial locality.
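As a related aside, you can also place a function into .text.unlikely yourself, without profile feedback, via GCC's cold attribute. A minimal sketch (function names are invented):

/* cold.c -- with GCC, the cold attribute typically places the function in
   the .text.unlikely subsection; verify with objdump -t after compiling,
   e.g. gcc -O2 -c cold.c */
__attribute__((cold)) void report_rare_error(void)
{
    /* rarely executed error-handling path */
}

void hot_path(void)
{
    /* frequently executed code stays in .text */
}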
I'm using Lua + LuaJIT 2.0.4, and I'm wondering: is it possible to restore the original parts of the code from dumps of Lua functions?
function a(l)
  if l > 3 then
    print(l*l)
  end
end

local b = string.dump(a)
In this example I am doing string.dump of the function 'a', which brings me to the following questions:
1. Is it possible to write this dump into a .txt file?
2. Is it possible to get the original names of functions, variables, and upvalues?
3. Is it possible to get strings, numbers, tables?
4. Is it possible to restore it to the full code, and if not, is it possible to get a disassembled listing?
"Yes" to all questions with a couple of caveats. For (1), make sure that "b" is used as part of the "mode" parameter in io.open on Windows, as the output of string.dump will have some binary content. For (2), it's only true when string.dump is used without the strip option, which was added in LuaJIT:
string.dump(f [,strip])
An extra argument has been added to string.dump(). If set to true,
'stripped' bytecode without debug information is generated. This
speeds up later bytecode loading and reduces memory usage.
For (4), I found this document to be very useful: http://files.catwell.info/misc/mirror/lua-5.2-bytecode-vm-dirk-laurie/lua52vm.html (it's for Lua 5.2, but most of the content applies to LuaJIT as well); it also includes a section on the difference between full and stripped bytecode that may answer some of your questions.
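If you are driving LuaJIT from a C host rather than from Lua scripts, the C-API counterpart of (1) looks roughly like the sketch below. The helper and file names are invented, but lua_dump and the writer callback are the standard Lua 5.1/LuaJIT API; note the "wb" mode, for the same binary-content reason as above:

/* dumpfn.c -- sketch: write the bytecode of a global Lua function to a file
   opened in binary mode ("b" is what matters on Windows). */
#include <stdio.h>
#include <lua.h>

static int file_writer(lua_State *L, const void *p, size_t sz, void *ud)
{
    (void)L;
    return fwrite(p, 1, sz, (FILE *)ud) == sz ? 0 : 1;  /* 0 = keep going */
}

int dump_global_function(lua_State *L, const char *fname, const char *path)
{
    FILE *out = fopen(path, "wb");
    if (!out) return -1;
    lua_getglobal(L, fname);                /* push e.g. the function "a" */
    int rc = lua_dump(L, file_writer, out); /* 0 on success */
    lua_pop(L, 1);
    fclose(out);
    return rc;
}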
I wrote the following code in the file "origin.lua":
if test==nil then
  print(aa["bb"]["cc"]) -- to produce a crash
end
print(1120)
When it crashes, it generates the following message:
lua: origin.lua:3: attempt to index global 'aa' (a nil value)
In order to prevent decompilation and keep the code safe, I use the following command to convert my code:
luac -s -o test.lua origin.lua
I know the -s argument strips the debug information, so it no longer shows the line number when it crashes:
lua: ?:0: attempt to index global 'aa' (a nil value)
But how can I keep debugging information while protecting my Lua code with luac? Is there any solution?
There is no way to do this built into Lua, but there are some work-arounds.
If you only need line numbers, then one option is to leave the line numbers in the chunk. Line numbers are not that useful for reverse engineering (unluac currently doesn't use them at all), so it shouldn't affect security. Lua doesn't provide an option for this, but it is easy to modify Lua to leave them in when stripping. From ldump.c
n = (D->strip) ? 0 : f->sizelineinfo;
can be changed to
n = f->sizelineinfo;
(Disclaimer: untested)
A more complicated option would be to modify the Lua runtime to output the virtual machine program counter instead of the line number, and also output information describing the location of the current function in the chunk (e.g. top level, first function, second function nested in third function, etc). Then the line number could be looked up by the developer in a non-stripped version of the chunk. (Here is a reference to someone using this approach on lua-l -- no source code was provided, though.)
Note that preventing decompilation is not true security. It may help against casual attacks, but Lua bytecode is not hard to read.
luac does not encrypt the output. It compiles your Lua source code to bytecode, that's all. The code is neither encrypted nor does it run any faster; only the load time is shorter, since the compilation step is not needed.
If you want your code to be encrypted, I suggest encrypting the bytecode using e.g. AES-256 and then decrypting it in memory just before handing it to the Lua state. This way the bytecode is encrypted on disk but decrypted in memory.
The overhead is low. We have used this technique for years.
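To make the idea concrete, here is a minimal sketch of the decrypt-then-load step; aes256_decrypt is a placeholder for whatever crypto library you actually use (it is not a real API), while luaL_loadbuffer is the standard Lua call that receives the plain bytecode:

#include <stdlib.h>
#include <string.h>
#include <lua.h>
#include <lauxlib.h>

/* hypothetical helper: decrypts enc[0..enc_len) into a malloc'd buffer and
   stores the plaintext length in *out_len -- supplied by your crypto library */
extern unsigned char *aes256_decrypt(const unsigned char *enc, size_t enc_len,
                                     const unsigned char *key, size_t *out_len);

static int load_encrypted_chunk(lua_State *L, const unsigned char *enc,
                                size_t enc_len, const unsigned char *key)
{
    size_t len;
    unsigned char *bytecode = aes256_decrypt(enc, enc_len, key, &len);
    if (!bytecode) return -1;   /* decryption failed */
    /* hand the decrypted bytecode to Lua; it never hits the disk in the clear */
    int status = luaL_loadbuffer(L, (const char *)bytecode, len, "=chunk");
    memset(bytecode, 0, len);   /* wipe the plaintext as soon as Lua has its copy */
    free(bytecode);
    return status;              /* 0 on success, loaded chunk left on the Lua stack */
}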
Executing rustc -C help shows (among other things):
-C opt-level=val -- optimize with possible levels 0-3, s, or z
The levels 0 to 3 are fairly intuitive, I think: the higher the level, the more aggressive optimizations will be performed. However, I have no clue what the s and z options are doing and I couldn't find Rust-related information about them.
It seems you are not the only one confused, as described in a Rust issue. The options follow the same pattern as Clang's:
Os For optimising the size when compiling.
Oz For even more size optimisation.
Looking at the relevant lines in Rust's source code, I can say that s means optimize for size, and z means optimize for size some more.
All optimizations seem to be performed by the LLVM code-generation engine.
These two sequences, Os and Oz, are pretty similar within LLVM. Oz invokes 260 passes (I am using LLVM 12.0), whereas Os invokes 264. Oz's sequence of analyses and optimizations is almost a strict subsequence of Os's, except for one pass (opt -loops), which appears in a different place within Os. That said, notice that the effects of the optimizations can still differ, because they use different cost models, e.g., constants that determine the behavior of optimizations. Thus, optimizations that have an impact on size, like loop unrolling and inlining, can behave differently in these two sequences.
I'm developing an application that parses COBOL programs. Some of these programs respect the traditional coding style (program text from column 8 to 72), and some are newer and don't follow this style.
In my application I need to determine the coding style in order to know if I should parse content after column 72.
I've been able to determine whether the program starts at column 1 or 8, but programs that start at column 1 can also follow the rule of comments after column 72.
So I'm trying to find rules that will allow me to determine if texts after column 72 are comments or valid code.
I've found some, but it's hard to tell whether they will work every time:
a dot after column 72 marks the end of a sentence, but I fear a dot can appear in comments too
finding the closing character of a statement after column 72: " ' ) }
looking at the characters in columns 71, 72 and 73; if there is no space, find the whole word and check whether it is a keyword or a variable. Problem: it can be a variable coming from a COPY or a replacement, etc.
I'd like to know what you think of these rules and whether you have any ideas to help me determine the coding style of a COBOL program.
I don't need an API or something just solid rules that I will be able to rely on.
I think you need to know the COBOL compiler for each program. Its documentation should tell you what conventions/configurations/switches it uses to decide if the source code ends at column 72 or not.
So.... which compiler(s)?
And if you think the column 72 issue is a pain, wait till you get around to actually parsing the COBOL itself. If you are not well prepared to handle the lexical issues of the language, you are probably very badly prepared to handle the syntactic ones.
There is no absolutely reliable way to determine if a COBOL program is in fixed or free format based only on the source code. Heck, it is sometimes difficult to identify the programming language based only on source code. Check out this classic polyglot - it is valid under 8 different language compilers. That said, you could try a few heuristics that might yield the correct answer more often than not.
Compiler directives embedded in source code
Watch for certain compiler directives that determine code format. Unfortunately, every compiler vendor uses their own flavour of directive. For example, Micro Focus COBOL uses the SOURCEFORMAT directive. This directive will appear near the top of the program, so a short pre-scan could be used to find it. On the other hand, OpenCOBOL uses >>SOURCE FORMAT IS FREE and >>SOURCE FORMAT IS FIXED to toggle between free and fixed format; different parts of the same program could be formatted differently!
The bottom line here is that you will have to support the conventions of multiple COBOL compilers.
Compiler switches
Source code format can also be specified using a compiler switch. In this case, there are no concrete clues to go on. However, you can be reasonably sure that the entire source program will be either fixed or free. All you can do here is guess. Unless the programmer is out to "mess with your head" (and some will), a program in free format will have the keywords IDENTIFICATION DIVISION or ID DIVISION starting before column 8. Every COBOL program will begin with these keywords, so you can use them as the anchor point for determining code format in the absence of embedded compiler directives.
Warning - this is far from foolproof, but it might be a good start.
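If it helps, here is a rough sketch of those two heuristics (directive scan plus the column of the DIVISION header) rolled into one function; it is only a guess-builder, ignores lower-case source and vendor-specific directive spellings, and the function name is mine:

#include <stdio.h>
#include <string.h>

/* returns 1 = looks free format, 0 = looks fixed format, -1 = undecided */
int guess_cobol_format(FILE *src)
{
    char line[512];
    while (fgets(line, sizeof line, src)) {
        /* heuristic 1: embedded source-format directives (OpenCOBOL flavour) */
        if (strstr(line, ">>SOURCE FORMAT IS FREE"))  return 1;
        if (strstr(line, ">>SOURCE FORMAT IS FIXED")) return 0;
        /* heuristic 2: where does the division header start? */
        const char *div = strstr(line, "IDENTIFICATION DIVISION");
        if (!div) div = strstr(line, "ID DIVISION");
        if (div)
            /* columns are 1-based: fixed-format code starts in column 8 or later */
            return (div - line) < 7 ? 1 : 0;
    }
    return -1;
}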
There won't be an algorithm to do this with 100% certainty, because if comments can be anything, they can also be compilable COBOL code. So you could theoretically write a program that means one thing if the comments are ignored, and something else entirely if the comments are treated as part of the COBOL.
But that's extremely unlikely. What's most likely to happen is that if you try to compile the code under the wrong convention, it will simply fail. So the only accurate way to do this is to try compiling/parsing the program one way, and if you come to a line that can't make sense, switch to the other style. You could also support passing an argument to the compiler when the style is already known.
You can try using heuristics like what you've described, but that will never be totally accurate. The most they can give you is a probability that the code is one or the other style, which will increase as they examine more and more lines of code. They could be useful for helping you guess the style before you start compiling, or for figuring out when the problem is really just a typo in the code.
EDIT:
Regarding ideas for heuristics, it's hard to say. If there were a standard comment sigil like // or # in other languages, this would be a lot easier (actually, there is, but it sounds like your code doesn't follow this convention). The only thing I can think of would be to check whether every line (or maybe 99% of lines, and not counting empty lines or lines commented with *) has a period somewhere before position 72.
One thing you DON'T want to do is apply any heuristics to the part after position 72. That is, you don't want to be checking the comments to see if they're valid COBOL. You want to check what you know is COBOL first, and see if that works by itself. There are several reasons for this:
Comments written in English are likely to have periods and quotes in them, so your first and second bullet points are out.
Natural languages are WAY harder to parse than something like COBOL.
The comments could easily have COBOL in them (maybe someone commented out the previous version of the line).
An important rule for comments is that they should never affect what the program does. If changing the comments can change how the program is compiled, you violate that.
All that in mind, my opinion is that you shouldn't use heuristics at all. You should always try to compile the program under both conventions unless one is explicitly specified. There's a chance that code will compile successfully under both conventions, and then you'll have two different programs and no way to tell which one is correct.
If that happens, you need to compare the two results (perhaps with a hash or something) to see if they're the same program. If they're the same, great, but if not, you'll need to force the user to explicitly choose a convention.
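A skeleton of that "parse both ways, then compare" fallback might look like this; parse_fixed_format, parse_free_format and ast_fingerprint stand in for your own parser's entry points and are not real APIs:

#include <stdio.h>

typedef struct Ast Ast;                               /* your parser's AST type */
extern Ast *parse_fixed_format(const char *path);     /* NULL on parse error */
extern Ast *parse_free_format(const char *path);
extern unsigned long ast_fingerprint(const Ast *ast); /* hash of the parse result */

int parse_with_unknown_format(const char *path, Ast **out)
{
    Ast *fixed = parse_fixed_format(path);
    Ast *free_form = parse_free_format(path);
    if (fixed && free_form &&
        ast_fingerprint(fixed) != ast_fingerprint(free_form)) {
        /* both conventions compile but to different programs: make the user choose */
        fprintf(stderr, "%s: ambiguous source format, please specify\n", path);
        return -1;
    }
    *out = fixed ? fixed : free_form;
    return *out ? 0 : -1;   /* -1 also when neither convention parses */
}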
Most COBOL compilers will allow you to generate and analyze the output of the text manipulation phase.
The text preprocessor output can be viewed (using OpenCOBOL for the example) with:
cobc -E program.cob
The text manipulation processor deals with any COPY ... REPLACING compiler directives, as well as converting SOURCE FORMAT IS FIXED (with line continuations, string literal concatenations, comment line removal, among other things) to the actual free format that the compiler's lexical analyzer needs. A lot of the OpenCOBOL toolkits (the cross-referencer and Animator, to name two) use source code AFTER the preprocessor pass. I don't think you'll lose any street cred if your parser program relies on post-processed source code files.