Code changes to build z3 on Solaris

I need to get z3 building on Solaris 8. I took a look at scoped_timer.cpp, which is the only place that uses -D_LINUX_, and I figure I can add the right code there for Solaris, guarded by -D_SOLARIS_. src/util/hwf.cpp would also need to be changed to provide definitions of fma() and nearbyint(), which aren't defined on Solaris 8. That can be done too, by defining fma(x, y, z) as x*y + z, but then there would be two roundings instead of the single rounding IEEE 754 requires. Would this pose a problem for the purposes of z3? I would also need to change mk_util.py to set up compile and link options for Solaris. That also seems feasible, since we are using g++ on Solaris, so the compile options would be similar; the link options would probably require additional libraries. I am willing to do some of the legwork, but I may need help along the way. Would anyone be willing to work with me, and would this be a welcome addition?
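For illustration, a minimal sketch of what the hwf.cpp fallbacks could look like, assuming a -D_SOLARIS_ guard mirroring the existing -D_LINUX_ one (the guard name and placement are assumptions; the double-rounding caveat above applies to this fma):

#ifdef _SOLARIS_
#include <cmath>

// Solaris 8 lacks C99 fma(): this computes x*y + z with two roundings
// instead of the single rounding IEEE 754 requires (see caveat above).
static double fma(double x, double y, double z) {
    return x * y + z;
}

// Solaris 8 also lacks nearbyint(): floor(x + 0.5) rounds halfway cases
// away from zero, whereas the real nearbyint() honours the current
// rounding mode (round-to-nearest-even by default).
static double nearbyint(double x) {
    return (x < 0.0) ? -std::floor(-x + 0.5) : std::floor(x + 0.5);
}
#endif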

would this be a welcome addition?
I guess we can take a pull request when it is ready (and it does not obscure other settings) and there are at least two users for this.
The usual conditions apply for taking pull requests https://github.com/Z3Prover/z3/wiki/Contribution-Guidelines.
Of course you can have your own forks without merging changes back.
The other issue may be that the endianness of your machines and the constraints on memory alignment expose further portability problems. You should be able to find such issues by running the regression tests in the z3test repository (as well as the unit tests). We recently fixed some endianness-related problems for ARM/PowerPC.
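For context, SPARC machines (the usual Solaris 8 hardware) are big-endian while x86 is little-endian, so byte-order assumptions are a plausible source of test failures. A generic C++ probe (not z3 code) to check a build machine:

#include <cstdio>

int main() {
    // Store 1 in an int and look at its first byte in memory:
    // 1 on little-endian (x86), 0 on big-endian (SPARC).
    unsigned int one = 1;
    unsigned char first = *reinterpret_cast<unsigned char*>(&one);
    std::printf("%s-endian\n", first ? "little" : "big");
    return 0;
}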

Related

View code generated by IBM's Enterprise COBOL compiler

I have recently started doing some work with COBOL, where I have only ever done work in z/OS Assembler on a Mainframe before.
I know that COBOL is translated into mainframe machine code, but I am wondering whether it is possible to see the generated code.
I want to use this to better understand the inner workings of COBOL.
For example, if I was to compile a COBOL program, I would like to see the assembly that results from the compile. Is something like this possible?
Relenting, only because of this: "I want to use this to better understand the inner workings of COBOL".
The simple answer is that there is, for Enterprise COBOL on z/OS, a compiler option, LIST. LIST provides what is known as the "pseudo-assembler" output in your compile listing (and some other useful stuff for understanding the executable program). Another compiler option, OFFSET, shows the displacement from the start of the program of the code generated for each COBOL verb. LIST (which inherently includes the offsets already) and OFFSET are mutually exclusive, so you need to specify LIST and NOOFFSET.
Compiler options can be specified on the PARM of the EXEC PGM= for the compiler. Since the PARM is limited to 100 characters, compiler options can also be specified in a data set with the DD name SYSOPTF (whose use is itself enabled by another compiler option).
A third way to specify compiler options is to include them in the program source, using the PROCESS or (more common, since it is shorter) CBL statement; for example, a first line of CBL LIST,NOOFFSET.
It is likely that you have a "panel" to compile your programs. This may have a field allowing options to be specified.
However, be aware of a couple of things: it is possible, when installing the compiler, to "nail in" compiler options (which means they can't be changed by the application programmer); it is possible, when installing the compiler, to prevent the use of PROCESS/CBL statements.
The reason for the above is standardisation. There are compiler options which affect code generation, and using different code-generation options within the same system can cause unwanted effects. Even across systems, different code-generation options may not be desirable if programmers are prone to expect the "normal" options.
It is unlikely that listing-only options will be "nailed", but if you are prevented from specifying options, then you may need to make a special request. This is not common, but you may be unlucky. Not my fault if it doesn't work for you.
The compiler options, and how you can specify them, are documented in the Enterprise COBOL Programming Guide for your specific release. There you will also find the documentation of the pseudo-assembler (be aware that it appears in the document as "pseudo-assembler", "pseudoassembler" and "pseudo assembler", for no good reason).
When you see the pseudo-assembler, you will see that it is not in the same format as an Assembler statement (I've never discovered why, but as far as I know it has been that way for more than 40 years). The line with the pseudo-assembler will also contain the machine-code in the format you are already familiar with from the output of the Assembler.
Don't expect to see a compiled COBOL program looking like an Assembler program that you would write. Enterprise COBOL adheres to a language Standard (1985) with IBM Extensions. The answer to "why does it do it like that" will be "because", except for optimisations (see later).
What you see will depend heavily on the version of your compiler, because in the summer of 2013 IBM introduced V5, with entirely new code generation and optimisation. Up to V4.2, the code generator dated back to "ESA", which meant that more than 600 machine instructions introduced since ESA, as well as the extended registers, were not available to Enterprise COBOL programs. The same COBOL program compiled with V4.2 and with V6.1 (the latest version at the time of writing) will be markedly different, not only because of the different instructions, but also because the structure of an executable COBOL program was redesigned.
Then there's optimisation. With V4.2, there was one level of possible optimisation, and the optimised code was generally "recognisable". With V5+, there are three levels of optimisation (you get level zero without asking for it) and the optimisations are much more extreme, including, well, extreme stuff. If you have V5+ and want to know a bit more about what is going on, use OPT(0) to get a grip on what is happening, and then note the effects of OPT(1) and OPT(2) (and realise, from the increased compile times, how much work is put into the optimisation).
There's not really a substantial amount of official documentation of the internals. A search engine will reveal some stuff. IBM's Compiler Cafe (COBOL Cafe Forum) is a good place if you want more knowledge of V5+ internals, as a couple of the developers attend there. For up to V4.2, here may be as good a place as any to ask further specific questions.

LLVM-based code mutation for genetic programming?

For a study on genetic programming, I would like to implement an evolutionary system on the basis of LLVM and apply code mutations (possibly at the IR level).
I found llvm-mutate, which is quite useful for executing point mutations.
As far as I understand, the instructions get counted/numbered; one can then, for example, delete a numbered instruction.
However, introducing new instructions only seems to be possible using one of the statements already available in the code.
Real mutation, however, would allow inserting any of the allowed IR instructions, irrespective of whether it is already used in the code to be mutated.
In addition, it should be possible to insert calls to library functions of linked libraries (not used in the current code, but possibly available because the library has been linked in by clang).
Did I overlook this in llvm-mutate, or is it really not possible so far?
Are there any projects trying to implement, or that have already implemented, such mutations for LLVM?
LLVM has lots of code analysis tools which should allow implementing the aforementioned approach, but LLVM is huge, so I'm a bit disoriented. Any hints as to which tools could be helpful (e.g. for getting a list of available library functions)?
Thanks
Alex
Very interesting question. I have been intrigued by the possibility of doing binary-level genetic programming for a while. With respect to what you ask:
It is apparent from their documentation that llvm-mutate can't do what you are asking. However, I think it is wise for it not to. My reasoning is that any machine-language genetic program would inevitably face the Halting Problem, i.e. it would be impossible to know whether a randomly generated instruction would completely crash the whole computer (for example, by assigning a value to an OS-reserved pointer), or whether it might run forever and take all of your CPU cycles. Turing's theorem tells us that it is impossible to know in advance whether a given program would do that. Mind you, llvm-mutate can still cause a perfectly harmless program to crash or run forever, but I think their approach makes it less likely by only taking existing instructions.
However, such a thing as "impossibility" only deters scientists, not engineers :-)...
What I have been thinking is this: in nature, real mutations work a lot more like llvm-mutate than like what we do in normal Genetic Programming. In other words, they simply swap letters out of a very limited set (A, T, C, G), and every possible variation comes out of this. We could have a program, or set of programs, with an initial set of instructions plus a set of "possible functions", either linked or defined in the program. Most of these functions would not actually be used, but they would be there to provide "raw DNA" for mutations, just like in our DNA. This set of functions would cover the complete (or semi-complete) set of possible functions for a problem space. Then we simply use basic operations like the ones in llvm-mutate.
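To make the "raw DNA" idea concrete, here is a rough sketch (untested, against the recent LLVM C++ API; the choice of tanh and all names are illustrative) of inserting a call to a linked-but-unused library function as new mutation material:

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Insert a call to tanh() just before InsertPt, assuming libm is linked.
// getOrInsertFunction declares the prototype if the module lacks it, so
// the function does not need to appear in the original code at all.
void insertLibCall(Module &M, Instruction *InsertPt, Value *Arg) {
    Type *DblTy = Type::getDoubleTy(M.getContext());
    FunctionCallee Tanh = M.getOrInsertFunction("tanh", DblTy, DblTy);
    IRBuilder<> B(InsertPt);     // builder positioned just before InsertPt
    B.CreateCall(Tanh, {Arg});   // the new instruction is now mutable "DNA"
}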
Some possible problems though:
- Given the amount of possible variability, the only way to have acceptable execution times would be to have massive amounts of computing power. Possibly achievable in the Cloud or with GPUs.
- You would still have to contend with Mr. Turing's Halting Problem. However I think this could be resolved by running the solutions in a "Sandbox" that doesn't take you down if the solution blows up: something like a single-use virtual machine or a Docker-like container, with a time limitation (to get out of infinite loops); see the sketch after this list. A solution that crashes or times out would get the worst possible fitness, so that the programs would tend to diverge away from those paths.
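A minimal POSIX sketch of the sandbox idea from the second point (all names are illustrative; a real system would also limit memory and file access):

#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

// Run one candidate program with a hard CPU-time cap, so an infinite
// loop kills the candidate, not the whole evolutionary run.
bool runCandidate(const char *path, unsigned cpuSeconds) {
    pid_t pid = fork();
    if (pid == 0) {                         // child: the sandboxed candidate
        rlimit lim = {cpuSeconds, cpuSeconds};
        setrlimit(RLIMIT_CPU, &lim);        // kernel kills it past the cap
        execl(path, path, (char *)0);
        _exit(127);                         // exec itself failed
    }
    int status = 0;
    waitpid(pid, &status, 0);
    // Crash, timeout, or nonzero exit => worst possible fitness.
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}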
As to why do this at all, I can see a number of interesting applications: self-healing programs, programs that self-optimize for a specific environment, program "vaccination" against vulnerabilities, mutating viruses, quality assurance, etc.
I think there's a potential open-source project here. It would be insane, dangerous and a time-sucking vortex: just my kind of project. Count me in if someone starts doing it.

Is setNumThreads(x) parallelizing my OpenCV code?

I really wonder whether using OpenCV's setNumThreads() really allows my code to run in parallel. I've searched a lot on the internet without finding any answer to my question.
Does anyone have an answer?
The effect depends greatly on the configuration options you select at cmake configure time; see for example CMakeLists.txt, plus the catches of the different configuration options:
/* IMPORTANT: always use the same order of defines
1. HAVE_TBB - 3rdparty library, should be explicitly enabled
2. HAVE_CSTRIPES - 3rdparty library, should be explicitly enabled
3. HAVE_OPENMP - integrated to compiler, should be explicitly enabled
4. HAVE_GCD - system wide, used automatically (APPLE only)
5. HAVE_CONCURRENCY - part of runtime, used automatically (Windows only - MSVS 10, MSVS 11)
*/
And with those, you can understand the code itself. All that said, the parallelising engine won't do much if you're running an inherently sequential algorithm, which is practically everything under OpenCV... My guess is that if you had several OpenCV programs running in parallel, you would see a meaningful difference.
I feel the need to build on miguelao's answer: most of OpenCV's functionality is NOT multithreaded. setNumThreads only affects multithreaded functions, such as calcOpticalFlowPyrLK.
Normally, by default, OpenCV will use as many threads as you have cores, so setNumThreads won't give you a speed gain.
My main use for it is disabling multithreading, so that I may do my own with coarser granularity.
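A quick way to see both answers in action (a sketch; calcOpticalFlowPyrLK, mentioned above, is one of the functions that does use the parallel framework):

#include <opencv2/core/utility.hpp>
#include <iostream>

int main() {
    // By default this reports roughly the number of cores.
    std::cout << "default threads: " << cv::getNumThreads() << "\n";

    cv::setNumThreads(0);  // 0 disables OpenCV's threading optimizations
    std::cout << "after setNumThreads(0): " << cv::getNumThreads() << "\n";

    // Time a multithreaded function (e.g. calcOpticalFlowPyrLK) under
    // both settings: sequential functions will show no difference.
    return 0;
}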

IDA not identifying statically compiled functions

I'm currently reverse engineering a file that appears to be statically compiled; however, IDA Pro isn't picking up on any of the signatures! I feel like I am spending a lot of time stepping through functions that should be recognized by IDA, but they're not.
Anyway, maybe I am wrong... does anyone have any ideas? Has anyone run into this before?
IDA is a great disassembler, but it is not perfect. Some code, especially inlined/optimized code, simply cannot be disassembled into coherent functions in an automated fashion. This is what happens during compiling - coherent code is translated into instructions that the machine understands, not humans. IDA can make guesses and estimates, but it can't do everything. Reverse engineering will always involve some amount of manual interpretation to fill in the gaps.
If the compiler is not recognized by IDA (e.g. there were some changes in startup code), signatures won't be applied automatically. And if IDA doesn't know this compiler at all, it won't have any signatures. So:
- if it has signatures but the compiler was not recognized automatically, apply them manually. For Delphi/C++ Builder, try b32vcl or bds.
- if it doesn't have signatures for this compiler/library, you can create them yourself using the FLAIR tools, assuming you have access to the original libraries: run the pattern extractor matching your library format (e.g. plb, pcf or pelf) to produce a .pat file, then run sigmake on it to build the .sig.
This question is very broad, but I will try to give my opinion.
If the problem is that IDA is not correctly identifying Delphi, then you should try other software. There is a good tool called IDR (Interactive Delphi Reconstructor); however, keep in mind that it runs the software before disassembling it, and you should not run any untrustworthy programs on your PC (try a virtual machine instead).
Otherwise, if the question is about IDA itself, then... IDA is not perfect at all, so it needs a reverse engineer to drive it well; this means you will have to statically identify some code, stack pointers, variables, etc. When it comes to the Hex-Rays decompiler there are even more things to look out for. For example, it can identify the wrong calling convention for a function and you will have to correct it, or it can create too many variables that should be mapped by hand.
Also, there are some databases of FLIRT signatures for IDA that could be useful to you: https://github.com/Maktm/FLIRTDB

How to prevent a program from being copied, using Delphi

I would like to know: say I develop a program and have access to the client's computer. Is there a way to build the program so that it can only run on that machine, by writing the computer's unique identifier (if there is something like that) into the code and compiling the program? I'm using Delphi XE2.
Yes, you can prevent some degree of unauthorized use by binding your executable to machine characteristics. You can do it yourself (problematic) or you can buy an off-the-shelf solution to do it for you (disclaimer--I work for one of the companies that produce solutions for these kinds of problems: Wibu-Systems). There are two problems with machine binding; we can help with one of them:
False positives: machine characteristics can change due to user upgrades or weird driver behavior. That can cause your licensing system to report that the user is trying to abuse the license (a false positive). This is an endemic problem in these systems. (Shameless self-promotion: we have just released a new method of binding to reduce or eliminate these kinds of errors. We call it SmartBind(tm).)
Crackability: because any machine binding has to use OS calls to get hardware "fingerprint" info back for validation, a cracker can patch the DLLs involved so they always return known "good" values, allowing for cracked software. These kinds of cracks are rampant on bittorrent sites. Unfortunately there is no great way around it, although our approach uses some crypto mojo to make it harder to do. For the ultimate in anti-piracy, you have to use a crypto device like a CmStick, HASP, or KeyLok. The NSA can crack anything, of course, but the degree of difficulty of cracking a top-notch hardware-based solution like CodeMeter makes it unlikely unless the payoff is truly gigantic.
What I strongly suggest is that you look into commercial solutions to carefully study the available options. There are a number of vendors in this space and several good products to choose from (of course, I think our product is the best). Rolling your own solution will cause you lots of grief downstream as you try to deal with various configuration issues and potentially unhappy users.
The short answer is that there is no reliable way to prevent copying a program. Certainly there are techniques for identifying particular instances of the program, identifying machine hardware, and so on, but for every one of those techniques there is a countering technique to bypass it, for users who really want to go to the trouble: whether that is hacking your program to change what it looks for (or disable the checks altogether), virtualizing the hardware you are looking for, etc. There is always a way. It is just a matter of the time and effort someone is willing to put in.
If you want something simple, this will give you the hard disk volume ID as a number, which should be unique to each machine, bar hacking.
function GetHDSerialNumber: DWord;
var
  dw, mc, fl: DWord; // serial number, max component length, filesystem flags
  c: string;
begin
  // Root of the volume the executable is running from, e.g. 'C:\'
  c := ExtractFileDrive(Application.ExeName) + '\';
  // Only the volume serial number (@dw) is requested; the volume-name
  // and filesystem-name buffers are skipped by passing nil.
  GetVolumeInformation(PChar(c), nil, 0, @dw, mc, fl, nil, 0);
  Result := dw;
end;
This works up to Delphi 2007; versions above that are Unicode, so you're on your own with that problem.
While there is no such thing as hack-proof hardware, the Wibu system mentioned has not been hacked yet, and it has strong anti-hack features, including physical design features that make even the most sophisticated hacking all but impossible.
Other solutions like i-Lock have been hacked, but so far Wibu is a good answer. I just bought their starter pack.
