OpenCV static linking on 64 bit

I've written a simple application in OpenCV, and compiled it using the following command:
g++ -I ./include/opencv -Wall -o imageHash imageHash.h imageHash.cpp -lcv -lhighgui
What I'm trying to do next is the following:
use static linking, so I can run this application without needing to install OpenCV on the target machine
compile the app into a CPU-independent form, so I can run it on both 32-bit and 64-bit machines.
How do I modify the compilation command to achieve this?
Thanks,
krisy

If you want it to run on both 32-bit and 64-bit systems, compile in 32-bit mode. As for static linking, in theory the way to do it is to uncheck BUILD_SHARED_LIBS under the build tab when configuring OpenCV with CMake. The problem I faced is that this does not seem to work, so for now you may be stuck with dynamic linking. To avoid installing OpenCV on the other systems, just put the DLLs in the same directory as the exe.
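If the static build does work for you, the link line then looks roughly like this (a sketch only: the -L path is a placeholder, and the exact set of extra system libraries OpenCV's static archives pull in depends on how it was built):
g++ -m32 -I ./include/opencv -Wall -o imageHash imageHash.cpp -L ./lib -Wl,-Bstatic -lcv -lhighgui -Wl,-Bdynamic -lpthread -lz
The -m32 part covers the second point: a 32-bit binary is the closest you can get to something that runs on both 32-bit and 64-bit machines.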

Related

How to compile a project which requires SSE2 on MacBook with M1 chip?

I need to install a piece of software which requires SSE2 on my MacBook Air with an M1 chip (macOS Monterey).
When I am trying to compile the project I receive the following error:
/libRootFftwWrapper/vectorclass/vectorclass.h:38:4: error: Please compile for the SSE2 instruction set or higher
#error Please compile for the SSE2 instruction set or higher
^
and the error message links to the following lines in the code:
#include "instrset.h" // Select supported instruction set
#if INSTRSET < 2 // SSE2 required
#error Please compile for the SSE2 instruction set or higher
#else
I understand that only Intel chips are equipped with SSE2, but is there any kind of translator which can help me build this project?
Update: the problem is solved. The solution is in the answer section.
Alisa's solution may not be optimal for some people, so here is an alternative.
Rosetta 2 is basically an emulator; it takes compiled x86 machine code and runs it on ARM. I don't have an M1 CPU, but by all accounts it does a very good job of this.
That said, it can often be better to compile code directly to target the Arm CPU instead of relying on Rosetta. The compiler generally has more information about how the code works than an emulator which has to operate after all that additional context has been thrown away, so it can sometimes optimize code more effectively.
The problem Alisa is running into is that SSE intrinsics aren't designed to be portable; they're designed to let people achieve better performance by writing code that is very tightly coupled to the underlying architecture.
There are a couple of projects which allow you to compile your SSE code using NEON (which you can think of as Arm's version of SSE) by providing alternate implementations of the SSE API. The two most popular are probably SSE2NEON and SIMD Everywhere (SIMDe) (full disclosure: I am the lead developer of the latter).
SSE2NEON simply implements SSE using NEON. SIMDe provides many implementations, including NEON, AltiVec/VSX (POWER), WebAssembly SIMD, z/Architecture, etc., as well as portable fallbacks which work everywhere.
Both projects work basically the same way: instead of including <xmmintrin.h> (or some other x86-specific header, depending on which ISA you want to use), you include either SSE2NEON or SIMDe. You then add any relevant compiler flags to set the target (e.g., -march=armv8-a+simd), and you're good to go.
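As a rough sketch of what that change looks like in the source (the simde/x86/sse2.h path and the SIMDE_ENABLE_NATIVE_ALIASES macro are from SIMDe's documentation; where you put sse2neon.h is up to you):
#if defined(__aarch64__) || defined(__arm__)
#  define SIMDE_ENABLE_NATIVE_ALIASES   // keep using the _mm_* names unchanged
#  include <simde/x86/sse2.h>           // or: #include "sse2neon.h"
#else
#  include <emmintrin.h>                // the usual SSE2 header on x86
#endif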
If performance isn't a major concern, Rosetta 2 is probably the easiest option. Otherwise you may want to look into SSE2NEON or SIMDe.
Another consideration is whether you just want a quick fix or eventually want to port the code over to Arm... Rosetta 2 is not intended to be a long-term solution, but rather a stop-gap to allow existing code to continue working while people port their code. SSE2NEON and SIMDe both make it possible to mix x86 and Arm SIMD code in the same executable, so you can port your code gradually over time instead of having to flip one big switch to transition from x86 to Arm.
I managed to compile the project by using Rosetta 2, as suggested in the comments below.
To install Rosetta I used the following command:
$ softwareupdate --install-rosetta
Then I installed Homebrew, clang and cmake for x86_64 arch by using:
$ arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
$ arch -x86_64 /usr/local/bin/brew install llvm
$ arch -x86_64 /usr/local/bin/brew install cmake
I also had to re-tap Homebrew by using:
$ rm -rf "/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core"
$ arch -x86_64 /usr/local/bin/brew tap homebrew/core
as suggested by brew doctor.
Finally, the project compiled after removing the previously generated CMakeCache:
$ make clean
$ arch -x86_64 /usr/local/bin/cmake build_dir
$ make
$ make install

clang/llvm compile fatal error: 'cstdarg' file not found

I'm trying to convert a large gcc/makefile project to clang. I got it roughly working for x86, but now I'm trying to get cross-compilation working.
The way it currently works is that we use Linaro's 7.1.1 ARM compiler alongside its companion sysroot directory for the base libraries/headers. I installed clang-6.0 and then the base clang (not sure if that mattered).
I used some commands I found to redirect clang to clang-6.0, and when I execute 'clang -v' I get:
clang version 6.0.0-1ubuntu2~16.04.1 (tags/RELEASE_600/final)
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
....
Found candidate GCC installation: /usr/bin/../lib/gcc/i686-linux-gnu/9
....
Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/6.5.0
....
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/7.4.0
Candidate multilib: .;#m64
Selected multilib: .;#m64
It does not find the compiler we currently use, which is at
/usr/local/gcc-linaro-7.1.1-2017.08-i686_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ (there is also a directory for *x86_64*).
I only found references to setting --sysroot, but not to pointing at a specific compiler. I'm definitely still lost about the relationship between clang, llvm and other compilers. I even saw somewhere that I needed to compile llvm before I could use it?
I very roughly made changes in our makefiles to get the following output; basically all I had to add was '-target arm-linux-gnueabihf', and I reordered the mcpu/mfloat/marm/march flags so they came after -target in case it mattered:
clang --sysroot=/usr/local/sysroot-glibc-linaro-2.25-2017.08-arm-linux-gnueabihf -c -std=c++0x
-g -DDEBUG_ON -target arm-linux-gnueabihf -mcpu=cortex-a7 -mfloat-abi=hard -marm -march=armv7ve
-Wall -fexceptions -fdiagnostics-show-option -Werror .... -I/usr/local/gcc-linaro-7.1.1-2017.08-x86_64_arm-linux-gnueabihf/arm-linux-gnueabihf/include .... and many more
I think the problem probably lies with the change I made, which is the actual 'clang' call that replaced
/usr/local/gcc-linaro-7.1.1-2017.08-i686_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ ....
I end up with:
fatal error: 'cstdarg' file not found
#include <cstdarg>
As I said before, I can already cross-compile with gcc, so I've already come across the standard-library issues that require 'build-essential', 'g++-multilib', etc., and those are already installed.
I've looked around and really haven't found anything too useful; I'm on Linux Mint 18.3, and the closest things I found were issues people had on Mac and Windows.
So I came across some posts mentioning setting --gcc-toolchain=/your/choice/of/cross/compiler, but they also mention it not working. I discovered that if you combine this with installing llvm-6.0-dev (or maybe llvm-6.0-tools; the tools package pulled in dev, so I'm not 100% sure) it at least worked for me.
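Putting that together, the kind of invocation --gcc-toolchain is meant to enable looks roughly like this (a sketch only, reusing the sysroot and toolchain paths from the question; main.cpp is a placeholder):
clang --sysroot=/usr/local/sysroot-glibc-linaro-2.25-2017.08-arm-linux-gnueabihf --gcc-toolchain=/usr/local/gcc-linaro-7.1.1-2017.08-i686_arm-linux-gnueabihf -target arm-linux-gnueabihf -mcpu=cortex-a7 -mfloat-abi=hard -marm -march=armv7ve -std=c++0x -c main.cpp
With the toolchain directory supplied, clang can pick up the Linaro libstdc++ headers (which is where <cstdarg> lives) instead of only searching the host GCC installations listed in 'clang -v'.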
Any compiler, clang or gcc, needs to know where a header file is defined. The standard headers, standard libraries, C runtime, and libc are all packaged together for each target (e.g., arm64, x86) in a directory called a 'sysroot'. When we compile a program we need to pass the path to the sysroot so the compiler knows where to look for standard headers during compilation, and where to look for common libraries (libc, libstdc++, etc.) during linkage.
Normally, when we compile a program for the same machine, the compiler uses the standard headers available in '/usr/include' and libraries from '/usr/lib'. When cross-compiling we should supply the sysroot as a compiler flag, e.g. gcc --sysroot="/path/to/arm64/sysroot/usr" test.cpp. The same goes for clang. Most often pre-packaged cross compilers come with a script/binary that has the 'sysroot' path embedded into it, e.g. aarch64-linux-gnu-gcc (https://packages.ubuntu.com/xenial/devel/gcc-aarch64-linux-gnu).
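With clang the same idea applies, just with the target triple spelled out explicitly; roughly (the paths and test.cpp are placeholders, as above):
clang++ --target=aarch64-linux-gnu --sysroot="/path/to/arm64/sysroot/usr" test.cpp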
... the closest things I found were issues people had on mac and windows.
On Mac the clang compiler will have a similar configuration to Linux, so the details you found there should be applicable to your setup as well.
More details on sysroot and cross-compilation:
https://elinux.org/images/1/15/Anatomy_of_Cross-Compilation_Toolchains.pdf

OTOOL alternate for linux

I have a reverse-engineering setup on a Mac machine. This setup does some reverse engineering on iOS applications (.ipa files). I'm migrating the setup from the Mac to a Linux machine.
Currently, on the Mac, I'm using otool on the .ipa binaries with the following commands:
otool -L /iOS/binary/path
otool -lv /iOS/binary/path
otool -hv /iOS/binary/path
Now I have to do the same operation, i.e. reverse engineer the iOS applications, but on the Linux machine. AFAIK, otool is not available for Linux.
I've come across jtool, which I think is the most relevant option so far. I can use it on Linux, and it does something similar to otool, but not exactly the same. E.g. when using the -L command with jtool, I also need to specify the architecture, whereas otool lists the shared libraries for all the available architectures.
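For example, the difference looks roughly like this (the -arch spelling is my reading of jtool's help, so treat it as an assumption):
otool -L /iOS/binary/path
jtool -arch arm64 -L /iOS/binary/path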
I tried ldd, but I'm getting the error - "not a dynamic executable".
I tried objdump, but it asks for the object file.
I'm not sure which tool I can use. I need to figure out an alternative tool which can do the same as otool, or, if not the same, what changes I need to make to use the alternative tool.

How do you compile Halide for iOS?

The README claims it can compile to armv7, but I cannot find the magic incantation to make it work.
I started down the rabbit hole of changing the Makefile to set the arch=armv7, fixing the resulting compilation errors, etc, but that doesn't seem like the right way to go about it.
The recommended cmake flags are:
cmake -DLLVM_TARGETS_TO_BUILD="X86;ARM;NVPTX" -DLLVM_ENABLE_ASSERTIONS=ON -DCMAKE_BUILD_TYPE=Release ..
But alas, the bin directory contains only a .a and a .so, both of which are compiled for x86_64. There are no dylibs.
I can successfully run the test iOS app in the simulator, linking against the x86 libraries, but I cannot build for a device since there are no ARM binaries.
Here is a link to the Halide test app I'm trying to build:
https://github.com/halide/Halide/tree/master/apps/HelloiOS
You should use AOT compilation for iOS. The JIT in principle works on ARM (the architecture), but not on iOS (the OS).
Clarification: are you trying to build Halide to run on ARM, or merely to generate code for ARM? (If the latter, any target will do, as all builds of Halide can generate code for all known targets.)
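For what it's worth, a minimal AOT sketch of that flow (the pipeline is made up purely for illustration; the Target fields and compile_to_static_library call are from Halide's public C++ API):
#include "Halide.h"
using namespace Halide;

int main() {
    // Trivial pipeline, just to have something to compile.
    ImageParam input(UInt(8), 2, "input");
    Var x("x"), y("y");
    Func brighter("brighter");
    brighter(x, y) = input(x, y) + 1;

    // Emit a static library + header for 64-bit ARM iOS, even though
    // this generator itself runs as an x86_64 host binary.
    Target target(Target::IOS, Target::ARM, 64);
    brighter.compile_to_static_library("brighter", {input}, "brighter", target);
    return 0;
}
You run this on the host and link the resulting brighter.a / brighter.h into the Xcode project; the libHalide you built can stay x86_64, since any build of Halide can generate code for any supported target.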

How do I compile boost into a .a library for use in an xcodeproj?

I currently have a file called libboost_serialization.a left over from another developer, but when I try to compile I get 'Undefined symbols for architecture x86_64:' and a ton of errors. I'm assuming this is because the .a file was built for 32-bit, not 64-bit, so I'm trying to recompile Boost for 64-bit.
I'm having trouble, though. I've come across many guides like this that go over how to get Boost installed onto your system, but nothing on compiling Boost into a .a for use in a project. How would I go about doing this?
In essence you need to enable static libraries when building the Boost libraries:
Download and unpack the source code.
Bootstrap the Boost build by executing ./bootstrap.sh
Then execute b2 with the option link=static; for example, I use ./b2 link=static --prefix=/usr/local and then install the result with sudo ./b2 link=static --prefix=/usr/local install
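If the undefined-symbols error really does come from an archive built for the wrong architecture, you can also force the address model explicitly and then link the resulting .a straight into a quick test build; a rough sketch (paths and main.cpp are placeholders):
./b2 link=static address-model=64 --prefix=/usr/local
clang++ -std=c++11 -I /usr/local/include main.cpp /usr/local/lib/libboost_serialization.a -o serialization_test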
