How to compile a project which requires SSE2 on a MacBook with an M1 chip?

I need to install software which requires SSE2 on my MacBook Air with an M1 chip (macOS Monterey).
When I try to compile the project, I get the following error:
/libRootFftwWrapper/vectorclass/vectorclass.h:38:4: error: Please compile for the SSE2 instruction set or higher
#error Please compile for the SSE2 instruction set or higher
^
and the error message links to the following lines in the code:
#include "instrset.h" // Select supported instruction set
#if INSTRSET < 2 // SSE2 required
#error Please compile for the SSE2 instruction set or higher
#else
I understand that only Intel chips are equipped with SSE2, but is there any kind of translator that can help me build this project?
Update: problem is solved. Solution is in the answer section.

Alisa's solution may not be optimal for some people, so here is an alternative.
Rosetta 2 is basically an emulator; it takes compiled x86 machine code and runs it on ARM. I don't have an M1 CPU, but by all accounts it does a very good job of this.
That said, it can often be better to compile code directly to target the Arm CPU instead of relying on Rosetta. The compiler generally has more information about how the code works than an emulator which has to operate after all that additional context has been thrown away, so it can sometimes optimize code more effectively.
The problem Alisa is running into is that SSE intrinsics aren't designed to be portable; they're designed to let people achieve better performance by writing code that is tightly coupled to the underlying architecture.
There are a couple of projects which allow you to compile your SSE code using NEON, which you can think of as Arm's version of SSE, by providing alternate implementations of the SSE API. The two most popular are probably SSE2NEON and SIMD Everywhere (SIMDe) (full disclosure: I am the lead developer of the latter).
SSE2NEON simply implements SSE using NEON. SIMDe provides many implementations, including NEON, AltiVec/VSX (POWER), WebAssembly SIMD, z/Architecture, etc., as well as portable fallbacks which work everywhere.
Both projects work basically the same way: instead of including <xmmintrin.h> (or another x86-specific header, depending on which ISA you want to use), you include either SSE2NEON or SIMDe. You then add any relevant compiler flags to set the target (e.g., -march=armv8-a+simd), and you're good to go.
If performance isn't a major concern, Rosetta 2 is probably the easiest option. Otherwise you may want to look into SSE2NEON or SIMDe.
Another consideration is whether you just want a quick fix or eventually want to port the code over to Arm... Rosetta 2 is not intended to be a long-term solution, but rather a stop-gap to allow existing code to continue working while people port their code. SSE2NEON and SIMDe both make it possible to mix x86 and Arm SIMD code in the same executable, so you can port your code gradually over time instead of having to flip one big switch to transition from x86 to Arm.

I managed to compile the project by using Rosetta 2, as suggested in the comments.
To install Rosetta I used the following command:
$ softwareupdate --install-rosetta
Then I installed Homebrew, clang, and cmake for the x86_64 arch by using:
$ arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
$ arch -x86_64 /usr/local/bin/brew install llvm
$ arch -x86_64 /usr/local/bin/brew install cmake
I also had to re-tap Homebrew by using:
$ rm -rf "/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core"
$ arch -x86_64 /usr/local/bin/brew tap homebrew/core
as suggested by brew doctor.
Finally, the project compiled after removing the previously generated CMakeCache:
$ make clean
$ arch -x86_64 /usr/local/bin/cmake build_dir
$ make
$ make install

Related

How to build clang with the memtag sanitiser enabled

I have spent a few hours trying to get the built-from-source version of clang (v15) to work with the memtag sanitiser. For those of you who don't know what that is, it is simply a version of the address sanitiser that leverages ARM's Memory Tagging Extension (MTE).
Anyway, while I can use it normally with the repository version of clang (v10), the version built from source just does not work.
Here is the command I use for both: clang main.c -S -march=armv8+memtag -fsanitize=memtag, where clang is either the repository version or the built-from-source version. The former works seamlessly; the latter does not.
I've tried to build llvm with different parameters, but none seemed to do the trick. Here's my current build configuration:
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra;lld;lldb;openmp;polly;pstl;compiler-rt" -DLLVM_TARGETS_TO_BUILD="AArch64" ../llvm
I wonder if there is some parameter I have to specify to build clang with this sanitiser enabled.
PS: using the -fsanitize=memtag flag does not give any error: with the built version of clang it simply does not insert the instrumentation code.
If anybody is able to give me some insight I would really appreciate it. Thanks ;)

How to use meson to build glib

I need to upgrade glib for a specific project. It currently uses glib 2.28.8. I have three problems.
I've never used meson and ninja before, so I checked glib's INSTALL.in and it just said to run meson _build followed by ninja -C _build. So I ran meson _build and got the following output:
$ meson _build
The Meson build system
Version: 0.47.2
Source dir: /srv/devel/build/glib-2.65.0
Build dir: /srv/devel/build/glib-2.65.0/_build
Build type: native build
meson.build:227: WARNING: Identifier 'in' will become a reserved keyword in a future release. Please rename it.
meson.build:227:14: ERROR: Expecting eol got id.
if vs_crt_opt in ['mdd', 'mtd']
So the basic build doesn't work. Why?
For our purposes, we use the following configure command:
PKG_CONFIG_PATH=$(OUTPUT_DIR)/lib/pkgconfig ./configure --prefix=$(OUTPUT_DIR) --disable-dtrace --disable-selinux ac_cv_path_MSGFMT=/bin/true CPPFLAGS="-fPIC -I$(OUTPUT_DIR)/include" LDFLAGS="-L$(OUTPUT_DIR)/lib" --enable-static --disable-shared
How do I specify that in meson?
I will also need to build in Windows. Any gotchas there?
Thanks!
EDIT: I tried older versions of glib, going back to 2.62.0, and when I run meson _build I get the error meson.build:1:0: ERROR: Meson version is 0.47.2 but project requires >= 0.49.2. So that's probably a big part of the problem for question (1). This is running on CentOS 6 & 7, so I'll probably have to get and install a current meson package.
So the basic build doesn't work. Why?
You correctly figured this out in your edit: GLib 2.64 requires Meson 0.49.2, and it seems that Meson 0.47.2 is so old as to not be able to correctly parse GLib’s meson.build.
It looks from your build output that you’re trying to build GLib 2.65.0. Note that 2.65 is an unstable release series. Even minor versions of GLib (2.62.x, 2.64.x, etc.) are stable; odd ones are unstable. Using an unstable release is fine, as long as you know what you’ve signed up for: it may contain bugs, and new APIs introduced in that unstable series may change or be removed before the first stable release (in the case of 2.65.x, the corresponding first stable release will be 2.66.0).
For our purposes, we use the following configure command:
You’ll want something like:
meson --prefix "$(OUTPUT_DIR)" -Dselinux=disabled -Ddefault_library=static _build
You can see from the b_staticpic option's default value that -fPIC is the default for static libraries, so (I believe) it doesn't need to be explicitly specified.
There should be no need to disable dtrace support since it’s disabled by default. If you did need to disable it, you’d do that with -Ddtrace=false.
The custom -L and -I arguments should be covered by use of --prefix.
Overriding the msgfmt tool to disable internationalisation is not a supported way of building GLib and you’re on your own with that one.
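Putting the pieces above together, the configure line might translate to something like the following sketch (option names should be checked against the meson_options.txt of the GLib version you're actually building, as they vary between releases):

```shell
# Rough meson equivalent of the configure line above (a sketch, not
# verified against every GLib release).
PKG_CONFIG_PATH="$OUTPUT_DIR/lib/pkgconfig" \
  meson --prefix "$OUTPUT_DIR" \
        -Dselinux=disabled \
        -Ddefault_library=static \
        _build
ninja -C _build
ninja -C _build install
```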
There is some good documentation on the built-in options in the Meson manual.
I will also need to build in Windows. Any gotchas there?
That’s too broad a question to be answered on StackOverflow.

clang/llvm compile fatal error: 'cstdarg' file not found

I'm trying to convert a large gcc/makefile project to clang. I got it roughly working for x86, but now I'm trying to get cross-compilation working.
The way it currently works is that we use Linaro's 7.1.1 arm compiler alongside its companion sysroot directory for the base libraries/headers. I installed clang-6.0 and then the base clang (not sure if that mattered).
I used some commands I found to redirect clang to clang-6.0, and when I execute clang -v I get:
clang version 6.0.0-1ubuntu2~16.04.1 (tags/RELEASE_600/final)
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
....
Found candidate GCC installation: /usr/bin/../lib/gcc/i686-linux-gnu/9
....
Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/6.5.0
....
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/7.4.0
Candidate multilib: .;#m64
Selected multilib: .;#m64
It does not find the current compiler we use which is at
/usr/local/gcc-linaro-7.1.1-2017.08-i686_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++(also a directory for *x86_64*)
I only found references to setting --sysroot, but not to a specific compiler. Definitely still lost about the relationship between clang+llvm+other compilers. I even saw somewhere saying I needed to compile llvm before I could use it?
I very roughly made changes to our makefiles to get the following output; basically all I had to add was -target arm-linux-gnueabihf, and I reordered the mcpu/mfloat/marm/march flags so they came after -target in case it mattered:
clang --sysroot=/usr/local/sysroot-glibc-linaro-2.25-2017.08-arm-linux-gnueabihf -c -std=c++0x
-g -DDEBUG_ON -target arm-linux-gnueabihf -mcpu=cortex-a7 -mfloat-abi=hard -marm -march=armv7ve
-Wall -fexceptions -fdiagnostics-show-option -Werror .... -I/usr/local/gcc-linaro-7.1.1-2017.08-x86_64_arm-linux-gnueabihf/arm-linux-gnueabihf/include .... and many more
I think the problem probably lies with the change I made which is the actual 'clang' call that replaced
/usr/local/gcc-linaro-7.1.1-2017.08-i686_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ ....
I end up with:
fatal error: 'cstdarg' file not found
#include <cstdarg>
As I said before, I can already cross-compile with gcc, so I've already come across issues with standard libraries that require build-essential, g++-multilib, etc. So they're already installed.
I've looked around and really haven't found anything too useful; I'm on Linux Mint 18.3, and the closest things I found were issues people had on Mac and Windows.
So I came across some posts mentioning setting --gcc-toolchain=/your/choice/of/cross/compiler, but they also mention it not working. I discovered that if you combine this with the installation of llvm-6.0-dev (or maybe llvm-6.0-tools; tools installed dev, so I'm not 100% sure), it at least worked for me.
Any compiler, clang or gcc, needs to know where a header file is defined. The standard headers, standard libraries, C runtime, and libc are all packaged together for each target (e.g., arm64, x86) in a directory called a 'sysroot'. When we compile a program we need to pass the path to the sysroot so the compiler knows where to look for standard headers during compilation, and where to look for common libraries (libc, libstdc++, etc.) during linkage.
Normally, when we compile a program for the same machine, the compiler uses the standard headers available in /usr/include and libraries from /usr/lib. When cross-compiling, we should supply the sysroot as a compiler flag, e.g. gcc --sysroot="/path/to/arm64/sysroot/usr" test.cpp. The same goes for clang. Most often, pre-packaged cross compilers come with a script/binary that has the sysroot path embedded in it, e.g. aarch64-linux-gnu-gcc (https://packages.ubuntu.com/xenial/devel/gcc-aarch64-linux-gnu).
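As a concrete sketch, combining --target, --sysroot, and --gcc-toolchain in one clang invocation might look like the following (the paths are the ones from the question above; your toolchain layout will differ):

```shell
# Sketch: cross-compile for ARM with clang, borrowing the sysroot and
# standard C++ headers/libraries from a Linaro GCC toolchain.
clang++ --target=arm-linux-gnueabihf \
  --sysroot=/usr/local/sysroot-glibc-linaro-2.25-2017.08-arm-linux-gnueabihf \
  --gcc-toolchain=/usr/local/gcc-linaro-7.1.1-2017.08-x86_64_arm-linux-gnueabihf \
  -mcpu=cortex-a7 -mfloat-abi=hard -marm -march=armv7ve \
  -c test.cpp -o test.o
```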
... the closest things I found were issues people had on mac and windows.
On Mac, the clang compiler has a similar configuration to Linux, so the details you found there should be applicable to your setup.
More details on sysroot and cross-compilation:
https://elinux.org/images/1/15/Anatomy_of_Cross-Compilation_Toolchains.pdf

Compiling COBOL program on mac yosemite 10.10.2

When I compile my COBOL code:
$ cobc hello.cob
I'm getting an error:
clang: error: unknown argument: '-R/opt/local/lib'
Today I installed GnuCOBOL as root with:
$ port selfupdate
$ port install open-cobol
Yeah, this has to do with Apple aliasing gcc to clang, but clang isn't a drop-in replacement for gcc yet. So it breaks on a few things. There is no simple way to fix this. If you type gcc you get clang.
$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
Target: x86_64-apple-darwin12.5.0
Thread model: posix
I'm not going to list all the details here (and I know links are frowned upon here on SO), but the entire thread needs to be read to get to grips with this problem. (Scripts are involved that strip some arguments out.)
There is very little the GnuCOBOL compiler authors can do about this. The Mac clang actually defines __GNUC__ as well, so the compiler code that tests for gcc features is currently ineffective, with clang reporting itself as gcc. Under a real gcc, the run-path setting in the ELF output is necessary, so -R can't just be yanked out. I see this as slightly dirty pool on Apple's part, but it is their system, to wall off as they see fit.
http://sourceforge.net/p/open-cobol/discussion/help/thread/e1b4af35/
Changes to GnuCOBOL will try to work around the issue, but that may take a while to get out into the wild.
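One possible interim workaround, and this is an assumption on my part rather than a verified fix, is to point cobc at a real gcc instead of Apple's clang via the COB_CC environment variable that cobc honours, so the -R run-path flag is accepted. The MacPorts gcc path below is a hypothetical placeholder; substitute whatever gcc you actually have installed:

```shell
# Hypothetical workaround (untested): have cobc invoke a real gcc, which
# accepts the -R run-path flag, instead of Apple's clang.
# /opt/local/bin/gcc-mp-4.9 is a placeholder MacPorts gcc path.
export COB_CC=/opt/local/bin/gcc-mp-4.9
cobc -x hello.cob
./hello
```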

OpenCV static linking on 64 bit

I've written a simple application in OpenCV, and compiled it using the following command:
g++ -I ./include/opencv -Wall -o imageHash imageHash.h imageHash.cpp -lcv -lhighgui
What I'm trying to do next is the following:
use static linking, so I can run this application without needing to install OpenCV on the target machine;
compile the app into a CPU-independent form, so I can run it on both 32-bit and 64-bit machines.
How do I modify the compilation command to achieve this?
Thanks,
krisy
If you want it to run independently on 32- and 64-bit systems, compile in 32-bit mode. As for static linking, theoretically the way to do it is, when you are building with cmake, to uncheck BUILD_SHARED_LIBS under the build tab. The problem I faced is that this does not seem to work, so for right now you may be stuck with dynamic linking. To override the install on other systems, just put the shared libraries in the same directory as the exe.
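As a sketch, assuming you rebuilt OpenCV with BUILD_SHARED_LIBS off so that static archives actually exist, the two steps above might look like this (the -lcv/-lhighgui names are the old 1.x-era libraries from the question; a 32-bit build also needs the 32-bit dev packages installed):

```shell
# Step 1: rebuild OpenCV itself as static libraries (run in the OpenCV
# build directory).
cmake -DBUILD_SHARED_LIBS=OFF ..
make

# Step 2: link the app against the static archives; -m32 produces a
# 32-bit binary that also runs on 64-bit machines.
g++ -m32 -I ./include/opencv -Wall -o imageHash imageHash.cpp \
    -Wl,-Bstatic -lcv -lhighgui -Wl,-Bdynamic
```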