autoconf with -pthread - pthreads

Greetings. I am trying to create an autoconf configure script that automatically checks for which pthread option to use and, ideally, specifies -pthread when compiling with gcc.
It was my hope that AX_PTHREAD would work, but it doesn't seem to work on Mac OS X 10.6.
I'm using AX_PTHREAD from http://www.nongnu.org/autoconf-archive/ax_pthread.html
For reasons that I do not understand, it just doesn't use the -pthread option in configure scripts built on a Mac.
The problem seems to be that "none" compiles without error, and as a result the other flags in the ax_pthread_flags variable are never checked.
So I've moved the -pthread case before the "none" case and added this case to the case statement:
-pthread)
PTHREAD_CFLAGS="-pthread"
PTHREAD_LIBS="-pthread"
;;
This seems to work, but I am not sure if it will work with non-GCC compilers. And I'm not even sure if I should care.
Equally annoying is the fact that the AX_PTHREAD macro only updates CFLAGS, not CPPFLAGS.
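For reference, here is roughly how I consume the macro's output in configure.ac; this is a minimal sketch, and the CPPFLAGS line is my own workaround rather than something the macro does itself:
AX_PTHREAD([
    CC="$PTHREAD_CC"
    CFLAGS="$CFLAGS $PTHREAD_CFLAGS"
    CPPFLAGS="$CPPFLAGS $PTHREAD_CFLAGS"   # work around AX_PTHREAD not touching CPPFLAGS
    LIBS="$PTHREAD_LIBS $LIBS"
], [AC_MSG_ERROR([POSIX threads are required])])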
Is there a better way to test for the -pthread option using autoconf?

PostgreSQL has a hacked version of AX_PTHREAD that addresses some of these problems: http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/config/acx_pthread.m4. PostgreSQL builds on Mac OS X, so it might be worth a try.

Related

How to build clang with the memtag sanitiser enabled

I have spent a few hours trying to get the built-from-sources version of clang (v15) to work with the memtag sanitiser. For those of you who don't know what that is, it is simply a version of the address sanitiser that leverages the Memory Tagging features of ARM.
Anyway, while I can use it normally with the repository version of clang (v10), using the version built from sources just does not work.
Here is the command I use for both: clang main.c -S -march=armv8+memtag -fsanitize=memtag with clang which is either the repository-version or the built-from-sources version. Although the former works seamlessly, the latter does not.
I've tried to build llvm with different parameters, but none seemed to do the trick. Here's my current build configuration:
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra;lld;lldb;openmp;polly;pstl;compiler-rt" -DLLVM_TARGETS_TO_BUILD="AArch64" ../llvm
I wonder if there is some parameter I have to specify to build clang with this sanitiser enabled.
PS: using the -fsanitize=memtag flag does not give any error: with the built version of clang it simply does not insert the instrumentation code.
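In case it helps, the way I check whether the instrumentation is present is to look for memory-tagging instructions (irg/stg) in the generated assembly; a rough sketch, assuming main.c is the same test file as above:
# dump the assembly to stdout and grep for MTE tagging instructions
clang main.c -S -o - -march=armv8+memtag -fsanitize=memtag | grep -E 'irg|stg'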
If anybody is able to give me some insight I would really appreciate it. Thanks ;)

clang/llvm compile fatal error: 'cstdarg' file not found

I'm trying to convert a large gcc/makefile project to clang. I got it roughly working for x86, but now I'm trying to get cross-compilation working.
The way it currently works is that we use Linaro's 7.1.1 arm compiler alongside its companion sysroot directory for base libraries/headers. I installed clang-6.0 and then the base clang (not sure if that mattered).
I used some commands I found to redirect clang to clang-6.0, and when I execute 'clang -v' I get:
clang version 6.0.0-1ubuntu2~16.04.1 (tags/RELEASE_600/final)
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
....
Found candidate GCC installation: /usr/bin/../lib/gcc/i686-linux-gnu/9
....
Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/6.5.0
....
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/7.4.0
Candidate multilib: .;@m64
Selected multilib: .;@m64
It does not find the current compiler we use, which is at
/usr/local/gcc-linaro-7.1.1-2017.08-i686_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ (there is also a directory for *x86_64*).
I only found references to setting --sysroot, not to pointing at a specific compiler. I'm definitely still lost about the relationship between clang, llvm, and other compilers. I even saw somewhere that I needed to compile llvm before I could use it?
I very roughly made changes in our makefiles to get the following command; basically all I had to add was '-target arm-linux-gnueabihf', and I reordered the mcpu/mfloat/marm/march flags so they came after -target in case it mattered:
clang --sysroot=/usr/local/sysroot-glibc-linaro-2.25-2017.08-arm-linux-gnueabihf -c -std=c++0x
-g -DDEBUG_ON -target arm-linux-gnueabihf -mcpu=cortex-a7 -mfloat-abi=hard -marm -march=armv7ve
-Wall -fexceptions -fdiagnostics-show-option -Werror .... -I/usr/local/gcc-linaro-7.1.1-2017.08-x86_64_arm-linux-gnueabihf/arm-linux-gnueabihf/include .... and many more
I think the problem probably lies with the change I made, which is that the actual 'clang' call replaced
/usr/local/gcc-linaro-7.1.1-2017.08-i686_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ ....
I end up with:
fatal error: 'cstdarg' file not found
#include <cstdarg>
As I said before, I can already cross-compile with gcc, so I've already come across the issues with standard libraries that require 'build-essential', 'g++-multilib', etc. They're already installed.
I've looked around and really haven't found anything too useful; I'm on Linux Mint 18.3, and the closest things I found were issues people had on Mac and Windows.
I came across some posts mentioning setting --gcc-toolchain=/your/choice/of/cross/compiler, but they also mention it not working. I discovered that if you combine this with installing llvm-6.0-dev (or maybe llvm-6.0-tools; tools installed dev, so I'm not 100% sure which), it at least worked for me.
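For completeness, the kind of invocation that ended up working for me looks roughly like this (the paths are the Linaro ones from above, main.cpp stands in for one of our source files, and the exact flag combination is an approximation rather than a recipe):
clang++ -target arm-linux-gnueabihf \
    --gcc-toolchain=/usr/local/gcc-linaro-7.1.1-2017.08-i686_arm-linux-gnueabihf \
    --sysroot=/usr/local/sysroot-glibc-linaro-2.25-2017.08-arm-linux-gnueabihf \
    -mcpu=cortex-a7 -mfloat-abi=hard -marm -march=armv7ve \
    -c main.cpp -o main.o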
Any compiler, clang or gcc, needs to know where a header file is defined. The standard headers, standard libraries, C runtime, and libc are all packaged together for each target (e.g., arm64, x86) in a directory called a 'sysroot'. When we compile a program, we need to pass the sysroot path so the compiler knows where to look for standard headers during compilation and where to look for common libraries (libc, libstdc++, etc.) during linking.
Normally, when we compile a program for the same machine, the compiler uses the standard headers available in '/usr/include' and libraries from '/usr/lib'. When cross-compiling, we should supply the sysroot as a compiler flag, e.g. gcc --sysroot="/path/to/arm64/sysroot/usr" test.cpp. The same goes for clang. Most often, pre-packaged cross compilers come with a script/binary that has the sysroot path embedded into it, e.g. aarch64-linux-gnu-gcc (https://packages.ubuntu.com/xenial/devel/gcc-aarch64-linux-gnu).
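For clang the equivalent looks something like this (the target triple and sysroot path are placeholders for whatever your toolchain provides):
# tell clang both the target architecture and where its headers/libraries live
clang++ --target=aarch64-linux-gnu --sysroot=/path/to/arm64/sysroot test.cpp -o test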
... the closest things I found were issues people had on Mac and Windows.
On Mac, the clang compiler will have a similar configuration to Linux, so the details you found there should be largely applicable to your setup.
More details on sysroot and cross-compilation:
https://elinux.org/images/1/15/Anatomy_of_Cross-Compilation_Toolchains.pdf

Configure Mac native Clang with Macports paths

When compiling projects that make use of libraries installed via MacPorts (boost, opencv, etc) I need to pass clang the library and include file locations via the -I and -L arguments.
Is there any "official" way to direct the Apple native clang look in these locations by default.
I guess I could just make a bash script with something to the effect of
clang -I/opt/local/include -L/opt/local/lib "$@"
and call that instead of the compiler, but is there a cleaner way to point clang to these locations automatically?
I am not looking for an Xcode based fix, instead I would like to be able to compile from the command line without having to manually type the above arguments in each time.
Any suggestions?
I had a similar question answered on the MacPorts mailing list[0].
Export these environment variables.
export CPPFLAGS='-isystem/opt/local/include'
export LDFLAGS='-L/opt/local/lib'
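Note that CPPFLAGS and LDFLAGS are only picked up automatically by build systems such as make and autoconf-generated configure scripts; for ad-hoc command-line compiles you still have to reference them yourself, e.g. (the library name and source file are just placeholders):
# expand the exported flags explicitly when invoking clang by hand
clang++ $CPPFLAGS main.cpp $LDFLAGS -lboost_system -o main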
P.S. Hope you've not been waiting this long for an answer :)
[0] https://lists.macports.org/pipermail/macports-users/2017-July/043562.html

I want to know how to make a makefile for iOS "fat" library

I want to create a (non-Xcode) makefile to build a fat library (simulator + device(s)) that can be imported into an Xcode project, using a makefile that calls the basic command-line tools directly (not running Xcode from the command line, but the Mac gcc and its related utilities). This is for .m, .mm, .c, and .cpp source files.
Ideally I would find an example that works for a simple library (not one produced by a makefile generator that emits an almost unreadable makefile).
Anyway, does anyone know of such a thing, or of an appropriate mechanism for doing the same?
Also, the ability to extract the compiler flags from an Xcode project would be real handy :)
The purpose is I want to add a module to my cross platform libraries so I can integrate them into an iOS project.
Thanks!!
You can extract the compiler flags by viewing the build details or, more simply, by running xcodebuild from the command line.
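For example, something along these lines (run from the project directory) will dump the full compiler invocations to a log you can grep through; the configuration and SDK names are whatever your project defines:
# capture all build output, including each clang/gcc command line
xcodebuild -configuration Release -sdk iphoneos 2>&1 | tee build.log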
To create a fat binary, you can take advantage of the compiler toolchain's built-in support on the Mac OS X platform by passing multiple -arch arguments, like so:
clang -arch i386 -arch x86_64 -framework Foundation simple.m -o simple
Alternatively, you build the binary once for each desired architecture, then wrap all those binaries into a single fat binary using lipo. This is handy when working with ported Unix software; just change the build result directory each time, then smash them all together after building with lipo. Assuming you have simple-i386 and simple-x86_64, you would then do:
lipo simple-i386 simple-x86_64 -create -output simple
This would create a fat binary named simple containing simple-i386 and simple-x86_64.
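If you go the lipo route for a static library, a minimal sketch of the per-architecture loop might look like this (the source file name and architecture list are just examples):
# build one object file and static library per architecture, then merge them
for ARCH in i386 x86_64; do
    clang -arch "$ARCH" -c simple.m -o simple-"$ARCH".o
    ar rcs libsimple-"$ARCH".a simple-"$ARCH".o
done
lipo -create libsimple-i386.a libsimple-x86_64.a -output libsimple.a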
OK - I found this great HOWTO on building a fat library using Xcode that outlines the process and how to create the projects:
http://blog.boreal-kiss.net/2011/03/15/how-to-create-universal-static-libraries-on-xcode-4/
Being a newbie to Xcode and iOS development, I had to discover a few things.
You can view the actual command-line output of a build to see what the gcc flags are: View->Navigators->Log, then control-click on the messages list and choose "Expand All Transcripts" to see the stdout and stderr from the chosen build's output.
You can execute an "external build tool" with your .bashrc and .bash_profile environment settings by making the command and arguments a login shell: bash --login -c 'mybuildtool [my tools args] $(ACTION)'. This bypasses having to deal with the hard-to-maintain Mac OS X launchd settings, and it works for tools like ruby and rake as well as make.

What's the best way to compile Ruby from source on 64-bit RedHat Linux

On RedHat Enterprise Linux 5 the latest Ruby version available via RPM is 1.8.5. My Rails app requires 1.8.6 or above so I need to compile Ruby from source.
I have tried the following to build it and it seems to build ok, but then I'm seeing gcc compilation errors when trying to run a plug-in which requires RubyInline.
There seems to be a lack of decent documentation for building Ruby from source, suitable for running Rails apps.
Here's how I compiled Ruby:
./configure --prefix=/usr --with-openssl-include=/usr/include/openssl --with-openssl-lib=/usr/lib64/openssl/engines
make
sudo make install
I wonder whether there are specific compile flags I need to build this on a 64-bit system. The actual error I'm seeing is
error executing "gcc -shared -fPIC -g -O2 -I /usr/lib/ruby/1.8/x86_64-linux -I /usr/include -L/usr/lib -o \"/home/deploy/.ruby_inline/Inline_ImageScience_aa58.so\" \"/home/deploy/.ruby_inline/Inline_ImageScience_aa58.c\" -lfreeimage -lstdc++ ":
Any advice would be greatly appreciated.
The best way would probably be to just "steal" a Ruby 1.8.6 RPM from Fedora. The second best way would be to steal a Ruby 1.8.6 SRPM from Fedora and build it yourself.
However, there is one thing you could do: add a --disable-pthread flag to the configure line and remove --enable-pthread if it's there. --enable-pthread makes MRI significantly slower, and is only needed if you want to use Ruby/Tk and your system's Tk library was built with --enable-pthread.
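If you take that advice, the configure step from the question would become something like the sketch below (same prefix and OpenSSL paths as before):
# same configure invocation as in the question, with pthread support disabled
./configure --prefix=/usr --disable-pthread --with-openssl-include=/usr/include/openssl --with-openssl-lib=/usr/lib64/openssl/engines
make
sudo make install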
Ruby packages for Fedora (including SRPM)
Couldn't post as a comment on the correct answer so added here - editors feel free to tidy-up.
