Compiling C Source for iOS

I have some existing source code written in C that I want to build and include in my iOS project. The entire source package is very large and is built using existing Makefiles and GCC. It produces static libraries (.a files) that I would love to move over to my iOS project. However, the static libraries the Makefile produces are for x86 processors, which obviously won't work on iOS.
Is there a way I can switch GCC to build for ARMv7/ARM64 instead, without making changes to the existing source (in most cases)? I know there is the -march switch for GCC, and that ARM-specific GCC compilers can be downloaded, so I understand the general concept of building for a different architecture than the build machine.
To build for ARM on Mac OS, will I have to download a different GCC compiler, or is that capability built into the default GCC?
I'm sorry for the lack of understanding of basic concepts here; I'm primarily a Java and Objective-C developer, so building source for different architectures is a mostly foreign concept to me.

Whilst GCC supports a good many CPU architectures and platforms, a given build of it usually targets a single one. To compile for ARM you generally need a GCC cross-compiler targeted appropriately.
The default system compiler for Mac OS X and iOS, for all architectures, is clang, and has been for some time (the last version of GCC Apple shipped in its developer tools is creaking and obsolete, and definitely won't support ARMv8).
The usual way of getting clang is to install Xcode (free from the App Store). There's an option in the installer (and in the UI of Xcode) to install the command-line tools package. This installs symlinks to the compiler in /usr/bin, along with a bunch of other tools you might expect, such as make.
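As a quick sketch, the command-line tools can also be pulled in directly from a terminal (the exact prompts vary by Xcode version):
$ xcode-select --install    # prompts to install the command-line tools package
$ xcrun --find clang        # prints the path of the clang the active toolchain will use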
clang is (mostly) command-line compatible with gcc, and furthermore, you'll find that if you run gcc from the command-line on a Mac with dev-tools installed, you in fact get clang.
$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/c++/4.2.1
Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.0.0
Thread model: posix
clang on Mac OS X ships with back-ends for ARMv7, ARMv8 (ARM64), i686 and x86_64, and can be told to compile for any of these from the command line (see the documentation).
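As a sketch, assuming Xcode's iOS SDK is installed (the file names here are illustrative), a single source file can be cross-compiled like this:
$ xcrun --sdk iphoneos clang -arch arm64 -c foo.c -o foo-arm64.o
$ # or spell out the SDK path yourself:
$ clang -arch armv7 -isysroot "$(xcrun --sdk iphoneos --show-sdk-path)" -c foo.c -o foo-armv7.o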
Given the above, there's a fair chance your code will compile with minimal changes to compiler flags using the existing Makefile. You might also want to read the documentation for lipo, which allows you to produce multi-architecture binaries.
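For example (library names here are illustrative), per-architecture builds can be merged into one fat archive:
$ lipo -create libfoo-armv7.a libfoo-arm64.a -output libfoo.a
$ lipo -info libfoo.a    # lists the architectures contained in the result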

Related

runtime clang/clang++ with non-standard gcc install?

Is there a way to get clang/clang++ to use a gcc/g++ installation in a non-standard (i.e. not /usr) place?
I'm trying to get AMD's AOCC 4.0 compiler to work. They provide a pre-compiled version that you just unpack. The problem is that it seems to assume gcc is in /usr/lib/gcc/... In my case I'm on CentOS 7, so that's gcc 4.8.5. I want to use newer gcc installations in /sw/opt (managed with environment modules), but even if that gcc is in my path, clang only finds the 4.8.5 version in /usr. This is also a problem on a cluster I have that has no default gcc installed (but many gcc versions installed in /cluster/sw), and I can't get clang to see them.
When I want LLVM I usually just build from scratch and specify GCC_INSTALL_PREFIX, but that only seems to be useful at build time, and since AMD only provides executables I'm out of luck.
Ideally I'd like to get clang/clang++ to point to another gcc (en masse: includes, libs, etc.) or not depend on gcc at all.
AOCC seems to be based on 14.0.6 if that matters:
AMD clang version 14.0.6 (CLANG: AOCC_4.0.0-Build#434 2022_10_28) (based on LLVM Mirror.Version.14.0.6)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /sw/opt/aocc-compiler-4.0.0/bin
After more poking around I've discovered that there is a clang option "--gcc-toolchain" that seems to address this. Some clang documentation also lists an option "--gcc-install-dir", but neither the 14.0.6-based version of AOCC nor the 16.0.0-based version of oneAPI (2023.0) seems to recognize it. I don't see it in the output of "clang --help" either, so who knows.
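A minimal sketch of the option that did work here, assuming a module-managed GCC under /sw/opt (the exact path and version are illustrative):
$ clang++ --gcc-toolchain=/sw/opt/gcc-12.2.0 -std=c++17 main.cpp -o main    # headers and libs come from that GCC install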

Complete and isolated LLVM/musl toolchain

What I'm trying to achieve is to compile a GNU-independent, isolated LLVM toolchain using musl as the C library.
Recently LLVM 4.0 was released with lots of new cool features, including a production-ready LLD, so the linking step can also be handled by LLVM.
More or less the stack is:
clang
llvm
lld
compiler-rt
libcxx
libcxxabi
musl
Following this approach, it is actually possible to do so without much patching (apart from compiling musl), but sadly there is no good documentation about it.
Any suggestions?
There is an example of using Clang + Musl together to compile "Hello World" in C here: https://github.com/njlr/portable-cxx
It only requires wget, tar and make to be installed. Clang and Musl are downloaded as part of the build process.
The key is to disable the usual include paths using -nostdinc and then add the Musl ones using -isystem.
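A minimal sketch of that idea, assuming musl is installed under /opt/musl (the paths and hello.c are illustrative; depending on the musl build, crti.o/crtn.o may also be needed on the link line):
$ clang -nostdinc -isystem /opt/musl/include -c hello.c -o hello.o
$ clang -nostdlib -static /opt/musl/lib/crt1.o hello.o /opt/musl/lib/libc.a -o hello    # link against musl instead of glibc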
I was solving the same problem with my NGTC (Non-GNU toolchain) project. Please take a look at my build scripts and patches.
I used this toolchain to build a small Linux distro without any code from GNU project: nenuzhnix.
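For reference, with a current LLVM monorepo the components listed in the question can be enabled roughly as follows (a sketch only; the 4.0-era split tarballs and the NGTC scripts differ in detail, and musl still has to be built separately and supplied as the sysroot):
# illustrative CMake configuration run from the LLVM monorepo root
cmake -S llvm -B build -G Ninja \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  -DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi" \
  -DCLANG_DEFAULT_CXX_STDLIB=libc++ \
  -DCLANG_DEFAULT_RTLIB=compiler-rt \
  -DCLANG_DEFAULT_LINKER=lld
cmake --build build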

clang/clang++ not detecting standard header files like iostream.h and stdio.h

I ran clang++ -v testfile.cpp and found that many standard headers were missing from the directory C:\LLVM\lib\clang\3.9.0\include. I downloaded a pre-built binary of clang 3.9.0 for 32-bit Windows from this link.
Can someone please help me sort out this mess and explain why the standard libraries are missing in the pre-built version of clang? I've searched the web for hours for an answer and a solution to this problem but couldn't find one. Thanks in advance.
why the standard libraries are missing in the pre-built version of clang?
Your Windows binary download comprises only the binary build tools plus a handful of clang-specific headers, because you are supposed to use clang, on Windows, in lieu of another native compiler that provides your standard library. Similarly, if you install clang on Linux you'll build against the GCC standard library by default.
Your internet search seemingly failed to lead you to Installing clang++ to compile and link on Windows, which explains how to integrate clang with the mingw-w64 GCC standard library for 32- and/or 64-bit work, in the manner that clang for Windows expects and supports.
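As a sketch of that setup, assuming a MinGW-w64 installation under C:\mingw64 (the path and target triple are illustrative):
$ clang++ --target=x86_64-w64-mingw32 --sysroot=C:/mingw64 -std=c++14 hello.cpp -o hello.exe    # standard library comes from MinGW-w64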

Is Clang as (or more) portable than gcc for C++?

Suppose I have a C++ project, and I compile it with gcc and with clang. You can assume that the gcc-compiled version runs on another Linux machine. Will this imply (in normal circumstances) that the clang version will also run on that other Linux machine?
Clang binaries are as portable as gcc binaries, as long as you are linking to the same libraries and you aren't passing flags like -march=native to the compiler.
Clang has one huge advantage over gcc: it can deal with almost all libstdc++ versions,
while gcc is bound to its bundled version and often can't parse older ones.
So the following often happens in production environments:
Install an LTS distro (Ubuntu 12.04 for example)
Keep gcc, glibc and libstdc++ untouched
Install a recent clang version for C++11, etc
Build the release binaries with clang
So (in my specific example) those binaries will work on all distros with libstdc++ >= 4.6 and glibc >= 2.15.
This may be an interesting read for you.
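A quick way to verify that kind of claim on a finished binary (the binary name here is illustrative) is to inspect the versioned symbols it actually references:
$ objdump -T myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -1      # highest glibc symbol version required
$ objdump -T myapp | grep -o 'GLIBCXX_[0-9.]*' | sort -Vu | tail -1    # highest libstdc++ symbol version required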
If the program is a simple Hello World, it should work on the other machine when compiled with Clang.
But when the program is a real program with many lines and compilation units, and calls into many external libraries, anything is possible depending on the program itself and the compilation options:
hardware requirements (memory) being different (mainly depends on compilation options)
use of different (versions of) libraries between gcc and clang
UB giving expected results in one and not in the other
different usages of implementation-defined rules
use of gcc extensions not accepted by clang
For all of the above except the first two, it should run on other machines if it runs on one.
Linux programs depend on their build environment. If your glibc version or kernel is different, there is a good chance the executable will not run. You could use LLVM's intermediate representation (bitcode) though; it can, in principle, be interpreted on various operating systems.
The answer is: well, it depends.
The first hard requirement is the same CPU architecture. 64-bit is not enough of a qualifier; if you compile for x64 you won't have much success running it on 64-bit ARM.
The next big one is libraries. If you use any libraries in the program, the target system needs to have those libraries. This includes the kernel headers, so if you compile against e.g. a current kernel version, using the most cutting-edge features, you will have no joy running that program on a very old version of Linux.
The last one is hardware dependencies. If you create a program that e.g. requires 4 GB of RAM and then try to run it on a small embedded device with 256 MB of RAM, that won't work either.
To fit your changed question better: from my experience there shouldn't be much of a difference in portability between Clang and gcc. Googling didn't turn up anything either, so it should basically work. But always test things like that before you publish a binary to production.
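A couple of quick checks for the first two points on the target machine (the binary name here is illustrative):
$ file ./myprogram    # reports the architecture the binary was built for
$ ldd ./myprogram     # lists the shared libraries it expects to find on this system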

Cross compiling a library from Intel to ARM

I am using open source C++ library DCMTK from http://dicom.offis.de/dcmtk.php.en.
I have successfully compiled this library on Windows using the VC++ IDE, on Mac OS with Xcode, and for the iOS simulator.
But I am not able to compile this library for an iOS device, as that is an ARM-based architecture.
The DCMTK library compiled very well on the Intel architecture.
Now my problem statement is:
I need to compile this DCMTK C++ library for the ARM architecture by cross compilation.
I am using a 64-bit Ubuntu machine for the cross compilation.
I have installed binaries from the GNU ARM toolchain from http://www.gnuarm.com/
I am using the GCC toolchain 4.0 binaries (binutils-2.16.1, gcc-4.0.2 C/C++, newlib-1.14.0, insight-6.4, TAR BZ2 [65.5 MB]) for a 64-bit Ubuntu machine for the ARM cross compilation.
After Installing these binaries on Ubuntu I have set PATH environment variable to
PATH=$PATH:/gnu_arm/bin
For configuring the DCMTK C++ library I have run the following command on shell
CC=arm-elf-gcc CXX=arm-elf-g++ AR=arm-elf-ar RANLIB=arm-elf-ranlib ARFLAGS=cruv ./configure --prefix=$home_dicom --target=arm-elf --host=arm-elf --enable-std-includes --disable-threads
It creates a Makefile properly. Now I am trying to compile the code using the make command, but I am facing many compilation errors, such as:
1) I tried to compile the first dependent C++ library, ofstd.
I got errors for the DIR*, struct dirent, opendir() and closedir() calls.
It includes <dirent.h> for these calls, but I did not find any definitions for them in that header file.
2) When I compile another library, oflog, I get errors like:
error: ntohs was not declared in this scope
error: ntohl was not declared in this scope
error: htons was not declared in this scope
error: htonl was not declared in this scope
These are networking calls and are not declared in any of the header files from the GNU ARM tools.
I tried downloading the sources of the ARM binaries, extracting the tar files, and copying the missing header files into the GNU ARM installation on Ubuntu.
Some files then compile after changes to the copied headers, and some again give compilation errors. There is a loop of compilation errors for every file under the DCMTK library, as some of the standard header files are missing.
Please suggest whether there is any other toolchain available for ARM cross compilation on a 64-bit Ubuntu machine,
or any other good solution apart from this.
Thanks!!!
Amit
There are many areas for problems when it comes to cross compiling. There are three main flags for cross compiling: --host, --target, and --build. The --host flag names the machine the resulting binaries will run on. The --build flag names the system you are compiling on. The --target flag is only relevant when building tools that themselves produce code, such as when building your own gcc toolchain. So in your case you won't set the target flag, since you're not building a toolchain; the --host flag will be arm-elf, and the --build flag will be your amd64 machine.
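A sketch of that advice applied to the question's configure line (keeping the question's own prefix and feature switches; the build triple shown assumes a 64-bit Ubuntu machine):
$ CC=arm-elf-gcc CXX=arm-elf-g++ AR=arm-elf-ar RANLIB=arm-elf-ranlib ./configure --build=x86_64-pc-linux-gnu --host=arm-elf --prefix=$home_dicom --enable-std-includes --disable-threads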
Usually a cross compilation fails if there are inconsistencies between the regular C compiler and the cross compiler. I have compiled several libraries for the AVR32 with a toolchain generated by buildroot, but in some cases (the socat project, for example) it hasn't been possible.
Your host, your target and the CXX flags look OK. I think it is not necessary to set the AR flag (that is the idea of the host and target options).
On the other hand, here is an example for the expat library for the AVR32:
./configure --host=avr32-linux --prefix=/home/juan/builds/build_expat/ CC=avr32-linux-gcc
make; make install
I can recommend that you try to cross compile from an IA-32 architecture. I had several problems with that Ubuntu in the past.
