How can I compile the VM and run Erlang programs on the Intel Xeon Phi coprocessor?
Intel Xeon Phi is not a typical x86_64 architecture, so it's not possible to run the official Erlang VM binaries on it. The only way is to cross-compile: build the VM yourself on a different machine where Erlang is supported (the build machine), so that the result runs on the target system (the Phi in this case).
Xeon Phi also supports a slightly different instruction set than a typical x86_64 CPU, so you need to edit the code as well. If you just cross-compile the VM without touching the code, you will probably get an error like:
/tmp/iccvaLP3vas_.s: Assembler messages:
/tmp/iccvaLP3vas_.s:25794: Error: `mfence' is not supported on `k1om'
So first of all you need to add #ifndef guards around the memory fence instructions that are not supported on the Phi (mfence, lfence and sfence). This boils down to opening the erts/include/internal/i386/ethr_membar.h file and adding the following preprocessor directives:
#ifndef __MIC__
...
#endif
around the __asm__ statements in the functions ethr_mfence__, ethr_sfence__ and ethr_lfence__, e.g.
static __inline__ void
ethr_mfence__(void)
{
#if ETHR_SIZEOF_PTR == 4
    if (ETHR_X86_RUNTIME_CONF_HAVE_NO_SSE2__)
        ETHR_NO_SSE2_MEMORY_BARRIER__;
    else
#endif
#ifndef __MIC__  /* mfence is not supported on k1om (Xeon Phi) */
    __asm__ __volatile__ ("mfence\n\t" : : : "memory");
#endif
}
Now you can try to cross-compile it. First download the sources (in my case Erlang VM 17.5), then run:
$ cd otp_src_17.5
$ export ERL_TOP=`pwd`;
$ ./configure \
--host=k1om-unknown-linux-gnu \
--build=x86_64-pc-linux-gnu \
--without-termcap \
--without-javac \
--without-ssl \
--prefix=/path/to/my/new_installation \
CC=icc \
CFLAGS=-mmic \
LDFLAGS=-mmic \
DED_LD=icc \
DED_LDFLAGS="-mmic -shared -Wl,-Bsymbolic" \
DED_LD_FLAG_RUNTIME_LIBRARY_PATH="-Wl,-R"
$ make
$ make install
icc is the official Intel compiler and the -mmic flag is required when compiling for Xeon Phi. The --build flag is the system/architecture of the machine you compile on, and --host is the system the resulting binaries will run on (the Phi). In my case they had the following values, but if they are different for you, you can use the config.guess script, which automatically detects your OS/CPU architecture.
$ ./config.guess
x86_64-pc-linux-gnu
And that's it! Now you should be able to ssh to your Phi and run Erlang.
$ ssh my-phi-coprocessor
$ cd /path/to/my/new_installation/
$ export PATH=`pwd`/bin:$PATH;
$ erl -version
Erlang (SMP,ASYNC_THREADS) (BEAM) emulator version 6.4
Windows 10, Ryzen 3700x, gcc 8.1.0 (Posix, SEH-enabled)
I am building clang, llvm, and compiler-rt (the PGO tools) from source. I have downloaded the clang+llvm source for 14.0.0, and built it successfully with the following:
cmake -G "MinGW Makefiles" -DLLVM_ENABLE_PROJECTS="clang;lld;compiler-rt" -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD=X86 ../llvm
After this, I can invoke clang, and build projects, ranging from simple "Hello, World", to a much more complex one. I am able to make use of -flto, with the addition of -fuse-ld=lld.
However, if I attempt to do ANY sort of PGO building, I fail. For example, here is the minimal example to demonstrate the problem.
Andrew@Ryzen3700x MINGW64 ~/Desktop
$ cat test.c
#include <stdio.h>
int main() {
printf("Hello, World!");
return 0;
}
Andrew@Ryzen3700x MINGW64 ~/Desktop
$ clang -fprofile-instr-generate test.c
Andrew@Ryzen3700x MINGW64 ~/Desktop
$ ./a.exe
Hello, World!
Andrew@Ryzen3700x MINGW64 ~/Desktop
$ llvm-profdata merge -output=test.profdata default.profraw
warning: default.profraw: malformed instrumentation profile data
error: no profile can be merged
Andrew@Ryzen3700x MINGW64 ~/Desktop
$ llvm-profdata show default.profraw
error: default.profraw: malformed instrumentation profile data: function name is empty
I am aware of many answers to this question, and none of them seem to apply. My .profraw file is not empty.
I will note that when I installed LLVM/Clang directly (not building on my own), the PGO portions DID work. However, after many hours I could not resolve linking issues regarding -flto.
I am new to EDK2.
For porting edk2 firmware to a new ARM64 platform, it would be good to first get a minimal edk2 port which can at least run the UEFI Shell; improvements can then be added gradually on top of that.
It seems that the first step is rather steep, though, e.g. how do you determine a minimal set of "items" in the .dsc and .fdf files for a platform? In my case, I would like to build the .fd for my platform and treat it as the BL33 of TF-A; effectively I would like to build an edk2 firmware to replace u-boot.
It seems that such a guide is hard to find on the web. I found an old version of edk2 which contains some instructions, but apparently they are obsolete (they no longer exist in the latest master branch, but can be found in UDK branches such as UDK2014), and I am not sure why those documents were removed from the master branch.
Currently I can build the .fd for FVP (edk2-platforms/Platform/ARM/VExpressPkg/ArmVExpress-FVP-AArch64.dsc), and it seems that the build output FVP_AARCH64_EFI.fd is supposed to be treated as BL33. Theoretically this could be a prototype for my new ARM64 platform, but to me it's too complex to start with: the firmware is about 2.5 MiB in size (compared to ~500 KiB for u-boot), so I guess it's far from a "minimum" version, and it's hard to figure out which features to remove (and how).
I am wondering if there is a detailed guide on this topic...
After 1 month of trial and error, today I managed to bring my ARM64 platform up into a UEFI Shell environment. I treat it as my 1st milestone on the EDK2 journey. Below I will try to summarize the steps I took so far, as a tentative answer to my own question above. Guidance/corrections/comments are welcome.
Get familiar with UEFI/PI spec and EDK2 implementation by reading books/specs/articles. Well, UEFI/PI specs are thousands of pages long...how to start? My main reading list is:
"Beyond Bios--Developing with the Unified Extensible Firmware Interface", 3rd ed, by Vincent Zimmer, et al. As the authors explained, the book is a kind of high level summary of the thousands-paged specs. And I find that the book is well organized for a new comer to get familiar with various UEFI related concepts. The main purpose of the 1st read (before playing with edk2 code base) is to get familiar with concepts and architectural ideas, not the details yet. Related sections need to be consulted later when reading EDK2 implementations.
EDK2 specs, including:
EDKII User Manual
EDKII Build Specification
EDKII DSC/FDF/DEC/INF File Specification
Various articles on the web...
Get a reference platform which can correctly boot an FD image built from the latest EDK2 source, and play with the boot manager and Shell environment a bit. In my case, I chose the RPi4B. For me, this is very important, as the reference platform serves as a handrail during the whole process: whenever I encounter bugs or have doubts, I check the source/log of the reference platform. This solves most of the problems I encountered. Btw, always generate a "build log" and a "build report" for both the reference platform and the target platform, as the two files contain very detailed information for comparison and checking. Consult the EDK2 build spec on how to generate these two files during the build.
I use the following script to build for RPi4B platform:
#!/bin/bash
# https://github.com/tianocore/edk2-platforms#how-to-build-linux-environment
export WORKSPACE=/home/bruin/work/tianocore
export PACKAGES_PATH=$WORKSPACE/edk2:$WORKSPACE/edk2-platforms:$WORKSPACE/edk2-non-osi
pushd $WORKSPACE
rm -rf ./Build/RPi4
source edk2/edksetup.sh
echo "Building BaseTools..."
make -C edk2/BaseTools all
#sudo apt install acpica-tools # iasl
# pip install antlr4-python3-runtime # -Y EXECUTION_ORDER
echo "Building firmware for Pi4B..."
GCC5_AARCH64_PREFIX=aarch64-none-linux-gnu- build \
-n 4 \
-a AARCH64 \
-p Platform/RaspberryPi/RPi4/RPi4.dsc \
-t GCC5 \
-b NOOPT \
-v -d 9 -j RPi4-build.log \
-y RPi4-build-report.txt \
-Y PCD \
-Y LIBRARY \
-Y DEPEX \
-Y HASH \
-Y BUILD_FLAGS \
-Y FLASH \
-Y FIXED_ADDRESS \
-Y EXECUTION_ORDER \
all
For how to use the build result RPI_EFI.fd on the RPi4B, consult the following:
edk2-platforms/Platform/RaspberryPi/RPi4/Readme.md
the readme.md inside https://github.com/pftf/RPi4/releases/download/v1.17/RPi4_UEFI_Firmware_v1.32.zip. Btw, I needed to replace the original start4.elf and fixup4.dat with the ones in the zip file (see the sketch after the error output below), otherwise the boot of the RPi4 fails, complaining something like:
RpiFirmwareGetClockRate: Get Clock Rate return: ClockRate=0 ClockId=C
ASSERT [ArasanMMCHost] /home/bruin/work/tianocore/edk2-platforms/Platform/RaspberryPi/
Drivers/ArasanMmcHostDxe/ArasanMmcHostDxe.c(263): BaseFrequency != 0
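To put these pieces in place, the firmware files simply go on the FAT boot partition of the SD card. A minimal sketch of the copy step, assuming the boot partition is mounted at /mnt/boot, the zip above was unpacked into rpi4-uefi/, and the build script above was used (so the .fd should land under Build/RPi4/NOOPT_GCC5/FV/); adjust the paths to your layout:
# Copy the freshly built firmware plus start4.elf/fixup4.dat from the prebuilt zip
# onto the SD card's FAT boot partition (mounted at /mnt/boot here).
cp Build/RPi4/NOOPT_GCC5/FV/RPI_EFI.fd /mnt/boot/
cp rpi4-uefi/start4.elf rpi4-uefi/fixup4.dat /mnt/boot/
sync && umount /mnt/boot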
It's worth analyzing the RPI_EFI.fd content to some extent using UEFI utilities. I mainly use the GUI version of UEFITool (sudo apt install uefitool uefitool-cli); other tools are also available. The anatomy of RPI_EFI.fd helps when reading the EDK2 build specs to check your understanding of the concepts.
One special aspect of RPI_EFI.fd is that the first 128K is the bl31.bin binary from ATF. I guess this is due to the special boot configuration method of the RPi. For my platform, I don't need that kind of packaging; I only need to build the UEFI image MY.fd, which is treated as the BL33 image and packaged into fip.bin together with the BL2 and BL31 images by the ATF build script.
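For reference, packaging the BL33 image is then just the standard TF-A build invocation, roughly like this (PLAT=myplat and the path to MY.fd are placeholders; a real platform needs its own PLAT value and likely extra build options):
# Build TF-A and bundle BL2/BL31 together with my UEFI image (as BL33) into fip.bin.
make CROSS_COMPILE=aarch64-none-linux-gnu- \
     PLAT=myplat \
     BL33=/path/to/MY.fd \
     all fip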
Another aspect to notice is the "reset vector" at the beginning of the .fd file. This relates to the entry point of the UEFI image (and the entry point of each EDK2 module), as well as to interpreting the BL instruction for AArch64. Basically, it can be summarized as below:
The first [Components] in RPI_EFI.fd is ArmPlatformPkg/PrePi/PeiUniCore.inf, which is of MODULE_TYPE = SEC.
What's this component: this is the first (and only) SEC (Security) module on the RPi4. What do the names PrePi and Pei imply?
... the PI spec is not tied to edk2 PEIMs, and I don't see where EDKII PEI modules are currently the only "acknowledged" silicon init environment. The edk2 tree itself seems to contain platforms that don't use the edk2 PEI module set at all, but (IIRC) jump from SEC to DXE. I believe "ArmPlatformPkg/PrePi" and "ArmVirtPkg/PrePi" are related to this.
--- https://listman.redhat.com/archives/edk2-devel-archive/2020-November/msg00021.html
Its entry point: all UEFI components have the same entry point (_ModuleEntryPoint).
By "component", it means either a UEFI driver and UEFI app, both are PE32 executables, usually with suffix .efi.
The .efi files are converted from the ELF executables (.dll) by the GenFw tool, which modifies the file headers.
To verify that "all components' entry point is _ModuleEntryPoint":
Check the .dll generating command line in build report (build -y <BUILD_REPORT_FILE>), we have two flags "aarch64-none-linux-gnu-gcc" -o xxx.dll -u _ModuleEntryPoint -Wl,-e,_ModuleEntryPoint ...:
-u: gcc --help -v | grep "undefined SYMBOL" gives -u SYMBOL --undefined SYMBOL: Start with undefined reference to SYMBOL.
-Wl,-e: ld --help | grep "entry" gives -e ADDRESS, --entry ADDRESS Set start address.
Check all .dll files that Entry point address == _ModuleEntryPoint: find . -type f -name "*.dll" -exec sh -c "readelf -a {} |grep -E 'Entry point address|_ModuleEntryPoint'" \;
Its entry point is the entry point of the whole UEFI FD image (i.e., from bl33_base_addr we jump to this _ModuleEntryPoint):
Topology of the UEFI Firmware File
A UEFI Firmware File (actually a UEFI Firmware Device - FD file) is a collection of UEFI binaries encapsulated into a single image. The format of this image is defined by the Platform Initialization Specification Volume 3. A Vector Table is located at the base of this file. A 'BL' branch instruction at the base of the firmware (location of the Reset Entry into the Vector Table) will jump to the first 'SEC' module of the UEFI Firmware Image.
--- https://github.com/lzeng14/tianocore/wiki/ArmPkg-Debugging
To verify the statements above:
Disassemble the reset vector (i.e., the 1st word) of the generated .FD (we got offset=0x360):
$ xxd -l 4 -e TEST.fd <== dump 4 bytes in little endian
00000000: 140000d8 <== BL {PC}+(0xd8<<2); offset=0x360
Check the Entry point in .dll (we got offset=0x240):
$ aarch64-none-elf-objdump -t ArmPlatformPrePiUniCore.dll|grep _ModuleEntryPoint
0000000000000240 g F .text 0000000000000000 _ModuleEntryPoint
$ readelf -h ArmPlatformPrePiUniCore.dll|grep Entry
Entry point address: 0x240
Compare the contents of the two files at the different offsets (we got identical content):
$ xxd -s 0x360 -l 64 TEST.fd <== skip 0x360 bytes, dump 64 bytes
00000360: 901e 0094 050a 0094 ea03 00aa a1cd 0a58 ...............X
00000370: 0200 e0d2 2200 c0f2 0240 a0f2 0200 80f2 ...."....#......
00000380: c303 a0d2 e3ff 9ff2 6304 00d1 6300 028b ........c...c...
00000390: 0400 a1d2 0400 80f2 2000 03eb 8400 0054 ........ ......T
$ xxd -s 0x240 -l 64 ArmPlatformPrePiUniCore.dll <== skip 0x240 bytes
00000240: 901e 0094 050a 0094 ea03 00aa a1cd 0a58 ...............X
00000250: 0200 e0d2 2200 c0f2 0240 a0f2 0200 80f2 ...."....#......
00000260: c303 a0d2 e3ff 9ff2 6304 00d1 6300 028b ........c...c...
00000270: 0400 a1d2 0400 80f2 2000 03eb 8400 0054 ........ ......T
Prepare an empty pkg, and make it build OK. The main purpose is to do some exercise with the EDK2 build system, and to use the empty pkg as the starting point for the new platform.
Make a copy of RaspberryPi.dec, change all gRaspberry to gMyPlatform.
Make a copy of RPi4.dsc and RPi4.fdf, and comment out all stuff in DSC and FDF file.
Replace all GUIDs in the DSC/FDF/DEC files, generating new ones using an online GUID generator (or uuidgen; see the sketch after this list).
Note that PCDs are declared in DEC files, and DEC files are referred to by modules (INF files). As the empty package contains no modules, no PCD definitions will be available in the FDF. So for a successful build of the empty package, we need to comment out all PCD references in the FDF.
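As a rough sketch, the copy-and-rename part of this step looks like the following (the RaspberryPi paths are where the DEC/DSC/FDF live in my edk2-platforms checkout, so double-check them in yours; uuidgen works just as well as an online generator):
# Start the new package as a stripped-down copy of the RPi4 one.
cd edk2-platforms/Platform
mkdir -p MyCorp/MyPlatform
cp RaspberryPi/RaspberryPi.dec MyCorp/MyPlatform/MyPlatform.dec
cp RaspberryPi/RPi4/RPi4.dsc   MyCorp/MyPlatform/MyPlatform.dsc
cp RaspberryPi/RPi4/RPi4.fdf   MyCorp/MyPlatform/MyPlatform.fdf
# Rename the token space prefix and generate fresh GUID values.
sed -i 's/gRaspberry/gMyPlatform/g' MyCorp/MyPlatform/MyPlatform.*
uuidgen   # run once per GUID and paste the output over the old value in the DEC/DSC/FDF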
The NOOPT build command for MyPlatform is as below:
#!/bin/bash
export WORKSPACE=/home/bruin/work/tianocore
export PACKAGES_PATH=$WORKSPACE/edk2:$WORKSPACE/edk2-platforms:$WORKSPACE/edk2-non-osi
pushd $WORKSPACE
source edk2/edksetup.sh
echo "Building BaseTools..."
make -C edk2/BaseTools all
echo "Building UEFI firmware for MyPlatform..."
GCC5_AARCH64_PREFIX=aarch64-none-linux-gnu- build \
-n 4 \
-a AARCH64 \
-p Platform/MyCorp/MyPlatform/MyPlatform.dsc \
-t GCC5 \
-b NOOPT \
-v -d 9 -j MyPlatform-build.log \
-y MyPlatform-build-report.txt \
-Y EXECUTION_ORDER \
-Y PCD \
-Y LIBRARY \
-Y DEPEX \
-Y HASH \
-Y BUILD_FLAGS \
-Y FLASH \
-Y FIXED_ADDRESS \
all
popd
Add the 1st component, ArmPlatformPrePiUniCore. This component prepares the HOBs for the DXE phase. The main purpose is to get the serial port working and the memory config correct. Another purpose of this step is to become familiar with the steps for adding a component/module/lib. Below is a brief summary of the steps:
Uncomment the module's INF in both the DSC ([Components] section) and the FDF ([FV.FVMAIN_COMPACT]).
Rebuild the pkg, and resolve all Instance of library class [xxxLib] is not found errors reported, by updating the [LibraryClasses] sections of the DSC (a grep for locating candidate lib-instances is shown after this list).
This step repeats dozens of times.
Some lib-classes have multiple lib-instances; make sure to choose the appropriate lib-instance (refer to the build report of RPi4).
If you encounter ModuleEntryPoint.iiii:31: Error: immediate out of range: enable gArmTokenSpaceGuid.PcdFdBaseAddress and gArmTokenSpaceGuid.PcdFdSize in the FDF.
If you encounter undefined reference to _gPcd_BinaryPatch_PcdSerialClockRate: set PcdSerialClockRate in the [PcdsPatchableInModule] section of the DSC. FIXME: why? ref.
Check the PCDs listed in the build log: inspect any abnormal PCD values, and supply correct ones.
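When hunting for a lib-instance to satisfy a missing library class (see the note above), a quick grep over the module INF files saves a lot of time, e.g. for SerialPortLib:
# List candidate instances of a library class by searching the
# LIBRARY_CLASS lines of all INF files in the workspace.
grep -rn --include="*.inf" "LIBRARY_CLASS.*SerialPortLib" edk2 edk2-platforms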
Customize platform-specific drivers or libraries.
SerialPortLib: locate the lib-class header file (MdePkg/Include/Library/SerialPortLib.h) with find edk2 -type f -name "*.dec" -exec grep -Hn SerialPortLib {} \;. The following functions are required:
SerialPortInitialize()
SerialPortWrite()
SerialPortRead()
SerialPortPoll()
SerialPortSetControl(): RETURN_UNSUPPORTED
SerialPortGetControl(): RETURN_UNSUPPORTED
SerialPortSetAttributes(): RETURN_UNSUPPORTED
ArmPlatformLib: interface header at Include/Library/ArmPlatformLib.h. The following functions are required:
ArmPlatformGetCorePosition(): returns the cpu index within the cluster given the MPIDR value. This function is used in _ModuleEntryPoint for setting up the stacks of the secondary cores. Assuming one cluster for now.
ArmPlatformIsPrimaryCore()
ArmPlatformGetPrimaryCoreMpId()
ArmPlatformGetBootMode()
ArmPlatformPeiBootAction()
ArmPlatformInitialize()
ArmPlatformGetVirtualMemoryMap()
ArmPlatformGetPlatformPpiList()
etc...
Uncomment more modules in the DSC/FDF, module by module... For drivers/libs which are RPi platform specific, we can:
either search edk2/edk2-platforms for similar driver or lib instances, or
copy the RPi4 implementation and comment out most of the content, make the pkg build successfully first, and then fix the bugs.
Debugging: my current main debugging method is adding "printf()"s, i.e., the edk2 macro DEBUG((DEBUG_INFO, ...)). One needs to set gEfiMdePkgTokenSpaceGuid.PcdDebugPrintErrorLevel to an appropriate value to see more debug output.
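When in doubt about which value to use, I just looked at what the reference platform sets and compared it against my own build; a small sketch (the grep targets are simply where I happened to look):
# See how the RPi4 platform configures the debug print level,
# then check what my own build actually ended up with.
grep -rn PcdDebugPrintErrorLevel edk2-platforms/Platform/RaspberryPi
grep -n PcdDebugPrintErrorLevel MyPlatform-build-report.txt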
I am trying to build LLVM compilers so that I can enable OpenMP on the Apple M1.
I am using the LLVM development tree (since I saw some OpenMP runtime support for this go into it recently).
I have ended up with this script to invoke cmake:
# Xcode, Ninja
BUILD_SYSTEM=Ninja
BUILD_TAG=Ninja
cmake ../llvm \
-G$BUILD_SYSTEM -B ${BUILD_TAG}_build \
-DCMAKE_OSX_ARCHITECTURES='arm64' \
-DCMAKE_C_COMPILER=`which clang` \
-DCMAKE_CXX_COMPILER=`which clang++` \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_BUILD_WITH_INSTALL_RPATH=1 \
-DCMAKE_INSTALL_PREFIX=$HOME/software/clang-12.0.0/arm64 \
-DLLVM_ENABLE_WERROR=FALSE \
-DLLVM_TARGETS_TO_BUILD='AArch64' \
-DLLVM_ENABLE_PROJECTS='clang;openmp;polly' \
-DLLVM_DEFAULT_TARGET_TRIPLE='aarch64-apple-darwin20.1.0'
The compilers used here are
$ /usr/bin/clang --version
Apple clang version 12.0.0 (clang-1200.0.32.27)
Target: arm64-apple-darwin20.1.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
ninja can then successfully build clang, clang++ and the OpenMP runtime and install them (as simple Arm64 images targeting Arm64).
$ file ~/software/clang-12.0.0/arm64/bin/clang
/Users/jcownie/software/clang-12.0.0/arm64/bin/clang: Mach-O 64-bit executable arm64
$ ~/software/clang-12.0.0/arm64/bin/clang --version
clang version 12.0.0 (https://github.com/llvm/llvm-project.git 879c15e890b4d25d28ea904e92497f091f796019)
Target: aarch64-apple-darwin20.1.0
Thread model: posix
InstalledDir: /Users/jcownie/software/clang-12.0.0/arm64/bin
Which all looks sane, except that when I try to compile anything with them, they are missing the include path needed to find the system headers.
$ ~/software/clang-12.0.0/arm64/bin/clang hello.c
hello.c:1:10: fatal error: 'stdio.h' file not found
#include <stdio.h>
^~~~~~~~~
1 error generated.
So, after all that,
Does anyone know how to fix that include path problem?
Does anyone know how to configure and build a fat binary for the compilers (and libraries) so that the x86_64 embedded compiler targets x86_64 and the aarch64 binary aarch64? (This is what the Xcode clang and clang++ do...)
My attempt at this ended up with a compiler fat binary where both architectures targeted x86_64 :-(
Thanks
You can set -DDEFAULT_SYSROOT=/path/to/MacOSX11.1.sdk at build time or do export SDKROOT=/path/to/MacOSX11.1.sdk at runtime.
You need to compile with clang -arch arm64 -arch x86_64 to get a fat binary out of clang. You need to do this for Apple clang as well.
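For example, a quick sketch of both suggestions (the install path is the one from the question; the universal-binary step assumes a clang that was built with both the AArch64 and X86 backends, e.g. Apple's):
# Point the locally built clang at the macOS SDK so it can find system headers.
export SDKROOT="$(xcrun --show-sdk-path)"
~/software/clang-12.0.0/arm64/bin/clang hello.c -o hello_arm64

# A fat (universal) binary needs both backends; with Apple clang:
clang -arch arm64 -arch x86_64 hello.c -o hello_universal
lipo -info hello_universal   # should list both x86_64 and arm64 slices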
UPDATED 8 Feb 2021
Homebrew now supports the M1 based Arm machines, so using that is a better answer than the one below.
The info below is potentially still useful if you want to do this on your own, but using brew is likely to be much simpler.
Pre-brew answer
I haven't found a clean solution, but in case it helps anyone else, I do have a horrible hack.
The full recipe, then, is to configure with this script, then build and install.
# Xcode, Ninja
BUILD_SYSTEM=Ninja
BUILD_TAG=ninja
INSTALLDIR=$HOME/software/clang-12.0.0/arm64
cmake ../llvm \
-G$BUILD_SYSTEM -B ${BUILD_TAG}_build \
-DCMAKE_OSX_ARCHITECTURES='arm64' \
-DCMAKE_C_COMPILER=`which clang` \
-DCMAKE_CXX_COMPILER=`which clang++` \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=$INSTALLDIR \
-DLLVM_LOCAL_RPATH=$INSTALLDIR/lib \
-DLLVM_ENABLE_WERROR=FALSE \
-DLLVM_TARGETS_TO_BUILD='AArch64' \
-DLLVM_DEFAULT_TARGET_TRIPLE='aarch64-apple-darwin20.1.0' \
-DDEFAULT_SYSROOT="$(xcrun --show-sdk-path)" \
-DLLVM_ENABLE_PROJECTS='clang;openmp;polly;clang-tools-extra;libcxx;libcxxabi' \
# -DLLVM_ENABLE_PROJECTS='clang;openmp;polly'
That gives a compiler that finds the right headers, but won't link successfully if OpenMP is used, because it doesn't pass on any useful -L path or add the necessary rpath.
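Concretely, what is missing at link time is something like this (a sketch using the install prefix from above; omp_hello.c is just a stand-in for any OpenMP test program):
# Link an OpenMP program against the freshly built libomp, adding the -L path
# and an rpath so the runtime is also found at load time.
~/software/clang-12.0.0/arm64/bin/clang -fopenmp omp_hello.c \
    -L$HOME/software/clang-12.0.0/arm64/lib \
    -Wl,-rpath,$HOME/software/clang-12.0.0/arm64/lib \
    -o omp_hello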
To overcome that I created a small shell script that sits in my ~/bin, at the front of my $PATH, which adds those extra linker flags.
#!/bin/bash
#
# A truly awful hack, but it seems necessary.
# Install this with execute permissions as clang and clang++ in
# a directory early in your path, so that it is executed when clang or
# clang++ is needed.
#
# For brew...
INSTALLDIR=/usr/local/opt/llvm
# For a local build.
INSTALLDIR=${HOME}/software/clang-12.0.0/arm64/
# Find out the name of this file, and then invoke the same file in the
# compiler installation, adding the necessary linker directives
CMD=`echo $0 | sed "s/\/.*\///"`
${INSTALLDIR}/bin/${CMD} -L${INSTALLDIR}/lib -Wl,-rpath,${INSTALLDIR}/lib "$@"   # "$@" keeps quoted arguments intact
I am not recommending this particularly; there should clearly be a better way to make it work, but it'll do for now, and lets me get back to using the compiler rather than building it!
I was able to build with -DDEFAULT_SYSROOT="$(xcrun --show-sdk-path)" -DCMAKE_INSTALL_PREFIX=/Users/foo/lokal/ and install into the lokal/bin and lokal/lib paths. Once that is done you can use LD_LIBRARY_PATH=/Users/foo/lokal/lib and all the libraries should be found without mucking with anything else rpath-related.
I have a tough requirement: compile Coreutils with llvm for other architectures: arm/aarch64/mips/mips32/ppc/ppc32...
I have installed all the gcc cross tools such as mips-linux-gnu and powerpc64-linux-gnu, and if I have a simple C program like test.c:
#include <stdio.h>

int main() {
    printf("hello!");
    return 0;
}
I can compile it for the target arch, e.g.
clang --target=mips64-linux-gnuabi64 test.c -o test-mips64
➜ tests file test-mips64
test-mips64: ELF 64-bit MSB executable, MIPS, MIPS64 rel2 version 1 (SYSV), dynamically linked, interpreter /lib64/ld.so.1, BuildID[sha1]=7b33d55a0d08e6cd18d966341590dc351e346a78, for GNU/Linux 3.2.0, not stripped
I tried the same approach to compile Coreutils, setting:
export CC=clang
export CXX=clang++
CFLAGS = "--target=mips64-linux-gnuabi64"
./configure --host=mips64-linux-gnuabi64
However, every time I get errors in configure or make...
How should I set up the configure step? Can I easily compile Coreutils with llvm for other archs?
It's a bit tricky to get the command-line options right for cross-compiling. I got it to work with the commands below, assuming you're working on a Debian-based system (like Debian or Ubuntu). Here are the steps.
Install gcc-mips64-linux-gnuabi64 and gcc-powerpc64-linux-gnu.
Choose the correct arguments for CFLAGS
-B/usr/mips64-linux-gnuabi64/bin/ to indicate we want to use the linker ld within that directory. Do the same for powerpc.
--target=mips64-linux-gnuabi64 to indicate what our target for compilation is. Do the same for powerpc.
-I/usr/mips64-linux-gnuabi64/include to include header files. Do the same for powerpc.
Use ./configure --host=mips64-linux-gnuabi to configure for mips64 and ./configure --host=powerpc64-linux-gnueabi to configure for powerpc64.
Here are the commands to compile for mips64:
make clean
CFLAGS="-B/usr/mips64-linux-gnuabi64/bin/ --target=mips64-linux-gnuabi64 -I/usr/mips64-linux-gnuabi64/include" \
./configure --host=mips64-linux-gnuabi
make
And the commands to compile for powerpc64:
make clean
CFLAGS="-B/usr/powerpc64-linux-gnu/bin/ --target=powerpc64-linux-gnueabi -I/usr/powerpc64-linux-gnu/include" \
./configure --host=powerpc64-linux-gnueabi
make
Here is the output of file ./src/ls to demonstrate that it is a powerpc64 executable:
$ file ./src/ls
./src/ls: ELF 64-bit MSB executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), dynamically linked, interpreter /lib64/ld64.so.1, for GNU/Linux 3.2.0, BuildID[sha1]=97fe33981ca0112160f44a6fb678d6dc1b462114, not stripped
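If you want to go one step beyond file and actually run the cross-compiled binaries, qemu user-mode emulation works; a sketch, assuming the Debian qemu-user package and pointing -L at the same cross sysroots used for the headers above:
# Run the mips64 and powerpc64 builds of ls under qemu user-mode emulation.
apt-get install --yes qemu-user
qemu-mips64 -L /usr/mips64-linux-gnuabi64 ./src/ls
qemu-ppc64 -L /usr/powerpc64-linux-gnu ./src/ls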
Below is a Dockerfile that can be used to reproducibly cross-compile coreutils for mips64 and powerpc64.
# Cross-compile GNU coreutils for mips64 and powerpc64 using clang.
# With help from https://medium.com/@wolfv/cross-compiling-arm-on-travis-using-clang-and-qemu-2b9702d7c6f3
FROM debian:buster
# Install compile-time dependencies.
RUN apt-get update \
&& apt-get install --yes \
clang \
curl \
gcc-mips64-linux-gnuabi64 \
gcc-powerpc64-linux-gnu \
make \
perl \
&& rm -rf /var/lib/apt/lists/*
# Download source code for release.
WORKDIR /tmp/coreutils
RUN curl -fsSL https://ftp.gnu.org/gnu/coreutils/coreutils-8.32.tar.xz \
| tar xJ --strip-components 1
# Compile and install for mips64.
RUN CFLAGS="-B/usr/mips64-linux-gnuabi64/bin/ --target=mips64-linux-gnuabi64 -I/usr/mips64-linux-gnuabi64/include" \
./configure --host=mips64-linux-gnuabi --prefix=/opt/coreutils-mips \
&& make \
&& make install
# Compile and install for powerpc64.
RUN make clean \
&& CFLAGS="-B/usr/powerpc64-linux-gnu/bin/ --target=powerpc64-linux-gnueabi -I/usr/powerpc64-linux-gnu/include" \
./configure --host=powerpc64-linux-gnueabi --prefix=/opt/coreutils-powerpc64 \
&& make \
&& make install
# Keep only the compiled programs from the previous stage.
FROM debian:buster
COPY --from=0 /opt /opt
I am currently working on a simple build tool in Python that may help you.
Unfortunately, at the moment it still lacks a clang implementation, but it works fine with GCC and MSVC.
Basically it mixes JSON parameter files to generate the build command lines.
CppMagic
I'm trying real hard to install vowpal wabbit and it fails when I run the makefile, throwing:
cd library; make; cd ..
g++ -g -o ezexample temp2.cc -L ../vowpalwabbit -l vw -l allreduce -l boost_program_options -l z -l pthread
ld: library not found for -lboost_program_options
collect2: ld returned 1 exit status
make[1]: *** [ezexample] Error 1
I then added the path to the boost libraries by specifying -L/usr/local/lib.
Now I get the following error:
g++ -g -o ezexample temp2.cc -L/usr/local/lib ../vowpalwabbit -l vw -l allreduce -l boost_program_options -l z -l pthread
ld: library not found for -lvw
collect2: ld returned 1 exit status
make: *** [ezexample] Error 1
I happened to get everything working on OS X 10.7 as follows:
Make sure you have a working Boost installation. As indicated on the Getting started page, usually we only need header files, but some Boost libraries must be built separately, including the program_options library which is used to process options from command line or config file. Go into your boost folder, and then at your shell prompt:
$ ./bootstrap.sh
$ ./bjam
This will compile and build everything. You should now have a bin.v2/ directory in your boost directory, with all built libraries for your system (static and threaded libs).
$ ls bin.v2/libs/
date_time iostreams python serialization test
filesystem math random signals thread
graph program_options regex system wave
More importantly, extra Boost libraries are made available in the stage/lib/ directory. For me, these are Mach-O 64-bit dynamically linked shared library x86_64.
The include path should be your_install_dir/boost_x_xx_x, where boost_x_xx_x is the basename of your working Boost. (I personally have boost_1_46_1 in /usr/local/share/ and I symlinked it to /usr/local/share/boost to avoid having to remember the version number.) The library path (for linking) should read your_install_dir/boost_x_xx_x/stage/lib. However, it might be best to symlink or copy (which is what I did) everything into the usual places, i.e. /usr/local/include/boost for header files, and /usr/local/lib for libraries.
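For the record, the symlinking mentioned above boils down to something like this (a sketch; the version number and install directory are mine, adjust to yours):
# Make the versioned Boost tree reachable under stable, standard names.
ln -s /usr/local/share/boost_1_46_1 /usr/local/share/boost
ln -s /usr/local/share/boost/boost /usr/local/include/boost      # headers
ln -s /usr/local/share/boost/stage/lib/* /usr/local/lib/         # built libraries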
Edit the Makefile from the vowpal_wabbit directory, and change the include/library paths to reflect your current installation. The Makefile should look like this (first 12 lines):
COMPILER = g++
UNAME := $(shell uname)
ifeq ($(UNAME), FreeBSD)
LIBS = -l boost_program_options -l pthread -l z -l compat
BOOST_INCLUDE = /usr/local/include
BOOST_LIBRARY = /usr/local/lib
else
LIBS = -l boost_program_options -l pthread -l z
BOOST_INCLUDE = /usr/local/share/boost # change path to reflect yours
BOOST_LIBRARY = /usr/local/share/boost/stage/lib # idem
endif
Then, you are ready to compile vowpal_wabbit (make clean in case you already compiled it):
$ make
$ ./vw --version
6.1
$ make test
You can also install vowpal wabbit on OS X using brew:
brew install vowpal-wabbit
Or you can just install boost, and then install vw from the github repo.
brew install boost
For installation on CentOS 7 (on CentOS 6.5 the perl version is too old for the latest vw source code), I've found the instructions at http://wkoplitz.blogspot.be/2012/12/vowpal-wabbit-on-centos.html to work fine:
yum install zlib-devel boost-devel
yum groupinstall "Development Tools"
git clone git://github.com/JohnLangford/vowpal_wabbit.git
cd vowpal_wabbit
./autogen.sh
make
make test
Good news:
As of the latest release VowpalWabbit version 9.1.0, vw no longer relies on Boost program_options
From the release highlights:
Removal of Boost Program Options dependency
For a long time we have depended on Boost Program Options for command line options parsing. In this release, we have replaced this dependency with our own implementation of command line parsing. Apart from one place where we depend on Boost Math in standalone mode, this means that VW core and the command line tool are free of Boost dependencies, hopefully making the code a bit easier to build and package.
Vowpal Wabbit 9.1.0 release notes