After reading the question Is Erlang the C of the clustered computing world?, I am wondering: does the official Erlang OTP release compile with HiPE?
In other words, when I compile my .erl source with OTP release R13 (for example), does it produce BEAM "object code"?
Looking at http://www.it.uu.se/research/group/hipe/ , it does not appear that a standalone HiPE compiler is maintained anymore.
By default HiPE is not used to compile OTP. It is known, however, that the OTP libraries can be successfully compiled using HiPE, usually with some performance boost (though how much depends on your application).
When you run erlc on your .erl file it produces a BEAM file, which is NOT compiled to native code with HiPE. To compile an .erl file to native code using HiPE, run erlc +native file.erl.
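For example (module name is a placeholder, shell session abbreviated), you can verify that the resulting module really was compiled to native code:

$ erlc +native my_module.erl
$ erl
1> l(my_module).
{module,my_module}
2> code:is_module_native(my_module).
true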
The standalone HiPE compiler is not maintained anymore, since it was merged into the core Erlang/OTP distribution.
I think this depends on what options you passed to the configure script when you built Erlang itself. It certainly can include HiPE support, but whether it does so by default is another matter.
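For the record, when building Erlang/OTP from source, HiPE support is selected at configure time; a minimal sketch (install paths omitted):

./configure --enable-hipe
make && make install

On a running node, erlang:system_info(hipe_architecture) returns the HiPE target (e.g. amd64), or undefined when HiPE support is absent.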
Related
Suppose I have a C++ project, and I compile it with gcc and with clang. You can assume that the gcc-compiled version runs on another Linux machine. Will this imply (in normal circumstances) that the clang version will also run on that other Linux machine?
Clang binaries are as portable as gcc binaries, as long as you are linking against the same libraries and you aren't passing flags like -march=native to the compiler.
Clang has one huge advantage over gcc: it can deal with almost all libstdc++ versions, while gcc is bound to its bundled version and often can't parse any older versions.
So the following often happens in production environments:
Install an LTS distro (Ubuntu 12.04 for example)
Keep gcc, glibc and libstdc++ untouched
Install a recent clang version for C++11, etc
Build the release binaries with clang
So (in my specific example) those binaries will work on all distros with libstdc++ >= 4.6 and glibc >= 2.15; a way to check this is shown below.
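If you want to verify this for a given binary (the name ./myapp is illustrative), you can list the versioned glibc/libstdc++ symbols it imports:

$ objdump -T ./myapp | grep -oE 'GLIBC(XX)?_[0-9.]+' | sort -u

GLIBC_* entries come from glibc and GLIBCXX_* from libstdc++; the highest versions listed are the minimum the target distro must provide.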
This may be an interesting read for you.
If the program is a simple Hello World, it should work on the other machine when compiled with Clang.
But when the program is a real program with a lot of lines and compilation units, and calls to many external libs, everything is possible depending on the program itself and the compilation options:
hardware requirements (memory) being different (mainly depends on compilation options)
use of different (versions of) libraries between gcc and clang
UB giving expected results in one compiler and not in the other (see the example after this list)
different usages of implementation-defined rules
use of gcc extensions not accepted by clang
For all of the above except the first two, if it runs on one machine it should run on other machines.
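As a contrived illustration of points 3 and 4, here is a snippet where gcc and clang may legitimately print different results:

#include <cstdio>

int f(int a, int b) { return a * 10 + b; }

int main() {
    int i = 0;
    // Before C++17 the two increments are unsequenced relative to each
    // other, so this is undefined behaviour; gcc and clang can (and
    // often do) evaluate the arguments in different orders.
    int r = f(++i, ++i);
    printf("%d\n", r);
    return 0;
}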
Linux programs depend on their build environment. If your glibc version or kernel is different, there is a good chance the executable will not run. You could target LLVM bitcode though; it can then be interpreted (e.g. with lli) on various operating systems.
The answer is: well, it depends.
The first hard requirement is the same CPU architecture. 64-bit is not enough of a qualifier: if you compile for x64 you won't have much success running the binary on 64-bit ARM.
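You can check what a binary was built for with the file utility (binary name and output are illustrative and abbreviated):

$ file ./myprog
./myprog: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, ...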
The next big one is libraries. If you use any libraries in the program, the target system needs to have those libraries. This includes the kernel headers. So if you compile for e.g. a current kernel version, using the most cutting-edge features, then you will have no joy running that program on a very old version of Linux.
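To see which shared libraries a binary expects the target system to provide (binary name is a placeholder), run it through ldd:

$ ldd ./myprog

Each line of output names a required library and the path it resolved to, or "not found" if the system lacks it.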
The last one is hardware dependencies. If you create a program that e.g. requires 4 GB of RAM and then try to run it on a small embedded device with 256 MB RAM, that won't work either.
To address your updated question: from my experience there shouldn't be much of a difference in portability between Clang and gcc, and googling didn't turn up anything either, so it should basically work. But always test things like that before you publish a binary to production.
I'm new to erlang and rebar. In my rebar project I used a dependency containing native c code and during rebar compile I'm getting error:
Name cl.exe is not recognized as an internal or external command, operable program or batch file
I guess that rebar is trying to compile C files from my dependency using Microsoft's cl.exe compiler from Visual Studio, right? The problem is that I don't have VS installed and don't want to install it.
Why is rebar trying to use cl.exe? Can I configure rebar to use a different compiler for the C files?
According to the comments in the rebar port compiler code, you can provide an alternative C compiler by adding something like this to your rebar.config:
{port_env, [{"CC", "/path/to/gcc.exe"}]}.
You will most likely have to change the CFLAGS to match the compiler. To compile NIFs and port programs, the OTP headers and development libraries must be available.
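For example (the path and flags here are illustrative, not tested), overriding both the compiler and its flags might look like:

{port_env, [
    {"CC",     "C:/MinGW/bin/gcc.exe"},
    {"CFLAGS", "$CFLAGS -O2 -Wall"}
]}.

$CFLAGS expands to the defaults rebar would otherwise use, so this appends to them rather than replacing them.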
Disclaimer: I'm fairly deep into my particular compiling issue but quite new to the world of compiling.
Background:
I'm working on a Windows 7, 64-bit machine. I'm attempting to compile a rather large Fortran program using mingw-w64. The compile process is controlled by SCons (similar to, or derived from, GNU Make). I have successfully compiled this program via SCons using g95 and MinGW gfortran. I have attempted to use the TDM-GCC and 'ruben' builds of mingw-w64, with identical, unsatisfactory results. I am passing the -static argument to the compiler (gfortran.exe). I have tried using both the gfortran and x86_64-w64-mingw32-gfortran compile commands, with identical results.
Problem:
When I compile a 64-bit version of my program, despite passing the -static argument to the gfortran compiler, the built executable errors out at runtime claiming that it can't find various DLLs (libgfortran, libgcc, libquadmath, generically speaking). If I copy those libraries to the working directory, the built program runs without error and performs as expected.
Anecdotal Summary:
mingw-w64 gfortran appears to be ignoring the -static compile flag
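For reference, a stripped-down sketch of how the flag is being passed (construction variable names are illustrative, not my exact SConstruct):

env = Environment(tools=['mingw'])
env.Append(FORTRANFLAGS=['-static'])  # reaches the compile step only
env.Append(LINKFLAGS=['-static'])     # the link step is where -static actually removes DLL dependencies

I am beginning to suspect that -static reaches only the compile step and not the link line, which would explain the DLL dependencies.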
If there is any additional information I can provide to help in solving this, please let me know.
Thanks,
JTJ
I would like to install the fsharp compiler from GitHub on my Debian system, and the usual way would be to create a deb package first and then install it (so it is possible to uninstall it later, etc.). What is the easiest way to achieve this? All the examples of how to use dh_make assume you have an appropriately named source tar.gz, whereas I don't. Also I need to use a prefix for the autogen script:
./autogen.sh --prefix=/usr
I am not sure if this makes the task any more difficult.
This should actually be fairly simple to achieve with a binary package, which will also be cross-platform because the F# compiler itself is written in F#. The compiler is fairly standalone and depends only on a few BCL libraries. There are versions that run on Mono.
More important than installing the compiler is the integration with your platform's build system(s). Microsoft ships a Microsoft.FSharp.targets file for MSBuild; I don't know whether that will work with Mono's xbuild.
I have put together a blog post that explains where to find the various bits that make up the F# compiler and how to package them to compile on a platform that has only .NET and MSBuild (AppHarbor in my case), which you may find helpful.
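If the goal is just a .deb that can be cleanly uninstalled later, one common shortcut (package name and version below are placeholders) is to let checkinstall wrap the make install step:

./autogen.sh --prefix=/usr
make
sudo checkinstall --pkgname=fsharp --pkgversion=0.1 make install

checkinstall records everything make install writes and produces a .deb you can later remove with dpkg -r fsharp.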
I am using open source C++ library DCMTK from http://dicom.offis.de/dcmtk.php.en.
I have successfully compiled this library on Windows using VC++ IDE, MacOS Xcode, Mac iOS simulator.
But I am not able to compile this library for an iOS device, as that is an ARM-based architecture.
The DCMTK library compiles very well on Intel architectures.
Now my problem statement is:
I need to compile this DCMTK C++ library for the ARM architecture by cross compilation.
I am using an Ubuntu 64-bit machine for the cross compilation.
I have installed the GNU ARM toolchain binaries from http://www.gnuarm.com/
I am using the GCC 4.0 toolchain (binutils-2.16.1, gcc-4.0.2 C/C++, newlib-1.14.0, insight-6.4, tar.bz2 [65.5MB]) binaries for the Ubuntu 64-bit machine for ARM cross compilation.
After installing these binaries on Ubuntu I set the PATH environment variable:
PATH=$PATH:/gnu_arm/bin
For configuring the DCMTK C++ library I have run the following command on shell
CC=arm-elf-gcc CXX=arm-elf-g++ AR=arm-elf-ar RANLIB=arm-elf-ranlib ARFLAGS=cruv ./configure --prefix=$home_dicom --target=arm-elf --host=arm-elf --enable-std-includes --disable-threads
It creates a makefile properly. Now I am trying to compile the code using the make command, but I am facing many compilation errors, such as:
1) I tried to compile the first dependent C++ library, ofstd.
I got errors for the DIR*, struct dirent, opendir(), and closedir() calls.
It includes <dirent.h> for these calls, but I did not find any definitions for them in that header file.
2) When I compile another library, oflog, I get the following errors:
error: ntohs was not declared in this scope
error: ntohl was not declared in this scope
error: htons was not declared in this scope
error: htonl was not declared in this scope
These are networking calls and are not declared in any of the header files shipped with the GNU ARM toolchain.
I downloaded the sources for the ARM binaries, extracted the tar files, and tried to copy the missing header files into the GNU ARM installation on Ubuntu.
Some files compile after changes to the copied headers, but others still give compilation errors. There is an endless loop of compilation errors for the files under the DCMTK library, as some of the standard header files are missing.
Please suggest whether there is any other toolchain available for ARM cross compilation on an Ubuntu 64-bit machine,
or any other good solution apart from this.
Thanks!!!
Amit
There are many areas for problems when it comes to cross compiling. There are three main flags for cross compiling: --host, --target, and --build. The --host flag names the machine the resulting binaries will run on. The --build flag names the system you are compiling on. The --target flag is only relevant when building tools that will themselves be used for cross compiling, e.g. if you were building your own gcc toolchain. So in your case you won't set the target flag, as we're not building a toolchain; the --host flag will be arm-elf, and the --build flag will be amd64.
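So a configure invocation along those lines for your case might look like this (x86_64-linux-gnu is the canonical spelling of amd64; the other variables are copied from your original command):

CC=arm-elf-gcc CXX=arm-elf-g++ ./configure --build=x86_64-linux-gnu --host=arm-elf --prefix=$home_dicom --enable-std-includes --disable-threads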
Usually a cross compilation fails if there are inconsistencies between the regular C compiler and the cross compiler. I have compiled several libraries for the AVR32 with a toolchain generated by Buildroot, but in some cases (the socat project, for example) it hasn't been possible.
Your host, your target, and the CXX flags look OK. I think it is not necessary to set the AR variable (that is the point of the host and target options).
On the other hand, here is an example for the expat library on the avr32:
./configure --host=avr32-linux --prefix=/home/juan/builds/build_expat/ CC=avr32-linux-gcc
make; make install
I can recommend that you try to cross compile from an ia32 architecture; I had several problems with that Ubuntu (64-bit) in the past.