apt-get method: I'm trying to install LLVM and Clang on Ubuntu 15.10. I used the commands sudo apt-get install llvm and sudo apt-get install clang. This seemed to work, and it only took a few minutes.
Manual method: However, most instructions online have me manually download and build the LLVM and Clang packages (e.g. see here: http://clang.llvm.org/get_started.html). I understand this method could take some time, even a few hours for building LLVM and Clang.
What's the difference between these two methods? Are they equivalent? I just want to make sure I have everything installed correctly. (My background is in Windows, so I'm probably missing an obvious difference.)
apt-get installs pre-compiled packages from your distribution's repository. It also takes care of installing all dependencies: the package maintainer has compiled the package and makes sure that its dependencies (other packages, in suitable versions) are met.
This approach is very convenient and should, by all means, be preferred. The only major advantage – or argument in favour – of a source installation is that you get more recent packages.
Compiling from source may be necessary when you want to benefit from features that are not yet available in the distribution’s version. In the case of the compiler it may also be that a newer version produces “better” binaries than an earlier version.
Another reason for compiling software yourself may be that you want to influence the build process, e.g. use different compiler settings or a configuration with fewer dependencies. However, such cases are quite rare – in most cases, it isn't worth the trouble.
Also, as you’ve experienced yourself, installing a pre-compiled package takes only a few minutes (or even just seconds), while compiling will take some time depending on the software to compile and your hardware.
Bottom line, unless you have a good reason, use the distribution’s package(s).
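If you want to see exactly what each route would give you, you can compare the repository's version against the upstream release announced on llvm.org; a quick check (package name as on Ubuntu):

apt-cache policy clang   # version the repository would install
clang --version          # version currently installed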
I'm running CentOS 7 and am trying to build hipSYCL (see here).
The issue is that hipSYCL needs to have cmake info from the LLVM build (via the LLVM_DIR cmake variable).
This is problematic for me because building LLVM requires a massive 35 GB for the libraries and executables. I don't have that much space to spare.
I did find a build of llvm-toolset-8.0 online for CentOS 7 and installed it, but to my surprise, that didn't seem to work with LLVM_DIR because there are no cmake files (since I didn't build it locally).
So, my question would be, is there a way to build hipSYCL using pre-built LLVM-clang?
If I'm missing or misunderstanding something, I'd appreciate any help.
LLVM publishes the necessary cmake files, and the binary OS packages I've seen include them, generally in a directory called /usr/lib/llvm*/lib/cmake and in a package called something like llvm-*-dev.
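So instead of building LLVM yourself, installing the distribution's devel package and pointing LLVM_DIR at its cmake directory should be enough; a rough sketch (the path is an example and varies by distro and LLVM version):

find /usr -name LLVMConfig.cmake 2>/dev/null         # locate the installed cmake files
cmake .. -DLLVM_DIR=/usr/lib/llvm-8/lib/cmake/llvm   # use the directory found above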
I am quite often in areas where the Wi-Fi connection is unreliable and slow, but occasionally I would like to upgrade a package from homebrew.
Unfortunately if a binary download fails, it will attempt to install from source, which will in most cases cause it to download even more dependencies, actually making the situation worse.
Is there a way to inhibit building from source? I would prefer to just let it fail and retry later when I have a better connection.
You can use:
brew install --force-bottle myformula
According to the brew man page:
If --force-bottle is passed, install from a bottle if it exists for the current or newest version of macOS, even if it would not normally be used for installation.
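If I remember right, brew upgrade accepts the same flag, so on a flaky connection you can let the upgrade fail fast instead of falling back to a source build (the formula name is just an example):

brew upgrade --force-bottle wget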
Suppose I have a C++ project, and I compile it with gcc and with clang. You can assume that the gcc-compiled version runs on another Linux machine. Will this imply (under normal circumstances) that the clang version will also run on that machine?
Clang binaries are as portable as gcc binaries, as long as you are linking against the same libraries and you aren't passing flags like -march=native to the compiler.
Clang has one huge advantage over gcc: it can deal with almost all libstdc++ versions, while gcc is bound to its bundled version and often can't parse any older versions.
So the following often happens in production environments:
Install an LTS distro (Ubuntu 12.04 for example)
Keep gcc, glibc and libstdc++ untouched
Install a recent clang version for C++11, etc
Build the release binaries with clang
So (in my specific example) those binaries will work on all distros with libstdc++ >= 4.6 and glibc >= 2.15.
This may be an interesting read for you.
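To check which libstdc++ and glibc versions a binary actually requires, you can inspect its versioned symbols; a sketch (the binary name is an example):

objdump -T ./myprog | grep -o 'GLIBC_[0-9.]*' | sort -u     # glibc symbol versions required
objdump -T ./myprog | grep -o 'GLIBCXX_[0-9.]*' | sort -u   # libstdc++ symbol versions required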
If the program is a simple Hello world, it should work on the other machine when compiled through Clang.
But when the program is a real program with a lot of lines and compilation units, and calls into many external libraries, anything is possible depending on the program itself and the compilation options:
hardware requirements (memory) being different (mainly depends on compilation options)
use of different (versions of) libraries between gcc and clang
undefined behaviour (UB) giving the expected results with one compiler but not the other
different choices for implementation-defined behaviour
use of gcc extensions not accepted by clang (see the sketch below)
For all of the above except the first two, if it runs on one machine it should run on the others.
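The last item is easy to demonstrate: nested functions, for instance, are a GNU C extension that gcc accepts by default and clang rejects (the file name is an example):

cat > ext.c <<'EOF'
int main(void) {
    int add(int a, int b) { return a + b; }  /* GNU C nested function */
    return add(1, 2) == 3 ? 0 : 1;
}
EOF
gcc ext.c -o ext      # compiles fine
clang ext.c -o ext    # error: function definition is not allowed here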
Linux programs depend on their build environment. If the glibc or kernel version differs, there is a good chance the executable won't run. You could use LLVM's bitcode, though: clang can compile into bitcode, which can then be interpreted on various operating systems.
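For what it's worth, the bitcode route looks like this (file names are examples; lli ships with LLVM):

clang -emit-llvm -c hello.c -o hello.bc   # compile to LLVM bitcode
lli hello.bc                              # interpret/JIT-compile the bitcode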
The answer is, well, it depends.
The first hard requirement is the same CPU architecture. "64-bit" alone is not enough of a qualifier: if you compile for x64, you won't have much success running it on 64-bit ARM.
The next big one is libraries. If you use any libraries in the program, the target system needs to have those libraries. This includes the kernel headers. So if you compile for e.g. a current kernel version, using the most cutting-edge features, then you will have no joy running that program on a very old version of Linux.
The last one is hardware dependencies. If you create a program that e.g. requires 4 GB of RAM and then try to run it on a small embedded device with 256 MB RAM, that won't work either.
To address your updated question: in my experience there shouldn't be much of a difference in portability between Clang and gcc. Googling didn't turn up anything either, so it should basically work. But always test things like that before you publish a binary to production.
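Two quick sanity checks on the target machine cover most of the above (the binary name is an example):

file ./myprog   # CPU architecture and how the binary is linked
ldd ./myprog    # shared libraries the dynamic loader must be able to find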
One of the reasons I'm still using macports is that it is easy to switch between versions of things you download. For example, if I want to change my GCC version to 4.8 all I have to do is
sudo port set --select gcc mp-gcc48
No mucking around with environment variables. I see that multiple versions of gcc are available from homebrew, but is there an easy way to activate and deactivate versions of things? I didn't notice anything in the documentation.
Option 1 is that you install multiple versioned packages in parallel. Then you'd call gcc-4.7 or gcc-4.8, etc.
Option 2 is to selectively brew link and brew unlink the package versions that you prefer to use. Note that an "unlinked" package is still installed and usable from /usr/local/opt/<package>/, it's just not in the default path.
Which one you use depends on how the individual packages are set up and how often you need to switch around. It's perhaps not quite as clear-cut as with MacPorts, but it works just fine.
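Option 2 in practice looks roughly like this (the formula names are illustrative; Homebrew's naming of versioned gcc packages has changed over time):

brew unlink gcc48   # remove the symlinks from /usr/local/bin
brew link gcc49     # link the other version into the default path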
I would like to install the fsharp compiler from Github on my Debian system, and the usual way would be to create a deb package first and then install it (so it is possible to uninstall it later, etc.). What is the easiest way to achieve this? All the examples of how to use dh_make assume you have a source tar.gz appropriately named, whereas I don't. Also I need to use some prefix for the autogen script:
./autogen.sh --prefix=/usr
I am not sure if this makes the task any more difficult.
This should actually be fairly simple to achieve with a binary package - which will also be cross-platform because the F# compiler itself is written in F#. The compiler itself is fairly standalone and depends only on a few BCL libraries. There are versions that run on Mono.
More important than installing the compiler is the integration with your platform's build system(s). Microsoft ships a Microsoft.FSharp.targets file for MSBuild; I don't know whether that will work with Mono's xbuild.
I have put together a blog post that explains where to find the various bits that make up the F# compiler and how to package them to compile on a platform that has only .NET and MSBuild (AppHarbor in my case), which you may find helpful.
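As for the deb-packaging part of the question: dh_make can generate the missing orig tarball itself, so a rough sketch could look like this (the package name and version are examples):

cd fsharp                            # your checkout of the F# sources
dh_make -p fsharp_3.0 --createorig   # creates debian/ and ../fsharp_3.0.orig.tar.gz
dpkg-buildpackage -us -uc            # build the unsigned .deb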