Determining supported host target features without a compiler - clang

In LLVM-based compilers like rustc and clang, we can select a target cpu or a list of supported target features, like avx.
I can list the supported target CPUs with e.g. rustc -C target-cpu=help, or clang -print-supported-cpus.
One of the special values is native, which selects the current CPU that clang or rustc is running on. But now I want to know the correct value to use for a different machine, on which no compiler is installed and none can be installed for security reasons. How can I know what target-cpu=native would evaluate to on that machine?
I can look at the list of supported CPUs and the codename of my CPU as published by the vendor, but I don't trust that this works in all cases; I would much rather confirm it with a runtime check.
Concretely:
Is there a reliable way to convert the output of e.g. lscpu into an LLVM target CPU name? Or if we can’t have the CPU name itself, how do the “flags” that lscpu lists translate to LLVM target-feature names?
Alternatively, does there exist a tool, which is not itself a compiler, that would print the LLVM target CPU for the host it’s running on?
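The closest thing I can think of is a small C program that I would compile once on a machine that does have a compiler and then copy to the locked-down host. A minimal sketch is below; it relies on the GCC/clang builtin __builtin_cpu_supports, and on my assumption (unverified for every feature) that its feature names largely match LLVM target-feature names:

/* features.c - sketch only; build elsewhere, copy the binary to the target host.
 *   gcc -O2 -o features features.c      (clang works the same way)
 * Prints which of a few well-known features the running CPU supports. */
#include <stdio.h>

/* Each SHOW() call passes a string literal, which is what
 * __builtin_cpu_supports() requires. */
#define SHOW(feature) \
    printf("%-8s %s\n", feature, __builtin_cpu_supports(feature) ? "yes" : "no")

int main(void) {
    __builtin_cpu_init();   /* initialise the runtime CPU feature data */
    SHOW("sse4.2");
    SHOW("avx");
    SHOW("avx2");
    SHOW("fma");
    SHOW("bmi2");
    SHOW("avx512f");
    /* __builtin_cpu_is("skylake") etc. can narrow down the CPU model,
     * but those names do not always match LLVM's target-cpu names. */
    return 0;
}

But this only reports individual features, not the target CPU name itself, which is why I'm asking whether a proper tool or mapping exists.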

Related

Can the same code and same compiler produce a different binary on different machines?

The idea of NixOS binary caches led me to consider this question.
In Nix, every compiled binary is associated with a hash key obtained by hashing all of its dependencies and the build script, i.e. a 'derivation' in Nix-speak. That is my understanding, anyway.
But couldn't the same derivation lead to different binaries, when compiled on different machines?
If machine A's processor has a slightly different instruction set than machine B's, and the compiler took this difference into account, wouldn't the binary produced by compiling the derivation on machine A be distinguishable from the binary produced on machine B? If so, couldn't different binaries have the same derivation and thus the same Nix hash?
Does the same derivation built on machines with different instruction sets always produce the same binary?
This depends on the compiler implementation and the options passed to it. For example, GCC by default does not seem to pay attention to the specifics of the current processor unless you specify -march=native or -mtune=native.
So yes, if you use flags like these, or a compiler whose default behavior matches these flags, you will get different output on a machine with a different CPU model.
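As a minimal illustration (assuming gcc or clang on an x86-64 host), compiling the function below with -O3 -march=native typically vectorises the loop with AVX2 ymm instructions on a CPU that supports AVX2 and falls back to SSE2 xmm instructions on one that doesn't, so the emitted object code differs even though the source and the compiler version are identical:

/* sum.c - inspect the effect of -march=native on different machines:
 *   gcc -O3 -march=native -S sum.c -o sum.s
 * Compare sum.s between hosts: on an AVX2-capable CPU the loop is
 * usually vectorised with ymm registers, on an older CPU with xmm
 * (SSE2) registers only, so the two builds are not bit-identical. */
int sum(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}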
A build can be non-reproducible for other reasons as well, such as inappropriate use of clock values or random values, or even counters that are accessed by threads in non-deterministically interleaved patterns.
Nix does provide a sandbox that removes some sources of entropy, primarily the supposedly unrelated software that may be present on a machine, but it does not remove all of these sources, for practical reasons.
For these reasons, reproducibility will have to be a consideration, even when packaging with Nix; not something that is solved completely by it.
I'll quote the "Achieve deterministic builds" menu from https://reproducible-builds.org/docs/ and annotate it with the effect of Nix, to the best of my knowledge. Don't quote me on this.
SOURCE_DATE_EPOCH: solved; set by Nixpkgs
Deterministic build systems: partially solved; Nixpkgs may include patches
Volatile inputs can disappear: solvable with Nix if you upload sources to the (binary) cache. Hercules CI does this.
Stable order for inputs: mostly solved. Nix language preserves source order and sorts attributes.
Value initialization: low-level problem not solved by Nix (see the C sketch after this list)
Version information: not solved; clock is accessible in sandbox
Timestamps: same as above
Timezones: solved by sandbox
Locales: solved by sandbox
Archive metadata: not solved
Stable order for outputs: use of randomness not solvable by sandbox
Randomness: same
Build path: partially; Linux builds use /build, macOS may differ depending on installation method
System images: broad issue taking elements from previous items
JVM: same
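To illustrate the "Value initialization" item with a contrived sketch: a build tool written like the hypothetical program below copies its struct's uninitialised padding bytes into the output, so two runs on identical inputs can produce different files. Nothing here is Nix-specific; it is exactly the kind of low-level bug the sandbox cannot remove.

/* pad.c - hypothetical build tool writing a record to its output. */
#include <stdio.h>
#include <stdint.h>

struct record {
    uint32_t tag;     /* 4 bytes, followed by 4 bytes of compiler-inserted padding */
    uint64_t offset;  /* 8 bytes */
};

int main(void) {
    struct record r;  /* the padding bytes are never initialised */
    r.tag = 1;
    r.offset = 4096;
    /* memset(&r, 0, sizeof r) before the assignments would fix this */
    fwrite(&r, sizeof r, 1, stdout);  /* indeterminate bytes reach the output */
    return 0;
}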

openmp pthread support for avr-gcc

I've been working with a Teensy on a multithreaded project that uses OpenMP and compiles with gcc. However, I'm joining a project that uses avr-gcc, which doesn't seem to compile or recognize omp.h. I get the error "avr-gcc: error: unrecognized command line option '-pthread'" when I attempt to compile, and I'm having trouble finding more information. I found this question about gcc-avr receiving slower updates (AVR gcc version < gcc release versions -- why?), but I'm wondering whether avr-gcc simply hasn't added OpenMP support yet, or doesn't support it for some other reason, and whether there's a workaround that doesn't require the team to switch compilers.
Thanks for the direction; it appears that avr-gcc doesn't provide headers that interact with an operating system, which pthreads apparently requires.
"Since sockets are a feature provided by the operating system, and you are compiling code that runs bare-metal on an Arduino microcontroller, which has no operating system running on top, the whole purpose of the sys/socket.h header is nullified.
This applies to any other kind of header or library function that interacts with the operating system, such as unistd.h, fcntl.h, pthread.h etc. In fact, avr-libc, the Standard C library for AVR-GCC, does not provide such headers.
You will need to look at the avr-libc documentation to find out more about the headers and functions that are provided and their usage."
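One workaround sketch, assuming the shared code only uses OpenMP pragmas and no omp_* runtime calls, is to let avr-gcc build the same source as plain sequential code: without -fopenmp the pragmas are simply ignored, and the omp.h include can be guarded with the standard _OPENMP macro.

/* threads.c - a sketch, not project code.
 * Host build:  gcc -O2 -fopenmp -c threads.c   (loop runs in parallel)
 * AVR build:   avr-gcc -O2 -c threads.c        (pragma ignored, loop runs
 *                                               sequentially; -Wall may warn
 *                                               about the unknown pragma) */
#ifdef _OPENMP
#include <omp.h>          /* only available on hosted toolchains */
#endif

void fill(int *a, int n) {
    #pragma omp parallel for   /* no effect unless -fopenmp is given */
    for (int i = 0; i < n; i++)
        a[i] = i * i;
}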

Bazel: how to disentangle the 81 ways to configure a C/C++ build

I'm writing a new C/C++ library with tasks. It's low level, so it'll need the ability to tune the build for CPU, OS, whether libraries (or tasks) are built opt or dbg, and which network library is used (e.g. generic TCP, Solarflare, Mellanox, InfiniBand). This seems like an ideal Bazel use case.
I have basic Bazel working e.g. I can build sample tasks, libraries, with dependencies. And so far that works nicely.
Now, to the point: as the code is just C with some C++, it seems impractical to build a whole new toolchain. What Bazel ships with and/or can do on GCC/Clang is good enough; most defaults are fine. Can't I just customize it? Still, I need a simple way to accomplish the following:
allow developers to choose their compiler (typically clang or gcc) and its version
allow developers to decide how to build their code, e.g. x86-32 dbg libraries but x86-32 opt binaries, or 64-bit versions of the same ...
allow developers to select which network library to link in, and therefore build it but not the others
for dbg builds, developers may want to toss in an extra compilation flag and be assured it's used everywhere when building libraries and tasks
limit developers to valid configurations. For example, if I know the code works on Intel x86 chips (32-bit or 64-bit) but PPC processors aren't supported, then however many dials developers have to turn for x86, it's desirable to stop with an error when cpu=ppc
Which basic approach is best?
allow Bazel to auto-discover the platform and toolchain, and instruct developers to modify it to the task? How?
provide a copy-and-pasted-and-edited custom C/C++ toolchain .bzl file?
Focus on CROSSTOOL only?
Ship the library with customized platform and toolchain files?
TIA

Implementing a programming language on the GraalVM architecture

What are the (architectural) differences in implementing a programming language on the GraalVM architecture – in particular between Graal, Truffle, and LLVM using Sulong?
I plan to reimplement an existing statically typed programming language on the GraalVM architecture, so that I can use it from Java without much of a hassle.
There are at the moment three options:
Emit JVM bytecode
Write a Truffle interpreter
Emit LLVM bitcode, use Sulong to run it on GraalVM
Emitting JVM bytecode is the traditional option. You will have to work at the bytecode level, and you'll have to optimise your code yourself before emitting the bytecode, as the options the JVM has for optimising it after it has been emitted are limited. To get good performance you may have to use invokedynamic.
Using Truffle is, I'd say, the easy option. You only have to write an AST interpreter, and code generation is then all done for you. It's also the high-performance option: in all languages where there is both a Truffle version and a bytecode version, the Truffle version comfortably outperforms the bytecode version, as well as being simpler due to having no bytecode generation stage.
Emitting LLVM bitcode and running it on Sulong is an option, but it's not one I would recommend unless you have other constraints that lead you towards it. Again, you have to do the bitcode generation yourself, and you'll have to optimise the code yourself before emitting the bitcode, as the optimisations applied once the bitcode has been emitted are limited.
Ruby is good for comparing these options - because there is a version that emits JVM bytecode (JRuby), a version using Truffle (TruffleRuby) and a version that emits LLVM bitcode (Rubinius, but it doesn't then run that bitcode on Sulong). I'd say that TruffleRuby is both faster and simpler in implementation than Rubinius or JRuby. (I work on TruffleRuby.)
I wouldn't worry about the fact that your language is statically typed. Truffle can work with static types, and it can use profiling and specialisation at runtime to detect types that are more fine-grained than those expressed statically.

How to restrict a BenchmarkDotNet job to run only on specific platforms?

I am writing an F# port of a program I wrote in native code in the past. I used BenchmarkDotNet to measure its performance. I also placed a native EXE in the application's output directory.
I set my native program as the baseline benchmark and saw it was 5x faster than my F# program. Just as I expected!
However, the native program is published on GitHub and distributed as a Win64 binary only. If somebody using another OS tries to run it, it will crash.
So, how to specify that this benchmark will only run on 64-bit Windows?
In BenchmarkDotNet, there is a concept of Jobs. Jobs define how the benchmark should be executed.
So, you can express your "x64 only" condition as a job. Note that there are several different x64 JIT compilers depending on the runtime (LegacyJIT-x64 and RyuJIT-x64 for the full .NET Framework, RyuJIT-x64 for .NET Core, and the Mono JIT compiler). You can request not only a specific platform but also a specific JIT compiler (it can significantly affect performance), e.g.:
[<RyuJitX64Job>]
member this.MyAwesomeBenchmark () = // ...
In this case, a user will be notified that it's impossible to compile the benchmark for the required platform.
Unfortunately, there is no way to require a specific OS for now (there is only one option: the current OS). So, in your case, it's probably better to check System.Environment.Is64BitOperatingSystem and System.Environment.OSVersion at the start and not run the benchmarks on unsupported operating systems.

Resources