I have a target which can only be built for Linux (in this case, because it depends on syscalls only available on Linux and there is no desire to try and make this cross-platform). How can I express this in my BUILD files?
I can see from the Platforms documentation that there exists a Linux platform definition as @bazel_tools//platforms:linux, but it is not clear to me how to make use of this to restrict a target. Trying to specify this in compatible_with results in an error like this:
(13:27:09) ERROR: /foo/BUILD:4:1: in compatible_with attribute of go_library rule //foo:go_default_library: constraint_value rule '@bazel_tools//platforms:linux' is misplaced here (expected environment). Since this rule was created by the macro 'go_library_macro', the error might have been caused by the macro implementation in /foo/BUILD:4:1
So I have a few related questions:
The error seems to indicate I've supplied the wrong type of rule to compatible_with. What is an environment and how do I provide one? (I've struggled to find documentation on this)
I gather that the migration to Platforms might not yet be complete and rules_go might not have been updated. If it's not possible with Platforms, is there an "old way" to do this instead?
Ideally, I would like this not to result in build errors when running commands like bazel test //:all on a different (non-Linux) platform; i.e., I'd prefer it to just exclude these targets, or something like that. Is this possible?
Thanks for your help 😊
Turns out that this is an open issue. Once this is fixed, I think it should be possible: https://github.com/bazelbuild/bazel/issues/3780
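For reference, once that issue is resolved, the expected shape is roughly the following minimal sketch (assuming a Bazel release where target_compatible_with has landed; the srcs and importpath are hypothetical):

    load("@io_bazel_rules_go//go:def.bzl", "go_library")

    go_library(
        name = "go_default_library",
        srcs = ["syscalls_linux.go"],  # hypothetical source file
        importpath = "example.com/foo",  # hypothetical import path
        # Constrain the target to Linux; on other platforms Bazel should
        # then skip it when building wildcard patterns instead of failing.
        target_compatible_with = ["@platforms//os:linux"],
    )

With that in place, bazel test //:all on a non-Linux machine should skip the incompatible target rather than error out.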
I got some strange glibc-related linker errors for builds with a distributed build cache configured, on build nodes running different Linux distributions.
Now I suspect that build artifacts from machines with different glibc versions are getting mixed up, but I don't know how to investigate this.
How do I find out what Bazel takes into account when building the hash for a certain build artifact?
I know I can explicitly set environment variables which then will affect the hash. But how can I be sure a given compiler, a certain version of glibc, etc. will lead to different hashes for built artifacts?
And how do I check/compare what's been taken into account?
This is a complex topic and a multi-faceted question. I am going to answer in the following order:
How do I check/compare what's been taken into account?
How to investigate against which glibc a build linked?
How can I be sure a given compiler, a certain version of glibc, etc. will lead to different hashes for built artifacts?
How do I check/compare what's been taken into account?
To answer this, you should look into the execution log; specifically, you can read up on https://bazel.build/remote/cache-remote#compare-logs. The *.json execution log should contain everything you need to know (granted, it might be a bit verbose) and is a little easier to process with shell magic or your editor than the binary log.
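For example, to record and compare the logs from two machines (the target label and file paths here are hypothetical):

    # On machine A: build while recording the execution log as JSON.
    bazel build //your:target --execution_log_json_file=/tmp/exec_a.json
    # On machine B: same build, same flag.
    bazel build //your:target --execution_log_json_file=/tmp/exec_b.json
    # Diff the two logs to see which action inputs/hashes differ.
    diff /tmp/exec_a.json /tmp/exec_b.json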
How to investigate against which glibc a build linked?
From the execution log, you can get all the required hashes to retrieve cached artifacts/binaries from your remote cache. Given these files, you should be able to use standard tools to get to the glibc version (ldd -r -v binary | grep GLIBC).
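A hedged sketch of that, assuming an HTTP-based remote cache such as bazel-remote (the URL layout depends on your cache implementation, and the digest placeholder comes from your execution log):

    # Fetch a cached binary by the digest listed in the execution log,
    # then inspect which glibc symbol versions it links against.
    curl -o cached_binary http://your-cache.example.com/cas/<sha256-from-log>
    chmod +x cached_binary
    ldd -r -v cached_binary | grep GLIBC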
How can I be sure a given compiler, a certain version of glibc, etc. will lead to different hashes for built artifacts?
This depends on how you have set up your compilation toolchain. The best case would be a fully hermetic compilation toolchain, where all necessary files are declared using attributes like https://bazel.build/reference/be/c-cpp#cc_toolchain.compiler_files.
But this also means locking down the compiler sysroot, which should include all libraries you link against if you want full hermeticity. If you want to use some system libraries, you need to tell Bazel where to find them and to factor their hashes in: https://stackoverflow.com/a/43419786/20546409 or https://www.stevenengelhardt.com/2021/09/22/practical-bazel-depending-on-a-system-provided-c-cpp-library/
If you use the auto-detected compiler toolchain, some tricks are used to lock down the sysroot paths, but expect some non-hermeticity. https://github.com/limdor/bazel-examples/tree/master/linux_toolchain is a nice write-up on how to move from the auto-detected toolchain to something more hermetic.
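As a minimal sketch of the idea (all names and paths here are hypothetical): declare the toolchain and sysroot files explicitly so they become action inputs, and therefore part of every action hash:

    # BUILD file for a vendored toolchain (hypothetical layout).
    filegroup(
        name = "compiler_files",
        srcs = glob(
            [
                "bin/**",
                "lib/**",
                "sysroot/**",
            ],
            allow_empty = True,
        ),
    )
    # Wire ":compiler_files" into cc_toolchain(compiler_files = ...);
    # any change to these files then changes the resulting action keys.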
The hack
Of course, you can hack around this. Note that this is inherently a bad idea (see the sketch after this list):
create a script that inspects the system and determines everything important, like the glibc version and maybe the Linux distribution (flavor)
create a string describing this variation and hash it
use that hash as the instance key/name for your remote cache
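A minimal sketch of that hack (the target label is hypothetical; --remote_instance_name is a real Bazel flag, and the rest assumes a glibc-based Linux with /etc/os-release):

    #!/bin/sh
    # Derive a cache instance name from the local glibc version and
    # distribution, so machines with different glibc never share entries.
    glibc=$(ldd --version | head -n 1)
    distro=$(. /etc/os-release && echo "$ID-$VERSION_ID")
    instance=$(printf '%s|%s' "$glibc" "$distro" | sha256sum | cut -c 1-16)
    bazel build //your:target --remote_instance_name="$instance"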
I'm trying to get bazel to use clang as the compiler on both Windows and Linux. (Debian 10 if that matters)
On Windows I managed to get it working by adding a windows-clang platform and registering the toolchain as described here.
Is there a similarly easy way to switch to using clang on Linux?
I've dug around a bit and, as far as I can tell, you have two (or three) options:
You can configure your own cc_toolchain. This may arguably be the "correct" solution.
You can try to tweak cc_configure() from @bazel_tools//tools/cpp:cc_configure.bzl and use it in your WORKSPACE. But that is both challenging and not necessarily pretty.
Neither of the two ways is likely to qualify as "easy". For that, the quickest would appear to be:
Automagic toolchain resolution considers the environment variable CC (only that one; CXX is not considered), and if it is set, its value is used for the toolchain configuration. Hence, for instance, this would do:
CC=/usr/bin/clang bazel build //:some_tgt
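To avoid setting this on every invocation, the same can live in a .bazelrc (a sketch, assuming a Bazel version that supports --repo_env; the variable only influences the auto-configured toolchain):

    # .bazelrc
    build --repo_env=CC=/usr/bin/clang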
I hope I haven't missed anything, but I have not spotted a way to choose the compiler through platforms (without having your own toolchain definition in place) as of today.
This is the error message I get.
I know it's kind of an eye-roller, that it's difficult, nigh impossible, to tell what I may need without the source, but it seems like a deployment problem, as people that installed the Qt SDK can run it. Plus, I figured I'd have better luck asking here than with a Chinese developer who speaks Google-English.
So here's what I've done:
I installed MSVC 2012.
I used a program called CFF Explorer to see what the exe was looking for. I have the 7 or so .dlls that are at the top of the tree.
I found a recent (Jun 2013) qwindows.dll from elsewhere on my system and put it in ./plugins (I've tried this file in ./, ./plugins, and ./plugins/platforms).
I created a qt.conf with the following data (I determined the format from an existing Qt-based app that works):
[Paths]
Plugins = plugins
Yet, I continue to get this message. Any suggestions on what I might look for to clear this up?
Ask the developer what compiler was used to build the application. Then you will need the right DLL (one that was built with the same compiler as the application). Also note that, by default, the documentation says qwindows.dll should be in the platforms folder in the same path as your executable; read more here. Depending on whether the developer used a Qt built with ANGLE, you may also need libEGL.dll and libGLESv2.dll. Dependency Walker might help you find dependencies that are missing.
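For orientation, a typical minimal deployment layout looks like this (the exact DLL set is an assumption; it depends on the Qt version and the modules the application uses):

    app.exe
    Qt5Core.dll
    Qt5Gui.dll
    Qt5Widgets.dll
    platforms/
        qwindows.dll

If you have a Qt installation of the matching version, the windeployqt tool can assemble such a layout for you.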
Sometimes in s/w companies, customers provide data in multiple formats. There are linkable and executable data that are said to be "rehosted" and compiled object files that are said to be "retargeted". I am trying to understand what rehosting and retargeting mean in this area. Is it similar to bootstrapping in computer science? My understanding of the process is as follows (correct me if I'm wrong):
PROBLEM:
I need to write a compiler for a new language called "MyLang" to run on PowerPC
Solution:
1. I need to write a compiler for a language "MyLang-Mini", a subset of "MyLang", to run on PowerPC.
2. I need to write a compiler for "MyLang" using "MyLang-Mini" to run on PowerPC.
3. I run the compiler obtained from no. 2 through the compiler obtained from no. 1 to obtain the compiler for MyLang to run on PowerPC.
IN BESPOKE "T" DIAGRAM (...ISH):
[ MyLang -> PowerPC ] written in MyLangMini
        compiled by
[ MyLangMini -> PowerPC ] written in PowerPC (instr.)
        yields
[ MyLang -> PowerPC ] written in PowerPC (instr.)
What I am getting confused about is rehosting and retargeting. How are they connected to this concept? What am I rehosting and retargeting if I have some binary data such as an .exe or .obj? I would appreciate some detailed explanation if possible, please!
I know that this will embark onto "cross-compilers", but I would prefer expert opinions to be sure.
Thanks in advance.
I now know that in s/w engineering:
REHOSTING - If you have a third-party linkable/executable application that you need to use on your host machine, you do rehosting. The host and target in this case are most often the same (OS platform, processor, etc.). In the worst case, virtualisation is required. The rehosted application will run as if it were one of the applications running on the host machine.
RETARGETING - If you have third-party source code, you might need to recompile it to match your target environment. It may also be that you have third-party .o or .obj compiled modules and you want to link them with your source code (retargeted) in order to host it on a host machine. Just like a REHOSTED application, it will be as if the application was installed on the host machine.
It would be good to know how this is similar to compiler rehosting and retargeting. Sorry, I am a newbie in this area and will appreciate even a slap on the wrist.
I'm producing builds using MSBuild and build configurations set up in the .dproj, on the command line. It's slightly disconcerting that the sizes of the executables thus produced are different (not by much, but still!) from what an IDE build produces. Any ideas why? I would have thought the same compiler is used?
The main power of building with the Delphi command-line compiler is standardization - you explicitly identify the options (on the command line, in the .cfg files, etc.), and the compiler follows the provided options exclusively. In contrast, the IDE has many other behaviors that are not clear and explicit - for example, it may search library paths not specified in the Project Options. My guess is that something's happening in the IDE build of which you're not entirely aware - and this is why standardized builds are done from the command line.
To see what the IDE is doing, check
Tools | Options | Environment Options | Compiling and Running | Show Command Line
And you can check the compiler messages.
The first answer on using the command line for build consistency is right on and it is probably something you needn't worry about if you are relying on a build system where production files are always sourced from the console builds.
On the other hand, if you really do want to figure out what is going on, you should turn on map files (at the full detail level) and compare/diff them. If there are differences between the two, they will show up there. Any other differences that may exist are likely the result of a command-line option being different (such as a conditional flag that may be set in the IDE settings).
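A sketch of that comparison (the project name and folders are hypothetical; -B forces a full build and -GD asks dcc32 for a detailed map file):

    rem Build from the command line with a detailed map file.
    dcc32 -B -GD MyProject.dpr
    rem Enable detailed map files in the IDE project options, build there
    rem too, then compare the two .map files.
    fc cmdline\MyProject.map ide\MyProject.map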
This behavior has existed in every version of Delphi I've used (5 through 2006). I wouldn't worry too much about it. When I first discovered it, I spent a lot of time trying to resolve the difference. Did I miss a compiler flag? Is there a discrepancy between the IDE's and the command-line compiler's supported options?
In the end I decided it wasn't that big of an issue. Both consistently produced functionally equivalent executables.
If you supply exactly the same params to the command-line compiler, the produced executables will be virtually identical.
In fact, the IDE just calls the command-line compiler. Compile your project in the IDE and look at the Messages window; you will see the full dcc32.exe call ...