We have to pass a special linkopts flag to cc_library rules that use <filesystem>, specifically for the GCC version that ships with Debian 10 (gcc 8.3).
I don't want to make developers pass a --config=old_gcc or similar at the top level.
I was hoping an incantation kind of like this would work:
linkopts = select({
    "@bazel_tools//tools/cpp:gcc": ["-lstdc++fs"],
    "//conditions:default": [],
}),
But a) gcc is not a configurable attribute that select() can match on, and b) more specifically, we should test that the version number is 8 (we'll only support 8 or above).
How do I extract an is_gcc8-like config_setting that I can select() on like this for targets using <filesystem>? TIA!
One way to do this is to change to using a manual CROSSTOOL setup instead of relying on the automatic crosstool setup (documentation here). This would allow you to specify a set of linker flags to apply when compiling with a certain combination of --cpu and --compiler.
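For illustration, if your manual CROSSTOOL registers the toolchain under a distinct --compiler value (the value gcc8 below is hypothetical; your CROSSTOOL would have to define it), a config_setting keyed on --compiler could then drive the select() without any top-level --config flag:

# Hypothetical sketch: matches builds run with --compiler=gcc8,
# which a manual CROSSTOOL setup would have to provide.
config_setting(
    name = "is_gcc8",
    values = {"compiler": "gcc8"},
)

cc_library(
    name = "uses_filesystem",
    srcs = ["uses_filesystem.cc"],
    linkopts = select({
        ":is_gcc8": ["-lstdc++fs"],
        "//conditions:default": [],
    }),
)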
In my objdump -t output, I see the following two lines:
00000000000004d2 l F .text.unlikely 00000000000000ec function-signature-goes-here [clone .cold.427]
and
00000000000018e0 g F .text 0000000000000690 function-signature-goes-here
I know l means local and g means global. I also know that .text is a section, or a type of section, in an object file, containing compiled program instructions. But what is .text.unlikely? Assuming it's a different section (or type-of-section) from .text - what's the difference?
In my GCC v5.4.0 manpage, I found the following switch:
-freorder-functions
which says:
Reorder functions in the object file in order to improve code
locality. This is implemented by using special subsections
".text.hot" for most frequently executed functions and
".text.unlikely" for unlikely executed functions. Reordering is done
by the linker so object file format must support named sections and
linker must place them in a reasonable way.
Also profile feedback must be available to make this option effective.
See -fprofile-arcs for details.
Enabled at levels -O2, -O3, -Os.
It looks like the compiler was run with optimization flags (or that switch) for this binary, and functions were organized into subsections to optimize for spatial locality.
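For a quick way to see these subsections without profile feedback: GCC also honors the hot and cold function attributes, which place functions into .text.hot and .text.unlikely directly. A minimal sketch (assumed file name example.c; compile with gcc -O2 -c example.c and inspect with objdump -t example.o):

/* GCC places cold-attributed functions into the .text.unlikely
 * subsection and hot-attributed ones into .text.hot at -O2,
 * even without profile data. */
__attribute__((cold)) void error_path(void) {
    /* rarely executed: error handling, logging, etc. */
}

__attribute__((hot)) void inner_loop(void) {
    /* frequently executed code */
}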
I have a Bazel rule that produces an artifact. What should I do to add a post-processing step that uses the produced artifact as a dependency?
We have a big build system where our macros are used in several BUILD files. So now I need to add another step that would use the artifacts produced by a specific macro to create another artifact, and hopefully without having to update all BUILD files.
In a non-Bazel context I would probably use something that triggers the extra step, but in the Bazel context, the best thing I have come up with has been to add a new macro that uses the rule created by the other macro as a dependency.
It is something like this today:
Macro M1 generates rule R1, which produces artifact A.
BUILD file B uses macro M1, and when that target is built, artifact A is produced.
So I can now add a macro M2 that generates rule R2, which produces artifact B. Artifact A is a dependency of this rule. The users will use macro M2 instead.
But can I do this in some other way?
An example use case: I have a macro that produces a binary, and I now want to add e.g. signing. "The users" will still want to build that binary, and the signed artifact is created as a by-product of little interest to them.
You could update M1 to call M2.
M1 calling M2 merely declares rules. Typically macros look like this:
def M1(name, arg1, ...):
    R1(name = name, arg1 = arg1, ...)
When you build M1's rule "//foo:bar", you actually build R1 named "//foo:bar". So you must update M1 to call R1 with some name other than name, e.g. name + "_dep", and call M2 with name, passing R1's target as a dependency. Then if you build "//foo:bar", you build M2's underlying rule (R2), which depends on R1, so Bazel first builds R1 (producing A) and then R2 (consuming A).
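A hedged sketch of that shape (the "_unsigned" suffix and the dep attribute name are invented for illustration; R1/R2 stand for your actual rules):

def M2(name, dep):
    # R2 post-processes (e.g. signs) the artifact produced by dep.
    R2(name = name, dep = dep)

def M1(name, **kwargs):
    # The original rule moves to an internal name...
    R1(name = name + "_unsigned", **kwargs)
    # ...so that //foo:bar now points at the post-processing rule.
    M2(name = name, dep = ":" + name + "_unsigned")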
One more thing: Bazel expands macros into actual rules when it loads the BUILD file. You can inspect the result of this expansion to see what rules you actually have in the package, like so:
bazel query --output=build //foo:*
I'm trying to understand config_setting for detecting the underlying platform and had some doubts. Could you help me clarify them?
What is the difference between x64_windows and x64_windows_(msvc|msys) cpus? If I create config_setting's for all of them, will only one of them trigger? Should I just ignore x64_windows?
To detect Windows, what is the recommended way? Currently I'm doing:
config_setting(
    name = "windows",
    values = {"crosstool_top": "//crosstools/windows"},
)

config_setting(
    name = "windows_msvc",
    values = {
        "crosstool_top": "//crosstools/windows",
        "cpu": "x64_windows_msvc",
    },
)

config_setting(
    name = "windows_msys",
    values = {
        "crosstool_top": "//crosstools/windows",
        "cpu": "x64_windows_msys",
    },
)
By using this I want to use :windows to match all Windows versions and :windows_msvc, for example, to match only MSVC. Is this the best way to do it?
What is the difference between darwin and darwin_x86_64 cpus? I know they match macOS, but do I need to always specify both when selecting something for macOS? If not, is there a better way to detect macOS with only one config_setting, like the //crosstools approach used for Windows above?
How do I detect Linux? I know you can detect the operating systems you care about first and then use //conditions:default, but it would be nice to have a way to detect Linux specifically rather than leaving it as the default.
What are k8, piii, etc? Is there any documentation somewhere describing all the possible cpu values and what they mean?
If I wanted to use //crosstools to detect each platform, is there somewhere I can look up all available crosstools?
Thanks!
Great questions, all. Let me tackle them one by one:
--cpu=x64_windows_msys triggers the C++ toolchain that relies on MSYS/Cygwin. --cpu=x64_windows_msvc triggers the Windows-native (MSVC) toolchain. --cpu=x64_windows triggers the default, which is still MSYS but is being converted to MSVC.
Which ones you want to support is up to you, but it's probably safest to support all for generality (and if one is just an alias for the other it doesn't require very complicated logic).
Only one config_setting can trigger at a time.
Unless you're using a custom --crosstool_top= flag to specify Windows builds, you'll probably want to trigger on --cpu, e.g.:
config_setting(
    name = "windows",
    values = {"cpu": "x64_windows"},
)
There's no great way right now to define "all Windows". This is a current deficiency in Bazel's ability to recognize platforms: settings like --cpu and --crosstool_top don't quite model platforms the right way. Ongoing work to create a first-class concept of platform will provide the best solution to what you want, but for now --cpu is probably your best option.
This would basically be the same story as Windows. But to my knowledge there's only darwin for default crosstools, no darwin_x86_64.
For the time being it's probably best to use the //conditions:default approach you'd rather not do. Once first-class platforms are available that'll give you the fidelity you want.
k8 and piii are pseudonyms for x86 64-bit and x86 32-bit CPUs, respectively. They also tend to be associated with "Linux" by convention, although this is not a guaranteed 1:1 match.
There is no definitive set of "all possible CPU values". Basically, --cpu is just a string that gets resolved in CROSSTOOL files to toolchains with identifiers that match that string. This allows you to write new CROSSTOOL files for new CPU types you want to encode yourself. So the exact set of available CPUs depends on who's using Bazel and how they have their workspace set up.
For the same reasons as 5., there is no definitive list. See Bazel's github tools/ directory for references to defaults.
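Putting the --cpu approach and the //conditions:default fallback together, a sketch (assuming a :darwin config_setting keyed on --cpu=darwin; target and file names are invented for illustration):

cc_library(
    name = "platform_lib",
    srcs = select({
        ":windows": ["impl_windows.cc"],
        ":darwin": ["impl_macos.cc"],
        # No Linux-specific config_setting exists yet, so Linux
        # falls through to the default branch.
        "//conditions:default": ["impl_linux.cc"],
    }),
)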
I'm setting up TFS 2015 on-prem and I'm having an issue on my last build step, Publish Build Artifacts. For some reason, the build agent appears to be archiving old binaries and I'm left with a huge filepath:
E:\TFSBuildAgent\_work\1a4e9e55\workspace\application\Development\project\WCF\WCF\obj\Debug\Package\Archive\Content\E_C\TFSBuildAgent\_work\1a4e9e55\workspace\application\Development\project\WCF\WCF\obj\Debug\Package\PackageTmp\bin
I'm copying the files using the example minimatch pattern to begin with:
**\bin
I'm only testing at the moment, so this is not a permanent solution, but how can I copy all binaries that are in a bin folder but not a descendant of obj?
From research I think that this should work, but it doesn't (It doesn't match anything):
**!(obj)**\bin
I'm using www.globtester.com to test. Any suggestions?
On a separate note, I'll look into the archiving issue later but if anyone has any pointers on it, feel free to comment. Thanks
In VSTS there are two kinds of pattern matching for file paths built into the SDKs. Most tasks nowadays use the Minimatch pattern as described in Matt's answer. However, some use the pattern format that was used by the 1.x agent's PowerShell SDK. That format is still available in the 2.x agent's PowerShell SDK, by the way.
So that means there are 5 kinds of tasks:
1.x agent - Powershell SDK
2.x agent - Node SDK
2.x agent - Powershell 1 Backwards compatibility
2.x agent - Powershell 3 SDK - Using find-files
2.x agent - Powershell 3 SDK - Using find-match
Of these, the 1.x PowerShell SDK, the PowerShell 1 backwards-compatibility mode, and find-files don't use Minimatch, but rather the format documented in the VSTS-Task-SDK's find-files method.
The original question was posted in 2015, at which point in time the 2.x agent wasn't yet around. In that case, the pattern would, in all likelihood, be:
**\bin\$(BuildConfiguration)\**\*;-:**\obj\**
The -: prefix excludes matching items from the results of the pattern(s) before it.
According to Microsoft's documentation, here is a list of
file matching patterns you can use. The most important rules are:
Match with ?
? matches any single character within a file or directory name; ?(hello|world) matches zero or one occurrence of hello or world.
Match with * or +
* matches zero or more characters within a file or directory name; +(hello|world) matches one or more occurrences of hello or world.
Match with @ sign
@(hello|world) matches exactly one occurrence of hello or world.
Match with parentheses ( ) and |
When parentheses are used with |, the | acts as a logical OR, e.g. *(hello|world) means "zero or more occurrences of hello or world".
Match with Double-asterisk **
** recursive wildcard. For example, /hello/**/* matches all descendants of /hello.
Exclude patterns with !
Leading ! changes the meaning of an include pattern to exclude. Interleaved exclude patterns are supported.
Character sets with [ and ]
[] matches a set or range of characters within a file or directory name.
Comments with #
Patterns that begin with # are treated as comments.
Escaping
Wrapping special characters in [] can be used to escape literal glob characters in a file name. For example the literal file name hello[a-z] can be escaped as hello[[]a-z].
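Applied back to the original question, on a 2.x agent a Minimatch-based Contents value could combine an include with a leading-! exclude (an untested sketch):

**\bin\**
!**\obj\**

The first line picks up everything under any bin folder; the second removes anything under an obj folder, including any bin folders that are descendants of obj.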
Example
The following expressions can be used in the Contents field of the "Copy Files" build step to create a deployment package for a web project:
**\?(*.config|*.dll|*.sitemap)
**\?(*.exe|*.dll|*.pdb|*.xml|*.resx)
**\?(*.js|*.css|*.html|*.aspx|*.ascx|*.asax|*.Master|*.cshtml|*.map)
**\?(*.gif|*.png|*.jpg|*.ico|*.pdf)
Note: You might need to add more extensions, depending on the needs of your project.
Running z3 -p with the latest (unstable) Z3 shows a list of parameters grouped by module. The instructions read:
To set a module parameter, use <module-name>.<parameter-name>=value
Example: pp.decimal=true
In general, how do these instructions translate to the C API? In the current documentation, there seems to be a set of API calls dealing with "global" configuration, e.g., Z3_set_param_value, and another object-specific set of calls built around the Z3_params type, such as Z3_solver_set_params.
In particular, I was wondering if I can use Z3_set_param_value to globally set any parameter in any module. Other StackOverflow answers advertise the use of Z3_params objects even for global parameters, like timeout (or is it :timeout?), but it's not clear to me how this API maps to the module.parameter=value syntax.
The module/name parameters are mainly for the command-line version of Z3.
Global parameters are meant to be set once in the beginning and will then be valid for all subsequent calls. We introduced this parameter-setting scheme together with the new strategies/goals/solvers/tactics/probes interface because we needed different configurations of tactics, and the Z3_params object is meant to be used mainly for that. For instance, Z3_tactic_using_params creates a new tactic that is a reconfiguration of another tactic based on the options in the Z3_params object.
Note, however, that when creating tactics through the API, there are no modules (the tactic you create doesn't live in a Z3-internal "parameter module"). For example, in the strategies tutorial (see here), a tactic is constructed and applied as follows:
(check-sat-using (then (using-params simplify :arith-lhs true :som true)
                       normalize-bounds
                       lia2pb
                       pb2bv
                       bit-blast
                       sat))
So, the parameters "arith-lhs" and "som" are enabled for the simplifier. On the command line, the same option lives in the "rewriter" module, i.e., it would be rewriter.arith_lhs=true, and if it is enabled on the command line, it will be enabled every time the simplifier is called.
A list of tactics and the parameters each recognizes can be obtained by running (on Windows and Linux, respectively):
echo (help-tactic) | z3 -in -smt2
echo "(help-tactic)" | z3 -in -smt2
Another thing to note is that parameters in a Z3_params object are not checked in any way, i.e., it is possible to provide a bogus parameter name; Z3 will not complain or issue a warning, and the tactics will simply ignore that parameter.
The : in front of parameter names is a left-over of Lisp, which is the basis for the SMT2 format. See, e.g., here: Why colons precede variables in Common Lisp. They are only necessary when using the SMT2 input language. So, the SMT2 command
(set-option :timeout 2000)
is meant to be equivalent to the commandline parameter
timeout=2000
Since the question explicitly mentions the timeout parameter: We recently had some issues with timeout handling on OSX, it may be necessary to get the latest fixes, and of course there may be more bugs that we didn't find yet.
In the C API, the function Z3_global_param_set is used to set global parameters and also default module parameters (e.g., pp.decimal). These parameters are shared by all Z3_context objects created afterwards, and they are used whenever one of the built-in tactics is applied.
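To make that mapping concrete, a minimal C sketch (assuming a current Z3 header; the parameter names use the internal spellings and, as noted above, are not validated):

#include <stdbool.h>
#include <z3.h>

int main(void) {
    /* Global/default module parameters: set once, before creating
       contexts; they apply to all contexts created afterwards. */
    Z3_global_param_set("pp.decimal", "true");

    Z3_config cfg = Z3_mk_config();
    Z3_context ctx = Z3_mk_context(cfg);
    Z3_del_config(cfg);

    /* Tactic-scoped parameters go through a Z3_params object. Note the
       module-less spelling ("arith_lhs", not rewriter.arith_lhs);
       bogus names are silently ignored. */
    Z3_params p = Z3_mk_params(ctx);
    Z3_params_inc_ref(ctx, p);
    Z3_params_set_bool(ctx, p, Z3_mk_string_symbol(ctx, "arith_lhs"), true);
    Z3_params_set_bool(ctx, p, Z3_mk_string_symbol(ctx, "som"), true);

    /* Reconfigure the simplify tactic with those options. */
    Z3_tactic simplify = Z3_mk_tactic(ctx, "simplify");
    Z3_tactic_inc_ref(ctx, simplify);
    Z3_tactic configured = Z3_tactic_using_params(ctx, simplify, p);
    Z3_tactic_inc_ref(ctx, configured);

    /* ... use it, e.g. via Z3_mk_solver_from_tactic(ctx, configured) ... */

    Z3_tactic_dec_ref(ctx, configured);
    Z3_tactic_dec_ref(ctx, simplify);
    Z3_params_dec_ref(ctx, p);
    Z3_del_context(ctx);
    return 0;
}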