(From https://groups.google.com/d/msg/bazel-discuss/HEpui0DLvnA/RzuwICDmBgAJ)
Forgive me if this has been asked and answered by the group/devs.
The list of "Declared include source" files is a component of the action key for C++ compiles.
This means that adding a header-extension file to the srcs or hdrs of a cc_* target invalidates every compile action that can see the declared list's contents (transitively, in the hdrs case).
Can anyone explain why this is necessary, when include pruning should already provide the minimal set of possible invalidation sources for a compile?
Reputation prevents me from commenting on your answer, and the repost means that I don't own the question, but there is more to the problem than just a 're-validation':
Declared inclusion sources are transitively derived, so all compiles in dependent targets are invalidated and re-executed, not simply re-validated (a term whose definition is fuzzy at best for me).
The point of this post was to discuss (hence bazel-discuss) whether there is any way, logistically, that previously compiled outputs could be affected by adding the definition of a header file (removal is out of scope) without changing any source involved in the previous compilation. The input set, which will have been pruned to match the header files actually used, should (must) be an accurate depiction of the only possible triggers for action re-execution. Without further changes to the actual contents of an action's input set, no compile can possibly come to depend on the newly added header.
When the rule's definition changes (you add a file to srcs/hdrs), Bazel must assume that change may affect the compilation result, even if none of the other files changed. (For example you just added a header that was missing before.)
If you rebuild the target, Bazel reruns the compilation action. If the output of that action (the object file) is the same as the last time Bazel ran it, Bazel will re-validate downstream actions without re-executing them.
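A hedged illustration of that behavior (all names invented):

cc_library(
    name = "base",
    srcs = ["base.cc"],
    hdrs = ["base.h", "util.h"],  # util.h is the newly added header
)

cc_library(
    name = "client",
    srcs = ["client.cc"],  # includes only base.h
    deps = [":base"],
)

Adding util.h changes the declared include sources visible to client's compile, so client.cc is recompiled; if its object file comes out bit-identical, anything downstream of it is only re-validated.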
Related
I want to create a tarball of a binary and all libs it depends upon using pkg_tar(). I can retrieve a list of the binary's dependencies with
deps = native.existing_rule('my_binary')['deps']
However, the items in the list lack the @repo_name// prefix that was specified in the cc_binary() rule. For example, @system//:ace becomes :ace; when I try to operate on :ace, bazel rightfully tells me there is no such target.
I've looked through the entire dictionary returned by native.existing_rule and don't see a way to find the missing info. Is it not possible to retrieve this information with native.existing_rule or similar?
I know I can write a macro that creates the cc_binary target and the pkg_tar target, sharing the list of deps between them. This would be more elegant - but it seems quite strange if the deps can't be retrieved from the rule.
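For reference, a hedged sketch of that macro approach (the macro name, attributes, and the pkg_tar load path are invented and depend on your setup):

load("//tools/build_defs/pkg:pkg.bzl", "pkg_tar")  # load path varies by setup

def cc_binary_with_tar(name, deps = [], **kwargs):
    native.cc_binary(name = name, deps = deps, **kwargs)
    pkg_tar(
        name = name + "_tar",
        srcs = [":" + name] + deps,  # deps keep their @repo// prefixes here
    )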
Have you considered using aspects? You can attach an aspect to dependencies of a given target and propagate information (in this case, fully-qualified label strings?) up to the root.
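A minimal, untested sketch of such an aspect (the provider and names are invented):

DepLabelsInfo = provider(fields = ["labels"])

def _collect_labels_impl(target, ctx):
    labels = [str(target.label)]  # fully qualified, e.g. @system//:ace
    for dep in getattr(ctx.rule.attr, "deps", []):
        if DepLabelsInfo in dep:
            labels += dep[DepLabelsInfo].labels
    return [DepLabelsInfo(labels = labels)]

collect_labels = aspect(
    implementation = _collect_labels_impl,
    attr_aspects = ["deps"],
)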
Let me know if you need any additional guidance!
I have a fork of the JavaScriptCore framework, where I have added a function of my own, which is exported. The framework compiles just fine. Running nm on the framework reveals that the function (JSContextCreateBacktrace_unsafe) is indeed exported:
Leo-Natans-Wix-MPB:JavaScriptCore.framework lnatan$ nm -gU JavaScriptCore.framework/JavaScriptCore | grep JSContextCreateBacktrace
00000000004cb860 T _JSContextCreateBacktrace
00000000004cba10 T _JSContextCreateBacktrace_unsafe
However, I am unable to obtain the pointer of that function using CFBundleGetFunctionPointerForName or dlsym; both return NULL. At first, I used dlopen to open my framework, then tried using CFBundleCreate and then CFBundleGetFunctionPointerForName but that also returns NULL.
What could cause this?
Update
Something fishy is going on. I renamed one of the JSC functions, and nm reflects this. However, dlsym is still able to find the function with the original name, rather than the renamed one.
It's hard to track this down since it's highly dependent on your specific environment and circumstances, but it is very likely you're running into this issue because the system image has already been loaded and you haven't changed the name of the framework.
If you look at the source code for dlopen in dyld/dyldAPIS.cpp:1458, you'll notice the context passed to dyld is configured with matchByInstallName = true. This context is then passed to load which executes the various stages necessary for image loading. There are a few phases worth noting:
loadPhase2 in dyld/dyld.cpp:2896 extracts the ending of the framework path and searches for it in the search path
loadPhase5check in dyld/dyld.cpp:2712 iterates over all loaded images and determines if any of them have a matching install name; if one does, it returns that image instead of loading a new one.
loadPhase5load in dyld/dyld.cpp:2601 finally loads the image if it wasn't loaded/found by any earlier step. (It's worth noting loadPhase5check is executed first, since image loading is a two-pass process.)
Given all of the above, I'd try renaming your framework to something besides JavaScriptCore.framework. Depending on the install name of both the system framework and your framework, I'd also recommend changing the install name. (There are plenty of blog articles and StackOverflow posts that document how to do this using install_name_tool -id.)
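One way to confirm which image dlsym actually resolved against is dladdr. A hedged sketch (the framework path is illustrative):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Illustrative path -- point this at your fork of the framework. */
    void *handle = dlopen("/path/to/JavaScriptCore.framework/JavaScriptCore", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    void *fn = dlsym(handle, "JSContextCreateBacktrace_unsafe");
    Dl_info info;
    if (fn && dladdr(fn, &info)) {
        /* If this prints the system framework's path, dyld matched the
           already-loaded image by install name instead of loading yours. */
        printf("resolved from: %s\n", info.dli_fname);
    }
    return 0;
}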
I've got an iOS framework that has a dependency on the (presumably Google maintained) pod called '!ProtoCompiler'. In order to build my framework I'm going to need it in the sandbox. So, I have a genrule and can try to include it with
srcs = glob(['Pods/!ProtoCompiler/**/*']), but I get the following error:
ERROR: BUILD:2:1: //Foo:framework-debug: invalid label 'Pods/!ProtoCompiler/google/protobuf/any.proto' in element 1118 of attribute 'srcs' in 'genrule' rule: invalid target name 'Pods/!ProtoCompiler/google/protobuf/any.proto': target names may not contain '!'.
As is, this seems like a total blocker for me using bazel to do this build. I don't have the ability to rename the pod directory. As far as I can tell, the ! prohibition is supposed to be for target labels; is there any way I can specify that this is just a file, not a label? Or are those two concepts completely melded in bazel?
(Also, if I get this to work, I'm worried about the fact that this produces a .framework directory, and it seems like rules are expected to produce files only. Maybe I'll zip it up and then unzip it as part of the build of the test harness.)
As far as I can tell, the ! prohibition is supposed to be for target labels; is there any way I can specify that this is just a file, not a label? Or are those two concepts completely melded in bazel?
They are mostly melded.
Bazel associates a label with every source file in a package that appears in a BUILD file, so you can write srcs = ["foo.cc", "//bar:baz.cc"] in a build rule and it will work regardless of whether foo.cc and baz.cc are source files, generated files, or the names of build rules that produce files suitable for that particular srcs attribute.
That said, you can of course have any file in the package, but if its name won't allow Bazel to derive a label from it, you can't reference it in a BUILD file. Since glob is evaluated during loading and expands to a list of labels, using glob won't work around this limitation.
(...) it seems like rules are expected to produce files only. Maybe I'll zip it up and then unzip it as part of the build of the test harness.
Yes, that's the usual approach.
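A hedged sketch of that approach (target and file names invented; $(SRCS) and $@ are standard genrule make-variables):

genrule(
    name = "framework_zip",
    srcs = glob(["Foo.framework/**"]),  # only works once the file names are label-safe
    outs = ["Foo.framework.zip"],
    cmd = "zip -q -r $@ $(SRCS)",  # zips the individual files into one artifact
)

Downstream rules can then depend on the single .zip output and unzip it where needed.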
I want to define a compile-time string constant and use it in the {$INCLUDE} directive in Delphi, for example:
{$define FILE_NAME "lockfile"}
{$include FILE_NAME'.txt.1'}
{$include FILE_NAME'.txt.2'}
...
For security reasons (this is part of our licensing system), we don't want to use normal strings and file reading functions. Is there any capability for this purpose in Delphi?
The $INCLUDE directive does not support indirection on the file name. So, the following code:
const
  someconst = 'foo';

{$INCLUDE someconst}
leads to the following error:
F1026 File not found: 'someconst.pas'
If you must use an include file, you must apply the indirection by some other means. One way could be to use the fact that the compiler will search for the included file by looking on the search path. So, if you place each client specific include file in a different directory, then you can add the client specific directory to the search path as part of your build process.
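A hedged sketch of that layout (directory and file names invented). In code, every customer build includes the same file name:

{$INCLUDE LicenseData.inc}

and the build script selects the customer's directory via the compiler's include search path (dcc32's -I switch):

dcc32 -I..\Customers\Acme MyProject.dpr
dcc32 -I..\Customers\Globex MyProject.dpr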
FWIW, I find it hard to believe that this will make your program more immune to hacking. I think the more likely outcome is that your program will be just as susceptible to hacking, but much more difficult and error-prone for you to build and distribute.
Your requirement may be better satisfied by proper use of a VCS. You need a branch for every customer, where customer-specific files contain customer-specific data. This avoids littering your code with complex directives to manage each customer: file names stay the same, just their content differs in each branch. Adding a new customer just requires creating a new branch and updating the files there.
Then you just need to check out each branch and compile it to get the final executable(s) with customer-specific data built in.
I have been trying to make a task in my TFS builds more generic, and one of the things I am trying to do is copy files to different directories depending on the build using the task. I toyed with the idea of using properties, but I couldn't think of a way to do that cleanly, so I went with item metadata, as I've already done in another place in the same target file; only this time, I'd like to combine it with properties.
Here's what I want to do:
<ItemGroup>
<DestinationParent Include="$(DeploymentPath)">
<DestinationParentPath>$(DeploymentPath)</DestinationParentPath>
</DestinationParent>
</ItemGroup>
And later in the build, I tried to copy some files to the destination folder by referencing the item metadata:
<Copy SourceFiles="#(FilesToCopy)" DestinationFiles="#(FilesToCopy->'%(DestinationParentPath)\Destination\%(RecursiveDir)%(Filename)%(Extension)')" ContinueOnError="false" ></Copy>
Unfortunately, after the build runs, my BuildLog shows the following:
Copying file from "$(BinariesRoot)\%(ConfigurationToBuild.FlavorToBuild)\<File being copied>" to "\Destination\<File being copied>".
%(DestinationParentPath) had expanded to an empty string, for whatever reason. Using %(DestinationParent.DestinationParentPath) produced an error, telling me that I should simply be using %(DestinationParentPath). $(DeploymentPath) is expanded to the correct string as expected in several other places in the build.
Another source of confusion is that using %(ConfigurationToBuild.FlavorToBuild) yielded the correct value, i.e. Test, as can be seen in the following:
EDIT: this is defined under the root node Project, whereas the ItemGroup with DestinationParentPath is defined under a Target node. Does this also make a difference?
<ItemGroup>
<ConfigurationToBuild Include="Test|Any CPU">
<FlavorToBuild>Test</FlavorToBuild>
<PlatformToBuild>Any CPU</PlatformToBuild>
</ConfigurationToBuild>
</ItemGroup>
It does not seem as though the Include attribute is relevant when you're only interested in the string in the item's metadata, since I'm pretty sure "Test|Any CPU" does not reference any actual file.
So once again, why is %(DestinationParentPath) expanding to an empty string?
EDIT: I forgot to mention that I also tried hard-coding the actual path for DestinationParentPath, but this still resulted in %(DestinationParentPath) expanding to an empty string.
EDIT: this is defined under the root node Project, whereas the ItemGroup with DestinationParentPath is defined under a Target node. Does this also make a difference?
Yes, it makes a difference. The ability to define an ItemGroup inside a Target is new to MSBuild 3.5. Despite looking declarative, it's actually executed at runtime, just as if you'd called the old-style CreateItem / CreateProperty tasks. That alone leads to potential issues: you need to consider when the containing target is (first) run, and the order of operations is not always obvious to the naked eye. It may be wise to make the target where you use %(DestinationParentPath) depend on the target where it's created, even if there's no "logical" dependency.
In addition, there are the age-old MSBuild scoping quirks/bugs: dynamically created properties and items are not visible to "sibling" targets, and items updated in nested builds aren't always bubbled up.
Check out the workarounds in the links; you should be able to find something that works for you, even if it's icky.
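For example, a hedged sketch (target names invented) that creates the item inside one target and batches over it in a dependent target:

<Target Name="PrepareDestinations">
  <ItemGroup>
    <DestinationParent Include="$(DeploymentPath)">
      <DestinationParentPath>$(DeploymentPath)</DestinationParentPath>
    </DestinationParent>
  </ItemGroup>
</Target>

<Target Name="CopyFiles" DependsOnTargets="PrepareDestinations">
  <!-- Batching on %(DestinationParent.DestinationParentPath) runs Copy once per
       unique metadata value instead of relying on cross-item metadata lookup. -->
  <Copy SourceFiles="@(FilesToCopy)"
        DestinationFolder="%(DestinationParent.DestinationParentPath)\Destination" />
</Target>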