How to combine bazel aspects and cc_library

I want to build a rule that is very similar to cc_proto_library. The key features are that it would apply an aspect to all the transitive proto_library dependencies and generate .cc and .h files for all of the dependencies. In addition it would generate actions that would compile these into object files.
While I understand how I can do the file generation, I don't see an easy way to do the object-file generation. The native module is not available to rule (or aspect) implementations, and I cannot use a macro on top of the aspect because I need the object files to be generated in the same package as the proto_library so that they are generated only once.
cc_proto_library can do this, I believe, because it is not written in Skylark and thus has access to more primitives. Is there any way to do this with just Skylark?

Unfortunately, this is not currently possible. There is no Skylark API to the C++ rules/actions (what we call the C++ sandwich). We have plans to implement this in Q1 2018. There are many tracking issues; this one looks the most relevant: https://github.com/bazelbuild/bazel/issues/2163.
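For reference, the file-generation half that the question describes is already doable from an aspect. Below is a minimal sketch, assuming a hypothetical generator binary at //tools:my_gen and present-day provider names (ProtoInfo; older releases expose the same data as target.proto); only the final compile-to-object step is what lacks a Skylark API.

    # proto_gen.bzl - sketch only; provider and attribute names vary across Bazel versions.
    CcGenInfo = provider(fields = ["files"])

    def _gen_cc_aspect_impl(target, ctx):
        outputs = []
        for src in target[ProtoInfo].direct_sources:
            base = src.basename[:-len(".proto")]
            cc_out = ctx.actions.declare_file(base + ".pb.cc")
            h_out = ctx.actions.declare_file(base + ".pb.h")
            ctx.actions.run(
                executable = ctx.executable._generator,  # hypothetical code generator
                arguments = [src.path, cc_out.path, h_out.path],
                inputs = [src],
                outputs = [cc_out, h_out],
            )
            outputs += [cc_out, h_out]
        # Merge with the files already generated for the transitive proto_library deps.
        transitive = [dep[CcGenInfo].files for dep in ctx.rule.attr.deps]
        return [CcGenInfo(files = depset(outputs, transitive = transitive))]

    gen_cc_aspect = aspect(
        implementation = _gen_cc_aspect_impl,
        attr_aspects = ["deps"],
        attrs = {
            "_generator": attr.label(
                default = "//tools:my_gen",
                executable = True,
                cfg = "exec",
            ),
        },
    )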

Related

Migrating maven-shade-plugin rules to bazel

I am currently investigating bazel as a tool to speed up java builds. I have a somewhat complex build to handle, including shading of many libs.
This shading is performed today using maven-shade-plugin. I could not find a bazel equivalent.
The solution should be able to:
aggregate multiple input jars
filter in/out files
specify which artifacts to include
configure relocations (!)
propose a mechanism equivalent to resource transformers https://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html
If this is out of reach, I would be very interested in some generic way to specify some input, some output, and "something" to launch to produce the latter from the former.
Any java_binary has an implicit _deploy.jar output that contains all classes and is similar to the shaded jar:
name_deploy.jar: A Java archive suitable for deployment (only built if explicitly requested)
The deploy jar contains all the classes that would be found by a classloader that searched the classpath from the binary's wrapper script from beginning to end.
https://docs.bazel.build/versions/master/be/java.html#java_binary_implicit_outputs
But I don't think bazel provides any of the other features that you are requesting.
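For example, given a java_binary like the following (all names are illustrative), building the implicit target //:server_deploy.jar produces the single self-contained jar described above:

    # BUILD - illustrative names only
    java_binary(
        name = "server",
        srcs = glob(["src/main/java/**/*.java"]),
        main_class = "com.example.Main",
        deps = ["//third_party:guava"],  # hypothetical dependency
    )
    # bazel build //:server_deploy.jar builds the implicit deploy jar.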

Using large non-bazel dependencies in a bazel project

I would like to use a very large non-bazel system in a bazel project. Specifically, ROS2. This dependency provides a large number of Python, C, and C++ libraries which are built using its own hand-rolled build system. Obviously, I would like to avoid having to translate the entire build system over to bazel.
Broadly, what's the best way of doing this? My instinct was to use a custom repository rule to download the source (since it's split across many repositories), then use a genrule to call the ROS2 build system. Then I would write simple cc_import and py_library rules for each of the individual components that I need.
However, I'm having trouble with the bit where I need to call the foreign build system. It seems that genrules require a list of output files to be specified, whereas I would like to make an entire build directory available.
Before I spend any more time on this, I thought I'd ask whether I'm on the right lines, since I'm new to bazel. Is this a good strategy? How would you approach this problem? Are there any other projects that mainly use bazel but call other build systems in this way that I can look at?
More recently, you can use rules_foreign_cc to build native CMake or configure/make-style projects.
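A rough sketch of what that can look like for a CMake-based dependency; the load path and attribute names differ between rules_foreign_cc releases, and the source and library names here are made up:

    # BUILD - sketch only; check the rules_foreign_cc docs for your version.
    load("@rules_foreign_cc//foreign_cc:defs.bzl", "cmake")

    filegroup(
        name = "ros2_srcs",
        srcs = glob(["**"]),  # everything fetched by the repository rule
    )

    cmake(
        name = "rclcpp",
        lib_source = ":ros2_srcs",
        out_static_libs = ["librclcpp.a"],  # hypothetical output library
    )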

Can Bazel aspects access the current target's java_common.provider

As part of our efforts to create a bazel-maven transition interop tool (one that creates maven-sized jars from more granular bazel jars),
we want the aspect that runs during bazel build to access the target's java_common.provider in order to fetch the jars and ijars from it.
Is that possible?
The short answer is yes, that's possible.
You can use the java_common module in an aspect implementation the same way you would use it in a rule implementation.
From the documentation on java_common.provider:
java_common.provider.compile_jars and java_common.provider.transitive_compile_time_jars refer to the ijars used at compilation time
java_common.provider.transitive_runtime_jars refer to the full jars used at runtime.
The full jars at compilation time are not yet available, but someone is working on exposing this feature. (Issue #3528 on GitHub.)
Make sure you also read the blog post on this topic: https://blog.bazel.build/2017/03/07/java-sandwich.html
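A minimal sketch of such an aspect follows (java_common.provider is the same provider later known as JavaInfo, so the fields below are available either way); the attr_aspects list is just an example of the dependency edges you might want to traverse:

    # jars.bzl - sketch of an aspect that reads a target's java_common.provider.
    def _collect_jars_impl(target, ctx):
        if java_common.provider not in target:
            return []
        info = target[java_common.provider]
        print("compile-time ijars:", info.compile_jars.to_list())
        print("transitive compile-time ijars:", info.transitive_compile_time_jars.to_list())
        print("transitive runtime (full) jars:", info.transitive_runtime_jars.to_list())
        return []

    collect_jars = aspect(
        implementation = _collect_jars_impl,
        attr_aspects = ["deps", "exports", "runtime_deps"],
    )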

How to identify what projects have been affected by a code change

I have a large application to manage, consisting of three or four executables and as many as fifty .dlls. Many of the source code files are shared across many of the projects.
The problem is a familiar one to many of us - if I change some source code I want to be able to identify which of the binaries will change and, therefore, what it is appropriate to retest.
A simple approach would be to compare file sizes. That is an 80% acceptable solution, but there is at least a theoretical possibility of missing something. Secondly, it gives me very little indication as to WHAT has changed; it would be ideal to get some form of report on this so I can then filter out irrelevant changes (e.g. dates, versions, copyrights, etc.).
On the plus side:
all my .dcus are in a row - I mean they are all built into a single folder
the build is controlled by a script (.bat) (easy, for example, to emit .obj files if that helps)
svn makes it easy to collect together any (two) revisions for comparison
On the minus side:
There is no policy to include all used units in all projects; some units get included because they are on a search path.
Just knowing that a changed unit is used/compiled by a project is not sufficient proof that the binary is affected.
Before I begin writing some code to solve the problem I would like to ask the panel what suggestions they might have as to how to approach this.
The rules of StackOverflow forbid me from asking for recommended software, but if anyone has any positive experiences with continuous integration tools that would help - great.
I am open to any suggestion or observation that is relevant in this context.
It seems to me that your question boils down to knowing which units are contained in your various executables. Since you are using search paths, it will be hard for you to work this out ahead of time. The most robust way to find out is to consult the .map file that the compiler emits. This contains a list of all units contained in your executable.
Once you know which units are contained in each executable, you need to know whether or not anything has changed in those units. That information is contained in your revision control system. Put this all together and you have the information that you need.
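A rough sketch of extracting the unit list from a detailed .map file; it assumes the "Line numbers for Unit(Unit.pas) segment ..." section headers that the Delphi linker emits, which may differ slightly between versions:

    # map_units.py - sketch: list units mentioned in a detailed Delphi .map file.
    import re
    import sys

    # Assumed header format: "Line numbers for MyUnit(MyUnit.pas) segment .text"
    HEADER = re.compile(r"Line numbers for (\w+)\(")

    def units_in_map(map_path):
        units = set()
        with open(map_path, encoding="latin-1") as f:
            for line in f:
                match = HEADER.search(line)
                if match:
                    units.add(match.group(1))
        return sorted(units)

    if __name__ == "__main__":
        for unit in units_in_map(sys.argv[1]):
            print(unit)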
Of course, just because the source code for a unit has changed, you might argue that re-testing is not needed. Perhaps the only change made was the version, or the date in a copyright label, or some such. But it is asking too much to expect a computer to make such a judgement. At some point you need a human to step up and take responsibility.
What is odd about this though is that you are asking the question at all. It seems to me to be enormously risky to attempt partial testing. I cannot understand why you don't simply retest the entire product.
After using it for more than 10 years for commercial in-house and freelance work on large projects, I can recommend trying Apache Ant. It is a build tool that supports dependencies and has many very helpful features.
Apache Ant also integrates nicely with CI tools such as Hudson/Jenkins, Bamboo etc.
Another suggestion - based on experience with Maven - is to design the general software architecture to be as modular as possible. If modules (single or multiple source or DCU files in one directory) include a version number in the directory name, it is possible to control exactly how applications are composed from these modules.
If you want to program such a tool yourself, the approach would be something like this:
First you need to detect whether any changes were made to individual source files. As you already figured out, comparing the file size is a bad idea, since the size can stay the same despite lots of changes (as long as there is the same amount of text in the .pas file, its size won't change). Instead you could check the last modification time of each file, or compute a hash value such as an MD5 hash for comparison (which can be quite slow).
Then you need to generate a dependency tree which tells you which files are used by which project/subproject.
Finally, based on the changes detected in individual files, you check the dependency tree to see which projects need to be recompiled.
The problem with such an approach is that you would probably have to update the dependency tree manually each time a new unit is added to a project or an existing one is removed.
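A rough sketch of that approach as a standalone script; the dependency map here is hand-maintained, exactly as warned above, and all file and project names are made up:

    # affected.py - sketch of the hash-plus-dependency-tree approach.
    import hashlib

    # Hand-maintained map from project to the units it compiles (hypothetical names).
    DEPENDENCIES = {
        "App1.exe": ["Utils.pas", "Db.pas"],
        "Lib1.dll": ["Utils.pas", "Net.pas"],
    }

    def file_hash(path):
        # MD5 of the file contents; slower than checking timestamps but more reliable.
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    def changed_files(paths, old_hashes):
        # old_hashes: {path: hash} recorded after the previous build.
        return {p for p in paths if file_hash(p) != old_hashes.get(p)}

    def affected_projects(changed):
        return sorted(project for project, units in DEPENDENCIES.items()
                      if changed.intersection(units))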
But the best way would be to use some version control software instead of reinventing the wheel. I myself like the way Git works, and I believe that with a proper integration of Git into the project manager itself it could be quite powerful, due to Git's support for branching/sub-branching (each project is its own branch, each version of your software can be its own sub-branch).
The latest version of Delphi does have Git integration, done through SVN, but this unfortunately limits some of Git's best functionality. So if you decide to integrate Git support directly into Delphi, I'm first in line to use it.

How to obtain a file with the content of all include files explicitly included?

George "Mirage" Bakhtadze, the author of Cast II engine, has wrote about an include-based technique which can be used to create generic containers and algorithms. The source is avaiable from the repo at Github. For me, his include-based technique is very interesting and useful, because it can be used for older Delphi and it is compatible between Delphi and Free Pascal (and non-Windows OS ready).
It would be more useful for me if the _GenVector written in "gen_coll_vector.inc" had Sorted & Duplicates properties and the related behavior (behaving the same way as in TStringList).
However, it is less obvious to me where to insert the code when there are many include directives (I wonder how George managed this in the first place). Therefore, I wonder whether it is possible to obtain a sample file with all include files explicitly included? It might be more straightforward for me to start from there.
I mean: is there some built-in pre-processor that works before the actual compilation, and is there a way to keep the intermediate files it produces?
Delphi does not use a pre-processor. It is (and always has been, since Turbo Pascal days) a single-pass compiler. There is no intermediate step. When you {$I} to include files, they are inserted in place in memory during the compilation process. Therefore, there is no "intermediate file" that can be kept.
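You can, however, approximate such a flattened file with a small external script that inlines the {$I ...} directives yourself. A rough sketch, which ignores {$IFDEF} conditional compilation, comments, and unit search paths:

    # expand_includes.py - sketch: recursively inline {$I file} / {$INCLUDE file}
    # directives to produce one flattened source file. No guard against circular includes.
    import os
    import re
    import sys

    # The required whitespace keeps compiler switches like {$I+}/{$I-} from matching.
    INCLUDE = re.compile(r"\{\$(?:I|INCLUDE)\s+([^}]+)\}", re.IGNORECASE)

    def expand(path):
        base = os.path.dirname(path)
        out = []
        with open(path, encoding="latin-1") as f:
            for line in f:
                match = INCLUDE.search(line)
                if match:
                    # Replace the whole directive line with the expanded include file.
                    out.append(expand(os.path.join(base, match.group(1).strip())))
                else:
                    out.append(line)
        return "".join(out)

    if __name__ == "__main__":
        sys.stdout.write(expand(sys.argv[1]))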
