How do I instruct Bazel to generate C/C++ assembly files?

How do I get Bazel to write out the assembly for a C/C++ program? I'd like to inspect the assembly without having to make many changes to my current build system.
For my specific use case I need to generate the assembly for all linked files when I run bazel(isk) test. I would like to take a look at the assembly to check on some inlining behavior.
If I can't narrow the assembly generation to only occur when testing, that's fine; it's not a big deal to have to switch that on and off.
What would be nice, however, is a way to control this for several compilers with one configuration (file). Hence the question is not about producing assembly with gcc or clang, but about Bazel.

--save_temps will do this for the configured compiler.
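For example, assuming a hypothetical target //src:game, you can pass the flag on a single invocation or put it in your .bazelrc so it also applies when you run bazel(isk) test:

    # one-off invocation
    bazel build --save_temps //src:game

    # or in .bazelrc, so every build (and therefore every test) gets it
    build --save_temps

The generated .s (and .i) files typically land under bazel-bin in the target's _objs directory, and because the flag is handled by the configured C++ toolchain, the same setting works whether that toolchain wraps gcc or clang.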

Related

Handling complex and large dependencies

Problem
I've been developing a game in C++ in my spare time and I've opted to use Bazel as my build tool, since I have never had a ton of luck (or fun) working with make or cmake. I also have dependencies in other languages (Python for some of the high-level scripting). I'm using glfw for basic window handling and high-level graphics support, and that works well enough, but now comes the problem: I'm uncertain how I should handle dependencies like glfw in a Bazel world.
For some of my dependencies (like gtest and fruit) I can just reference them in my WORKSPACE file and Bazel handles them automagically, but glfw hasn't adopted Bazel. So all of this leads me to ask: what should I do about dependencies that don't use Bazel inside a Bazel project?
Current approach
For many of the simpler dependencies I have, I simply created a new_git_repository entry in my WORKSPACE file and created a BUILD file for the library. This works great until you get to really complicated libraries like glfw that have a number of dependencies on their own.
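For the simple cases, the pattern looks roughly like this (repository name, URL, commit, and file layout are all placeholders; depending on the Bazel version, new_git_repository is either built in or loaded from @bazel_tools):

    # WORKSPACE
    load("@bazel_tools//tools/build_defs/repo:git.bzl", "new_git_repository")

    new_git_repository(
        name = "some_lib",
        remote = "https://github.com/example/some_lib.git",
        commit = "<pinned revision>",
        build_file = "//third_party:some_lib.BUILD",
    )

    # third_party/some_lib.BUILD
    cc_library(
        name = "some_lib",
        srcs = glob(["src/*.c"]),
        hdrs = glob(["include/*.h"]),
        includes = ["include"],
        visibility = ["//visibility:public"],
    )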
When building glfw for a Linux machine running X11, you now have a dependency on X11, which would mean adding X11 to my Bazel setup. X11 comes with its own set of dependencies (the X11 libraries like X11Cursor) and so on.
glfw also tries to provide basic joystick support, which on Linux comes from the kernel. That's great, except it means the kernel is also a dependency of my project. Even though I shouldn't need anything more than the kernel headers, this still seems like a lot to bring in.
Alternative Options
The reason I took the approach I've taken so far is to make the dependencies required to spin up a machine that can successfully build my game very minimal. In theory they just need a C/C++ compiler, Java 8, and Bazel and they're off to the races. This is great since it also means I can create a Docker container that has Bazel installed and do CI/CD really easily.
I could sacrifice this ease and just say that you need to have libraries like glfw installed before attempting to compile the game, but that brings back the whole "which version is installed and how is it configured" problem that Bazel is supposed to help solve.
Surely there is a simpler solution and I'm overthinking this?
If the glfw project has no BUILD files, then you have the following options:
Build glfw inside a genrule.
If glfw supports some other build system like make, you could create a genrule that runs the tool. This approach has obvious drawbacks, like the not-to-be-underestimated impracticality of having to declare all inputs of that genrule, but it'd be the simplest way of Bazel'izing glfw.
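As a sketch of the shape only (glfw really uses CMake, so the exact commands, paths, and output name are placeholders, and a real rule would also have to cope with Bazel's sandboxing):

    genrule(
        name = "glfw_build",
        srcs = glob(["glfw/**"]),   # you must declare every file the external build reads
        outs = ["libglfw3.a"],
        cmd = """
            tmp=$$(mktemp -d)
            cp -r $$(dirname $(location glfw/CMakeLists.txt))/* $$tmp
            (cd $$tmp && cmake . && make)
            cp $$tmp/src/libglfw3.a $@
        """,
    )

Anything the external build needs that is not listed in srcs (system headers, X11 libraries, the cmake binary itself) is invisible to Bazel, which is exactly the drawback described above.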
Pre-build glfw.o and check it into your source tree.
You can create a cc_library rule for it and put the .o file in the srcs. This is the least flexible solution of all: you not only restrict the target platform to whatever the .o was built for, but also make the whole build harder to reproduce. Still, the benefits are sometimes worth the costs.
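A sketch of that rule (the file names are hypothetical, though the include/GLFW layout matches how glfw ships its headers):

    cc_library(
        name = "glfw",
        srcs = ["prebuilt/glfw.o"],            # the checked-in object file
        hdrs = glob(["include/GLFW/*.h"]),
        includes = ["include"],
        visibility = ["//visibility:public"],
    )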
I view this approach as a last resort. Even in Bazel's own source code there's one cc_library.srcs that includes a raw object file, because it was worth it, as the commit message of 92caf38 explains.
Require that glfw be installed.
You already considered this option. Some people may prefer this to the other approaches.

How to deal with tangled uses dependencies in order to start unit testing?

I have a messy Delphi 7 legacy system to maintain and develop. I am already reading "Working Effectively with Legacy Code" and I like this book very much.
In order to start following the advice in the book, I created a test project and tried to write a single test. To do this I need to add some unit to the test project, but here lies the problem: the system under test has horrific uses dependencies. One unit uses some other unit, which uses yet another unit, and so on. It seems that most units directly or indirectly use one particular unit, and this unit in turn has 170 dependencies in its uses clause. There are indirect circular dependencies as well.
Currently I am trying to add all of the legacy system's units into the test project, but I am running into all kinds of problems, like "unit xxx was compiled with a different version of xxx", and others.
So I wonder if I am doing something wrong. I have used unit testing before, but in my own projects, which were smaller and better structured and modularized. What are the options I have in this situation? Am I missing something?
You will always have dependencies in your code. Well, as long as you have code re-use, you will have dependencies. Since you are testing a legacy system, wholesale re-structuring is out of the question.
So you simply need to accept the dependencies. The most convenient and practical approach is to have a single unit tests project. That project contains all your unit tests. Use the facilities of your runner program to run only specific tests at any one time.
This leads to your test project having the same list of units in its .dpr file as the main project. That's what you have currently tried, and it's the right approach.
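With DUnit (the de facto test framework for Delphi 7), such a test project's .dpr ends up looking roughly like this sketch, with made-up unit names; each Test* unit registers its cases via RegisterTest in its initialization section:

    program LegacyTests;

    uses
      TestFramework,
      GUITestRunner,
      { the same long list of units as the main project's .dpr ... }
      CustomerLogic in '..\src\CustomerLogic.pas',
      OrderLogic in '..\src\OrderLogic.pas',
      { ... plus the test case units themselves }
      TestCustomerLogic in 'TestCustomerLogic.pas';

    begin
      GUITestRunner.RunRegisteredTests;
    end.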
It sounds like you are sharing the DCU directory (unit output directory) between the main project and the unit tests project, while the two projects have different compiler options. That's the most likely explanation for the error you report.
There are a couple of obvious solutions:
1. Align the compiler options for both projects. Then they can share DCUs.
2. Have separate DCU directories for the two projects.
Option 2 is much more robust and is best practice. However, you should try to understand why the compiler options differ. It's quite possible that your compiler options in the new unit tests project will need to be changed so that the units under test compile and function as desired. In modern Delphi I would use option sets to ensure consistency of compiler options.
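A minimal sketch of option 2 with Delphi 7's command-line compiler, using placeholder paths (-N is the unit output directory switch there; later versions spell it differently, and the same settings can of course be made per project in the IDE's Project Options or .cfg file):

    rem main project
    dcc32 -B -N"build\dcu\main"  Game.dpr

    rem unit tests project
    dcc32 -B -N"build\dcu\tests" LegacyTests.dpr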
Now, there may be other technical problems that you are facing, and my explanation of the error may not be quite right since I'm having to guess a little. But the bottom line is that having the same list of units in your .dpr files is the way to go.

How should I maintain JDK7 projects so that they can automatically be downgraded for JDK6?

I have a few APIs of my own, with around 2000 classes overall. Some of them use the new Path API from JDK7. Most other classes, however, do not rely on any new JDK APIs or new language features, so most classes could be used in a JDK6 environment (which is what I plan to do). Let's assume I've annotated all JDK7-only classes with @Java7Only.
What I need now is a way to create a JDK6-only subset of all my projects more or less automatically, without introducing new version branches or product lines (that would be too complicated to maintain).
All projects are created using Netbeans, thus using Ant. Many projects depend on others.
Please help me evaluate which of these ideas is most appropriate for my problem. Which problems could occur with each idea?
Common first step for all ideas
Let an annotation processor search for @Java7Only-annotated classes and store the list in a properties file.
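A rough sketch of what that processor could look like (the annotation's package and the output file name are made up, and error handling is minimal):

    import java.io.Writer;
    import java.util.Set;
    import java.util.TreeSet;
    import javax.annotation.processing.*;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;
    import javax.tools.StandardLocation;

    @SupportedAnnotationTypes("com.example.Java7Only")
    @SupportedSourceVersion(SourceVersion.RELEASE_7)
    public final class Java7OnlyListingProcessor extends AbstractProcessor {

        private final Set<String> java7Only = new TreeSet<String>();

        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
            // collect the fully qualified names of all @Java7Only-annotated classes
            for (TypeElement annotation : annotations) {
                for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                    java7Only.add(e.toString());
                }
            }
            // in the last round, dump the list so the build scripts can read it
            if (roundEnv.processingOver()) {
                try {
                    Writer w = processingEnv.getFiler()
                            .createResource(StandardLocation.SOURCE_OUTPUT, "", "java7only.properties")
                            .openWriter();
                    for (String name : java7Only) {
                        w.write(name + "=true\n");
                    }
                    w.close();
                } catch (Exception ex) {
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR, ex.toString());
                }
            }
            return false;  // do not claim the annotation, so other processors still see it
        }
    }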
Idea 1 (specific)
Write a tool which would use the properties file to recursively copy the whole project, except JDK7-only files.
Build the copied project using JDK6 by invoking ant, thus getting a JDK6-compliant jar.
Idea 2 (specific)
Write a second annotation processor which would use the properties file to pass everything except JDK7-only files to a JavaCompiler instance.
Either build a jar using Java APIs or use Ant API for that.
(This would be a Java-only idea, but probably too complicated)
Idea X (abstract)
Somehow influence the Ant build process (by overriding some targets?) and, for each JDK6-compliant class, have Ant compile two versions of it (once with the JDK6 compiler, once with the JDK7 compiler).
(JDK7-only classes would be compiled only once, using the JDK7 compiler, of course)
Package each bunch to a separate jar.
Possible problems common to the ideas
Some projects depend on others, so some actions (such as packaging) have to take this into account.
Remember: the JDK7 compiler generates downward-incompatible class files; that's why every idea has to operate at the source level (before or during the build process, not afterwards).
My thoughts on Idea 2:
Essentially this is invoking a compiler within a compiler. Annotation processors are run as part of compilation. Can this be done safely? Is there any static state in Sun's javac that would cause problems? (I don't know the answer, but from memory there might be some static state that could cause trouble in this scenario.)
Idea 1 seems simpler and better to me.
But taking a step back, is it possible to separate out all the JDK 7 specific stuff into a separate module and compile it separately, into a different JAR?
Have the 'main' project compiled using JDK 6 (which JDK 7 has no problem reading, because it is backwards compatible).
The JDK 7 specific module(s), with their source in a different directory and the 'main' JAR on the compilation classpath, could then be built separately, with a different build.xml if necessary.
This only partially applies, but I thought I'd mention it anyway.
The problem with just using the -source 1.6 -target 1.6 options for validation is that you can still use the Java 7 API when compiling with JDK 7.
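For instance, this class compiles without error under JDK 7 with javac -source 1.6 -target 1.6 (only a bootclasspath warning is emitted, because the language level is checked but the API is not), yet it fails with NoClassDefFoundError on a JDK 6 runtime:

    import java.nio.file.Path;   // JDK 7-only API
    import java.nio.file.Paths;

    public class UsesNewApi {
        public static void main(String[] args) {
            Path p = Paths.get("some", "file.txt");  // accepted despite -source/-target 1.6
            System.out.println(p.toAbsolutePath());
        }
    }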
I've used the Animal Sniffer Maven Plugin for a few projects now and it has proved quite useful. This plugin scans the byte-code of your classes for JDK API usage. That is, you can tell it to fail the build if you attempt to use JDK 7 API when you are targeting JDK 6. This won't help much with separating out the classes as you need, but it could be useful as a final validation step combined with the -source 1.6 -target 1.6 compiler options.
There is also an Animal Sniffer Ant plugin, as mentioned on the Animal Sniffer main page.

Does gcov give code coverage analysis for assembly language code

I have an application which I build using gcc on a Linux host for an ARM target processor. I then execute the generated ARM executable on an ARM development board I have.
I want to do some code coverage analysis:
Will gcov show code coverage if I have ARM assembly source files in my build environment?
If my build environment has some x86 assembly source files, will gcov show code coverage data?
Thank you.
-AD.
AFAIK, gcov works by preprocessing your C or C++ source code. If you have pure assembly language files, I don't think gcov ever sees them. Even if it did, I'd be surprised if it understood how to safely insert code into assembly for an arbitrary target, though ARM is common enough that there's a faint chance.
The problem with instrumenting assembly code is that the coverage probe code itself may require registers, and there isn't a safe way to know, for an arbitrary piece of assembly, a) which registers are available, and b) whether an inserted instruction will break some other instruction because of the extra space it takes (e.g., a hardwired relative jump across the inserted instruction).
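For comparison, the usual gcov flow applies only to the C/C++ translation units; assembly files in the same build are assembled but produce no coverage data. A sketch with a placeholder cross-toolchain name:

    # compile with coverage instrumentation (--coverage = -fprofile-arcs -ftest-coverage)
    arm-linux-gnueabi-gcc --coverage -c main.c    -o main.o
    arm-linux-gnueabi-gcc --coverage -c startup.S -o startup.o   # assembled, but not instrumented
    arm-linux-gnueabi-gcc --coverage main.o startup.o -o app

    # run ./app on the board, copy the generated *.gcda files back next to the *.gcno files, then:
    gcov main.c        # writes main.c.gcov with per-line execution counts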

Why is the executable produced by Delphi 2009 IDE different to that produced on the command line?

I'm producing builds using MSBuild and build configurations set up in the .dproj, on the command line. It's slightly disconcerting that the size of the executables thus produced is different (not by much, but still!) from what an IDE build produces. Any ideas why? I would have thought the same compiler is used?
The main strength of building with the Delphi command-line compiler is standardization: you explicitly identify the options (on the command line, in the .cfg files, etc.), and the compiler follows exclusively the options provided. In contrast, the IDE has many other behaviors that are not clear and explicit; for example, it may search library paths not specified in the Project Options. My guess is that something is happening in the IDE build of which you're not entirely aware, and this is why standardized builds are done from the command line.
To see what the IDE is doing, check
Tools | Options | Environment Options | Compiling and Running | Show Command Line
And you can check the compiler messages.
The first answer, on using the command line for build consistency, is right on, and this is probably something you needn't worry about if you rely on a build system where production files are always sourced from the console builds.
On the other hand, if you really do want to figure out what is going on, you should turn on map files (at the full detail level) and compare/diff them. If there are differences between the two, they will show up there. Any other differences that may exist are likely the result of a command-line option being different (such as a conditional flag that may be set in the IDE settings).
This behavior has existed in every version of Delphi I've used (5 through 2006). I wouldn't worry too much about it. When I first discovered it, I spent a lot of time trying to resolve the difference. Did I miss a compiler flag? Is there a discrepancy between the options supported by the IDE and the command-line compiler?
In the end I decided it wasn't that big of an issue. Both consistently produced functionally equivalent executables.
If you supply exactly the same parameters to the command-line compiler, the produced executables will be virtually identical.
In fact, the IDE just calls the command-line compiler. Compile your project in the IDE and look at the Messages window; you will see the full dcc32.exe call ...
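If you then want to reproduce a build outside the IDE, you can pass the same switches to dcc32 yourself; a sketch with placeholder paths and defines (the real switch set is whatever the Messages window showed):

    dcc32 -B -U"C:\src\lib" -I"C:\src\inc" -DRELEASE -E"C:\build\bin" MyProject.dpr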
