I am trying to follow the instructions for contributors here:
https://bazel.build/contributing.html
I have a successful build off of master (i.e. bazel build //src:bazel), but the doc also suggests that "you might want to build the various tools Bazel uses." I am trying to do that, for example:
cd src/java_tools/singlejar
bazel build //...
but it fails with:
ERROR: /Users/.../bazel/third_party/protobuf/3.2.0/BUILD:621:1: no such target '//external:gtest': target 'gtest' not declared in package 'external' defined by /Users/plaird/scone/public/bazel/WORKSPACE and referenced by '//third_party/protobuf/3.2.0:test_plugin'.
Do I need to build gtest locally, and then add it to the WORKSPACE file?
bazel build //..., no matter which directory you invoke it from, will build everything in the workspace. It sounds like what you want is bazel build //src/java_tools/singlejar/..., which will build all targets under that directory.
In general, though, you probably don't need to compile singlejar separately. I've been working on Bazel for several years, and 99% of the time I haven't had to build the tools on their own.
As for the error you're getting: it would be nice if we could get //... building, but it hasn't been a huge priority. The protobuf build is weird, and I don't recommend trying to debug it; just jump into whatever you actually want to work on.
Related
I observe that my Bazel build agent frequently builds the project from scratch (including compiling grpc, which is unchanged) instead of taking results from the cache. Is there a way, like query or cquery (pardon my ignorance), to determine why the cache is considered invalid for a particular target? Or any techniques for tackling the cache invalidation problem?
This is how a Bazel build works:
When running a build or a test, Bazel does the following:
1. Loads the BUILD files relevant to the target.
2. Analyzes the inputs and their dependencies, applies the specified build rules, and produces an action graph.
3. Executes the build actions on the inputs until the final build outputs are produced.
If you have a specific assumption about what is being invalidated, please share the complete details!
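Since you mention query and cquery: each of those phases has a matching inspection command. A quick sketch (the //your:target label is just a placeholder):
# query inspects the target graph built during loading
bazel query 'deps(//your:target)'
# cquery inspects the configured target graph produced by analysis
bazel cquery 'deps(//your:target)'
# aquery inspects the action graph, including each action's inputs
bazel aquery //your:target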
This is most likely due to the rebuild sensitivity to particular environment variables. Many build actions will read from environment variables and use them to change the outputs. Bazel keeps track of this and will rebuild seemingly unchanged remote targets when your env changes.
To demonstrate this:
1. Build grpc twice, to confirm that the second run is fully cached.
2. Change the PATH environment variable (your IDE may do this without you knowing):
mkdir ~/bin && export PATH=$PATH:~/bin
3. Rebuild grpc. This should trigger a complete rebuild.
There are a couple of helpful flags to combat this rebuild sensitivity, and I'd recommend adding them to your bazelrc.
--incompatible_strict_action_env: freezes your environment and doesn't source environment variables from your shell.
--action_env: sets or overrides individual environment variables as needed for your build.
# file //.bazelrc
# Don't source environment from shell
build --incompatible_strict_action_env
# Use action_env as needed for your project
build --action_env=CC=clang
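If you want Bazel to report directly why an action was re-executed, the --explain and --verbose_explanations flags write the reason each action missed the cache to a log file. A sketch (the log file name is arbitrary):
# record why each re-executed action was not taken from the cache
bazel build //... --explain=explain.log --verbose_explanations
The log records the reason each action re-ran, e.g. changed inputs or changed options.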
I am trying to build drake from source in order to use the Gurobi solver, and have followed the instructions to build from source using Bazel.
When building and testing, using the suggested bazel test --config gurobi --test_tag_filters=gurobi //..., it responds that all tests pass, indicating that the build is successful.
I changed the include dir from /opt/drake to /home/melyso/drake (the path to the cloned repo) in the CMakeLists.txt file. The project builds successfully. However, when I print drake::solvers::GurobiSolver::is_available() to the terminal, I get back 0, i.e. false. What might I be doing incorrectly here?
For some reason, practical or not, the rxjs npm package stores Bazel build configuration inside the package, so when I try to build my project (which has a node_modules folder), Bazel automatically tries to build things it's not supposed to build at all.
My question would be: what is the canonical way of ignoring a specific folder when building a Bazel project recursively?
The only way I know of to achieve what I'm looking for is to point to it explicitly on the command line:
bazel build //... --deleted_packages=node_modules/rxjs/src (see user manual)
But I don't want to type this every time.
Bazel recently added a feature for ignoring folders (similar to gitignore).
Simply add node_modules to the .bazelignore file in the root of your project.
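A minimal sketch of the file, which lives next to your WORKSPACE file; each line names one workspace-relative directory to ignore:
node_modules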
Yes, this is expressible as a bazel target pattern:
bazel build -- //... -//node_modules/rxjs/src/...
Full documentation is available at https://docs.bazel.build/versions/master/user-manual.html#target-patterns
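If you'd rather keep the --deleted_packages approach from the question, that flag (unlike a target pattern) can also be made permanent in your .bazelrc, in the same style as the earlier example:
# file //.bazelrc
build --deleted_packages=node_modules/rxjs/src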
$ ant clean compile notexist install
Here I get BUILD FAILED because notexist doesn't exist as a target, which is expected. But is there any way to just ignore or skip unknown targets? Or maybe map unknown targets to a known target (which would be a no-op for me)?
Background
We are using Atlassian Bamboo for our CI server, and whenever we want to add an ant target to our build, we run the risk of breaking older branches of code. This is because if we run an old build through our CI, it may not contain the target and will therefore fail. Thus we are reduced to editing the build.xml file and using either depends or <ant>, but this doesn't give us the flexibility we want.
Example
Today we have:
ant clean selenium.tests
We want to add a new target, rest.tests, to test our REST services. Thus, I want my command to be:
ant clean selenium.tests rest.tests
But for old branches, rest.tests doesn't exist yet. Our solution up to now has been to add rest.tests as a dependency of selenium.tests (since our build.xml is under version control), but this means we can't run selenium.tests by itself.
In hindsight, we should have just created a proxy target, such as integration.tests (we already use test for unit tests), which would delegate to both of these targets. But unless there is a solution to my original question, we can't even add integration.tests to the CI.
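For reference, such a proxy target is a one-liner in build.xml (using the target names from above):
<!-- hypothetical proxy target that delegates to both test suites -->
<target name="integration.tests" depends="selenium.tests, rest.tests"/>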
You may have to invoke ant programmatically. This is discussed in Running ANT Programmatically Through Java.
The org.apache.tools.ant.Project class provides the list of targets and a way to invoke them.
I don't think there is a way to test whether a target (rather than a file) exists before you call it. My suggestions:
1. How often do you build old versions that are missing targets? Is it sufficiently infrequent that you can take the pain of manually commenting out those calls in the CI config?
2. Can you limit your CI calls to a tidy set that will generally be there: clean, build, test, dist, etc.?
3. Do you need to split the CI config into multiple jobs? If you really need targets that aren't present in some versions, maybe you have radically different versions of your product that merit separate CI jobs; then the job whose target doesn't exist simply won't be built.
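That said, a rough approximation is possible outside Ant: wrap the invocation in a shell script that greps build.xml for each requested target and drops the unknown ones. A sketch, assuming all targets are declared directly in a single build.xml:
#!/bin/sh
# filter-ant.sh: run only the Ant targets actually declared in ./build.xml
targets=""
for t in "$@"; do
  if grep -q "<target name=\"$t\"" build.xml; then
    targets="$targets $t"
  else
    echo "Skipping unknown target: $t" >&2
  fi
done
ant $targets
Invoked as ./filter-ant.sh clean selenium.tests rest.tests, this would silently skip rest.tests on branches that don't define it.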
When an ANT build step fails in my build I'd like to archive the logs in order to determine the problem. The relevant logs, however, are not located in the workspace, so I have to use a full path to them.
The standard artifact archiving feature does not work well with full paths, so first I have to copy the logs into the workspace in some build step so that I can archive them later. I do not want to incorporate the copying code into the original ANT script (it does not really belong there). On the other hand, since the failing build step aborts the build, a separate build step that copies the artifacts into the workspace would never be reached.
I am considering using ANT's -keep-going option, but how would I then fail the build?
Any other ideas (artifact plugins that handle full paths gracefully, for example)?
Update: I've worked around the problem by creating a symbolic link in the workspace to the directory that contains the files to be archived. Kludgy, but effective.
I would recommend using Flexible Publish plugin in conjunction with the Conditional Build Step plugin.
The Flexible Publish plugin allows you to schedule build steps to run AFTER the normal build steps have finished. This lets you catch both successful and failed builds and execute something, say, a script that copies the files from OUTSIDE the workspace to INSIDE the workspace. The Conditional BuildStep plugin lets you make those steps conditional so that they only run when the build fails. Using these two plugins together, you can copy the files into the workspace upon failure, then archive them with the usual Jenkins mechanisms.
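The copy itself can be a plain shell step under the conditional publisher; a minimal sketch, assuming the logs live in a hypothetical /var/log/myapp directory ($WORKSPACE is set by Jenkins):
# stage the out-of-workspace logs where the archiver can see them
mkdir -p "$WORKSPACE/external-logs"
cp -r /var/log/myapp/. "$WORKSPACE/external-logs/"
After this step, external-logs/** can be used as the artifact archiving pattern.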