Why is Bazel faster than Gradle?

Originally I used Gradle to build my Android project, but recently I migrated it to Bazel, and I found that Bazel is truly faster than Gradle. I want to know why, but the Bazel docs don't say much about this. Can anyone help me?
Thanks very much!

Full disclosure: I work on Bazel.
That's not an easy question to answer for two reasons. First, performance is highly dependent on the scenario. For example, we'd generally expect a clean build to be slower than a build where only a single file has changed. Second, I don't know how Gradle works internally, and they've done a lot of work recently to improve Gradle performance.
But I can talk about Bazel and what we're doing to make it fast. We've been working on build performance for ~10 years, starting long before we made it public.
The key feature is that we require all dependencies to be declared, and we track them explicitly. If you use a header file in C++, or depend on a Java library, you must declare this dependency in your BUILD file (and we enforce that these are declared by sandboxing individual actions). There are three effects from this:
First, we can heavily parallelize the build, because we know which things depend on which other things.
Second, we can make incremental builds very fast, because we can tell what parts of the build have to be re-done when you change a specific file (BUILD file, header file, source file, ...).
Third, we almost never have to do clean builds. Other build tools often require 'make clean' to get into a predictable state - since Bazel knows all the dependencies, it can get to a predictable state on every single build.
Another effect is that we can cache remotely (i.e., across users), and even execute on another machine, although neither of these are fully supported at the time of this writing.
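To make the dependency declaration concrete, here is a minimal BUILD file sketch; the package and target names are made up for illustration:

    # BUILD file for a hypothetical Java app (names are illustrative).
    # Because every dependency is declared, Bazel knows that a change to
    # //libs/util only requires rebuilding :core and :app, and it can build
    # unrelated targets in parallel and reuse cached results for the rest.
    java_library(
        name = "core",
        srcs = glob(["src/main/java/com/example/core/*.java"]),
        deps = ["//libs/util"],  # must be declared, or compilation fails
    )

    java_binary(
        name = "app",
        main_class = "com.example.app.Main",
        srcs = glob(["src/main/java/com/example/app/*.java"]),
        deps = [":core"],
    )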

Related

How to set up an Incremental Build in TFS?

I want to set up an Incremental Build in TFS as we want to deploy only modified files into Physical path, not the entire code.
We want the feature to build & deploy only the files that have been changed from the previous deployment. This will reduce the build and deployment time and the developers won't have to wait longer to see their changes deployed.
What you're describing is not an "incremental build". You are describing a much more complex situation than an incremental build.
What you're describing has never been an out-of-the-box option, and is in fact incredibly difficult to do properly, and ultimately would probably not impact things as much as you're hoping, anyway.
First of all, it's actually very difficult to determine a subset of files that have changed between deployments. And if you're building and deploying properly, then you're making a single build and deploying it along a pipeline of environments. This means that "what's different" at any given time is potentially different for every environment in your pipeline. Ex: DEV has version 5, QA has version 4, and PROD has version 3. So you have to start by assuming that you're going to use the oldest version. Build systems have no innate knowledge of "releases", so you'd have to build something into your build and release pipelines to track what source version constitutes the latest code in production.
Let's say you've solved that problem. You now have the ability to retrieve just the delta between what's deployed to production and the commit being built.
If you're working with compiled code, then you still need all of the source code, because you're going to have to rebuild the whole thing. Every assembly is going to get regenerated, and different metadata at compilation time is going to mean those assemblies are different even if the code that constitutes those assemblies is the same. And since assemblies can reference other assemblies, you have no straightforward way of determining at build time which assemblies have actually changed and need to be deployed. So you pretty much have no choice but to deploy all compiled assets every time. Note that this still applies to TypeScript or anything else that goes through a compiler/transpiler process; you need all of the code available, and it has to go through the entire build process.
So at this point, you still have to build your entire application to get the deployable output. Build time hasn't gone down at all. You've managed to bring down just a subset of static content (e.g. HTML pages, images, etc.) to be deployed, though. That may have sped your deployments up a bit!
However, if the thing that's making your build and deployment process slow is that you have a ton of non-code-related static content, then you've gone through a very long and convoluted process to arrive at a much simpler solution: Move static content to a CDN and get it out of source control, or have a separate process that manages static content so that it can be deployed independently of unrelated application code.
You haven't really provided any information that can be used to provide a recommendation on how to proceed, but hopefully this answer is helpful in understanding why what you want to do is not going to solve your problem, unless you are dealing entirely with static content or scripts that don't require building.

Are the bazel buildtools primarily focused on single starlark files?

I'm taking a glance over at the buildtools repo (https://github.com/bazelbuild/buildtools) and trying to understand the scope of its responsibilities as it relates to the three phases of a bazel build (loading, analysis, execution)
The repo's description states that it is A bazel BUILD file formatter and editor. I find a lot of logic in the repo, written in Go, that provides an AST parser, Starlark syntax interpretation, and reformatting and rewriting of BUILD files, among other things. Basically, there's logic designed to operate on a single Starlark file at a time. Rereading the repo description in this light leads me to conclude that buildtools is really a single-file-scoped effort, and presents tools that only intersect functionality-wise (perhaps only partially) with the loading operations Bazel conducts while building.
Question: Is it accurate that the focus of buildtools is on single Starlark files?
If that's true then all the multiple starlark file analysis logic and so forth seems to actually be maintained over at https://github.com/bazelbuild/bazel/tree/master/src/main/java/com/google/devtools/build/lib and I should not expect to find any tools for the analysis phase and beyond in the buildtools repo. Is that right?
I don't work on Buildtools, but we agree: these tools seem to focus on BUILD / .bzl files in isolation. They let you process these files in parallel, to do similar operations on them.
If you wonder whether these tools understand relations between these files, the answer seems to be no.
If you further wonder which tools do understand those relations, the answer is Bazel's query, cquery, and aquery. I'm not aware of a programmable API for these queries, though; you have to run Bazel to perform them.
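For example, here is roughly what those invocations look like from the command line (the target name is a placeholder):

    # Targets that //app:app depends on (the unconfigured dependency graph).
    bazel query 'deps(//app:app)'

    # The same question, but with the configuration applied (cquery runs the analysis phase).
    bazel cquery 'deps(//app:app)'

    # The actions (compile, link, ...) Bazel would execute to build //app:app.
    bazel aquery '//app:app'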
buildtools has tools working on a syntactic level (it looks at the syntax tree). These tools are outside of Bazel and have no knowledge of Bazel build phases. In the future, we may expand the code to work on multiple files (for the static analysis), but it will still be independent from Bazel phases.
https://github.com/bazelbuild/bazel/tree/master/src/main/java/com/google/devtools/build/lib/ is the source code of Bazel. The syntax/ directory includes the code for reading and evaluating the Starlark files. The code there is called by Skyframe. The interpreter is called by Skyframe many times in parallel, both during the loading and the analysis phases.
If you have a more specific question (what are you trying to do?), I can help more. :)

What tests should be run in preparation for making contributions to Bazel?

I am preparing to make a minor bug fix to Bazel's Java code. I am working on a Linux distribution.
I am following the instructions in https://bazel.build/contributing.html, but I encounter problems with two of the test instructions:
In the section about "Compiling Bazel", the third paragraph states: "In addition to the Bazel binary, you might want to build the various tools Bazel uses. They are located in //src/java_tools/..., //src/objc_tools/... and //src/tools/... and their directories contain README files describing their respective utility." If I follow this, //src/tools/... fails because there is no xcrun command in the Linux environment I am using. I suppose these are macOS platform-specific tests?
The next paragraph instructs you to build a distribution package, which you then unpack in a new directory, and then do: "bazel test //src/... //third_party/ijar/...". I now get an error that windows.h is missing, which I suppose comes from Windows platform-specific tests.
Some questions:
So is there an easy way to run tests only for the current platform?
Are the instructions good enough?
If the instructions should be updated, what is the best way to notify the ones managing that documentation page?
Thanks for your interest in contributing to Bazel! The bazel-dev mailing list is a better avenue for these questions.
The tests that you want to run largely depend on the changes you make, but when you make a pull request, the Bazel CI will run all of Bazel's tests to make sure that nothing breaks.
So is there an easy way to run tests only for the current platform?
It depends, and this is still a work in progress where we want to make Bazel more aware of platforms and toolchains without specifying additional flags.
In general, you don't need to modify or worry about the //src/*_tools packages unless you're making direct changes to them.
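As a rough sketch (the exact patterns to exclude depend on which targets actually fail on your platform), you can subtract those packages from the test invocation with negative target patterns:

    # Run the main test suites but skip the *_tools packages that need
    # platform-specific toolchains (e.g. xcrun on macOS, windows.h on Windows).
    bazel test -- //src/... //third_party/ijar/... \
        -//src/java_tools/... -//src/objc_tools/... -//src/tools/...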
Are the instructions good enough?
The instructions will never be perfect, and we're always looking for ways to make them clearer and more concise.
If the instructions should be updated, what is the best way to notify the ones managing that documentation page?
Please file an issue on the GitHub repository or email the bazel-dev mailing list for further discussion.

Continuous TFS local deployment

I have a configured CI with TFS. What are the best ways to organize post-build (or, even better, post-test) deployment? My binaries are some libraries and a single executable file.
Here is what I need:
Build on each commit. (This is configured and done)
When the build (or the tests) succeed, grab the binaries and drop them into some specific folder on the same build machine, fully replacing the previous files and folders. (I'd like to be able to configure the folder location somehow.)
Launch the application with some parameters, and I need standard output redirection. For example: App.exe param=paramValue > log.txt
And before starting the application, I need to kill the previous instance of it. (This is some kind of server instance that is alive all the time.)
The most obvious solution that I tried was to do this with a post-build script, but that attempt failed. See here
Use Release Management in conjunction with PowerShell (or better still, Desired State Configuration) scripts. Depending on your MSDN licensing, it could be free for you, and it's specifically designed from the ground up to handle managing releases.
Overextending the build process to also do deployment is an awful idea. The build tools were designed to build, and they're good at it! They're not good at the types of considerations you have when you're trying to do deployments.
The problem is that most CI solutions (TFS included) would get you to the point where you had binaries, then say "Welp, you're on your own! Have fun figuring out how to deploy this stuff!" This never ends well -- you end up with something inflexible and very difficult to troubleshoot and maintain.
The modern "devops" approach here is to have your application's requirements in source control, treated as code (in this case, as a DSC script or scripts).
One other consideration: It sounds like you're trying to treat a console application as a service. This is going to be a big, big pain for you, since most software that handles releases will not run in an interactive session. Turn it into a true Windows service and your life will be easier.
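That said, if you do start with a plain PowerShell script (whether run by Release Management or by hand), a minimal sketch of the steps you listed might look like this - the paths, process name, and parameters are placeholders:

    # Deploy.ps1 -- illustrative sketch only; adjust paths and names for your setup.
    $source = "C:\Builds\MyApp\latest"   # where the build dropped the binaries
    $target = "C:\Deploy\MyApp"          # folder the application runs from

    # Kill the previous instance if it is still running.
    Get-Process -Name "App" -ErrorAction SilentlyContinue | Stop-Process -Force

    # Fully replace the previous files and folders.
    Remove-Item -Recurse -Force $target -ErrorAction SilentlyContinue
    Copy-Item -Recurse $source $target

    # Relaunch with parameters and redirect standard output to a log file.
    Start-Process -FilePath "$target\App.exe" `
        -ArgumentList "param=paramValue" `
        -RedirectStandardOutput "$target\log.txt"

Note that this inherits all the caveats above: the app still runs in whatever session the agent provides, so turning it into a real Windows service remains the more robust route.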

Working with MSBuild and TFS

I'm trying to work with MSBuild and TFS.
I've managed to create my own MSBuild script that works great from the command line. The script works with csproj files, and compiles, obfuscates, signs, and copies everything that's needed.
However, looking at the documentation for TFS & Team Build, it appears that it expects solutions as the "input" to the script.
Also, I haven't found an easy/intuitive way of performing a "Get Latest Version" from TFS as part of the script. I'm assuming that Team Build automatically does a "Get Latest" on the solutions it's supposed to compile, but again - I don't (want to) work with solutions...
Any insights? Any pointers? Any links?
Team Build defines about 25 targets of its own. When you queue a Team Build, they are automatically run for you in the predefined order listed on MSDN. Don't modify this process. Instead, simply set a couple of the properties that determine how those targets behave. For example, set <IncrementalGet> to "true" if you want ordinary Get behavior, or "false" if you want something closer to tf get /force.
As far as running your own MSBuild script, again this shouldn't be necessary. Start with the TFSBuild.proj file that's provided for you. It should only require minimal modifications to do everything you describe. Call your obfuscation & signing code by overriding a target like AfterCompile or AfterTest. Put your auto-deploy code in AfterDropBuild. Etc.
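As a rough illustration (the property and override targets are standard Team Build ones, but the commands inside them are placeholders), those TFSBuild.proj tweaks look something like this:

    <!-- In TFSBuild.proj: adjust a built-in property and override extension targets. -->
    <PropertyGroup>
      <IncrementalGet>true</IncrementalGet>
    </PropertyGroup>

    <!-- Runs right after Team Build's compile step: hook obfuscation/signing here. -->
    <Target Name="AfterCompile">
      <Exec Command="obfuscate.exe $(BinariesRoot)" /> <!-- placeholder command -->
    </Target>

    <!-- Runs after the outputs are copied to the drop folder: hook auto-deploy here. -->
    <Target Name="AfterDropBuild">
      <Exec Command="deploy.cmd $(DropLocation)" /> <!-- placeholder command -->
    </Target>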
Even really complex scenarios are possible if you refactor appropriately. See past answers #1 #2.
As far as the actual compile, you're right that Team Build operates on solutions. I recommend giving it what it wants. I'll be the first to admit that *.sln files are ugly and largely undocumented, but at least you're offloading the work to a well tested & supported product.
If you really wanted to, you could give it a blank/dummy solution and override the CoreCompile target with your custom compiler logic. But this is really asking for trouble. At a bare minimum, you lose all of Team Build's flexibility WRT building multiple platforms and flavors. More practically, you're bound to spend a lot of time debugging something that's designed to "just work" -- and there are no good MSBuild debuggers yet (that I know of). Not worth it, IMO.
BTW, the solution files do not affect the Get process. As you can see in the 1st link, the Get is done very early on, long before Team Build even reads the solution file(s). Apart from a few options like <IncrementalGet>, this is not controlled from MSBuild at all -- in particular, the paths to be downloaded are determined by the workspace mappings associated with the build definition. I.e., they are stored in the Team Build SQL database, not the filesystem, and managed with tools (like Team Explorer) that call the TFS webservice API.
