Bazel: Mixing a Linux remote execution platform with a Mac OS local platform

Goal
I am using Bazel to build a multiplatform C++ client (iOS, OSX, Android, Windows).
iOS and OSX are built locally on my Mac (out of necessity). Android and Windows are built inside a Docker container.
At the end of the build I have a Bazel rule that takes the cc_binary output for each platform and packages them all into a single .zip.
I'd like to utilize Bazel's remote execution API to build some of my binaries in the container and others locally and then reference a shared cache to collate them together -- all with one bazel build command.
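For context, the packaging step looks roughly like this (a minimal sketch: the target names are made up, it assumes rules_pkg's pkg_zip, and it glosses over the per-platform configuration problem this question is about):

    # BUILD (sketch) -- hypothetical names, assumes rules_pkg is available
    load("@rules_pkg//pkg:zip.bzl", "pkg_zip")

    pkg_zip(
        name = "client_bundle",
        srcs = [
            ":client_ios",      # built locally on the Mac
            ":client_macos",    # built locally on the Mac
            ":client_android",  # built remotely in the Linux container
            ":client_windows",  # built remotely in the Linux container
        ],
    )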
Bazel support
Bazel claims that these types of multiplatform builds -- where the host (OSX x64), execution (Linux x64), and target platforms (many) are all different -- are possible.
See https://docs.bazel.build/versions/master/platforms.html
My experience
However, I hit exactly this issue: https://github.com/bazelbuild/bazel/issues/5397 (where the docker-sandbox strategy is a correct proxy for remote builds).
This, alongside the GitHub issue below, makes me question Bazel's claim about multiplatform builds.
https://github.com/bazelbuild/bazel/issues/5309
Fundamentally, these issues seem to say that local targets for one platform (e.g., OSX) cannot be built alongside remote targets on another platform (e.g. Linux).
Question
I was wondering:
(1) Is what I am trying to do fundamentally at odds with Bazel's design? If so, what is meant by Bazel being multiplatform?
(2) Is there a workaround I can employ that maintains hermeticity and stays within the Bazel build system? It could be possible to mount a Docker volume and then write a script that combines the Docker cache with my local cache, but it seems like Bazel was built to handle my use case. Am I missing something here?
Related question: Does bazel support remote execution on different platforms? (It doesn't have a satisfactory answer.)

(1) Is what I am trying to do fundamentally at odds with Bazel's design?
In theory no, in practice yes. Bazel provides the functionality needed to support your use case, but it is not implemented by default.
Specifically, as described in the linked Bazel issues, Bazel rules currently make assumptions about the relationship between the host and target platforms that don't hold in your case. For example, Bazel will auto-detect the JDK files on your host (macOS) and then default to using those JDK files for all Java actions, regardless of target platform.
If so, what is meant by Bazel being multiplatform?
In practice, it means that you can run bazel build ... on multiple platforms and expect that Bazel will transform your inputs into outputs compatible with the current platform.
(2) Is there a workaround I can employ that maintains hermeticity and stays within the Bazel build system?
Yes, you can run bazel build ... from within a Windows VM or Docker container. This was the workaround that the Bazel team recommended when I asked this question.
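For example, the container-based workaround amounts to something like the following (the image name and cache paths are placeholders, not anything the Bazel team prescribed):

    # Run the Linux builds inside the container, sharing a disk cache on the host
    docker run --rm \
        -v "$PWD":/workspace \
        -v "$HOME/.cache/bazel-disk":/bazel-disk-cache \
        -w /workspace \
        my-linux-build-image:latest \
        bazel build --disk_cache=/bazel-disk-cache //client:client_android //client:client_windows

    # Build the Apple targets natively on the Mac
    bazel build --disk_cache="$HOME/.cache/bazel-disk" //client:client_ios //client:client_macos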
Relevant advanced Bazel features:
If you want to build for multiple target platforms with one Bazel invocation, have a look at Bazel user-defined transitions (this would allow you to build the same rule for multiple platforms, e.g. iOS and macOS at once, but requires you to write your own rules); a minimal sketch is included below.
If you don't want to run bazel build from within containers/VMs, you can write your own C++ toolchain. At its core, Bazel gives each action a sandbox with all dependent files and guarantees that it will execute a specific command. In a custom C++ toolchain, you could tell Bazel to call a script instead of clang, which takes the command + the files and executes them from within a VM or container. This is likely a lot of work, but is definitely possible; a rough wrapper sketch is also included below.
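For the transitions approach, a heavily simplified sketch (all names are invented; see the Bazel documentation on user-defined transitions for the real API details):

    # platform_transition.bzl (sketch)
    def _multi_platform_transition_impl(settings, attr):
        # Fan the dependency out to several target platforms at once.
        return [
            {"//command_line_option:platforms": "//platforms:ios"},
            {"//command_line_option:platforms": "//platforms:macos"},
        ]

    multi_platform_transition = transition(
        implementation = _multi_platform_transition_impl,
        inputs = [],
        outputs = ["//command_line_option:platforms"],
    )

    def _bundle_impl(ctx):
        # With a split transition, ctx.split_attr.binary holds one configured
        # target per platform; collect all of their output files.
        files = []
        for dep in ctx.split_attr.binary.values():
            files.extend(dep[DefaultInfo].files.to_list())
        return [DefaultInfo(files = depset(files))]

    bundle = rule(
        implementation = _bundle_impl,
        attrs = {
            "binary": attr.label(cfg = multi_platform_transition),
            "_allowlist_function_transition": attr.label(
                default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
            ),
        },
    )

For the custom-toolchain approach, the wrapper registered as the compiler could be as simple as this (entirely hypothetical; the image name and mounts are placeholders):

    #!/bin/bash
    # clang_in_container.sh -- registered as the C++ compiler in a custom toolchain.
    # Bazel invokes this instead of clang; forward the exact command and the
    # action's working directory into a Linux container and run it there.
    set -euo pipefail
    exec docker run --rm \
        -v "$PWD":"$PWD" \
        -w "$PWD" \
        my-linux-cc-image:latest \
        clang "$@"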

Related

Difference between "cargo build" and "anchor build"

I am following Rust tutorials online, and I found that some websites use the cargo build command while others use the anchor build command to build the project.
What is the difference between these two commands?
Cargo is Rust's build system and package manager.
Anchor is a framework specifically for Solana/Rust development. It has extra features for a better development experience. It is similar to the Truffle framework for Ethereum.
With Anchor you can build programs quickly because it writes various boilerplate for you, such as (de)serialization of accounts and instruction data.
In Anchor projects you use the #[derive(Accounts)] macro to handle account (de)serialization. Compare the serialization code of both kinds of project and you will see how tedious it is without Anchor.
Under the hood, anchor build does cargo build-bpf and then extracts the program's IDL from src/lib.rs.
cargo build-bpf (now cargo build-sbf) differs from cargo build because it specifically builds a Solana on-chain program, and not a general binary / library that can be used on your system.
And the IDL is an important feature of Anchor -- it exposes the program's interface to be consumed by any client.
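Roughly, then, an anchor build is equivalent to something like this (paths and the program name are illustrative):

    # approximately what `anchor build` does under the hood
    cargo build-sbf          # formerly `cargo build-bpf`; emits the on-chain .so under target/deploy/
    # ...followed by IDL extraction from src/lib.rs; the result lands at
    #   target/idl/<program_name>.json
    # which a client (e.g. the TypeScript @coral-xyz/anchor package) can consume.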

Exposing non-empty docker container directories to host

Since one can have a nice Docker container to run an entire build in, it would be fantastic if the tools the container uses to build and run the code were also accessible to the host.
Imagine the following use-case:
Imagine that you're developing a Java application using OpenJDK 12 and Maven 3.6.1 in order to build, run all tests and package the entire application into an executable .jar file.
You create a Docker container that serves as a "build container". This container has OpenJDK 12 and Maven 3.6.1 installed and can be used to build and package your entire application (you could use it locally, during development and you could also use it on a build-server, triggering the build whenever code changes are pushed).
Now, you actually want to start writing some code... naturally, you'll go ahead and open your project in your favorite IDE (IntelliJ IDEA?), configure the project SDK and whatever else needs to be configured and start rocking!
Would it not be fantastic to be able to tell IntelliJ (Eclipse, NetBeans, VSCode, whatever...) to simply use the same tools with the same versions as the build container is using? Sure, you could tell your IDE to delegate building to the "build container" (Docker), but without setting the appropriate "Project SDK" (and other configs), then you'd be "coding in the dark"... you'd be losing out on almost all the benefits of using a powerful IDE for development. No code hinting, no static code analysis, etc. etc. etc. Your cool and fancy IDE is in essence reduced to a simple text editor that can at least trigger a full-build by calling your build container.
In order to keep benefiting from the many IDE features, you'll need to install OpenJDK 12, Maven 3.6.1 and whatever else you need (in essence, the same tools you have already spent time configuring your Docker image with) and then tell the IDE that "these" are the tools it should use for "this" project.
It's unfortunately too easy to accidentally install the wrong version of a tool on your host (locally), which could potentially lead to the "it works on my machine" syndrome. Sure, you'd still spot problems later down the road once the project is built using the appropriate tools and versions by the build container/server, but... not to mention how annoying things can become when you have to maintain an entire zoo of tools and their versions on your machine (+ potentially having to deal with all kinds of funky incompatibilities or interactions between the tools) when you happen to work on multiple projects (one project needs JDK 8, another JDK 11, another uses Gradle, not Maven, then you also need Node 10, Angular 5, but also 6, etc. etc. etc.).
So far, I only came across all kinds of funky workarounds, but no "nice" solution. The most tolerable I found so far is to manually expose (copy) the tools from the container onto the host machine (e.g.: define a volume shared by both and then execute a manual script that copies the tools from the container into the shared volume directory so that the host can access them as well)... While this would work, it unfortunately involves a manual step, which means that whenever the container is updated (e.g.: new versions of certain tools are used, or even additional, completely new ones) the developer needs to remember to perform the manual copying step (execute whatever script explicitly) in order to have all the latest and greatest stuff available to the host once again (of course, this could also mean updating IDE configs, but that - at least for version upgrades - can be mitigated to a large degree by having the tools reside at non-version-specific paths).
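Concretely, that manual step boils down to something like this (using docker cp rather than a shared volume; the image name and paths are just examples):

    # Copy the JDK and Maven out of the build image so the IDE can point at them
    docker create --name tools-tmp my-build-image:latest
    docker cp tools-tmp:/usr/lib/jvm/openjdk-12 "$HOME/devtools/jdk"
    docker cp tools-tmp:/opt/maven              "$HOME/devtools/maven"
    docker rm tools-tmp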
Does anyone have an idea how to achieve this? VMs are out of the question and would seem like overkill... I don't see why it shouldn't be possible to access Docker container resources in a read-only fashion and reuse/reference the appropriate tooling during both development and build.

Using build system to run tests or interact with clusters

What is the purpose of projects like these below that use Bazel for things other than building software?
https://github.com/bazelbuild/rules_webtesting
https://github.com/bazelbuild/rules_k8s
Are they just conveniently providing an environment for the run command (as opposed to building portable executables), or am I missing something?
The best I can tell is that Bazel could be used to run only a subset of E2E tests based on knowledge of what changed.
Disclaimer: I have only cursory knowledge about k8s and docker.
Bazel isn't just used for building and testing, it can also deploy, as you've discovered with the rules in those projects.
The best I can tell is that Bazel could be used to run only a subset of E2E tests based on knowledge of what changed.
Correct, but it also extends beyond tests to deployments. If you've only changed a single string in your Go binary that's injected into an image, Bazel is able to use rules_k8s, rules_docker, and rules_go to (roughly as sketched in the BUILD example below):
- incrementally and reproducibly rebuild the minimum set of files needed to build the new Go executable
- create a new image layer containing the Go executable (without using Docker)
- push the image to the registry
- redeploy the changed pod(s) into the cluster
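A rough sketch of how those rules chain together in a BUILD file (labels, load paths, and the image name are illustrative and may differ between rule versions):

    load("@io_bazel_rules_go//go:def.bzl", "go_binary")
    load("@io_bazel_rules_docker//go:image.bzl", "go_image")
    load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

    go_binary(
        name = "server",
        srcs = ["main.go"],
    )

    go_image(
        name = "server_image",
        binary = ":server",  # wraps the binary in an image layer without a Docker daemon
    )

    k8s_object(
        name = "server_deploy",
        kind = "deployment",
        template = ":deployment.yaml",
        images = {"gcr.io/my-project/server:dev": ":server_image"},
    )

Running bazel run :server_deploy.apply then rebuilds whatever changed, pushes the image, and applies the manifest to the cluster.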
The notable thing is that if you didn't change the source file, Bazel will always create an image with the same digest due to its reproducibility. That means that you can now trust the deployment workflow to not redeploy/restart a pod even if you do a bazel run twice or more.
For more, check out this BazelCon 2017 talk: Using Bazel for Fast, Correct Docker Deployments w/ Databricks
Fun fact: from Bazel 0.15.0 onwards you can also use bazel run to start a REPL for your build target. The Haskell and Scala rules use this.

How to efficiently do cross platform builds

I am setting up the build system for a team that produces APIs used on several platforms and architectures. There has been a lot of work already spent on setting up Ant to build all of the Java code, so I would prefer to stick with Ant if possible.
Where I am stumped is how to build the C++ software. Here are the platforms and languages I need to support:
Java - Linux - 32bit & 64bit: Ant
Java - Windows - 32bit & 64bit: Ant
C++ - Linux - 32bit & 64bit: Ant w/CppTasks (question #1)
C++ - Windows - 32bit: (question #2)
Note: C++ on Windows is MS Visual Studio C++ projects.
I think the answer to question #1 is CppTasks because that seems to be the standard way to build C++ from Ant.
For question #2, I could also use CppTasks, but the developers will be compiling in Visual Studio, so it seems beneficial to use their Visual Studio project for building, which means calling MSBuild from Ant.
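For what it's worth, the Ant side of that wrapper can be a plain exec task (the solution name and properties are placeholders; it assumes msbuild.exe is on the PATH):

    <!-- build.xml fragment -->
    <target name="build-windows-cpp">
        <exec executable="msbuild" failonerror="true">
            <arg value="MyProject.sln"/>
            <arg value="/p:Configuration=Release"/>
            <arg value="/p:Platform=Win32"/>
        </exec>
    </target>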
Has anyone tried this before and has a good solution for building Java & C++ on both Linux and Windows?
Do you use a Continuous Build System like Jenkins?
With Jenkins, your builds can be automatically triggered by check in/commit, time of day, and/or on command. The great thing about Jenkins is that you can have it automatically build all of the various versions of your software.
For example, you'll probably need to run make on Linux C++ but use msbuild on Windows systems, and you'll need to trigger a build on a Linux machine and one for a Windows machine. Jenkins can be setup to do this automatically. When a commit happens, all your various builds on all of your systems can be triggered at once. Then, you can store the builds you need on Jenkins and have your users actually pull the type they need off the project they need.
There are so many ways this could be set up, but the easiest is to simply create four separate jobs (one for Java 32-bit, one for Java 64-bit, one for C++ Linux, and one for C++ Windows). You don't necessarily need a separate Windows Java build (at least in theory), but there's nothing stopping you.
You can have a single Jenkins server run "slave" jobs on other build systems, so you could have Jenkins live on the 64-bit Linux system but use a 32-bit Linux system as a slave to do the 32-bit build, and call a Windows slave to do the Visual Studio build. That way, all of your jobs are located in a central place, but you can use the environments you want.
If you've never used a Continuous Build system, download Jenkins and play around with it. It's free and open source, and very, very easy to use. You can run it on any machine that has a JDK or JRE 1.6. If you download the Windows version, it even comes with the JRE already built in.
Your best bet is to use a continuous build system and allow it to handle the mess. By the way, there are also Bamboo, CruiseControl, and Hudson (from which Jenkins was forked a few months ago).
TeamCity should fit the bill very well. It supports Ant and MSBuild natively and has a pretty good cross-platform story (written in Java, but with excellent integration with e.g. Windows).
I don't see any benefit in wrapping your Windows MSBuild-based builds in yet another build system.
The list for this looks a little bit different (in my opinion):
Java - Maven for all platforms
C++ - Maybe Maven as well (check http://duns.github.com/maven-nar-plugin/).

Automated Build and Deploy of Windows Services

How would you implement an automated build and deploy system for Windows services? Things to keep in mind:
The service will have to be stopped on the target machine.
The service entry in the Windows registry might need to be created/updated.
Some, but not all, of the services might need to be automatically started.
I am willing to use TFS for this, but it isn't a requirement. The target machines will always be development machines, we won't be doing this for production servers.
The automated build part can be done in multiple ways - TFS, TeamCity (what we use), CruiseControl.NET, etc. That in turn could call a build script in NAnt (again, what we use), MSBuild, etc.
As for stopping and installing a service remotely, see How to create a Windows service by using Sc.exe. Note that you could shell/exec out to this from your build script if there isn't a built-in task. (I haven't tried this recently, so do a quick spike first to make sure it works in your environment.)
Alternatively, it's probably possible (and likely more elegant) in Windows PowerShell 2.0.
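A sketch of the sc.exe calls such a deploy script would make (machine, service, and path names are placeholders):

    rem Stop, (re)register and start the service on a remote development machine
    sc \\DEVBOX01 stop MyService
    sc \\DEVBOX01 delete MyService
    sc \\DEVBOX01 create MyService binPath= "C:\Services\MyService.exe" start= auto
    rem use start= demand instead for services that should not start automatically
    sc \\DEVBOX01 start MyService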
