Difference between "cargo build" and "anchor build" - rust-cargo

I am following Rust tutorials online, and I found that some websites use the cargo build command while others use anchor build to build the project.
What is the difference between these two commands?

Cargo is Rust's build system and package manager.
Anchor is a framework specifically for Solana programs written in Rust. It adds features for a better development experience and is similar to the Truffle framework for Ethereum.
With Anchor you can build programs quickly because it writes various boilerplate for you, such as (de)serialization of accounts and instruction data.
In Anchor projects you annotate a context struct with #[derive(Accounts)] and the framework generates the account (de)serialization and validation code for you, as sketched below. Compare the serialization code of an Anchor project with that of a plain Cargo project and you will see how tedious it is without Anchor.
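A minimal sketch of what that looks like, assuming the anchor_lang crate; the Counter account and Initialize context are hypothetical names for illustration:

```rust
use anchor_lang::prelude::*;

// #[derive(Accounts)] generates the deserialization and validation code
// for every account in this struct; without Anchor you write it by hand.
#[derive(Accounts)]
pub struct Initialize<'info> {
    // Anchor creates the account, checks the payer, and reserves space.
    #[account(init, payer = user, space = 8 + 8)]
    pub counter: Account<'info, Counter>,
    #[account(mut)]
    pub user: Signer<'info>,
    pub system_program: Program<'info, System>,
}

// #[account] derives the (de)serialization for the account's data layout.
#[account]
pub struct Counter {
    pub count: u64,
}
```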

Under the hood, anchor build runs cargo build-bpf and then extracts the program's IDL from the code in src/lib.rs.
cargo build-bpf (now cargo build-sbf) differs from cargo build in that it specifically builds a Solana on-chain program, not a general binary or library that can be used on your system.
The IDL is an important feature of Anchor: it exposes the program's interface so it can be consumed by any client.
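Roughly, the equivalent manual steps look like this (a sketch; exact flags vary across Anchor versions, and my_program.json is a placeholder name):

```sh
# What anchor build does, approximately:
cargo build-sbf                     # compile the program to on-chain bytecode
anchor idl parse --file src/lib.rs > target/idl/my_program.json  # extract the IDL
```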

Related

GitHub Action to publish binaries compiled from docker buildx for multiple platforms

I am working on a project where I am creating multi-stage Docker builds that compile C++ code as part of the build stages, using buildx and build-push, for amd64, arm64, armv7, etc. on Debian.
I would also like to publish the compiled binaries to GitHub releases for the various platforms, along with publishing the final docker image that also contains the compiled code.
I am aware of methods to e.g. cat / pipe data out of a container.
What I'd like to know is whether there is a standard, GitHub Actions-integrated way to publish content compiled in Docker containers to GitHub, or whether I need to manually copy the content out of the containers after building them.
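For reference, the "copy data out of a container" approach mentioned above does not even require a running container with buildx; this is a sketch using the real --output flag, with the platform list and output directory as placeholders:

```sh
# Write the final build stage's filesystem to ./out instead of an image;
# with multiple platforms, buildx creates one subdirectory per platform.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --output type=local,dest=./out \
  .
```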

Bazel: Mixing a Linux remote execution platform with a Mac OS local platform

Goal
I am using Bazel to build a multiplatform C++ client (iOS, OSX, Android, Windows).
iOS and OSX are built locally on my Mac (out of necessity). Android and Windows are built inside a Docker container.
At the end of the build I have a Bazel rule that takes each cc_binary rule for each platform and puts them in a .zip.
I'd like to utilize Bazel's remote execution API to build some of my binaries in the container and others locally and then reference a shared cache to collate them together -- all with one bazel build command.
Bazel support
Bazel claims that these types of multiplatform builds -- where the host (OSX x64), execution (Linux x64), and target platforms (many) are all different -- are possible.
See https://docs.bazel.build/versions/master/platforms.html
My experience
However, I hit this exact issue: https://github.com/bazelbuild/bazel/issues/5397 (where docker-sandbox is a correct proxy for remote builds.)
This, alongside the below Github issue, makes me question Bazel's claim about multiplatform builds.
https://github.com/bazelbuild/bazel/issues/5309
Fundamentally, these issues seem to say that local targets for one platform (e.g., OSX) cannot be built alongside remote targets on another platform (e.g. Linux).
Question
I was wondering:
(1) Is what I am trying to do fundamentally at odds with Bazel's design? If so, what is meant by Bazel being multiplatform?
(2) Is there a workaround I can employ that maintains hermeticity and stays within the Bazel build system? It could be possible to mount a Docker volume and then write a script that combines the Docker cache with my local cache, but it seems like Bazel was built to handle my use case. Am I missing something here?
Related questions: Does bazel support remote execution on different platforms? (Doesn't have a satisfactory answer.)
(1) Is what I am trying to do fundamentally at odds with Bazel's design?
In theory no, in practice yes. Bazel provides functionality which allows users to support your use-case, but it is not implemented by default.
Specifically, as described in the linked Bazel issues, Bazel rules currently make assumptions about the relationship between the host and target platforms that don't hold in your case. For example, Bazel auto-detects the JDK files on your host (macOS) and then defaults to using those JDK files across all Java actions, regardless of target platform.
If so, what is meant by Bazel being multiplatform?
In practice, it means that you can run bazel build ... on multiple platforms and expect that Bazel will transform your inputs into outputs compatible for the current platform.
(2) Is there a workaround I can employ that maintains hermeticity and stays within the Bazel build system?
Yes, you can run bazel build ... from within a Windows VM or Docker container. This was the workaround that the Bazel team recommended when I asked this question.
Relevant advanced Bazel features:
If you want to build for multiple target platforms with one Bazel invocation, have a look at Bazel user-defined transitions; these let you build the same rule for multiple platforms, e.g. iOS and macOS at once, but require you to write your own rules (see the sketch after this list).
If you don't want to run bazel build from within containers/VMs, you can write your own C++ toolchain. At its core, Bazel gives each action a sandbox with all dependent files and guarantees that it will execute a specific command. In a custom C++ toolchain, you could tell Bazel to call a script instead of clang; the script takes the command plus the files and executes them from within a VM or container. This is likely a lot of work, but it is definitely possible.
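A minimal sketch of a user-defined transition in Starlark; //command_line_option:platforms is the real build setting, while the platform_binary rule and its attributes are hypothetical names for illustration:

```python
# defs.bzl -- sketch of a rule that pins a dependency to a fixed platform.
def _force_platform_impl(settings, attr):
    # Rebuild the dependency for the platform named on this rule,
    # regardless of what the command line requested.
    return {"//command_line_option:platforms": attr.platform}

force_platform = transition(
    implementation = _force_platform_impl,
    inputs = [],
    outputs = ["//command_line_option:platforms"],
)

def _platform_binary_impl(ctx):
    # An attribute carrying a transition is wrapped in a list, hence [0].
    return [DefaultInfo(files = ctx.attr.binary[0][DefaultInfo].files)]

platform_binary = rule(
    implementation = _platform_binary_impl,
    attrs = {
        "binary": attr.label(cfg = force_platform),
        "platform": attr.string(),
        # Bazel requires this allowlist attribute on rules using transitions.
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
    },
)
```

One platform_binary target per platform could then feed the .zip rule described in the question.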

Using build system to run tests or interact with clusters

What is the purpose of projects like these below that use Bazel for things other than building software?
https://github.com/bazelbuild/rules_webtesting
https://github.com/bazelbuild/rules_k8s
Are they just conveniently providing an environment for the run command (as opposed to building portable executables), or am I missing something?
The best I can tell is that Bazel could be used to run only a subset of E2E tests based on knowledge of what changed.
Disclaimer: I have only cursory knowledge about k8s and docker.
Bazel isn't just used for building and testing, it can also deploy, as you've discovered with the rules in those projects.
The best I can tell is that Bazel could be used to run only a subset of E2E tests based on knowledge of what changed.
Correct, but it also extends to deployments. If you've only changed a single string in your Go binary that's injected into an image, Bazel is able to use rules_k8s, rules_docker, and rules_go to do the following (a BUILD sketch follows the list):
incrementally and reproducibly rebuild the minimum set of files needed to build the new Go executable
create a new image layer containing the Go executable (without using Docker)
push the image to the registry
redeploy the changed pod(s) into the cluster
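A hedged BUILD sketch of how the three rule sets chain together; the load paths follow the rules' documentation, but the target names, image tag, and YAML template are placeholders:

```python
load("@io_bazel_rules_go//go:def.bzl", "go_binary")
load("@io_bazel_rules_docker//go:image.bzl", "go_image")
load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

# The Go binary Bazel rebuilds incrementally.
go_binary(
    name = "server",
    srcs = ["main.go"],
)

# A new image layer containing the binary, built without a Docker daemon.
go_image(
    name = "server_image",
    binary = ":server",
)

# Resolves the image, pushes it, and can apply the manifest to the cluster.
k8s_object(
    name = "deploy",
    kind = "deployment",
    template = ":deployment.yaml",
    images = {"gcr.io/my-project/server:latest": ":server_image"},
)
```

bazel run :deploy.apply then pushes the image and applies the manifest; because digests are reproducible, an unchanged source tree yields an unchanged deployment.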
The notable thing is that if you didn't change the source file, Bazel will always create an image with the same digest due to its reproducibility. That means that you can now trust the deployment workflow to not redeploy/restart a pod even if you do a bazel run twice or more.
For more, check out this BazelCon 2017 talk: Using Bazel for Fast, Correct Docker Deployments w/ Databricks
Fun fact: you can also use bazel run to start a REPL for your build target, from 0.15.0 onwards. Haskell and Scala rules use this.

Makefiles in iOS build using jenkins

I'm new to makefiles and Jenkins. Is there any guide on how to write a makefile to run the build and the unit tests together using Jenkins?
You can definitely use Makefiles to build and run both your application/library and your tests; a minimal sketch follows.
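A minimal sketch assuming an Xcode project; the project, scheme, and simulator names are placeholders, and a Jenkins job would simply run make all as a shell build step (recipe lines must be indented with a tab):

```make
# Build the app, then run the unit tests, via xcodebuild.
.PHONY: all build test

all: build test

build:
	xcodebuild -project MyApp.xcodeproj -scheme MyApp build

test:
	xcodebuild -project MyApp.xcodeproj -scheme MyApp \
		-destination 'platform=iOS Simulator,name=iPhone 8' test
```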
Here is a good guide to Makefiles:
http://mrbook.org/tutorials/make/
It should help you with writing a simple makefile. For more information, Google is your friend.
Another good guide is here:
http://www.cs.swarthmore.edu/~newhall/unixhelp/howto_makefiles.html
Remember, Jenkins and makefiles are completely unrelated. You can use Jenkins with makefiles, and makefiles without Jenkins. One is a continuous integration system; the other is just another way of building your software.
You can go ahead and use Xcode Server as suggested in the other post, but Jenkins has advantages that many other systems don't: it is extensible through a whole host of plugins, has a large user and developer community, and is used for many types and styles of projects in various languages. While your project is purely for iOS, there are other things in Jenkins you could take advantage of from the available plugins list.
There is an XCode plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Xcode+Plugin
Maybe this helps too:
http://programming.oreilly.com/2013/04/upward-mobility-automating-ios-builds-with-jenkins.html
But maybe you are better off using Xcode Server if you are trying to do continuous integration:
https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/xcode_guide-continuous_integration/200-Adopting_a_Continuous_Integration_Workflow/adopt_continuous_integration.html

What is the minimal agent install footprint for Delphi build automation?

When creating a build server that does clean version control check-outs and full system builds of everything in a given source repository or project, what is the minimum required Delphi install footprint for XE3 Win32/Win64 projects? (Core system - not 3rd party components)
I'd prefer to have a small subset of files that can be included in a repository rather than a full Delphi install.
Using the command-line compilers, you do not need to install the IDE on the remote agent computer. From a running installation, copy the \Bin and \Lib sub-folder content onto your remote agent.
Then run the DCC32.exe command-line compiler, editing the DCC32.CFG file content to point to all the needed source code paths. Do not forget to set a temporary folder for the generated .dcu files, and specify a destination folder for the .exe.
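A hedged sketch of what the agent's build step might look like; all paths and the project name are placeholders:

```
rem build.bat -- minimal command-line build on the agent.
rem -B forces a full build; -U, -N and -E set the unit search path,
rem the .dcu output folder and the .exe output folder respectively.
C:\buildagent\delphi\Bin\dcc32.exe -B ^
  -UC:\buildagent\delphi\Lib ^
  -NC:\buildagent\out\dcu ^
  -EC:\buildagent\out\bin ^
  MyProject.dpr
```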
See
How do I compile my delphi project on the command line?
and the official documentation of the command-line compiler.
Update: Yes, I know, MSBuild is the "official way". But for a build agent, I found it much easier to work with the command-line compiler. And the question here is about the "minimal footprint" to build Delphi apps.
With DCC32 there is nothing to install and no need to reproduce the same layout as the one in the IDE. In my experience, the build environment should not be tied to the IDE configuration and should be "clean" of any developer-specific settings. It should build your whole application from scratch and from source, run the unit tests, and prepare the release. I've seen stray .dcu or .bpl files pollute the build process; it took hours to find out why a code modification was not being picked up!
If you need a more complex build process, I always prefer to write a few lines of Delphi (or Python) that read the configuration from text files, regardless of the computer the build agent runs on. It is a pain to install Delphi on a computer just for a build (especially recent versions), even when the license allows you to do so, whereas the command-line compiler is safe and fast to set up: just copy the files and run them. If your build agent is a virtual server (as it should be today), your IT department will be pleased that you are not polluting the registry, and may even come to prefer Delphi for its cleanness compared with other frameworks.
Found a source for D2006+D2007:
https://delphi.fandom.com/wiki/Setting_up_a_Delphi_Build_Machine
