Difference between `cargo build`, `cargo build-bpf` and `cargo build-sbf` - rust-cargo

I am new to blockchain development and am using Solana/Anchor/Cargo/Rust to build a project. I am confused about these commands: what is the difference between cargo build, cargo build-bpf and cargo build-sbf?

cargo build - Unless directed to build for a different target, this compiles your Rust source for the host machine and emits a native binary or library.
cargo build-bpf - Solana programs (smart contracts) are compiled to BPF (Berkeley Packet Filter) bytecode, and the Solana runtime executes your smart contract in a BPF VM. This command is deprecated in favor of:
cargo build-sbf - The current command. It targets SBF (Solana Bytecode Format), Solana's more comprehensive successor to the BPF target, with a newer toolchain and matching VM/interpreter support.
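For context, the kind of program these commands compile looks roughly like the sketch below. This is a minimal hello-world-style example assuming the solana-program crate and the usual crate-type = ["cdylib", "lib"] setup in Cargo.toml, not a complete project:

use solana_program::{
    account_info::AccountInfo,
    entrypoint,
    entrypoint::ProgramResult,
    msg,
    pubkey::Pubkey,
};

// Register process_instruction as the program's entrypoint.
entrypoint!(process_instruction);

// Called by the runtime for every instruction sent to this program.
pub fn process_instruction(
    _program_id: &Pubkey,
    _accounts: &[AccountInfo],
    _instruction_data: &[u8],
) -> ProgramResult {
    msg!("Hello from an on-chain program");
    Ok(())
}

With that layout, cargo build compiles the crate as a normal host library (handy for unit tests), while cargo build-sbf produces the shared object of SBF bytecode (typically under target/deploy) that you deploy to the cluster.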

Related

cargo install binary with symlinks

I have a crate which compiles into a single binary. After installing the binary foo via cargo install, I would like to automatically symlink foo to bar inside $HOME/.cargo/bin. Currently I do this manually, and I was wondering whether cargo can do this for me or whether I should use make, a shell script, or something similar for this task.
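For illustration, here is a minimal sketch of what that manual step looks like as a tiny helper program; the foo/bar names come from the question, the $HOME/.cargo/bin path is assumed, and the sketch is Unix-only (cargo itself provides nothing like this):

use std::os::unix::fs::symlink;
use std::path::PathBuf;

fn main() -> std::io::Result<()> {
    // Resolve ~/.cargo/bin; HOME is assumed to be set.
    let home = std::env::var("HOME").expect("HOME not set");
    let bin = PathBuf::from(home).join(".cargo").join("bin");

    // Create bar as a symlink pointing at foo, mirroring the manual step.
    symlink(bin.join("foo"), bin.join("bar"))?;
    Ok(())
}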

Is there a way to integrate an arbitrary build command into a Cargo build?

At the end of a cargo build, I would like to call wasm-opt with specific optimization options on the generated WASM file.
Unfortunately, it seems that the Cargo.toml does not support asyncify options.
A good solution would prevent cargo from rebuilding the project after running wasm-opt on the WASM file.
If I used a Cargo build script, it is unclear to me how I could express a dependency on the wasm-opt step so that nothing is rebuilt when the Rust source files haven't changed. Any pointers?
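One possible pattern (a sketch, not something Cargo supports natively) is to drive the post-processing from a small wrapper binary outside Cargo's own build graph, and only rerun wasm-opt when the .wasm artifact is newer than the optimized copy. The crate name, target paths and wasm-opt flags below are placeholders:

use std::fs;
use std::process::Command;

fn main() {
    // Build the wasm artifact first; the target/profile paths are placeholders.
    let status = Command::new("cargo")
        .args(["build", "--release", "--target", "wasm32-unknown-unknown"])
        .status()
        .expect("failed to run cargo build");
    assert!(status.success());

    let input = "target/wasm32-unknown-unknown/release/app.wasm";      // placeholder
    let output = "target/wasm32-unknown-unknown/release/app.opt.wasm"; // placeholder

    // Skip wasm-opt when the optimized file is already newer than the input,
    // so unchanged sources don't trigger redundant optimization passes.
    let needs_opt = match (fs::metadata(input), fs::metadata(output)) {
        (Ok(i), Ok(o)) => i.modified().unwrap() > o.modified().unwrap(),
        _ => true,
    };

    if needs_opt {
        let status = Command::new("wasm-opt")
            .args(["-O2", "--asyncify", "-o", output, input])
            .status()
            .expect("failed to run wasm-opt");
        assert!(status.success());
    }
}

Writing the optimized module to a separate file also leaves the artifact Cargo produced untouched, which sidesteps the concern about spurious rebuilds.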

How to build two versions with webdev build?

Is it possible to compile two versions when building?
I would like to have a build/ directory which is the production release for end users and a buildDev/ directory built with less restrictive dart2js flags, something like that.
The idea is to be able to access a page with some security token that forces the page to use the dart2js debug version.
This is not possible to do with a single build command. webdev build --no-release will use dev_options instead of release_options, so to get both flavors of the build you'll need to run two commands: webdev build; webdev build --no-release -o buildDev.

Build versus Runtime Dependencies in Nix

I am just starting to get to grips with Nix, so apologies if I missed the answer to my question in the docs.
I want to use Nix to setup a secure production machine with the minimal set of libraries and executables. I don't want any compilers or other build tools present because these can be security risks.
When I install some packages, it seems that they depend only on the minimal set of runtime dependencies. For example, if I install apache-tomcat-8.0.23 then I get a Java runtime (JRE) and the pre-built JAR files comprising Tomcat.
On the other hand, some packages seem to include a full build toolchain as dependencies. Taking another Java-based example, when I install spark-1.4.0, Nix pulls down the Java Development Kit (JDK), which includes a compiler, and it also pulls in the Maven build tool, etc.
So, my questions are as follows:
Do Nix packages make any distinction between build and runtime dependencies?
Why do some packages appear to depend on build tools whereas others only need runtime? Is this all down to how the package author wrapped up the application?
If a package contains build dependencies that I don't want, is there anything that I, as the operator, can do about it except design my own alternative packaging for the same application?
Many thanks.
The runtime dependencies are a subset of the build-time dependencies that Nix determines automatically by scanning the generated output for the hash part of each build-time dependency's store path. For example, if you build a package using the compiler /nix/store/abcdef...-foo-1.20, then Nix will scan all files in the generated output for the hash bit abcdef.... If that hash is found, then the output is assumed to reference the compiler in some way, so it's kept as a runtime dependency. If that hash does not occur, however, then the generated output has no reference to the compiler and therefore cannot access it at runtime, so foo-1.20 is treated as a build-time-only dependency.
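To make that scan concrete, here is a rough illustration of the idea (not Nix's actual implementation): walk every file in the build output and look for the hash component of a build-time dependency's store path as a byte substring. The output path and the abcdef hash are placeholders taken from the example above:

use std::fs;
use std::path::Path;

// Return true if any file under `out` contains `hash` as a byte substring.
// This mirrors the idea of Nix's reference scan, not its real implementation.
fn references_hash(out: &Path, hash: &[u8]) -> std::io::Result<bool> {
    for entry in fs::read_dir(out)? {
        let path = entry?.path();
        if path.is_dir() {
            if references_hash(&path, hash)? {
                return Ok(true);
            }
        } else {
            let bytes = fs::read(&path)?;
            if bytes.windows(hash.len()).any(|w| w == hash) {
                return Ok(true);
            }
        }
    }
    Ok(false)
}

fn main() -> std::io::Result<()> {
    // Hypothetical build output and dependency hash.
    let out = Path::new("/nix/store/xyz-mypackage-1.0");
    if references_hash(out, b"abcdef")? {
        println!("output references foo-1.20: keep it as a runtime dependency");
    } else {
        println!("no reference found: foo-1.20 is build-time only");
    }
    Ok(())
}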
Some packages record large parts of their build environment for informational/debugging purposes. Perl, for example, stores every little detail about the tools used to compile it, so all those store paths end up being treated as runtime dependencies even though Perl doesn't actually need them at runtime; Nix can't know that, it just knows that the Perl store path references those tools. Nixpkgs maintainers usually make an effort to clean that up, e.g. by pruning the logfile that contains all those store paths from the installation, but there are surely still plenty of packages that haven't been optimized in that way yet.
Let's assume that you'd like to compile a version of openssh that does not depend on PAM. Then you can remove the build input from the expression by means of an override, i.e. you replace the pam argument that's normally passed to the openssh build function with null. To do that, store the following file in ~/.nixpkgs/config.nix
{
  packageOverrides = super: let self = super.pkgs; in {
    openssh-without-pam = super.openssh.override {
      pam = null;
    };
  };
}
and now install that package by running:
$ nix-env -f "<nixpkgs>" -iA openssh-without-pam

How can I run custom tools from a premake build script?

I'm using protocol buffers for data serialization in my C++ application. I would like to add the invocation of the protoc code generator to my premake build script (thus ensuring that the generated classes stay up to date and avoiding the need to store generated source under version control).
Even their FAQ has a question and answer about this, but the answer is very incomplete for me. Having the ability to call any Lua function is great, but where exactly do I put that call? I need to run the protoc compiler before building either the application or the unit tests.
You can certainly call outside code from Premake scripts. But remember: Premake scripts are used to generate build files: Makefiles, C++ projects, etc. The Premake script is run before building the project.
If you want this preprocessing step to run outside of the actual build files (and not by make, VC++, Code::Blocks, etc.), then it's easy: Lua's os.execute will execute a command line.
Premake scripts are still Lua scripts. All of the Premake commands are just Lua calls into functions that Premake defines. Premake executes the scripts, then uses the data from them to generate the build files. So all of your Lua code is run during the execution of the script. Where you put this command in your script is irrelevant; wherever it is, it will execute before your build files are generated.
And if you want to run the protoc step during the build (from VC++, makefile, etc.) you can set up a prebuild command. See http://industriousone.com/prebuildcommands for more info and an example.
