cargo install binary with symlinks

I have a crate which compiles into a single binary. After installing that binary, foo, via cargo install, I would like to automatically symlink foo to bar inside $HOME/.cargo/bin. Currently I do this manually, and I was wondering whether this can be done by cargo, or whether I should rather use make, a shell script, or something similar for this task?
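For what it's worth, one cargo-only workaround (not from the question, and it produces a second copy rather than a symlink) is to declare two [[bin]] targets that share the same entry point, since cargo install installs every binary target of the package by default. A minimal Cargo.toml sketch, reusing the names foo and bar from the question:

[package]
name = "foo"
version = "0.1.0"
edition = "2021"

# Both targets build from the same main.rs, so `cargo install --path .`
# puts both `foo` and `bar` into $HOME/.cargo/bin (as separate copies).
[[bin]]
name = "foo"
path = "src/main.rs"

[[bin]]
name = "bar"
path = "src/main.rs"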

Related

Is there a way to integrate an arbitrary build command into a Cargo build?

At the end of a cargo build, I would like to call wasm-opt with specific optimization options on the generated WASM file.
Unfortunately, it seems that Cargo.toml does not support asyncify options.
A good solution would prevent cargo from rebuilding the project after running wasm-opt on the WASM file.
If I used a Cargo build script, it is unclear to me how I could specify a dependency on the wasm-opt build step to avoid unnecessary rebuilds even though the Rust source files haven't changed. Any pointers?
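In case it helps, here is a minimal wrapper sketch (not from the question; the target triple, file names, and wasm-opt flags are assumptions). It only reruns wasm-opt when the .wasm produced by cargo is newer than the optimized copy, and it writes the result outside cargo's output directory so cargo has no reason to rebuild:

#!/usr/bin/env sh
set -e
cargo build --release --target wasm32-unknown-unknown
IN=target/wasm32-unknown-unknown/release/app.wasm   # hypothetical crate/binary name
OUT=dist/app.opt.wasm                               # kept outside target/ so cargo ignores it
mkdir -p dist
# only re-optimize when the input is newer than the existing output
if [ ! -f "$OUT" ] || [ "$IN" -nt "$OUT" ]; then
    wasm-opt -O2 --asyncify "$IN" -o "$OUT"
fi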

Cache Rust dependencies with Docker build and lib.rs

I've been trying to create a Dockerfile for my rust build that would allow me to build the application separately from the dependencies as demonstrated here:
Cache Rust dependencies with Docker build
However, this doesn't seem to be working for me, since my working tree is slightly different because of the lib.rs file. My Dockerfile is laid out like so:
FROM rust:1.60 as build
# create a new empty shell project
RUN USER=root cargo new --bin rocket-example-pro
WORKDIR /rocket-example-pro
# create dummy lib.rs file to build dependencies separately from changes to src
RUN touch src/lib.rs
# copy over your manifests
COPY ./Cargo.lock ./Cargo.lock
COPY ./Cargo.toml ./Cargo.toml
RUN cargo build --release --locked
RUN rm src/*.rs
# copy your source tree
COPY ./src ./src
# build for release
RUN rm ./target/release/deps/rocket_example_pro*
RUN cargo build --release --locked ## <-- fails
# our final base
FROM rust:1.60
# copy the build artifact from the build stage
COPY --from=build /rocket-example-pro/target/release/rocket_example_pro .
# set the startup command to run your binary
CMD ["./rocket_example_pro"]
As you can see, I initially copy over the toml files and perform a build, similar to what was previously demonstrated. However, with my project structure being slightly different, I seem to be having an issue: my main.rs pretty much only has one line, which calls a function in my lib.rs. lib.rs is also declared in my toml file, which gets copied before building dependencies, so I have to touch the lib.rs file for that first build not to fail with the file otherwise missing.
It's the second build step that I can't seem to resolve: after I've copied over the actual source files to build the application, I am getting the error message
   Compiling rocket_example_pro v0.1.0 (/rocket-example-pro)
error[E0425]: cannot find function `run` in crate `rocket_example_pro`
 --> src/main.rs:3:22
  |
3 |     rocket_example_pro::run().unwrap();
  |                         ^^^ not found in `rocket_example_pro`
When performing these steps myself in an empty directory I don't encounter the same errors; instead the last step succeeds, but the produced rocket-example-pro executable still seems to be the shell example project, only printing 'Hello world' and not the rocket application I copy over before the second build.
As far as I can figure, the first build is affecting the second. Perhaps when I touch the lib.rs file in the dummy shell project, it builds the library without the run() method, so when the second build starts, it doesn't see the run method because the library is empty? But this doesn't make much sense to me, as I have copied over the lib.rs file with the run() method inside it.
Here's what the toml file looks like, if it helps:
[package]
name = "rocket_example_pro"
version = "0.1.0"
edition = "2021"

[[bin]]
name = "rocket_example_pro"
path = "src/main.rs"

[lib]
name = "rocket_example_pro"
path = "src/lib.rs"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
...
(I couldn't reproduce this at first. Then I noticed that having at least one dependency seems to be a necessary condition.)
With the line
RUN rm ./target/release/deps/rocket_example_pro*
you're forcing a rebuild of the rocket_example_pro binary. But the library will remain as built from the first, empty file. Try changing it to
RUN rm ./target/release/deps/librocket_example_pro*
Though personally, I think deleting random files from the target directory is a terribly hacky solution. I'd prefer to trigger the rebuild of the lib by adjusting the timestamp:
RUN touch src/lib.rs && cargo build --release --locked ## Doesn't fail anymore
For a clean solution, have a look at cargo-chef.
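For reference, a sketch of the cargo-chef layout adapted to this project, following the pattern from cargo-chef's README (the image tag and binary name are assumptions):

FROM lukemathwalker/cargo-chef:latest-rust-1.60 AS chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# dependencies are built here and cached as their own layer
RUN cargo chef cook --release --recipe-path recipe.json
# now build the application itself
COPY . .
RUN cargo build --release --bin rocket_example_pro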
[Edit:] So what's happening here?
To decide whether to rebuild, cargo seems to compare the mtime of the target/…/*.d to the mtime of the files listed in the content of the *.d files.
Probably, src/lib.rs was created first, and then docker build was run. So src/lib.rs was older than target/release/librocket_example_pro.d, leading to target/release/librocket_example_pro.rlib not being rebuilt after copying in src/lib.rs.
You can partially verify that that's what's happening:
1. With the original Dockerfile, run docker build and see it fail.
2. Run echo >> src/lib.rs outside of docker to update its mtime and hash.
3. Run docker build again; it succeeds.
Note that for step 2, updating mtime with touch src/lib.rs is not sufficient because docker will set the mtime when COPYing a file, but it will ignore mtime when deciding whether to use a cached step.
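If you want to poke at this yourself, a rough sketch of what to inspect inside the build stage (file names taken from the error above; the stat flags are GNU coreutils, as found in the Debian-based rust:1.60 image):

# the dep-info file lists the sources cargo tracked for the library
cat target/release/librocket_example_pro.d
# compare timestamps: if src/lib.rs is older than the .d file, cargo sees nothing to rebuild
stat -c '%y' src/lib.rs target/release/librocket_example_pro.d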

Very slow docker build with go-sqlite3 CGO enabled package

Since I installed go-sqlite3 as a dependency in my Go project, my docker build time has been hovering around 1 minute.
I tried to optimize the build by using go mod download to cache dependencies, but it didn't reduce the overall build time.
Then I found out that go-sqlite3 is a CGO-enabled package, so you are required to set the environment variable CGO_ENABLED=1 and have a gcc compiler present within your path.
So I added go install github.com/mattn/go-sqlite3 as an extra step, and it reduced the build time to ~17s.
I also tried vendoring, but it didn't help with reducing the build time; installing the library explicitly was always necessary to achieve that.
## Build
FROM golang:1.16-buster AS build
WORKDIR /app
# Download dependencies
COPY go.mod .
COPY go.sum .
RUN go mod download
# this reduced the build time to ~17s
RUN go install github.com/mattn/go-sqlite3
COPY . .
RUN go build -o /myapp
But somehow I am still not happy with this solution.
I don't get why adding this package makes my build so long and why I need to explicitly install it in order to avoid such long build times.
Also, wouldn't it be better to install all packages after downloading them?
Do you see any obvious way of improving my current docker build?
The fact of the matter is that the C-based SQLite package just takes a long time to build. I use it myself currently, and yes it's painful every time. I have also been unhappy with it, and have been looking for alternatives. I have been busy with other projects, but I did find this package QL [1], which you can build without C [2]:
go build -tags purego
or if you just need read-only access, you can try SQLittle [3].
[1] https://pkg.go.dev/modernc.org/ql
[2] https://pkg.go.dev/modernc.org/ql#hdr-Building_non_CGO_QL
[3] https://github.com/alicebob/sqlittle
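If you do stay on go-sqlite3, another option not mentioned in the thread is a BuildKit cache mount, which keeps Go's build cache (including the compiled cgo objects) across image builds even after COPY . . invalidates the layer. A rough sketch, assuming BuildKit is enabled for the build:

# syntax=docker/dockerfile:1
FROM golang:1.16-buster AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# /root/.cache/go-build is Go's default build cache for root
RUN --mount=type=cache,target=/root/.cache/go-build go build -o /myapp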

How should I use NPM modules in Jenkins that I don't want in the build itself?

What are the best practices when using development modules such as Mocha or Browserify, which are needed during the build process but not in the built artifact itself? The only options I can see are:
Pre-arranging these modules to be installed globally on the build server (not likely possible in my case)
Running npm install -g explicitly as part of the build process for each module (it feels somewhat wrong to install globally)
Not using the --production flag on npm install (which means these modules end up in the final artifact, increasing its size and defeating the purpose of separating dev and production dependencies)
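One common pattern that avoids all three drawbacks (not from the question; the script names are illustrative): install everything for the build, run the dev tooling, then prune devDependencies before packaging the artifact:

npm ci                    # installs both dependencies and devDependencies
npm test                  # e.g. mocha
npm run build             # e.g. the browserify bundle step
npm prune --production    # strips devDependencies before the artifact is packaged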

Build the 'protobuf' compiler from source and use it with its shared objects from within CMake

I'm using a CMake build in a Jenkins environment and want to build the protobuf compiler from source.
This all works, but in the next step I'm trying to use the compiler to translate my proto files, which doesn't work because it cannot find its own shared objects. I've tried defining the search path in the CMakeLists.txt file, but it won't detect the shared object location in my repository tree, $PRJ_MIDDLEWARE/protobuf/lib. I've tried telling cmake or the system where to search by defining:
set(CMAKE_LIBRARY_PATH ${CMAKE_LIBRARY_PATH} "$ENV{PRJ_MIDDLEWARE}/protobuf/lib")
set(ENV{LD_LIBRARY_PATH} "$ENV{PRJ_MIDDLEWARE}/protobuf/lib:$ENV{LD_LIBRARY_PATH}")
But it always fails when trying to invoke the protoc compiler I just built. I tried invoking ldconfig from CMake, but that doesn't work because the jenkins user doesn't have the rights to do this. Currently my only solution is to log in to the build server and do this manually as root. But that is not how I want to do this... when the next release moves to a new directory, this has to be done again. Do I have other options? Preferably directly from CMake, from Jenkins, or maybe even Protocol Buffers?
Thank you
Two ideas come to mind:
Build the protobuf compiler as a static binary (I don't know if that's possible, but it usually is).
Set the LD_LIBRARY_PATH environment variable before invoking cmake so that it points to the location of the protoc shared libraries.
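A minimal sketch of the second idea, assuming the layout from the question ($PRJ_MIDDLEWARE/protobuf/lib) and an out-of-source build directory (the -S/-B form needs a reasonably recent CMake):

# make the freshly built protoc find its own shared objects,
# without needing root or ldconfig
export LD_LIBRARY_PATH="$PRJ_MIDDLEWARE/protobuf/lib:$LD_LIBRARY_PATH"
cmake -S . -B build
cmake --build build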
