Cache Rust dependencies with Docker build and lib.rs

I've been trying to create a Dockerfile for my rust build that would allow me to build the application separately from the dependencies as demonstrated here:
Cache Rust dependencies with Docker build
However, this doesn't seem to be working for me, since my working tree is slightly different because of the lib.rs file. My Dockerfile is laid out like so:
FROM rust:1.60 as build
# create a new empty shell project
RUN USER=root cargo new --bin rocket-example-pro
WORKDIR /rocket-example-pro
# create dummy lib.rs file to build dependencies separately from changes to src
RUN touch src/lib.rs
# copy over your manifests
COPY ./Cargo.lock ./Cargo.lock
COPY ./Cargo.toml ./Cargo.toml
RUN cargo build --release --locked
RUN rm src/*.rs
# copy your source tree
COPY ./src ./src
# build for release
RUN rm ./target/release/deps/rocket_example_pro*
RUN cargo build --release --locked ## <-- fails
# our final base
FROM rust:1.60
# copy the build artifact from the build stage
COPY --from=build /rocket-example-pro/target/release/rocket_example_pro .
# set the startup command to run your binary
CMD ["./rocket_example_pro"]
As you can see, I initially copy over the manifest files and perform a build, similar to the linked example. However, with my project structure being slightly different, I seem to be having an issue: my main.rs pretty much only has one line, which calls the run() function in my lib.rs. lib.rs is also declared in my Cargo.toml, which gets copied before the dependencies are built, so I have to touch a dummy src/lib.rs for that first build not to fail with the file otherwise missing.
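For reference, the two source files look roughly like this (a sketch reconstructed from the error message below; the exact signature of run() is an assumption, and the real lib.rs of course contains the Rocket setup):
// src/main.rs
fn main() {
    rocket_example_pro::run().unwrap();
}
// src/lib.rs (sketch)
pub fn run() -> Result<(), Box<dyn std::error::Error>> {
    // ... build and launch the Rocket instance here ...
    Ok(())
}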
It's the second build step that I can't seem to get past: after I've copied over the actual source files to build the application, I am getting the error message
Compiling rocket_example_pro v0.1.0 (/rocket-example-pro)
error[E0425]: cannot find function `run` in crate `rocket_example_pro`
--> src/main.rs:3:22
|
3 | rocket_example_pro::run().unwrap();
| ^^^ not found in `rocket_example_pro`
When performing these steps myself in an empty directory I don't encounter the same errors; instead the last step succeeds, but the produced rocket-example-pro executable still seems to be the shell example project, only printing 'Hello world' and not the rocket application I copy over before the second build.
As far as I can figure, the first build is affecting the second. Perhaps when I touch the lib.rs file in the dummy shell project, it gets built without the run() function, so when the second build starts it doesn't see run() because the compiled library is empty? But this doesn't make much sense to me, as I have copied over the lib.rs file with the run() function inside it.
Here's what the Cargo.toml file looks like, if it helps:
[package]
name = "rocket_example_pro"
version = "0.1.0"
edition = "2021"
[[bin]]
name = "rocket_example_pro"
path = "src/main.rs"
[lib]
name = "rocket_example_pro"
path = "src/lib.rs"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
...

(I couldn't reproduce this at first. Then I noticed that having at least one dependency seems to be a necessary condition.)
With the line
RUN rm ./target/release/deps/rocket_example_pro*
you're forcing a rebuild of the rocket_example_pro binary. But the library will remain as built from the initial empty file. Try changing it to
RUN rm ./target/release/deps/librocket_example_pro*
Though personally, I think deleting random files from the target directory is a terribly hacky solution. I'd prefer to trigger the rebuild of the lib by adjusting the timestamp:
RUN touch src/lib.rs && cargo build --release --locked ## Doesn't fail anymore
For a clean solution, have a look at cargo-chef.
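A cargo-chef based Dockerfile for this project could look roughly like the following (a sketch adapted from the pattern in the cargo-chef README to the names used above, untested here):
FROM rust:1.60 as chef
RUN cargo install cargo-chef
WORKDIR /rocket-example-pro
FROM chef as planner
COPY . .
# produce recipe.json, a description of the dependency graph
RUN cargo chef prepare --recipe-path recipe.json
FROM chef as builder
COPY --from=planner /rocket-example-pro/recipe.json recipe.json
# build dependencies only; this layer stays cached until Cargo.toml/Cargo.lock change
RUN cargo chef cook --release --recipe-path recipe.json
# copy the real sources and build the application itself
COPY . .
RUN cargo build --release --locked
FROM rust:1.60
COPY --from=builder /rocket-example-pro/target/release/rocket_example_pro .
CMD ["./rocket_example_pro"]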
[Edit:] So what's happening here?
To decide whether to rebuild, cargo seems to compare the mtime of the target/…/*.d files to the mtime of the files listed in the contents of those *.d files.
Probably, src/lib.rs was created first, and then docker build was run. So src/lib.rs was older than target/release/librocket_example_pro.d, leading to target/release/librocket_example_pro.rlib not being rebuilt after copying in src/lib.rs.
You can partially verify that that's what's happening:
1. With the original Dockerfile, run the build and see the second cargo build fail.
2. Run echo >> src/lib.rs outside of Docker to update its mtime and hash.
3. Run the build again; this time it succeeds.
Note that for step 2, updating mtime with touch src/lib.rs is not sufficient because docker will set the mtime when COPYing a file, but it will ignore mtime when deciding whether to use a cached step.
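In shell terms, the verification sequence is roughly this (a sketch; the image tag is arbitrary):
docker build -t rocket-example-pro .   # 1. second cargo build fails with E0425
echo >> src/lib.rs                     # 2. append a newline, changing the file's hash (and mtime)
docker build -t rocket-example-pro .   # 3. the COPY layer is invalidated, lib.rs is rebuilt, the build succeeds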

Related

How to prevent Docker from re-running when the only change is the Dockerfile?

I am trying to make some changes to my Dockerfile and it's causing the build to re-trigger every time. Here is a simplified Dockerfile that I am using:
FROM ...
COPY . .
RUN ./build.sh
RUN <other command>
I am currently trying to modify everything after ./build.sh, but it keeps getting re-run because of the COPY . . command, since technically the build context has been modified. I have tried adding the Dockerfile to .dockerignore, but this still triggers a rebuild. This wouldn't really be an issue, but the build takes upwards of 30 min, so it's quite annoying.
I guess I could specify files and folders in the copy such that the Dockerfile is not copied but that could get unwieldy as the number of files and folders grow.
I'm not sure what else to try. Any suggestions?
Adding the Dockerfile to .dockerignore will cause a cache miss on the next build, since the Dockerfile is no longer in the context like it was before. However, the build after that will not have the issue.
I'd also recommend moving the COPY . . as late as possible in the Dockerfile, since any change to the context will trigger a cache miss on that line. So if you have later steps that install dependencies which rarely change and can run earlier, move those RUN lines towards the top of the file.
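Applied to the simplified Dockerfile from the question, the reordering could look roughly like this (a sketch; the dependency-install step is hypothetical and stands in for whatever rarely-changing setup the real build needs):
FROM ...
# rarely-changing dependency installation first, so this layer stays cached
RUN <install dependencies>
# copy the full context as late as possible; only the steps below this line
# are re-run when a file in the context changes
COPY . .
RUN ./build.sh
RUN <other command>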

In Dockerfile, COPY all contents of current directory except one directory

In my Dockerfile, I have the following:
COPY . /var/task
...which copies my app code into the image.
I need to exclude the vendor/ directory when performing this copy.
I cannot add vendor/ to .dockerignore, because that directory needs to be part of the image; it gets built within the image with a RUN composer install.
I cannot specify every file and directory that should be copied, because they may change and I can't rely on other developers to keep the list updated.
I've tried the following, with the following errors:
COPY [^vendor$]* /var/task
When using COPY with more than one source file, the destination must be a directory and end with a /
COPY [^vendor$]*/ /var/task
COPY failed: no source files were specified
It is actually enough to add the vendor directory to the .dockerignore file.
You can broadly follow the flow of files through docker build in three phases:
docker build reads files from the directory you name, ignoring things in the .dockerignore file, and sends them to the Docker daemon as the build context.
The COPY instruction copies files from the build context into the container filesystem.
RUN instructions do further transformation or processing.
If you put vendor in the .dockerignore file, it prevents the directory from being included in the build context. The build will go somewhat faster, and COPY won't have the files to copy into the image. It won't prevent a RUN composer install step later on from creating its own vendor directory in the image.
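So for this case, a one-line entry is all that's needed (sketch):
# .dockerignore
vendor/
The COPY . /var/task then simply won't have a vendor/ directory to copy, and the later RUN composer install recreates it inside the image.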
I don't think there is an easy solution to this problem.
If you need vendor for RUN composer install and you're not using a multi-stage build, then it doesn't matter whether you leave the vendor folder out of the COPY command: if you've copied it into the build in an earlier layer, it's going to be present in your final image even if you don't copy it over in your COPY step.
One way to get around this is with multi-stage builds, like so:
FROM debian as base
COPY . /var/task/
RUN rm -rf /var/task/vendor
FROM debian
COPY --from=base /var/task /var/task
If you can use this pattern in your larger build file then the final image will contain all the files in your working directory except vendor.
There's still a performance hit though. You're still going to have to copy the entire vendor directory into the build, and depending on what docker features you're using that will still take a long time. But if you need it for composer install then there's really no way around this.

How can I cache a nix derivations's dependencies when built via Docker?

FROM nixos/nix@sha256:af330838e838cedea2355e7ca267280fc9dd68615888f4e20972ec51beb101d8
# FROM nixos/nix:2.3
ADD . /build
WORKDIR /build
RUN nix-build
ENTRYPOINT /build/result/bin/app
I have the very simple Dockerfile above that can successfully build my application. However, each time I modify any of the files within the application directory (.), it has to rebuild from scratch and download all the Nix store dependencies again.
Can I somehow grab a "list" of the store dependencies that get downloaded and add them in at the beginning of the Dockerfile, so they are cached independently (with the ultimate goal of saving time and bandwidth)?
I'm aware I could build this Docker image using Nix natively, which has its own caching functionality (well, the Nix store), but I'm trying to keep this buildable in a non-Nix environment (hence using Docker).
I can suggest splitting the source into two parts. The idea is to create a separate Docker layer with the dependencies only, which changes rarely:
FROM nixos/nix:2.3
ADD ./default.nix /build/default.nix
# if you have any other Nix files, put them to ./nix subdirectory
ADD ./nix /build/nix
# now let's download all the dependencies
RUN nix-shell /build/default.nix --run exit
# At this point, Docker has cached all the dependencies. We can perform the build
ADD . /build
WORKDIR /build
RUN nix-build
ENTRYPOINT /build/result/bin/app

How to run tests in Dockerfile using xunit

So I have an ASP.NET project in a folder (src) and a test project in a folder right next to the other folder (tests). What I am trying to achieve is to be able to run my tests and deploy the application using docker, however I am really stuck.
Right now there is a Dockerfile in the src folder, which builds the application and deploys it just fine. There is also a Dockerfile for the test project in the tests folder, which should just run my tests.
The tests/Dockerfile currently looks like this:
FROM microsoft/dotnet:2.2.103-sdk AS build
WORKDIR /tests
COPY ["tests.csproj", "Tests/"]
RUN dotnet restore "Tests/tests.csproj"
WORKDIR /tests/Tests
COPY . .
RUN dotnet test
But if I run docker build, the tests fail; I am guessing because the application code to be tested is missing.
I am getting a lot of:
The type or namespace name 'MyService' could not be found (are you missing a using directive or an assembly reference?)
I do have a ProjectReference in my .csproj file, so what could be the problem?
Your test code references some files (containing the type MyService) that have not been copied to the image.
This happens because your COPY . . instruction is executed after the WORKDIR /tests/Tests instruction, so you are copying everything into the /tests/Tests folder, and not the referenced code which, according to your description, resides in the src folder.
Your problem should be solved by performing COPY . . in your second line, right after the FROM instruction. That way, all the required files will be correctly copied to the image. If you proceed this way, you can simplify your Dockerfile to something like this (not tested):
FROM microsoft/dotnet:2.2.103-sdk AS build
# Copy all files
COPY . .
# Go to the tests directory
WORKDIR /tests/Tests
# Run the tests (this will perform a restore + build before launching the tests)
ENTRYPOINT ["dotnet", "test"]
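Note that with the ENTRYPOINT, the tests run when the container is started rather than during the image build. Usage could look roughly like this (a sketch, assuming the Dockerfile stays in the tests folder and the build is run from the parent directory so that both src and tests end up in the build context; the image tag is arbitrary):
docker build -f tests/Dockerfile -t app-tests .
docker run --rm app-tests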

How to create a docker container using a project solution where lib projects are located one level higher than the building context

I have a VS2017 (v5.18.0) solution which contains a .NET Core 2.0 console application, "ReferenceGenerator", as the "startup" application. The solution also contains two .NET Core 2.0 lib projects, FwCore and LibReferenceGenerator, which are "homegrown" libs. I have added Docker support (Linux), so all files needed to create a Docker application are present. I can debug the application in "docker-compose" mode with Docker for Windows in Linux mode, and the application works fine. If I try to build a release version, I get an error that a COPY occurs from an illegal path. The Dockerfile looks like this:
FROM microsoft/dotnet:2.0-runtime AS base
WORKDIR /app
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /src
COPY ReferenceGenerator/ReferenceGenerator.csproj ReferenceGenerator/
COPY ../LibReferenceGenerator/LibReferenceGenerator.csproj ../LibReferenceGenerator/
COPY ../FwCore/FwCore/FwCore.csproj ../FwCore/FwCore/
RUN dotnet restore ReferenceGenerator/ReferenceGenerator.csproj
COPY . .
WORKDIR /src/ReferenceGenerator
RUN dotnet build ReferenceGenerator.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish ReferenceGenerator.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ReferenceGenerator.dll"]
The line with the following content:
COPY ../LibReferenceGenerator/LibReferenceGenerator.csproj ../LibReferenceGenerator/
is causing the error:
Step 6/17 : COPY ../LibReferenceGenerator/LibReferenceGenerator.csproj ../LibReferenceGenerator/
1>Service 'referencegenerator' failed to build: COPY failed: Forbidden path outside the build context: ../LibReferenceGenerator/LibReferenceGenerator.csproj ()
I have read that relative paths are not allowed, so be it. But the compiler output is already complete in the bin directory of the ReferenceGenerator project. I already tried removing the two COPY lines referencing the libs, but then the build complains about the missing lib project files at the dotnet build stage.
Having some home-built lib projects included in a solution seems to me a very common situation. I am a newbie with Docker containers and I have no idea how to fix this; anyone?
Additional info my file structure looks like this:
/Production/ReferenceGenerator/ReferenceGenerator.sln
/Production/ReferenceGenerator/ReferenceGenerator/ReferenceGenerator.csproj
/Production/LibReferenceGenerator/LibReferenceGenerator.csproj
/Production/FwCore/FwCore/FwCore.csproj
/Production/ReferenceGenerator/ReferenceGenerator/Dockerfile
Please anyone. The people that tried to help me have not succeeded in doing so. I'm completely stuck in development....
The answer is, there is no solution...
If you need libraries, you must include them as (private) NuGet packages.
It is not a neat solution, because while debugging you do not have the sources of your libraries available, but including libs from outside the build context is a no-go, as I learned researching the internet...
Also, in a micro-service environment, sharing code should be minimized to avoid teams breaking each other's code. Sorry for all developers who would have liked a solution for this problem: again, besides the workaround of using NuGet packages, there is none!
As the error says, you can't copy files that exist outside of the build context. When you run a command like docker image build ., that last argument (.) specifies the build context. That context is copied to the Docker engine for building. Files outside of that (e.g., ../LibReferenceGenerator/LibReferenceGenerator.csproj) simply don't exist.
So, for your example to work, you need to adjust your build context up one level in order to access LibReferenceGenerator and FwCore. Then, make the source of your COPY instructions relative to that one-level up context.
Note that the default location of the Dockerfile is a file named Dockerfile at your build context. You'll need to either move your Dockerfile, or specify a custom path using the -f, --file option.
docker image build documentation
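Concretely, with the file structure from the question, that could look roughly like this (a sketch, untested), run from the /Production directory:
docker image build -f ReferenceGenerator/ReferenceGenerator/Dockerfile -t referencegenerator .
The COPY source paths in the Dockerfile are then relative to /Production, e.g.:
COPY ReferenceGenerator/ReferenceGenerator/ReferenceGenerator.csproj ReferenceGenerator/ReferenceGenerator/
COPY LibReferenceGenerator/LibReferenceGenerator.csproj LibReferenceGenerator/
COPY FwCore/FwCore/FwCore.csproj FwCore/FwCore/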
You are missing one level in the copy.
It should be:
COPY ../../LibReferenceGenerator/LibReferenceGenerator.csproj ../LibReferenceGenerator/
