I have a .net solution which has 2 projects, and each project is a microservice (a server). I have a dockerfile which first installs all the dependencies which are used by both projects. Then I publish the solution:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.sln .
COPY Server/*.csproj ./Server/
COPY JobRunner/*.csproj ./JobRunner/
RUN dotnet restore ./MySolution.sln
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "Server.dll"]
When the solution is published, 2 executables are available: Server.dll and JobRunner.dll. However, I can start only one of them in the Dockerfile.
This seems wasteful, because restoring the solution is a common step for both the Server and JobRunner projects. In addition, the line RUN dotnet publish -c Release -o out produces an executable for both Server and JobRunner. I could write a separate Dockerfile for each project, but that seems redundant, as 99% of the build steps for each project are identical.
Is there a way to somehow start the 2 executables from a single Dockerfile without using a script (I don't want both services started in a single container)? The closest I've found is the --target option of docker build, but it probably won't work because I'd need multiple entrypoints.
In your Dockerfile, change ENTRYPOINT to CMD in the very last line.
Once you do this, you can override the command by just providing an alternate command after the image name in the docker run command:
docker run ... my-image \
dotnet JobRunner.dll
(In principle you can do this without changing the Dockerfile, but the docker run construction is awkward, and there's no particular benefit to using ENTRYPOINT here. If you're using Docker Compose, you can override either entrypoint: or command: on a container-by-container basis.)
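For example, a minimal Docker Compose sketch (the service and image names here are hypothetical) that runs both executables from the same image:

```yaml
version: "3.8"
services:
  server:
    image: my-image            # one image, built once
    command: ["dotnet", "Server.dll"]
  jobrunner:
    image: my-image            # the same image reused for the second service
    command: ["dotnet", "JobRunner.dll"]
```

Each service gets its own container, but the build (restore, publish) happens only once.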
I'm building a web-based application that has separate ReactJS projects for each role. (Think a public react site, and an admin site.) Each of these web modules is in a separate directory, but I would like to combine the results into a single nginx deployment. I currently have a Dockerfile under each of the directories that can build the image for that directory, but I don't want to end up with 2 images, as that is a waste of space and resources.
Directory Structure
root
admin-ui
public-ui
api
I would like the output of the two UI projects to be created in a single file structure like below so I don't need to run it under a subdomain or have an extra docker container running the admin portion.
root
index.html \\public-ui
admin
index.html \\admin-ui
I suppose this can be done with a Dockerfile at the root level that copies the two ui directories into the context and does all the building there, but I was hoping for a more elegant solution - perhaps using the output of one Docker build as the input to another, so I could build the directories independently and then create an image with the combined results.
There are a couple of ways to approach this. The big differences are how many commands you need to run, how easy it is to run the projects independently, and how much Docker is required to run the build process.
Run most of the build process on the host. In a comment you write out a sequence of commands to build the two projects without Docker. You can run that exactly as is, and use Docker only to build the final image.
# combined/Dockerfile
FROM nginx
COPY . /usr/share/nginx/html
(cd public-ui && yarn build)
(cd admin-ui && yarn build)
cp -a public-ui/build/* combined
cp -a admin-ui/build combined/admin
docker build ./combined
Run the entire build process from a Dockerfile in the root directory. You could do this with a multi-stage build; this does technically create multiple images, but you can clean those up pretty easily.
# Dockerfile
FROM node AS public
WORKDIR /app
COPY public-ui/package.json public-ui/yarn.lock ./
RUN yarn install
COPY public-ui .
RUN yarn build
CMD ["node", "build/main.js"]
FROM node AS admin
# similarly
FROM nginx
COPY --from=public /app/build /usr/share/nginx/html
COPY --from=admin /app/build /usr/share/nginx/html/admin
Go ahead and build three separate images. This would let you run the subproject images independently in a development setup; but then you can COPY --from=... the static files from those images into the final image for production use. The Docker tooling doesn't directly support this setup, though; you need to manually run the separate docker build commands to build the whole thing.
# combined/Dockerfile
FROM nginx
ARG tag=latest
COPY --from=me/public:${tag} /app/build /usr/share/nginx/html
COPY --from=me/admin:${tag} /app/build /usr/share/nginx/html/admin
tag=20210213
docker build -t me/public:$tag ./public
docker build -t me/admin:$tag ./admin
docker build -t me/app:$tag --build-arg tag=$tag ./combined
# docker rmi me/public:$tag me/admin:$tag
I am trying to contain my asp.net-core application in a docker container. As I use the Microsoft secret store for saving credentials, I need to run a dotnet user-secrets command within my container. The application needs to read these credentials when starting, so I have to run the command prior to starting my application. When trying to do that in my Dockerfile I get the error:
---> Running in 90f974a06d83
Could not find a MSBuild project file in '/app'. Specify which project to use with the --project option.
I tried building my application first and then building a container with the already-built dll, but that gave me the same error. I also tried connecting to the container with ENTRYPOINT ["/bin/bash"] and looking around inside it. It seems that the /app folder that gets created does not include the .csproj files. I'm not sure if that could be the cause.
My Dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Joinme.Facade/Joinme.Facade.csproj", "Joinme.Facade/"]
COPY ["Joinme.Domain/Joinme.Domain.csproj", "Joinme.Domain/"]
COPY ["Joinme.Business/Joinme.Business.csproj", "Joinme.Business/"]
COPY ["Joinme.Persistence/Joinme.Persistence.csproj", "Joinme.Persistence/"]
COPY ["Joinme.Integration/Joinme.Integration.csproj", "Joinme.Integration/"]
RUN dotnet restore "Joinme.Facade/Joinme.Facade.csproj"
COPY . .
WORKDIR "/src/Joinme.Facade"
RUN dotnet build "Joinme.Facade.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Joinme.Facade.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
RUN dotnet user-secrets set "jwt:secret" "some_password"
ENTRYPOINT ["dotnet", "Joinme.Facade.dll"]
My expected results are that the secret gets set, so I can start the container without it crashing.
Plain and simple: the operation is failing because at this stage there are no *.csproj files, which the user-secrets command requires. However, this is not what you should be doing anyway, for a few reasons:
User secrets are not for production. You can just as easily, or in fact more easily, set an environment variable here instead, which doesn't require dotnet or the SDK. Note that a colon is not valid in an environment variable name on Linux; ASP.NET Core maps a double underscore to the configuration separator:
ENV jwt__secret=some_password
You should not actually be storing secrets in your Dockerfile, as that goes into your source control, and is exposed as plain text. Use Docker secrets, or an external provider like Azure Key Vault.
You don't want to build your final image based on the SDK, either. That makes your container image huge, which means both longer transfer times to/from the container registry and higher storage/bandwidth costs. Your final image should be based on the runtime, or even something like alpine if you publish self-contained (i.e. keep it as small as possible).
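A sketch of what the final stage might look like following these suggestions; the aspnet:2.2 runtime tag and the placeholder value are assumptions, and the real secret should be injected at deploy time rather than baked into the image:

```dockerfile
# final stage based on the runtime image, not the SDK
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS final
WORKDIR /app
COPY --from=publish /app .
# "__" is mapped to ":" by ASP.NET Core configuration, so this surfaces
# as jwt:secret; override it with docker run -e jwt__secret=... instead
ENV jwt__secret=changeme
ENTRYPOINT ["dotnet", "Joinme.Facade.dll"]
```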
I'm building a Rust program in Docker (rust:1.33.0).
Every time code changes, it re-compiles (good), which also re-downloads all dependencies (bad).
I thought I could cache dependencies by adding VOLUME ["/usr/local/cargo"]. Edit: I've also tried moving this dir with CARGO_HOME, without luck.
I thought that making this a volume would persist the downloaded dependencies, which appear to be in this directory.
But it didn't work, they are still downloaded every time. Why?
Dockerfile
FROM rust:1.33.0
VOLUME ["/output", "/usr/local/cargo"]
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
COPY src/ ./src/
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
Built with just docker build ..
Cargo.toml
[package]
name = "mwe"
version = "0.1.0"
[dependencies]
log = { version = "0.4.6" }
Code: just hello world
Output of second run after changing main.rs:
...
Step 4/6 : COPY Cargo.toml .
---> Using cache
---> 97f180cb6ce2
Step 5/6 : COPY src/ ./src/
---> 835be1ea0541
Step 6/6 : RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
---> Running in 551299a42907
Updating crates.io index
Downloading crates ...
Downloaded log v0.4.6
Downloaded cfg-if v0.1.6
Compiling cfg-if v0.1.6
Compiling log v0.4.6
Compiling mwe v0.1.0 (/)
Finished dev [unoptimized + debuginfo] target(s) in 17.43s
Removing intermediate container 551299a42907
---> e4626da13204
Successfully built e4626da13204
A volume inside the Dockerfile is counter-productive here. That would mount an anonymous volume at each build step, and again when you run the container. The volume during each build step is discarded after that step completes, which means you would need to download the entire contents again for any other step needing those dependencies.
The standard model for this is to copy your dependency specification, run the dependency download, copy your code, and then compile or run your code, in 4 separate steps. That lets docker cache the layers in an efficient manner. I'm not familiar with rust or cargo specifically, but I believe that would look like:
FROM rust:1.33.0
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
RUN cargo fetch # this should download dependencies
COPY src/ ./src/
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
Another option is to turn on some experimental features with BuildKit (available in 18.09, released 2018-11-08) so that docker saves these dependencies in what is similar to a named volume for your build. The directory can be reused across builds, but never gets added to the image itself, making it useful for things like a download cache.
# syntax=docker/dockerfile:experimental
FROM rust:1.33.0
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
COPY src/ ./src/
RUN --mount=type=cache,target=/root/.cargo \
    cargo build -Z unstable-options --out-dir /output
Note that the above assumes cargo is caching files in /root/.cargo (the rust image actually sets CARGO_HOME to /usr/local/cargo, so you may need to adjust the target accordingly). The VOLUME line from the original Dockerfile is dropped here, since, as noted above, it would only discard the cache. The shell form of RUN is used because mixing the --mount flag with the JSON exec form may not work. You can read more about the BuildKit experimental features here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
Turning on BuildKit from 18.09 and newer versions is as easy as export DOCKER_BUILDKIT=1 and then running your build from that shell.
I would say the nicer solution would be to resort to a Docker multi-stage build, as pointed out here and there.
This way you can create a first image that builds both your application and your dependencies, then use, in the second image, only the dependency folder from the first one.
This is inspired by both your comment on @Jack Gore's answer and the two issue comments linked above.
FROM rust:1.33.0 as dependencies
WORKDIR /usr/src/app
COPY Cargo.toml .
RUN rustup default nightly-2019-01-29 && \
mkdir -p src && \
echo "fn main() {}" > src/main.rs && \
cargo build -Z unstable-options --out-dir /output
FROM rust:1.33.0 as application
# Those are the lines instructing this image to reuse the files
# from the previous image that was aliased as "dependencies"
COPY --from=dependencies /usr/src/app/Cargo.toml .
COPY --from=dependencies /usr/local/cargo /usr/local/cargo
COPY src/ src/
VOLUME /output
RUN rustup default nightly-2019-01-29 && \
cargo build -Z unstable-options --out-dir /output
PS: having only one RUN instruction will reduce the number of layers you generate; more info here
Here's an overview of the possibilities. (Scroll down for my original answer.)
1. Add Cargo files, create a fake main.rs/lib.rs, then compile the dependencies. Afterwards remove the fake source and add the real ones. [Caches dependencies, but needs several fake files with workspaces.]
2. Add Cargo files, create a fake main.rs/lib.rs, then compile the dependencies. Afterwards create a new layer with the dependencies and continue from there. [Similar to above.]
3. Externally mount a volume for the cache dir. [Caches everything, relies on the caller to pass --mount.]
4. Use RUN --mount=type=cache,target=/the/path cargo build in the Dockerfile on new Docker versions. [Caches everything, seems like a good way, but currently too new to work for me. Executable not part of image. Edit: See here for a solution.]
5. Run sccache in another container or on the host, then connect to it during the build process. See this comment in Cargo issue 2644.
6. Use cargo-build-deps. [Might work for some, but does not support Cargo workspaces (as of 2019).]
7. Wait for Cargo issue 2644. [There's willingness to add this to Cargo, but no concrete solution yet.]
Using VOLUME ["/the/path"] in the Dockerfile does NOT work; it is per-layer (per command) only.
Note: one can set ENV CARGO_HOME and ENV CARGO_TARGET_DIR in the Dockerfile to control where the download cache and compiled output go.
Also note: cargo fetch can at least cache downloading of dependencies, although not compiling.
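A minimal sketch of the cargo fetch approach, assuming a committed Cargo.lock and a stubbed-out main.rs (cargo needs a build target to exist before it will parse the package):

```dockerfile
FROM rust:1.33.0
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
# cargo refuses to run without a build target, so stub one out
RUN mkdir src && echo "fn main() {}" > src/main.rs
# downloads dependency sources into CARGO_HOME, but compiles nothing
RUN cargo fetch
COPY src/ ./src/
RUN cargo build
```

Dependencies are re-downloaded only when Cargo.toml or Cargo.lock changes, though they are still re-compiled on every source change.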
Cargo workspaces suffer from having to manually add each Cargo file, and for some solutions, having to generate a dozen fake main.rs/lib.rs. For projects with a single Cargo file, the solutions work better.
I've got caching to work for my particular case by adding
ENV CARGO_HOME /code/dockerout/cargo
ENV CARGO_TARGET_DIR /code/dockerout/target
Where /code is the directory where I mount my code.
This is externally mounted, not from the Dockerfile.
EDIT1: I was confused about why this worked, but @b.enoit.be and @BMitch cleared up that it's because volumes declared inside the Dockerfile only live for one layer (one command).
You do not need to use an explicit Docker volume to cache your dependencies. Docker will automatically cache the different "layers" of your image. Basically, each command in the Dockerfile corresponds to a layer of the image. The problem you are facing is based on how Docker image layer caching works.
The rules that Docker follows for image layer caching are listed in the official documentation:
Starting with a parent image that is already in the cache, the next
instruction is compared against all child images derived from that
base image to see if one of them was built using the exact same
instruction. If not, the cache is invalidated.
In most cases, simply comparing the instruction in the Dockerfile with
one of the child images is sufficient. However, certain instructions
require more examination and explanation.
For the ADD and COPY instructions, the contents of the file(s) in the
image are examined and a checksum is calculated for each file. The
last-modified and last-accessed times of the file(s) are not
considered in these checksums. During the cache lookup, the checksum
is compared against the checksum in the existing images. If anything
has changed in the file(s), such as the contents and metadata, then
the cache is invalidated.
Aside from the ADD and COPY commands, cache checking does not look at
the files in the container to determine a cache match. For example,
when processing a RUN apt-get -y update command the files updated in
the container are not examined to determine if a cache hit exists. In
that case just the command string itself is used to find a match.
Once the cache is invalidated, all subsequent Dockerfile commands
generate new images and the cache is not used.
So the problem is with the positioning of the command COPY src/ ./src/ in the Dockerfile. Whenever there is a change in one of your source files, the cache will be invalidated and all subsequent commands will not use the cache. Therefore your cargo build command will not use the Docker cache.
To solve your problem, reorder the commands in your Dockerfile so that the source is copied in after the dependency build. Note that cargo needs a build target to exist before it will compile anything, so stub out a dummy main.rs for the dependency-only build, and run the build again after copying the real sources:
FROM rust:1.33.0
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
RUN mkdir -p src && echo "fn main() {}" > src/main.rs
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
COPY src/ ./src/
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
Doing it this way, your dependencies will only be recompiled when there is a change in your Cargo.toml.
Hope this helps.
With the integration of BuildKit into docker, if you are able to avail yourself of the superior BuildKit backend, it's now possible to mount a cache volume during a RUN command, and IMHO, this has become the best way to cache cargo builds. The cache volume retains the data that was written to it on previous runs.
To use BuildKit, you'll mount two cache volumes, one for the cargo dir, which caches external crate sources, and one for the target dir, which caches all of your built artifacts, including external crates and the project bins and libs.
If your base image is rust, $CARGO_HOME is set to /usr/local/cargo, so your command looks like this:
RUN --mount=type=cache,target=/usr/local/cargo,from=rust,source=/usr/local/cargo \
--mount=type=cache,target=target \
cargo build
If your base image is something else, you will need to change the /usr/local/cargo bit to whatever the value of $CARGO_HOME is, or else add an ENV CARGO_HOME=/usr/local/cargo line. As a side note, the clever thing would be to set literally target=$CARGO_HOME and let Docker do the expansion, but it doesn't seem to work right: expansion happens, but BuildKit still doesn't persist the same volume across runs when you do this.
Other options for achieving Cargo build caching (including sccache and the cargo wharf project) are described in this github issue.
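One gotcha: the cache mounts are not part of the image, so an artifact you need at runtime must be copied out of the target directory within the same RUN step. A sketch (the binary name myapp is a placeholder):

```dockerfile
RUN --mount=type=cache,target=/usr/local/cargo,from=rust,source=/usr/local/cargo \
    --mount=type=cache,target=target \
    cargo build --release && \
    # target/ only exists while this RUN step executes,
    # so copy the binary into the image filesystem here
    cp target/release/myapp /usr/local/bin/myapp
```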
I figured out how to get this also working with cargo workspaces, using romac's fork of cargo-build-deps.
This example has my_app, and two workspaces: utils and db.
FROM rust:nightly as rust
# Cache deps
WORKDIR /app
RUN sudo chown -R rust:rust .
RUN USER=root cargo new myapp
# Install cache-deps
RUN cargo install --git https://github.com/romac/cargo-build-deps.git
WORKDIR /app/myapp
RUN mkdir -p db/src/ utils/src/
# Copy the Cargo tomls
COPY myapp/Cargo.toml myapp/Cargo.lock ./
COPY myapp/db/Cargo.toml ./db/
COPY myapp/utils/Cargo.toml ./utils/
# Cache the deps
RUN cargo build-deps
# Copy the src folders
COPY myapp/src ./src/
COPY myapp/db/src ./db/src/
COPY myapp/utils/src/ ./utils/src/
# Build for debug
RUN cargo build
I'm sure you can adjust this code for use within a Dockerfile, but instead I wrote a dockerized drop-in replacement for cargo that you can save into your project and run as ./cargo build --release. This just works for (most) development (it uses rust:latest), but isn't set up for CI or anything.
Usage: ./cargo build, ./cargo build --release, etc
It will use the current working directory and save the cache to ./.cargo. (You can ignore the entire directory in your version control and it doesn't need to exist beforehand.)
Create a file named cargo in your project's folder, run chmod +x ./cargo on it, and place the following code in it:
#!/bin/bash
# This is a drop-in replacement for `cargo`
# that runs in a Docker container as the current user
# on the latest Rust image
# and saves all generated files to `./cargo/` and `./target/`.
#
# Be sure to make this file executable: `chmod +x ./cargo`
#
# # Examples
#
# - Running app: `./cargo run`
# - Building app: `./cargo build`
# - Building release: `./cargo build --release`
#
# # Installing globally
#
# To run `cargo` from anywhere,
# save this file to `/usr/local/bin`.
# You'll then be able to use `cargo`
# as if you had installed Rust globally.
sudo docker run \
--rm \
--user "$(id -u)":"$(id -g)" \
--mount type=bind,src="$PWD",dst=/usr/src/app \
--workdir /usr/src/app \
--env CARGO_HOME=/usr/src/app/.cargo \
rust:latest \
cargo "$@"
In our project, we have an ASP.NET Core project with an Angular2 client. At Docker build time, we launch:
FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN apt-get -qq update ; apt-get -qqy --no-install-recommends install \
git \
unzip
RUN curl -sL https://deb.nodesource.com/setup_7.x | bash -
RUN apt-get install -y nodejs build-essential
RUN ["dotnet", "restore"]
RUN npm install
RUN npm run build:prod
RUN ["dotnet", "build"]
EXPOSE 5000/tcp
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "run"]
Since restoring the npm packages is essential to be able to build the Angular2 client using npm run build, our Docker image is HUGE, almost 2 GB, while the built Angular2 client itself is only 1.7 MB.
Our app does nothing fancy: simple web API writing to MongoDB and displaying static files.
In order to improve the size of our image, is there any way to exclude paths which are useless at run time? For example node_modules or any .NET Core sources?
dotnet restore may download a lot, especially if you have multiple target platforms (Linux, Mac, Windows).
Depending on how your application is configured (i.e. as a portable .NET Core app or as self-contained), it can also pull the whole .NET Core framework for one or multiple platforms and/or architectures (x64, x86). This is mainly explained here.
When "Microsoft.NETCore.App": "1.0.0" is defined without the "type": "platform" attribute, the complete framework will be fetched via NuGet. Then, if you have multiple runtimes defined
"runtimes": {
"win10-x64": {},
"win10-x86": {},
"osx.10.10-x86": {},
"osx.10.10-x64": {}
}
it will get native libraries for all these platforms too, and not only in your project directory, but also in ~/.nuget, and in the npm-cache in addition to node_modules in your project, plus eventual copies in your wwwroot.
However, this is not how docker works: everything you execute inside the Dockerfile is written to the virtual filesystem of the container. That's why you see these issues.
You should follow my previous comment on your other question:
Run dotnet restore, dotnet build and dotnet publish outside the Dockerfile, for example in a bash or powershell/batch script.
Once finished, copy the content of the publish folder into your container:
dotnet publish
docker build bin\Debug\netcoreapp1.0\publish ... (your other parameters here)
This will generate the publish files on your filesystem, containing only the required dll files, Views and wwwroot content without all the other build files, artifacts, caches or sources, and will run the docker build from the bin\Debug\netcoreapp1.0\publish folder.
You also need to change your Dockerfile to copy the published files instead of running the build commands during container building.
Scott uses this Dockerfile for his example in his blog:
FROM ... # Your base image here
ENTRYPOINT ["dotnet", "YourWebAppName.dll"] # Application to run
ARG source=. # An argument from outside, here store the path from real filesystem
WORKDIR /app
ENV ASPNETCORE_URLS http://+:82 # Define the port it should listen
EXPOSE 82
COPY $source . # copy the files from defined folder, here bin\Debug\netcoreapp1.0\publish to inside the docker container
This is the recommended approach for building docker containers. When you run the build commands inside, all the build and publish artifacts remain in the virtual file system and the docker image grows unexpectedly.