I'm building a Rust program in Docker (rust:1.33.0).
Every time code changes, it re-compiles (good), which also re-downloads all dependencies (bad).
I thought I could cache dependencies by adding VOLUME ["/usr/local/cargo"]. Edit: I've also tried moving this dir with CARGO_HOME, without luck.
I thought that making this a volume would persist the downloaded dependencies, which appear to be in this directory.
But it didn't work, they are still downloaded every time. Why?
Dockerfile
FROM rust:1.33.0
VOLUME ["/output", "/usr/local/cargo"]
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
COPY src/ ./src/
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
Built with just docker build .
Cargo.toml
[package]
name = "mwe"
version = "0.1.0"
[dependencies]
log = { version = "0.4.6" }
Code: just hello world
Output of second run after changing main.rs:
...
Step 4/6 : COPY Cargo.toml .
---> Using cache
---> 97f180cb6ce2
Step 5/6 : COPY src/ ./src/
---> 835be1ea0541
Step 6/6 : RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
---> Running in 551299a42907
Updating crates.io index
Downloading crates ...
Downloaded log v0.4.6
Downloaded cfg-if v0.1.6
Compiling cfg-if v0.1.6
Compiling log v0.4.6
Compiling mwe v0.1.0 (/)
Finished dev [unoptimized + debuginfo] target(s) in 17.43s
Removing intermediate container 551299a42907
---> e4626da13204
Successfully built e4626da13204
A volume inside the Dockerfile is counter-productive here. That would mount an anonymous volume at each build step, and again when you run the container. The volume during each build step is discarded after that step completes, which means you would need to download the entire contents again for any other step needing those dependencies.
The standard model for this is to copy your dependency specification, run the dependency download, copy your code, and then compile or run your code, in 4 separate steps. That lets docker cache the layers in an efficient manner. I'm not familiar with rust or cargo specifically, but I believe that would look like:
FROM rust:1.33.0
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
RUN cargo fetch # this should download dependencies
COPY src/ ./src/
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
Another option is to turn on some experimental features with BuildKit (available in 18.09, released 2018-11-08) so that docker saves these dependencies in what is similar to a named volume for your build. The directory can be reused across builds, but never gets added to the image itself, making it useful for things like a download cache.
# syntax=docker/dockerfile:experimental
FROM rust:1.33.0
VOLUME ["/output", "/usr/local/cargo"]
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
COPY src/ ./src/
RUN --mount=type=cache,target=/root/.cargo \
    ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
Note that the above assumes cargo is caching files in /root/.cargo. You'd need to verify this and adjust as appropriate. I also haven't mixed the mount syntax with a json exec syntax to know if that part works. You can read more about the BuildKit experimental features here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
Turning on BuildKit from 18.09 and newer versions is as easy as export DOCKER_BUILDKIT=1 and then running your build from that shell.
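For example (the mwe tag simply mirrors the question's package name):
export DOCKER_BUILDKIT=1
docker build -t mwe .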
I would say the nicer solution is to resort to a Docker multi-stage build, as pointed out here and there.
This way you can create a first image that builds both your application and your dependencies, then use, in the second image, only the dependency folder from the first one.
This is inspired both by your comment on @Jack Gore's answer and the two issue comments linked here above.
FROM rust:1.33.0 as dependencies
WORKDIR /usr/src/app
COPY Cargo.toml .
RUN rustup default nightly-2019-01-29 && \
    mkdir -p src && \
    echo "fn main() {}" > src/main.rs && \
    cargo build -Z unstable-options --out-dir /output
FROM rust:1.33.0 as application
# Those are the lines instructing this image to reuse the files
# from the previous image that was aliased as "dependencies"
COPY --from=dependencies /usr/src/app/Cargo.toml .
COPY --from=dependencies /usr/local/cargo /usr/local/cargo
COPY src/ src/
VOLUME /output
RUN rustup default nightly-2019-01-29 && \
    cargo build -Z unstable-options --out-dir /output
PS: having only one RUN will reduce the number of layers you generate; more info here
Here's an overview of the possibilities. (Scroll down for my original answer.)
1. Add Cargo files, create a fake main.rs/lib.rs, then compile the dependencies. Afterwards remove the fake source and add the real one. [Caches dependencies, but needs several fake files with workspaces.]
2. Add Cargo files, create a fake main.rs/lib.rs, then compile the dependencies. Afterwards create a new layer with the dependencies and continue from there. [Similar to the above.]
3. Externally mount a volume for the cache dir. [Caches everything, but relies on the caller to pass --mount.]
4. Use RUN --mount=type=cache,target=/the/path cargo build in the Dockerfile in new Docker versions. [Caches everything; seems like a good way, but currently too new to work for me. The executable is not part of the image. Edit: see here for a solution.]
5. Run sccache in another container or on the host, then connect to that during the build process. See this comment in Cargo issue 2644.
6. Use cargo-build-deps. [Might work for some, but does not support Cargo workspaces (as of 2019).]
7. Wait for Cargo issue 2644. [There's willingness to add this to Cargo, but no concrete solution yet.]
Using VOLUME ["/the/path"] in the Dockerfile does NOT work; volumes declared there live for one layer (one command) only.
Note: one can set CARGO_HOME and CARGO_TARGET_DIR via ENV in the Dockerfile to control where the download cache and the compiled output go.
Also note: cargo fetch can at least cache the downloading of dependencies, although not the compiling.
Cargo workspaces suffer from having to manually add each Cargo file, and for some solutions, having to generate a dozen fake main.rs/lib.rs. For projects with a single Cargo file, the solutions work better.
I've got caching to work for my particular case by adding
ENV CARGO_HOME /code/dockerout/cargo
ENV CARGO_TARGET_DIR /code/dockerout/target
Where /code is the directory where I mount my code.
This is externally mounted, not from the Dockerfile.
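For illustration, the host-side commands might look like this (a hedged sketch; the mwe image name is a placeholder, and the dockerout paths follow the ENV lines above):
docker build -t mwe .
# Crates are downloaded to ./dockerout/cargo and build output goes to
# ./dockerout/target on the host, so both survive across runs.
docker run --rm -v "$PWD":/code -w /code mwe cargo build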
EDIT1: I was confused why this worked, but @b.enoit.be and @BMitch cleared up that it's because volumes declared inside the Dockerfile only live for one layer (one command).
You do not need to use an explicit Docker volume to cache your dependencies. Docker will automatically cache the different "layers" of your image. Basically, each command in the Dockerfile corresponds to a layer of the image. The problem you are facing is based on how Docker image layer caching works.
The rules that Docker follows for image layer caching are listed in the official documentation:
Starting with a parent image that is already in the cache, the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
In most cases, simply comparing the instruction in the Dockerfile with one of the child images is sufficient. However, certain instructions require more examination and explanation.
For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file. The last-modified and last-accessed times of the file(s) are not considered in these checksums. During the cache lookup, the checksum is compared against the checksum in the existing images. If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
Aside from the ADD and COPY commands, cache checking does not look at the files in the container to determine a cache match. For example, when processing a RUN apt-get -y update command the files updated in the container are not examined to determine if a cache hit exists. In that case just the command string itself is used to find a match.
Once the cache is invalidated, all subsequent Dockerfile commands generate new images and the cache is not used.
So the problem is with the positioning of the command COPY src/ ./src/ in the Dockerfile. Whenever there is a change in one of your source files, the cache will be invalidated and all subsequent commands will not use the cache. Therefore your cargo build command will not use the Docker cache.
To solve your problem it is as simple as reordering the commands in your Dockerfile, to this:
FROM rust:1.33.0
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
COPY src/ ./src/
Doing it this way, your dependencies will only be reinstalled when there is a change in your Cargo.toml.
Hope this helps.
With the integration of BuildKit into docker, if you are able to avail yourself of the superior BuildKit backend, it's now possible to mount a cache volume during a RUN command, and IMHO, this has become the best way to cache cargo builds. The cache volume retains the data that was written to it on previous runs.
To use BuildKit, you'll mount two cache volumes, one for the cargo dir, which caches external crate sources, and one for the target dir, which caches all of your built artifacts, including external crates and the project bins and libs.
If your base image is rust, $CARGO_HOME is set to /usr/local/cargo, so your command looks like this:
RUN --mount=type=cache,target=/usr/local/cargo,from=rust,source=/usr/local/cargo \
    --mount=type=cache,target=target \
    cargo build
If your base image is something else, you will need to change the /usr/local/cargo bit to whatever the value of $CARGO_HOME is, or else add an ENV CARGO_HOME=/usr/local/cargo line. As a side note, the clever thing would be to literally set target=$CARGO_HOME and let Docker do the expansion, but it doesn't seem to work right: expansion happens, but BuildKit still doesn't persist the same volume across runs when you do this.
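One gotcha with this approach (my addition, echoing the "executable not part of the image" note earlier on this page): files written to a cache mount, including your compiled binary when target is cached, do not end up in the image. A hedged sketch of copying the artifact out within the same RUN, with app as a placeholder binary name:
RUN --mount=type=cache,target=/usr/local/cargo,from=rust,source=/usr/local/cargo \
    --mount=type=cache,target=target \
    cargo build --release && \
    cp target/release/app /usr/local/bin/app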
Other options for achieving Cargo build caching (including sccache and the cargo wharf project) are described in this github issue.
I figured out how to get this also working with cargo workspaces, using romac's fork of cargo-build-deps.
This example has myapp, and two workspace members: utils and db.
FROM rust:nightly as rust
# Cache deps
WORKDIR /app
RUN sudo chown -R rust:rust .
RUN USER=root cargo new myapp
# Install cache-deps
RUN cargo install --git https://github.com/romac/cargo-build-deps.git
WORKDIR /app/myapp
RUN mkdir -p db/src/ utils/src/
# Copy the Cargo tomls
COPY myapp/Cargo.toml myapp/Cargo.lock ./
COPY myapp/db/Cargo.toml ./db/
COPY myapp/utils/Cargo.toml ./utils/
# Cache the deps
RUN cargo build-deps
# Copy the src folders
COPY myapp/src ./src/
COPY myapp/db/src ./db/src/
COPY myapp/utils/src/ ./utils/src/
# Build for debug
RUN cargo build
I'm sure you can adjust this code for use with a Dockerfile, but I wrote a dockerized drop-in replacement for cargo that you can save into your project and run as ./cargo build --release. This just works for (most) development (uses rust:latest), but isn't set up for CI or anything.
Usage: ./cargo build, ./cargo build --release, etc
It will use the current working directory and save the cache to ./.cargo. (You can ignore the entire directory in your version control and it doesn't need to exist beforehand.)
Create a file named cargo in your project's folder, run chmod +x ./cargo on it, and place the following code in it:
#!/bin/bash
# This is a drop-in replacement for `cargo`
# that runs in a Docker container as the current user
# on the latest Rust image
# and saves all generated files to `./cargo/` and `./target/`.
#
# Be sure to make this file executable: `chmod +x ./cargo`
#
# # Examples
#
# - Running app: `./cargo run`
# - Building app: `./cargo build`
# - Building release: `./cargo build --release`
#
# # Installing globally
#
# To run `cargo` from anywhere,
# save this file to `/usr/local/bin`.
# You'll then be able to use `cargo`
# as if you had installed Rust globally.
sudo docker run \
    --rm \
    --user "$(id -u)":"$(id -g)" \
    --mount type=bind,src="$PWD",dst=/usr/src/app \
    --workdir /usr/src/app \
    --env CARGO_HOME=/usr/src/app/.cargo \
    rust:latest \
    cargo "$@"
Related
I use docker build - < Dockerfile -t deepface to build a Docker image.
When I run the command, it shows this error:
=> ERROR [3/4] COPY ./requirements.txt /requirements.txt    0.0s
------
 > [3/4] COPY ./requirements.txt /requirements.txt:
------
failed to compute cache key: "/requirements.txt" not found: not found
My directory structure is
Deepface
|-> Dockerfile
|-> requirements.txt
My requirements.txt is
numpy==1.19.5
pandas==1.2.4
gdown==3.13.0
tqdm==4.60.0
Pillow==8.2.0
opencv-python==4.5.2.52
tensorflow==2.5.0
keras==2.4.3
Flask==2.0.1
matplotlib==3.4.2
deepface==0.0.53
and my Dockerfile is
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /requirements.txt
RUN pip install -r ./requirements.txt
How can I solve this problem?
This could be related to this BuildKit docker issue.
In order to see if this is indeed the problem, try building with BuildKit disabled:
$ DOCKER_BUILDKIT=0 docker build ...
If this helps, then you can try one of these for a permanent fix:
Update docker (or follow the above linked GitHub issue to see if it is fixed)
Disable BuildKit globally by adding
{ "features": { "buildkit": false } }
to /etc/docker/daemon.json
(or c:\Users\CURRENT_USER\.docker\daemon.json on Windows).
As a side note, I would recommend avoiding copying requirements.txt to the root folder of the container. Use a subdirectory, such as /app and use WORKDIR in your Dockerfile to make it the base directory.
As a secondary side note: instead of running docker build - < Dockerfile ... you can just run docker build ...
The particular docker build syntax you use,
docker build - <Dockerfile
has only the Dockerfile; nothing else is in the Docker build context, and so you can't COPY anything into the image. Even though this syntax is in the docker build documentation, I wouldn't use it.
A more typical invocation is to just specify the current directory as the build context:
docker build -t deepface .
(Don't forget to also COPY your application code into the image, and set the standard CMD the container should run.)
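Putting that together, a hedged sketch of a complete Dockerfile (only the dependency install comes from the question; the COPY . . and the app.py entry point are placeholders):
FROM python:3.9
WORKDIR /code
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code after the dependency install, so the pip
# layer stays cached when only code changes.
COPY . .
# Placeholder entry point; replace app.py with your actual startup module.
CMD ["python", "app.py"]
Then build it from the Deepface directory with docker build -t deepface .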
1. WHY?
Whenever you pipe the Dockerfile through STDIN, you can't use ADD and COPY instructions with local paths.
There is a catch: Docker can use paths only within the scope of the so-called context.
The context is a directory you specify for docker build, e.g. docker build my-docker-context-dir. But as long as you use STDIN instead of a directory, there is no context directory.
In this case docker is absolutely blind to everything but the contents of the Dockerfile. Read the official Build with - docs.
It's probably also worth reading the whole docker build section. Frankly, at first I skipped it too, and hit some pitfalls just like you.
2. What to do?
Whenever you want to put some files into a Docker image, you have to create a context directory.
So your directory structure should be like this:
Deepface
|-> Dockerfile
|-> context-dir
    |-> requirements.txt
Now you can call docker build as follows (note, there is a -t option as proposed by David Maze):
cd Deepface
docker build -t deepface -f Dockerfile context-dir
The file may, for some reason, be listed in your .dockerignore file; please check that it is not there.
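For example, either of these (hypothetical) entries in a .dockerignore would hide the file from the build context:
requirements.txt
*.txt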
I am aware that you cannot step outside of Docker's build context and I am looking for alternatives on how to share a file between two folders (outside the build context).
My folder structure is
project
- server
Dockerfile
- client
Dockerfile
My client folder needs to access a file inside the server folder for some code generation, where the client is built according to the contract of the server.
The client Dockerfile looks like the following:
FROM node:10-alpine AS build
WORKDIR /app
COPY . /app
RUN yarn install
RUN yarn build
FROM node:10-alpine
WORKDIR /app
RUN yarn install --production
COPY --from=build /app ./
EXPOSE 5000
CMD [ "yarn", "serve" ]
I run docker build -t my-name . inside the client directory.
During the RUN yarn build step, a script is looking for a file in ../server/src/schema/schema.graphql which can not be found, as the file is outside the client directory and therefore outside Docker's build context.
How can I get around this? Or are there other suggestions for solving this issue?
The easiest way to do this is to use the root of your source tree as the Docker context directory, point it at one or the other of the Dockerfiles, and be explicit about whether you're using the client or server trees.
cd $HOME/project
docker build \
    -t project-client:$(git rev-parse --short HEAD) \
    -f client/Dockerfile \
    .
FROM node:10-alpine AS build
WORKDIR /app
COPY client/ ./
# et cetera
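Concretely, the rest might continue like this (a hedged sketch; the schema path comes from the question, and the absolute destination makes ../server resolve correctly from /app):
# Place the schema where the codegen script expects it:
# ../server/src/schema relative to WORKDIR /app is /server/src/schema.
COPY server/src/schema/schema.graphql /server/src/schema/schema.graphql
RUN yarn install
RUN yarn build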
In the specific case of GraphQL, depending on your application and library stack, it may be possible to avoid needing the schema at all and just make unchecked client calls; or to make an introspection query at startup time to dynamically fetch the schema; or to maintain two separate copies of the schema file. Some projects I work on use GraphQL interfaces but the servers and clients are in actually separate repositories, and there's no choice but to store separate copies of the schema; if you're careful about changes, this hasn't been a problem in practice.
I have followed these tutorials to build a Docker image for my Spring Boot application, which uses Maven as its build tool.
I am using a boot2docker VM on top of a Windows 10 machine, cloning my project to the VM from my Bitbucket repository.
https://spring.io/guides/gs/spring-boot-docker/
https://www.callicoder.com/spring-boot-docker-example/
I understand the instructions given, but I failed to build a proper Docker image. Here are the things I tried:
1. Use the Spotify Maven plugin for Dockerfile. Try to run ./mvnw to build the JAR as well as the Docker image. But I don't have Java installed in boot2docker, so the Maven wrapper ./mvnw cannot be run.
2. I tried to build the JAR through the Dockerfile, which is based on the openjdk:8-jdk-alpine image. I added a RUN ./mvnw package instruction in the Dockerfile, then ran docker build -t <my_project> . to build the Docker image.
It fails at the RUN instruction, claiming /bin/sh: mvnw: not found
The command '/bin/sh -c mvnw package' returned a non-zero code: 127
My Dockerfile, in the same directory as mvnw:
FROM openjdk:8-jdk-alpine
MAINTAINER myname
VOLUME /tmp
RUN ./mvnw package
ARG JAR_FILE=target/myproject-0.0.1-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
For 1, I need to have Java installed in the OS where the Docker engine resides. But I think that's not good practice, because it lowers portability.
For 2, first, I don't know how to run ./mvnw successfully in the Dockerfile. Second, I'm not sure whether it is good practice to build the Spring Boot JAR through the Dockerfile, because I don't see any "Docker for Spring Boot" tutorial telling you to do so.
So, what is the best practice to solve my situation? I'm new to Docker. Comments and answers are appreciated!
You can install maven and run the compile directly in the build. Typically this would be a multi-stage build to avoid including the entire jdk in your pushed image:
FROM openjdk:8-jdk-alpine as build
RUN apk add --no-cache maven
WORKDIR /java
COPY . /java
RUN mvn package -Dmaven.test.skip=true
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/java/target/myproject-0.0.1-SNAPSHOT.jar"]
The above is a stripped down version of a rework from the same example that I've done in the past. You may need to adjust filenames in your entrypoint, but the key steps are to install maven and run it inside your build.
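If you also want the pushed image to drop the JDK and Maven, a second stage can copy just the JAR out of the build stage (a hedged sketch extending the example above; the JAR name mirrors the question):
FROM openjdk:8-jdk-alpine as build
RUN apk add --no-cache maven
WORKDIR /java
COPY . /java
RUN mvn package -Dmaven.test.skip=true
# Final stage: a JRE-only image containing just the built artifact.
FROM openjdk:8-jre-alpine
COPY --from=build /java/target/myproject-0.0.1-SNAPSHOT.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]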
From your second example I think you are misunderstanding how Docker builds images. When Docker executes RUN ./mvnw package, the file mvnw must exist in the file system of the image being built, which means you should have an instruction like COPY mvnw . in a previous step - that will copy the file from your local filesystem into the image.
You will likely need to copy the entire project structure inside the image before calling ./mvnw, as the response from @BMitch suggests.
Also, as @BMitch said, to generate a small-sized image it's normally recommended to use a multi-stage build, where the first stage installs every dependency but the final image has only your JAR.
You could try something like below:
# First stage: build fat JAR
# Select base image.
# (The "AS builder" gives a name to the stage that we will need later)
# (I think it's better to use a slim image with Maven already installed instead
# of ./mvnw. Otherwise you might need to give execution rights to your file
# with instructions like "RUN chmod +x mvnw".)
FROM maven:3.6.3-openjdk-8-slim AS builder
# Set your preferred working directory
# (This tells the image what the "current" directory is for the rest of the build)
WORKDIR /opt/app
# Copy everything from you current local directory into the working directory of the image
COPY . .
# Compile, test and package
# (-e gives more information in case of errors)
# (I prefer to also run unit tests at this point. This may not be possible if your tests
# depend on other technologies that you don't wish to install at this point.)
RUN mvn -e clean verify
###
# Second stage: final image containing only the application JAR
# The base image for the final result can be as small as Alpine with a JRE
FROM openjdk:8-jre-alpine
# Once again, the current directory as seen by your image
WORKDIR /opt/app
# Get artifacts from the previous stage and copy them to the new image.
# (If you are confident the only JAR in "target/" is your package, you could NOT
# use the full name of the JAR and instead something like "*.jar", to avoid updating
# the Dockerfile when the version of your project changes.)
COPY --from=builder /opt/app/target/*.jar ./
# Expose whichever port you use in the Spring application
EXPOSE 8080
# Define the application to run when the Docker container is created.
# Either ENTRYPOINT or CMD.
# (Optionally, you could define a file "entrypoint.sh" that can have a more complex
# startup logic.)
# (Setting "java.security.egd" when running Spring applications is good for security
# reasons.)
ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -jar /opt/app/*.jar
I have a Go project with a large vendor/ directory which almost never changes.
I am trying to use the new go 1.10 build cache feature to speed up my builds in Docker engine locally.
Avoiding recompilation of my vendor/ directory would be enough optimization. So I'm trying to do Go equivalent of this common Dockerfile pattern for Python:
FROM python
# Copy your dependency list
COPY requirements.txt .
# Install dependencies
RUN pip install -r requirements.txt
# Your actual code (everything above is cached)
COPY ./src ...
Similarly I tried:
FROM golang:1.10-alpine
COPY ./vendor ./src/myproject/vendor
RUN go build -v myproject/vendor/... # <-- pre-build & cache "vendor/"
COPY . ./src/myproject
However this is giving "cannot find package" error (likely because you cannot build stuff in vendor/ directly normally either).
Has anyone been able to figure this out?
Here's something that works for me:
FROM golang:1.10-alpine
WORKDIR /usr/local/go/src/github.com/myorg/myproject/
COPY vendor vendor
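# Install each top-level vendored package; the `|| true` keeps the build
# going when a package doesn't compile standalone.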
RUN find vendor -maxdepth 2 -mindepth 2 -type d -exec sh -c 'go install -i github.com/myorg/myproject/{}/... || true' \;
COPY main.go .
RUN go build main.go
It makes sure the vendored libraries are installed first. As long as you don't change a library, you're good.
Just use go install -i ./vendor/....
Consider the following Dockerfile:
FROM golang:1.10-alpine
ARG APP
ENV PTH $GOPATH/src/$APP
WORKDIR $PTH
# Pre-compile vendors.
COPY vendor/ $PTH/vendor/
RUN go install -i ./vendor/...
ADD . $PTH/
# Display time taken and the list of the packages being compiled.
RUN time go build -v
You can test it doing something like:
docker build -t test --build-arg APP=$(go list .) .
On the project I am working on, without pre-compiling it takes ~12s with 90+ packages compiled each time; after, it takes ~1.2s with only 3 (only the local ones).
If you still get "cannot find package", it means there are missing vendors. Re-running dep ensure should fix it.
Another tip, unrelated to Go: have your .dockerignore start with *, i.e. ignore everything, and then whitelist what you need.
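For example (hypothetical entries; adjust to your project):
# Ignore everything...
*
# ...then whitelist only what the build needs.
!vendor/
!main.go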
As of Go 1.11, you would use Go modules to accomplish this:
FROM golang
WORKDIR /src/myproject
# Copy your dependency list
COPY go.mod go.sum ./
# Download dependencies
RUN go mod download
# Your actual code (everything above is cached)
COPY . .
As long as go.sum doesn't change, the image layer created by go mod download will be reused from the cache.
Using go mod download alone didn't do the trick for me.
What ended up working for me was to leverage buildkit to mount Go's build cache.
This was the article that led me to this feature.
My Dockerfile looks something like this
FROM golang AS build
WORKDIR /go/src/app
COPY go.mod ./
COPY go.sum ./
RUN go mod download
COPY . ./
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build -o /my-happy-app
For local development this made quite a difference for me, changing build times from 1.5 minutes to 3 seconds.
I'm using colima to run Docker on my Mac (just mentioning this since I'm not using Docker for Mac, but it should work just the same) with
colima version 0.4.2
git commit: f112f336d05926d62eb6134ee3d00f206560493b
runtime: docker
arch: x86_64
client: v20.10.9
server: v20.10.11
And Go 1.17, so this is not a 1.10-specific answer.
I took this setup from docker compose's cli dockerfile here.
Your mileage may vary.
In our project, we have an ASP.NET Core project with an Angular2 client. At Docker build time, we launch:
FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN apt-get -qq update; apt-get -qqy --no-install-recommends install \
    git \
    unzip
RUN curl -sL https://deb.nodesource.com/setup_7.x | bash -
RUN apt-get install -y nodejs build-essential
RUN ["dotnet", "restore"]
RUN npm install
RUN npm run build:prod
RUN ["dotnet", "build"]
EXPOSE 5000/tcp
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "run"]
Since restoring the npm packages is essential to be able to build the Angular2 client using npm run build, our Docker image is HUGE, I mean almost 2 GB, while the built Angular2 client itself is only 1.7 MB.
Our app does nothing fancy: simple web API writing to MongoDB and displaying static files.
In order to improve the size of our image, is there any way to exclude paths that are useless at run time? For example node_modules or any .NET Core sources?
dotnet restore may fetch a lot, especially if you have multiple target platforms (Linux, Mac, Windows).
Depending on how your application is configured (i.e. as a portable .NET Core app or as self-contained), it can also pull the whole .NET Core framework for one or multiple platforms and/or architectures (x64, x86). This is mainly explained here.
When "Microsoft.NETCore.App": "1.0.0" is defined without "type": "platform", the complete framework will be fetched via NuGet. Then, if you have multiple runtimes defined
"runtimes": {
"win10-x64": {},
"win10-x86": {},
"osx.10.10-x86": {},
"osx.10.10-x64": {}
}
it will get native libraries for all of these platforms too, and not only in your project directory but also in ~/.nuget, plus the npm cache in addition to node_modules in your project, and eventual copies in your wwwdata.
However, this is not how docker works. Everything you execute inside the Dockerfile is written to the virtual filesystem of the container! That's why you see these issues.
You should follow my previous comment on your other question:
Run dotnet restore, dotnet build and dotnet publish outside the Dockerfile, for example in a bash or powershell/batch script.
Once finished, copy the content of the publish folder into your container with:
dotnet publish
docker build bin\Debug\netcoreapp1.0\publish ... (your other parameters here)
This will generate publish files on your file system, only containing the required dll files, Views and wwwroot content without all the other build files, artifacts, caches or source and will run the docker process from the bin\Debug\netcoreapp1.0\publish folder.
You also need to change your docker files, to copy the files instead of running the commands you have during container building.
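In script form, the flow might look like this (a hedged sketch; the mywebapp tag is a placeholder, and the Dockerfile is assumed to live next to the project and be passed with -f):
dotnet restore
dotnet build
dotnet publish -o bin/Debug/netcoreapp1.0/publish
docker build -t mywebapp -f Dockerfile bin/Debug/netcoreapp1.0/publish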
Scott uses this Dockerfile for his example in his blog:
# Your base image here
FROM ...
# Application to run
ENTRYPOINT ["dotnet", "YourWebAppName.dll"]
# An argument from outside; here, store the path from the real filesystem
ARG source=.
WORKDIR /app
# Define the port it should listen on
ENV ASPNETCORE_URLS http://+:82
EXPOSE 82
# Copy the files from the defined folder, here bin\Debug\netcoreapp1.0\publish,
# into the docker container
COPY $source .
This is the recommended approach for building docker containers. When you run the build commands inside, all the build and publish artifacts remain in the virtual file system and the docker image grows unexpectedly.