Can't copy folder into Docker container

I have a local folder called images that contains a bunch of folders and files. When I run the container with the command docker run -t -i file-uploader -token=abcdefgh, I get the following error:
panic: failed Walk: Failed to walk directory: *fs.PathError lstat ./images/: no such file or directory
goroutine 1 [running]:
main.main()
/src/main.go:57 +0x357
Here is the Dockerfile I created:
FROM golang:1.16
WORKDIR /src
COPY go.sum go.mod ./
RUN go mod download
COPY ./images/ images/
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .
ENTRYPOINT ["/bin/app"]
FROM scratch
COPY --from=0 /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
And, here is the code in the program:
var (
    token    = flag.String("token", "", "user's token for application")
    rootpath = flag.String("rootpath", "./images/", "folder path to be uploaded")
)

func main() {
    flag.Parse()
    if *token == "" {
        log.Fatal(Red + "please provide a client token => -token={$token}")
    }
    tokenSource := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: *token})
    oauthClient := oauth2.NewClient(context.TODO(), tokenSource)
    client := putio.NewClient(oauthClient)
    paths := make(chan string)
    var wg = new(sync.WaitGroup)
    for i := 0; i < 20; i++ {
        wg.Add(1)
        go worker(paths, wg, client)
    }
    if err := filepath.Walk(*rootpath, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return fmt.Errorf("Failed to walk directory: %T %w", err, err)
        }
        if !info.IsDir() {
            paths <- path
        }
        return nil
    }); err != nil {
        panic(fmt.Errorf("failed Walk: %w", err))
    }
    close(paths)
    wg.Wait()
}
If the flag is not provided, its default value is the folder itself, ./images/. When I run it normally, like go run main.go -token="abcde", it works properly. I made a number of changes to the Dockerfile and tried again and again:
Replacing COPY . . with COPY ./images/ /images. It should automatically create a folder inside /src like /src/images, take the local folder from the host, and put it in there. It didn't work.
I also tried COPY . ., believing that it would copy everything from the host into the container. It didn't work either.
I put the two COPY commands together. Didn't work.
COPY . ./ didn't work either.
My structure of the project is as follows:
/file-uploader-cli
  /images
  Dockerfile
  file-uploader-cli (binary)
  go.mod with go.sum
  main.go
How can I put the /images folder into the container and run it properly?
Extra question: the /images folder is approximately 500 MB. Is it good practice to put a folder that size into a container?
I guess it is possible to copy a folder with docker cp {$folder_name} ${container_id}:/{$path}, but it must be a running container or something? I tried this using the image id in place of the container id, but I got an error like No such container:path: 12312312312:/.
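For what it's worth, docker cp targets a container (created or running), not an image, so one rough possibility (the uploader container name is just an example) would be:
docker create --name uploader file-uploader -token=abcdefgh
docker cp ./images uploader:/images
docker start -a uploader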
EDIT:
The problem is the scratch stage, which I added in order to reduce the image size. However, when I remove the scratch part, the image grows to 1.1 GB. Is there an easier or more convenient way to include the images folder without making the image that large?

You don't need images when building your app; you need it when running your app. Instead of adding images to the first stage, add it to the final image:
FROM golang:1.16 AS builder
WORKDIR /src
COPY go.sum go.mod ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .
FROM scratch
WORKDIR /
COPY images /images/
COPY --from=builder /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
Or, if you want to provide images dynamically, you could mount it at runtime.
I also recommend removing docker containers automatically unless you actually want them sticking around. Otherwise you end up with lots and lots of Exited containers.
docker run --rm -it -v /my/path/to/images:/images:ro file-uploader -token=abcdefgh
I would also recommend you put your token in an environment variable so it's not saved in your bash history or in Docker runtime information.
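As a rough sketch (the -e flag is a standard docker run option; having the program fall back to a TOKEN environment variable when -token is empty is an assumed change on the Go side, not something the current code does):
# hypothetical: assumes the app reads TOKEN from the environment when -token is empty
export TOKEN=abcdefgh
docker run --rm -it -v /my/path/to/images:/images:ro -e TOKEN file-uploader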
So, you're saying don't dockerize the app?
I don't usually containerize Go programs unless containers are a good fit for deployment and operations (e.g. Kubernetes). Most languages, like C/C++, Java, Erlang, Python, and JavaScript, require significant runtime components provided by the filesystem: compiled ones are usually dynamically linked with shared libraries from the operating system; VM-based languages like Java or Erlang require the VM to be installed and configured (and it will likely also have runtime dependencies); and interpreted languages like Python, Ruby, or JavaScript require the entire interpreter runtime, as well as any shared libraries the language's libraries are linked to.
Go is an exception to this, though. Ignoring cgo (which I recommend avoiding whenever possible), Go binaries are statically linked and have minimal userspace runtime requirements. This is why a Go binary is one of the few things that can actually work in a container built FROM scratch. The one exception is that the Go program will need CA certificates from the operating system to validate the certificates of HTTPS servers and other protocols secured with TLS.
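As an illustration, a minimal sketch of a scratch stage for the Dockerfile above with the CA bundle copied in (the certificate path is the standard Debian location that the golang base image already contains):
FROM scratch
COPY images /images/
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /bin/app /bin/app
ENTRYPOINT ["/bin/app"]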
I recommend that you don't dockerize the app unless docker is helping you distribute or operate it.

The problem is that the second build stage does not include the images directory. I don't think you can use the scratch base for this purpose, so we will have to change that. If you are concerned about image size, you should look into Alpine Linux. I have converted your Dockerfile to use Alpine.
FROM golang:1.16-alpine
WORKDIR /src
COPY go.sum go.mod ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .
ENTRYPOINT ["/bin/app"]
FROM alpine:3.15.0
WORKDIR /opt/app
COPY --from=0 /bin/app app
COPY images ./images/
ENTRYPOINT ["/opt/app/app"]
Note that I changed the app path in the second build stage to /opt/app. I did this because it would be odd to have an images folder under /bin, and /opt is a common place to store user applications.
As for whether or not you should containerize your code, that is up to you. One of Go's advantages is static compilation and easy cross-compiling. So you could distribute your binary (and images) as is.
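For instance, cross-compiling the CLI for a few common targets straight from the project directory (the target list is just an example):
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o file-uploader-cli-linux-amd64 .
CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build -o file-uploader-cli-darwin-arm64 .
CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build -o file-uploader-cli-windows-amd64.exe .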

Related

Why does my Dockerfile not copy the HTML files?

My directory is:
- Dockerfile
- app/
  - main.go
- media/
  - css/
  - html/
  - img/
  - svg/
Inside the html folder, I have subfolders to organise my HTML files, so the path to the HTML files is media/html/*/*.html
And I have my Dockerfile as follows:
FROM golang:alpine
# Set necessary environment variables needed for our image
ENV GO111MODULE=on \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64
# Copy the code into the container
COPY media .
# Move to working directory /build
WORKDIR /build
# Copy the code from /app to the build folder into the container
COPY app .
# Configure the build (go.mod and go.sum are already copied with prior step)
RUN go mod download
# Build the application
RUN go build -o main .
WORKDIR /app
# Copy binary from build to main folder
RUN cp /build/main .
# Export necessary port
EXPOSE 8080
# Command to run when starting the container
CMD ["/app/main"]
and my main.go is:
package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    // We create the instance for Gin
    r := gin.Default()
    // Path to the static files. /static is rendered in the HTML and /media is the path to the images, svg, css... the static files
    r.StaticFS("/static", http.Dir("../media"))
    // Path to the HTML templates. * is a wildcard
    r.LoadHTMLGlob("../media/html/*/*.html")
    r.NoRoute(renderHome)
    // This gets executed when the user hits the home domain ("/")
    r.GET("/", renderHome)
    r.Run(":8080")
}

func renderHome(c *gin.Context) {
    c.HTML(http.StatusOK, "my-html.html", gin.H{})
}
The problem is: I can run my app without problems with go run main.go, and I can build the Docker image without problems, but the moment I run a Docker container from the image, I get the error:
panic: html/template: pattern matches no files: ../media/html/*/*.html
The path is correct (as proven by the fact that I can run it with plain Go), and it seems that Docker is not copying the files correctly, or at least not into the right directory. What is failing? The full simple project can be found here
media is a bad choice for a Docker folder, because a typical Linux container already has a /media folder.
But that's not the root cause.
The root cause is that COPY media . copies the contents of media folder to /. You probably want COPY media/ /media/ if you want to preserve the media folder itself (or use WORKDIR /media).
As a debug tool, you can run your container with a shell as entrypoint to "look around" it without starting your app:
docker build . -t test
docker run -it --rm test sh
/app # ls /media
cdrom floppy usb
/app # ls -R /html
/html:
website
/html/website:
my-html.html
As you can see your media/html folder is located at /html.
Some more notes:
It's a good idea to move go mod download to before COPY app so that the downloaded modules can be cached:
FROM golang:alpine
WORKDIR /build
COPY app/go.mod app/go.sum ./
RUN go mod download
COPY app .
RUN go build -o main .
WORKDIR /app
RUN cp /build/main .
COPY media /media/
EXPOSE 8080
CMD ["/app/main"]
As a next step you can look into two-stage builds to not depend on the golang image for running the compiled app (is only needed for building really).
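For reference, a hedged sketch of what that two-stage variant could look like, reusing the paths from the Dockerfile above:
FROM golang:alpine AS builder
WORKDIR /build
COPY app/go.mod app/go.sum ./
RUN go mod download
COPY app .
RUN CGO_ENABLED=0 go build -o main .

FROM alpine
WORKDIR /app
COPY --from=builder /build/main .
COPY media /media/
EXPOSE 8080
CMD ["/app/main"]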

Docker container has not set GOPATH correctly

I have a problem when I try to run my app in a Docker container. It runs fine with a simple go run main.go, but whenever I build an image and run the container, I get the error panic: html/template: pattern matches no files: *.html, so I guess GOPATH is not properly set in the container (though I use this same Dockerfile for other projects and don't have any problems). I am a little lost here, since I have been using this method for a while without issues.
I am using Gin as a framework for development.
The Dockerfile is:
FROM golang:alpine as builder
RUN apk update && apk add git && apk add ca-certificates
# For email certificate
RUN apk add -U --no-cache ca-certificates
COPY . $GOPATH/src/github.com/kiketordera/advanced-performance/
WORKDIR $GOPATH/src/github.com/kiketordera/advanced-performance/
RUN go get -d -v $GOPATH/src/github.com/kiketordera/advanced-performance
# For Cloud Server
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags="-w -s" -o /go/bin/advanced-performance $GOPATH/src/github.com/kiketordera/advanced-performance
FROM scratch
COPY --from=builder /go/bin/advanced-performance /advanced-performance
COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /go/src/github.com/kiketordera/advanced-performance/media/
# For email certificate
VOLUME /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt
COPY --from=alpine /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8050/tcp
ENV GOPATH /go
ENTRYPOINT ["/advanced-performance"]
Main function is:
package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"
    i18n "github.com/suisrc/gin-i18n"
    "golang.org/x/text/language"
)

func main() {
    // We create the instance for Gin
    r := gin.Default()
    // Internationalization for showing the right language to match the browser's default settings
    bundle := i18n.NewBundle(
        language.English,
        "text/en.toml",
        "text/es.toml",
    )
    // Tell Gin to use our middleware. This means that on every single request (GET, POST...), the call to i18n will be executed
    r.Use(i18n.Serve(bundle))
    // Path to the static files. /static is rendered in the HTML and /media is the path to the images, svg, css... the static files
    r.StaticFS("/static", http.Dir("media"))
    // Path to the HTML templates. * is a wildcard
    r.LoadHTMLGlob("*.html")
    // Redirects when the user enters a wrong URL
    r.NoRoute(redirect)
    // This gets executed when the user hits the home domain ("/")
    r.GET("/", renderHome)
    r.POST("/", getForm)
    // Listen and serve on 0.0.0.0:8080 (for Windows "localhost:8080")
    r.Run()
}
The full project can be found at https://github.com/kiketordera/advanced-performance; it is a simple website with i18n rendering and a POST form handler.
GOPATH is not relevant; it's used to "resolve import statements" and plays no role when running an executable (unless your code references it specifically!). The WORKDIR is the issue here.
FROM "clears any state created by previous instructions". This includes the WORKDIR. For example if you use the docker file:
FROM alpine:3.12
WORKDIR /test
COPY 1.txt .

FROM alpine:3.12
COPY 2.txt .
The final resulting image will have file 2.txt in the root folder (and no /test folder).
In your Dockerfile you are copying the media folder to /go/src/github.com/kiketordera/advanced-performance/media/ on the assumption that the WORKDIR will still be set; but that is not the case (it defaults to /). The simplest fix is to change COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /go/src/github.com/kiketordera/advanced-performance/media/ to COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /media/.
You are also accessing files from the root folder, so you need to copy those in with COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/*.html / (or similar). Given that you are doing this, it's probably best to put everything (the exe, HTML files and media folder) into a folder (e.g. /app) to keep the root folder clean.
Note: There is no need to set GOPATH in the second image; as mentioned above it's not relevant when running the executable. I'd recommend using modules (support for GOPATH will probably be dropped in 1.17); this would also enable you to considerably shorten your paths!
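To make that concrete, a hedged sketch of what the final stage could look like with everything gathered under /app (the text/ folder for the i18n bundles is assumed from the paths in main.go; adjust to the real repository layout):
FROM scratch
WORKDIR /app
COPY --from=builder /go/bin/advanced-performance ./advanced-performance
COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ ./media/
COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/text/ ./text/
COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/*.html ./
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8050/tcp
ENTRYPOINT ["/app/advanced-performance"]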

How to use go mod with local package and docker?

I have two Go modules, github.com/myuser/mymainrepo and github.com/myuser/commonrepo.
Here is how I have the files on my local computer:
- allmyrepos
  - mymainrepo
    - Dockerfile
    - go.mod
  - commonrepo
    - go.mod
mymainrepo/go.mod
...
require (
    github.com/myuser/commonrepo
)

replace (
    github.com/myuser/commonrepo => ../commonrepo
)
It works well; I can do local development with it. The problem happens when I'm building the Docker image of mymainrepo.
mymainrepo/Dockerfile
...
WORKDIR /go/src/mymainrepo
COPY go.mod go.sum ./
RUN go mod download
COPY ./ ./
RUN go build -o appbinary
...
Here the replace directive replaces github.com/myuser/commonrepo with ../commonrepo, but inside Docker /go/src/commonrepo does not exist.
I'm building the Docker image on CI/CD, which needs to fetch directly from the remote GitHub URL, but I also need to do local development on commonrepo. How can I do both?
I tried putting all my files in GOPATH, so they live at ~/go/src/github.com/myuser/commonrepo and ~/go/src/github.com/myuser/mymainrepo, and I removed the replace directive. But then it looks for commonrepo inside ~/go/pkg/mod/..., which is what was downloaded from GitHub.
Create two go.mod files: one for local development, and one for your build. You can name it go.build.mod for example.
Keep the replace directive in your go.mod file but remove it from go.build.mod.
Finally, in your Dockerfile:
COPY go.build.mod ./go.mod
COPY go.sum ./
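For context, a hedged sketch of how that could slot into the asker's Dockerfile; note that the later COPY ./ ./ brings the development go.mod back into the image, so the build variant is copied over it again before building (that extra cp is my assumption, not part of the answer above):
WORKDIR /go/src/mymainrepo
COPY go.build.mod ./go.mod
COPY go.sum ./
RUN go mod download
COPY ./ ./
# restore the build variant in case the previous COPY overwrote go.mod
RUN cp go.build.mod go.mod && go build -o appbinary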
I still can't find a better solution; even the top-voted answer doesn't work for me. Here is a trick I've used as a workaround. This is an example structure for doing this:
|---sample
| |---...
| |---go.mod
| |---Dockerfile
|---core
| |---...
| |---go.mod
We know that docker build errors out when it can't find our local module, so let's make one available in the builder:
# Use the official golang image to create a binary.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.16.3-buster AS builder
# Copy core library
RUN mkdir /core
COPY core/ /core
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary
RUN go build -o /app/sample cmd/main.go
...
...
OK, our working dir is /app and our core lib is placed next to it at /core.
Now for the trick when building the Docker image. Yeah, you know it:
cp -R ../core . && docker build --tag sample-service . && rm -R core/
Update
A better way: create a Makefile next to the Dockerfile with the content below:
build:
	cp -R ../core .
	docker build -t sample-service .
	rm -R core/
Then run make build in the sample directory.
You can create make submit or make deploy commands as you like.
=> Production ready!
Be aware that if an error occurs during the docker build process, the core folder that was copied into sample will not be deleted.
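One way around that (my own suggestion, not part of the answer above) is a small wrapper script that always cleans up, even when the build fails:
#!/bin/sh
# build.sh (hypothetical): copy the local module in, build, and always clean up
set -e
cp -R ../core .
trap 'rm -rf core/' EXIT
docker build -t sample-service .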
Please let me know if you find a better solution. ;)

Cache Cargo dependencies in a Docker volume

I'm building a Rust program in Docker (rust:1.33.0).
Every time code changes, it re-compiles (good), which also re-downloads all dependencies (bad).
I thought I could cache dependencies by adding VOLUME ["/usr/local/cargo"]. Edit: I've also tried moving this dir with CARGO_HOME, without luck.
I thought that making this a volume would persist the downloaded dependencies, which appear to be in this directory.
But it didn't work, they are still downloaded every time. Why?
Dockerfile
FROM rust:1.33.0
VOLUME ["/output", "/usr/local/cargo"]
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
COPY src/ ./src/
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
Built with just docker build ..
Cargo.toml
[package]
name = "mwe"
version = "0.1.0"
[dependencies]
log = { version = "0.4.6" }
Code: just hello world
Output of second run after changing main.rs:
...
Step 4/6 : COPY Cargo.toml .
---> Using cache
---> 97f180cb6ce2
Step 5/6 : COPY src/ ./src/
---> 835be1ea0541
Step 6/6 : RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
---> Running in 551299a42907
Updating crates.io index
Downloading crates ...
Downloaded log v0.4.6
Downloaded cfg-if v0.1.6
Compiling cfg-if v0.1.6
Compiling log v0.4.6
Compiling mwe v0.1.0 (/)
Finished dev [unoptimized + debuginfo] target(s) in 17.43s
Removing intermediate container 551299a42907
---> e4626da13204
Successfully built e4626da13204
A volume inside the Dockerfile is counter-productive here. That would mount an anonymous volume at each build step, and again when you run the container. The volume during each build step is discarded after that step completes, which means you would need to download the entire contents again for any other step needing those dependencies.
The standard model for this is to copy your dependency specification, run the dependency download, copy your code, and then compile or run your code, in 4 separate steps. That lets docker cache the layers in an efficient manner. I'm not familiar with rust or cargo specifically, but I believe that would look like:
FROM rust:1.33.0
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
RUN cargo fetch # this should download dependencies
COPY src/ ./src/
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
Another option is to turn on some experimental features with BuildKit (available in 18.09, released 2018-11-08) so that docker saves these dependencies in what is similar to a named volume for your build. The directory can be reused across builds, but never gets added to the image itself, making it useful for things like a download cache.
# syntax=docker/dockerfile:experimental
FROM rust:1.33.0
VOLUME ["/output", "/usr/local/cargo"]
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
COPY src/ ./src/
RUN --mount=type=cache,target=/root/.cargo \
["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
Note that the above assumes cargo is caching files in /root/.cargo. You'd need to verify this and adjust as appropriate. I also haven't mixed the mount syntax with a json exec syntax to know if that part works. You can read more about the BuildKit experimental features here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
Turning on BuildKit from 18.09 and newer versions is as easy as export DOCKER_BUILDKIT=1 and then running your build from that shell.
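For example, from the project directory (the image tag is arbitrary):
export DOCKER_BUILDKIT=1
docker build -t mwe .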
I would say the nicer solution would be to resort to a Docker multi-stage build, as pointed out here and there.
This way you can create a first image that builds both your application and your dependencies, and then use only the dependency folder from that first image in the second one.
This is inspired by both your comment on @Jack Gore's answer and the two issue comments linked above.
FROM rust:1.33.0 as dependencies
WORKDIR /usr/src/app
COPY Cargo.toml .
RUN rustup default nightly-2019-01-29 && \
    mkdir -p src && \
    echo "fn main() {}" > src/main.rs && \
    cargo build -Z unstable-options --out-dir /output

FROM rust:1.33.0 as application
# Those are the lines instructing this image to reuse the files
# from the previous image that was aliased as "dependencies"
COPY --from=dependencies /usr/src/app/Cargo.toml .
COPY --from=dependencies /usr/local/cargo /usr/local/cargo
COPY src/ src/
VOLUME /output
RUN rustup default nightly-2019-01-29 && \
    cargo build -Z unstable-options --out-dir /output
PS: having only one RUN will reduce the number of layers you generate; more info here.
Here's an overview of the possibilities. (Scroll down for my original answer.)
1. Add Cargo files, create fake main.rs/lib.rs, then compile dependencies. Afterwards remove the fake source and add the real ones. [Caches dependencies, but several fake files with workspaces.]
2. Add Cargo files, create fake main.rs/lib.rs, then compile dependencies. Afterwards create a new layer with the dependencies and continue from there. [Similar to above.]
3. Externally mount a volume for the cache dir. [Caches everything, relies on caller to pass --mount.]
4. Use RUN --mount=type=cache,target=/the/path cargo build in the Dockerfile in new Docker versions. [Caches everything, seems like a good way, but currently too new to work for me. Executable not part of image. Edit: see here for a solution.]
5. Run sccache in another container or on the host, then connect to that during the build process. See this comment in Cargo issue 2644.
6. Use cargo-build-deps. [Might work for some, but does not support Cargo workspaces (in 2019).]
7. Wait for Cargo issue 2644. [There's willingness to add this to Cargo, but no concrete solution yet.]
Using VOLUME ["/the/path"] in the Dockerfile does NOT work, this is per-layer (per command) only.
Note: one can set CARGO_HOME and ENV CARGO_TARGET_DIR in the Dockerfile to control where download cache and compiled output goes.
Also note: cargo fetch can at least cache downloading of dependencies, although not compiling.
Cargo workspaces suffer from having to manually add each Cargo file, and for some solutions, having to generate a dozen fake main.rs/lib.rs. For projects with a single Cargo file, the solutions work better.
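As a rough illustration of option 1 above for a single-crate project like the one in the question (the touch is there because COPY preserves host timestamps, which can otherwise make cargo think nothing changed since the dummy build):
FROM rust:1.33.0
WORKDIR /app
COPY Cargo.toml .
# build a dummy crate so the dependency layers get cached
RUN mkdir src && echo "fn main() {}" > src/main.rs && cargo build && rm -r src
# now add the real sources and rebuild; only the crate itself is recompiled
COPY src/ ./src/
RUN touch src/main.rs && cargo build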
I've got caching to work for my particular case by adding
ENV CARGO_HOME /code/dockerout/cargo
ENV CARGO_TARGET_DIR /code/dockerout/target
Where /code is the directory where I mount my code.
This is externally mounted, not from the Dockerfile.
EDIT1: I was confused about why this worked, but @b.enoit.be and @BMitch cleared up that it's because volumes declared inside the Dockerfile only live for one layer (one command).
You do not need to use an explicit Docker volume to cache your dependencies. Docker will automatically cache the different "layers" of your image. Basically, each command in the Dockerfile corresponds to a layer of the image. The problem you are facing is based on how Docker image layer caching works.
The rules that Docker follows for image layer caching are listed in the official documentation:
Starting with a parent image that is already in the cache, the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
In most cases, simply comparing the instruction in the Dockerfile with one of the child images is sufficient. However, certain instructions require more examination and explanation.
For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file. The last-modified and last-accessed times of the file(s) are not considered in these checksums. During the cache lookup, the checksum is compared against the checksum in the existing images. If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
Aside from the ADD and COPY commands, cache checking does not look at the files in the container to determine a cache match. For example, when processing a RUN apt-get -y update command the files updated in the container are not examined to determine if a cache hit exists. In that case just the command string itself is used to find a match.
Once the cache is invalidated, all subsequent Dockerfile commands generate new images and the cache is not used.
So the problem is with the positioning of the command COPY src/ ./src/ in the Dockerfile. Whenever there is a change in one of your source files, the cache will be invalidated and all subsequent commands will not use the cache. Therefore your cargo build command will not use the Docker cache.
To solve your problem, it is as simple as reordering the commands in your Dockerfile to this:
FROM rust:1.33.0
RUN rustup default nightly-2019-01-29
COPY Cargo.toml .
RUN ["cargo", "build", "-Z", "unstable-options", "--out-dir", "/output"]
COPY src/ ./src/
Doing it this way, your dependencies will only be reinstalled when there is a change in your Cargo.toml.
Hope this helps.
With the integration of BuildKit into docker, if you are able to avail yourself of the superior BuildKit backend, it's now possible to mount a cache volume during a RUN command, and IMHO, this has become the best way to cache cargo builds. The cache volume retains the data that was written to it on previous runs.
To use BuildKit, you'll mount two cache volumes, one for the cargo dir, which caches external crate sources, and one for the target dir, which caches all of your built artifacts, including external crates and the project bins and libs.
If your base image is rust, $CARGO_HOME is set to /usr/local/cargo, so your command looks like this:
RUN --mount=type=cache,target=/usr/local/cargo,from=rust,source=/usr/local/cargo \
--mount=type=cache,target=target \
cargo build
If your base image is something else, you will need to change the /usr/local/cargo bit to whatever the value of $CARGO_HOME is, or else add an ENV CARGO_HOME=/usr/local/cargo line. As a side note, the clever thing would be to set literally target=$CARGO_HOME and let Docker do the expansion, but it doesn't seem to work right: expansion happens, but BuildKit still doesn't persist the same volume across runs when you do this.
Other options for achieving Cargo build caching (including sccache and the cargo wharf project) are described in this github issue.
I figured out how to get this also working with cargo workspaces, using romac's fork of cargo-build-deps.
This example has my_app, and two workspaces: utils and db.
FROM rust:nightly as rust
# Cache deps
WORKDIR /app
RUN sudo chown -R rust:rust .
RUN USER=root cargo new myapp
# Install cache-deps
RUN cargo install --git https://github.com/romac/cargo-build-deps.git
WORKDIR /app/myapp
RUN mkdir -p db/src/ utils/src/
# Copy the Cargo tomls
COPY myapp/Cargo.toml myapp/Cargo.lock ./
COPY myapp/db/Cargo.toml ./db/
COPY myapp/utils/Cargo.toml ./utils/
# Cache the deps
RUN cargo build-deps
# Copy the src folders
COPY myapp/src ./src/
COPY myapp/db/src ./db/src/
COPY myapp/utils/src/ ./utils/src/
# Build for debug
RUN cargo build
I'm sure you can adjust this code for use with a Dockerfile, but I wrote a dockerized drop-in replacement for cargo that you can save to a package and run as ./cargo build --release. This just works for (most) development (uses rust:latest), but isn't set up for CI or anything.
Usage: ./cargo build, ./cargo build --release, etc
It will use the current working directory and save the cache to ./.cargo. (You can ignore the entire directory in your version control and it doesn't need to exist beforehand.)
Create a file named cargo in your project's folder, run chmod +x ./cargo on it, and place the following code in it:
#!/bin/bash
# This is a drop-in replacement for `cargo`
# that runs in a Docker container as the current user
# on the latest Rust image
# and saves all generated files to `./cargo/` and `./target/`.
#
# Be sure to make this file executable: `chmod +x ./cargo`
#
# # Examples
#
# - Running app: `./cargo run`
# - Building app: `./cargo build`
# - Building release: `./cargo build --release`
#
# # Installing globally
#
# To run `cargo` from anywhere,
# save this file to `/usr/local/bin`.
# You'll then be able to use `cargo`
# as if you had installed Rust globally.
sudo docker run \
    --rm \
    --user "$(id -u)":"$(id -g)" \
    --mount type=bind,src="$PWD",dst=/usr/src/app \
    --workdir /usr/src/app \
    --env CARGO_HOME=/usr/src/app/.cargo \
    rust:latest \
    cargo "$@"

Accessing file outside build context

I am aware that you cannot step outside of Docker's build context and I am looking for alternatives on how to share a file between two folders (outside the build context).
My folder structure is
project
- server
  - Dockerfile
- client
  - Dockerfile
My client folder needs to access a file inside the server folder for some code generation, where the client is built according to the contract of the server.
The client Dockerfile looks like the following:
FROM node:10-alpine AS build
WORKDIR /app
COPY . /app
RUN yarn install
RUN yarn build
FROM node:10-alpine
WORKDIR /app
RUN yarn install --production
COPY --from=build /app ./
EXPOSE 5000
CMD [ "yarn", "serve" ]
I run docker build -t my-name . inside the client directory.
During the RUN yarn build step, a script is looking for a file in ../server/src/schema/schema.graphql which can not be found, as the file is outside the client directory and therefore outside Docker's build context.
How can I get around this, or other suggestions to solving this issue?
The easiest way to do this is to use the root of your source tree as the Docker context directory, point it at one or the other of the Dockerfiles, and be explicit about whether you're using the client or server trees.
cd $HOME/project
docker build \
-t project-client:$(git rev-parse --short HEAD) \
-f client/Dockerfile \
.
FROM node:10-alpine AS build
WORKDIR /app
COPY client/ ./
Et cetera.
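For illustration, a hedged sketch of how that build stage might continue so the schema is reachable (the exact paths are assumptions based on the question's layout):
FROM node:10-alpine AS build
WORKDIR /app
COPY client/ ./
# make the schema visible at the relative path the build script expects (../server/... resolves to /server/... here)
COPY server/src/schema/ /server/src/schema/
RUN yarn install
RUN yarn build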
In the specific case of GraphQL, depending on your application and library stack, it may be possible to avoid needing the schema at all and just make unchecked client calls; or to make an introspection query at startup time to dynamically fetch the schema; or to maintain two separate copies of the schema file. Some projects I work on use GraphQL interfaces where the servers and clients live in genuinely separate repositories and there's no choice but to store separate copies of the schema, but if you're careful about changes, this hasn't been a problem in practice.
