Hi, I have a Cargo workspace with multiple projects in it. Now I want to dockerize it, but I want every project in a separate image. Is there a way to build the entire workspace once and create multiple images from that build?
Right now I'm using a separate Dockerfile to build every project.
This is the Dockerfile for groups-service:
FROM rust:slim AS builder
RUN rustup target add x86_64-unknown-linux-musl
RUN apt update && apt install -y musl-tools musl-dev
RUN update-ca-certificates
WORKDIR /usr/src/app
# copy entire workspace
COPY . .
RUN cargo build --target x86_64-unknown-linux-musl --release
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-service ./
CMD [ "./groups-service" ]
I have a Dockerfile like this for every service.
But I want a single Dockerfile that produces multiple images with different names, like service/groups, service/report, and service/graph, so I can run docker build . once and have all the services built.
Building everything separately takes a lot of time right now, and I want to reduce that and simplify my workflow.
Simply build all your images from the same Dockerfile, e.g.:
FROM rust:slim AS builder
RUN rustup target add x86_64-unknown-linux-musl
RUN apt update && apt install -y musl-tools musl-dev
RUN update-ca-certificates
WORKDIR /usr/src/app
# copy entire workspace
COPY . .
RUN cargo build --target x86_64-unknown-linux-musl --release
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-service ./
CMD [ "./groups-service" ]
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-report ./
CMD [ "./groups-report" ]
Then a single docker build will build all the target images using the same builder stage. You can then set the image names with docker tag. To make it easier to identify the built images, you can add a LABEL to each one:
...
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-service ./
CMD [ "./groups-service" ]
LABEL service=groups
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-report ./
CMD [ "./groups-report" ]
LABEL service=report
Then use docker image inspect --format='{{.Config.Labels}}' or docker image ls --filter=label=<key>=<value> to identify the images. E.g.:
docker tag $(docker image ls -q --filter=dangling=true --filter=label=service=groups) service/groups
docker tag $(docker image ls -q --filter=dangling=true --filter=label=service=report) service/report
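Note that with BuildKit enabled (the default builder in recent Docker releases), stages that the last stage doesn't depend on are skipped, so the dangling-image approach above only works with the classic builder. A sketch that works either way, assuming you name each runtime stage with AS:

FROM alpine AS groups
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-service ./
CMD [ "./groups-service" ]

FROM alpine AS report
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-report ./
CMD [ "./groups-report" ]

Then build and tag each service directly; the shared builder stage compiles the workspace once and is served from cache on the subsequent builds:

docker build --target groups -t service/groups .
docker build --target report -t service/report .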
Related
I am trying to build a React app in Docker. Here is my Dockerfile:
FROM node as build-step
LABEL stage=build-step
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
RUN npm run build
FROM nginx
COPY --from=build-step /app/build /usr/share/nginx/html
Using this command:
docker build . --rm -t react-server-manual:0.1
This works, but it also creates a few other images that are useless. How do I delete them?
What am I missing?
Unfortunately, --rm doesn't remove such intermediate images; it only removes the intermediate containers used during the build.
You can run:
docker build . -t react-server-manual:0.1 && \
docker image prune -f --filter label=stage=build-step
(or run the prune as a separate command)
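To preview which images carry the label before removing anything, the same filter works with docker image ls:

docker image ls --filter label=stage=build-step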
I have a custom Dockerfile that sets up and builds a project of mine.
But I haven't been able to place the build output into a folder on the host. Here are the command and the Dockerfile...
Command
sudo docker build --output type=local,dest=./build/server/server -f ./build/scripts/Dockerfile.server ./server
Dockerfile
FROM node:14 AS build-stage
WORKDIR /usr/src/project
RUN npm i nexe@3.3.7 -g
COPY package*.json ./
RUN npm install --only=production
COPY . .
RUN nexe server.js -t linux-x64-12.14.1
FROM scratch AS export-stage
COPY --from=build-stage /usr/src/project/server /
Docker BuildKit needs to be enabled before running the build command:
export DOCKER_BUILDKIT=1
You are setting up the context wrong. The command should be:
sudo docker build --output type=local,dest=./build/server/server -f ./build/scripts/Dockerfile.server .
The dot at the end sets the build context to the current directory.
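Putting both fixes together, the full invocation would look like this (paths as in the question):

export DOCKER_BUILDKIT=1
sudo docker build --output type=local,dest=./build/server/server \
    -f ./build/scripts/Dockerfile.server .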
I have the following Dockerfile, which works OK; I was able to build and run it successfully:
FROM golang:1.13.6 AS build-env
ENV GO111MODULE=on
ENV GOOS=linux
ENV CGO_ENABLED=0
RUN mkdir -p /go/src/github.company.corp/deng/fst-cl
WORKDIR /go/src/github.company.corp/deng/fsr-clie
COPY ./ ./
# build the code
RUN go build -v -o ./fsr ./src/cmd/main.go
Now I want to change the image to use a lighter Docker image such as Go Alpine.
So I changed the FROM to the Alpine version and also added git. However, the build is now failing for a Go library, which didn't happen before the change. Any idea what could be missing?
FROM golang:1.13.6-alpine AS build-env
ENV GO111MODULE=on
ENV GOOS=linux
ENV CGO_ENABLED=0
## git is required to fetch go dependencies
RUN apk add --no-cache ca-certificates git
RUN apk add --no-cache gcc musl-dev
RUN mkdir -p /go/src/github.company.corp/deng/fst-cl
WORKDIR /go/src/github.company.corp/deng/fsr-clie
COPY ./ ./
# build the code
RUN go build -v -o ./fsr ./src/cmd/main.go
The error is for a specific repo which resides on our company Git server, but I don't understand why it happens on golang:1.13.6-alpine and works OK on golang:1.13.6.
By the way, I tried different versions of Go Alpine without success...
This is the error:
get "github.company.corp/deng/logger-ut": found meta tag get.metaImport{Prefix:"github.company.corp/deng/logger-ut", VCS:"git", RepoRoot:"https://github.company.corp/deng/logger-ut.git"} at //github.company.corp/deng/logger-ut?go-get=1
go: github.company.corp/deng/logger-ut#v1.0.0: reading github.company.corp/deng/logger-ut/go.mod at revision v1.0.0: unknown revision v1.0.0
If you want a lighter image and wish to use Alpine, you can use the example below. Your final app image should be something like 7 MB on scratch. Adjust it as needed!
# STAGE 1: prepare
FROM golang:1.13.1-alpine3.10 as prepare
WORKDIR /source
COPY go.mod .
COPY go.sum .
RUN go mod download
# STAGE 2: build
FROM prepare AS build
COPY . .
RUN CGO_ENABLED=0 go build -ldflags "-s -w" -o bin/app -v your/app.go
# STAGE 3: run
FROM scratch as run
COPY --from=build /source/bin/app /app
ENTRYPOINT ["/app"]
I'm a beginner with Docker, and I'm trying to build an image in two stages, as explained here: https://docs.docker.com/develop/develop-images/multistage-build/
You can selectively copy artifacts from one stage to another
Looking at the examples given there, I had thought that one could build some files during a first stage, and then make them available for the next one:
FROM golang:1.7.3 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
(Example taken from the above-linked page)
Isn't that what the COPY app.go . and the COPY --from=builder /go/src/github.com/alexellis/href-counter/app . are supposed to do?
I probably have a complete misunderstanding of what is going on, because when I try to do something similar (see below), it seems that the COPY command in the first stage is not able to see the files that have just been built. I can confirm that they have actually been built using a RUN ls step, but then I get an lstat <the file>: no such file or directory error.
And indeed, most other information I can gather regarding COPY (except the examples in the above link) rather suggests that COPY is actually meant to copy files from the directory where the docker build command was launched, not from within the build environment.
Here is my Dockerfile:
FROM haskell:8.6.5 as haskell
RUN git clone https://gitlab+deploy-token-75:sakyTxfe-PxPHDwqsoGm@gitlab.pasteur.fr/bli/bioinfo_utils.git
WORKDIR bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell
RUN stack --resolver ghc-8.6.5 build && \
stack --resolver ghc-8.6.5 install --local-bin-path .
RUN pwd; echo "---"; ls
COPY remove-duplicates-from-sorted-fastq .
FROM python:3.7-buster
RUN python3.7 -m pip install snakemake
RUN mkdir -p /opt/bin
COPY --from=haskell /bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell/remove-duplicates-from-sorted-fastq /opt/bin/remove-duplicates-from-sorted-fastq
CMD ["/bin/bash"]
And here is how the build ends when I run docker build . from the directory containing the Dockerfile:
Step 5/11 : RUN pwd; echo "---"; ls
---> Running in 28ff49fe9150
/bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell
---
LICENSE
Setup.hs
install.sh
remove-duplicates-from-sorted-fastq
remove-duplicates-from-sorted-fastq.cabal
src
stack.yaml
---> f551efc6bba2
Removing intermediate container 28ff49fe9150
Step 6/11 : COPY remove-duplicates-from-sorted-fastq .
lstat remove-duplicates-from-sorted-fastq: no such file or directory
How am I supposed to proceed to have the built file available for the next stage?
Well, apparently, I was misled by the COPY step used in the first stage of the doc example. In my case it is actually useless, and I can just COPY --from=haskell in my second stage, without any COPY in the first stage.
The following Dockerfile builds without issues:
FROM haskell:8.6.5 as haskell
RUN git clone https://gitlab+deploy-token-75:sakyTxfe-PxPHDwqsoGm@gitlab.pasteur.fr/bli/bioinfo_utils.git
WORKDIR bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell
RUN stack --resolver ghc-8.6.5 build && \
stack --resolver ghc-8.6.5 install --local-bin-path .
FROM python:3.7-buster
RUN python3.7 -m pip install snakemake
RUN mkdir -p /opt/bin
COPY --from=haskell /bioinfo_utils/remove-duplicates-from-sorted-fastq/Haskell/remove-duplicates-from-sorted-fastq /opt/bin/remove-duplicates-from-sorted-fastq
CMD ["/bin/bash"]
I'm having some weird issues with my custom Dockerfile, compiling a .NET Core app in Alpine containers.
I've tried numerous different configurations to no avail - the cache is ALWAYS invalidated when I add the final FROM instruction (if I comment it and everything below it out, caching works fine). Here's the file:
FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
ARG ASPNETCORE_ENVIRONMENT=development
ARG ASPNET_CONFIGURATION=Debug
ARG PROJECT_DIR=src/API/
ARG PROJECT_NAME=MyAPI
ARG SOLUTION_NAME=MySolution
RUN export
WORKDIR /source
COPY ./*.sln ./nuget.config ./
# Copy source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
# # Copy test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done
RUN dotnet restore
COPY . ./
RUN for dir in test/*.Tests/; do (cd "$dir" && dotnet test --filter TestType!=Integration); done
WORKDIR /source/${PROJECT_DIR}
RUN dotnet build ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app
RUN dotnet publish ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app --no-restore
FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine3.7
ARG ASPNETCORE_ENVIRONMENT=development
RUN export
COPY --from=build /app .
WORKDIR /app
EXPOSE 80
VOLUME /app/logs
ENTRYPOINT ["dotnet", "MyAssembly.dll"]
Any ideas? Hints? Tips? Blazingly obvious mistakes? I've checked each layer and the COPY . ./ instruction ONLY copies the files I expect it to - and none of them change between builds.
It's also worth noting that if I remove the last FROM instruction (and the other relevant lines) the cache works perfectly - but the final image is obviously considerably bigger than one based on microsoft/dotnet:2.1-aspnetcore-runtime-alpine3.7 (1.8 GB vs 172 MB). I have tried just commenting out the COPY instruction after the FROM, but it doesn't affect the cache invalidation. The following works as expected:
FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
ARG ASPNETCORE_ENVIRONMENT=development
ARG ASPNET_CONFIGURATION=Debug
ARG PROJECT_DIR=src/API/
ARG PROJECT_NAME=MyAPI
ARG SOLUTION_NAME=MySolution
RUN export
WORKDIR /source
COPY ./*.sln ./nuget.config ./
# Copy source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
# # Copy test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done
RUN dotnet restore
COPY . ./
RUN for dir in test/*.Tests/; do (cd "$dir" && dotnet test --filter TestType!=Integration); done
WORKDIR /source/${PROJECT_DIR}
RUN dotnet build ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app
RUN dotnet publish ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app --no-restore
WORKDIR /app
EXPOSE 80
VOLUME /app/logs
ENTRYPOINT ["dotnet", "MyAssembly.dll"]
.dockerignore below:
base-images/
docker-compose.yml
docker-compose.*.yml
VERSION
**/.*
**/*.ps1
**/*.DotSettings
**/*.csproj.user
**/*.md
**/*.log
**/*.sh
**/Dockerfile
**/bin
**/obj
**/node_modules
**/.vs
**/.vscode
**/dist
**/packages/
**/wwwroot/
Last bit of info: I'm building the containers using docker-compose - specifically by running docker-compose build myservicename - but building the image with docker build -f src/MyAssembly/Dockerfile -t MyImageName . yields the same results.
If you're building locally and the cache isn't working, then I don't know what the issue is :)
But if you're building as part of CI – then the issue may be that you need to pull, build, and push the intermediate stage explicitly:
> docker pull MyImageName:build || true
> docker pull MyImageName:latest || true
> docker build --target build --tag MyImageName:build .
> docker build --cache-from MyImageName:build --tag MyImageName:latest .
> docker push MyImageName:build
> docker push MyImageName:latest
The || true part is there because the images won't be there on the initial CI build. The "magic sauce" of this recipe is docker build --target <intermediate-stage-name> and docker build --cache-from <intermediate-stage-name>.
I can't explain why building and pushing the intermediate stage explicitly is needed to get the cache to work, other than some handwaving that only the final image gets pushed, not the intermediate stage and its layers. But it worked for me; I learned this "trick" from here: https://pythonspeed.com/articles/faster-multi-stage-builds/
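A related BuildKit option is inline caching, which embeds cache metadata in the pushed image so that --cache-from can reuse its layers on a fresh CI runner. A sketch, with a hypothetical image name:

docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t myimage:latest .
docker push myimage:latest
# on the next CI run, even on a clean machine:
docker build --cache-from myimage:latest -t myimage:latest .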