I am trying to build a React app in Docker. Here is my Dockerfile:
FROM node as build-step
LABEL stage=build-step
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
RUN npm run build
FROM nginx
COPY --from=build-step /app/build /usr/share/nginx/html
Using this command:
docker build . --rm -t react-server-manual:0.1
This works, but it also creates a few other useless images. How do I delete them?
What am I missing?
Unfortunately, --rm doesn't remove those intermediate images; it only removes the intermediate containers created during the build.
You can run:
docker build . -t react-server-manual:0.1 && \
docker image prune -f --filter label=stage=build-step
(or run the prune as a separate command afterwards)
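To check which images the filter will match before pruning, you can list them by the same label first (an optional sanity check):
docker image ls --filter label=stage=build-step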
Hi, I have a Cargo workspace with multiple projects in it. Now I want to dockerize it, but I want every project in a separate image. Is there a way to build the entire workspace once and create multiple images from that build?
Right now I'm using a separate Dockerfile to build every project. This is the Dockerfile for groups-service:
FROM rust:slim AS builder
RUN rustup target add x86_64-unknown-linux-musl
RUN apt update && apt install -y musl-tools musl-dev
RUN update-ca-certificates
WORKDIR /usr/src/app
# copy entire workspace
COPY . .
RUN cargo build --target x86_64-unknown-linux-musl --release
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-service ./
CMD [ "./groups-service" ]
And I have a Dockerfile like this for every service.
But I want a single Dockerfile that produces multiple images with different names, like service/groups, service/report, and service/graph, so I can run docker build . once and have all the services built.
Building each one separately takes a lot of time right now, and I want to reduce and simplify the work.
Simply build all your images from the same Dockerfile, e.g.:
FROM rust:slim AS builder
RUN rustup target add x86_64-unknown-linux-musl
RUN apt update && apt install -y musl-tools musl-dev
RUN update-ca-certificates
WORKDIR /usr/src/app
# copy entire workspace
COPY . .
RUN cargo build --target x86_64-unknown-linux-musl --release
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-service ./
CMD [ "./groups-service" ]
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-report ./
CMD [ "./groups-report" ]
Then running docker build will build all the target images at once, reusing the same builder stage. You can then set the image names with docker tag. To make it easier to identify the built images, you can add a LABEL to each one:
...
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-service ./
CMD [ "./groups-service" ]
LABEL service=groups
FROM alpine
COPY --from=builder /usr/src/app/target/x86_64-unknown-linux-musl/release/groups-report ./
CMD [ "./groups-report" ]
LABEL service=report
Then use docker image inspect --format='{{.Config.Labels}}' or docker image ls --filter=label=<key>=<value> to identify the images. E.g.:
docker tag $(docker image ls -q --filter=dangling=true --filter=label=service=groups) service/groups
docker tag $(docker image ls -q --filter=dangling=true --filter=label=service=report) service/report
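If you have more services, the same tagging step can be wrapped in a small shell loop (a sketch using the labels defined above):
for svc in groups report; do
  docker tag "$(docker image ls -q --filter=dangling=true --filter=label=service=$svc)" "service/$svc"
done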
I have the Dockerfile below:
FROM node:16.7.0
ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}
WORKDIR /app
# Copy the package.json to /app
COPY ["package.json", "./"]
# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .
# Install dependencies (if any) in package.json
RUN npm install
CMD ["sh", "-c", "tail -f /dev/null"]
After building the Docker image, if I run it with the command below, I still cannot see the updated files.
docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>
I would like to see updated_sum.js and updated_sum.test.js in my container, however, I still see sum.js and sum.test.js.
Is it possible to achieve this?
This is my current folder/file structure:
.
├── Dockerfile
├── package.json
├── sum.js
├── sum.test.js
├── Test
│   └── updated_sum.test.js
└── Scripts
    └── updated_sum.js
Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.
So if you run
docker build -t sum .
The sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then
docker run --rm sum \
  ls
docker run --rm sum \
  node ./sum.js
to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.) You can run the command with different environment variables, but it won't change the files in the image:
docker run --rm -e JS_FILE=missing.js sum ls
# still only has sum.js
docker run --rm -e JS_FILE=missing.js sum node missing.js
# not found
Instead, you need to rebuild the image, using docker build --build-arg options to provide the values:
docker build \
  --build-arg JS_FILE=./product.js \
  --build-arg JS_TEST_FILE=./product.test.js \
  -t product \
  .
docker run --rm product node ./product.js
The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:
# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js
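Build and run it with something like (using the file name from the comment above):
docker build -t sum -f Dockerfile.sum .
docker run --rm sum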
Another option is to COPY the entire source tree into the image (Javascript files are pretty small compared to a complete Node installation) and use a docker run command to pick which script to run.
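A minimal sketch of that option, assuming the folder layout from the question (the image name everything is arbitrary):
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# default script; override it at run time to pick another one
CMD ["node", "./sum.js"]
Then choose the script when you start the container:
docker build -t everything .
docker run --rm everything node ./Scripts/updated_sum.js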
I've added a simple repo that recreates this issue: https://github.com/cgreening/docker-cache-problem
We have a very simple Dockerfile.
To speed up our builds we're using the --cache-from directive and using a previous build as a cache.
We're seeing some weird behaviour where, if the files have not changed, the lines after the COPY line are not being run.
RUN yarn && yarn build
does not seem to get executed, so when the application tries to start, node_modules is missing.
FROM node
RUN mkdir /app
COPY . /app
WORKDIR /app
RUN yarn && yarn build
ENTRYPOINT ["yarn", "start"]
We're deploying to Kubernetes but I can pull the image locally and see that the files are missing:
# docker run -it --entrypoint /bin/bash gcr.io/XXXXX
root@3561a9cdab6e:/app# ls
DEVELOPING.md Dockerfile Makefile README.md admin-tools app.dev.yaml jest.config.js package.json src tailwind.config.js tools tsconfig.json tslint.json yarn.lock
root@3561a9cdab6e:/app#
Edit:
I've managed to recreate the problem outside of our build system.
Initial build:
DOCKER_BUILDKIT=1 docker build -t gcr.io/XXX/test:a . --build-arg BUILDKIT_INLINE_CACHE=1
docker push gcr.io/XXX/test:a
All works; node_modules and the build folder are there.
Clean up Docker as if we were starting from scratch, like on the build system:
docker system prune -a
Do another build:
DOCKER_BUILDKIT=1 docker build -t gcr.io/XXX/test:b . --cache-from gcr.io/XXX/test:a --build-arg BUILDKIT_INLINE_CACHE=1
docker push gcr.io/XXX/test:b
Everything is still fine.
Clean up Docker as if we were starting from scratch, like on the build system:
docker system prune -a
Do a third build:
DOCKER_BUILDKIT=1 docker build -t gcr.io/XXX/test:c . --cache-from gcr.io/XXX/test:b --build-arg BUILDKIT_INLINE_CACHE=1
Files are missing!
docker run -it --entrypoint /bin/bash gcr.io/XXX/test:c
root@d07f6f1d3b12:/app# ls
DEVELOPING.md Dockerfile Makefile README.md admin-tools app.dev.yaml coverage jest.config.js package.json src tailwind.config.js tools tsconfig.json tslint.json yarn.lock
No node_modules or build folder.
I have a custom Dockerfile that sets up and builds a project of mine.
But I haven't been able to get the build output placed into a folder on the host. Here are the command and Dockerfile...
Command
sudo docker build --output type=local,dest=./build/server/server -f ./build/scripts/Dockerfile.server ./server
Dockerfile
FROM node:14 AS build-stage
WORKDIR /usr/src/project
RUN npm i nexe@3.3.7 -g
COPY package*.json ./
RUN npm install --only=production
COPY . .
RUN nexe server.js -t linux-x64-12.14.1
FROM scratch AS export-stage
COPY --from=build-stage /usr/src/project/server /
Docker BuildKit needs to be enabled before running the build command:
export DOCKER_BUILDKIT=1
You are setting up the context wrong. The command should be:
sudo docker build --output type=local,dest=./build/server/server -f ./build/scripts/Dockerfile.server .
The dot at the end sets the build context to the current directory.
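Putting the two fixes together, the full invocation would look something like this (a sketch, keeping the paths from the question):
export DOCKER_BUILDKIT=1
sudo docker build \
  --output type=local,dest=./build/server/server \
  -f ./build/scripts/Dockerfile.server \
  .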
I'm having some weird issues with my custom Dockerfile, compiling a .NET Core app in Alpine containers.
I've tried numerous different configurations to no avail: the cache is ALWAYS invalidated when I add the final FROM instruction (if I comment it and everything below it out, caching works fine). Here's the file:
FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
ARG ASPNETCORE_ENVIRONMENT=development
ARG ASPNET_CONFIGURATION=Debug
ARG PROJECT_DIR=src/API/
ARG PROJECT_NAME=MyAPI
ARG SOLUTION_NAME=MySolution
RUN export
WORKDIR /source
COPY ./*.sln ./nuget.config ./
# Copy source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
# # Copy test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done
RUN dotnet restore
COPY . ./
RUN for dir in test/*.Tests/; do (cd "$dir" && dotnet test --filter TestType!=Integration); done
WORKDIR /source/${PROJECT_DIR}
RUN dotnet build ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app
RUN dotnet publish ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app --no-restore
FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine3.7
ARG ASPNETCORE_ENVIRONMENT=development
RUN export
COPY --from=build /app .
WORKDIR /app
EXPOSE 80
VOLUME /app/logs
ENTRYPOINT ["dotnet", "MyAssembly.dll"]
Any ideas? Hints? Tips? Blazingly obvious mistakes? I've checked each layer, and the COPY . ./ instruction ONLY copies the files I expect it to, and none of them change between builds.
It's also worth noting that if I remove the last FROM instruction (and the other relevant lines) the cache works perfectly, but the final image is then obviously considerably bigger (1.8 GB vs 172 MB) than one based on microsoft/dotnet:2.1-aspnetcore-runtime-alpine3.7. I have tried just commenting out the COPY instruction after the FROM, but it doesn't affect the cache invalidation. The following works as expected:
FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
ARG ASPNETCORE_ENVIRONMENT=development
ARG ASPNET_CONFIGURATION=Debug
ARG PROJECT_DIR=src/API/
ARG PROJECT_NAME=MyAPI
ARG SOLUTION_NAME=MySolution
RUN export
WORKDIR /source
COPY ./*.sln ./nuget.config ./
# Copy source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
# # Copy test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done
RUN dotnet restore
COPY . ./
RUN for dir in test/*.Tests/; do (cd "$dir" && dotnet test --filter TestType!=Integration); done
WORKDIR /source/${PROJECT_DIR}
RUN dotnet build ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app
RUN dotnet publish ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app --no-restore
WORKDIR /app
EXPOSE 80
VOLUME /app/logs
ENTRYPOINT ["dotnet", "MyAssembly.dll"]
.dockerignore below:
base-images/
docker-compose.yml
docker-compose.*.yml
VERSION
**/.*
**/*.ps1
**/*.DotSettings
**/*.csproj.user
**/*.md
**/*.log
**/*.sh
**/Dockerfile
**/bin
**/obj
**/node_modules
**/.vs
**/.vscode
**/dist
**/packages/
**/wwwroot/
Last bit of info: I'm building the containers using docker-compose, specifically by running docker-compose build myservicename, but building the image with docker build -f src/MyAssembly/Dockerfile -t MyImageName . yields the same results.
If you're building locally and the cache isn't working, then I don't know what the issue is :)
But if you're building as part of CI, then the issue may be that you need to pull, build, and push the intermediate stage explicitly:
> docker pull MyImageName:build || true
> docker pull MyImageName:latest || true
> docker build --target build --tag MyImageName:build .
> docker build --cache-from MyImageName:build --tag MyImageName:latest .
> docker push MyImageName:build
> docker push MyImageName:latest
The || true part is there because the images won't exist yet on the initial CI build. The "magic sauce" of this recipe is docker build --target <intermediate-stage-name> and docker build --cache-from <intermediate-stage-name>.
I can't explain why building and pushing the intermediate stage explicitly is needed to get the cache to work, other than some handwaving that only the final image gets pushed, not the intermediate stage and its layers. But it worked for me; I learned this "trick" from here: https://pythonspeed.com/articles/faster-multi-stage-builds/
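Applied to the Dockerfile in the question, where the intermediate stage is named build, the same sequence would look roughly like this (the registry and image names are placeholders):
docker pull myregistry/myapi:build || true
docker pull myregistry/myapi:latest || true
docker build --target build --tag myregistry/myapi:build -f src/MyAssembly/Dockerfile .
docker build --cache-from myregistry/myapi:build --tag myregistry/myapi:latest -f src/MyAssembly/Dockerfile .
docker push myregistry/myapi:build
docker push myregistry/myapi:latest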