This is my current setup for my project and I was wondering if there was a more elegant way. The current setup is as follows.
Directory structure
<root>
- Dockerfile_base # base image for the other two
- Dockerfile_dev # development image
- Dockerfile_prod # production image
- Makefile
The Dockerfiles:
# Dockerfile_base
FROM tensorflow/tensorflow:2.4.1-gpu
RUN pip install ...
# Dockerfile_dev
FROM eu.gcr.io/cool_project/cool_program_base:latest
RUN pip install <dev branch of this repo>
# Dockerfile_prod
FROM eu.gcr.io/cool_project/cool_program_base:latest
RUN pip install <master branch of this repo>
Makefile
deploybase:
	docker build -f Dockerfile_base -t cool_program_base:latest .
	docker tag cool_program_base:latest eu.gcr.io/cool_project/cool_program_base
	docker push eu.gcr.io/cool_project/cool_program_base
deploydev:
	docker build -f Dockerfile_dev -t cool_program_dev:latest .
	docker tag cool_program_dev:latest eu.gcr.io/cool_project/cool_program_dev
	docker push eu.gcr.io/cool_project/cool_program_dev
deployprod:
	docker build -f Dockerfile_prod -t cool_program_prod:latest .
	docker tag cool_program_prod:latest eu.gcr.io/cool_project/cool_program_prod
	docker push eu.gcr.io/cool_project/cool_program_prod
Q1: Is there a way to combine the three Dockerfiles into a single one? I know that there are multistage builds but I could not find how to make this work.
Q2: If it is possible, can the Makefile also be written more compactly?
For the docker images, you could use build args, that is, a single parameterized Dockerfile. Note that an ARG must be declared after FROM for it to be visible in later RUN instructions (an ARG before FROM is only in scope for the FROM line itself):
FROM eu.gcr.io/cool_project/cool_program_base:latest
ARG BRANCH=dev-branch
RUN pip install $BRANCH
Then:
docker build -f Dockerfile --build-arg BRANCH=master -t cool_program_prod:latest .
or
docker build -f Dockerfile --build-arg BRANCH=dev -t cool_program_dev:latest .
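Alternatively, Q1 can be solved with a single multi-stage Dockerfile in which each image is a named stage selected at build time via --target. A sketch that simply reuses the contents of the three original files:

```dockerfile
# single Dockerfile replacing Dockerfile_base/_dev/_prod
FROM tensorflow/tensorflow:2.4.1-gpu AS base
RUN pip install ...

FROM base AS dev
RUN pip install <dev branch of this repo>

FROM base AS prod
RUN pip install <master branch of this repo>
```

Then docker build --target dev -t cool_program_dev:latest . (or --target prod) builds only the stages the selected target needs, and the base layers are shared between dev and prod.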
You actually do not need to push the "cool_program_base" image, because its layers are already included in both the dev and prod images.
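For Q2, with a single parameterized Dockerfile the Makefile can collapse into one pattern rule. A sketch assuming GNU Make (the BRANCH_* variable names are illustrative):

```makefile
REGISTRY    := eu.gcr.io/cool_project
BRANCH_dev  := dev
BRANCH_prod := master

# "make deploydev" and "make deployprod" both expand this rule;
# $* is the part matched by %, i.e. "dev" or "prod"
deploy%:
	docker build -f Dockerfile --build-arg BRANCH=$(BRANCH_$*) -t cool_program_$*:latest .
	docker tag cool_program_$*:latest $(REGISTRY)/cool_program_$*
	docker push $(REGISTRY)/cool_program_$*
```

deploybase would keep its own explicit target, since the base image takes no BRANCH argument (and an explicit target takes precedence over the pattern rule).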
Related
I have multiple projects/folders inside a single directory called root. There is a common Dockerfile that builds all projects/folders. I build each project by passing a different build context to the docker build command, as below:
$ docker build -t project1:1.0.0 -f Dockerfile root/project1
$ docker build -t project2:1.0.0 -f Dockerfile root/project2
$ docker build -t project3:1.0.0 -f Dockerfile root/project3
Now, I need to add some conditions based on the docker build context in the Dockerfile. Can that be done? I didn't find a way to get the docker build context.
I think you can pass a name (the project or environment) to the Dockerfile as a build argument and branch on it, for instance:
ARG ENV
RUN if [ "$ENV" = "production" ] ; then yarn client:build:prod ; else yarn client:build ; fi
finally:
docker build -t node-image . --build-arg ENV=production
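Applied to the multi-project question above, the same mechanism can carry the project name into the shared Dockerfile. A sketch in which the PROJECT argument and the echo commands are illustrative placeholders:

```dockerfile
FROM node:alpine
ARG PROJECT=project1
# branch on the project name supplied at build time
RUN if [ "$PROJECT" = "project1" ]; then echo "building project1"; else echo "building $PROJECT"; fi
```

invoked per project with, e.g., docker build -t project1:1.0.0 --build-arg PROJECT=project1 root/project1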
I've added a simple repo that recreates this issue: https://github.com/cgreening/docker-cache-problem
We have a very simple Dockerfile.
To speed up our builds we're using the --cache-from directive, using a previous build as the cache.
We're seeing some weird behaviour: if the files have not changed, the lines after the COPY line are not being run.
RUN yarn && yarn build
does not seem to get executed, so when the application tries to start, node_modules is missing.
FROM node
RUN mkdir /app
COPY . /app
WORKDIR /app
RUN yarn && yarn build
ENTRYPOINT ["yarn", "start"]
We're deploying to Kubernetes but I can pull the image locally and see that the files are missing:
# docker run -it --entrypoint /bin/bash gcr.io/XXXXX
root@3561a9cdab6e:/app# ls
DEVELOPING.md Dockerfile Makefile README.md admin-tools app.dev.yaml jest.config.js package.json src tailwind.config.js tools tsconfig.json tslint.json yarn.lock
root@3561a9cdab6e:/app#
Edit:
I've managed to recreate the problem outside of our build system.
Initial build:
DOCKER_BUILDKIT=1 docker build -t gcr.io/XXX/test:a . --build-arg BUILDKIT_INLINE_CACHE=1
docker push gcr.io/XXX/test:a
All works - node_modules and build folder are there:
Clean up docker as if we were starting from scratch, like on the build system:
docker system prune -a
Do another build:
DOCKER_BUILDKIT=1 docker build -t gcr.io/XXX/test:b . --cache-from gcr.io/XXX/test:a --build-arg BUILDKIT_INLINE_CACHE=1
docker push gcr.io/XXX/test:b
Everything is still fine.
Clean up docker as if we were starting from scratch, like on the build system:
docker system prune -a
Do a third build:
DOCKER_BUILDKIT=1 docker build -t gcr.io/XXX/test:c . --cache-from gcr.io/XXX/test:b --build-arg BUILDKIT_INLINE_CACHE=1
Files are missing!
docker run -it --entrypoint /bin/bash gcr.io/topo-wme-dev-d725ec6e/test:c
root@d07f6f1d3b12:/app# ls
DEVELOPING.md Dockerfile Makefile README.md admin-tools app.dev.yaml coverage jest.config.js package.json src tailwind.config.js tools tsconfig.json tslint.json yarn.lock
No node_modules or build folder.
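One mitigation often suggested for this kind of cache problem: BUILDKIT_INLINE_CACHE embeds only min-mode cache metadata in the image, whereas a separate registry cache written with mode=max records all intermediate layers. A sketch using buildx, where the :buildcache ref is an illustrative name:

```shell
# push full (mode=max) cache metadata to a dedicated ref instead of relying on inline cache
docker buildx build \
  --cache-to type=registry,ref=gcr.io/XXX/test:buildcache,mode=max \
  --cache-from gcr.io/XXX/test:buildcache \
  -t gcr.io/XXX/test:c --push .
```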
I build the following image with docker build -t mylambda .
I now try to export lambdatest.zip to my local machine while building, so that I end up with the .zip file on my Desktop. So far I have used docker cp <Container ID>:/var/task/lambdatest.zip ~/Desktop, but that doesn't work from inside my Dockerfile. Do you have any ideas?
FROM lambci/lambda:build-python3.7
COPY lambda_function.py .
RUN python3 -m venv venv
RUN . venv/bin/activate
# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *
Dockerfile (updated):
FROM lambci/lambda:build-python3.7
RUN python3 -m venv venv
RUN . venv/bin/activate
RUN pip install --upgrade pip
RUN pip install pystan==2.18
RUN pip install fbprophet
WORKDIR /var/task/venv/lib/python3.7/site-packages
COPY lambda_function.py .
COPY .lambdaignore .
RUN echo "Package size: $(du -sh | cut -f1)"
RUN zip -9qr lambdatest.zip *
RUN cat .lambdaignore | xargs zip -9qr /var/task/lambdatest.zip * -x
The typical answer is you do not. A Dockerfile does not have access to write files out to the host, by design, just as it does not have access to read arbitrary files from outside of the build context. There are various reasons for that, including security (you don't want an image build dropping a backdoor on a build host in the cloud) and reproducibility (images should not have dependencies outside of their context).
As a result, you need to take an extra step to extract the contents of an image back to the host. Typically this involves creating a container and running a docker cp command, along the lines of the following:
docker build -t your_image .
docker create --name extract your_image
docker cp extract:/path/to/files /path/on/host
docker rm extract
Or it can involve I/O pipes, where you run a tar command inside the container to package the files, and pipe that to a tar command running on the host to save the files.
docker build -t your_image .
docker run --rm your_image tar -cC /path/in/container . | tar -xC /path/on/host
Recently, Docker has been working on buildx which is currently experimental. Using that, you can create a stage that consists of the files you want to export to the host and use the --output option to write that stage to the host rather than to an image. Your Dockerfile would then look like:
FROM lambci/lambda:build-python3.7 as build
COPY lambda_function.py .
RUN python3 -m venv venv
RUN . venv/bin/activate
# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *
FROM scratch as artifact
COPY --from=build /var/task/lambdatest.zip /lambdatest.zip
FROM build as release
And then the build command to extract the zip file would look like:
docker buildx build --target=artifact --output type=local,dest=$(pwd)/out/ .
I believe buildx is still marked as experimental in the latest release, so to enable that, you need at least the following json entry in $HOME/.docker/config.json:
{ "experimental": "enabled" }
And then for all the buildx features, you will want to create a non-default builder with docker buildx create.
With recent versions of the docker CLI, integration with BuildKit has exposed more options, and it is no longer necessary to run buildx to get access to the output flag. That means the above changes to:
docker build --target=artifact --output type=local,dest=$(pwd)/out/ .
If buildkit hasn't been enabled on your version (should be on by default in 20.10), you can enable it in your shell with:
export DOCKER_BUILDKIT=1
or for the entire host, you can make it the default with the following in /etc/docker/daemon.json:
{
"features": {"buildkit": true }
}
And to use the daemon.json the docker engine needs to be reloaded:
systemctl reload docker
Since Docker 18.09, docker build natively supports a custom backend called BuildKit:
DOCKER_BUILDKIT=1 docker build -o target/folder .
This allows you to copy your latest stage to target/folder. If you want only specific files and not an entire filesystem, you can add a stage to your build:
FROM XXX as builder-stage
# Your existing dockerfile stages
FROM scratch
COPY --from=builder-stage /file/to/export /
Note: You will need your docker client and engine to be compatible with Docker Engine API 1.40+, otherwise docker will not understand the -o flag.
Reference: https://docs.docker.com/engine/reference/commandline/build/#custom-build-outputs
I have a script used in the preapration of a Docker image. I have this in the Dockerfile:
COPY my_script /
RUN bash -c "/my_script"
The my_script file contains secrets that I don't want in the image (it deletes itself when it finishes).
The problem is that the file remains in the image despite being deleted because the COPY is a separate layer. What I need is for both COPY and RUN to affect the same layer.
How can I COPY and RUN a script so that both actions affect the same layer?
Take a look at multi-stage builds:
Use multi-stage builds
With multi-stage builds, you use multiple FROM statements in your
Dockerfile. Each FROM instruction can use a different base, and each
of them begins a new stage of the build. You can selectively copy
artifacts from one stage to another, leaving behind everything you
don’t want in the final image. To show how this works, let’s adapt the
Dockerfile from the previous section to use multi-stage builds.
Dockerfile:
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
As of 18.09 you can use docker build --secret to use secret information during the build process. The secrets are mounted into the build environment and aren't stored in the final image.
RUN --mount=type=secret,id=script,dst=/my_script \
bash -c /my_script
$ DOCKER_BUILDKIT=1 docker build --secret id=script,src=my_script.sh .
The script wouldn't need to delete itself.
This can be handled by BuildKit:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=bind,target=/my_script,source=my_script,rw \
bash -c "/my_script"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image .
This also sounds like you are trying to inject secrets into the build, e.g. to pull from a private git repo. BuildKit also allows you to specify:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=secret,target=/creds,id=cred \
bash -c "/my_script -i /creds"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image --secret id=creds,src=./creds .
With both of the BuildKit options, the mount command never actually adds the file to your image. It only makes the file available as a bind mount during that single RUN step. As long as that RUN step does not output the secret to another file in your image, the secret is never injected in the image.
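As a quick sanity check that the secret never reached a layer, you can inspect the final image. A sketch, with my_image being the illustrative name from above:

```shell
# the mount target should not exist in the built image
docker run --rm my_image ls /creds
# nor should any layer mention it
docker history --no-trunc my_image
```

The ls is expected to fail, since the mount only existed during that one RUN step.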
For more on the BuildKit experimental syntax, see: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
I guess you can use a workaround to do this:
Serve my_script from a local HTTP server, for example with python -m SimpleHTTPServer; the file can then be fetched from http://http_server_ip:8000/my_script.
Then, in the Dockerfile:
RUN curl http://http_server_ip:8000/my_script > /my_script && chmod +x /my_script && bash -c "/my_script"
This workaround ensures the file is added and deleted in the same layer; of course, you may need to install curl in the Dockerfile.
I think RUN --mount=type=bind,source=my_script,target=/my_script bash /my_script in BuildKit can solve your problem.
First, prepare BuildKit
export DOCKER_CLI_EXPERIMENTAL=enabled
export DOCKER_BUILDKIT=1
docker buildx create --name mybuilder --driver docker-container
docker buildx use mybuilder
Then, write your Dockerfile.
# syntax = docker/dockerfile:experimental
FROM debian
## something
RUN --mount=type=bind,source=my_script,target=/my_script bash -c /my_script
The first line must be # syntax = docker/dockerfile:experimental because it is an experimental feature.
This method does not work in Play with Docker, but it works on my computer,
which runs Ubuntu 20.04 with Docker 19.03.12.
Then, build it with
docker buildx build --platform linux/amd64 -t user/imgname -f ./Dockerfile . --push
I have made an Ubuntu 14.04 image with a Dockerfile.
I am running the command
$ sudo docker build -t mypostgres .
but I am still confused about how the Dockerfile gets built.
How do I build it?
sudo docker build -t mypostgres . means:
process the file named 'Dockerfile' (default name)
located in the current folder (that is the final .)
and build as a result the image named mypostgres
So if you have a Dockerfile starting with FROM postgres, you can execute your command and have your own postgres image in no time.
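For instance, a minimal Dockerfile along those lines might look like this (the password and the init script are illustrative placeholders; POSTGRES_PASSWORD and /docker-entrypoint-initdb.d/ are conventions of the official postgres image):

```dockerfile
FROM postgres
# required by the official image unless trust auth is configured
ENV POSTGRES_PASSWORD=example
# any .sql/.sh files here run on first container start
COPY init.sql /docker-entrypoint-initdb.d/
```

and it is built with sudo docker build -t mypostgres . exactly as in your command.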
A Dockerfile is not as complex as it looks. Here's a good starter article that could help you build your first Dockerfile easily: http://rominirani.com/2015/08/02/docker-tutorial-series-writing-a-dockerfile/
You may want to read the doc of Dockerfile best practice by Docker, better than any article IMHO.
You can build a Docker image directly from a git repository or from a directory.
To build from a Dockerfile, first create a file named Dockerfile (no extension) inside your project, then write the commands needed to build the image. For example:
FROM node:alpine
WORKDIR /app
COPY package.json ./
RUN npm install
COPY ./ ./
CMD ["npm", "start"]
-> Build from git:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments
Here, fecomments is the branch name and comments is the folder name.
-> Build from git with a tag and version:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments -t lordash/comments:v1.0
-> To build from a directory: first go to the comments directory, then run sudo docker build .
-> To add a tag, use the -t or --tag flag:
sudo docker build -t lordash . or sudo docker build -t lordash/comments .
-> You can version your image with the help of a tag:
sudo docker build -t lordash/comments:v1.0 .
-> You can also apply multiple tags to an image:
sudo docker build -t lordash/comments:latest -t lordash/comments:v1.0 .