Docker build not using cache when copying Gemfile while using --cache-from - docker

On my local machine, I have built the latest image, and running another docker build uses cache everywhere it should.
Then I push the image to the registry as latest, and on my CI server I pull that latest image of my app in order to use it as the build cache for the new version:
docker pull $CONTAINER_IMAGE:latest
docker build --cache-from $CONTAINER_IMAGE:latest \
--tag $CONTAINER_IMAGE:$CI_COMMIT_SHORT_SHA \
.
From the build output we can see that the COPY of the Gemfile is not using the cache from the latest image, even though I haven't updated that file:
Step 15/22 : RUN gem install bundler -v 1.17.3 && ln -s /usr/local/lib/ruby/gems/2.2.0/gems/bundler-1.16.0 /usr/local/lib/ruby/gems/2.2.0/gems/bundler-1.16.1
---> Using cache
---> 47a9ad7747c6
Step 16/22 : ENV BUNDLE_GEMFILE=$APP_HOME/Gemfile BUNDLE_JOBS=8
---> Using cache
---> 1124ad337b98
Step 17/22 : WORKDIR $APP_HOME
---> Using cache
---> 9cd742111641
Step 18/22 : COPY Gemfile $APP_HOME/
---> f7ff0ee82ba2
Step 19/22 : COPY Gemfile.lock $APP_HOME/
---> c963b4c4617f
Step 20/22 : RUN bundle install
---> Running in 3d2cdf999972
Aside note: it is working perfectly on my local machine.
The Docker documentation on leveraging the build cache doesn't seem to explain the behaviour here: neither the Dockerfile nor the Gemfile has changed, so the cache should be used.
What could prevent Docker from using the cache for the Gemfile?
Update
I tried to copy the files setting the right permissions using COPY --chown=user:group source dest but it still doesn't use the cache.
Opened Docker forum topic: https://forums.docker.com/t/docker-build-not-using-cache-when-copying-gemfile-while-using-cache-from/69186

Let me share with you some information that helped me to fix some issues with Docker build and --cache-from, while optimizing a CI build.
I had struggled for several days because I didn't have the correct understanding; I was relying on incorrect explanations found on the web.
So I'm sharing this here hoping it will be useful to you.
When providing multiple --cache-from, the order matters
The order is very important because, at the first match, Docker stops looking at the other images and uses that one for all the remaining commands.
This is explained by the person who implemented the feature in the Github PR:
When using multiple --cache-from they are checked for a cache hit in the order that user specified. If one of the images produces a cache hit for a command only that image is used for the rest of the build.
There is also a lengthier explanation in the initial ticket proposal:
Specifying multiple --cache-from images is bit problematic. If both images match there is no way(without doing multiple passes) to figure out what image to use. So we pick the first one(let user control the priority) but that may not be the longest chain we could have matched in the end. If we allow matching against one image for some commands and later switch to a different image that had a longer chain we risk in leaking some information between images as we only validate history and layers for cache. Currently I left it so that if we get a match we only use this target image for rest of the commands.
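In CI terms, that means listing the most specific image first. A sketch, reusing the GitLab-style variables from the question (the branch tag here is hypothetical):
docker pull $CONTAINER_IMAGE:$CI_COMMIT_REF_SLUG || true
docker pull $CONTAINER_IMAGE:latest || true
docker build \
--cache-from $CONTAINER_IMAGE:$CI_COMMIT_REF_SLUG \
--cache-from $CONTAINER_IMAGE:latest \
--tag $CONTAINER_IMAGE:$CI_COMMIT_SHORT_SHA \
.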
Using --cache-from is exclusive: the local Docker cache won't be used
This means that it doesn't add new caching sources: the image tags you provide will be the only caching source for the Docker build.
Even if you just built the same image locally, the next time you run docker build for it, in order to benefit from the cache, you need to either:
provide the correct tag(s) with --cache-from (and with the correct precedence); or
not use --cache-from at all (so that it will use the local build cache)
If the parent image changes, the cache will be invalidated
For example, if you have an image based on docker:stable, and docker:stable gets updated, the cached builds of your image will not be valid anymore as the layers of the base image were changed.
This is why, if you're configuring a CI build, it can be useful to docker pull the base image as well and include it in the --cache-from, as mentioned in this comment in yet another Github discussion.
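A sketch of what that can look like in a CI script, assuming the app's Dockerfile starts with FROM docker:stable as in the example above:
docker pull docker:stable
docker pull $CONTAINER_IMAGE:latest || true
docker build \
--cache-from $CONTAINER_IMAGE:latest \
--cache-from docker:stable \
--tag $CONTAINER_IMAGE:$CI_COMMIT_SHORT_SHA \
.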

I struggled with this problem, and in my case the checksum of the files being COPY'd could change (but only technically; the content was functionally identical). So I worked around it this way:
Dockerfile:
ARG builder_image=base-builder
# Compilation/build stage
FROM golang:1.16 AS base-builder
RUN echo "build the app" > /go/app
# This step is required to take advantage of the Docker cache. By defining a `builder_image` build arg
# we can essentially skip the build stage and use a prebuilt image directly.
FROM $builder_image AS builder
# myapp docker image
FROM ubuntu:20.04 AS myapp
COPY --from=builder /go/app /opt/my-app/bin/
Then, I can run the following:
# build cache
DOCKER_BUILDKIT=1 docker build --target base-builder -t myapp-builder .
docker push myapp-builder
# use cache
DOCKER_BUILDKIT=1 docker build --target myapp --build-arg=builder_image=myapp-builder -t myapp .
docker push myapp
This way we can force Docker to use a prebuilt image as a cache.

For whoever is fighting with Docker Hub automated builds and --cache-from: I realized that images built by Docker Hub would always lead to a cache bust on COPY commands when pulled and used as a build cache source. It also seems to be the case for Marcelo (see his comment).
I investigated by creating a very simple image with a couple of RUN commands and a later COPY. Everything uses the cache except the COPY, even though the content and permissions of the file being copied are the same on both the pulled image and the one built locally (verified via sha1sum and ls -l).
The solution for me was to publish the image to the registry from the CI (Travis in my case) rather than letting the Docker Hub automated build do it. Let me emphasize that I'm talking about a specific case where the files are definitely the same and should not bust the cache, yet Docker Hub automated builds are being used.
I'm not sure why that is, but I know that, for instance, old docker-engine versions (prior to 1.8.0) didn't ignore file timestamps when deciding whether to use the cache; see https://docs.docker.com/release-notes/docker-engine/#180-2015-08-11 and https://github.com/moby/moby/pull/12031.

For a COPY command to be cached, the checksum of the source being copied needs to be identical. You can compare the checksums in the docker history output between the cache image and the one you just built. Importantly, the checksum includes metadata like the file owner and file permissions, in addition to the file contents. Whitespace changes inside a file, like switching line endings between Linux and Windows styles, will also affect it. If you download the code from a repo, it's likely that the metadata, like the owner, will differ from the cached value.
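For example, to see what Docker recorded for each COPY layer (the checksum shows up as file:<hash> in the CREATED BY column), you can run something like the following against both images (the tags are hypothetical):
docker history --no-trunc myapp:latest
docker history --no-trunc myapp:ci-build
If the file:<hash> value differs between the two images for the same COPY step, the contents or the metadata (owner, permissions, line endings) of the copied files are not identical.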

Related

docker-compose ignores ubuntu:latest in Dockerfile [duplicate]

I want to build a docker image for the Linkurious project on github, which requires both the Neo4j database, and Node.js to run.
My first approach was to declare a base image for my image, containing Neo4j. The reference docs do not define "base image" in any helpful manner:
Base image:
An image that has no parent is a base image
from which I read that I may only have a base image if that image has no base image itself.
But what is a base image? Does it mean, if I declare neo4j/neo4j in a FROM directive, that when my image is run the neo database will automatically run and be available within the container on port 7474?
Reading the Docker reference I see:
FROM can appear multiple times within a single Dockerfile in order to create multiple images. Simply make a note of the last image ID output by the commit before each new FROM command.
Do I want to create multiple images? It would seem what I want is to have a single image that contains the contents of other images e.g. neo4j and node.js.
I've found no directive to declare dependencies in the reference manual. Are there no dependencies like in RPM where in order to run my image the calling context must first install the images it needs?
I'm confused...
As of May 2017, multiple FROMs can be used in a single Dockerfile.
See "Builder pattern vs. Multi-stage builds in Docker" (by Alex Ellis) and PR 31257 by Tõnis Tiigi.
The general syntax involves adding FROM additional times within your Dockerfile - whichever is the last FROM statement is the final base image. To copy artifacts and outputs from intermediate images use COPY --from=<base_image_number>.
FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
The result would be two images, one for building, one with just the resulting app (much, much smaller)
REPOSITORY TAG IMAGE ID CREATED SIZE
multi latest bcbbf69a9b59 6 minutes ago 10.3MB
golang 1.7.3 ef15416724f6 4 months ago 672MB
what is a base image?
A set of files, plus EXPOSE'd ports, ENTRYPOINT and CMD.
You can add files and build a new image based on that base image, with a new Dockerfile starting with a FROM directive: the image mentioned after FROM is "the base image" for your new image.
does it mean that if I declare neo4j/neo4j in a FROM directive, that when my image is run the neo database will automatically run and be available within the container on port 7474?
Only if you don't overwrite CMD and ENTRYPOINT.
But the image in itself is enough: you would use a FROM neo4j/neo4j if you had to add files related to neo4j for your particular usage of neo4j.
Let me summarize my understanding of the question and the answer, hoping that it will be useful to others.
Question: Let’s say I have three images, apple, banana and orange. Can I have a Dockerfile that has FROM apple, FROM banana and FROM orange that will tell docker to magically merge all three applications into a single image (containing the three individual applications) which I could call smoothie?
Answer: No, you can't. If you do that, you will end up with four images, the three fruit images you pulled, plus the new image based on the last FROM image. If, for example, FROM orange was the last statement in the Dockerfile without anything added, the smoothie image would just be a clone of the orange image.
Why Are They Not Merged? I Really Want It
A typical docker image contains almost everything the application needs to run (leaving out the kernel), which usually means it's built from a base image for its chosen operating system and a particular version or distribution.
Merging images successfully without considering all possible distributions, file systems, libraries and applications, is not something Docker, understandably, wants to do. Instead, developers are expected to embrace the microservices paradigm, running multiple containers that talk to each other as needed.
What’s the Alternative?
One possible use case for image merging would be to mix and match Linux distributions with our desired applications, for example, Ubuntu and Node.js. This is not the solution:
FROM ubuntu
FROM node
If we don’t want to stick with the Linux distribution chosen by our application image, we can start with our chosen distribution and use the package manager to install the applications instead, e.g.
FROM ubuntu
RUN apt-get update &&\
apt-get install package1 &&\
apt-get install package2
But you probably knew that already. Oftentimes there isn't a snap or package available in the chosen distribution, or it's not the desired version, or it doesn't work well in a docker container out of the box, which was the motivation for wanting to use an image in the first place. I'm just confirming that, as far as I know, the only option is to do it the long way, if you really want to follow a monolithic approach.
In the case of Node.js for example, you might want to manually install the latest version, since apt provides an ancient one, and snap does not come with the Ubuntu image. For neo4j we might have to download the package and manually add it to the image, according to the documentation and the license.
One strategy, if size does not matter, is to start with the base image that would be hardest to install manually, and add the rest on top.
When To Use Multiple FROM Directives
There is also the option to use multiple FROM statements and manually copy stuff between build stages or into your final one. In other words, you can manually merge images, if you know what you're doing. As per the documentation:
Optionally a name can be given to a new build stage by adding AS name
to the FROM instruction. The name can be used in subsequent FROM and
COPY --from=<name> instructions to refer to the image built in this
stage.
Personally, I’d only be comfortable using this merge approach with my own images or by following documentation from the application vendor, but it’s there if you need it or you're just feeling lucky.
A better application of this approach though, would be when we actually do want to use a temporary container from a different image, for building or doing something and discard it after copying the desired output.
Example
I wanted a lean image with gpgv only, so based on this Unix & Linux answer, I installed the whole gpg with yum and then copied only the required binaries and libraries to the final image:
FROM docker.io/photon:latest AS builder
RUN yum install gnupg -y
FROM docker.io/photon:latest
COPY --from=builder /usr/bin/gpgv /usr/bin/
COPY --from=builder /usr/lib/libgcrypt.so.20 /usr/lib/libgpg-error.so.0 /usr/lib/
The rest of the Dockerfile continues as usual.
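To sanity-check an image trimmed down like this, a quick smoke test is usually enough (the tag is hypothetical):
docker build -t gpgv-only .
docker run --rm gpgv-only gpgv --version
If a shared library is missing from the allowlist, the second command fails immediately and tells you which one.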
The first answer is too complex, historic, and uninformative for my tastes.
It's actually rather simple. Docker provides a feature called multi-stage builds. The basic idea is to:
Free you from having to manually remove what you don't want, by forcing you to allowlist what you do want,
Free resources that would otherwise be taken up because of Docker's implementation.
Let's start with the first. Very often with something like Debian you'll see:
RUN apt-get update \
&& apt-get dist-upgrade \
&& apt-get install <whatever> \
&& apt-get clean
We can explain all of this in terms of the above. The commands above are chained together, so they represent a single change, with no intermediate images required. If it were written like this:
RUN apt-get update ;
RUN apt-get dist-upgrade;
RUN apt-get install <whatever>;
RUN apt-get clean;
It would result in three more temporary intermediate images. Even reduced to one image, there is a remaining problem: apt-get clean doesn't clean up every artifact used during the install. If a Debian maintainer's install scripts modify the system, that modification will also be present in the final image (see something like pepperflashplugin-nonfree for an example of that).
By using a multi-stage build you get all the benefits of a single chained action, but it requires you to manually allowlist and copy over the files that were introduced in the temporary image, using the COPY --from syntax documented here. Moreover, it's a great solution where there is no alternative (such as an apt-get clean), and where you would otherwise have lots of unneeded files in your final image.
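As a sketch of that allowlist approach (the package name and paths are hypothetical; which files you actually need to copy depends on what the package installs):
FROM debian:stable AS build
RUN apt-get update \
&& apt-get install -y --no-install-recommends some-package \
&& apt-get clean
FROM debian:stable
# allowlist: copy over only the files you actually need from the temporary stage
COPY --from=build /usr/bin/some-tool /usr/bin/
COPY --from=build /usr/lib/some-tool/ /usr/lib/some-tool/
Anything the install scripts left behind in the build stage simply never reaches the final image.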
See also
Multi-stage builds
COPY syntax
Here is probably one of the most fundamental use cases for multiple FROMs, a.k.a. multi-stage builds.
I want one Dockerfile, and by changing one word in it I get a different image depending on whether I want to run, develop, or publish the application!
Run - I just want to run the app
Dev - I want to edit the code and run the app
Publish - Run the app in production
Let's suppose we're working in the dotnet environment. Here's one single Dockerfile; without multi-stage builds, there would be multiple files (the builder pattern):
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["ConsoleApp1/ConsoleApp1.csproj", "ConsoleApp1/"]
RUN dotnet restore "ConsoleApp1/ConsoleApp1.csproj"
COPY . .
WORKDIR "/src/ConsoleApp1"
RUN dotnet build "ConsoleApp1.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "ConsoleApp1.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ConsoleApp1.dll"]
Want to run the app? Leave FROM base AS final as it currently is in the dockerfile above.
Want to dev the source code in the container? Change the same line to FROM build AS final
Want to release into prod? Change the same line to FROM publish AS final
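If you'd rather not edit the Dockerfile each time, a similar selection can be made from the command line with --target, which builds only up to the named stage (the stage names are the ones in the Dockerfile above; the image tags are hypothetical):
docker build --target build -t consoleapp1:dev .
docker build --target publish -t consoleapp1:publish .
docker build --target final -t consoleapp1:run .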
I agree with the OP that this feature is useful for docker! Here is a different view into the same problem:
If you had multiple FROMs (or a FROM and multiple hypothetical MERGEs, for example), you could use the docker registry versioning system for the base docker image AND the other container elements, and that is the win here. I have third-party development tools which do not exist in .deb format, so these tools must be installed by un-tarring a HUGE tarball; caching on the docker host is therefore important, but versioning/change control of the image is equally important. I (think I) can simply use "RUN git ....", and docker will deal with caching the new layer for me, which is what I want, because another container will have the same base image but a different set of HUGE third-party tools, so caching of the base image and of the tools image is really important (the third-party tools tar can be as big as the base image of, say, ubuntu, so caching of these is really important too). The (suggested) feature would just allow all these elements to be managed in a central repository versioning system.
Said a different way, why do we use FROM at all? If I were to simply git clone an ubuntu image using the RUN command for my "base image/layer", this would create a new layer and docker would cache it anyway... so is there any difference/advantage in using FROM, other than it uses docker's internal versioning system/syntax?

Docker Cache being cleared when I COPY /

I was trying to make my docker image build faster and noticed that the following step
# THIS STEP RUINS THE DOCKER CACHE, DON'T KNOW WHY
COPY / .
caused the cache to be invalidated. Is this happening because I'm making changes in the repo in which I'm running the docker build?
Is there a better way to copy only the files of interest, perhaps with a regex?
As stated in the comment below it seems that any change in the repo causes the docker cache to be cleared.
I suspect that
COPY / .
Rather than
COPY . .
is the reason the cache is being cleared, intentional or not.
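On the "only copy files of interest" part: COPY doesn't take a regex, but you can either use several COPY instructions for the specific paths you need, or keep a broad COPY and exclude the paths that change all the time (VCS metadata, logs, local build output) with a .dockerignore file, so that changes to them no longer invalidate the cache. A sketch with hypothetical paths:
# .dockerignore (in the root of the build context)
.git
*.log
build/
# Dockerfile: copy only the paths the image actually needs
COPY src/ ./src/
COPY config/ ./config/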

How to use big file only to build the container without adding it?

I have a big tar/executable (over 30GB) that I COPY/ADD, but it is only used for the installation. Once the application is installed I don't need it anymore.
How can I handle this? At the moment:
Every time I run a build, it takes minutes to define the build context.
I'd like to share this image; if I create a tar with docker save, is only the final version included in it, or every layer?
I found some solutions that say I can use RUN wget ... && tar ... && rm ..., but I don't want to set up a webserver for that.
Why isn't it possible to mount a volume during the build process?! It would be very useful.
Use Docker's multi-stage builds. This mechanism allows you to drop intermediate artifacts and therefore achieve a lightweight image.
Example:
FROM alpine:latest as build
# copy large file
# build
FROM alpine:latest as output
# copy necessary files built in the previous stage
COPY --from=build app /app
Anything built in the build stage will not be included in the final image unless you explicitly COPY it.
Docs: https://docs.docker.com/develop/develop-images/multistage-build/
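Fleshed out a little (the file names and install command are hypothetical), that could look like:
FROM ubuntu:20.04 AS build
# the large archive only ever exists in this throw-away stage
COPY big-installer.tar /tmp/
RUN tar -xf /tmp/big-installer.tar -C /tmp \
&& /tmp/installer/install.sh --prefix=/opt/myapp
FROM ubuntu:20.04
# only the installed result ends up in the final image
COPY --from=build /opt/myapp /opt/myapp
The archive still has to be sent as part of the build context, but it never appears in any layer of the final image, so the image you docker save or push stays small.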
This is solvable using two different contexts.
Please follow the steps mentioned below.
The objective is to create:
a docker image that contains your large build file;
a docker image that contains your real codebase/executables.
For this you have to create two folders (BUILD & CodeBase) as follows.
Application
|---> BUILD
|      |---> Large-File
|      |---> Dockerfile
|---> CodeBase
|      |---> SRC + other stuff
|      |---> Dockerfile
The BUILD and CodeBase folders each have their own Dockerfile; arrange the files accordingly.
Dockerfile(Build)
FROM <base-image>
COPY Large-File /tmp/Large-File
Build this and tag it with a name like base-build-app-image:
#>cd Application <==Application root folder as mentioned above==>
#>docker build -t base-build-app-image BUILD <==path of your build-folder==>
Dockerfile(Codebase)
FROM base-build-app-image
RUN *****
CMD *****
RUN rm -f /tmp/Large-File
RUN rm -f <installation files that are no longer required>
ENTRYPOINT *****
Build this codebase image; base-build-app-image is already in your local docker repository, and your large file is not in the current build context:
#>cd Application <==Application root folder as mentioned above==>
#>docker build CodeBase <==path of your code-base==>
This time, since the build context is only your codebase and doesn't include that large file, it will definitely reduce your build time.
You can also take advantage of docker-compose to do both operations together, so you won't have to execute two separate commands.

How to speed up Docker build

I'm trying to create a Dockerfile for a project I'm working on. Installing all the required packages through apt and pip takes a couple of minutes. Since the required packages don't change, is there a way so I can skip to the steps that do change?
I'm running Docker CE on OS X (version 17.06.2-ce-mac27).
Yes you can. Create two images
Dockerfile-base
FROM python:3.6
RUN pip install selenium
Then build using below
docker build -f Dockerfile-base -t base .
Dockerfile
FROM base
COPY . .
This way you won't rebuild base and can keep working on the main Dockerfile. There are other possible solutions too, like deploying a local Nexus package manager and using it to cache packages locally, but that's too much effort for a developer machine.
If you use the Docker cache, each layer of the image will only be rebuilt if it has changed or a layer above it has changed.
# First layer
FROM alpine:latest
# Second layer
RUN apk add git gcc
# Third layer
RUN apk add another-package
If the first or second layer is changed (say you add openssl to the second line, for example), that layer and every layer after it will be rebuilt without using the cache.
But if only the third layer is changed, only that layer has to be rebuilt, while the first and second layers come from the cache.
So it often pays to move the stuff that rarely changes to the top of the file, and let the stuff that changes often live in its own layers further down (even though more layers can increase image size).
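Applied to the apt/pip case from the question, that ordering typically looks like this (file names are hypothetical): copy just the dependency list first, install, and copy the rest of the code last, so that editing your code doesn't invalidate the expensive install layers.
FROM python:3.6
# rarely changes: system packages
RUN apt-get update && apt-get install -y build-essential
# changes occasionally: Python dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# changes all the time: your own code
COPY . .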
If you'd rather split it into multiple images, you can absolutely do as Tarun says above.
If it is only data that you want to move from one image to another (that is, not installed packages and such), you could look into multi-stage builds, which let you define multiple images in a single file and copy data from the stages built earlier in that file.
To get more information about how the build cache works, check out the docs!

Selecting different code branches when using a shared base image in Docker

I am containerising a codebase that serves multiple applications. I have created three images:
app-base:
FROM ubuntu
RUN apt-get install package
COPY ./app-code /code-dir
...
app-foo:
FROM app-base:latest
RUN foo-specific-setup.sh
and app-buzz which is very similar to app-foo.
This works currently, except I want to be able to build versions of app-foo and app-buzz for specific code branches and versions. It's easy to do that for app-base and tag it appropriately, but app-foo and app-buzz can't dynamically select that tag; they are always pinned to app-base:latest.
Ultimately I want this build process automated by Jenkins. I could just dynamically re-write the Dockerfile, or not have three images and just have two nearly-but-not-quite identical Dockerfiles for each app that would need to be kept in sync manually (later increasing to 4 or 5). Each of those solutions has obvious drawbacks however.
I've seen lots of discussions in the past about things such as an INCLUDE statement, or dynamic tags. None seemed to come to anything.
Does anyone have a working, clean(ish) solution to this problem? As long as it means Dockerfile code can be shared across images, I'd be happy. If it also means that the shared layers of images don't need to be rebuilt for each app, then even better.
You could still use build args to do this.
Dockerfile:
FROM ubuntu
ARG APP_NAME
RUN echo $APP_NAME-specific-setup.sh >> /root/test
ENTRYPOINT cat /root/test
Build:
docker build --build-arg APP_NAME=foo -t foo .
Run:
$ docker run --rm foo
foo-specific-setup.sh
In your case you could run the correct script in the RUN using the argument you just set before. You would have one Dockerfile per app-base variant and run the correct set-up based on the build argument.
FROM ubuntu
RUN apt-get install package
COPY ./app-code /code-dir
ARG APP_NAME
RUN $APP_NAME-specific-setup.sh
Any layers before setting the ARG would not need to be rebuilt when creating other versions.
You can then push the built images to separate docker repositories for each app.
If your apps need different ENTRYPOINT instructions, you can have an APP_NAME-entrypoint.sh per app and rename it to entrypoint.sh within your APP_NAME-specific-setup.sh (or pass it through as an argument to run).
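To also pick the app-base tag at build time (the other half of the question), an ARG declared before the FROM can be used in the same way on any Docker version that supports multi-stage builds (17.05+), so app-foo and app-buzz are no longer pinned to app-base:latest. A sketch with hypothetical tag names:
Dockerfile:
ARG BASE_TAG=latest
FROM app-base:$BASE_TAG
ARG APP_NAME
RUN $APP_NAME-specific-setup.sh
Build:
docker build --build-arg BASE_TAG=branch-x --build-arg APP_NAME=foo -t app-foo:branch-x .
docker build --build-arg BASE_TAG=branch-x --build-arg APP_NAME=buzz -t app-buzz:branch-x .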
