Docker Compose does not completely invalidate cache when asked

In an effort to update my container to the newest version of PHP 8.0 (8.0.20 at the time of writing), I have tried running
$ docker compose build --no-cache
[+] Building 148.2s (24/24) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.97kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/php:8.0-apache 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 6.17kB 0.0s
=> CACHED [base 1/9] FROM docker.io/library/php:8.0-apache 0.0s
=> [base 2/9] RUN a2enmod rewrite
...
But as seen from the output, only steps 2 and up are rebuilt; the base image is still being read from cache, resulting in PHP version 8.0.8.
How can I force a complete rebuild without using old cache?
$ docker --version
Docker version 20.10.12, build 20.10.12-0ubuntu4
$ docker compose version
Docker Compose version v2.4.0
Top of Dockerfile:
FROM php:8.0-apache as base
# Apache rewrite module
RUN a2enmod rewrite
EDIT: After more research, I find that this question is similar to, and possibly a duplicate of, How to get docker-compose to always re-create containers from fresh images?. The missing piece in this specific example is docker pull php:8.0-apache.
I don't understand why though. Why do I have to manually pull the fresh version of the base image?
Pruning (docker system prune, docker builder prune -a) has no effect on this issue, even after taking the containers down.

There's a general Docker rule that, if you already have some image locally, it's just used without checking Docker Hub. As @BMitch indicates in their answer, the newer BuildKit engine should validate that you do in fact have the most current version of an image, but you can also update it manually.
In your case, you already have a php:8.0-apache image locally (with PHP 8.0.8). So you could manually get the updated base image:
docker pull php:8.0-apache
docker-compose build
If the base image has changed, this will invalidate the cache, and so you don't need --no-cache here. This will also work with the "classic" builder (though your output does show that you're using the newer BuildKit engine).
This is a common enough sequence that there's a shorthand for it
docker-compose build --pull
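If you want to be extra sure nothing stale is reused, both flags can be combined, and you can check the PHP version inside the rebuilt service afterwards (a sketch; "app" stands for whatever your service is called in docker-compose.yml):
docker compose build --pull --no-cache
docker compose run --rm app php -v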

You can do something like:
docker-compose up --force-recreate
or something like this:
docker-compose down --rmi all --remove-orphans && docker-compose up --force-recreate
This will remove all the images, so use it at your own discretion. See the reference for the docker-compose down command here.
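If removing every image referenced by the Compose file is too aggressive, a gentler sketch of the same idea removes only the images Compose built itself (--rmi local skips images that have a custom tag):
docker-compose down --rmi local --remove-orphans && docker-compose up --build --force-recreate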

For a base image, CACHED effectively means "verified": BuildKit has queried the registry for the current base image digest and seen that it matches what it has already pulled down. More details are available in this GitHub issue.
There are only two reasons I can think of not to use that cache. The first is if your local build cache is corrupt; in that case, purge your local cache (docker builder prune). The other is a sha256 collision, and if that happens, the registry server itself will probably be broken and fixing the builder will be the least of your concerns.
The reason for using --no-cache is to ensure that steps which could possibly produce different output are run again, and that is being done in this example.
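If you want to see for yourself which digest your local base image resolves to, and what the registry currently serves for the tag, something like this works (a sketch using the image from the question):
docker image inspect --format '{{index .RepoDigests 0}}' php:8.0-apache
docker pull php:8.0-apache
The pull output prints the digest it resolved on the registry, so the two can be compared directly.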

Removed Docker image is reappearing again upon new build command

Scenario:
I made a working Dockerfile, and I want to test it from scratch. However, the remove command only removes the image temporarily, meaning that running the build command again will make it reappear as if it was never removed in the first place.
Example:
This is what my terminal looks like:
Note: the first two images are irrelevant to this question.
The ***_seis image is removed using the docker rmi ***_seis command, and as a result, running docker images shows that the ***_seis image was deleted.
However, when I run the following build command:
docker build -f dockerfile -t ***_seis:latest .
It will build successfully, but gives this result:
Even though it was removed seconds ago, the build took less than a minute and the created date indicates that the image was made 3 days ago.
Log:
This is what my build log looks like:
docker build -f dockerfile -t ***_seis:latest .
[+] Building 11.3s (14/14) FINISHED
=> [internal] load build definition from dockerfile 0.0s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/jupyter/base-notebook:latest 11.2s
=> [1/9] FROM docker.io/jupyter/base-notebook:latest#sha256:bc9ad73498f21ae716ba0e58d660063eae1677f6dd2bd5b669248fd0bf22dc79 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 32B 0.0s
=> CACHED [2/9] RUN apt update && apt install --no-install-recommends -y software-properties-common git zip unzip wget v 0.0s
=> CACHED [3/9] RUN conda install -c conda-forge jupyter_contrib_nbextensions jupyter_nbextensions_configurator jupyter-resource-usage 0.0s
=> CACHED [4/9] RUN mkdir /home/jovyan/environment_ymls 0.0s
=> CACHED [5/9] COPY seis.yml /home/jovyan/environment_ymls/seis.yml 0.0s
=> CACHED [6/9] RUN conda env create -f /home/jovyan/environment_ymls/seis.yml 0.0s
=> CACHED [7/9] RUN python -m ipykernel install --name seis--display-name "seis" 0.0s
=> CACHED [8/9] WORKDIR /home/jovyan/***_seis 0.0s
=> CACHED [9/9] RUN chown -R jovyan:users /home/jovyan 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:16a8e90e47c0adc1c32f28e32ad17a8bc72795c3ca9fc39e792fa383793c3bdb 0.0s
=> => naming to docker.io/library/***_seis:latest
Troubleshooting: So far, I've tried different ways of removing the image, such as:
docker rmi <image_name>
docker image prune
and manually removing it from Docker Desktop.
I made sure that all containers are deleted by using:
docker ps -a
Expected result: If successful, it should rebuild from scratch, take longer than a minute to build, and the creation date should reflect the time it was actually created.
Question:
I would like to know what is the issue here in terms of image not being deleted completely. Why does it recreate image from the past rather than just starting new build?
Thank you in advance for your help.
It's building from the cache. Since no inputs appear to have changed from the build engine's point of view, and it still has the steps from the previous build, those steps are reused, including the image creation date.
You can delete the build cache. But I'd recommend instead to run:
docker build --pull --no-cache -f dockerfile -t ***_seis:latest .
The --pull option pulls a new base image should you have an old version pulled locally. And the --no-cache option skips the caching for various steps (in particular a RUN step that may fetch the latest external dependency).
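If you do want to go the route of deleting the build cache instead, that looks roughly like this (a sketch; --all removes all unused build cache, not just the dangling entries):
docker builder prune
docker builder prune --all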

Docker build is not working - It does not find the Dockerfile

I am trying to build a Quarkus container with a Dockerfile, but it looks like docker build is not finding the Dockerfile. I have changed the name of the Dockerfile, but it still does not work.
I run: docker build src/main/docker/native.dockerfile
And there is the error:
docker build src/main/docker/native.dockerfile
[+] Building 0.1s (1/2)
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 97B 0.0s
------
> [internal] load build definition from Dockerfile:
------
failed to solve with frontend dockerfile.v0: failed to read dockerfile: error from sender: walk src\main\docker\native.dockerfile: System cannot find specified path.
Here is a screenshot:
Even if I run it from IntelliJ, it throws another error:
This is the dockerfile:
FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /work/
COPY target/*-runner /work/application
RUN chmod 775 /work
CMD ./application -Dquarkus.http.host=0.0.0.0 -Dquarkus.http.port=${PORT}
What am I doing wrong?
First of all, speaking of the -f flag, the command:
docker build -f src/main/docker/native.dockerfile
will not work, as you mentioned, but I think it is important to explain why. The reason is that you did not specify the build context for the Dockerfile. When you type something like this:
docker build src/main/docker/native.dockerfile
it will look for a file literally named Dockerfile, and src/main/docker/native.dockerfile will act as the build context. In other words, when you copy something into your image, Docker needs to know from where exactly you want to copy files and directories. So you can give your Dockerfile whatever name you want, just remember about the build context (it can be either relative or absolute).
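So for the command in the question, a likely working form is something like this (a sketch, assuming you run it from the project root so that paths copied in the Dockerfile resolve relative to the context; my-quarkus-app is just a placeholder tag):
docker build -f src/main/docker/native.dockerfile -t my-quarkus-app .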
Now let me address the errors you encountered :)
You have two different problems, roughly speaking. The first one is that when you ran:
docker build /build/context/path
the Docker engine was not able to determine the context. I do not use Docker on Windows, but I am pretty sure this is because of the path separators. If I were you, I would simply change directory (just to ease your life) to the one that represents your build context (I assume this is the same directory where your Dockerfile is located), and simply run:
docker build --file native.dockerfile .
But then you will hit the problem that you got in IntelliJ. That is a completely different problem. The reason for it is that when Docker was copying files into your image from the host machine, it was not able to find any files matching your wildcard to copy. I do not see your target directory - it is not present in the screenshots - so I cannot suggest anything further, but the problem is there. Feel free to attach them and let's investigate together :)
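If the target directory simply does not contain a *-runner binary yet, the usual cause in a Quarkus project is that the native executable was never built. Assuming a standard Quarkus Maven setup (a sketch, not taken from your project), something like this would produce it before the docker build:
./mvnw package -Pnative -Dquarkus.native.container-build=true
docker build -f src/main/docker/native.dockerfile -t my-quarkus-app .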
Have a nice day!

failed to solve with frontend dockerfile.v0: failed to read dockerfile?

I am new to Docker and I am trying to create an image of my application. I created the Dockerfile in the same directory as the package.json file, with no extension, just Dockerfile.
Now in the Dockerfile:
FROM node:14.16.0
CMD ["/bin/bash"]
and I am trying to build the image with this command
docker build -t app .
But I keep getting this error:
[+] Building 0.2s (2/2) FINISHED
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 2B 0.0s
=> CANCELED [internal] load .dockerignore 0.0s
=> => transferring context: 0.0s
failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount457647683/Dockerfile: no such file or directory
My folder directory is like this:
|- Dockerfile
|- README.md
|- src
|- package.json
|- public
|- node-modules
|-package-lock.json
My OS is : Windows 10 Pro
Double-check that you are in the right directory.
I was in downloads/app.
When I downloaded the app from Docker as part of their tutorial, I extracted it and it ended up in downloads/app/app.
Type dir in your terminal to see whether you can see the Dockerfile or another folder called app.
I encountered a different issue, so sharing as an FYI. On Windows, I created the docker file as DockerFile instead of Dockerfile. The capital F messed things up.
If you come here from a duplicate, notice also that Docker prevents you from accessing files outside the current directory tree. So, for example,
docker build -f ../Dockerfile .
will not be allowed. You have to copy the Dockerfile into the current directory, or perhaps run the build in the parent directory.
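As a concrete sketch of the second option (assuming the Dockerfile lives one directory above where you are now, and myimage is a placeholder tag):
cd ..
docker build -f Dockerfile -t myimage .
Note that the build context is now the parent tree, so COPY/ADD paths inside the Dockerfile may need adjusting.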
For what it's worth, you also can't use symlinks to files elsewhere in your file system from docker build for security reasons.
The naming convention for the Docker file is 'Dockerfile', not 'DockerFile'; I got this error because of that.
On Windows, I got this error when the Dockerfile was saved in .txt format. Changing it to the type "file" fixed the issue.
It's a pretty generic error message, but what caused it for me was that in my Dockerfile I didn't have a space specifying the initial command properly. Here's how it should look:
CMD ["command-name"]
Notice the space between "CMD" and "[".
Instead, my mistake was that I typed CMD["command-name"] which resulted in the error you described.

Can you make any sense of Dockers error-messages?

I admit I am a newbie in the container world, but I managed to get Docker running on my Windows 10 machine with WSL2. I can also use the Docker UI and run containers/apps or images. So I believe the infrastructure is in place and up to date.
Yet when I try even the simplest Dockerfile, it doesn't seem to work, and I don't understand the error messages it gives:
This is Dockerfile:
FROM ubuntu:20.04
(yes, a humble beginning - or an extremely slimmed-down repro)
docker build Dockerfile
[+] Building 0.0s (2/2) FINISHED
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 33B 0.0s
=> ERROR [internal] load .dockerignore 0.0s
=> => transferring context: 33B 0.0s
------
> [internal] load build definition from Dockerfile:
------
------
> [internal] load .dockerignore:
------
failed to solve with frontend dockerfile.v0: failed to build LLB: error from sender: Dockerfile is not a directory
You need to run docker build -f [docker_file_name] . (don't miss the dot at the end).
If the name of your file is Dockerfile then you don't need the -f and the filename.
I faced a similar issue; I use Docker Desktop for Windows. Restarting the laptop resolved the issue. Hope it helps someone.
First check the Dockerfile name (the D should be capital), then run docker build -f Dockerfile . (dot at the end).
For me, I had a Linux symlink in the same directory as the Dockerfile. When running docker build from Windows 10, it gave me the ERROR [internal] load build definition from Dockerfile. I suspect docker build . scans the directory and, if it can't read one file, it crashes. In my case, I mounted the directory with WSL and removed the symlink.
I had the same issue but a different solution (on Windows):
I opened a console in my folder; my folder contains only Dockerfile
Dockerfile content was FROM ubuntu:20.04 (same as OP)
Ran docker build knowing that I had a Dockerfile in my current folder
I was getting the OP's same error message
I stopped the Docker Desktop service
Ran docker build again -- got "docker build" requires exactly 1 argument.
Ran docker build Dockerfile -- got unable to prepare context: context must be a directory: C:\z\docker\aem_ubuntu20\Dockerfile
Ran docker build . -- got error during connect: This error may indicate that the docker daemon is not running.
Re-started the Docker Desktop service
Ran docker build . -- success!
Conclusion: docker build PATH, where PATH must be a folder name and that folder must contain a Dockerfile
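In command form, that conclusion looks like this (a sketch; myimage is a placeholder tag and the second path is the folder from the error above):
docker build -t myimage .
docker build -t myimage C:\z\docker\aem_ubuntu20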
In my case, I got this error when running docker commands in the wrong directory. Just cd to the directory where your Dockerfile is, and all is good again.

Dockerfile ADD - Embedded in image or ADDed to container?

When I have a Dockerfile that uses ADD file-folder/ target-folder/, does the content get added to the image, or only once you create the container?
Is that done by docker build or docker run?
Note: I use a docker-compose project, but some of the content is ADDed in the Dockerfile, not mounted as volumes.
Again, the modularity of images (or rather of Dockerfiles) comes into question if ADDed items are embedded into the image. I fully understand the value of RUN steps being baked into the image, but it matters when I want to add several projects using slightly different ADD variants (Dockerfile variants which then get used in separate docker-compose files).
From the documentation:
Docker can build images automatically by reading the instructions from a Dockerfile
When you execute docker build on a Dockerfile, you create a Docker image.
The files ADDed are indeed baked into the image.
Remember:
docker build is for building images.
docker run is for running containers from images.
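A tiny experiment makes this visible (a sketch using the folder names from the question and a made-up add-demo tag). Dockerfile:
FROM ubuntu:20.04
ADD file-folder/ /target-folder/
Then build and run, with no volumes mounted:
docker build -t add-demo .
docker run --rm add-demo ls /target-folder
The files show up because they were baked into the image layer at build time, not attached at container creation.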
The images I created had a lot of content - lists of PHP 7.4 modules, a Magento 2 setup, etc. - so from the report information created by docker build I was not sure whether ADD embedded the files into the image or not.
A hint to assist future users who might not be sure:
For better readability of the docker image build report, with the added benefit of faster/optimized image creation compared to the default build, use:
DOCKER_BUILDKIT=1 docker build -t tag -f file-location/Dockerfile .
The 'flattened' build report shows far fewer rows than the default. This is especially useful for images that use cached parent images.
Images are smaller, and the time taken to build is also much shorter than without DOCKER_BUILDKIT.
Sample output of DOCKER_BUILDKIT:
=> CACHED [1/8] FROM docker.io/current_timezone/full-supervisord-nginx-proxy:1.00 0.0s
=> [2/8] ADD ./nginx-proxy/supervisord/conf.d/ /etc/supervisor/conf.d/ 0.1s
=> [3/8] ADD ./nginx-proxy/supervisord/scripts/ /etc/supervisor/scripts/ 0.0s
=> [4/8] ADD ./nginx-proxy/nginx/conf.d/proxy.params/ /etc/nginx/conf.d/params/proxy.par 0.0s
=> [5/8] ADD ./nginx-proxy/nginx/sites-available/ /etc/nginx/sites-available/ 0.0s
=> [6/8] COPY ./nginx-proxy/nginx/nginx.conf /etc/nginx/nginx.conf 0.0s
=> [7/8] RUN chown -R user:user /etc/supervisor && chmod -R 700 /etc 0.4s
=> [8/8] WORKDIR /var/www/html 0.0s
