docker doesn't generate a new image from docker build

I'm on a low-cost project where we push only the latest image to a container registry (DigitalOcean).
But every time, after running:
docker build .
it generates the same digest.
This is my script for build:
docker build .
docker tag {image}:latest registry.digitalocean.com/{company}/{image}:latest;
docker push registry.digitalocean.com/{company}/{image}
I tried:
BUILD_VERSION=`date '+%s'`;
docker build -t {image}:"$BUILD_VERSION" -t {image}:latest .
docker tag {image}:latest registry.digitalocean.com/{company}/{image}:latest;
docker push registry.digitalocean.com/{company}/{image}
but it didn't work.

Editing my answer: what David said is correct - the push without the tag should pick up the latest tag.
If you provide what you have in your local repository and the output of the above commands, it would shed more light on your problem.
Edit 2:
I think I have figured out why:
Is generating the same digest, every time.
This means that although you are running your docker build, there has been no change to the underlying artifacts being packaged into the image, hence it results in the same digest.

Sometimes layers are cached even though there are changes that aren't detected, so you can delete the image or use 'docker system prune' to force the cache to be cleared.
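For a lighter-weight sketch of the same idea (using the {image} placeholder from the question's script):
# Clear only the build cache rather than all unused data
docker builder prune -f
# Or bypass the cache for a single build so changed artifacts produce a new digest
docker build --no-cache -t {image}:latest .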

Related

Does the order of --cache-from arguments matter when building an image with Docker Buildkit?

Suppose I am building an image using Docker Buildkit. My image is from a multistage Dockerfile, like so:
FROM node:12 AS some-expensive-base-image
...
FROM some-expensive-base-image AS my-app
...
I am now trying to build both images. Suppose that I push these to Docker Hub. If I were to use Docker Buildkit's external caching feature, then I would want to save build time on my CI pipeline by pulling in the remote some-expensive-base-image:latest image as the cache when building the some-expensive-base-image target. And I would want to pull in both the just-built some-expensive-base-image image and the remote my-app:latest image as the caches for the latter image. I believe that I need both in order to keep the steps of some-expensive-base-image from being rebuilt, since...well...they are expensive.
This is what my build script looks like:
export DOCKER_BUILDKIT=1
docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from some-expensive-base-image:latest --target some-expensive-base-image -t some-expensive-base-image:edge .
docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from some-expensive-base-image:edge --cache-from my-app:latest --target my-app -t my-app:edge .
My question: Does the order of the --cache-from arguments matter for the second docker build?
I have been getting inconsistent results on my CI pipeline for this build. There are cache misses when building that latter image, even though there haven't been any code changes that would have caused cache busting. The cache manifest can be pulled without issue. There are times when the cache image is pulled, but other times when all steps of that latter target need to be rerun. I don't know why.
By chance, should I instead try to docker pull both images before running the docker build commands in my script?
Also, I know that I referred to Docker Hub in my example, but in real life, my application uses AWS ECR for its remote Docker repository. Would that matter for proper Buildkit functionality?
Yes, the order of --cache-from matters!
See the explanation on Github from the person who implemented the feature, quoting here:
When using multiple --cache-from they are checked for a cache hit in the order that user specified. If one of the images produces a cache hit for a command only that image is used for the rest of the build.
I've had similar problems in the past; you might find it useful to check this answer, where I've shared about using the Docker cache in CI.
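To make the intended precedence explicit, here is a minimal sketch of that second build using the names from the question (the upfront pulls are an assumption to guarantee the cache sources are resolvable locally; BuildKit can also fetch inline cache metadata from the registry):
export DOCKER_BUILDKIT=1
docker pull some-expensive-base-image:edge || true
docker pull my-app:latest || true
# The first --cache-from that produces a hit is used for the rest of the build
docker build --build-arg BUILDKIT_INLINE_CACHE=1 \
  --cache-from some-expensive-base-image:edge \
  --cache-from my-app:latest \
  --target my-app -t my-app:edge .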

Docker image missing on Google Cloud Build "No such image", after being pushed and tagged

So we have a step which generates a docker image.
The next step uses the image to run a test, but refers to it using a LOCAL tag to avoid communicating with our remote registry (to save time and traffic).
cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: bash
  args: ['./build_image.sh', '$PROJECT_ID', 'base-image']
- name: 'base-image:latest' # This should ensure there's no need for an "actual" pull
  args: ['-f', './verify_image.ps1']
build_image.sh
#!/bin/bash
project=$1
applicationName=$2
version=1.0.3
image="gcr.io/$project/$applicationName:$version"
local_image="$applicationName:latest"
docker build -t "$image" .
echo "Tag image to make it locally available so we don't have to do a new pull"
docker tag "$image" "$local_image"
docker push "$image"
# docker run $local_image # works
This works most of the time on GCB. But every once in a while step 2 fails with:
Step #2: Already have image (with digest): base-image:latest
Finished Step #2
ERROR
ERROR: build step 2 "base-image:latest" failed: starting container: Error response from daemon: No such image: base-image#sha256:XXXX
I can reproduce this by, for example, starting 10 rebuilds of the same commit; then 0-5 of the rebuilds will throw this error.
We use this construct in several places, and it makes our build process unstable.
GCB uses docker version:
Docker version 19.03.5, build 633a0ea838
I have been searching high and low for an explanation for this error, but without any luck.
Things I've tried:
I have verified that both tags exist and that the digests are correct in step 1 and step 2 for the base image. As you can see, step 2 confirms that the image with the digest exists, but in the next line, where I expect GCB to do a "docker run base-image:latest", for some reason it sometimes does not.
I've tried doing a docker run using the local tag in step 1, and it works (the image runs), but immediately afterwards step 2 fails with the above message, claiming the image exists but does not.
Tested with the official google builder docker image and the latest official docker image. Same result.
Every build on GCB runs on its own VM host, so every step is a docker container running on the same VM. The docker daemon, I believe, is hosted on the VM and not in the containers. This is why it's possible to use the local tag and avoid the pull. But somehow state gets corrupted in the local registry/daemon, or the local tag is not registered. Race condition?
In practice we use this setup so that the first step decides which version of the base-image to use in the second step (frameworks, etc.) based on the content of the git repository being built, and then in the next step we can simply reference the local image as base-image:latest. This makes our build process more streamlined and enables us to make "dynamic" changes without bothering other devs on purpose.
We have a lot of products using a very small number of frameworks in a similar way, so this makes good sense.
If anyone has any suggestions on how to resolve/fix/work around this issue, please help :).
Kind regards
Christian
To use cached images in your builds you must use the --cache-from flag when you build your images, as in:
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t', 'gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest',
    '--cache-from', 'gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest',
    '.'
  ]
More on this can be found here.

Do I need to `docker commit` in order to push an image into a docker image registry (eg. docker hub)?

Usually, according to the docs, in order to build a Docker image I need to follow these steps:
Create a Dockerfile for my application.
Run docker build . against the Dockerfile, where the . is the build context of my application.
Then, using docker run, run my image in a container.
Commit the container as an image.
Then, using docker push, push the image to a registry.
Though sometimes just launching the image into a container seems like a waste of time, because I can tag my images using the -t parameter of the docker build command. So there's no need to commit a container as an image.
So is it necessary to commit a running container as an image?
You don't need to run and commit. docker commit allows you to create a new image from changes made on an existing container.
You do need to build and tag your image in a way that will enable you to push it:
docker build -t [registry (defaults to Docker Hub)]/[your repository]:[image tag] [Dockerfile context folder]
for example:
docker build -t my-repository/some-image:image-tag .
And then:
docker push my-repository/some-image:image-tag
This will build an image from a Dockerfile found in the current folder (where you run the docker build command). The repository in this case is my-repository, the image name is some-image, and its tag is image-tag.
Also please note that you'll have to perform docker login with your credentials to Docker Hub before you are able to actually push the image.
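For instance (the localhost:5000 registry matches the example below):
# Log in to Docker Hub (the default registry)
docker login
# For any other registry, name it explicitly
docker login localhost:5000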
You can also tag an existing image without rebuilding it. This is useful if you want to push an existing image to a different registry or if you want to create a different image tag. For example:
docker tag my-repository/some-image:image-tag localhost:5000/my-repository/some-image:image-tag
This will add a new tag to the image from the previous example. Note the registry part added (localhost:5000). If you call docker push on that tag (docker push localhost:5000/my-repository/some-image:image-tag) the image will be pushed to a registry found on localhost:5000 (of course you need the registry up and running before trying to push).
There's no need to do so. To prove that you can just tag the image and push it to the registry, here's an example:
I made the following Dockerfile:
FROM alpine
RUN echo "Hello" > /usr/share/hello.txt
ENTRYPOINT cat /usr/share/hello.txt
Nothing special; it just generates a txt file and shows its content.
Then I can build my image using tags:
docker build . -t ddesyllas/dummy:201908241504 -t ddesyllas/dummy:latest
And then just push them to the registry:
$ docker push ddesyllas/dummy
The push refers to repository [docker.io/ddesyllas/dummy]
1aa99de3dbec: Pushed
6bc83681f1ba: Mounted from library/alpine
201908241504: digest: sha256:93e8407b1d52620aeadd769486ef1402b9e310018cae0972760f8c1a03377c94 size: 735
1aa99de3dbec: Layer already exists
6bc83681f1ba: Layer already exists
latest: digest: sha256:93e8407b1d52620aeadd769486ef1402b9e310018cae0972760f8c1a03377c94 size: 735
And as you can see from the output, you can just build the tags and push them directly, which is good for your CI/CD pipeline. Though, generally speaking, you may need to launch the application in a container in order to do acceptance and other types of tests (e.g. end-to-end tests).
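For instance, a quick smoke test of the image built above before pushing it:
# Runs the ENTRYPOINT, which cats the generated file and prints "Hello"
docker run --rm ddesyllas/dummy:201908241504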

What's the purpose of "docker build --pull"?

When building a docker image you normally use docker build ..
But I've found that you can specify --pull, so the whole command would look like docker build --pull .
I'm not sure about the purpose of --pull. Docker's official documentation says "Always attempt to pull a newer version of the image", and I'm not sure what this means in this context.
You use docker build to build a new image, and eventually publish it somewhere to a container registry. Why would you want to pull something that doesn't exist yet?
It will pull the latest version of any base image(s) instead of reusing whatever you already have tagged locally.
Take for instance an image based on a moving tag (such as ubuntu:bionic). Upstream makes changes and rebuilds this periodically, but you might have a months-old image locally. Docker will happily build against the old base; --pull will pull as a side effect, so you build against the latest base image.
It's usually a best practice to use it to get upstream security fixes as soon as possible (instead of using stale, potentially vulnerable images), though you have to trade off breaking changes (and if you use immutable tags then it doesn't make a difference).
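A minimal illustration of the difference, assuming a Dockerfile based on ubuntu:bionic:
# Reuses whatever ubuntu:bionic is already tagged locally, however old
docker build -t myimage .
# Checks the registry for a newer ubuntu:bionic before building
docker build --pull -t myimage .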
Docker allows passing the --pull flag to docker build, e.g. docker build . --pull -t myimage. This is the recommended way to ensure that the build always uses the latest container image despite the version available locally. However, one additional point is worth mentioning:
To ensure that your build is completely rebuilt, including checking the base image for updates, use the following options when building:
--no-cache - This will force rebuilding of layers already available.
The full command will therefore look like this:
docker build . --pull --no-cache --tag myimage:version
The same options are available for docker-compose:
docker-compose build --no-cache --pull
Simple answer: docker build is used to build from a local Dockerfile, while docker pull is used to pull from Docker Hub. If you use docker build without a Dockerfile, it throws an error.
When you specify --pull or :latest, docker will try to download the newest version (if any).
Basically, if you add --pull, it will try to pull the newest version each time it is run.

Is it possible to cache multi-stage docker builds?

I recently switched to multi-stage docker builds, and it doesn't appear that there's any caching of intermediate builds. I'm not sure if this is a docker limitation, something that just isn't available, or whether I'm doing something wrong.
I am pulling down the final build and doing a --cache-from at the start of the new build, but it always runs the full build.
This appears to be a limitation of docker itself and is described under this issue - https://github.com/moby/moby/issues/34715
The workaround is to (a sketch follows the list):
Build the intermediate stages with a --target
Push the intermediate images to the registry
Build the final image with a --target and use multiple --cache-from paths, listing all the intermediate images and the final image
Push the final image to the registry
For subsequent builds, pull the intermediate + final images down from the registry first
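A minimal sketch of that workaround with the classic builder (the myrepo/app repository and the build-stage/final stage names are hypothetical):
# Pull previous images so they are available as cache sources (tolerate failure on the first run)
docker pull myrepo/app:build-stage || true
docker pull myrepo/app:latest || true
# Build and tag the intermediate stage explicitly
docker build --target build-stage --cache-from myrepo/app:build-stage -t myrepo/app:build-stage .
# Build the final image, listing both the intermediate and the previous final image as caches
docker build --target final --cache-from myrepo/app:build-stage --cache-from myrepo/app:latest -t myrepo/app:latest .
# Push both so the next build can pull them
docker push myrepo/app:build-stage
docker push myrepo/app:latest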
Since the previous answer was posted, there is now a solution using the BuildKit backend: https://docs.docker.com/engine/reference/commandline/build/#specifying-external-cache-sources
This involves passing the argument --build-arg BUILDKIT_INLINE_CACHE=1 to your docker build command. You will also need to ensure BuildKit is being used by setting the environment variable DOCKER_BUILDKIT=1 (on Linux; I think BuildKit might be the default backend on Windows when using recent versions of Docker Desktop). A complete command line solution for CI might look something like:
export DOCKER_BUILDKIT=1
# Use cache from remote repository, tag as latest, keep cache metadata
docker build -t yourname/yourapp:latest \
--cache-from yourname/yourapp:latest \
--build-arg BUILDKIT_INLINE_CACHE=1 .
# Push new build up to remote repository replacing latest
docker push yourname/yourapp:latest
Some of the other commenters are asking about docker-compose. It works for this too, although you need to additionally set the environment variable COMPOSE_DOCKER_CLI_BUILD=1 to ensure docker-compose uses the docker CLI (with BuildKit, thanks to DOCKER_BUILDKIT=1), and then you can set BUILDKIT_INLINE_CACHE: 1 in the args: section of the build: section of your YAML file to ensure the required --build-arg is set.
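A sketch of the corresponding invocation, assuming the compose file's build: section already sets cache_from and the BUILDKIT_INLINE_CACHE build arg as described:
# Route docker-compose builds through the docker CLI with BuildKit enabled
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
docker-compose build
docker-compose push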
I'd like to add another important point to the answer:
--build-arg BUILDKIT_INLINE_CACHE=1 caches only the layers of the final stage, so it only helps when nothing in the earlier stages changed.
To enable caching of layers for the whole build, the inline cache should be replaced by a cache export with mode=max, e.g. buildx's --cache-to type=registry,...,mode=max (the inline exporter only supports the minimal mode). See the documentation.
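A minimal buildx sketch of that approach (the yourname/yourapp names follow the earlier answer; the :buildcache tag and the separate builder are assumptions):
# Registry cache export may require a docker-container builder
docker buildx create --use
docker buildx build \
  --cache-from type=registry,ref=yourname/yourapp:buildcache \
  --cache-to type=registry,ref=yourname/yourapp:buildcache,mode=max \
  -t yourname/yourapp:latest --push .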
