I know Watchtower (https://github.com/v2tec/watchtower), but I would like to understand whether there is a way to check for Docker image updates without pulling (downloading) the image.
It seems that if I, for example, run docker pull pihole:latest, it downloads the image no matter whether the newest version is already on my device.
I would like to get the digest or image ID from Docker Hub, so I can compare it with my local image ID.
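Something like the sketch below is what I have in mind. It queries the registry HTTP API directly, which returns the digest without downloading any layers (the pihole/pihole repository and the use of curl/jq are my assumptions):

# Get an anonymous pull token for the repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:pihole/pihole:pull" | jq -r .token)

# HEAD request: the registry answers with the manifest digest, no layers are downloaded
curl -sI \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json" \
  "https://registry-1.docker.io/v2/pihole/pihole/manifests/latest" \
  | grep -i docker-content-digest

# Local digest to compare against
docker image inspect --format '{{index .RepoDigests 0}}' pihole/pihole:latest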
Related
Is there a way to check if docker-compose is running the latest version of an image? I know that I can run docker-compose pull to get the latest version, but what happens if I run the pull command and docker-compose already has the newest image? Does that pull count against my Docker Hub pull rate limit?
My end goal is to check for a new image every 24 hours without using Watchtower.
When you do a docker pull against the ":latest" tag of an image, Docker will only download that image if this version is not already in your local repository/on your computer. It works like git pull, basically. docker-compose does a simple, classic docker pull, so the same mechanics apply.
You can find out your docker pull rate limit, and your number of remaining pulls, at this link.
You can try checking your remaining pulls, then launching docker pull <some image>:latest, then checking your remaining pulls again (it will show one less). Now do the same pull again: the pull only checks the image version, detects no changes, downloads nothing, and your remaining pull count stays the same.
Anyway, you get 100 pulls per 6 hours as an anonymous user and 200 pulls per 6 hours as an authenticated user. For your use case, you would be fine even if you pulled the image every day.
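For reference, Docker documents a way to read those limits from the response headers of the special ratelimitpreview/test image; a minimal sketch (assumes curl and jq are installed):

# Anonymous token for the rate-limit preview repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

# A HEAD request does not count against your limit; read the ratelimit-* headers
curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit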
In 2019, I pulled the Python 3.6 image. After that, I assumed the image was self-updating (I did not use it actively; I just hoped the latest pushes were somehow pulled from the repository), but I was surprised when I accidentally noticed that the download/creation date was 2019.
Q: How does image pulling work? Are there flags so that the layer hash/its freshness is checked every time the image is built? Perhaps there is a way to configure this check through the Docker daemon config file? Or do I have to delete the base image every time to get a new one?
What I want: every time I build my images, the base image should be checked against the last push (publication of the image) in the Docker Hub repository.
Note: I'm talking about images with an identical tag. Also, I'm not afraid to rebuild my images; there is no need to preserve them.
Thanks.
You need to explicitly docker pull the image to get updates. For your custom images, there are docker build --pull and docker-compose build --pull options that will pull the base image (though there is not a "pull" option for docker-compose up --build).
Without this, Docker will never check for updates for an image it already has. If your Dockerfile starts FROM python:3.6 and you already have a local image with that name and tag, Docker just uses it without contacting Docker Hub. If you don't have it then Docker will pull it, once, and then you'll have it locally.
The other thing to watch for is that the updates do eventually stop. If you look at the Docker Hub python image page you'll notice that there are no longer rebuilds for Python 3.5. If you pin to a very specific patch version, the automated builds generally only rebuild the latest patch version for each supported minor version; once 3.6.12 is the latest 3.6.x version, an image that is FROM python:3.6.11 will never get updates.
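Concretely, the pull-on-build options look like this (the image name myapp is just an example):

# Re-check the FROM image against the registry on every build
docker build --pull -t myapp .

# Same idea for compose-managed builds
docker-compose build --pull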
I'm using some docker images, which I have pulled from a registry:
docker pull registry.example.com/project/backend:latest
docker pull registry.example.com/project/frontend:latest
Now there is a new version in the server registry. If I do a new pull, I will overwrite the current images. But I need to keep the current working images in case I run into problems with the new latest images.
So, how do I create a kind of backup of my running backend:latest and frontend:latest? After that I can pull the new latest images and, in case I need to, fall back to the old working images...
To keep the current image in your local environment, you can use docker tag:
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
For example:
docker tag registry.example.com/project/backend:latest registry.example.com/project/backend:backup
Then, when you pull the new latest, registry.example.com/project/backend:backup still exists.
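If the new latest turns out to be broken, you can point the tag back at the backup:

docker tag registry.example.com/project/backend:backup registry.example.com/project/backend:latest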
Pulling an image never deletes an existing image. However, if you pull an image with the same name, the old image becomes untagged (dangling), and you'll have to refer to it by its image ID.
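For example, to find the old, now-untagged image and give it a name again (the image ID here is a placeholder):

docker images --filter dangling=true   # the previous image shows up here, without a tag
docker tag <old-image-id> registry.example.com/project/backend:previous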
You've now seen the downside to using :latest tags. This is why it is better to reference an image by a specific version tag that the maintainer won't re-push.
First, you shouldn't be using latest in production environments. Rather, pin a tag you have confirmed to be working.
And instead of executing stuff inside an image to set it up, you should write a Dockerfile so the installation is repeatable, and build your own local image. That's actually one of the main reasons Docker is used.
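A minimal sketch of that idea (the pinned tag and the setup steps are assumptions, not your actual configuration):

# Dockerfile
FROM registry.example.com/project/backend:1.2.3   # a pinned tag you confirmed working, not :latest
COPY setup.sh /opt/setup.sh                       # hypothetical setup script instead of manual exec steps
RUN /opt/setup.sh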
Is there a way to force a pull of a Docker image?
I have redeployed a Docker image to another repository, but when I invoke
docker pull anotherrepo:port/my/image
nothing gets downloaded; instead I get:
Digest: sha256:somehash
and that image is up to date.
docker rm/rmi is not an option, because the image was downloaded from originalrepo:port/my/image and I don't want to stop/delete it only for test purposes.
Is it possible to force a pull to check whether the image was pushed correctly?
The following should work. You add a temporary tag to avoid deletion of the image, delete the original tag and then pull:
docker tag "$originalTag" "tmpTag"   # keep a second reference so the image isn't deleted
docker rmi "$originalTag"            # removes the original tag only; the layers survive via tmpTag
docker pull "$originalTag"           # re-fetches the manifest (and any layers missing locally)
docker rmi "tmpTag"                  # clean up the temporary tag
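Alternatively, if your Docker version has it, docker manifest inspect queries the registry directly and never touches your local images, which is enough to confirm the push landed:

docker manifest inspect anotherrepo:port/my/image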
I think the answer lies in digests.
Images that use the v2 or later format have a content-addressable identifier called a digest. As long as the input used to generate the image is unchanged, the digest value is predictable.
Source: https://docs.docker.com/engine/reference/commandline/images/#list-the-full-length-image-ids
Maybe you don't need to verify if the push was successful, as Docker could be doing that automatically by using digests, but I'm not sure if this is indeed the case.
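If you do want to check by hand, one sketch: compare the digest that docker push printed on the original machine with what your local tag resolves to (names as in the question):

docker image inspect --format '{{.RepoDigests}}' anotherrepo:port/my/image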
The only other way I can think of would be to pull from a different machine which has access to the new repository.
I was using the centos image from https://registry.hub.docker.com/u/blalor/centos/
For some reason blalor decided to remove passwd from the list of packages installed in the base image, and my containers stopped working on new deployments. Why doesn't Docker know which build was used for my containers? I have now had to change my base images and update every server's Docker image.
I could not use the tag feature, because there is no tagging for blalor's images. Do I have to use the source code and host the centos image myself so that it does not change again?
You do not need to use the sources. If you have a working image, you can do docker history <your image> to see the image ID that was used, and tag the proper one as shortfellow/centos. If you do not have a working image, there is a build details section on the link you provided with the history of builds. You can see that it was built on January 13th, 2014, and the image then was a531daec9f98. You can put FROM a531daec9f98 in your Dockerfile to make sure it will never change, or you can docker tag a531daec9f98 shortfellow/centos (you will need to have pulled the image beforehand).
It is very similar to git, in the sense that if you are using someone's repository, and that someone does not use tags or branches, then when they update their repository and you re-pull, you will get the latest version with the new changes. To get back to the version you liked, you would need to find the commit ID. The solution is to fork the repository, which you can do in Docker by tagging the image under your username and pushing it to a registry (docker push username/image).
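"Forking" the image that way looks like this, using the names from above:

docker tag a531daec9f98 shortfellow/centos   # pin the known-good image under your own name
docker push shortfellow/centos               # the registry copy is now under your control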