I would like to publish docker images tagged with both semantic versions (like version 1.0) and also build numbers (like build 3). Operationally this might come about in the following way:
docker build -t my-project:1.0-1 .
# make minor changes to the Dockerfile
docker build -t my-project:1.0-2 .
# make minor changes to the Dockerfile
docker build -t my-project:1.0-3 .
# release a new version of the project
docker build -t my-project:1.1-1 .
I would expect some users to pin to particular build numbers
docker pull my-project:1.0-2
While other users would just ask for "the latest of version 1.0"
docker pull my-project:1.0
Does this work? Is there a better way to accomplish this goal?
Yes, this works. A tag is just a friendly name attached to an image ID. Any given image can have as many tags as you would realistically want.
docker tag my-project my-project:1.0-2
docker tag my-project my-project:1.0
Then, if you run docker images and look for these tags, you'll see that the IMAGE ID is the same for both. Keep in mind that you'd want to push both tagged images to your repository.
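Putting it together, a minimal sketch of the whole flow (the `registry.example.com` prefix is a placeholder; substitute your own registry):

```shell
VERSION=1.0        # semantic version
BUILD=2            # build number
IMAGE=registry.example.com/my-project

# Build once, attaching both the build-specific tag and the version tag.
# Both tags point at the same image ID.
docker build -t "$IMAGE:$VERSION-$BUILD" -t "$IMAGE:$VERSION" .

# Push both tags; users can then pin to 1.0-2 or track 1.0
docker push "$IMAGE:$VERSION-$BUILD"
docker push "$IMAGE:$VERSION"
```

Rebuilding with `BUILD=3` and pushing again moves the `1.0` tag to the new image while `1.0-2` keeps pointing at the old one, which is exactly the pinning behavior described above.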
Looking at a couple of popular Docker Hub repos for inspiration:
ruby, python, postgres
Related
Where do I find a list of software packages included (the pre-installed packages) in Gitlab docker CI images?
I usually use the standard ruby:2.5 image, but I cannot seem to find a list of all the packages, software, and executables that are included in the available build images.
Where is a list of packages included? Or do I always have to test an image in a .gitlab-ci.yml file and see if it works?
(Surely there is a list of packages. Forgive a newbie in the world of CI.)
As mentioned in the GitLab docs: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#what-is-an-image.
The image keyword is the name of the Docker image the Docker executor uses to run CI/CD jobs.
By default, the executor pulls images from Docker Hub.
However, you can configure the registry location in the gitlab-runner/config.toml file. For example, you can set the Docker pull policy to use local images.
So, to see the image content you can go to the Docker Hub image page, for example, Ruby: https://hub.docker.com/_/ruby.
And click on a specific Docker Image Tag to see its layers with the steps and the installed packages: https://hub.docker.com/layers/library/ruby/2.5/images/sha256-dde6be82028418fa39afcc712ac958af05e67dcb31879a3bd76b688453fe4149?context=explore.
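If you want an explicit package list rather than reading through the layers, one hedged approach is to ask the image itself (this assumes the image is Debian-based and therefore ships dpkg, which holds for ruby:2.5):

```shell
# List every Debian package installed in the image; the container is
# removed afterwards (--rm), so this leaves nothing behind locally
docker run --rm ruby:2.5 dpkg -l
```

For non-Debian bases you would swap in the matching package manager query (e.g. `apk info` on Alpine variants).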
I am relatively new to docker and saw in other repositories that we can push multiple digests under same tag with different OS/ARCH in docker. For example:
How can I achieve the same? Right now, whenever I do docker push [REPO_LINK] from different architectures, it replaces the last pushed one with its architecture. Thanks!
You might be looking for a fat manifest, aka a manifest list.
It enables publishing images for multiple architectures under the same tag. You need to use the docker manifest command when building on multiple machines.
Once you have pushed the images from the different machines, you have to combine their manifests into a single one (called a manifest list). See more in the official docs.
This blog post was already mentioned in one comment, but you can still use that docker manifest example to combine the manifests into a single list, even if you are working across several machines.
Related question: Is it possible to push docker images for different architectures separately?
There are two options I know of.
First, you can have buildx run builds on multiple nodes, one for each platform, rather than using qemu. For that, you would use docker buildx create --append to add the additional nodes to the builder instance. The downside of this is you'll need the nodes accessible from the node running docker buildx which typically doesn't apply to ephemeral cloud build environments.
The second option is to use the experimental docker manifest command. Each builder would push a separate tag. And at the end of all those, you would use docker manifest create to build a manifest list and docker manifest push to push that to a registry. Since this is an experimental feature, you'll want to export DOCKER_CLI_EXPERIMENTAL=enabled to see it in the command line. (You can also modify ~/.docker/config.json to have an "experimental": "enabled" entry.)
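The second option can be sketched as follows (assuming the per-arch tags were already pushed from their respective machines; `registry.example.com/app` and the `-amd64`/`-arm64` tag suffixes are placeholder names, not a required convention):

```shell
# Make the experimental manifest subcommands visible to the CLI
export DOCKER_CLI_EXPERIMENTAL=enabled
REPO=registry.example.com/app

# Assumed to have been pushed earlier, one per builder:
#   docker push "$REPO:1.0-amd64"   (from the amd64 machine)
#   docker push "$REPO:1.0-arm64"   (from the arm64 machine)

# Combine the per-arch images into one manifest list under a shared tag
docker manifest create "$REPO:1.0" \
    "$REPO:1.0-amd64" \
    "$REPO:1.0-arm64"

# Push the manifest list; pulls of $REPO:1.0 now resolve per-architecture
docker manifest push "$REPO:1.0"
```

After this, `docker pull registry.example.com/app:1.0` selects the matching architecture automatically.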
I'm creating a CI process for a project. The project consists of 2 components:
petalinux image - the CI process builds a Linux image with petalinux-build and generates an SDK using petalinux-build --sdk. You don't have to know petalinux to answer this question; just know that this side of the project creates an SDK which will be used in the second component of the project.
Each time I run this petalinux build, I also create a Docker image with the SDK in it, and I want to use this image when I build the second component - the application
Application - This is built using the Docker image created in the petalinux component.
The thing is, that each time I build the Docker image, I want to tag it with the version of the petalinux project and keep it in a Docker registry. So my Docker registry looks like:
-- sdk
|
+ - 1.0
|
+ - 1.3
|
+ - 1.6 .. etc.
Now, when I build the application, I want to use the latest sdk image. So basically, each time I build the petalinux project, I want to push 2 tags to the Docker registry: the current version, say 1.9, and latest.
Does anyone recognize such a pattern? What's the best way to do this in a Jenkins scripted pipeline?
To push several tags, just use shell script to specify several tags in docker build command, such as
docker build -t image_namespace/image_name:tag1 -t image_namespace/image_name:latest .
Then also push both of them, i.e.:
docker push image_namespace/image_name:tag1
docker push image_namespace/image_name:latest
You may also find multiline shell scripts useful - jenkins pipeline: multiline shell commands with pipe
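Applied to the sdk scenario above, the shell step a Jenkins scripted pipeline would run (via `sh '...'`) could look like this sketch; `VERSION` would come from the petalinux build, and `registry.example.com/sdk` is a placeholder registry path:

```shell
VERSION=1.9                      # in Jenkins this would be computed, not hard-coded
IMAGE=registry.example.com/sdk

# One build, two tags: the pinned version and the moving "latest"
docker build -t "$IMAGE:$VERSION" -t "$IMAGE:latest" .

docker push "$IMAGE:$VERSION"
docker push "$IMAGE:latest"
```

The application build can then simply reference `registry.example.com/sdk:latest`, while older versions remain pullable by number.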
I want to know whether it’s possible to make a FROM instruction in a Dockerfile pull the most recent image (e.g. image:latest) before proceeding with the build?
Currently, the image is only pulled if it’s not already stored locally.
docker build --pull OTHER_OPTIONS PATH
From https://docs.docker.com/engine/reference/commandline/build/
--pull Always attempt to pull a newer version of the image
Although there might be genuine use cases for this for development purposes, I strongly suggest avoiding this option in production builds. Docker images must be immutable. Using this option can lead to situations where different images are generated from the same source code, and any behaviour changes resulting from such builds, without corresponding changes in code, are hard to debug.
Say there is a project called "derived project" which uses the base image myBaseImage:latest
FROM myBaseImage:latest
<snipped>
CMD xyz
docker build --pull -t myDerivedImage:<version of source code> .
Assuming the tag of the derived image is based on its source code version (a git commit hash, for example), which is the most common way to tag images: if a new base image is published under the latest tag while there are no changes in the derived project, building the derived project will produce different images under the same name before and after the base image change. Once an image is published under a name, it should not be mutated.
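One common way to avoid this mutability problem altogether is to pin the base image by digest instead of by a moving tag, so that every build of the derived project starts from the exact same base (the digest below is a placeholder, not a real one):

```dockerfile
# Pinning by digest makes the base image reference immutable
FROM myBaseImage@sha256:<digest>
```

With a digest pin, `--pull` becomes harmless for reproducibility, since there is no newer image the tag could silently resolve to.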
In order to build a docker image by updating the base image, you must use the option:
--pull
I'll leave you the official documentation, where this option and many others are discussed: official docker documentation
I'm new to Docker and I wonder why there is no command to fetch an AUTOMATED BUILD repo's Dockerfile to build the image locally from it (it can be convenient sometimes, I guess, instead of opening a browser, looking for the GitHub reference on the repo's page, and then using git to clone).
I have created dockerfileview to fetch Dockerfile from Docker Hub.
https://github.com/remore/dockerfileview
The automated build normally has a GitHub repo behind it and links to the original repository in the build details section, under the Source Repository heading. Which automated build are you looking for the source file for?
If you would like to search for images from the command line, you can run docker search TERM to find images (but not their Dockerfiles). You can also use docker history to get a rough approximation of the commands that went into the Dockerfile.
e.g.
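A sketch of what that looks like (ruby:2.5 is just an arbitrary public image; the exact output format varies with your Docker version):

```shell
# Show the command recorded for each layer of the image; --no-trunc
# prints the full command strings instead of abbreviating them
docker history --no-trunc ruby:2.5
```

Note that this only recovers the layer commands, not the original Dockerfile verbatim: comments, stage names, and build arguments are lost.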