Docker image versioning with docker stack

I've set up a staging and a production server. The flow is as follows: I develop and push my new image to the staging registry. There the code is tested, and if everything fits I want to push the image to the production server. I also want to use versioning, so I thought about incrementing the image tag (not just using latest). I deploy my application with docker stack and a compose file.
Now I want to ask about a best practice.
Example:
current image: xyz:0.1
new image: xyz:0.2
In my compose file I reference the image xyz:auto_latest_version_here.
I want to be able to see the version string, not just a latest tag.
Is there already a mechanism, or a way to reduce an update to docker pull (pull the latest version available) followed by stack deploy ... to update the specific container?
EDIT: I guess I can write a script that extracts the latest tag from the images and refers to it via an environment variable in my compose file. I just thought there might be an easier or standard way provided by Docker.
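For what it's worth, here is a sketch of that script. The registry host, image name, and the APP_VERSION variable are all placeholders, and the live registry/deploy calls are commented out so that only the tag-sorting logic actually runs:

```shell
#!/bin/sh
# Sketch: resolve the newest numeric tag for an image and hand it to the
# compose file via an environment variable.

latest_version() {                      # highest numeric tag on stdin
  grep -E '^[0-9]+(\.[0-9]+)*$' | sort -V | tail -n1
}

# Live usage (needs curl + jq and a registry exposing the v2 API):
#   APP_VERSION=$(curl -s "https://registry.example.com/v2/xyz/tags/list" \
#     | jq -r '.tags[]' | latest_version)
#   export APP_VERSION
#   docker stack deploy -c docker-compose.yml myapp

# Demo on a static tag list; note that sort -V orders 0.10 after 0.2:
printf '0.1\nlatest\n0.2\n0.10\n' | latest_version
```

The compose file would then contain something like `image: registry.example.com/xyz:${APP_VERSION}`, so `docker stack ps` shows the concrete version that was deployed rather than latest.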

Image repository names and tags are just strings attached to a blob, which is the actual image data. Docker and the Docker registry don't really have a concept of the most recent version of a blob -- the :latest tag doesn't even have internal significance; it's just the string used by default when building, and there's nothing preventing you from tagging an older image as :latest.
Fortunately, you can tag an image with multiple tags, and that provides a reasonable solution. Personally, I tag with a string identifying the version I want on my production server, like ":live" or ":production", in addition to tagging with the actual version number. So, let's say you have images myimage:1.0.0, myimage:1.0.1, and myimage:1.1.0; you could run:
docker tag "myimage:1.1.0" "myimage:production"
...to add a production tag to it. The stack file you deploy on the production server would then always refer to myimage:production.
The real advantage is that if users start complaining about problems after you switch to 1.1.0, you can simply tag myimage:1.0.1 as myimage:production again and redeploy, and it will switch back to the older version.
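That rollback can be scripted, too. A minimal sketch (image name, stack file, and stack name are placeholders) that picks the tag just below the newest from the versions you have pushed; the docker commands themselves are commented out:

```shell
#!/bin/sh
# Pick the second-highest version from a list of tags (relies on sort -V).
previous_version() {
  printf '%s\n' "$@" | sort -V | tail -n2 | head -n1
}

prev=$(previous_version 1.0.0 1.0.1 1.1.0)
echo "$prev"
# Then point the stable tag at it and redeploy:
#   docker tag "myimage:$prev" myimage:production
#   docker push myimage:production
#   docker stack deploy -c stack.yml myapp
```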
Not sure if this is the "best" practice, but it's the one I use.

How to improve automation of running containers' base image updates?

I want all running containers on my server to always use the latest version of an official base image, e.g. node:16.13, in order to get security updates. To achieve that, I have implemented an image update mechanism for all container images in my registry using a CI workflow, which has some limitations described below.
I have read the answers to this question but they either involve building or inspecting images on the target server which I would like to avoid.
I am wondering whether there might be an easier way to achieve the container image updates or to alleviate some of the caveats I have encountered.
Current Image Update Mechanism
I build my container images using the FROM directive with the minor version I want to use:
FROM node:16.13
COPY . .
This image is pushed to a registry as my-app:1.0.
To check for changes in the node:16.13 image compared to when I built the my-app:1.0 image, I periodically compare the SHA256 digests of the layers of node:16.13 with those of the first n = (number of layers of node:16.13) layers of my-app:1.0, as suggested in this answer. I retrieve the SHA256 digests with docker manifest inspect <image>:<tag> -v.
If they differ, I rebuild my-app:1.0 and push it to my registry, thus ensuring that my-app:1.0 always uses the latest node:16.13 base image.
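The prefix comparison itself is simple to script. A sketch, under the assumption that the digest lists have already been extracted (e.g. from docker manifest inspect <image> -v); the digests below are fake:

```shell
#!/bin/sh
# The base image is unchanged iff its layer digests are still a prefix of
# the app image's layer digests (both passed as newline-separated lists).
base_unchanged() {
  base="$1"; app="$2"
  n=$(printf '%s\n' "$base" | wc -l | tr -d ' ')
  [ "$(printf '%s\n' "$app" | head -n "$n")" = "$(printf '%s\n' "$base")" ]
}

base='sha256:aaa
sha256:bbb'
app='sha256:aaa
sha256:bbb
sha256:ccc'

if base_unchanged "$base" "$app"; then
  echo "no rebuild needed"
else
  echo "rebuild"
fi
```

As the question notes, this cannot detect the (unlikely) case where only the uppermost base layers were removed, since a shorter base list is still a prefix.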
I keep the running containers on my server up to date by periodically running docker pull my-app:1.0 on the server using a cron job.
Limitations
When I check for updates I need to download the manifests for all my container images and their base images. For images hosted on Docker Hub this unfortunately counts against the download rate limit.
Since I always update the same image my-app:1.0 it is hard to track which version is currently running on the server. This information is especially important when the update process breaks a service. I keep track of the updates by logging the output of the docker pull command from the cron job.
To be able to revert the container image on the server I have to keep previous versions of the my-app:1.0 images as well. I do that by pushing incremental patch version tags along with the my-app:1.0 tag to my registry e.g. my-app:1.0.1, my-app:1.0.2, ...
Because of the way the layers of the base image and the app image are compared, it is not possible to detect a change in the base image where only the uppermost layers have been removed. However, I do not expect this to happen very frequently.
Thank you for your help!
There are a couple of things I'd do to simplify this.
docker pull already does essentially the sequence you describe: downloading the image's manifest and then downloading the layers you don't already have. If you docker build a new image with an identical base image, an identical Dockerfile, and identical COPY source files, it won't actually produce a new image, just put a new name on the existing image ID. So it's possible to unconditionally docker build --pull images on a schedule, and it won't really use additional space. (It could cause more redeploys if neither the base image nor the application has changed.)
[...] this unfortunately counts against the download rate limit.
There's not a lot you can do about that beyond running your own mirror of Docker Hub or ensuring your CI system has a Docker Hub login.
Since I always update the same image my-app:1.0 it is hard to track which version is currently running on the server. [...] To be able to revert the container image on the server [...]
I'd recommend always using a unique image tag per build. A sequential build ID as you have now works; date stamps or source-control commit IDs are usually easy to come up with as well. When you go to deploy, always use the full image tag, not the abbreviated one.
docker pull registry.example.com/my-app:1.0.5
docker stop my-app
docker rm my-app
docker run -d ... registry.example.com/my-app:1.0.5
docker rmi registry.example.com/my-app:1.0.4
Now you're absolutely sure which build your server is running, and it's easy to revert should you need to.
(If you're using Kubernetes as your deployment environment, this is especially important. Changing the text value of a Deployment object's image: field triggers Kubernetes's rolling-update mechanism. That approach is much easier than trying to ensure that every node has the same version of a shared tag.)
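For illustration, a hypothetical Deployment fragment (names and registry are placeholders); editing the tag in the image: field is the entire deployment trigger:

```yaml
# Hypothetical Deployment fragment; bumping the tag below is what
# triggers the rolling update.
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.5
```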

What are the best practices for storing images in a container registry?

I need different images for dev, stage, and prod environments. How should I store the images in Docker Hub?
should I use tags
my_app:prod
my_app:dev
my_app:stage
or maybe include the environment name in the image name, like this:
my_app_dev
my_app_stage
my_app_prod
Tags are primarily meant for versioning, as the default tag latest implies. If you use them for another purpose without versioning info, like tagging the environment as my_app:dev and my_app:prod, there's no strict rule prohibiting that, but it can cause problems when deploying the containers.
Imagine you have a container defined in docker-compose.yml that specifies my_app:prod as its image. That's fine while you're developing locally, but when you deploy to production with Docker Compose or an orchestration service like Kubernetes, depending on policy, the controller can choose to reuse images from its local cache instead of pulling from the registry every time. Now you've just completed a new version of the image and pushed it to Docker Hub, feeling assured. Too bad it's still under the same name and tag, so the controller considers it the same and uses the cached image, causing your old version to be deployed.
It could be worse than that. Not all nodes or clusters are configured the same: some will pull the latest version from the registry while others don't. Your swarm or deployment now contains a mix of old and new container versions, producing erratic behavior at best.
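Whether a node re-pulls is governed by its pull policy. In Kubernetes, for instance, that's the container's imagePullPolicy, which defaults to IfNotPresent for a fixed tag (it defaults to Always only for :latest or an untagged image). A hypothetical pod spec fragment:

```yaml
# Hypothetical pod spec fragment: with a fixed tag like my_app:prod the
# default imagePullPolicy is IfNotPresent, i.e. the cached image wins.
containers:
  - name: app
    image: my_app:prod
    imagePullPolicy: Always   # force a registry check on every start
```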
Now you know better and push your new version as my_app:v2.0 and update the config. All controllers see the new version and pull it down to use for replacing and scaling containers. Everything is consistent.
A simple version number as a tag may sound too simple, since in practice there are many properties you might find useful to attach to an image, to help with documentation or querying. Or you may need a specific name and tag in order to push to a certain cloud provider. Luckily you don't have to sacrifice versioning to do that, as Docker allows you to apply as many tags as you like:
docker build -t my_app:latest -t my_app:v2.0 -t my_app:prod -t cloud_user/app_image_id:v2.0 .

How stable are version-tagged docker baseimages? Should I make my own copy?

I am creating docker images based on base images from docker.io (for example ubuntu:14.04).
I want my docker builds to be 100% reproducible. One requirement for this is that the base image does not change (or, if it changes, that using the changed base image is my decision).
Can I be sure that a version-tagged base image (like ubuntu:14.04) will always be exactly the same?
Or should I make my own copy in my own private repository?
Version tags like ubuntu:14.04 can be expected to change with bug fixes. If you want to be sure you get the exact same image (still containing the fixed bugs), you can use the hash (digest) of the image:
FROM ubuntu@sha256:4a725d3b3b1c
But you can not be sure this exact version will be hosted forever by docker hub.
The safest way is to run your own Docker registry server, push the images you are using to that registry, and use the digest notation to pull the images from it:
FROM dockerrepos.yourcompany.com/ubuntu@sha256:4a725d3b3b1c

How do I setup a docker image to dynamically pull app code from a repository?

I'm using Docker Cloud at the moment. I'm trying to figure out a development-to-production workflow using Docker with Docker Compose to pull application code for multiple applications of the same type, simply changing the repository each one pulls from. I understand the concept of mounting a volume, but all the examples show the source code in the same repo as the Dockerfile and docker-compose file (example). I want the app code from this example to come from a remote, dynamic repo. Would I set an environment variable in the docker image? If so, how?
Any example or link to a workflow example is appreciated.
If done right, the code "baked" into Docker images should be immutable and the only thing that should change at runtime is configurable parameters like environment variables (e.g. to set the port the app will listen on).
Ideally, you should bake your code into the image. Otherwise you're losing a lot of the benefit of using Docker in the first place.
The problem is...
...your use case does not match the best practice. You want an image without any code embedded in it, with the code instead fetched at each update. If you browse Docker Hub you'll find many images named service:version. That's one of the benefits of Docker: offering different versions of the same service. If you always want the most up-to-date code, your workflow has some downsides.
One solution could be
Webhooks, especially if your code is versioned on GitHub. Or any continuous integration tool.

How to prevent docker images on docker hub from being overwritten?

Is there any way to prevent images being uploaded to docker hub with the same tags as existing images? Our use case is as follows.
We deploy to production with a docker-compose file with the tags of images as version numbers. In order to support roll-back to previous environments and idempotent deployment it is necessary that a certain tagged docker image always refer to the same image.
However, Docker Hub allows images to be uploaded with the same tags as existing images (they overwrite the old image). This completely breaks the idea of versioning your images.
We currently have work-arounds in which our build scripts pull all versions of an image and look through the tags to check that an overwrite will not happen, etc., but it feels like there has to be a better way.
If Docker Hub does not support this, is there a way to do Docker deployment without Docker Hub?
The tag system has no way of preventing images from being overwritten; you have to come up with your own process to handle this (h3nrik's answer is an example).
However, you could use the digest instead. In the new v2 of the registry, all images are given a checksum, known as a digest. If an image or any of its base layers change, the digest will change. So if you pull by digest, you can be absolutely certain that the contents of that image haven't changed over time and that the image hasn't been tampered with.
Pulling by digest looks like:
docker pull debian@sha256:f43366bc755696485050ce14e1429c481b6f0ca04505c4a3093dfdb4fafb899e
You should get the digest when you do a docker push.
Now, I agree that pulling by digest is a bit unwieldy, so you may want to set up a system that simply tracks digest and tag and can verify that the image hasn't changed.
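Such a tracker can be tiny. A sketch, with a made-up one-line-per-tag file format; in practice the digest would come from docker manifest inspect or the RepoDigests field of docker inspect, and the digests below are fake:

```shell
#!/bin/sh
# Record the digest first seen for each tag; fail if a later digest differs.
record_or_verify() {            # usage: record_or_verify <db-file> <tag> <digest>
  db="$1"; tag="$2"; digest="$3"
  known=$(grep "^$tag " "$db" 2>/dev/null | cut -d' ' -f2)
  if [ -z "$known" ]; then
    echo "$tag $digest" >> "$db"          # first sighting: record it
  elif [ "$known" != "$digest" ]; then
    echo "WARNING: $tag was overwritten" >&2
    return 1
  fi
}

db=$(mktemp)
record_or_verify "$db" v1.0 sha256:aaa                            # recorded
record_or_verify "$db" v1.0 sha256:aaa                            # same digest: ok
record_or_verify "$db" v1.0 sha256:bbb || echo "overwrite detected"
rm -f "$db"
```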
In the future, this situation is likely to improve, with tools like Notary for signing images. Also, you may want to look at using labels to store metadata such as git hash or build number.
Assuming you have a local build system for your Docker images: you could include the build number from your build job in your tag. That satisfies your requirement:
... it is necessary that a certain tagged docker image always refer to the same image.
When your local build automatically pushes to Docker Hub, it is assured that each push delivers an image with a unique tag.
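A sketch of what that looks like in a build script. BUILD_NUMBER stands in for whatever counter your CI system exposes, and the image name is a placeholder; the build/push commands are commented out:

```shell
#!/bin/sh
# Compose a unique tag from the CI build number; falls back to 0 so the
# sketch also runs outside CI.
BUILD_NUMBER=${BUILD_NUMBER:-0}
TAG="my_app:1.0.${BUILD_NUMBER}"
echo "would build and push $TAG"
#   docker build -t "$TAG" .
#   docker push "$TAG"
```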
