If a Docker image built from a particular base image is running as a container, and a new security upgrade is released for that base image, what is the best practice for applying the security patch to the Docker image?
Also, how can I find out whether a security patch is available for the base image?
Let's say that you have a Dockerfile that is based on an image called "Base:latest" and you've built an image from it called "MyImage:latest".
If "Base:latest" has been updated with security fixes, you need to rebuild your "MyImage:latest".
Containers are instances of images, so for the security updates to be reflected in a container, the container must be re-created from the rebuilt "MyImage:latest" image.
Notice that you wouldn't want to use the "latest" tag for base images in production, because you won't be able to reproduce the same deployed environment; the best practice is to use a specific version tag like "1.0". When an update is available, you update your Dockerfile from "Base:1.0" to "Base:1.1" and rebuild.
So if your image is based on another image and you want to pick up security updates without waiting for a new, updated version of the base image, you can run a security-update command in your Dockerfile, rebuild your image periodically, and re-create the container.
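For example, a minimal sketch assuming a Debian- or Ubuntu-based base image (on Alpine you would use apk upgrade instead):

FROM Base:1.0
# pull in any security fixes published since the base image was built;
# rebuilding this image periodically refreshes them
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*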
You could probably automate this process with a tool like Watchtower, combined with rebuilding your image on a regular schedule, so that containers are re-created automatically whenever the rebuilt image is pushed.
Another option is to run automated updates at the container level, for example with a script that runs every day, but you should take into consideration the impact on the running process (networking, CPU, etc.).
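For instance, a rough sketch of such a daily job as a host cron entry, assuming a Debian-based container named my-app (the name and schedule are hypothetical):

# upgrade packages inside the running container every night at 03:00
0 3 * * * docker exec my-app sh -c 'apt-get update && apt-get -y upgrade'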
I want all running containers on my server to always use the latest version of an official base image, e.g. node:16.13, in order to get security updates. To achieve that I have implemented an image update mechanism for all container images in my registry using a CI workflow, which has some limitations described below.
I have read the answers to this question but they either involve building or inspecting images on the target server which I would like to avoid.
I am wondering whether there might be an easier way to achieve the container image updates or to alleviate some of the caveats I have encountered.
Current Image Update Mechanism
I build my container images with a FROM directive pinned to the minor version I want:
FROM node:16.13
COPY . .
This image is pushed to a registry as my-app:1.0.
To check for changes in the node:16.13 image compared to when I built the my-app:1.0 image, I periodically compare the SHA256 digests of the layers of node:16.13 with those of the first n = (number of layers of node:16.13) layers of my-app:1.0, as suggested in this answer. I retrieve the SHA256 digests with docker manifest inspect <image>:<tag> -v.
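As a rough sketch of that comparison (the JSON structure returned by docker manifest inspect -v differs between single- and multi-arch images, so the jq filter below may need adjusting for your registry):

# collect the layer digests of both images; '..' searches the manifest
# JSON recursively for "layers" arrays regardless of nesting
base=$(docker manifest inspect -v node:16.13 | jq -r '.. | .layers? // empty | .[].digest')
app=$(docker manifest inspect -v my-app:1.0 | jq -r '.. | .layers? // empty | .[].digest')
n=$(echo "$base" | wc -l)
# the base image changed if its layers no longer prefix the app's layers
[ "$base" = "$(echo "$app" | head -n "$n")" ] || echo "rebuild needed"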
If they differ, I rebuild my-app:1.0 and push it to my registry, thus ensuring that my-app:1.0 always uses the latest node:16.13 base image.
I keep the running containers on my server up to date by periodically running docker pull my-app:1.0 on the server using a cron job.
Limitations
When I check for updates I need to download the manifests for all my container images and their base images. For images hosted on Docker Hub this unfortunately counts against the download rate limit.
Since I always update the same image, my-app:1.0, it is hard to track which version is currently running on the server. This information is especially important when an update breaks a service. I keep track of the updates by logging the output of the docker pull command from the cron job.
To be able to revert the container image on the server I have to keep previous versions of the my-app:1.0 images as well. I do that by pushing incremental patch version tags along with the my-app:1.0 tag to my registry e.g. my-app:1.0.1, my-app:1.0.2, ...
Because of the way the layers of the base image and the app image are compared, it is not possible to detect a change in the base image where only the uppermost layers have been removed. However, I do not expect this to happen very often.
Thank you for your help!
There are a couple of things I'd do to simplify this.
docker pull already does essentially the sequence you describe, of downloading the image's manifest and then downloading layers you don't already have. If you docker build a new image with an identical base image, an identical Dockerfile, and identical COPY source files, then it won't actually produce a new image, just put a new name on the existing image ID. So it's possible to unconditionally docker build --pull images on a schedule, and it won't really use additional space. (It could cause more redeploys if neither the base image nor the application changes.)
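As a sketch, such a scheduled rebuild could be a single cron entry (paths and names hypothetical):

# --pull re-checks the base image against the registry; if nothing
# changed, the build reproduces the same image ID from cache
0 4 * * * cd /srv/my-app && docker build --pull -t registry.example.com/my-app:1.0 .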
[...] this unfortunately counts against the download rate limit.
There's not a lot you can do about that beyond running your own mirror of Docker Hub or ensuring your CI system has a Docker Hub login.
Since I always update the same image my-app:1.0 it is hard to track which version is currently running on the server. [...] To be able to revert the container image on the server [...]
I'd recommend always using a unique image tag per build. A sequential build ID as you have now works; date stamps or source-control commit IDs are usually easy to come up with as well. When you go to deploy, always use the full image tag, not the abbreviated one.
docker pull registry.example.com/my-app:1.0.5   # fetch the new build
docker stop my-app                              # stop the old container
docker rm my-app                                # and remove it
docker run -d ... registry.example.com/my-app:1.0.5   # start from the new tag
docker rmi registry.example.com/my-app:1.0.4    # drop the old image locally (it stays in the registry)
Now you're absolutely sure which build your server is running, and it's easy to revert should you need to.
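As a sketch of generating such a unique tag in CI (names hypothetical):

# a date stamp plus the short commit ID makes every build traceable
TAG="1.0-$(date +%Y%m%d)-$(git rev-parse --short HEAD)"
docker build -t "registry.example.com/my-app:$TAG" .
docker push "registry.example.com/my-app:$TAG"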
(If you're using Kubernetes as your deployment environment, this is especially important. Changing the text value of a Deployment object's image: field triggers Kubernetes's rolling-update mechanism. That approach is much easier than trying to ensure that every node has the same version of a shared tag.)
We have a base image with the tag latest. This base image is used by a bunch of applications. There might be some update to the base image (an OS upgrade, for example).
Do we need to rebuild and redeploy all applications when there is a change in the base image? Or, since the tag is latest and the new base image will also carry the tag latest, will the update happen at the Docker layer level and be taken care of without a restart?
Kubernetes has an imagePullPolicy: setting to control this. The default is that a node will only pull an image if it doesn’t already have it, except that if the image is using the :latest tag, it will always pull the image.
If you have a base image and then some derived image FROM my/base:latest, the derived image will include a specific version of the base image as its lowermost layers. If you update the base image and don’t rebuild the derived images, they will still use the same version of the base image. So, if you update the base image, you need to rebuild all of the deployed images.
If you have a running pod of some form and it’s running a :latest tag and the actual image that tag points at changes, Kubernetes has no way of noticing that, so you need to manually delete pods to force it to recreate them. That’s bad. Best practice is to use some explicit non-latest version tag (a date stamp works fine) so that you can update the image in the deployment and Kubernetes will redeploy for you.
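As a sketch, the relevant fragment of a Deployment manifest (names hypothetical); changing the tag in the image: field is what triggers the rolling update:

spec:
  template:
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:20240101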
There are two levels to this question.
Docker
If you use something like FROM baseimage:latest, that exact image is pulled down on your first build. Docker caches layers on consecutive builds, so not only will it build from the same baseimage:latest, it will also skip execution of the Dockerfile steps until the first changed/uncached one. To make the build notice changes to your base image you need to run docker pull baseimage:latest prior to the build, so that the next run uses the new content under the latest tag.
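For example (image names hypothetical):

docker pull baseimage:latest          # refresh the local copy of the tag
docker build -t my-app:dev .          # the build now starts from the new base
docker build --pull -t my-app:dev .   # or do both in one step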
The same goes for versioned tags that aggregate minor/patch versions: for example, you use baseimage:v1.2, the software is updated from baseimage:v1.2.3 to v1.2.4, and by the same process the content of v1.2.4 is published as v1.2. So be aware of how versioning is handled for a particular image.
Kubernetes
When you use :latest to deploy to Kubernetes, you usually have imagePullPolicy: Always set, which, as with the Docker build above, means that the image is always pulled before the container runs. This is far from ideal, and far from immutable: depending on the moment a container restarts, you might end up with two pods running at the same time, both from the same :latest tag, yet with :latest meaning a different actual image underneath for each of them.
Also, you can't really change the image in a Deployment from :latest to :latest, because that is obviously no change, meaning you are out of luck for triggering a rolling update unless you pass a version in a label or something similar (a sketch follows).
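A common trick (names hypothetical) is to bump a pod-template label, which changes the template and therefore triggers a rollout even though the image reference itself did not change; the label must not conflict with the Deployment's selector:

kubectl patch deployment my-app -p \
  '{"spec":{"template":{"metadata":{"labels":{"build":"2024-01-01"}}}}}'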
The good practice is to version your images somehow and push updates to the cluster with that version. That is how it is designed and intended to be used in general. Some versioning schemes I have used:
semantic (e.g. v1.2.7): nice if your CI/CD tool supports it well; I used it in Concourse CI
git_sha: works in many cases, but is problematic for rebuilds that are not triggered by code changes
branch-buildnum or branch-sha-buildnum: we use these quite a lot
That is not to say I never use latest. In fact, most of my builds are tagged branch-num, but when they are released to production they are also tagged and pushed to the registry as branch-latest (e.g. master-latest for prod), which is very helpful when you want to deploy a fresh cluster with the current production versions (the default tag values in our Helm charts point to latest and are set to a particular tag when released via CI).
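As a sketch, rolling such a versioned tag out to the cluster (deployment and container names hypothetical):

kubectl set image deployment/my-app my-app=registry.example.com/my-app:master-42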
We are discussing how we should deploy our application running in a Docker container. At the moment we build our application image, containing the application code, in the pipeline, which means we have to rebuild the Docker image every time the application is updated.
Another approach we are considering is putting the application code in a volume on the server and pulling the latest release with git on the server, so that the image does not have to be rebuilt.
So our discussed options are:
Build the image containing the application code
Use a volume and store the application code on the server
What is best practice to do and why?
While the other answers here have explained the point of building code into your image, I'd like to go one step further and show you how to get the benefits of both worlds while following this best practice.
Docker best practices call for building source code into your image before deployment, rather than deploying an image with dependencies installed and then source code mounted in as a volume.
This gives you a self-contained, portable container that is straightforward to test, deploy, or rollback.
May I take a stab at why you are considering hot-mounting code?
Hot-mounting code is appealing for several reasons — and they're all easy to achieve without sacrificing this best practice of building a self-contained image:
Building Docker images can be slow, so why rebuild for a minor change when you can just hot-mount the code?
A complementary best practice is to use a "base image" that installs all dependencies -- usually the slow part of building a docker image. The key insight is that this base image won't change often!
But the image that derives from it -- your application image, which installs source code -- will change with every commit you want to deploy. That derived image will be very fast to build. The Dockerfile could be as simple as:
# all dependencies are installed in the base image
FROM myapp/base
# ADD untars the archive into /src automatically
ADD code.tar.gz /src
# whatever it takes to run your app
CMD [...]
Hot-mounting enables faster development cycles, because a developer won't need to flush their docker container, rebuild, and run a new container just to see a change.
This is a fair point. I recommend making a "dev" image (which will also derive from your base image) that enables code mounting via a volume rather than the source code installation steps you'd have in your testing and deployment images.
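As a sketch (image name and paths hypothetical), such a dev image would be run with the source mounted over the location where the other images install it:

docker run --rm -it -v "$(pwd)/src:/src" myapp/dev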
When you build an image for every new version of the application, you get an easy way to deploy it later to a customer or to your production server. Once the Docker image is ready, you can keep it in the repository. Additionally, you have full confidence that the image is running the current version of the application.
In case of keeping the application in a mounted volume, you have to keep in mind the following problems:
the life cycle of the application: what to do with the container when you have to update the application (gracefully stop it, overwrite the code, and run it again)
how you deploy your application: do you do it manually over SSH, or do you just run a simple docker run command that starts the latest version from your repository?
Mounted volumes are rather for the following cases:
you want externally exposed settings for the container (which is also not a good idea)
you want external access to the data produced by the application, like logs, a database, etc.
To automate it fully, you can:
build an image for each application version and push it to the repository
use, for example, Watchtower to automatically update the containers on your production servers (a sketch follows this list)
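A minimal sketch of running Watchtower, which watches the registry and re-creates containers when their images change:

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower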
I believe you should follow the first approach, i.e. rebuilding the Docker image every time there are changes in the code. The reasons are:
Firstly, if you are using a volume, you have to manage the clean shutdown and removal of the previous version of the application every time, and check whether the new version is running correctly. The new version might also be affected by dependencies left over from the previous version, which needs to be taken care of too.
Secondly, there might be version updates to the installed frameworks, and new frameworks to install for the current application. In this case, the first approach seems to be the only option.
Thirdly, when you use a Docker volume for code you lose the most important feature of Docker, i.e. abstraction from the outside environment. The image might also become machine-dependent because of it, which can hurt if you want to publish the app to multiple environments.
My suggestion would be to create a pipeline using a continuous integration tool and fully automate the process, from building the code, to building the Docker image, to deploying it to your environment.
I have been trying to figure out why one might choose to add every "step" of their setup to a Dockerfile, which will create the container in a certain state.
The alternative in my mind is to just create a container from a simple base image like ubuntu and then (via shell input) configure your container the way you'd like.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
The reason I ask is because I imagine there is some amount of headache involved with porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and have them work correctly? But as a novice with Docker I could be overestimating the difficulty of that task.
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is for documentation as to the initial state of the container. As opposed to being given a container in a certain state, but not necessarily having a way to know what all was done from the container's image base state.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
Strictly speaking, no. However, you can create a new image from an existing container using the docker commit command:
$ docker commit <container-name> <image-name>
This command creates a new image from the existing container, which you can push to and pull from registries, export and import, and use to create new containers.
The reason I ask is because I imagine there is some amount of headache involved with porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and have them work correctly? But as a novice with Docker I could be overestimating the difficulty of that task.
If you're already using some other mechanism for automated configuration, you can simply integrate your existing automation into the Docker build. For instance, if you are already configuring your images using shell scripts, simply add a build step in your Dockerfile that adds your install scripts to the container and executes them. In theory, this can also work with configuration management utilities like Puppet, Salt and others.
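For instance, a minimal sketch reusing an existing shell script (file and image names hypothetical):

FROM ubuntu:22.04
# reuse the existing provisioning script instead of porting it to RUN steps
COPY install.sh /tmp/install.sh
RUN bash /tmp/install.sh && rm /tmp/install.sh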
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is for documentation as to the initial state of the container. As opposed to being given a container in a certain state, but not necessarily having a way to know what all was done from the container's image base state.
True. As mentioned in comments, there are clear advantages to have an automated and reproducible build of your image. If you build your containers manually and then create an image with docker commit, you don't necessarily know how to re-build this image at a later point in time (which may become necessary when you want to release a new version of your application or re-build the image on top of an updated base image).
I want to base my container on centos:centos6. But for some reason, centos:centos6 gets updated on the registry, which can result in different images when building on different machines at different times. Such a change recently caused a segmentation fault in our application.
Is there a way I can specify the exact version in the FROM instruction so the build will be the same even when the container is built on different machines at different times?
No, you can't tell a Dockerfile to be pinned to a particular image/layer ID; you have to use a tag (and if you don't use a tag, the tag latest is assumed and used as the default).
If you're worried that a remote registry image will change, you should take a copy of the Dockerfile and build your own version of the image yourself. You can either host it on the Docker Hub under your account or run your own private registry.
That way, you're in complete control over what's in there and when it gets updated (i.e., if you need to pin a particular package to an old version).
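A rough sketch of taking such a copy (registry name hypothetical):

docker pull centos:centos6
docker tag centos:centos6 registry.example.com/mirror/centos:centos6
docker push registry.example.com/mirror/centos:centos6
# and in your Dockerfile:
# FROM registry.example.com/mirror/centos:centos6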