How to require Docker Certified images on Gitlab?

Gitlab lets you use any image on Docker Hub, but how can I restrict this to Docker Certified images? The advice I read in Docker Reference Architecture: Building a Docker Secure Supply Chain implies that this is something I do manually when I look for an image:
Picking the right images from Docker Hub is critical. Start with
Certified Images, then move on to official images. Lastly, consider
community images. Only use community images that are an automated
build. This helps ensure that they are updated in a timely fashion.
Verification of the freshness of the image is important as well.
...
When searching Docker Hub for images, make sure to check the Docker Certified checkbox.
But can I set up Gitlab to ensure that the images I'm using are Certified Images? For example, suppose an image I chose loses its certification one day: I would want to be notified of the vulnerability automatically, say at build time or even more proactively.
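One pragmatic approximation, short of true certification checking, is a pipeline step that fails the build unless every base image appears on a hand-curated allowlist. This sketch is hypothetical: the script and allowlist file names are assumptions, and it checks membership in your own approved list (which you keep in sync with Certified Images manually), since Docker Hub does not expose a certification flag you can reliably query at build time:

#!/usr/bin/env bash
# check-images.sh -- fail the pipeline if the Dockerfile uses an image that is
# not in approved-images.txt (one image reference per line; hypothetical file).
set -eu
awk 'toupper($1) == "FROM" { print $2 }' Dockerfile | while read -r image; do
  if ! grep -qxF "$image" approved-images.txt; then
    echo "ERROR: $image is not on the approved image list" >&2
    exit 1
  fi
done

Run it as an early job in .gitlab-ci.yml; a scheduled pipeline gives you the proactive notification, since the job starts failing as soon as an image drops off your list.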

Related

Making sure the "platform" part of a 3rd party image from DockerHub is up-to-date

I would like to use other people's images from DockerHub. I trust the applications that the containers are for, and I trust the maintainers to update the DockerHub image if their application gets an update. I do not, however, trust them to docker build --pull every time the base image they use gets an update (for example when they use FROM debian:bullseye-slim in their Dockerfile).
What I want: to check continuously that everything (including the "platform", i.e. the Debian base image in this example) is up to date, without building everything myself (if possible).
My main problem: there seems to be no way to determine the base image of a pulled docker image (see Docker missing layer IDs in output and other stackexchange questions about the same thing). Therefore I cannot automatically and generally compare the layers of the image from my running container with the upstream base image (or even compare the date of the last update of the application image with that of the base image; a date-based check is sketched below).
Is there a way to do what I described above? Or do I really have to
a) fully trust the maintainer of the DockerHub image to have their build pipeline in check or
b) build everything myself (in which case: why would there even exist a DockerHub instead of a DockerFile-Hub)?
If that may be relevant: this is sort of an extension of this older question: https://serverfault.com/questions/611082/how-to-handle-security-updates-within-docker-containers
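One workaround for the date comparison mentioned above is to ask the registry for creation timestamps without pulling anything. A minimal sketch, assuming skopeo and jq are installed; the application image name is a made-up example:

# Compare creation dates of the base image and the app image without pulling
# either one (sketch; skopeo and jq assumed, image names are examples).
base_created=$(skopeo inspect docker://debian:bullseye-slim | jq -r .Created)
app_created=$(skopeo inspect docker://someuser/some-app:latest | jq -r .Created)
echo "base: $base_created   app: $app_created"
# If the base image is newer than the app image, the app was built before the
# latest base update and may be missing security fixes.

This does not prove the maintainer ran docker build --pull, but an app image older than its base image is a reliable signal that it cannot contain the latest base layers.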

How to improve automation of running containers' base image updates?

I want all running containers on my server to always use the latest version of an official base image, e.g. node:16.13, in order to get security updates. To achieve that I have implemented an image update mechanism for all container images in my registry using a CI workflow, which has some limitations described below.
I have read the answers to this question but they either involve building or inspecting images on the target server which I would like to avoid.
I am wondering whether there might be an easier way to achieve the container image updates or to alleviate some of the caveats I have encountered.
Current Image Update Mechanism
I build my container images using the FROM directive with the minor version I want to use:
FROM node:16.13
COPY . .
This image is pushed to a registry as my-app:1.0.
To check for changes in the node:16.13 image compared to when I built the my-app:1.0 image, I periodically compare the SHA256 digests of the layers of node:16.13 with those of the first n = (number of layers of node:16.13) layers of my-app:1.0, as suggested in this answer. I retrieve the SHA256 digests with docker manifest inspect <image>:<tag> -v.
If they differ, I rebuild my-app:1.0 and push it to my registry, thus ensuring that my-app:1.0 always uses the latest node:16.13 base image.
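A condensed sketch of that comparison, assuming jq is installed and that the -v output has the SchemaV2Manifest shape printed by recent docker versions; for a multi-arch image this takes the first platform's manifest, so you may need to select your architecture explicitly:

# Compare base-image layer digests with the first layers of the app image.
layers() {
  docker manifest inspect -v "$1" \
    | jq -r '[.] | flatten | .[0].SchemaV2Manifest.layers[].digest'
}
base_layers=$(layers node:16.13)
n=$(printf '%s\n' "$base_layers" | wc -l)
app_layers=$(layers my-app:1.0 | head -n "$n")   # prefix your registry if needed
if [ "$base_layers" != "$app_layers" ]; then
  echo "node:16.13 changed since my-app:1.0 was built -- rebuild needed"
fi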
I keep the running containers on my server up to date by periodically running docker pull my-app:1.0 on the server using a cron job.
Limitations
When I check for updates I need to download the manifests for all my container images and their base images. For images hosted on Docker Hub this unfortunately counts against the download rate limit.
Since I always update the same image my-app:1.0, it is hard to track which version is currently running on the server. This information is especially important when the update process breaks a service. I keep track of the updates by logging the output of the docker pull command from the cron job.
To be able to revert the container image on the server I have to keep previous versions of the my-app:1.0 images as well. I do that by pushing incremental patch version tags along with the my-app:1.0 tag to my registry e.g. my-app:1.0.1, my-app:1.0.2, ...
Because of the way the layers of the base image and the app image are compared, it is not possible to detect a change in the base image where only the uppermost layers have been removed. However, I do not expect this to happen very frequently.
Thank you for your help!
There are a couple of things I'd do to simplify this.
docker pull already does essentially the sequence you describe: downloading the image's manifest and then downloading the layers you don't already have. If you docker build a new image with an identical base image, an identical Dockerfile, and identical COPY source files, it won't actually produce a new image; it will just put a new name on the existing image ID. So it's possible to unconditionally docker build --pull images on a schedule, and it won't really use additional space. (It could cause more redeploys if neither the base image nor the application has changed.)
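A sketch of that scheduled rebuild, comparing image IDs to decide whether anything actually changed (registry.example.com is a placeholder registry name):

# Unconditionally rebuild with --pull; push only if the image ID changed.
old_id=$(docker image inspect -f '{{.Id}}' registry.example.com/my-app:1.0 2>/dev/null || true)
docker build --pull -t registry.example.com/my-app:1.0 .
new_id=$(docker image inspect -f '{{.Id}}' registry.example.com/my-app:1.0)
if [ "$old_id" != "$new_id" ]; then
  docker push registry.example.com/my-app:1.0
fi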
[...] this unfortunately counts against the download rate limit.
There's not a lot you can do about that beyond running your own mirror of Docker Hub or ensuring your CI system has a Docker Hub login.
Since I always update the same image my-app:1.0 it is hard to track which version is currently running on the server. [...] To be able to revert the container image on the server [...]
I'd recommend always using a unique image tag per build. A sequential build ID, as you have now, works; date stamps or source-control commit IDs are usually easy to come up with as well. When you go to deploy, always use the full image tag, not the abbreviated one.
docker pull registry.example.com/my-app:1.0.5    # fetch the new build
docker stop my-app                               # stop and remove the old container
docker rm my-app
docker run -d ... registry.example.com/my-app:1.0.5
docker rmi registry.example.com/my-app:1.0.4     # clean up the superseded image
Now you're absolutely sure which build your server is running, and it's easy to revert should you need to.
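Generating such a tag is a one-liner at build time; here is a sketch using a date stamp plus the commit ID (it assumes the build runs inside a git checkout):

# Build every image with a unique, human-readable tag (sketch).
TAG="$(date +%Y%m%d)-$(git rev-parse --short HEAD)"
docker build -t "registry.example.com/my-app:$TAG" .
docker push "registry.example.com/my-app:$TAG"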
(If you're using Kubernetes as your deployment environment, this is especially important. Changing the text value of a Deployment object's image: field triggers Kubernetes's rolling-update mechanism. That approach is much easier than trying to ensure that every node has the same version of a shared tag.)
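For the Kubernetes case the rolling update is a one-line change; this sketch assumes a Deployment and container both named my-app:

# Point the Deployment at the new tag; Kubernetes rolls the pods for you.
kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.0.5
kubectl rollout status deployment/my-app    # wait for the rollout to finish
kubectl rollout undo deployment/my-app      # this is the one-line revert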

Create ESXi server in Docker image

I would like to create a VMware image based on a Linux ISO file. The whole thing should take place within a build pipeline, using docker images.
My approach is to look for a suitable docker image with a free ESXi server and use it as the basis in the build pipeline. Unfortunately I cannot find such an image on DockerHub.
Isn't this approach possible?
I would have expected that several people would have done this before me and that I could use an existing docker image accordingly.

How to prevent docker images on docker hub from being overwritten?

Is there any way to prevent images being uploaded to docker hub with the same tags as existing images? Our use case is as follows.
We deploy to production with a docker-compose file with the tags of images as version numbers. In order to support roll-back to previous environments and idempotent deployment it is necessary that a certain tagged docker image always refer to the same image.
However, docker hub allows images to be uploaded with the same tags as existing images (they overwrite the old image). This completely breaks the idea of versioning your images.
We currently have work-arounds which involve our build scripts pulling all versions of an image and looking through the tags to check that an overwrite will not happen, etc., but it feels like there has to be a better way.
If docker hub does not support this, is there a way to do docker deployment without docker hub?
The tag system has no way of preventing images from being overwritten; you have to come up with your own processes to handle this (h3nrik's answer is an example).
However, you could use the digest instead. In the new v2 of the registry, all images are given a checksum, known as a digest. If an image or any of its base layers changes, the digest will change. So if you pull by digest, you can be absolutely certain that the contents of that image haven't changed over time and that the image hasn't been tampered with.
Pulling by digest looks like:
docker pull debian@sha256:f43366bc755696485050ce14e1429c481b6f0ca04505c4a3093dfdb4fafb899e
You should get the digest when you do a docker push.
Now, I agree that pulling by digest is a bit unwieldy, so you may want to set up a system that simply tracks digests and tags and can verify that the image hasn't changed.
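Recording the digest at push time is enough for such a system. A minimal sketch; the image name and log file are examples:

# After pushing, record the immutable digest alongside the mutable tag.
docker push registry.example.com/my-app:1.0
digest=$(docker inspect -f '{{index .RepoDigests 0}}' registry.example.com/my-app:1.0)
echo "1.0 $digest" >> deployed-digests.log   # hypothetical audit log
# Deploying by digest pins the exact image, even if the tag is later overwritten:
docker pull "$digest"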
In the future, this situation is likely to improve, with tools like Notary for signing images. Also, you may want to look at using labels to store metadata such as git hash or build number.
Assuming you have a local build system to build your Docker images: you could include the build number from your local build job in your tag. With that you assure your requirement:
... it is necessary that a certain tagged docker image always refer to the same image.
When your local build automatically pushes to docker hub it is assured that each push pushes an image with a unique tag.
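For example, with a Jenkins-style BUILD_NUMBER environment variable (substitute whatever your build system provides):

# Tag with the CI build number so every push gets a unique tag (sketch).
docker build -t "myorg/my-app:1.0.${BUILD_NUMBER}" .
docker push "myorg/my-app:1.0.${BUILD_NUMBER}"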

How to share images between multiple docker hosts?

I have two hosts and docker is installed in each.
As we know, each docker host stores its images in the local /var/lib/docker directory.
So if I want to use some image, such as ubuntu, I must execute docker pull to download it from the internet on each host.
I think it's slow.
Can I store the images on a shared disk array? Then one host could pull the image once, and every host with access to the shared disk could use the image directly.
Is this possible, and is it good practice? Why isn't docker designed like this?
It might require hacking docker's source code to implement this.
Have you looked at this article?
Dockerizing an Apt-Cacher-ng Service
http://docs.docker.com/examples/apt-cacher-ng/
extract
This container makes the second download of any package almost instant.
At least one node will be very fast, and I think it should be possible to tell the second node to use the cache of the first node.
Edit: you can run your own registry, with a command similar to
sudo docker run -p 5000:5000 registry
see
https://github.com/docker/docker-registry
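Once the registry is running, you push the image to it from the host that already has it, and every other host pulls from there instead of from the internet. A sketch, where registry-host stands for the machine running the registry; note that a plain-HTTP registry must be allowed via the daemon's --insecure-registry setting:

# On the host that already has the image: retag and push to the local registry.
docker tag ubuntu registry-host:5000/ubuntu
docker push registry-host:5000/ubuntu
# On any other host with network access to the registry:
docker pull registry-host:5000/ubuntu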
What you are trying to do is not supposed to work, as explained by cpuguy83 in this github/docker issue.
Indeed:
The underlying storage driver would need to synchronize access.
Sharing /var/lib/docker is far from enough and won't work!
According to docs.docker.com/registry:
You should use the Registry if you want to:
tightly control where your images are being stored
fully own your images distribution pipeline
integrate image storage and distribution tightly into your in-house development workflow
So I guess that this is the (/your) best option to work this out (you probably had that info already -- I just add it here to complete the details).
Good luck!
Update 2016-01-25: the docker mirror feature is deprecated.
Therefore this answer is no longer applicable; I leave it for reference.
Old info
What you need is the mirror mode for docker registry, see https://docs.docker.com/v1.6/articles/registry_mirror/
It is supported directly by docker-registry.
Of course, you can use a public mirror service locally.
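With the v2 registry the equivalent is a pull-through cache. A minimal sketch, assuming the registry:2 image and its documented REGISTRY_PROXY_REMOTEURL setting; each docker daemon must also be started with --registry-mirror pointing at the cache:

# Run the v2 registry as a pull-through cache of Docker Hub (sketch).
docker run -d -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
# Then start each daemon with --registry-mirror=http://<cache-host>:5000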
