Does Jenkins have a feature similar to CircleCI's DLC (Docker Layer Caching) that allows us to mount a Docker layer cache so it can be reused on different nodes?
https://circleci.com/docs/2.0/docker-layer-caching/
Is there a way to do variable/file transforms in the *.config file of a docker image as it's deployed (to a Kubernetes cluster - AKS)?
This could be done by performing the replacement and building a separate container image for each configuration, but that would lead to a lot of extra container images whose only difference is the configuration file.
A Docker image is built as a stack of read-only layers; a thin readable/writable layer is only added on top when a container is started from the image. The read-only layers (also called intermediate images) are generated as the commands in the Dockerfile are executed during the image build.
These intermediate layers are shared across Docker images, which increases reusability, decreases disk usage, and speeds up docker build by allowing each step to be cached.
So, if you create multiple images that differ only in a configuration setting, each image reuses the shared, unchanged layers, and only the changed configuration contributes extra size to the new images.
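Here is a minimal sketch of that layer-ordering idea; the image names, base image, and file paths are made up for illustration. Copying the configuration file in the last Dockerfile instruction means every earlier layer is shared between the variants:

cat > Dockerfile <<'EOF'
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates
# Copy the configuration last so all earlier layers are shared between variants
COPY app.config /etc/myapp/app.config
EOF

docker build -t myapp:config-a .     # assumes app.config (variant A) is in the build context
cp configs/b/app.config app.config   # swap in variant B (hypothetical path)
docker build -t myapp:config-b .     # only the final COPY layer differs between the two images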
Alternatively, if you use Kubernetes, you could define these configurations as a ConfigMap and mount them into the pod as volumes or environment variables.
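A minimal sketch of the ConfigMap alternative, with all names hypothetical:

kubectl create configmap myapp-config --from-file=app.config

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    volumeMounts:
    - name: config
      mountPath: /etc/myapp    # app.config appears here; no image rebuild needed
  volumes:
  - name: config
    configMap:
      name: myapp-config
EOF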
If we build an image on machine 1, tag it as machine1:latest, and push it to our Docker registry, then build another image from the same Dockerfile on machine 2, tag it as machine2:latest, and push it to the registry, will the registry reuse the layers of machine1:latest? Or will the layers be different because we built the image on a different machine?
In general, what factors will change or affect layer sharing in Docker?
The registry detects layers by their content hash and shares them when the hashes match exactly. If you build the same Dockerfile independently on different machines, the resulting layers usually end up with different hashes (file timestamps and other metadata differ between the builds), so the layers will not be shared.
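One common workaround is to seed the second build with the first image's layers via --cache-from. A sketch, with the registry and image names hypothetical (depending on your builder, the cached image may need to have been built with inline cache metadata):

docker pull registry.example.com/app:machine1   # make machine1's layers available locally
docker build --cache-from registry.example.com/app:machine1 \
  -t registry.example.com/app:machine2 .
docker push registry.example.com/app:machine2   # steps that hit the cache reuse machine1's layers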
I'm new to Kubernetes and I'm learning about it.
Are there any circumstances where Kubernetes is used to create a Docker image instead of pulling it from a repository?
Kubernetes natively does not create images, but you can run a piece of software such as Kaniko in the Kubernetes cluster to achieve it. Kaniko is a tool to build container images from a Dockerfile, inside a container or a Kubernetes cluster.
The kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry. Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile). We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one. After each command, we append a layer of changed files to the base image (if there are any) and update the image metadata.
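A rough sketch of running the Kaniko executor as a Kubernetes pod; the source repository, target registry, and secret names are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --context=git://github.com/example/repo.git      # hypothetical source repo
    - --dockerfile=Dockerfile
    - --destination=registry.example.com/app:latest    # hypothetical target registry
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker                       # Kaniko reads registry credentials here
  volumes:
  - name: docker-config
    secret:
      secretName: regcred                              # a docker-registry type secret
      items:
      - key: .dockerconfigjson
        path: config.json
EOF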
Several options exist to create docker images inside Kubernetes.
If you are already familiar with Docker and want a mature project, you could use Docker CE running inside Kubernetes. Check here: https://hub.docker.com/_/docker and look for the dind tag (docker-in-docker). Keep in mind there are pros and cons to this approach, so take care to understand them.
Kaniko seems to have potential but there's no version 1 release yet.
I've been using docker dind (docker-in-docker) to build Docker images that run in a production Kubernetes cluster, with good results.
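A rough sketch of the dind setup as a Kubernetes pod with a build sidecar; names are illustrative, and note that dind needs a privileged container, which is a real security trade-off:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dind-build
spec:
  containers:
  - name: dind
    image: docker:dind
    securityContext:
      privileged: true               # required by docker-in-docker
    env:
    - name: DOCKER_TLS_CERTDIR
      value: ""                      # plain TCP for brevity; enable TLS in real setups
  - name: client
    image: docker:latest
    command: ["sleep", "infinity"]   # exec in and run docker build/push from here
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375    # points at the dind sidecar
EOF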
I want to create a Nexus 3 Docker image with a predefined configuration (a few repos and dummy artifacts) for testing my library.
I can't call the Nexus API from the Dockerfile, because it requires a running Nexus.
I tried bringing up the Nexus 3 container, configuring it manually, and creating an image from the container:
docker commit ...
The new image was created, but when I start a new container from it, it doesn't contain all the manual configuration I did before.
How can I customize the nexus 3 image?
If I understand correctly, you are trying to create a portable, standalone, customized Nexus 3 installation in a self-contained Docker image for testing/distribution purposes.
Doing this by extending the official nexus3 docker image will not work. Have a look at their Dockerfile: it defines a volume for /nexus_data and there is currently no way of removing this from a child image.
It means that when you start a container without any specific options, a new volume is created for each new container. This is why your committed image starts with blank data. The best you can do is to name the data volume when you start the container (option -v nexus_data:/nexus_data for docker run) so that the same volume is reused. But the data will still live in your local Docker installation, not in the image.
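For example, a sketch following the volume path quoted above:

docker run -d --name nexus -p 8081:8081 -v nexus_data:/nexus_data sonatype/nexus3
# configure it, then replace the container; the named volume carries the data over:
docker rm -f nexus
docker run -d --name nexus -p 8081:8081 -v nexus_data:/nexus_data sonatype/nexus3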
To do what you wish, you need to build your own Docker image without a data volume. You can do this starting from the official Dockerfile above: just remove the VOLUME line. Then you can customize the container and commit it to an image that will contain the data.
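A sketch of that rebuild, assuming the official Dockerfile from Sonatype's repository; the image and container names are illustrative:

git clone https://github.com/sonatype/docker-nexus3.git
cd docker-nexus3
sed -i '/^VOLUME/d' Dockerfile                       # drop the data volume declaration
docker build -t nexus3-novolume .
docker run -d --name nexus-seed -p 8081:8081 nexus3-novolume
# ...configure repos and push dummy artifacts via the UI/API...
docker commit nexus-seed nexus3-preconfigured:test   # the data is now baked into the image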
My understanding is that Docker creates an image layer at every stage of a Dockerfile.
If I have X containers running on the same machine (where X >=2) and every container has a common underlying image layer (ie. debian), will docker keep only one copy of the base image on that machine, or does it have multiple copies for each container?
Is there a point this breaks down, or is it true for every layer in the dockerfile?
How does this work?
Does Kubernetes affect this in any way?
Docker's documentation page "Understand images, containers, and storage drivers" details most of this.
From Docker 1.10 onwards, all the layers that make up an image have an SHA256 secure content hash associated with them at build time. This hash is consistent across hosts and builds, as long as the content of the layer is the same.
If any number of images share a layer, only one copy of that layer will be stored and used by all images on that instance of the Docker Engine.
A tag like debian can refer to multiple SHA256 image hashes over time as new releases come out. Two images built with FROM debian don't necessarily share layers; they do only if the SHA256 hashes match.
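You can check this yourself by comparing the layer digest lists of two images, a quick sketch:

docker image inspect debian --format '{{json .RootFS.Layers}}'
docker image inspect myimage --format '{{json .RootFS.Layers}}'   # hypothetical image
# a layer is shared on disk only when the same sha256 digest appears in both lists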
Anything that runs the Docker Engine underneath will use this storage setup.
This sharing also works in the Docker Registry (version 2.2 and up for the best results). If you push images with layers that already exist on that registry, the existing layers are skipped. The same applies when pulling layers to your local engine.
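For instance, with a hypothetical tag and registry:

docker tag myapp:latest registry.example.com/myapp:v2
docker push registry.example.com/myapp:v2
# for layers the registry already holds, the push output reports them as already existing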