docker image - merged/diff/work/LowerDir components of GraphDriver - docker

Below is a snippet of the GraphDriver entry from the output of docker inspect redis:
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/bd512eb256c8aa56cbe9243d440a311820712d1a245fe6f523d39d19cd6c862d/diff:/var/lib/docker/overlay2/7fa1e90f35c78fc83c3a
4b86e36e45d742383b394adf9ce4cf9b339d919c9cbe/diff:/var/lib/docker/overlay2/2c1869386b5b8542959da4f0173a5272b9703326d619f27258b4edff7a1dbbf9/diff:/var/lib/docker/overlay2
/23ba3955c5b72ec17b9c409bd5233a3d92cbd75543c7d144b364f8188765788e/diff:/var/lib/docker/overlay2/87d8a92919103e8ff723221200acb36e17c611fa499571ab183d0f51458e6f24/diff",
"MergedDir": "/var/lib/docker/overlay2/e503ed41978e99fe9b71a4225763a40b7988e9a4f31d4c06ef1ec1af46b0b6ab/merged",
"UpperDir": "/var/lib/docker/overlay2/e503ed41978e99fe9b71a4225763a40b7988e9a4f31d4c06ef1ec1af46b0b6ab/diff",
"WorkDir": "/var/lib/docker/overlay2/e503ed41978e99fe9b71a4225763a40b7988e9a4f31d4c06ef1ec1af46b0b6ab/work"
},
"Name": "overlay2"
},
where the overlay2 filesystem is used by docker for both images and containers.
Within the GraphDriver entry of the manifest, what do LowerDir, MergedDir, UpperDir and WorkDir indicate?

LowerDir: the read-only layers of an overlay filesystem. For docker, these are the image layers, assembled in order.
UpperDir: the read-write layer of an overlay filesystem. For docker, this is the container-specific layer that contains the changes made by that container.
WorkDir: a directory required by overlay for internal use; it must be empty.
MergedDir: the combined result of the overlay filesystem. Docker effectively chroots into this directory when running the container.
For more on overlay filesystems (overlay2 is a newer release, but I don't believe there are any user-visible changes), see the kernel docs: https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt
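To make the layering concrete, here is a minimal Python sketch (illustrative only, not Docker's implementation, and real overlayfs additionally uses whiteout files to represent deletions) of how a lookup in the merged view resolves: the upper layer wins, then the lower layers are searched in the order LowerDir lists them.

```python
def resolve(path, upper_files, lower_layers):
    """Return which layer provides `path` in the merged view.

    upper_files  : set of paths present in UpperDir (the container layer)
    lower_layers : list of sets of paths, topmost lower layer first,
                   i.e. the order in which LowerDir lists the image layers
    """
    if path in upper_files:
        return "upper"
    for i, layer in enumerate(lower_layers):
        if path in layer:
            return "lower[%d]" % i
    return None  # file does not exist in the merged view

# The container's own write shadows the image's copy:
print(resolve("/etc/app.conf", {"/etc/app.conf"}, [{"/etc/app.conf"}]))  # upper
# An untouched file comes from the first image layer that has it:
print(resolve("/bin/ls", set(), [set(), {"/bin/ls"}]))                   # lower[1]
```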

Related

How to locate docker images in WSL?

I am running WSL and I want to find where the docker images are stored in my filesystem.
I've tried docker info and it shows Docker Root Dir: /var/lib/docker,
but when I do ls /var/lib/docker it shows ls: cannot access '/var/lib/docker/': No such file or directory.
I tried finding some info and came across this fcc blog regarding Docker on Windows,
but the directory it points to contains only txt files and a tmp folder.
I tried inspecting an image I've built and it shows:
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/tm64pahi18wlse3ibs2uex6dm/diff:/var/lib/docker/overlay2/s4jgik32xo1e9qoqie33bt6d7/diff:/var/lib/docker/overlay2/m9ea6j2mrehihllvnm1s51uyi/diff:/var/lib/docker/overlay2/1o1586gytwxyzhuap98m5a9v4/diff:/var/lib/docker/overlay2/437dfxzj5gmfrbs6f5a9pj0xt/diff:/var/lib/docker/overlay2/4d53adbcad7cc93f261de7f36303a7e1c54ae1cf1accb2768c881be550dd4e95/diff:/var/lib/docker/overlay2/62106246a0f1977a89f193792c0f066a10bc8179e1406d1931cb8c8f15dc47f4/diff:/var/lib/docker/overlay2/28cf7a8fe3d1eab10f09a1056a7a36f88425c331cd0681b6cf1159156238cf4c/diff:/var/lib/docker/overlay2/6fd3c82e02de85abceb47d0a58a06405c1cab0301c73a70939f507a3810fa540/diff:/var/lib/docker/overlay2/4d5eb280e3c8445439d9ee40ba3c066e6c11caf0a76f104e2795195c50ac8389/diff",
"MergedDir": "/var/lib/docker/overlay2/9az4itvoy9m4jv669niircfar/merged",
"UpperDir": "/var/lib/docker/overlay2/9az4itvoy9m4jv669niircfar/diff",
"WorkDir": "/var/lib/docker/overlay2/9az4itvoy9m4jv669niircfar/work"
},
"Name": "overlay2"
}
but doing ls gives the same ls: cannot access '/var/lib/docker/': No such file or directory.
The only places where I can see anything I understand are docker images and Docker Desktop, where I can see the size an image takes up.
My OS info is below
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
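One relevant detail: with Docker Desktop's WSL2 backend, the daemon runs in a separate utility VM rather than inside the Ubuntu distro, which is why /var/lib/docker does not exist there. The sketch below just probes a few commonly reported data locations; the exact paths are assumptions and vary by Docker Desktop version and setup.

```python
import os

# Candidate docker data roots (assumptions; these vary by setup and
# Docker Desktop version, and the UNC path is only visible from Windows):
CANDIDATES = [
    "/var/lib/docker",  # docker installed natively inside the distro
    r"\\wsl$\docker-desktop-data\version-pack-data\community\docker",
]

def existing(paths):
    """Return only the candidate paths that exist on this machine."""
    return [p for p in paths if os.path.isdir(p)]

print(existing(CANDIDATES))
```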

NFS volume created manually mounts but shows empty contents

server: docker ubuntu, 18.06.3-ce
local : docker for mac, 19.03.13
I have created a volume on the swarm manually, pointing to a remote NFS server. When I try to mount this volume in a service it appears to work, but the contents are empty, and any writes seem to succeed (the calling code doesn't crash) yet the bytes are gone. Maybe even to /dev/null.
When I declare a similar volume inside the compose file it works. The only difference I can find is the label "com.docker.stack.namespace".
docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=10.0.1.100 \
    --opt device=:/data/ \
    my_nfs
version: "3.5"
services:
my-api:
volumes:
- "compose_nfs:/data1/" # works fine
- "externl_nfs:/data2/" # empty contents, forgotten writes
volumes:
externl_nfs:
external: true
compose_nfs:
driver: local
driver_opts:
type: nfs
o: addr=10.0.1.100
device: ":/data/"
When inspecting the volumes they are identical, except for that label.
{
    "CreatedAt": "2020-20-20T20:20:20Z",
    "Driver": "local",
    "Labels": {
        # label missing on the manually created one
        "com.docker.stack.namespace": "stackie"
    },
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "compose_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}
If you use an external volume, swarm is deferring the creation of that volume to you. Volumes are also local to the node they are created on, so you must create that volume on every node where swarm could schedule this job. For this reason, many will delegate the volume creation to swarm mode itself and put the definition in the compose file. So in your example, before scheduling the service, on each node run:
docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=10.0.1.100 \
    --opt device=:/data/ \
    external_nfs
Otherwise, when the service gets scheduled on a node where the volume is not defined, it appears that swarm will create the container anyway, and that create command generates a default named volume, storing the contents on that local node. (I could also see swarm failing to schedule the service because of a missing volume, but your example shows otherwise.)
Answering this myself, since it concerns an older version of docker and is probably not relevant to most people, considering the NFS part.
It appears to be a bug of some sort in docker/swarm:
- Create an NFS volume on the swarm (via the API, from remote)
- The volume is correct on the manager node that was contacted
- The volume is missing its options on all other worker nodes
As a strange side effect, the volume seems to work: it can be mounted, and writes succeed without issue, but all bytes written disappear. Reads work, but every file is "not found", which is consistent with the writes disappearing.
On manager:
> docker volume inspect external_nfs
[{
    "CreatedAt": "2020-11-03T15:56:44+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}]
On worker:
> docker volume inspect external_nfs
[{
    "CreatedAt": "2020-11-03T16:22:16+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": null,
    "Scope": "local"
}]

See original image tag from Kubernetes annotations / docker labels / docker configuration

Problem
I am unable to determine the original image tag (and so the image version) on a running container in Kubernetes.
Description
Pure Docker:
When running a container in pure docker and then inspecting the container, I can always view the image tag. For example, when starting a container running ubuntu:18.04 and then inspecting it (using docker inspect) I see the following output (hugely shortened for brevity):
[
    {
        "Id": "e4109d8d4a3835f92629732d9dcb0967c16f9a716c6bbda8edf3a4423d714d01",
        "Image": "sha256:2eb2d388e1a255c98029f40d6d7f8029fb13f1030abc8f11ccacbca686a8dc12",
        "Config": {
            "Hostname": "e4109d8d4a38",
            "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],
            "Image": "ubuntu:18.04",
            "Labels": {}
        }
    }
]
Here I can see that .Config.Image shows my original image + tag (ubuntu:18.04).
Kubernetes:
When pulling the same image into a Kubernetes cluster, it seems that Kubernetes always uses the digest when it creates the running container. Here is an example from a test pod also using ubuntu:18.04, showing output from kubectl describe pod...
Normal Scheduled 109s default-scheduler Successfully assigned default/test-697b599788-t9xp9 to docker-desktop
Normal Pulling 108s kubelet, docker-desktop Pulling image "ubuntu:18.04"
Normal Pulled 104s kubelet, docker-desktop Successfully pulled image "ubuntu:18.04"
Normal Created 104s kubelet, docker-desktop Created container ubuntu
Normal Started 103s kubelet, docker-desktop Started container ubuntu
Above, the image that has been pulled is the image that is described in my pod specification (namely, ubuntu:18.04). So far, so good.
When I inspect the container from the docker API (via the mounted socket), .Config.Image is showing the digest, not the original tag. I have again amended the output for brevity:
curl --unix-socket /var/run/docker.sock http://localhost/containers/99a29951d0b385875c54e586497dbbf1d3c6266bfc10351d0c75e6774394c682/json
{
    "Id": "99a29951d0b385875c54e586497dbbf1d3c6266bfc10351d0c75e6774394c682",
    "Image": "sha256:1e4467b07108685c38297025797890f0492c4ec509212e2e4b4822d367fe6bc8",
    "Config": {
        "Hostname": "test-7787dcf6d-h58rn",
        "Env": [
            "KUBERNETES_SERVICE_PORT=443",
            "KUBERNETES_SERVICE_PORT_HTTPS=443",
            "KUBERNETES_PORT=tcp://10.96.0.1:443",
            "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443",
            "KUBERNETES_PORT_443_TCP_PROTO=tcp",
            "KUBERNETES_PORT_443_TCP_PORT=443",
            "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1",
            "KUBERNETES_SERVICE_HOST=10.96.0.1",
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
        ],
        "Image": "ubuntu@sha256:5d1d5407f353843ecf8b16524bc5565aa332e9e6a1297c73a92d3e754b8a636d",
        "Labels": {
            "annotation.io.kubernetes.container.hash": "3ee6fba4",
            "annotation.io.kubernetes.container.restartCount": "0",
            "annotation.io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
            "annotation.io.kubernetes.container.terminationMessagePolicy": "File",
            "annotation.io.kubernetes.pod.terminationGracePeriod": "30",
            "io.kubernetes.container.logpath": "/var/log/pods/default_test-7787dcf6d-h58rn_cf601af0-84fe-4df7-8126-736189b6f7a6/ubuntu/0.log",
            "io.kubernetes.container.name": "ubuntu",
            "io.kubernetes.docker.type": "container",
            "io.kubernetes.pod.name": "test-7787dcf6d-h58rn",
            "io.kubernetes.pod.namespace": "default",
            "io.kubernetes.pod.uid": "cf601af0-84fe-4df7-8126-736189b6f7a6",
            "io.kubernetes.sandbox.id": "cf212829401e97a4ed65fb7707d912be676f775a895977ef0c9c62bb28fec74f"
        }
    }
}
If I grep the complete output, at no point do I find the tag "18.04". I understand the mutability aspect, and that using the digest is in line with best practice, but I am curious to know if there is a way to get my original tag.
If Kubernetes is pulling the digest of the image with the tag 18.04, is there an option to annotate the container with the original image tag, so that it becomes available in the docker API (in .Labels or .Env), or in short is there any way that the docker API can determine the original image tag, and thus the actual version of ubuntu (or any other application type)?
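For what it's worth, the pod object itself still records the tag: in the pod status, containerStatuses[].image normally reflects the image reference from the pod spec, while imageID holds the resolved digest. A small sketch parsing a hypothetical excerpt of kubectl get pod -o json output (the digest is the one from the inspect output above):

```python
import json

# Hypothetical excerpt of `kubectl get pod test -o json` output.
pod_json = """
{
  "status": {
    "containerStatuses": [
      {
        "name": "ubuntu",
        "image": "ubuntu:18.04",
        "imageID": "docker-pullable://ubuntu@sha256:5d1d5407f353843ecf8b16524bc5565aa332e9e6a1297c73a92d3e754b8a636d"
      }
    ]
  }
}
"""

pod = json.loads(pod_json)
for cs in pod["status"]["containerStatuses"]:
    # The tag as requested in the spec, plus the immutable digest actually running:
    print(cs["name"], "->", cs["image"], "(", cs["imageID"], ")")
```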

Difference between OCI image manifest and Docker V2.2 image manifest

I have a requirement to convert an OCI image manifest to the Docker V2.2 image format and vice versa, but I am not able to find any difference between the two. Is there an actual difference, or are they the same?
Docker Image Manifest V 2, Schema 2
Registry image manifests define the components that make up an image on a container registry (see section on container registries). The more common manifest format we’ll be working with is the Docker Image Manifest V2, Schema 2 (more simply, V2.2). There is also a V2, Schema 1 format that is commonly used but more complicated than V2.2 due to backwards-compatibility reasons against V1.
The V2.2 manifest format is a JSON blob with the following top-level fields:
schemaVersion - 2 in this case
mediaType - application/vnd.docker.distribution.manifest.v2+json
config - descriptor of container configuration blob
layers - list of descriptors of layer blobs, in the same order as the rootfs of the container configuration
Blob descriptors are JSON objects containing 3 fields:
mediaType - application/vnd.docker.container.image.v1+json for a container configuration or application/vnd.docker.image.rootfs.diff.tar.gzip for a layer
size - the size of the blob, in bytes
digest - the digest of the content
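To illustrate the three descriptor fields, here is a minimal sketch computing a descriptor for an arbitrary config blob (the blob content is made up; a real image configuration has many more fields). The digest is simply sha256 over the raw blob bytes:

```python
import hashlib
import json

# A tiny stand-in config blob (not a real image configuration).
blob = json.dumps({"architecture": "amd64", "os": "linux"}).encode()

descriptor = {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": len(blob),                                       # size of the blob, in bytes
    "digest": "sha256:" + hashlib.sha256(blob).hexdigest(),  # digest of the content
}
print(json.dumps(descriptor, indent=2))
```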
Here is an example of a V2.2 manifest format (for the Docker Hub busybox image):
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "config": {
        "mediaType": "application/vnd.docker.container.image.v1+json",
        "size": 1497,
        "digest": "sha256:3a093384ac306cbac30b67f1585e12b30ab1a899374dabc3170b9bca246f1444"
    },
    "layers": [
        {
            "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
            "size": 755724,
            "digest": "sha256:57c14dd66db0390dbf6da578421c077f6de8e88edd0815b4caa94607ba5f4c09"
        }
    ]
}
OCI Image Manifest
The OCI image format is essentially the same as the Docker V2.2 format, with a few differences.
mediaType - must be set to application/vnd.oci.image.manifest.v1+json
config.mediaType - must be set to application/vnd.oci.image.config.v1+json
Each object in layers must have mediaType be either application/vnd.oci.image.layer.v1.tar+gzip or application/vnd.oci.image.layer.v1.tar.
Source: https://containers.gitbook.io/build-containers-the-hard-way/#registry-format-oci-image-manifest
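Since the structure is identical and only the media types differ, a conversion can be sketched as a pure media-type rewrite. This is illustrative only: rewriting the manifest changes its bytes, so the manifest's own digest (and any signatures over it) will change, and less common media types such as non-distributable or uncompressed layers are not handled here.

```python
# Media-type mapping, Docker V2.2 -> OCI.
DOCKER_TO_OCI = {
    "application/vnd.docker.distribution.manifest.v2+json":
        "application/vnd.oci.image.manifest.v1+json",
    "application/vnd.docker.container.image.v1+json":
        "application/vnd.oci.image.config.v1+json",
    "application/vnd.docker.image.rootfs.diff.tar.gzip":
        "application/vnd.oci.image.layer.v1.tar+gzip",
}
# The reverse direction is just the inverted mapping.
OCI_TO_DOCKER = {v: k for k, v in DOCKER_TO_OCI.items()}

def convert(manifest, mapping):
    """Return a copy of `manifest` with all media types translated;
    sizes and digests of the config/layer blobs are unchanged."""
    out = dict(manifest)
    out["mediaType"] = mapping[manifest["mediaType"]]
    out["config"] = {**manifest["config"],
                     "mediaType": mapping[manifest["config"]["mediaType"]]}
    out["layers"] = [{**layer, "mediaType": mapping[layer["mediaType"]]}
                     for layer in manifest["layers"]]
    return out
```

Running convert over the busybox manifest above and then back with OCI_TO_DOCKER reproduces the original document, since nothing but the media-type strings is touched.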

Get docker overlay hash path by layer sha256

Each docker image layer corresponds to overlay data on disk (when using the "overlay" storage driver).
For example, we inspect a docker image and get its layers:
docker inspect <image> | grep 'RootFS' -A5
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:9c2f1836d49346677f8280bf0eb89c20853f6af4aa6e2fad87b0000bb181fad2",
"sha256:97a77835754fbd5f0883e663cd168ae6263551318d56476920ea05e500e371e6",
"sha256:7ea85bb6d4ef6de5e5ccaf47325b770b954142666b89ab034a0dff3cb98a2808",
...
Layer "sha256:9c2f1836d49346677f8280bf0eb89c20853f6af4aa6e2fad87b0000bb181fad2" correlates to data on disk, for example to "/var/lib/docker/overlay/0608b16c5e9a58f48cfe30ce9559b5c8676e23655719c7141fe75ae86076c3a9/root".
Is it possible to get this docker overlay root path /var/lib/docker/overlay/{hash}/root by image layer sha256?
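There is no direct mapping in the inspect output, but docker's layer database can be walked by hand. A hedged sketch, assuming docker's on-disk layout: the RootFS entries are diff IDs, docker chains them following the OCI image-spec ChainID rule, and the randomly named on-disk directory is recorded in the layer database under the chain ID (in a file assumed to be .../layerdb/sha256/<chainID>/cache-id).

```python
import hashlib

def chain_ids(diff_ids):
    """Compute chain IDs from the RootFS diff IDs (OCI image-spec rule):
    ChainID(L0) = DiffID(L0)
    ChainID(Ln) = sha256(ChainID(Ln-1) + " " + DiffID(Ln))
    """
    chain = [diff_ids[0]]
    for diff_id in diff_ids[1:]:
        digest = hashlib.sha256(
            (chain[-1] + " " + diff_id).encode()).hexdigest()
        chain.append("sha256:" + digest)
    return chain

# The on-disk directory name would then be read from (path assumed;
# "overlay2" becomes "overlay" for the older driver):
#   /var/lib/docker/image/overlay2/layerdb/sha256/<chainID hex>/cache-id
# and the layer data lives under /var/lib/docker/overlay2/<cache-id>.
```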
