Docker image format

I would like to build a Docker image without docker itself. I have looked at [Packer](http://www.packer.io/docs/builders/docker.html), but it requires that Docker be installed on the builder host.
I have looked at the Docker Registry API documentation, but this information doesn't appear to be there.
I guess that the image is simply a tarball, but I would like to see a complete specification of the format, i.e. what exact format is required and whether any metadata files are needed. I could try downloading an image from the registry and looking at what's inside, but there is no information on how to fetch the image itself.
The idea of my project is to implement a script that creates an image from artifacts I have compiled, and uploads it to the registry. I would like to use OpenEmbedded for this purpose; essentially, this would be an extension to Bitbake.

The Docker image format is specified here: https://github.com/docker/docker/blob/master/image/spec/v1.md
The simplest possible image is a tar file containing the following:
repositories
uniqid/VERSION
uniqid/json
uniqid/layer.tar
where VERSION contains 1.0, layer.tar contains the chroot contents, and json and repositories are JSON files as specified in the spec above.
The resulting tar can be loaded into docker via docker load < image.tar
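For illustration, here is a hedged, untested sketch of assembling such a tar from an existing rootfs tarball (rootfs.tar is an assumed input; the image ID is just a random 64-hex-digit string, and the json content is the bare minimum rather than the full metadata from the spec):
# Generate an arbitrary 64-hex-digit image id
ID=$(head -c 32 /dev/urandom | sha256sum | cut -d' ' -f1)
mkdir -p image/$ID
# VERSION contains the literal string 1.0
echo -n 1.0 > image/$ID/VERSION
# layer.tar holds the chroot contents
cp rootfs.tar image/$ID/layer.tar
# Minimal per-layer metadata
echo "{\"id\":\"$ID\",\"created\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"os\":\"linux\",\"architecture\":\"amd64\"}" > image/$ID/json
# repositories maps repo:tag to the top layer id
echo "{\"scratch\":{\"latest\":\"$ID\"}}" > image/repositories
tar -C image -cf image.tar .
docker load < image.tar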

After reading James Coyle's blog, I figured that the docker save and docker load commands are what I need.
> docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
progrium/consul latest e9fe5db22401 11 days ago 25.81 MB
> docker save e9fe5db22401 | tar x
> ls e9fe5db22401*
VERSION json layer.tar
The VERSION file contains only 1.0, and json contains quite a lot of information:
{
"id": "e9fe5db224015ddfa5ee9dbe43b414ecee1f3108fb6ed91add11d2f506beabff",
"parent": "68f9e4929a4152df9b79d0a44eeda042b5555fbd30a36f98ab425780c8d692eb",
"created": "2014-08-20T17:54:30.98176344Z",
"container": "3878e7e9b9935b7a1988cb3ebe9cd45150ea4b09768fc1af54e79b224bf35f26",
"container_config": {
"Hostname": "7f17ad58b5b8",
"Domainname": "",
"User": "",
"Memory": 0,
"MemorySwap": 0,
"CpuShares": 0,
"Cpuset": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"PortSpecs": null,
"ExposedPorts": {
"53/udp": {},
"8300/tcp": {},
"8301/tcp": {},
"8301/udp": {},
"8302/tcp": {},
"8302/udp": {},
"8400/tcp": {},
"8500/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"HOME=/",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"SHELL=/bin/bash"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) CMD []"
],
"Image": "68f9e4929a4152df9b79d0a44eeda042b5555fbd30a36f98ab425780c8d692eb",
"Volumes": {
"/data": {}
},
"WorkingDir": "",
"Entrypoint": [
"/bin/start"
],
"NetworkDisabled": false,
"OnBuild": [
"ADD ./config /config/"
]
},
"docker_version": "1.1.2",
"author": "Jeff Lindsay <progrium#gmail.com>",
"config": {
"Hostname": "7f17ad58b5b8",
"Domainname": "",
"User": "",
"Memory": 0,
"MemorySwap": 0,
"CpuShares": 0,
"Cpuset": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"PortSpecs": null,
"ExposedPorts": {
"53/udp": {},
"8300/tcp": {},
"8301/tcp": {},
"8301/udp": {},
"8302/tcp": {},
"8302/udp": {},
"8400/tcp": {},
"8500/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"HOME=/",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"SHELL=/bin/bash"
],
"Cmd": [],
"Image": "68f9e4929a4152df9b79d0a44eeda042b5555fbd30a36f98ab425780c8d692eb",
"Volumes": {
"/data": {}
},
"WorkingDir": "",
"Entrypoint": [
"/bin/start"
],
"NetworkDisabled": false,
"OnBuild": [
"ADD ./config /config/"
]
},
"architecture": "amd64",
"os": "linux",
"Size": 0
}
The layer.tar file appears to be empty, so I inspected the parent and the grandparent; both contained no files in their layer.tar archives.
So, assuming that 4.0K is the standard size for an empty tarball, I ran:
for layer in $(du -hs */layer.tar | grep -v 4.0K | cut -f2)
do (echo $layer:;tar tvf $layer)
done
This shows that these layers contain simple incremental changes to the filesystem.
So one conclusion is that it's probably best to just use Docker to build the image and push it to the registry, just as Packer does.
The way to build an image from scratch is described in the docs.
It turns out that docker import - scratch doesn't care about what's in the tarball. It simply assumes that it is the rootfs.
> touch foo
> tar c foo | docker import - scratch
02bb6cd70aa2c9fbaba37c8031c7412272d804d50b2ec608e14db054fc0b9fab
> docker save 02bb6cd70aa2c9fbaba37c8031c7412272d804d50b2ec608e14db054fc0b9fab | tar x
> ls 02bb6cd70aa2c9fbaba37c8031c7412272d804d50b2ec608e14db054fc0b9fab/
VERSION json layer.tar
> tar tvf 02bb6cd70aa2c9fbaba37c8031c7412272d804d50b2ec608e14db054fc0b9fab/layer.tar
drwxr-xr-x 0/0 0 2014-09-01 13:46 ./
-rw-r--r-- 500/500 0 2014-09-01 13:46 foo
In terms of OpenEmbedded integration, it's probably best to build the rootfs tarball, which is something Yocto provides out of the box, use the official Python library to import the rootfs tarball with import_image(src='rootfs.tar', repository='scratch'), and then push it to a private registry.
This is not the most elegant solution, but that's how it would have to work at the moment. Otherwise, one can probably just manage and deploy rootfs revisions in their own way and use docker import on the target host, which still won't be a nice fit, but is somewhat simpler.
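For reference, a minimal shell equivalent of that import-and-push flow (registry.example.com and the image names are placeholders):
# Import the rootfs tarball as a new image (the CLI analogue of import_image)
docker import rootfs.tar scratch:latest
# Retag for the private registry and push
docker tag scratch:latest registry.example.com/scratch:latest
docker push registry.example.com/scratch:latest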

Related

Docker restart policy is ignored if no space left on device

Situation:
I run docker services on a production server.
The restart policy is set to always.
Disk usage of some services is volatile (a file-sharing service), which means the disk fills to 100% from time to time, and there is a delay until it's cleaned up again.
Problem:
If a docker service exits while the disk is full, docker does not try to restart that service. Even after disk space is available again, the service is not restarted automatically. Manually restarting the service works, but that's not what I want for a production service.
The actual error is:
mkdir /var/lib/docker/overlay2/0e609c8b4059d3e0f1273bd8cb9e9a95c3d76730798a391dd360054ac450f3ed/merged: no space left on device
Question:
Is there a way to keep docker services restarting under any conditions?
Logs:
docker inspect <service> (official mongodb service in the following example - other services are similar)
[
{
"Id": "b5b54e0dcd8c91b5ead96ce77d2b28afb4b973e205f81a7f6f2fb6b11920f40d",
"Created": "2022-01-13T08:24:03.142162294Z",
"Path": "docker-entrypoint.sh",
"Args": [
"mongod"
],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 133,
"Error": "mkdir /var/lib/docker/overlay2/0e609c8b4059d3e0f1273bd8cb9e9a95c3d76730798a391dd360054ac450f3ed/merged: no space left on device",
"StartedAt": "2022-01-21T19:12:56.019305407Z",
"FinishedAt": "2022-01-21T20:11:37.176315302Z"
},
...
"RestartCount": 4,
...
"HostConfig": {
"RestartPolicy": {
"Name": "always",
"MaximumRetryCount": 0
},
...
}
]

How can I calculate a deterministic and reproducible checksum of a docker image, locally, without pinging any registry?

The checksum should not depend on the image name or in which registry it lives. It should solely depend on the content of all layers.
For example, assume the following:
a given file a
a dockerfile with the content
FROM scratch
COPY a /a
Then building the image with docker build . --no-cache multiple times should always yield the same checksum.
The regular image ID does not cut it, as it somehow uses content from intermediate containers and hence always changes. I am also aware that since Docker 1.10, images have a "RepoDigest" attribute, which uniquely identifies images based on their layers' content. However, as far as I can tell, that digest is only calculated when pulling or pushing to a registry. Is there a way to get this field without contacting a registry? (and is it actually deterministic, regardless of image name, tag or repo?)
Basically, I'm looking for a way to run a good ol' sha256sum on a docker image. This would help me achieve something similar to what can be done with Bazel: a hermetic build environment, which in turn enables:
declaring dependencies between docker images, and have a CI system only rebuild what is needed without using docker's cache (assuming that I have a build tool which already manages caches)
allow me to "sign" images using the same approach as signing classic tarballs (that is, publish a checksum and somehow sign that)
the big one: enable reproducible builds!
This should be what Sigstore is for. It is made up of three projects:
Cosign, which signs software.
Fulcio, a certificate authority that lets anyone access short-lived certificates via OpenID Connect.
Rekor, a secure log of signing events that allows you to verify the provenance of software artifacts.
You can then follow "Keyless Sign and Verify Your Container Images With Cosign" (Chris Nesbitt-Smith)
Behind the scenes, cosign creates an ephemeral keypair (it lasts 20 minutes) and gets it signed by Fulcio using your authenticated OIDC identity.
That is OIDC: OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol.
OIDC allows:
Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server,
as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
COSIGN_EXPERIMENTAL=1 cosign sign image:tag
COSIGN_EXPERIMENTAL=1 cosign verify image:tag
But you would need to set up your own local OCI registry in order to keep the whole toolchain local, since cosign stores signatures in an OCI registry and uses a naming convention (a tag based on the sha256 of what we're signing) for locating the signature index.
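As a rough sketch of keeping that loop local (image names are placeholders):
# Throwaway local registry to hold both the image and its signature
docker run -d -p 5000:5000 --name registry registry:2
docker tag image:tag localhost:5000/image:tag
docker push localhost:5000/image:tag
COSIGN_EXPERIMENTAL=1 cosign sign localhost:5000/image:tag
COSIGN_EXPERIMENTAL=1 cosign verify localhost:5000/image:tag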
It looks like you're working on the same problem that I'm actively solving right now.
The big issue with the question is that container image builds with docker build are not deterministic or reproducible unless the build happens to reuse the cache from a previous build. A container image build, even with the same filesystem layers, contains metadata on that build, and the metadata contains timestamps:
$ regctl manifest get localhost:5000/library/alpine --platform linux/amd64 --format body | jq .
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 1472,
"digest": "sha256:0ac33e5f5afa79e084075e8698a22d574816eea8d7b7d480586835657c3e1c8b"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 2814559,
"digest": "sha256:df9b9388f04ad6279a7410b85cedfdcb2208c0a003da7ab5613af71079148139"
}
]
}
$ regctl blob get localhost:5000/library/alpine sha256:0ac33e5f5afa79e084075e8698a22d574816eea8d7b7d480586835657c3e1c8b | jq .
{
"architecture": "amd64",
"config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh"
],
"Image": "sha256:d49869997c508135352366cebd3509ee756bba1ceb8eef708a4c3ff0d481084a",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"container": "b714116bd3f3418e7b61a6d70dd7244382f0844e47a8d1d66dbf61cb1cb02b2b",
"container_config": {
"Hostname": "b714116bd3f3",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"/bin/sh\"]"
],
"Image": "sha256:d49869997c508135352366cebd3509ee756bba1ceb8eef708a4c3ff0d481084a",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"created": "2022-04-05T00:19:59.912662499Z",
"docker_version": "20.10.12",
"history": [
{
"created": "2022-04-05T00:19:59.790636867Z",
"created_by": "/bin/sh -c #(nop) ADD file:5d673d25da3a14ce1f6cf66e4c7fd4f4b85a3759a9d93efb3fd9ff852b5b56e4 in / "
},
{
"created": "2022-04-05T00:19:59.912662499Z",
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/sh\"]",
"empty_layer": true
}
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:4fc242d58285699eca05db3cc7c7122a2b8e014d9481f323bd9277baacfa0628"
]
}
}
Both the "created" and "history" steps have timestamps that will be unique to the build. Changing those timestamps changes the digest of the config blob, which changes the digest of the image manifest.
The next issue you'll run into is that the JSON serialization would need to be canonical. Some tools use pretty formatting like jq, others eliminate all unneeded whitespace for compactness, the order of keys in a map doesn't need to be alphabetical, and so on. So you need to ensure that the same tool is always used for serialization and that it produces canonical output.
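A quick shell illustration of the problem; the same content hashes differently as soon as whitespace or key order changes, so you have to settle on one canonical serialization (here jq -cS, compact with sorted keys):
echo '{"b":2,"a":1}' | jq . | sha256sum     # pretty-printed
echo '{"b":2,"a":1}' | jq -c . | sha256sum  # compact: different digest
echo '{"a":1,"b":2}' | jq -cS . | sha256sum # sorted + compact: same output for any key order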
To build without pushing to a registry, you can have docker's buildkit output to an OCI layout tar file:
docker build --output type=oci,dest=/path/to/file.tar .
And in that tar, you will find an index.json with the digest of an image manifest as it was created by buildkit. I've been taking this a step further with regclient's image modification features, changing timestamps (in my case to the git commit time) and stripping other mutable values from the build. Then I verify the result matches a previous build.
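For example, a small sketch of reading that digest back out of the OCI layout tar:
mkdir oci-layout
tar -xf /path/to/file.tar -C oci-layout
# index.json sits at the root of the layout and points at the image manifest
jq -r '.manifests[0].digest' oci-layout/index.json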
Tools like cosign will allow you to sign an image using a digest rather than depending on the image in the registry, even before that image has been pushed.
The image mod feature in regclient is still very much a WIP, but you can see the current features here:
$ regctl image mod --help
EXPERIMENTAL: Applies requested modifications to an image
Usage:
regctl image mod <image_ref> [flags]
Flags:
--annotation stringArray set an annotation (name=value) (default )
--annotation-base stringArray set base image annotations (image/name:tag,sha256:digest) (default )
--buildarg-rm string delete a build arg (default "")
--buildarg-rm-regex string delete a build arg with a regex value (default "")
--config-time-max string max timestamp for a config (default "")
--create string Create tag
--data-max stringArray sets or removes descriptor data field (size in bytes) (default )
--expose-add stringArray add an exposed port (default )
--expose-rm stringArray delete an exposed port (default )
--external-urls-rm remove external url references from layers (first copy image with "--include-external") (default )
-h, --help help for mod
--label stringArray set an label (name=value) (default )
--label-to-annotation set annotations from labels (default )
--layer-rm-created-by string delete a layer based on history (created by string is a regex) (default "")
--layer-rm-index uint delete a layer from an image (index begins at 0) (default )
--layer-strip-file string delete a file or directory from all layers (default "")
--layer-time-max string max timestamp for a layer (default "")
--replace Replace tag (ignored when "create" is used)
--time-max string max timestamp for both the config and layers (default "")
--to-oci convert to OCI media types (default )
--volume-add stringArray add a volume definition (default )
--volume-rm stringArray delete a volume definition (default )
Global Flags:
--logopt stringArray Log options
--user-agent string Override user agent
-v, --verbosity string Log level (debug, info, warn, error, fatal, panic) (default "warning")
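As an illustration, a hedged combination of the flags above, pinning all timestamps to the last git commit time and converting to OCI media types (exact flag behavior may differ between regctl versions; the image names are placeholders):
regctl image mod localhost:5000/myimage:build \
  --time-max "$(git show -s --format=%cI HEAD)" \
  --to-oci \
  --create reproducible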
The other part of the puzzle is to make RUN steps reproducible. That's less trivial since not only do the files have timestamps, but the contents of the files being created could have timestamps or other mutable content, and the commands could pull from external mutable sources. Solving that part of the problem is still a work in progress for me.
For a Docker image named "hello-world":
docker save --output hello-world.tar hello-world
sha256sum hello-world.tar
It should give you the content SHA of the image.

Read timeout connecting to server on Docker container

I'm trying to connect to a DB/2 container (image: ibmcom/db2), but it gives me a read timeout error. The host OS is Windows 10. I can see the port (50000) in the Windows PowerShell prompt, but it gives me a read timeout.
I've added an inbound Windows Defender rule to allow all local ports and an outbound rule to allow all remote ports, regardless of the program. I realize this is not a good practice, but I'm trying to rule out a firewall issue. Despite this, it still gives me a read timeout error. I added more specific rules earlier, but they naturally did not help.
I also started an SSH server in that container and could log into it from within the container, but not from outside of it. When connecting from outside, I got the same read timeout message. I do not feel this is a db2 issue.
Having said that, I was able to get sickp/alpine-sshd:7.5-r2 and gists/lighttpd to start and be accessible from the host. That is, I can see the default web page for lighttpd and log into the SSHD server for alpine-sshd. Both of these work with no appreciable delay. This worked before making the above firewall adjustments.
I'm convinced that somehow, this container is not working for me. Other people have tried the exact same docker run that I provide below, and it comes up for them.
I'm using Win 10, WSL2. Docker version 20.10.7, build f0df350.
I start the container by doing:
docker run -itd --name mydb-db2 \
--privileged=true \
-p 50000:50000 \
-e LICENSE=accept \
-e B2INSTANCE=db2inst1 \
-e DB2INST1_PASSWORD=<mypassword> \
-e DBNAME=MYDB \
-e TO_CREATE_SAMPLEDB=false \
-v db2:/database \
ibmcom/db2
Netstat evidence:
C:\Software>netstat /a /n |grep 50000
TCP 0.0.0.0:50000 0.0.0.0:0 LISTENING
TCP [::]:50000 [::]:0 LISTENING
Attempt to connect to jdbc:db2://localhost:50000/MYDB
on host system results in "Read timed out. ERRORCODE=-4499, SQLSTATE=08001"
Docker container status:
~/projects-new/db2$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
NAMES
110aa19976dd ibmcom/db2 "/var/db2_setup/lib/…" 2 days ago Up 28 minutes 22/tcp, 55000/tcp, 60006-60007/tcp, 0.0.0.0:50000->50000/tcp, :::50000->50000/tcp mydb-db2
Inspection of container:
~/projects-new/db2$ docker container inspect 110aa
[
{
"Id": "110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2",
"Created": "2021-07-16T04:10:51.1247765Z",
"Path": "/var/db2_setup/lib/setup_db2_instance.sh",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 5459,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-07-18T03:56:45.0493495Z",
"FinishedAt": "2021-07-18T03:54:18.4239523Z"
},
"Image": "sha256:a6a5ee354fb1242a75d508982041cd48883f3fe7c9c9b485be0da6c0ebd44a39",
"ResolvConfPath": "/var/lib/docker/containers/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2/hostname",
"HostsPath": "/var/lib/docker/containers/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2/hosts",
"LogPath": "/var/lib/docker/containers/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2-json.log",
"Name": "/mydb-db2",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"db2:/database"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"50000/tcp": [
{
"HostIp": "",
"HostPort": "50000"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"label=disable"
],
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b6ecb6d5e949ab8e58d9238e34878a563a45f5045d57c684e5a08b6ec833ebb4-init/diff:/var/lib/docker/overlay2/6cf25bf1ac29315c3832316ef32b1cae8cf1ed6e71e4ddd9d08ab5566f81da9e/diff:/var/lib/docker/overlay2/76ca13571a6d253356b48ac20b408d33f80c5e6b429c132533e60c7578e99fb3/diff:/var/lib/docker/overlay2/e1a78196ef6f70929701e708904cb2696189c37a40839a0f20407148d2d90f1d/diff:/var/lib/docker/overlay2/efa2b4a3bc7e7411a671f05ad9121a4bb609452560b5f73d4b765e8519bfa36d/diff:/var/lib/docker/overlay2/933425814e17216adcfcac390e789c6dfc8ada12ded902db2ca9a542a5ff555c/diff:/var/lib/docker/overlay2/2ec2f25d859b77fd93a16468e40de569c41b35055c58277ad97d839cb33a01ac/diff:/var/lib/docker/overlay2/62aeaecc9fea67541671d95f691a2d8ddc9076ee0ae3bc96cd3b030a3ecc663b/diff:/var/lib/docker/overlay2/f04ce4e91dedc0c14073e43734ca252a7c0bd6f6ed9ab89f77d6797f72312f2d/diff:/var/lib/docker/overlay2/21b929e594040a64ffb0cd2c8bd4d3d7f630a3ec3dd79e8157c41c0d9783faa6/diff:/var/lib/docker/overlay2/c5e235fc2e9dc254394bcae472264b133530f5dfbb285cfe5f0ba0dac26ce4c4/diff:/var/lib/docker/overlay2/8f68a8bb1e9ca565aa1d8debc221bb498512a6ed24cc07bcf3ef07c8c42e045f/diff:/var/lib/docker/overlay2/745a0aa01d1a904ce08c22d07be527cdb39da0c37b87a66a57062cc307ca4d4c/diff:/var/lib/docker/overlay2/f0a873fda45d17a036833dd0dc9362f02b0ab00c590f23bf38ba59d06c624272/diff",
"MergedDir": "/var/lib/docker/overlay2/b6ecb6d5e949ab8e58d9238e34878a563a45f5045d57c684e5a08b6ec833ebb4/merged",
"UpperDir": "/var/lib/docker/overlay2/b6ecb6d5e949ab8e58d9238e34878a563a45f5045d57c684e5a08b6ec833ebb4/diff",
"WorkDir": "/var/lib/docker/overlay2/b6ecb6d5e949ab8e58d9238e34878a563a45f5045d57c684e5a08b6ec833ebb4/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "db2",
"Source": "/var/lib/docker/volumes/db2/_data",
"Destination": "/database",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "volume",
"Name": "47c06e44c75f70947a907a0972924536761f70f15971459e8be6015b29e2e48c",
"Source": "/var/lib/docker/volumes/47c06e44c75f70947a907a0972924536761f70f15971459e8be6015b29e2e48c/_data",
"Destination": "/hadr",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "110aa19976dd",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"50000/tcp": {},
"55000/tcp": {},
"60006/tcp": {},
"60007/tcp": {}
},
"Tty": true,
"OpenStdin": true,
"StdinOnce": false,
"Env": [
"LICENSE=accept",
"B2INSTANCE=db2inst1",
"DB2INST1_PASSWORD=<mypassword>",
"DBNAME=BLUECOST",
"TO_CREATE_SAMPLEDB=false",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"container=oci",
"STORAGE_DIR=/database",
"HADR_SHARED_DIR=/hadr",
"DBPORT=50000",
"TSPORT=55000",
"SETUPDIR=/var/db2_setup",
"SETUPAREA=/tmp/setup",
"NOTVISIBLE=in users profile",
"LICENSE_NAME=db2dec.lic"
],
"Cmd": null,
"Image": "ibmcom/db2",
"Volumes": {
"/database": {},
"/hadr": {}
},
"WorkingDir": "",
"Entrypoint": [
"/var/db2_setup/lib/setup_db2_instance.sh"
],
"OnBuild": null,
"Labels": {
"architecture": "x86_64",
"build-date": "2021-06-01T05:31:45.840349",
"com.redhat.build-host": "cpt-1007.osbs.prod.upshift.rdu2.redhat.com",
"com.redhat.component": "ubi7-container",
"com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
"description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"desktop.docker.io/wsl-distro": "Ubuntu-20.04",
"distribution-scope": "public",
"io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"io.k8s.display-name": "Red Hat Universal Base Image 7",
"io.openshift.tags": "base rhel7",
"name": "ubi7",
"release": "405",
"summary": "Provides the latest release of the Red Hat Universal Base Image 7.",
"url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi7/images/7.9-405",
"vcs-ref": "a4e710a688a6374670ecdd56637c3f683d11cbe3",
"vcs-type": "git",
"vendor": "Red Hat, Inc.",
"version": "7.9"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "570856178f99951c7cdfccc638a3404f906a7a89905ba9d39181cd9310f4380b",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": null,
"50000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "50000"
},
{
"HostIp": "::",
"HostPort": "50000"
}
],
"55000/tcp": null,
"60006/tcp": null,
"60007/tcp": null
},
"SandboxKey": "/var/run/docker/netns/570856178f99",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "a50d8643af88c0d677a9dc2d889f20ab909f46707bb7bd0f8168666b18d1b414",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "408fe3a7130f9791810b8668b60b7f90478f4673f79270539044362e8c12d88f",
"EndpointID": "a50d8643af88c0d677a9dc2d889f20ab909f46707bb7bd0f8168666b18d1b414",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]
I didn't see the db2 container listed. These are my networks:
C:\Software>docker network ls
NETWORK ID NAME DRIVER SCOPE
408fe3a7130f bridge bridge local
38fc17e8e6f1 cirrus-ssc-file-sender_default bridge local
1668ab71959f host host local
4bf4f6b3a57e minikube bridge local
e07fc0032414 none null local
Instead, I found it on the bridge network.
I'm not trying to do anything fancy. I'd really rather it run on the host network. If the host system can "see" the exposed port of 50000 via netstat, wouldn't that mean it's not a firewall issue?
Update: I turned off Windows Defender and it still does not work.
Update 2: I hosted the same container on a different machine but on my home network. When I try to connect to it from the problem machine, it gives me the same read timeout error. However, it works from the hosting machine. Somehow there seems to be a problem between this particular Windows machine and this particular container.
Update 3: SVCENAME info:
I ran the following inside the db2 container:
$su db2inst1 (when I log in it goes to root)
$cd ~
$. ./.bashrc
$db2 get dbm cfg |grep SVCENAME
TCP/IP Service name (SVCENAME) = db2c_db2inst1
SSL service name (SSL_SVCENAME) =
$grep db2c_db2inst1 /etc/services
db2c_db2inst1 50000/tcp
db2c_db2inst1_ssl 50001/tcp
DB2 Container OS Version info:
$ cat /etc/*release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.9 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.9 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.9:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.9
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.9"
Red Hat Enterprise Linux Server release 7.9 (Maipo)
Red Hat Enterprise Linux Server release 7.9 (Maipo)
WSL Linux version used:
$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
Windows version info of the host system (from winver):
Windows 10
Version 21H1 (OS Build 19043.1110)
Computer successfully connecting to DB/2 container:
$ cat /etc/*release
Fedora release 30 (Thirty)
NAME=Fedora
VERSION="30 (Workstation Edition)"
ID=fedora
VERSION_ID=30
VERSION_CODENAME=""
PLATFORM_ID="platform:f30"
PRETTY_NAME="Fedora 30 (Workstation Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:30"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f30/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=30
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=30
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation
Fedora release 30 (Thirty)
Fedora release 30 (Thirty)
Your symptom might be caused by some machine-specific configuration, or some downlevel component (in particular WSL2).
On my hardware, at the current date, with the current version of WSL2, ibmcom/db2 accepts connections from the local MS-Windows host (via JDBC) with the following mix of components:
MS-Windows 10 Pro build 19043 (21H1) x64
the latest build of the "Linux WSL2 kernel package for x64 machines"
Docker Desktop 3.5.2 configured to use WSL2
However, with a previous mix of configurations, I recreated your failure symptom with WSL2, i.e. a JDBC connection attempt from the local MS-Windows host into the Linux container gave sqlcode -4499 (in my case reply.fill() insufficient data).
The failing combination was:
MS-Windows 10 Pro build 19041 x64.
older build of "Linux WSL2 kernel package for x64 machines" (downloaded before 22/July/2021)
Docker Desktop 3.5.2 configured for WSL2
With the previous failing combination, only the WSL2 back end recreated your symptom; the Hyper-V back end worked correctly.
With Docker Desktop on a Win10 Pro environment, right-click on its icon, choose Settings, and it lets you tick (or untick) "Use WSL2 based engine"; click Apply and Restart. You may get other notifications. You may lose your containers and images and need to download them again, so if you need to preserve any data, arrange that separately before changing the back end.
If you cannot make progress by upgrading components, then a re-install or re-imaging of the machine may be an option.

Docker in Docker on AWS Batch?

Is it possible to run docker-in-docker on AWS Batch?
I have tried the approach of mounting the docker socket via the container properties:
container_properties = <<CONTAINER_PROPERTIES
{
"command": ["docker", "run", "my container"],
"image": "docker/compose",
"jobRoleArn": "my-role",
"memory": 2000,
"vcpus": 1,
"privileged": true,
"mountPoints": [
{
"sourceVolume": "/var/run/docker.sock",
"containerPath": "/var/run/docker.sock",
"readOnly": false
}
]
}
CONTAINER_PROPERTIES
However, running this batch job in a SPOT compute environment with the default configuration yields a job that immediately transitions to FAILED status with the status transition reason:
Status reason
Unknown volume '/var/run/docker.sock'.
The solution is that both volumes and mountPoints must be defined. For example, the following container properties work:
{
"command": ["docker", "run", "<my container>"],
"image": "docker/compose",
"jobRoleArn": "my-role",
"memory": 2000,
"vcpus": 1,
"privileged": false,
"volumes": [
{
"host": {
"sourcePath": "/var/run/docker.sock"
},
"name": "dockersock"
}
],
"mountPoints": [
{
"sourceVolume": "dockersock",
"containerPath": "/var/run/docker.sock",
"readOnly": false
}
]
}
Access to your private ECR images works fine from the inner Docker; however, the authentication to ECR from the outer Docker does not carry over, so you need to reauthenticate with:
aws ecr get-login-password \
--region <region> \
| docker login \
--username AWS \
--password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
before you run your privately hosted docker container.
It turns out privileged is not even required, which is nice.
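A quick way to smoke-test the socket mount from inside the job container:
# With /var/run/docker.sock mounted, this prints the host daemon's info instead of a connection error
docker info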

Getting the Container configuration of a Docker image using the registry API

In a CLI, I can do docker inspect --type image {some_image} and part of the answer is:
"ContainerConfig": {
"Hostname": "4beccaca9c40",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"/bin/sh\" \"-c\" \"cat /marker\"]"
],
"ArgsEscaped": true,
"Image": "sha256:111ecb4a6197242745f0d74c2ca4e377cfe4a1686b33160d3a8df3d3d1baea58",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"key1": "LabelValue1-L2",
"version": "1.2.0"
}
},
The registry API defines an answer type of
application/vnd.docker.container.image.v1+json: Container config JSON
but I cannot relate that to a specific API call. When I use it with the ../manifests/.. URL, I receive the answer in the default format (application/vnd.docker.distribution.manifest.v1+json); this also happens if I try to use the "fat manifest" format.
Is this configuration information available somewhere?
The registry is the standard registry image, pulled a couple of days ago (it says "Created": "2018-01-10T01:22:39.470942376Z").
So, what is required is:
A first call to https://{registry}/v2/{imageName}/manifests/{tag} with an Accept header set to application/vnd.docker.distribution.manifest.v2+json
This returns a JSON, where config.mediaType is set to the content type of the V1 manifest (which is always application/vnd.docker.container.image.v1+json as far as I can tell).
a second call to https://{registry}/v2/{imageName}/manifests/{tag} with an Accept header set to the content-type obtained above (same URL, only the Accept changes).
This returns a JSON document whose history member is a list; each element of the list has a single v1Compatibility attribute, which is a string that can be re-parsed as JSON.
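Putting that together, a hedged curl sketch of the two calls (registry.example.com, myimage and latest are placeholders; add an Authorization header if your registry requires one):
# First call: read config.mediaType from the V2 manifest
curl -s -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  https://registry.example.com/v2/myimage/manifests/latest | jq -r '.config.mediaType'
# Second call: same URL, Accept set to the media type obtained above
curl -s -H 'Accept: application/vnd.docker.container.image.v1+json' \
  https://registry.example.com/v2/myimage/manifests/latest \
  | jq -r '.history[0].v1Compatibility' | jq .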
