I want to create a network in Docker, and I'm trying two ways:
1.- sudo docker network create -d overlay --subnet=192.168.57.0/24 --gateway=192.168.57.1 overlaydefinitivo2
However, after I create this network, if I run docker network inspect overlaydefinitivo2 the output is the following:
[
{
"Name": "overlaydefinitivo2",
"Id": "mkv1jy6f1v2h3i04ss64rgn1k",
"Created": "2022-05-21T00:30:14.928276148Z",
"Scope": "swarm",
"Driver": "",
"EnableIPv6": false,
"IPAM": {
"Driver": "",
"Options": null,
"Config": null
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": null,
"Options": null,
"Labels": null
}
]
As you can see, it doesn't save the subnet, the gateway, or the driver I used.
The other way I'm trying to create my network is the following:
networks:
  pepito:
    driver: overlay
  config:
    - subnet="192.168.57.0/24"
    - gateway="192.168.57.1"
However, when I run sudo docker stack deploy -c docker-compose.yml phpmyadmin123 I get the following output:
networks.config must be a mapping or null
I don't know what I did wrong in the two ways I'm creating the network; I already checked the YAML indentation and it seems OK.
Thanks for your time.
Fix your indentation and you won't get that error:
networks:
  pepito:
    driver: overlay
    config:
      - subnet="192.168.57.0/24"
      - gateway="192.168.57.1"
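For reference, the Compose file documentation nests the address configuration under ipam rather than directly under the network; a minimal sketch (whether gateway is accepted there depends on your Compose file version, so treat that part as an assumption):
networks:
  pepito:
    driver: overlay
    ipam:
      config:
        - subnet: 192.168.57.0/24
          gateway: 192.168.57.1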
I'm trying to connect to a DB/2 container (image: ibmcom/db2), but it gives me a read timeout error. The host OS is Windows 10. I can see the port (50000) listening from a Windows PowerShell prompt, but connecting gives a read timeout.
I've added an inbound Windows Defender rule to allow all local ports and an outbound rule to allow all remote ports, regardless of the program. I realize this is not good practice, but I'm trying to rule out a firewall issue. Despite this, it still gives me a read timeout error. I had added more specific rules earlier, but they did not help either.
I also started an SSH server in that container and could log into it from within the container, but not from outside of it. When connecting from outside, I got the same read timeout message. I don't think this is a db2 issue.
Having said that, I was able to get sickp/alpine-sshd:7.5-r2 and gists/lighttpd to start and be accessible from the host. That is, I can see the default web page for lighttpd and log into the SSH server for alpine-sshd. Both work with no appreciable delay, and both worked even before making the firewall adjustments above.
I'm convinced that somehow this container is not working for me. Other people have tried the exact same docker run command that I provide below, and it comes up for them.
I'm using Windows 10 with WSL2, and Docker version 20.10.7, build f0df350.
I start the container by doing:
docker run -itd --name mydb-db2 \
--privileged=true \
-p 50000:50000 \
-e LICENSE=accept \
-e B2INSTANCE=db2inst1 \
-e DB2INST1_PASSWORD=<mypassword> \
-e DBNAME=MYDB \
-e TO_CREATE_SAMPLEDB=false \
-v db2:/database \
ibmcom/db2
Netstat evidence:
C:\Software>netstat /a /n |grep 50000
TCP 0.0.0.0:50000 0.0.0.0:0 LISTENING
TCP [::]:50000 [::]:0 LISTENING
Attempting to connect to jdbc:db2://localhost:50000/MYDB on the host system results in "Read timed out. ERRORCODE=-4499, SQLSTATE=08001".
Docker container status:
~/projects-new/db2$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
NAMES
110aa19976dd ibmcom/db2 "/var/db2_setup/lib/…" 2 days ago Up 28 minutes 22/tcp, 55000/tcp, 60006-60007/tcp, 0.0.0.0:50000->50000/tcp, :::50000->50000/tcp mydb-db2
Inspection of container:
~/projects-new/db2$ docker container inspect 110aa
[
{
"Id": "110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2",
"Created": "2021-07-16T04:10:51.1247765Z",
"Path": "/var/db2_setup/lib/setup_db2_instance.sh",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 5459,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-07-18T03:56:45.0493495Z",
"FinishedAt": "2021-07-18T03:54:18.4239523Z"
},
"Image": "sha256:a6a5ee354fb1242a75d508982041cd48883f3fe7c9c9b485be0da6c0ebd44a39",
"ResolvConfPath": "/var/lib/docker/containers/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2/hostname",
"HostsPath": "/var/lib/docker/containers/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2/hosts",
"LogPath": "/var/lib/docker/containers/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2/110aa19976ddb53d16eac9376476f974fee8e9c699da3f76c1e2e13c444655c2-json.log",
"Name": "/mydb-db2",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"db2:/database"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"50000/tcp": [
{
"HostIp": "",
"HostPort": "50000"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"label=disable"
],
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b6ecb6d5e949ab8e58d9238e34878a563a45f5045d57c684e5a08b6ec833ebb4-init/diff:/var/lib/docker/overlay2/6cf25bf1ac29315c3832316ef32b1cae8cf1ed6e71e4ddd9d08ab5566f81da9e/diff:/var/lib/docker/overlay2/76ca13571a6d253356b48ac20b408d33f80c5e6b429c132533e60c7578e99fb3/diff:/var/lib/docker/overlay2/e1a78196ef6f70929701e708904cb2696189c37a40839a0f20407148d2d90f1d/diff:/var/lib/docker/overlay2/efa2b4a3bc7e7411a671f05ad9121a4bb609452560b5f73d4b765e8519bfa36d/diff:/var/lib/docker/overlay2/933425814e17216adcfcac390e789c6dfc8ada12ded902db2ca9a542a5ff555c/diff:/var/lib/docker/overlay2/2ec2f25d859b77fd93a16468e40de569c41b35055c58277ad97d839cb33a01ac/diff:/var/lib/docker/overlay2/62aeaecc9fea67541671d95f691a2d8ddc9076ee0ae3bc96cd3b030a3ecc663b/diff:/var/lib/docker/overlay2/f04ce4e91dedc0c14073e43734ca252a7c0bd6f6ed9ab89f77d6797f72312f2d/diff:/var/lib/docker/overlay2/21b929e594040a64ffb0cd2c8bd4d3d7f630a3ec3dd79e8157c41c0d9783faa6/diff:/var/lib/docker/overlay2/c5e235fc2e9dc254394bcae472264b133530f5dfbb285cfe5f0ba0dac26ce4c4/diff:/var/lib/docker/overlay2/8f68a8bb1e9ca565aa1d8debc221bb498512a6ed24cc07bcf3ef07c8c42e045f/diff:/var/lib/docker/overlay2/745a0aa01d1a904ce08c22d07be527cdb39da0c37b87a66a57062cc307ca4d4c/diff:/var/lib/docker/overlay2/f0a873fda45d17a036833dd0dc9362f02b0ab00c590f23bf38ba59d06c624272/diff",
"MergedDir": "/var/lib/docker/overlay2/b6ecb6d5e949ab8e58d9238e34878a563a45f5045d57c684e5a08b6ec833ebb4/merged",
"UpperDir": "/var/lib/docker/overlay2/b6ecb6d5e949ab8e58d9238e34878a563a45f5045d57c684e5a08b6ec833ebb4/diff",
"WorkDir": "/var/lib/docker/overlay2/b6ecb6d5e949ab8e58d9238e34878a563a45f5045d57c684e5a08b6ec833ebb4/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "db2",
"Source": "/var/lib/docker/volumes/db2/_data",
"Destination": "/database",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "volume",
"Name": "47c06e44c75f70947a907a0972924536761f70f15971459e8be6015b29e2e48c",
"Source": "/var/lib/docker/volumes/47c06e44c75f70947a907a0972924536761f70f15971459e8be6015b29e2e48c/_data",
"Destination": "/hadr",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "110aa19976dd",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"50000/tcp": {},
"55000/tcp": {},
"60006/tcp": {},
"60007/tcp": {}
},
"Tty": true,
"OpenStdin": true,
"StdinOnce": false,
"Env": [
"LICENSE=accept",
"B2INSTANCE=db2inst1",
"DB2INST1_PASSWORD=<mypassword>",
"DBNAME=BLUECOST",
"TO_CREATE_SAMPLEDB=false",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"container=oci",
"STORAGE_DIR=/database",
"HADR_SHARED_DIR=/hadr",
"DBPORT=50000",
"TSPORT=55000",
"SETUPDIR=/var/db2_setup",
"SETUPAREA=/tmp/setup",
"NOTVISIBLE=in users profile",
"LICENSE_NAME=db2dec.lic"
],
"Cmd": null,
"Image": "ibmcom/db2",
"Volumes": {
"/database": {},
"/hadr": {}
},
"WorkingDir": "",
"Entrypoint": [
"/var/db2_setup/lib/setup_db2_instance.sh"
],
"OnBuild": null,
"Labels": {
"architecture": "x86_64",
"build-date": "2021-06-01T05:31:45.840349",
"com.redhat.build-host": "cpt-1007.osbs.prod.upshift.rdu2.redhat.com",
"com.redhat.component": "ubi7-container",
"com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
"description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"desktop.docker.io/wsl-distro": "Ubuntu-20.04",
"distribution-scope": "public",
"io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"io.k8s.display-name": "Red Hat Universal Base Image 7",
"io.openshift.tags": "base rhel7",
"name": "ubi7",
"release": "405",
"summary": "Provides the latest release of the Red Hat Universal Base Image 7.",
"url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi7/images/7.9-405",
"vcs-ref": "a4e710a688a6374670ecdd56637c3f683d11cbe3",
"vcs-type": "git",
"vendor": "Red Hat, Inc.",
"version": "7.9"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "570856178f99951c7cdfccc638a3404f906a7a89905ba9d39181cd9310f4380b",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": null,
"50000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "50000"
},
{
"HostIp": "::",
"HostPort": "50000"
}
],
"55000/tcp": null,
"60006/tcp": null,
"60007/tcp": null
},
"SandboxKey": "/var/run/docker/netns/570856178f99",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "a50d8643af88c0d677a9dc2d889f20ab909f46707bb7bd0f8168666b18d1b414",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "408fe3a7130f9791810b8668b60b7f90478f4673f79270539044362e8c12d88f",
"EndpointID": "a50d8643af88c0d677a9dc2d889f20ab909f46707bb7bd0f8168666b18d1b414",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]
I didn't see the db2 container listed. These are my networks:
C:\Software>docker network ls
NETWORK ID NAME DRIVER SCOPE
408fe3a7130f bridge bridge local
38fc17e8e6f1 cirrus-ssc-file-sender_default bridge local
1668ab71959f host host local
4bf4f6b3a57e minikube bridge local
e07fc0032414 none null local
Instead, I found it on the bridge network.
I'm not trying to do anything fancy. I'd really rather it just run on the host's network. If the host system can "see" the exposed port 50000 via netstat, wouldn't that mean it's not a firewall issue?
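(For what it's worth, a plain TCP connect test from PowerShell, e.g. something like the following, would at least separate "the port is listening" from "a connection actually completes", independently of JDBC:)
Test-NetConnection -ComputerName localhost -Port 50000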
Update: I turned off Windows Defender and it still does not work.
Update 2: I hosted the same container on a different machine but on my home network. When I try to connect to it from the problem machine, it gives me the same read timeout error. However, it works from the hosting machine. Somehow there seems to be a problem between this particular Windows machine and this particular container.
Update 3: SVCENAME info:
I ran the following inside the db2 container:
$su db2inst1 (when I log in it goes to root)
$cd ~
$. ./.bashrc
$db2 get dbm cfg |grep SVCENAME
TCP/IP Service name (SVCENAME) = db2c_db2inst1
SSL service name (SSL_SVCENAME) =
$grep db2c_db2inst1 /etc/services
db2c_db2inst1 50000/tcp
db2c_db2inst1_ssl 50001/tcp
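(One more check that could be run inside the container to rule out the DB2 engine itself, using standard DB2 CLP commands; password elided:)
su - db2inst1
db2 connect to MYDB user db2inst1 using <mypassword>
db2 "select 1 from sysibm.sysdummy1"
db2 terminate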
DB2 Container OS Version info:
$ cat /etc/*release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.9 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.9 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.9:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.9
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.9"
Red Hat Enterprise Linux Server release 7.9 (Maipo)
Red Hat Enterprise Linux Server release 7.9 (Maipo)
WSL Linux version used:
$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
Windows version info of the host system (from winver):
Windows 10
Version 21H1 (OS Build 19043.1110)
Computer successfully connecting to DB/2 container:
$ cat /etc/*release
Fedora release 30 (Thirty)
NAME=Fedora
VERSION="30 (Workstation Edition)"
ID=fedora
VERSION_ID=30
VERSION_CODENAME=""
PLATFORM_ID="platform:f30"
PRETTY_NAME="Fedora 30 (Workstation Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:30"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f30/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=30
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=30
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation
Fedora release 30 (Thirty)
Fedora release 30 (Thirty)
Your symptom might be caused by some machine-specific configuration, or some down-level component (in particular WSL2).
On my hardware, at the current date, with the current version of WSL2, ibmcom/db2 accepts connections from the local MS-Windows host (via JDBC) with the following mix of components:
MS-Windows 10 Pro build 19043 (21H1) x64
the latest build of the "Linux WSL2 kernel package for x64 machines"
Docker Desktop 3.5.2 configured to use WSL2
However, with a previous mix of configurations, I recreated your failure symptom with WSL2, i.e. a JDBC connection attempt from the local MS-Windows host into the Linux container gave sqlcode -4499 (in my case reply.fill() insufficient data).
The failing combination was:
MS-Windows 10 Pro build 19041 x64.
older build of "Linux WSL2 kernel package for x64 machines" (downloaded before 22/July/2021)
Docker Desktop 3.5.2 configured for WSL2
With that failing combination, only the WSL2 back end recreated your symptom; the Hyper-V back end worked correctly.
With Docker Desktop on a Win10 Pro environment, right-click its icon, choose Settings, and it lets you tick (or untick) "Use the WSL 2 based engine"; then click Apply & Restart. You may get other notifications. You may lose your containers and images and need to download them again, so if you need to preserve any data, arrange that separately before changing the back end.
If you cannot make progress by upgrading components, then a re-install or re-image may be an option.
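If it helps to check or refresh the WSL2 kernel from PowerShell before re-testing, something along these lines may work (flag availability depends on your Windows 10 build, so treat this as a sketch):
wsl --status            # recent builds report the installed WSL2 kernel version here
wsl --update            # fetches the latest "Linux WSL2 kernel package" if updates are enabled
wsl cat /proc/version   # alternatively, ask the running kernel directly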
I want to control which external IP is used to send traffic from my swarm containers; this can easily be done with a bridge network and iptables rules.
This works fine for local-scoped bridge networks:
docker network create --driver=bridge --scope=local --subnet=172.123.0.0/16 -o "com.docker.network.bridge.enable_ip_masquerade"="false" -o "com.docker.network.bridge.name"="my_local_bridge" my_local_bridge
and on iptables:
sudo iptables -t nat -A POSTROUTING -s 172.123.0.0/16 ! -o my_local_bridge -j SNAT --to-source <my_external_ip>
This is the output of docker network inspect my_local_bridge:
[
{
"Name": "my_local_bridge",
"Id": "...",
"Created": "...",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.123.0.0/16"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
...
},
"Options": {
"com.docker.network.bridge.enable_ip_masquerade": "false",
"com.docker.network.bridge.name": "my_local_bridge"
},
"Labels": {}
}
]
But if I try to attach a swarm container to this network I get this error:
network "my_local_bridge" is declared as external, but it is not in the right scope: "local" instead of "swarm"
Alright, great, let's switch the scope to swarm then, right? Wrong, oh so wrong.
Creating the network:
docker network create --driver=bridge --scope=swarm --subnet=172.123.0.0/16 -o "com.docker.network.bridge.enable_ip_masquerade"="false" -o "com.docker.network.bridge.name"="my_swarm_bridge" my_swarm_bridge
Now let's check docker network inspect my_swarm_bridge:
[
{
"Name": "my_swarm_bridge",
"Id": "...",
"Created": "...",
"Scope": "swarm",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.21.0.0/16",
"Gateway": "172.21.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
...
},
"Options": {},
"Labels": {}
}
]
I can now attach swarm containers to it just fine, but neither are the options set, nor is the subnet what I defined...
How can I set these options for "swarm"-scoped bridge networks? Or, how can I set iptables to use a defined external IP if I can't set com.docker.network.bridge.enable_ip_masquerade to false?
Do I need to make a script to check the subnet assigned and manually delete the iptables MASQUERADE rule?
thanks guys
I'm pretty sure you can't use the bridge driver with swarm, and that you should use the overlay driver.
From the Docker documentation:
Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.
I might not understand your particular use case, though...
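For reference, a minimal sketch of the overlay equivalent (an attachable overlay network with your subnet; the name is illustrative):
docker network create \
  --driver overlay \
  --attachable \
  --subnet 172.123.0.0/16 \
  my_swarm_overlay
Note that the com.docker.network.bridge.* options are specific to the bridge driver, so they would not carry over to an overlay network.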
I would like to build a Docker image without docker itself. I have looked at [Packer](http://www.packer.io/docs/builders/docker.html), but it requires that Docker be installed on the builder host.
I have looked at the Docker Registry API documentation but this information doesn't appear to be there.
I guess that the image is simply a tarball, but I would like to see a complete specification of the format, i.e. what exact layout is required and whether any metadata files are required. I could try downloading an image from the registry and looking at what's inside, but there is no information on how to fetch the image itself.
The idea of my project is to implement a script that creates an image from artifacts I have compiled, and uploads it to the registry. I would like to use OpenEmbedded for this purpose, essentially this would be an extension to Bitbake.
The Docker image format is specified here: https://github.com/docker/docker/blob/master/image/spec/v1.md
The simplest possible image is a tar file containing the following:
repositories
uniqid/VERSION
uniqid/json
uniqid/layer.tar
Where VERSION contains 1.0, layer.tar contains the chroot contents, and json and repositories are JSON files as specified in the spec above.
The resulting tar can be loaded into docker via docker load < image.tar
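A rough sketch of assembling such a tarball by hand (the id, JSON fields, and names here are illustrative; see the spec above for the full set of required fields):
ID=$(head -c 32 /dev/urandom | sha256sum | cut -d' ' -f1)   # layer id: 64 hex chars
mkdir -p "$ID"
echo -n "1.0" > "$ID/VERSION"
tar -C rootfs -cf "$ID/layer.tar" .     # rootfs/ holds the chroot contents
cat > "$ID/json" <<EOF
{"id": "$ID", "created": "2014-01-01T00:00:00Z", "os": "linux", "architecture": "amd64"}
EOF
cat > repositories <<EOF
{"myimage": {"latest": "$ID"}}
EOF
tar -cf image.tar repositories "$ID"
docker load < image.tar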
After reading James Coyle's blog, I figured that docker save and docker load commands are what I need.
> docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
progrium/consul latest e9fe5db22401 11 days ago 25.81 MB
> docker save e9fe5db22401 | tar x
> ls e9fe5db22401*
VERSION json layer.tar
The VERSION file contains only 1.0, and json contains quite a lot of information:
{
"id": "e9fe5db224015ddfa5ee9dbe43b414ecee1f3108fb6ed91add11d2f506beabff",
"parent": "68f9e4929a4152df9b79d0a44eeda042b5555fbd30a36f98ab425780c8d692eb",
"created": "2014-08-20T17:54:30.98176344Z",
"container": "3878e7e9b9935b7a1988cb3ebe9cd45150ea4b09768fc1af54e79b224bf35f26",
"container_config": {
"Hostname": "7f17ad58b5b8",
"Domainname": "",
"User": "",
"Memory": 0,
"MemorySwap": 0,
"CpuShares": 0,
"Cpuset": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"PortSpecs": null,
"ExposedPorts": {
"53/udp": {},
"8300/tcp": {},
"8301/tcp": {},
"8301/udp": {},
"8302/tcp": {},
"8302/udp": {},
"8400/tcp": {},
"8500/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"HOME=/",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"SHELL=/bin/bash"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) CMD []"
],
"Image": "68f9e4929a4152df9b79d0a44eeda042b5555fbd30a36f98ab425780c8d692eb",
"Volumes": {
"/data": {}
},
"WorkingDir": "",
"Entrypoint": [
"/bin/start"
],
"NetworkDisabled": false,
"OnBuild": [
"ADD ./config /config/"
]
},
"docker_version": "1.1.2",
"author": "Jeff Lindsay <progrium#gmail.com>",
"config": {
"Hostname": "7f17ad58b5b8",
"Domainname": "",
"User": "",
"Memory": 0,
"MemorySwap": 0,
"CpuShares": 0,
"Cpuset": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"PortSpecs": null,
"ExposedPorts": {
"53/udp": {},
"8300/tcp": {},
"8301/tcp": {},
"8301/udp": {},
"8302/tcp": {},
"8302/udp": {},
"8400/tcp": {},
"8500/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"HOME=/",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"SHELL=/bin/bash"
],
"Cmd": [],
"Image": "68f9e4929a4152df9b79d0a44eeda042b5555fbd30a36f98ab425780c8d692eb",
"Volumes": {
"/data": {}
},
"WorkingDir": "",
"Entrypoint": [
"/bin/start"
],
"NetworkDisabled": false,
"OnBuild": [
"ADD ./config /config/"
]
},
"architecture": "amd64",
"os": "linux",
"Size": 0
}
The layer.tar file appears to be empty. So I inspected the parent and the grandparent; neither contained any files in their layer.tar files.
So assuming that 4.0K is the standard size for an empty tarball:
for layer in $(du -hs */layer.tar | grep -v 4.0K | cut -f2)
do (echo $layer:;tar tvf $layer)
done
This shows that these layers contain simple incremental changes to the filesystem.
So one conclusion is that it's probably best to just use Docker to build the image and push it to the registry, just as Packer does.
The way to build an image from scratch is described in the docs.
It turns out that docker import - scratch doesn't care about what's in the tarball. It simply assumes that it is the rootfs.
> touch foo
> tar c foo | docker import - scratch
02bb6cd70aa2c9fbaba37c8031c7412272d804d50b2ec608e14db054fc0b9fab
> docker save 02bb6cd70aa2c9fbaba37c8031c7412272d804d50b2ec608e14db054fc0b9fab | tar x
> ls 02bb6cd70aa2c9fbaba37c8031c7412272d804d50b2ec608e14db054fc0b9fab/
VERSION json layer.tar
> tar tvf 02bb6cd70aa2c9fbaba37c8031c7412272d804d50b2ec608e14db054fc0b9fab/layer.tar
drwxr-xr-x 0/0 0 2014-09-01 13:46 ./
-rw-r--r-- 500/500 0 2014-09-01 13:46 foo
In terms of OpenEmbedded integration, it's probably best to build the rootfs tarball, which is something Yocto provides out of the box, use the official Python library to import the rootfs tarball with import_image(src='rootfs.tar', repository='scratch'), and then push it to a private registry.
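A minimal sketch of that flow with the Python client, assuming the older docker-py API that exposes import_image (the registry address is illustrative):
import docker

# connect to the local Docker daemon
client = docker.Client(base_url='unix://var/run/docker.sock')

# import the Yocto-built rootfs tarball as a new image
client.import_image(src='rootfs.tar', repository='registry.example.com:5000/scratch', tag='latest')

# push the resulting image to the private registry
client.push('registry.example.com:5000/scratch', tag='latest')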
This is not the most elegant solution, but that's how it would have to work at the moment. Otherwise, one can probably just manage and deploy rootfs revisions in their own way and use docker import on the target host, which still won't be a nice fit, but is somewhat simple.