docker login <dtr-server> gives error 404 Not Found

When I try to log in to my private Docker registry, it gives the following error:
$ docker login https://dtr-ip:443
Error response from daemon: Login: <html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.8.0</center>
</body>
</html>
(Code: 404; Headers: map[Date:[Wed, 22 Jun 2016 13:51:33 GMT] Content-Type:[text/html] Content-Length:[168] X-Replica-Id:[fa6e7b73277d] Server:[nginx/1.8.0]])
My Docker Trusted Registry (DTR) and UCP are on the same node.
Docker daemon logs on the client side:
time="2016-06-22T19:25:08.338336106+05:30" level=info msg="Error logging in to v2 endpoint, trying next endpoint: login attempt to https://54.179.144.153:443/v2/ failed with status: 404 Not Found"
time="2016-06-22T19:25:08.621784740+05:30" level=error msg="Handler for POST /v1.23/auth returned error: Login: <html>\r\n<head><title>404 Not Found</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>nginx/1.8.0</center>\r\n</body>\r\n</html>\r\n (Code: 404; Headers: map[Content-Type:[text/html] Content-Length:[168] X-Replica-Id:[fa6e7b73277d] Server:[nginx/1.8.0] Date:[Wed, 22 Jun 2016 13:55:08 GMT]])"
$docker info
Containers: 29
Running: 16
Paused: 0
Stopped: 13
Images: 19
Server Version: 1.11.2-cs3
Storage Driver: devicemapper
Pool Name: docker-202:1-201339217-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.725 GB
Data Space Total: 107.4 GB
Data Space Available: 49.69 GB
Metadata Space Used: 3.461 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.144 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-12-01)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host overlay
Kernel Version: 3.10.0-229.14.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.26 GiB
Name: automation
ID: Z4XA:KGME:WMYE:RSP4:ILH7:CPFC:PTIN:QUJT:66UT:PC7R:H65R:BIDX
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): true
File Descriptors: 82
Goroutines: 159
System Time: 2016-06-22T13:59:28.058948802Z
EventsListeners: 1
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-ip6tables is disabled
Cluster store: etcd://<server-ip>:2050
Cluster advertise: <server-ip>:12376
And the Docker versions are:
$docker version
Client:
Version: 1.11.2-cs3
API version: 1.23
Go version: go1.5.4
Git commit: c81a77d
Built: Wed Jun 8 01:23:22 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.2-cs3
API version: 1.23
Go version: go1.5.4
Git commit: c81a77d
Built: Wed Jun 8 01:23:22 2016
OS/Arch: linux/amd64
I think that when I log in to https://dtr-ip:443, the client looks for https://dtr-ip:443/v2/, and that URL does not return any data.
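That hypothesis can be checked directly with curl. This is a small hypothetical helper (replace dtr-ip with your DTR address); a working v2 registry answers /v2/ with 200 or 401, never 404:

```shell
# Probe the registry's /v2/ endpoint and print only the HTTP status code.
check_v2() {
  # -k tolerates DTR's typically self-signed certificate; -s suppresses progress
  curl -kso /dev/null -w '%{http_code}' "https://$1/v2/"
}

# Usage:
#   check_v2 dtr-ip:443
```

If this prints 404, nginx on the DTR host is not routing /v2/ to the registry at all, which points at the DTR installation rather than the Docker client.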

I had the same generic error; my infrastructure had been working fine for about 30 days, but after that I received the error below:
Error response from daemon: Login: <html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.8.0</center>
</body>
</html>
(Code: 404; Headers: map[Date:[Wed, 22 Jun 2016 13:51:33 GMT] Content-Type:[text/html] Content-Length:[168] X-Replica-Id:[fa6e7b73277d] Server:[nginx/1.8.0]])
I saw from the events in the DTR web console that the license had expired; after installing a new license I haven't seen the error message again.

Related

Container abruptly killed with warning "cleaning up after killed shim"

We have recently upgraded from Docker 17.06.0-ce to 18.09.2 in our deployment environment.
A container was killed suddenly after running for a few days, with little information in the Docker logs.
We monitored memory usage, and the affected containers are well below all limits (per container, and the host also has enough free memory).
Setup observations during the issue:
Docker version 18.09.2 with around 30 running containers.
The affected container was killed after running for a few days.
Docker logs observed during the container crash:
Nov 16 15:42:11 site1 containerd[1762]: time="2020-11-16T15:42:11.171040904Z" level=info msg="shim reaped" id=d39355d3061d461ad4a305c717b699bd332aae50d47c2bf2b547bef50f767c7d
Nov 16 15:42:11 site1 containerd[1762]: time="2020-11-16T15:42:11.171156262Z" level=warning msg="cleaning up after killed shim" id=d39355d3061d461ad4a305c717b699bd332aae50d47c2bf2b547bef50f767c7d namespace=moby
Nov 16 15:42:11 site1 dockerd[3022]: time="2020-11-16T15:42:11.171164295Z" level=warning msg="failed to delete process" container=d39355d3061d461ad4a305c717b699bd332aae50d47c2bf2b547bef50f767c7d error="ttrpc: client shutting down: ttrpc: closed: unknown" module=libcontainerd namespace=moby process=b0d77b1ebf2c82b09c152530a5e24491d76e216b852e385686c46128c94e7f5a
Nov 16 15:42:11 site1 c73920e3476c[3022]: INFO: 2020/11/16 15:42:11.396872 [nameserver a6:0c:6a:18:69:1f] container d39355d3061d461ad4a305c717b699bd332aae50d47c2bf2b547bef50f767c7d died; tombstoning entry test-endpoint-s104.weave.local. -> 10.44.0.14
Output of Docker version
Client:
Version: 18.09.2
API version: 1.39
Go version: go1.10.6
Git commit: 6247962
Built: Sun Feb 10 04:13:50 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.12)
Go version: go1.10.6
Git commit: 6247962
Built: Sun Feb 10 03:42:13 2019
OS/Arch: linux/amd64
Experimental: false
Output of Docker Info:
Containers: 30
Running: 25
Paused: 0
Stopped: 5
Images: 236
Server Version: 18.09.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-171-generic
Operating System: Ubuntu 16.04.6 LTS
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 62.92GiB
Name: fpas-site1-dra-director-a
ID: KKSM:3YNF:LE7N:NVFE:Y5C4:C6CN:LAQT:QRRZ:VYQS:O4PP:VQKG:DXTK
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
com.broadhop.swarm.uuid=uuid4:d96aef99-b5fc-44e3-b7fa-65b08b7e30f3
com.broadhop.swarm.role=endpoint-role
com.broadhop.swarm.node=
com.broadhop.swarm.hostname=site1
com.broadhop.swarm.mode=
com.broadhop.network.interfaces=internal:172.26.50.13
Experimental: false
Insecure Registries:
registry:5000
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: API is accessible on http://127.0.0.1:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: No swap limit support
NOTE:
This deployment is on critical infrastructure, so we want to understand why this happened and ensure it does not occur again. Has anyone faced the same kind of issue in any environment? Please let us know if there are known issues with the Docker versions being used.
Your Go version is quite old; you may want to update. I found this issue on GitHub:
https://github.com/moby/moby/issues/38742
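When hunting for the cause, it helps to pull every daemon/containerd log line that mentions the affected container around the crash. A sketch, assuming the journal has been exported to a file and using the (real) container ID from the logs above as a placeholder:

```shell
# Long container ID from the "shim reaped" / "killed shim" lines above.
CID=d39355d3061d461ad4a305c717b699bd332aae50d47c2bf2b547bef50f767c7d

# Export the relevant journal once, e.g.:
#   journalctl -u docker -u containerd --since "2020-11-16" > daemon.log
# then filter it for the shim lifecycle of that one container:
if [ -f daemon.log ]; then
  grep -E 'shim reaped|killed shim|failed to delete process' daemon.log | grep "$CID"
fi
```

Correlating these timestamps with kernel OOM messages (`dmesg -T | grep -i kill`) helps rule the kernel in or out, since the container was below its memory limits.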

Intermittent connection failures between Docker containers

Description
I am experiencing intermittent communication issues between containers on the same overlay network. I have been struggling to find a solution for weeks, but everything I find on Google about communication issues doesn't quite match what I am seeing, so I am hoping someone here can help me figure out what is going on.
We are using Docker 17.06.
We are using standalone swarm with three masters and one node.
We have multiple overlay networks
Containers attached to each overlay network:
1 container running Apache Tomcat 8.5 and HAproxy 1.7 (called the controller)
1 container just running Apache Tomcat 8.5 (called the apps container)
3 containers running Postgresql 9.6
1 container running an FTP service
1 container running Logstash
Steps to reproduce the issue:
Create a new overlay network
Attach containers
Look at the logs and after a short while you see the errors
Describe the results you received:
The "controller" polls a servlet on the "apps" container every few seconds.
Every 15 minutes or so we see a connect timed out error in the log files of the "controller", and periodically we see a connection attempt failed when the controller tries to access its database in one of the PostgreSQL containers.
Error when polling apps container
org.apache.http.conn.ConnectTimeoutException: Connect to srvpln50-webapp_1.0-1:5050 [srvpln50-webapp_1.0-1/10.0.1.6] failed: connect timed out
Error when trying to connect to database
JavaException: com.ebasetech.xi.exceptions.FormRuntimeException: Error getting connection using Database Connection CONTROLLER, SQLException in StandardPoolDataSource:getConnection exception: java.sql.SQLException: SQLException in StandardPoolDataSource:getConnection no connection available java.sql.SQLException: Cannot get connection for URL jdbc:postgresql://srvpln50-controller-db_latest:5432/ctrldata : The connection attempt failed.
I turned on debug mode on the Docker daemon node.
Every time these errors occur, I see the following corresponding entries in the Docker logs:
Feb 09 14:27:26 swarm-node-1 dockerd[12193]: time="2018-02-09T14:27:26.422797691Z" level=debug msg="Name To resolve: srvpln50-webapp_1.0-1."
Feb 09 14:27:26 swarm-node-1 dockerd[12193]: time="2018-02-09T14:27:26.422905040Z" level=debug msg="Lookup for srvpln50-webapp_1.0-1.: IP [10.0.1.6]"
Feb 09 14:27:26 swarm-node-1 dockerd[12193]: time="2018-02-09T14:27:26.648262289Z" level=debug msg="miss notification: dest IP 10.0.0.3, dest MAC 02:42:0a:00:00:03"
Feb 09 14:27:26 swarm-node-1 dockerd[12193]: time="2018-02-09T14:27:26.716329366Z" level=debug msg="miss notification: dest IP 10.0.0.6, dest MAC 02:42:0a:00:00:06"
Feb 09 14:27:26 swarm-node-1 dockerd[12193]: time="2018-02-09T14:27:26.716952000Z" level=debug msg="miss notification: dest IP 10.0.0.6, dest MAC 02:42:0a:00:00:06"
Feb 09 14:27:26 swarm-node-1 dockerd[12193]: time="2018-02-09T14:27:26.802320875Z" level=debug msg="miss notification: dest IP 10.0.0.3, dest MAC 02:42:0a:00:00:03"
Feb 09 14:27:26 swarm-node-1 dockerd[12193]: time="2018-02-09T14:27:26.944189349Z" level=debug msg="miss notification: dest IP 10.0.0.9, dest MAC 02:42:0a:00:00:09"
Feb 09 14:27:26 swarm-node-1 dockerd[12193]: time="2018-02-09T14:27:26.944770233Z" level=debug msg="miss notification: dest IP 10.0.0.9, dest MAC 02:42:0a:00:00:09"
IP 10.0.0.3 is the "controller" container
IP 10.0.0.6 is the "apps" container
IP 10.0.0.9 is the "postgresql" container that the "controller" is trying to connect to.
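When the miss notifications line up with a timeout, it is worth checking, from inside the affected container, whether the overlay's embedded DNS and the TCP path are both working at that moment. A hypothetical helper using the container and service names from the logs above (127.0.0.11 is Docker's embedded DNS resolver):

```shell
# Spot checks for an overlay-network timeout: DNS resolution and TCP reachability.
diagnose_overlay() {
  container=$1 peer=$2 port=$3
  # Does the embedded DNS still resolve the peer right now?
  docker exec "$container" nslookup "$peer" 127.0.0.11
  # Can we open a TCP connection to the peer's port (3-second timeout)?
  docker exec "$container" sh -c "nc -z -w 3 $peer $port && echo reachable"
}

# Usage, against the containers in this report:
#   diagnose_overlay controller srvpln50-webapp_1.0-1 5050
```

If the name resolves but the connection fails, the problem is in the overlay data path (the miss notifications suggest stale ARP/FDB entries) rather than in service discovery.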
Describe the results you expected:
Not to have the connection errors
Additional information you deem important (e.g. issue happens only occasionally):
Output of docker version:
Client:
Version: 17.06.1-ce
API version: 1.30
Go version: go1.8.3
Git commit: 874a737
Built: Thu Aug 17 22:51:12 2017
OS/Arch: linux/amd64
Server:
Version: 17.06.1-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 874a737
Built: Thu Aug 17 22:50:04 2017
OS/Arch: linux/amd64
Experimental: false
Output of docker info:
Containers: 19
Running: 19
Paused: 0
Stopped: 0
Images: 18
Server Version: 17.06.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 385
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-108-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.784GiB
Name: swarm-node-1
ID: O5ON:VQE7:IRV6:WCB7:RQO4:RIZ4:XFHE:AUCX:ZLM2:GPZL:DXQO:BCIX
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 217
Goroutines: 371
System Time: 2018-02-09T15:50:01.902816981Z
EventsListeners: 2
Registry: https://index.docker.io/v1/
Labels:
name=swarm-node-1
Experimental: false
Cluster Store: etcd://localhost:2379/store
Cluster Advertise: 10.80.120.13:2376
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
Additional environment details (AWS, VirtualBox, physical, etc.):
Swarm masters, node and containers are running Ubuntu 16.04 on bare metal servers
If there is anything I have missed that would aid diagnosis, please let me know.
Having read many comments from the Docker folks on Google about communication issues being fixed in the latest version of Docker, we upgraded to 17.12 CE and all the issues we were experiencing went away.
I would love to know what the issue was, but I am more than happy to see it gone.

Docker remove container error

When I want to rerun a container with different volumes or an updated image,
I stop it and try to remove the container, but I often get an error on the rm command:
# docker rm containername
Error response from daemon: Driver devicemapper failed to remove root filesystem dbe6....f91f: Device is Busy
I need to restart the Docker daemon to be able to remove the container.
root@CentOS-72-64-minimal ~ # docker version
Client:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 02:23:59 2016
OS/Arch: linux/amd64
Server:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 02:23:59 2016
OS/Arch: linux/amd64
------------------------------------------------------------
root@CentOS-72-64-minimal ~ # docker info
Containers: 40
Running: 11
Paused: 0
Stopped: 29
Images: 32
Server Version: 1.12.5
Storage Driver: devicemapper
Pool Name: docker-8:3-28705145-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 14.83 GB
Data Space Total: 107.4 GB
Data Space Available: 92.54 GB
Metadata Space Used: 21.15 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.126 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.135-RHEL7 (2016-09-28)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge overlay host null
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-514.2.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 30.96 GiB
Name: CentOS-72-64-minimal
ID: SMTY:72HJ:5QIS:AT63:6GPI:U2UQ:KUYY:C7M6:UIOY:37AR:JS53:JAGA
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
127.0.0.0/8
I've been experiencing this issue quite a bit on a Red Hat host. The fix, according to the reported issue on this, is to upgrade to a newer kernel. As a workaround where that's not an option, I've been using docker rm -f ..., which still throws the error but the container does get cleaned up. Much quicker and less intrusive than restarting the daemon.
I experienced the same problem; service docker restart (restarting the Docker service) almost always fixes the issue.
More information about this issue is available here.
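"Device is Busy" usually means some process on the host (often one in another mount namespace) still holds the container's devicemapper mount. A sketch for finding the holder by scanning /proc; the `dbe6` prefix stands in for the filesystem ID from the error message above:

```shell
# Find which processes still reference the container's devicemapper mount.
# Replace DBE with the id prefix from the "failed to remove root filesystem" error.
DBE=dbe6
for pid_dir in /proc/[0-9]*; do
  if grep -q "$DBE" "$pid_dir/mountinfo" 2>/dev/null; then
    echo "held by PID ${pid_dir#/proc/}: $(cat "$pid_dir/comm" 2>/dev/null)"
  fi
done
```

Killing or restarting the holding process (commonly a monitoring agent or another daemon that started after the container) often lets `docker rm` succeed without restarting the Docker daemon.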

Flag provided but not defined -d while running docker container

I am trying to set up https://github.com/jwasham/computer-science-flash-cards on my local PC using Docker, but after I have built my image, when I try
docker run -d -p 8000:8000 --name cs-flash-cards cs-flash-cards
it says
flag provided but not defined: -d
Any ideas how to fix this and run this container?
EDIT (from docker info and docker version I get the following output):
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 10
Server Version: 1.12.5
Storage Driver: devicemapper
Pool Name: docker-202:1-312980-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 877.5 MB
Data Space Total: 107.4 GB
Data Space Available: 2.019 GB
Metadata Space Used: 1.913 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.019 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.110 (2015-10-30)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-45-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 990.7 MiB
Name: ip-172-31-33-253
ID: QPUK:E7BB:Y2PW:MPJR:L2X4:4AMT:VHAT:SOXK:3A2N:UKI2:ZXRK:QF4S
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
Client:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 02:42:17 2016
OS/Arch: linux/amd64
Server:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 02:42:17 2016
OS/Arch: linux/amd64
Just for the next person who has the same problem: the fix above was for systemd on Ubuntu LTS, editing the docker.service file.
If you are on CentOS 7, you may have an extra file called override.conf, which is used to override docker.service in systemd.
This file also has the -d flag and is located at
/etc/systemd/system/docker.service.d/override.conf
Replace -d with daemon there, then run systemctl daemon-reload, and Docker will start up.
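The steps above can be sketched as a small hypothetical helper (`docker daemon` is the 1.12-era replacement for the removed `docker -d` flag; the path is the CentOS 7 one described above):

```shell
# Replace the removed -d flag in the systemd drop-in and reload.
fix_override() {
  f=/etc/systemd/system/docker.service.d/override.conf
  sudo cp "$f" "$f.bak"                        # keep a backup first
  sudo sed -i 's|docker -d|docker daemon|' "$f"
  sudo systemctl daemon-reload
  sudo systemctl restart docker
}
```

After the restart, `docker run -d ...` is parsed by the client again instead of tripping over the daemon's startup flags.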

Building docker image that downloads large file fails with error

I'm new to Docker; I've been trying it for less than two weeks. We have a service that we'd like to migrate into a container. The service makes use of about 50G worth of data, so we expect the image to be very large. We've written a Dockerfile for it. When we run the build, it fails with the following:
ApplyLayer exit status 1 stdout: stderr: write /mnt/spine_features/spine_features_subset.lmdb/data.mdb: input/output error
When we check docker ps -a for containers, we can see the build container listed with status:
Exited (1) About a minute ago
When we try to commit the container, we get the same error:
Error response from daemon: ApplyLayer exit status 1 stdout: stderr: write /mnt/spine_features/spine_features_subset.lmdb/data.mdb: input/output error
We can also docker inspect the container. When we exclude downloading the largest files, we are able to finish building the service image. Is there some configuration we can change so that the build succeeds while still including the larger files?
docker version
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.7
Git commit: 23cf638
Built: Fri Aug 19 02:03:02 2016
OS/Arch: linux/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.7
Git commit: 23cf638
Built: Fri Aug 19 02:03:02 2016
OS/Arch: linux/amd64
docker info
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 22
Server Version: 1.12.1
Storage Driver: devicemapper
Pool Name: docker-8:2-7603782-pool
Pool Blocksize: 65.54 kB
Base Device Size: 214.7 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 67.78 GB
Data Space Total: 107.4 GB
Data Space Available: 39.59 GB
Metadata Space Used: 37.04 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.11 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.134 (2016-09-07)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.7.4-1-ARCH
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 15.58 GiB
Name: mega-haro
ID: MDQ5:JIT3:BVQX:XYO6:YTXI:HTRE:N2UQ:ML4V:ENIE:DDCO:ZGYF:3P5F
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
misty:5000
127.0.0.0/8
As Haoming Zhang recommended, mounting the host directory into the container is an acceptable solution. We are also exploring using FUSE to load the data into the container at runtime, instead of baking it into the image during the build or having the host pass it in when the container is run.
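The bind-mount approach looks roughly like this (image, container name, and the in-container path are hypothetical; /mnt/spine_features is the host path from the error above):

```shell
# Run the service with the 50G dataset bind-mounted read-only instead of
# copied into an image layer.
run_spine_service() {
  docker run -d \
    --name spine-service \
    -v /mnt/spine_features:/data/spine_features:ro \
    spine-service-image
}
```

Since the data never enters an image layer, ApplyLayer is never asked to copy it during build or commit, which sidesteps the I/O error entirely.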
