Sometimes my internet connection gets slow and unstable. When I run docker pull image_name_here for large images, the download gets stuck partway through, or the connection drops and the pull times out or exits with other errors.
But when I run that pull command again, the layers that were already downloaded are not kept on my drive; they get downloaded all over again.
This means that on an unstable network I effectively can't pull large images at all.
Is there a way for me to resume the pull process from where it was interrupted?
Is there a third party app that does that?
I'm on Linux (Ubuntu & Debian)
Related
Synopsis. A remote instance gets connected to the Internet via satellite modem when a technician visits the cabin. The technician sets up the application stack via docker compose and leaves the location. The location has no internet connection and periodically loses electricity (once every few days).
The application stack is typical, something like mysql + nodejs, and it is used by "polar bears", i.e. by nobody: it is a monitoring app.
How can I ensure that the docker images persist for an indefinite amount of time and that the compose stack survives endless reboots?
Unfortunately there is no really easy solution.
But with a little bit of yq magic to parse docker-compose.yaml and the docker save command, it is possible to store the images locally in a specific location.
Then we can add a startup script that imports those images back into the local docker cache using docker load.
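A minimal sketch of that approach, assuming the Go implementation of yq (v4), that every service in docker-compose.yaml declares an image:, and that /opt/stack is where the compose file and the saved tarballs live (the paths are placeholders, not part of the original setup):

    COMPOSE_FILE=/opt/stack/docker-compose.yaml
    IMAGE_DIR=/opt/stack/images

    # While the satellite link is up: export every image referenced by the compose file.
    mkdir -p "$IMAGE_DIR"
    for img in $(yq '.services[].image' "$COMPOSE_FILE"); do
        # Turn "repo/name:tag" into a filesystem-safe file name.
        docker save -o "$IMAGE_DIR/$(echo "$img" | tr '/:' '__').tar" "$img"
    done

    # At boot (e.g. from a startup script that runs before docker compose up):
    # reload the saved images into the local docker cache.
    for tar in "$IMAGE_DIR"/*.tar; do
        docker load -i "$tar"
    done

With the images reloaded at every boot, the compose stack only ever needs the local cache, so losing power or never seeing the internet again does not matter.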
transmission-remote is running in a docker container with a good internet connection, downloading a file with plenty of seeders. As the torrent approaches 99% completion, the download speed slows, and the completion rate then drops back to 98% or even 97% before climbing back up. Similarly, the total downloaded data fluctuates between 4.83 and 4.88 GB. This problem previously occurred at lower levels of completion, but was mitigated by removing other torrents; that increased the download rate but didn't stop data from disappearing. The data is stored in a host volume mapped to a container directory.
What could cause the data to disappear, and how can this be prevented?
Not sure this is relevant, but the transmission container connects to the internet through another container running nordvpn.
transmission-remote version 3.00
docker version 20.10.10
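For reference, the setup being described is roughly the following; the image names, host path and container path below are placeholders, not the actual configuration in use:

    # VPN container that owns the network stack (image name is a placeholder).
    docker run -d --name nordvpn --cap-add=NET_ADMIN some/nordvpn-image

    # Transmission shares the VPN container's network namespace, so all of its
    # traffic goes through the VPN; in-progress and completed downloads land in
    # a host directory mapped into the container as a volume.
    docker run -d --name transmission \
      --network container:nordvpn \
      -v /srv/torrents:/downloads \
      some/transmission-image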
I had an experience on 2021-09-07: a freshly created docker image was downloading very slowly via docker pull (from hub.docker.com)...
The last layer was the obstacle: it took 40-50 minutes to finish. What could be the reason?
e249e58386a8: Downloading [===> ] 83.73MB/303.3MB
Check your internet connectivity, especially if you are behind a proxy or firewall. (If the firewall has rules, ask the admin to whitelist hub.docker.com; this could be the reason. See the quick checks below.)
Also check your PC's firewall, antivirus, etc.
Restart the node and check again.
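A couple of quick checks from the affected host can help narrow this down (assuming curl and systemd are available; an HTTP 401 from the Docker Hub registry endpoint is expected and simply means it is reachable):

    # Is Docker Hub's registry endpoint reachable at all? A "401 Unauthorized"
    # response here is normal and means basic connectivity is fine.
    curl -I https://registry-1.docker.io/v2/

    # If the Docker daemon is configured to use a proxy, the daemon's settings
    # (not the shell's) are what matter for pulls:
    systemctl show --property=Environment docker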
We are using a self-hosted version of JFrog Artifactory. For some reason Artifactory went down, which we were able to resolve by restarting the server. After the reboot everything seemed to come back up correctly; however, we realized that anything using a particular layer, ddad3d7c1e96 (which is actually alpine 3.11.11 pulled from Docker Hub), was now getting stuck at "pulling fs layer" and would just sit there and keep retrying.
We tried "Zapping Caches" and running the maintenance option for Garbage Collection/Unused Artifacts, but we were still unable to download any image that contained this particular layer.
Is it possible that this layer was somehow cached in Artifactory and ended up corrupted? We also looked through the filestore but were unable to find the blob.
After a few days the issue resolved itself and images can now be pulled, but we are left without an explanation...
Does anyone have an idea of what could have happened, and how we can clear out cached parent images which are not directly in our container registry but are pulled from Docker Hub or another registry?
We ran into a similar issue. Zapping caches worked for us too; it just took some time to complete.
We're setting up a server to host Windows containers.
This server gets its images from an internal Docker registry we have set up.
The issue is that the server is unable to pull down images because it's trying to get a base image from the internet, and the server has no internet connection.
I found a troubleshooting script from Microsoft and noticed one passage:
At least one of 'microsoft/windowsservercore' or 'microsoft/nanoserver' should be installed
Try docker pull microsoft/nanoserver or docker pull microsoft/windowsservercore to pull a Windows container image
Since my PC has an internet connection, I downloaded these images and pushed them to the registry, but pulling them on the new server fails:
The description for Event ID '1' in Source 'docker' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event:'Error initiating layer download: Get https://go.microsoft.com/fwlink/?linkid=860052: dial tcp 23.207.173.222:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.'
That link it's trying to get is a base image on the internet, but I thought the registry was storing the complete image, so what gives? Is it really not possible to store the base images in a registry?
Doing some reading I found this: https://docs.docker.com/registry/deploying/#considerations-for-air-gapped-registries
Certain images, such as the official Microsoft Windows base images, are not distributable. This means that when you push an image based on one of these images to your private registry, the non-distributable layers are not pushed, but are always fetched from their authorized location. This is fine for internet-connected hosts, but will not work in an air-gapped set-up.
The doc then details how to set up the registry to store non-distributable layers, but it also says to be mindful of the terms of use for those layers.
So two possible solutions are:
Make sure you are allowed to store the non-distributable layers, then reconfigure things so that those layers are pushed to and stored in the registry (see the sketch below)
Connect the server to the internet, download the base images, then use those images
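For the first option, a rough sketch of the reconfiguration described in the linked doc; registry.internal:5000 is a hypothetical name for the internal registry, and the file path and restart command assume a Linux machine doing the pushing (on Windows the file is C:\ProgramData\docker\config\daemon.json and the daemon is restarted with Restart-Service docker):

    # On the machine that pushes to the internal registry, allow the Docker daemon
    # to push non-distributable (foreign) layers to that registry by adding this to
    # /etc/docker/daemon.json, then restart the daemon:
    #   {
    #     "allow-nondistributable-artifacts": ["registry.internal:5000"]
    #   }
    sudo systemctl restart docker

    # Re-tag and push the base image; with the setting above the Windows base
    # layers are pushed into the registry instead of being referenced by URL.
    docker pull microsoft/windowsservercore
    docker tag microsoft/windowsservercore registry.internal:5000/microsoft/windowsservercore
    docker push registry.internal:5000/microsoft/windowsservercore

The second option only needs a temporary internet connection on the server: once the base layers are in the local cache, later pulls from the internal registry should be able to reuse them instead of reaching out to the internet.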