docker pull twice from remote registry - docker

With the latest Docker, I encountered the following issue.
docker pull mongo:4.0.10
4.0.10: Pulling from library/mongo
f7277927d38a: Pull complete
8d3eac894db4: Downloading
edf72af6d627: Download complete
3e4f86211d23: Download complete
5747135f14d2: Download complete
f56f2c3793f6: Download complete
f8b941527f3a: Download complete
4000e5ef59f4: Download complete
ad518e2379cf: Download complete
919225fc3685: Download complete
45ff8d51e53a: Download complete
4d3342ddfd7b: Download complete
26002f176fca: Download complete
4.0.10: Pulling from library/mongo
f7277927d38a: Pulling fs layer
8d3eac894db4: Pulling fs layer
edf72af6d627: Pulling fs layer
When I pull an image, it is pulled from my registry-mirrors first (quickly), and then from the official hub (I guess; very slow).
I did not have this problem before.
The Docker version I was using at the time (Docker for Windows):
docker -v
Docker version 19.03.13-beta2, build ff3fbc9d55
Update: It occurred again today. I am not sure whether something changed the config and affected Docker; I have been playing with Minikube and Kind these days.
Update: I created an issue (moby/moby#41547); please vote for it if you are encountering the same problem.

I have the same issue as you (I am in China).
After some research, below is the reason why Docker pulls twice.
8d3eac894db4: Downloading
This means this layer could not be downloaded from your registry-mirrors.
So after a timeout, Docker pulls the mongo image from the official Docker Hub instead.
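To see which mirrors your daemon is actually configured with before it falls back to Docker Hub, you can ask the daemon directly. A minimal sketch (the mirror URL in the comment is a placeholder, and the command requires a running Docker daemon):

```shell
# Print the registry mirrors the Docker daemon currently knows about
docker info --format '{{.RegistryConfig.Mirrors}}'

# Mirrors come from daemon.json, e.g.:
#   { "registry-mirrors": ["https://mirror.example.com"] }
# Removing a dead mirror (and restarting the daemon) avoids the
# slow timeout-then-fallback behaviour described above.
```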

Related

Why can't I pull the latest fmriprep singularity version?

I tried to pull the latest version of the Singularity image for fmriprep into an HPC, which to my understanding is 21.0.1.
I did it using the following bash script:
module load singularity
singularity pull --name fmriprep_latest.sif docker://poldracklab/fmriprep:latest
Unfortunately, for some reason, it pulled a very old and deprecated version of fmriprep.
In addition, when I try to specify a version (e.g., docker://poldracklab/fmriprep:20.2.3), I get an error message saying that the manifest is unknown.
Any idea how I can pull the latest version?
If you don't specify a different registry, Singularity fetches the image from Docker Hub. It is pulling the tags you specify, but the images available on Docker Hub are quite old.
https://hub.docker.com/r/poldracklab/fmriprep/tags
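You can confirm which tags actually exist on Docker Hub without pulling anything, e.g. by querying the Docker Hub API. A sketch (requires network access; the grep is a rough filter over the JSON response, not a robust parser):

```shell
# List the most recent tags of poldracklab/fmriprep via the Docker Hub API
curl -s 'https://hub.docker.com/v2/repositories/poldracklab/fmriprep/tags?page_size=10' \
  | grep -o '"name":"[^"]*"'
```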
This may be a little late, but I had the same issue. To draw on what @tsnowlan says above, you can obtain the image from the nipreps registry. Here is what I used:
singularity build fmriprep-21.0.1.simg docker://nipreps/fmriprep:21.0.1

docker image stuck at pulling fs layer

We are using a self-hosted version of JFrog Artifactory. For some reason Artifactory went down, which we were able to resolve by restarting the server. After the reboot everything seemed to come back up correctly; however, we realized that anything using a particular layer, ddad3d7c1e96 (which actually pulls down alpine 3.11.11 from Docker Hub), was now getting stuck at "pulling fs layer" and would just sit and keep retrying.
We tried "Zapping Caches" and running the maintenance option for Garbage Collection/Unused artifacts but we were still unable to download any image that had this particular layer.
Is it possible that this layer was somehow cached in Artifactory and ended up being corrupted? We also looked through the filestore but were unsuccessful in finding the blob.
After a few days, the issue resolved itself and images can now be pulled but we are now left without an explanation...
Does anyone have an idea of what could have happened, and how we can clear out cached parent images which are not directly in our container registry but are pulled from Docker Hub or another registry?
We ran into a similar issue. Zapping caches worked for us; however, it took some time to complete.

GitHub Packages Docker - Error pulling image configuration: unknown blob

GitHub Packages started returning "error pulling image configuration: unknown blob" this weekend when trying to pull Docker images. Pushing images to the registry still works. I haven't found any information pointing to problems at GitHub.
000eee12ec04: Pulling fs layer
db438065d064: Pulling fs layer
e345d85b1d3e: Pulling fs layer
f6285e273036: Waiting
2354ee191574: Waiting
69189c7cf8d6: Waiting
771c701acbb7: Waiting
error pulling image configuration: unknown blob
How do I troubleshoot this?
This is the result of a failed push: the push appears to have been successful, but something went wrong on the registry side and something is missing.
To fix it, build your container again and push it again.
While this is likely a rare situation, you can test for it by deleting your image locally after pushing and pulling it again, to ensure pulls work as expected.
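A sketch of that build-push-verify round trip (the image name and registry host are placeholders; GitHub Packages' Docker registry host may differ for your setup):

```shell
# Rebuild the image and push it again
docker build -t docker.pkg.github.com/owner/repo/app:1.0 .
docker push docker.pkg.github.com/owner/repo/app:1.0

# Delete the local copy and pull it back, to verify the registry
# actually holds every blob, not just the manifest
docker rmi docker.pkg.github.com/owner/repo/app:1.0
docker pull docker.pkg.github.com/owner/repo/app:1.0
```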
One possible cause of failures when pulling or pushing image layers is an unreliable network connection, as outlined in this blog. By default, the Docker engine runs 5 parallel upload operations (and 3 parallel downloads).
You can configure the Docker engine to use only a single operation by setting max-concurrent-downloads for downloads or max-concurrent-uploads for uploads.
On Windows, update C:\Users\{username}\.docker\daemon.json or use the Docker Desktop GUI:
{
...
"max-concurrent-uploads": 1
}
On *nix, open /etc/docker/daemon.json (if the file doesn't exist in /etc/docker/, create it) and add the following values as needed:
{
...
"max-concurrent-uploads": 1
}
Then restart the daemon.
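On a systemd-based host that usually means (a sketch; `pidof` availability varies by distribution):

```shell
# Restart the daemon so the new daemon.json values take effect
sudo systemctl restart docker

# Alternatively, max-concurrent-uploads/downloads are among the options
# dockerd can reload without a full restart, by sending it SIGHUP:
sudo kill -SIGHUP "$(pidof dockerd)"
```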
Note: it is currently not possible to specify these options on the docker push or docker pull command line, as per this post.

how to run a big docker image on the Google Cloud Platform?

I would like to run a rather big Docker image (~6 GB). I can create the Docker image from a config file using the Google Cloud Platform Cloud Shell:
gcloud builds submit --timeout=36000 --tag gcr.io/docker-ml-dl-xxxx/docker-anaconda-env-ml-dl
This works perfectly fine and I can see the build is successful:
https://console.cloud.google.com/cloud-build/
I can also see my image in the Container Registry:
https://console.cloud.google.com/gcr/images/docker-ml-dl-xxxxx
So far so good. The issue is when I try to run this image from Cloud Shell:
xxxxx#cloudshell:~ (docker-ml-dl-xxxxx)$ docker run gcr.io/docker-ml-dl-xxxxx/docker-anaconda-env-ml-dl
Unable to find image 'gcr.io/docker-ml-dl-xxxx/docker-anaconda-env-ml-dl:latest' locally
latest: Pulling from docker-ml-dl-xxxx/docker-anaconda-env-ml-dl
993c50d47469: Pull complete
c71c2bfd82ad: Pull complete
05fbbe050330: Pull complete
5586ce1e5329: Pull complete
1faf1ec50c57: Pull complete
fda25b84aec7: Pull complete
b5b4ca70f42c: Extracting [=======================> ] 708MB/1.522GB
0088935a1845: Download complete
36f80eb6aa84: Download complete
b08b38d2d4a3: Download complete
5ae3364fe2cf: Download complete
25da48fc753b: Downloading [==================================================>] 5.857GB/5.857GB
302cfeb76ade: Download complete
1f6d69ed4c84: Download complete
58c798a01f92: Download complete
docker: write /var/lib/docker/tmp/GetImageBlob997013344: no space left on device.
See 'docker run --help'.
OK, so my docker image is too big to be run from Cloud Shell.
Is this correct?
What are the other/best options? (To be 100% clear: I can run the docker image on my Mac.)
My current idea is:
creating a custom VM
with 10 GB storage
installing all the software needed on this VM: docker, gcloud ...
I need to develop and run Machine Learning and Deep Learning code (this is the exploration phase, not the deployment phase with Kubernetes).
Is this the best way to work on the cloud?
The docker image is too big to run on Cloud Shell. You might run it on Kubernetes or Compute Engine instead, but since you're still in the early stages and you've already said you can run the tools you need locally, then this might not be necessary for your needs. Looking into the future, when you're more concerned with performance, you might want to consider a solution such as Cloud ML Engine or BigQuery ML.
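If you do go the Compute Engine route, a Container-Optimized OS VM with a larger boot disk can pull and run the image directly at boot. A hedged sketch (the instance name, machine type, and disk size are assumptions to adjust for your workload):

```shell
# Create a Container-Optimized OS VM that runs the image on startup;
# a 50 GB boot disk leaves room for the ~6 GB image plus its extracted layers
gcloud compute instances create-with-container ml-exploration-vm \
  --container-image=gcr.io/docker-ml-dl-xxxx/docker-anaconda-env-ml-dl \
  --machine-type=n1-standard-4 \
  --boot-disk-size=50GB
```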

How to completely destroy docker container from marathon UI?

I have a Docker registry from which Mesos pulls containers. My problem is: when I destroy the app from the Marathon UI and then call the Marathon REST API to deploy the app again with the same version, Mesos does not pull the image from the master Docker registry; it pulls the image from some local registry or cache. I noticed this because Mesos completes the task in seconds, whereas if I change the version it takes a good while to deploy.
Please let me know if anyone has a solution (or questions about the question), because I have read all the documents and didn't find a solution.
Thanks
Try setting the forcePullImage flag to true, as mentioned here. Force pull instructs the docker binary to pull the image from the registry even if it is already downloaded on the slave. Please refer to the corresponding documentation for how the docker pull command works.
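In the app definition, forcePullImage lives under container.docker. A sketch of updating an app via the Marathon REST API (the Marathon host, app id, and image name are placeholders; depending on your Marathon version you may need to send the full app definition rather than a partial one):

```shell
# Redeploy the app with forcePullImage enabled so Mesos always
# pulls the image from the registry instead of using a cached copy
curl -X PUT http://marathon.example.com:8080/v2/apps/myapp \
  -H 'Content-Type: application/json' \
  -d '{
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "registry.example.com/myapp:1.0",
        "forcePullImage": true
      }
    }
  }'
```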
