I've installed the newest version of Docker for Windows. Everything works except docker pull: when I try to pull any image, I always get a network timeout error, even though I know my network is fine and I'm not behind any proxy. I don't know what's wrong. Does anybody have any idea?
I've been using Nexus as a Docker repository for a while to mitigate flaky internet. However, I've recently hit an issue that seems a bit weird. If I run:
docker pull server:8042/alpine:3.16.2
it works fine and it all gets cached. However, if I try to run
docker pull server:8042/sameersbn/gitlab:15.0.3
I get the following error:
Error response from daemon: unknown image in /v1.41/images/create?fromImage=server%3A8042%2Fsameersbn%2Fgitlab&tag=15.0.3
Running a direct pull from Docker Hub works fine, but through the cache, any nested tag with a username (namespace) fails. I'm using engine 20.10.20, if that helps.
Thanks in advance
This appears to be a bug introduced somewhere between engine 20.10.17 and 20.10.20. Rolling back to an earlier version makes everything work again. I have also reported this to Docker; however, as I'm not a paid member, I suspect it will go unnoticed.
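For anyone needing the same workaround, the rollback amounts to pinning an older engine version. A sketch for Debian/Ubuntu-style installs (the exact version string below is illustrative; check apt-cache madison for what your repository actually offers):

$ apt-cache madison docker-ce
$ sudo apt-get install docker-ce=5:20.10.17~3-0~ubuntu-jammy \
      docker-ce-cli=5:20.10.17~3-0~ubuntu-jammy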
We are using a self-hosted version of JFrog Artifactory. For some reason Artifactory went down, which we were able to resolve by restarting the server. After the reboot everything seemed to come back up correctly; however, we realized that anything using a particular layer, ddad3d7c1e96 (which actually pulls down alpine 3.11.11 from Docker Hub), was now getting stuck at "Pulling fs layer" and would just sit and keep retrying.
We tried "Zapping Caches" and ran the maintenance option for Garbage Collection/Unused Artifacts, but we were still unable to download any image that contained this particular layer.
Is it possible that this layer was somehow cached in Artifactory and ended up corrupted? We also looked through the filestore but were unsuccessful in finding the blob.
After a few days the issue resolved itself and images can now be pulled, but we are left without an explanation...
Does anyone have an idea of what could have happened, and how we can clear out cached parent images that are not directly in our container registry but are pulled through from Docker Hub or another registry?
We ran into a similar issue: zapping caches worked for us too, though it took some time to complete.
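If zapping caches stalls or the layer stays stuck, another option worth trying (an unverified sketch: it assumes a remote Docker repository named docker-remote, whose cache Artifactory exposes as docker-remote-cache, and admin credentials) is deleting the cached image path through the REST API, so the next pull re-fetches it from Docker Hub:

$ curl -u admin:password -X DELETE \
      "https://artifactory.example.com/artifactory/docker-remote-cache/library/alpine/3.11.11"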
I specified a Docker image when creating a small VM. Because of this feature, I expected a fairly hands-off way of updating the container to the latest image, but I can't find any documentation on how to do that, or at least a method that works. The documentation says that updating the configuration will cause the container to be updated to the latest image and the VM to be stopped and restarted, but this doesn't happen.
I've only been able to update the container by using the Cloud Shell from the container registry page. Am I missing a more obvious way to do this?
Docker Tags
The version of the image is specified in the tag.
If you want the most recent, use the latest tag.
Otherwise, a version can be specified.
Example:
fedora/httpd:version1.0 will pull the fedora/httpd image tagged version1.0.
fedora/httpd:latest will pull whatever is currently tagged latest. By convention that is the most recently pushed image, but latest is just a tag like any other, not a guarantee.
Check what versioning format your image is using, and specify a version when pulling the image.
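For instance, with the illustrative image name from above (omitting the tag entirely is equivalent to asking for :latest):

$ docker pull fedora/httpd:version1.0   # pin an explicit version
$ docker pull fedora/httpd:latest       # pull whatever is tagged latest
$ docker pull fedora/httpd              # no tag given defaults to :latest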
Updating your container to use the newest image
To do this, you likely just need to stop the container, specify the image you want to use, and run a new container.
The key here is to trigger a new pull from the registry. If you are using the latest tag, the latest image will be pulled from the registry. Your important data and configuration should all be made persistent through volume mounts, etc., so you should be able to plug and play with the new image.
If you are looking for the simplest way, try writing a script that stops your running container, pulls the latest image, and runs it again, as sketched below.
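A minimal sketch of such a script (image, container, and volume names are placeholders; adjust them to your setup):

#!/bin/sh
# Pull the newest image, then replace the running container with it.
docker pull myregistry/myapp:latest
docker stop myapp 2>/dev/null || true   # ignore errors if nothing is running
docker rm myapp 2>/dev/null || true
docker run -d --name myapp -v myapp-data:/data myregistry/myapp:latest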
It is slightly challenging to give an exact answer on this issue because there are a couple of ways to go about it.
Documentation for Docker Pull and Docker Tag
Use a specific tag for the image instead of latest.
Use something like this:
Image: name_of_image:1.0
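If the VM in question is a Google Compute Engine instance created with a container image (an assumption; the question doesn't name the platform), the declared image can also be updated from the CLI, which redeploys the container from the new image (instance and image names here are hypothetical):

$ gcloud compute instances update-container my-vm \
      --container-image=gcr.io/my-project/my-app:1.0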
I have a problem with Docker that seems to happen when I change the machine type of a Google Cloud Platform VM instance. Images that were fine fail to run, fail to delete, and fail to pull, all with various obscure messages about missing keys (this on Linux), duplicate or missing layers, and others I don't recall.
The errors don't always happen. One that occurred just now, with an image that ran a couple hundred times yesterday on the same setup, though before a restart, was:
$ docker run --rm -it mbloore/model:conda4.3.1-aq0.1.9
docker: Error response from daemon: layer does not exist.
$ docker pull mbloore/model:conda4.3.1-aq0.1.9
conda4.3.1-aq0.1.9: Pulling from mbloore/model
Digest: sha256:4d203b18fd57f9d867086cc0c97476750b42a86f32d8a9f55976afa59e699b28
Status: Image is up to date for mbloore/model:conda4.3.1-aq0.1.9
$ docker rmi mbloore/model:conda4.3.1-aq0.1.9
Error response from daemon: unrecognized image ID sha256:8315bb7add4fea22d760097bc377dbc6d9f5572bd71e98911e8080924724554e
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$
So it thinks it has no images, but the Docker folders are full of files, and it does know some hashes. It looks like some index has been damaged.
I restarted that instance, and then Docker seemed to be normal again without any special action on my part.
The only workarounds I have found so far are to restart and hope, or to delete several large Docker directories and recreate them empty; after a restart, pull and run work again. But I'm now not sure that they always will.
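For reference, the delete-and-recreate workaround amounts to something like this (a sketch; it discards all local images, containers, and volumes, so move rather than remove if you want a way back):

$ sudo systemctl stop docker
$ sudo mv /var/lib/docker /var/lib/docker.bak
$ sudo systemctl start docker   # Docker recreates an empty state on startup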
I am running with Docker version 17.05.0-ce on Debian 9. My images were built with Docker version 17.03.2-ce on Amazon Linux, and are based on the official Ubuntu image.
Has anyone had this kind of problem, or know a way to reset the state of Docker without deleting almost everything?
Two points:
1) It seems that changing the VM had nothing to do with it. On some boots Docker worked, on others not, with no change in configuration or contents.
2) At Google's suggestion I installed Stackdriver monitoring and logging agents, and I haven't had a problem through seven restarts so far.
My first guess is that there is a race condition on startup, and adding those agents altered it in my favour. Of course, I'd like to have a real fix, but for now I don't have the time to pursue the problem.
I started using Docker a few days ago, so I'm still a newbie in this domain; I apologize if my questions seem obvious, because so far most of them aren't obvious to me.
My goal is to create a custom image from a Rails application, push it up to Docker Hub, then pull it from a server and simply run it.
I used this doc to create my image, except that I chose to use MariaDB (works fine). So far, my project only contains a CRUD scaffold that works nicely.
I then pushed it to a private repository on Docker Hub using this link. Again, no problem: the hub tells me the push went okay, and so does my console.
Then I connected to a private server running Debian, pulled the project from the hub, and made sure it existed using docker images.
My main question is the following: what should I do next?
If I follow the first link, I create the Rails project from a close-to-empty Gemfile, then synchronise the local files with the image. However, on my server, all I have is an empty directory. If I'm not mistaken, redoing Docker's tutorial would "reset" my image.
This is where I'm currently lost: what should I do now? I don't believe that running docker run repo/my-image rails server is the right solution here.
Thank you in advance
You are doing well so far. Now think about why you pushed the image to a private repository: so that you and others with access to the repo can pull the image and create containers from it.
The point where you got lost is exactly what you should do next, i.e. execute docker run.
Regarding "redoing Docker's tutorial will 'reset' my image":
Docker is smart enough to download an image once and reuse it. Resetting will remove your locally downloaded images, but it won't remove them from the private repo.
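As a concrete sketch (assuming the image doesn't already start the server via its CMD/ENTRYPOINT, and that Rails listens on its default port 3000; binding to 0.0.0.0 makes the server reachable from outside the container):

$ docker run -d -p 3000:3000 --name my-rails-app repo/my-image \
      bin/rails server -b 0.0.0.0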