I have a quirky bug with docker pull.
I create two images for a project on one local server, as two repos on the same Docker registry service; for this description, we'll call them api and templates. I then log into remote servers to pull and deploy those images as containers.
For the first one, I do a pull like:
docker pull 10.9.8.7:5000/api:api-1
api-1 is the tag, and I can pull that one just fine.
For the templates pull,
docker pull 10.9.8.7:5000/templates:template-1
The pull starts and then goes into this hellish waiting/retry routine, finally ending in: unexpected EOF
I'm pulling from the same Docker registry, so it seems like I should have problems with both or neither.
I've seen this bug come up in my searches, with many different suggestions, but I'm looking for any input on why the pull works for one repo and not the other.
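One way to narrow down which side is failing is to query the registry's HTTP API directly. A minimal sketch, assuming the registry at 10.9.8.7:5000 is a standard v2 registry served over plain HTTP:
# List the tags each repo actually holds; a missing tag shows up here
curl -s http://10.9.8.7:5000/v2/api/tags/list
curl -s http://10.9.8.7:5000/v2/templates/tags/list
# Fetch the manifest headers for the failing tag; a truncated or 5xx response
# here points at the registry side rather than the docker client
curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  http://10.9.8.7:5000/v2/templates/manifests/template-1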
I have Docker Desktop, but I create my images on a Linux ARM64 machine, not on the MacBook that runs the Docker Desktop application. I want to push these ARM64 images from the Linux host itself, but I have run into the following problem:
When I push my image to my Docker Hub private repo with the command:
docker push myDockerHubUserName/myPrivateRepoName:tagOfImage
It fails with the error:
invalid reference format
The form of the command that Docker provides in my Docker account is described as:
Docker commands
To push a new tag to this repository,
docker push myDockerHubUserName/myPrivateRepoName:tagname
I've double-checked the values and syntax: ALL are 100% correct. But the push nonetheless fails.
How is this broken?!?!
Intro:
The error:
invalid reference format
is actually a red herring: it misleads you into believing there is a syntax error in your push command when there isn't, and you'll waste your time looking for one...
Problem:
Feel free to jump to the "Solution" section below if you don't care how the command fails and just want to know how to make it execute successfully ;-)
The instructions for pushing an image that Docker provides in users' accounts have some large and material gaps. You will waste your time (a fair bit of it) if you do not have the following context:
You must log in to your Docker account before trying to push anything to it:
docker login -u yourDockerAccountUsername
The command Docker Hub gives you implies that the image you're trying to push was already TAGGED with the private repo as part of the tag itself.
It just appears to be a string composed of three parts:
<dockerUserName>/<privateRepoName>:<tagname>
It is NOT. You CANNOT merely concatenate
"myDockerHubUserName/myPrivateRepoName"
with
"tagname"
delimited with a colon! "myDockerHubUserName/myPrivateRepoName" must itself be part of the image's tag, or the command will fail!
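To make that concrete, a minimal sketch (myImage is a hypothetical local image name):
# Pushing by repo path alone fails if no local image carries that exact reference:
docker push myDockerHubUserName/myPrivateRepoName:tagname
# Tag first, so the repo path becomes part of the image reference, then push:
docker tag myImage:latest myDockerHubUserName/myPrivateRepoName:tagname
docker push myDockerHubUserName/myPrivateRepoName:tagname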
This "help" from Docker I found to be worse than giving no "help" at all because it served to create undue confusion. This is absolutely fundamental stuff and deserves better treatment.
Solution:
Log in to your Docker account:
docker login -u yourDockerAccountUsername
Get the Image ID for the Image that you want to push:
docker image ls
Tag the image with your Docker Hub User ID, Docker Hub Repo Name AND Image Name:
docker tag ae1b95b73ef7 myDockerHubUserName/myPrivateRepoName:myImageName
Push the Image:
docker push myDockerHubUserName/myPrivateRepoName:myImageName
Note the COLON separating the repo & image name: I've seen this described as a forward slash in other answers but I found a colon is required for the command to succeed.
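To sanity-check that the tag took effect before pushing, you can list images by the full repo path:
# Should show the image you just tagged
docker image ls myDockerHubUserName/myPrivateRepoName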
Conclusion:
This was a big and unnecessary time-waster that sent me down the rabbit hole. Hopefully this answer will save others wasted time.
Attribution:
This answer was actually pieced together from several different Stack questions which got me across the finish line. Kudos to Abhishek Dasgupta for the general procedure & kudos to Илья Хоришко for the form of the tag which worked in the end. You folks are stars!
My overall goal is to install a self-hosted gitlab-runner that is restricted to using prepared Docker images from my own Docker registry only.
For that I have a systemd configuration that looks like:
/etc/systemd/system/docker.service.d/allow-private-registry-only.conf
BLOCK_REGISTRY='--block-registry=all'
ADD_REGISTRY='--add-registry=my.private.registry:8080'
With this, docker pull is allowed to pull images from my.private.registry only.
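For reference, a fuller sketch of wiring those variables in as a systemd drop-in; this assumes the Red Hat/Project Atomic docker package, whose unit file references $ADD_REGISTRY and $BLOCK_REGISTRY (upstream dockerd has no --block-registry flag):
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/allow-private-registry-only.conf
[Service]
Environment="BLOCK_REGISTRY=--block-registry=all"
Environment="ADD_REGISTRY=--add-registry=my.private.registry:8080"
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker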
After I had managed to get this working, I wanted to clean up my local registry and remove old Docker images. It was during that process that I stumbled over a Docker image named gitlab/gitlab-runner-helper, which presumably is some component used by the gitlab-runner itself and has presumably been pulled from docker.io.
Now I'm wondering if it is even possible/advisable to block images from docker.io when using a gitlab-runner?
Any hints are appreciated!
I feel it is my sovereign duty to extend the accepted answer (it is great, btw), because the word 'handle' by itself tells us not much; it is too abstract. Let me explain the whole flow in far more detail:
When the build is about to begin, gitlab-runner creates a Docker volume (you can observe it with docker volume ls if you want). This volume will serve as storage for the caches and artifacts that you use during the build.
The second thing: you will have at least 2 containers involved in each stage: the gitlab-runner-helper container and the container created from the image you specified (in .gitlab-ci.yml or in config.toml). What the gitlab-runner-helper container does is, essentially, just clone the remote git repository (the one you are building) into the aforementioned Docker volume, along with caches and artifacts.
It can do this because the gitlab-runner-helper image itself contains 2 important utilities: git (obviously, to clone the repo) and the gitlab-runner-helper binary (this utility can pull and push artifacts and caches).
The gitlab-runner-helper container starts before each stage for a couple of seconds, to pull artifacts and caches, and then terminates. After that, the container created from the image you specified will be launched, and it will also have this volume (from step 1) attached; this is how it receives artifacts, btw.
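You can watch all of this happen on the runner host while a job is running; a small sketch (the helper image name and tag may differ on your setup):
# The cache/artifact volumes the runner created
docker volume ls
# The short-lived helper containers, filtered by the helper image in use
docker ps -a --filter "ancestor=gitlab/gitlab-runner-helper"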
The rest of the details, about the registry from which gitlab-runner-helper gets pulled, are described by @Nicolas pretty well. I am appending this answer just for someone who, perhaps, wants to know what exactly this sneaky 'handle' word means.
Hope it helps, have a nice day, my friend!
The gitlab-runner-helper image is used by GitLab Runner to handle Git, artifact, and cache operations for the docker, docker+machine, and kubernetes executors.
Since you prefer pulling images from a private registry, you can override the helper image. Your configuration could be:
[[runners]]
  (...)
  executor = "docker"
  [runners.docker]
    (...)
    helper_image = "my.private.registry:8080/gitlab/gitlab-runner-helper:tag"
Please ensure the image is present in your registry, or that your configuration enables proxying Docker Hub or registry.gitlab.com. For the latter, you need to run at least GitLab Runner version 13.7 and have enabled the FF_GITLAB_REGISTRY_HELPER_IMAGE feature flag.
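If the helper image is not in your registry yet, one way to seed it is to pull it from GitLab's registry, retag it, and push it. A sketch, where the tag must be adjusted to match your runner version and CPU architecture:
docker pull registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-v13.7.0
docker tag registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-v13.7.0 my.private.registry:8080/gitlab/gitlab-runner-helper:x86_64-v13.7.0
docker push my.private.registry:8080/gitlab/gitlab-runner-helper:x86_64-v13.7.0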
I have two docker repositories running on the same JFrog cloud account/instance. One for internal release candidates and the other for potentially external GC releases. I want to be able to build the docker images and push to the internal repository, let QA/UAT go to town, and then copy the image to the release repository. I don't want to rebuild the image from source. Unfortunately, when I try to pull, tag and then push the image, I'm getting an error:
unauthorized: Pushing Docker images with manifest v2 schema 1 to this repository is blocked.
Both repositories block schema 1 manifests, but I am pushing fine to the internal repository, so it doesn't make much sense I wouldn't be able to push the same image to the release repository.
I've set up a pretty simple test to confirm (actual repository URLs censored):
% docker pull hello-world:latest
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
...
% docker tag hello-world:latest internal-rc.jfrog.io/hello-world:1.0.0-beta
% docker push internal-rc.jfrog.io/hello-world:1.0.0-beta
The push refers to repository [internal-rc.jfrog.io/hello-world]
9c27e219663c: Pushed
...
% docker system prune -a
...
Total reclaimed space: 131.8MB
% docker image pull internal-rc.jfrog.io/hello-world:1.0.0-beta
1.0.0-beta: Pulling from hello-world
0e03bdcc26d7: Pull complete
...
% docker image tag internal-rc.jfrog.io/hello-world:1.0.0-beta docker-release.jfrog.io/hello-world:1.0.0
% docker image push docker-release.jfrog.io/hello-world:1.0.0
The push refers to repository [docker-release.jfrog.io/hello-world]
9c27e219663c: Layer already exists
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the docker-release.jfrog.io registry NOW to avoid future disruption. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
unauthorized: Pushing Docker images with manifest v2 schema 1 to this repository is blocked. For more information visit https://www.jfrog.com/confluence/display/RTF/Advanced+Topics#AdvancedTopics-DockerManifestV2Schema1Deprecation
So I can upload the image fine to the first repository, and confirm that it is using schema 2:
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 7004,
    "digest": "sha256:66f750f4871ba45724699d7341ee7135caba46f63fb205351197464a66b55eff"
...
Does that mediaType being v1 matter? It seems like the manifest itself is version 2... But I don't know how I would change that, or why it would be allowed in one repository but not the other.
I'm using, I believe, the latest version of Docker: Docker version 19.03.8, build afacb8b.
Anyone have any idea what's going on there? Is the schema version being changed between when I upload it the first time and when I download it? Or is it when I tag it or upload it the second time?
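One way to see which manifest schema a registry actually serves for a tag is to ask for the Content-Type header directly. A sketch using the standard registry v2 API, with basic auth assumed:
curl -sI -u user:password \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://internal-rc.jfrog.io/v2/hello-world/manifests/1.0.0-beta | grep -i content-type
# schema 2 answers application/vnd.docker.distribution.manifest.v2+json;
# schema 1 answers application/vnd.docker.distribution.manifest.v1+prettyjws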
The root of this problem can probably be classified as user error. Specifically, the user I'm using somehow had permissions removed from the release repository. Once that was restored, everything worked as expected.
I say "probably" because the error message has nothing to do with the actual problem, and cost me 2-3 hours worth of wild goose chasing.
So... if you see this error, go ahead and double-check everything around permissions/access before trying to figure out whether there's actually something wrong with your image's schema version.
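A quick permission sanity check along those lines, using the hosts from the question above:
# Re-authenticate against the release registry specifically, then retry the push
docker logout docker-release.jfrog.io
docker login docker-release.jfrog.io
docker push docker-release.jfrog.io/hello-world:1.0.0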
We had a different case today with a similar error. I'm adding here because this is the top google result at the moment.
Pulling Docker images with manifest v2 schema 1 to this repository is blocked.
The fix was to change a setting on the remote repository.
Via UI:
Artifactory Admin -> Repositories -> Repositories -> Remote tab
Then select your Docker Hub repo, whatever you named it, then under Basic settings -> Docker Settings, uncheck the checkbox labeled
Block pulling of image manifest v2 schema 1
After that our images began pulling properly again.
There is a similar checkbox on local repos for pushing.
For what it's worth, we're on Artifactory version 7.18.5 rev 71805900
edit: The surprisingness of our particular issue is (potentially) explained in some more detail here: https://www.jfrog.com/jira/browse/RTFACT-2591
Docker pull requests fail due to a change in Docker Hub behavior. Now Docker Hub HTTP response headers return in lower case, for example, 'content-type' instead of 'Content-Type', causing Artifactory to fail to download and cache Docker images from Docker Hub.
but we have not yet tested whether an upgrade allows us to re-enable the aforementioned checkbox.
I had been getting the errors below while either pulling or pushing Docker images from build servers. I have a proxy in the environment that is used to connect to the Docker registry. When resolving the proxy FQDN, my DNS server was returning a non-functional IP address. I have 4 DNS servers and multiple proxy servers based on region. Once DNS was updated and a working/functional proxy was returned, it started working. So check the network side too; it may solve the issue. The error messages were misleading: initially I thought it was a Docker layer issue or a credential issue, but no, it was a network issue. The errors were:
error pulling image configuration: unknown blob
or
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the docker registry NOW to avoid future disruption. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
manifest invalid: manifest invalid
. Will start No.6 try.
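A couple of quick network checks along those lines (proxy.example.com is a placeholder for your proxy FQDN):
# Does each DNS server return a sane address for the proxy?
nslookup proxy.example.com
# Can the proxy actually reach the registry? An HTTP 401 from this endpoint
# is fine; timeouts or connection resets are not.
curl -x http://proxy.example.com:3128 -sI https://registry-1.docker.io/v2/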
I have a problem with Docker that seems to happen when I change the machine type of a Google Cloud Platform VM instance. Images that were fine fail to run, fail to delete, and fail to pull, all with various obscure messages about missing keys (this is on Linux), duplicate or missing layers, and others I don't recall.
The errors don't always happen. One that occurred just now, with an image that ran a couple hundred times yesterday on the same setup, though before a restart, was:
$ docker run --rm -it mbloore/model:conda4.3.1-aq0.1.9
docker: Error response from daemon: layer does not exist.
$ docker pull mbloore/model:conda4.3.1-aq0.1.9
conda4.3.1-aq0.1.9: Pulling from mbloore/model
Digest: sha256:4d203b18fd57f9d867086cc0c97476750b42a86f32d8a9f55976afa59e699b28
Status: Image is up to date for mbloore/model:conda4.3.1-aq0.1.9
$ docker rmi mbloore/model:conda4.3.1-aq0.1.9
Error response from daemon: unrecognized image ID sha256:8315bb7add4fea22d760097bc377dbc6d9f5572bd71e98911e8080924724554e
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$
So it thinks it has no images, but the Docker folders are full of files, and it does know some hashes. It looks like some index has been damaged.
I restarted that instance, and then Docker seemed to be normal again without any special action on my part.
The only workarounds I have found so far are to restart and hope, or to delete several large Docker directories and recreate them empty. Then, after a restart, pull and run work again. But I'm now not sure that they always will.
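For reference, the blunt reset looks roughly like this; it destroys all local images, containers, and volumes, so treat it as a last resort:
sudo systemctl stop docker
# Wipes ALL local Docker state
sudo rm -rf /var/lib/docker
sudo systemctl start docker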
I am running with Docker version 17.05.0-ce on Debian 9. My images were built with Docker version 17.03.2-ce on Amazon Linux, and are based on the official Ubuntu image.
Has anyone had this kind of problem, or know a way to reset the state of Docker without deleting almost everything?
Two points:
1) It seems that changing the VM had nothing to do with it. On some boots Docker worked, on others not, with no change in configuration or contents.
2) At Google's suggestion I installed Stackdriver monitoring and logging agents, and I haven't had a problem through seven restarts so far.
My first guess is that there is a race condition on startup, and adding those agents altered it in my favour. Of course, I'd like to have a real fix, but for now I don't have the time to pursue the problem.
I have a Docker image that I'd like to push to Docker Hub:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mattthomson/hadoop-java8 0.1 d9926f422c14 11 days ago 857.9 MB
I've run docker login, logged in as mattthomson, and run docker push mattthomson/hadoop-java8:0.1. This takes a while, showing a progress bar of the upload.
However, it seems not to have worked. If I run docker pull mattthomson/hadoop-java8:0.1 from another computer, I get "Tag 0.1 not found in repository mattthomson/hadoop-java8". The image doesn't show up here, either.
What am I doing wrong?
I had to confirm my email address before my repositories would show up on Docker Hub. Simple, but I didn't notice it at first.
I experienced this when my Docker Hub organisation had reached its limit on the number of private repositories I was allowed to create (and I also got an email from Docker Hub about reaching this limit).
To solve the problem I upgraded my Docker Hub subscription to allow more repositories.
It would be nice if the error message from Docker Hub contained some hint to the cause.
I retried a number of times before this went through successfully. I was misled by the fact that the upload was failing partway through without displaying an error message, just a timestamp; the exit code was non-zero.
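A small sketch for catching that kind of silently failing push by its exit code:
# Surfaces the non-zero exit code even when the CLI prints no error text
docker push mattthomson/hadoop-java8:0.1 || echo "push failed with exit code $?"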