I have been having a lot of problems pushing images with gcloud docker push over the past few weeks. I've read through the many Stack Overflow discussions, GitHub issues, and workarounds, but I haven't come across a solution to the inconsistency yet.
Typically I will attempt to push a container image or two. The first push will almost always fail with the retry-until-timeout output shown in the screenshot (not reproduced here):
I can only get around it with gcloud auth login. At most 5 minutes later I will attempt to push a second image, and will again see the retry-until-timeout issue. I will see this on every attempt until I gcloud auth login again.
Often I will have to manually retry several more times immediately after authenticating before the image is actually pushed.
Am I actually being logged out (I can still access pods, instances, etc. with kubectl and gcloud)? If so, why is being logged out inconsistent, and what does building docker containers do that would invalidate my local gcloud session?
If not, why can't I gcloud docker push until I authenticate again? After that, why is this still inconsistent (I suspect it may have little or nothing to do with the real issue).
Is there a way to make pushing images on OSX with docker-machine and gcloud docker push reliable? Is there another way to get images to the cloud repository (preferably from the command line)?
gcloud --version
alpha 2016.01.12
beta 2016.01.12
bq 2.0.18
bq-nix 2.0.18
core 2016.02.11
core-nix 2016.02.05
gcloud
gsutil 4.16
gsutil-nix 4.15
kubectl
kubectl-darwin-x86_64 1.1.7
docker --version
Docker version 1.10.1, build 9e83765
docker-machine --version
docker-machine version 0.6.0, build e27fb87
virtualbox version 5.0.14 r105127
I had the same or a similar problem. After a few minutes in the retry loop depicted in the screenshot above, the command fails with net/http: TLS handshake timeout.
The solution that fixed it for me was editing the docker daemon configuration with
DOCKER_OPTS="--max-concurrent-uploads=1"
I had a feeling this issue was connected with docker clogging up the network, as I noticed that even browsing to Gmail could time out(!)
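If it helps, the same limit can also be set in the daemon's JSON config file instead of DOCKER_OPTS (a sketch, assuming a Linux host with systemd; file locations may differ on your setup). Put this in /etc/docker/daemon.json:
{
  "max-concurrent-uploads": 1
}
and then restart the daemon with sudo systemctl restart docker.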
Switching to regular docker push doesn't help with the timeouts. This appears to be related to your ISP and upload bandwidth.
I was receiving the same error. After moving the Docker build process to the cloud (which has a much larger upload pipe), gcloud docker builds and deploys the image just fine.
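If you want to try the same approach, one option is to have Google build and push the image server-side with Cloud Build; a sketch, assuming a recent gcloud, with my-project and my-image as placeholders:
gcloud builds submit --tag gcr.io/my-project/my-image .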
I never faced the problems you mentioned with gcloud docker, but regarding your last point,
Is there another way to get images to the cloud repository (preferably from the command line)?
it is indeed possible to push to the gcr.io repos without going through gcloud, e.g.:
docker login -e dummy@example.com -p $(gcloud auth print-access-token) -u _token https://gcr.io
docker push [your-image]
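Note that [your-image] needs to be tagged with the full gcr.io path for the push to be routed to the registry you just logged in to; a sketch with a placeholder project ID:
docker tag my-image gcr.io/my-project/my-image
docker push gcr.io/my-project/my-image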
Credits to mattmoor, more info in original answer here:
Access google container registry without the gcloud client
I recently joined a company that already has Google Tag Manager Server-Side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the docker image in question in our account's image repository. At the very least, I am trying to figure out how to check whether there is one, and how to correlate its digest with the published image at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it myself, both automatically provisioned in my own cloud account and by running the manual steps, and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company, as it is already live.
I suspect it is quite an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to container images and docker, I figured I could use Cloud Shell to check, but it seems that when the App Engine instance is set up with the provided shell script (located here), it doesn't really "load" a docker image the way a manual deployment would. At least I don't think so, because docker commands in the Cloud Shell of the GCP project running the App Engine flexible environment turn up nothing.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
To check which docker images your App Engine Flex instance uses, ssh into the instance. You can do this from the instance tab by choosing the correct service and version and clicking the SSH button, or by running this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully ssh'd into your instance, run the docker images command to list your docker images.
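To correlate what is running with the published gtm-cloud-image digest mentioned in the question, the --digests flag may help; a sketch (the image path is the one from the question, and the local tag on the instance may differ):
docker images --digests
docker inspect --format '{{.RepoDigests}}' gcr.io/cloud-tagging-10302018/gtm-cloud-image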
I'm trying to follow the Docker Get Started guide. Currently I'm at part 4. Everything up until the point
docker stack deploy -c docker-compose.yml getstartedlab
worked well. However, after trying to deploy the services, when I run docker stack ps getstartedlab, I see that the swarm manager keeps trying to restart the containers, since every time they get the error "No such image: username/get-st…" and have their state as "Rejected 6 seconds ago" etc.
I tried to search for solutions a bit, but surprisingly it seems that nobody has encountered this error before. The issue here and a similar section in the Get Started guide talk about situations where one wants to pull from a private registry. However, throughout the tutorial I've been working with the default public registry. All previous steps (e.g. launching the swarm locally, without using virtualbox) worked fine.
Versions:
Docker version 18.02.0-ce, build fc4de447b5
Virtualbox 5.2.8 r120774
System Kernel: 4.14.25-1-MANJARO
Any idea what might have been the problem?
Surprisingly, passing the --with-registry-auth flag worked even though my repo is on Docker Hub. I'm not sure what the problem was, but perhaps the claim that one only needs this flag when using a private registry is a bit inaccurate.
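For anyone else hitting this, the flag goes on the deploy command itself; with the tutorial's stack name that would be:
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab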
This error occurs when trying to push an image to the public repository on Docker Hub. There have been no issues with other registries I have tried.
I have looked at numerous sites and blogs, including Stack Overflow, and there is still no clear answer.
You can try to replicate this issue as follows.
As shown in the screenshot above, I have an image aspc-mvc-app on local docker host. As shown, it has 3 tags - 1.0.5, 1.0.5.latest and latest.
Assume that we are trying to push using the account name janedoe at Docker Hub.
Per documentation on Docker.io and numerous other sites, there are 3 steps to pushing.
(1) Login
docker login "index.docker.io" -u janedoe -p <password>
--> I get Login Succeeded which is good!
(2) Add one or more tags
Of the 3 tags, let's just tag the latest.
docker tag janedoe/aspc-mvc-app:latest janedoe/aspc-mvc-app
--> The prompt returns with no error. So far so good.
(3) Push
docker push janedoe/aspc-mvc-app
--> This is where the error occurs.
As shown on the screenshot below, initial checks seem to occur fine until you get the error denied: requested access to the resource is denied
At step (2), I have tried numerous other formats including the following.
docker tag janedoe/aspc-mvc-app:latest janedoe/aspc-mvc-app:latest
docker tag janedoe/aspc-mvc-app janedoe/aspc-mvc-app:latest
docker tag aspc-mvc-app:latest janedoe/aspc-mvc-app
docker tag aspc-mvc-app janedoe/aspc-mvc-app:latest
docker tag 306a8fd79d88 janedoe/aspc-mvc-app
docker tag 306a8fd79d88 janedoe/aspc-mvc-app:latest
All fail with the same error.
As a comparison, with the same exact image, I had no problem pushing to Azure Container Registry.
Since Docker Hub is so popular, can anyone shed light on what the mystery is, or if there is a detailed documentation anywhere?
Updated 5/9/2017
I am fairly up-to-date on docker cli and server versions. Right now, my cli is 17.05.0-ce-rc1 and server is 17.04.0-ce as shown below.
The solution is simply to change the way of logging in at step (1).
docker login -u janedoe -p <password>
Everything else can stay the way described above. The image was successfully pushed to Docker Hub!
First, log in by typing sudo docker login in the terminal, and enter your username and password.
Visit your docker account and create a new repository. In my case I created a repository crabigator1900/dockerhub
Say you have a docker image with repository name:crabigator/django and tag:latest.
In that case you will need to tag this image with a label of your choice. I decided to tag it with the label firstimagepush. You tag the image by typing the command
sudo docker tag crabigator/django:latest crabigator1900/dockerhub:firstimagepush
Finally push the image to your repo using the command
sudo docker push crabigator1900/dockerhub:firstimagepush
That's all there is to it.
I too had the same issue, but after trying some combinations this worked.
Whenever you push, the push refers to docker.io/ followed by the repository path.
In my case my username is rushmith and I created a sample repository called docker under rushmith.
My link is : "hub.docker.com/r/rushmith/docker/"
Then I tagged the image that I want to push as rushmith/docker, and it worked successfully.
$ docker login -u rushmith
(Enter the password when prompted, then run:)
$ docker push rushmith/docker:latest
Output:
The push refers to a repository [docker.io/rushmith/docker]
7fbb0e1e64cb: Pushed
33f1a94ed7fc: Pushed
b27287a6dbce: Pushed
47c2386f248c: Pushed
2be95f0d8a0c: Pushed
2df9b8def18a: Pushed
latest: digest: sha256:4d749d86b4a2d9304a50df474f6236140dc2d169b9aabc354cdbc6ac107390f2 size: 1569
I hope this late solution might help someone.
The reason for this error message is that you haven't named your image properly.
Let's say your account name on docker.io is your-name; then your new repo name is going to be your-name/your-new-image-name.
In order to push your image, you first have to tag (name) your local image as:
docker tag local-image[:tag-name] your-name/your-new-image-name[:tag-name]
The parts in brackets are optional. You may want to check the result with docker image ls. Then push your image to your docker repo:
docker push your-name/your-new-image-name[:tag-name]
Done! Your image has been pushed to your Docker repo.
You can follow the following steps:
Step 1: docker login -u <username> -p <password>
A message with "Login Succeeded" will appear, confirming your successful login.
Step 2:
Now, in order to push the image, make sure the tag you are using includes your username.
e.g.: suppose your profile link is "hub.docker.com/u/xyz/".
Then tag the image as xyz/docker:latest and push it with docker push xyz/docker:latest.
If the image already has a different tag, change it using the command
docker tag <old tag> <new tag>
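For example, with the hypothetical xyz account above and a local image called myimage, that would be:
docker tag myimage:latest xyz/docker:latest
docker push xyz/docker:latest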
Hope this helps.
After an hour of struggling with the different approaches mentioned above, I reinstalled the newest version of the Docker Desktop app on my MacBook Pro, and that solved it.
The newest version is 20.10.2; the old version was 17.x, which was installed 5 years ago.
First you need to ensure you have logged in to your account.
You also need a repository to push to. The commands below tag the local image with your account name and push it:
docker tag local-image:tagname YOUR-ACCOUNT-NAME/tagname
docker push YOUR-ACCOUNT-NAME/tagname
Create the repository from the website first.
It's possible that you don't have permission to create a repository.
docker push does not create the repo, so if it is not present, the push is denied with an access error.
This worked for me.
> docker login -u janedoe
Password:
Login Succeeded
> docker tag myapp:0.0.1 janedoe/myapp:0.0.1
> docker push janedoe/myapp:0.0.1
The push refers to repository [docker.io/janedoe/myapp]
b763be657a2c: Pushed
e534dae385a8: Pushed
5af3d5d57035: Pushed
0e44828b51e2: Pushed
fdd771f27095: Pushed
ef9a7b8862f4: Pushed
a1f2f42922b1: Pushed
4762552ad7d8: Pushed
0.0.1: digest: sha256:0069ee2c39b422e64f0493d2b2e9cbe7736a size: 2154
In my case, I was facing this issue even after logging in to the Docker registry successfully.
So, I tried running the docker push as sudo and it worked.
Make sure you follow these steps:
Run docker login
After logging in successfully, run the docker push command
If the push failed, run this: sudo docker push repoName:tagName
If you're using 2FA and run
docker login -u <your_docker_user_name>
you will get Login successful but you won't be able to push.
This is because 2FA requires a one-time password to log in to your account.
To be able to push with 2FA enabled you need to use an access token. To generate one, go to Account settings/Security on the Docker Hub website and click New Access Token. For Access Permissions, preferably choose Read & Write; this is the minimum level needed to push. Only generate a Read, Write, Delete token if you really need it!
You'll be prompted with instructions on what to do next. Just to keep the answer full, you'll have to run
docker login -u <your_docker_username>
and when prompted for Password paste your Personal Access Token.
IMPORTANT: save your Personal Access Token in a password manager; never share it with anyone, never push it to GitHub, and never add it to your source code. NEVER! Please.
Now, when you run docker push <your_docker_username>/<your_docker_repo_name>:<tag_of_your_image> you should be able to push the image to the Docker Hub.
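As a side note, to keep the token out of your shell history you can pipe it in with --password-stdin; a sketch, where DOCKER_PAT is a hypothetical environment variable holding your access token:
echo "$DOCKER_PAT" | docker login -u <your_docker_username> --password-stdin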
I had the same problem, and it was solved by running the push command with sudo. I think it is only a privilege problem.
sudo docker push janedoe/aspc-mvc-app
I have a docker registry from which Mesos pulls containers. My problem is that when I destroy the app from the Marathon UI and then call the Marathon REST API again to deploy the same version of the app, Mesos does not pull the image from the master Docker registry; it pulls the image from some local registry or cache. I noticed this because Mesos completes the task in seconds, whereas if I change the version it takes a good amount of time to deploy.
Please let me know if anyone has a solution for this (or questions about the setup); I have read all the documentation and didn't find one.
Thanks
Try setting the forcePullImage flag to true, as mentioned here. Force pull instructs the docker binary to pull the image from the registry even if it is already downloaded on the slave. Please refer to the corresponding documentation for how the docker pull command works.
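For reference, a minimal sketch of where the flag lives in a Marathon app definition (the app id, image, and registry host are placeholders):
{
  "id": "/my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my-registry:5000/my-app:1.0",
      "forcePullImage": true
    }
  }
}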
I'm trying to push my docker container to the Google Container Registry, using this tutorial, but when I run
gcloud docker push b.gcr.io/my-bucket/image-name
I get the error :
The push refers to a repository [b.gcr.io/my-bucket/my-image] (len: 1)
Sending image list
Error: Status 403 trying to push repository my-bucket/my-image: "Access denied."
I couldn't find any further explanation (no -D, --debug, --verbose arguments were recognized); gcloud auth list and docker info tell me I'm connected to both services.
Is there anything I'm missing?
You need to make sure the VM instance has sufficient access rights. There are two ways to manage this access:
Option 1
Under Identity and API access, select Allow full access to all Cloud APIs.
Option 2 (recommended)
Under Identity and API access, select Set access for each API and then choose Read Write for Storage.
Note that you can also change these settings after you have created the instance. To do this, you'll first need to stop the instance, then edit its configuration as described above.
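If you prefer the command line, a sketch under the assumption of a reasonably recent gcloud (the instance and zone names are placeholders; the instance must be stopped first):
gcloud compute instances stop my-vm --zone us-central1-a
gcloud compute instances set-service-account my-vm --zone us-central1-a --scopes storage-rw
gcloud compute instances start my-vm --zone us-central1-a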
Use gsutil to check the ACL to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You'll need to check which group the account you are using is in ('owners', 'editors', 'viewers', etc.).
EDIT: I have experienced a very similar problem to this myself recently and, as @lampis mentions in his post, it's because the correct permission scopes were not set when I created the VM I was trying to push the image from. Unfortunately there's currently no way of changing the scopes once a VM has been created, so you have to delete the VM (making sure the disks are set to auto-delete!) and recreate it with the correct scopes ('compute-rw', 'storage-rw' seems sufficient). It doesn't take long though ;-).
See the --scopes section here: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
I am seeing this, but on an intermittent basis, e.g. I may get the error denied: Permission denied for "latest" from request "/v2/....", but when trying again it will work.
Is anyone else experiencing this?
For me, I had forgotten to prepend gcloud to the line (and I was wondering how docker would authenticate):
$ gcloud docker push <image>
In your terminal, run the code below
$ sudo docker login -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://[HOSTNAME]
Where
[HOSTNAME] is your container registry location (either gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io). Check your tagged images to be sure by running $ sudo docker images.
If this doesn't fix it, try reviewing the VM's access scopes.
If you are using Docker 1.7.0, there was a breaking change to how they handle authentication, which affects users who are using a mix of gcloud docker and docker login.
Be sure you are using the latest version of gcloud via: gcloud components update.
So far this seems to affect gcloud docker, docker-compose and other tools that were reading/writing the Docker auth file.
Hopefully this helps.
Same problem here; the troubleshooting section at https://cloud.google.com/tools/container-registry/#access_denied wasn't very helpful. I have Docker and gcloud fully updated. I don't know what else to do.
BTW, I'm trying to push to "gcr.io".
Fixed. I was using a Compute Engine VM as my development machine, and it looks like I didn't give it enough rights for Storage.
I had the same access denied problem, and I resolved it by creating a new image using tag:
docker tag IMAGE_WITH_ACCESS_DENIED gcr.io/my-project/my-new-image:test
After that I could push it to the Container Registry:
gcloud docker -- push gcr.io/my-project/my-new-image:test
Today I also got this error in Jenkins running on Google Kubernetes Engine when pushing a docker container. The cause was a node pool version upgrade from 1.9.6-gke.1 to 1.9.7-gke.0 in GCP that I had done earlier. It worked again after the downgrade.
You need to log in to gcloud from the machine you are on:
gcloud auth login