While following Heroku's docs for how to push a docker image to their registry, I keep running into this error:
> docker push registry.heroku.com/<MY-APP>/web
Using default tag: latest
The push refers to repository [registry.heroku.com/<MY-APP>/web]
e0d052f1dc62: Preparing
41ec0e96eb83: Preparing
d081ada49467: Waiting
73c3e7ef7bc6: Waiting
unauthorized: authentication required
I continue to get a "Login Succeeded" whenever I run docker login, so I'm not sure what the issue is.
I tried to debug using the Docker daemon logs, but those weren't helpful.
Turns out I was bitten by what I'd consider a bug in the Heroku registry. It stems from a debate about how to properly deny a user who is logged in but tries to access a resource that either doesn't exist or isn't theirs, so that sensitive info - like the existence of a resource - isn't exposed (check this summary if you're interested).
TL;DR - Heroku should be sending a 404 but sends a 401 instead. Go create the app via the UI and then try again.
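For the record, the same fix can be done from the CLI instead of the UI - a minimal sketch, assuming the Heroku CLI is installed and <MY-APP> is the app name you intend to push to:
heroku login
heroku create <MY-APP>
heroku container:login
docker push registry.heroku.com/<MY-APP>/web
Once the app actually exists, the registry stops answering the push with the misleading 401.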
Related
I'm using Gitea (on Kubernetes, behind an Ingress) as a Docker image registry. On my network I have gitea.avril aliased to the IP where it's running. I recently found that my Kubernetes cluster was failing to pull images:
Failed to pull image "gitea.avril/scubbo/<image_name>:<tag>": rpc error: code = Unknown desc = failed to pull and unpack image "gitea.avril/scubbo/<image_name>:<tag>": failed to resolve reference "gitea.avril/scubbo/<image_name>:<tag>": failed to authorize: failed to fetch anonymous token: unexpected status: 530
While trying to debug this, I found that I am unable to login to the registry, even though curling with the same credentials succeeds:
$ curl -k -u "scubbo:$(cat /tmp/gitea-password)" https://gitea.avril/v2/_catalog
{"repositories":[...populated list...]}
# Tell docker login to treat `gitea.avril` as insecure, since certificate is provided by Kubernetes
$ cat /etc/docker/daemon.json
{
"insecure-registries": ["gitea.avril"]
}
$ docker login -u scubbo -p $(cat /tmp/gitea-password) https://gitea.avril
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://gitea.avril/v2/": received unexpected HTTP status: 530
The first request shows up as a 200 OK in the Gitea logs, the second as a 401 Unauthorized.
I get a similar error when I kubectl exec onto the Gitea container itself, install Docker, and try to docker login localhost:3000 - after an error indicating that the server gave an HTTP response to an HTTPS client, it falls back to the http protocol and similarly reports a 530.
I've tried restarting Gitea with GITEA__log__LEVEL=Debug, but that didn't result in any extra logging. I've also tried creating a fresh user (in case I have some weirdness cached somewhere) and using that - same behaviour.
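For reference, this is roughly how I bumped the log level on the Kubernetes deployment (the deployment name gitea is specific to my setup; adjust to yours):
$ kubectl set env deployment/gitea GITEA__log__LEVEL=Trace
$ kubectl rollout status deployment/gitea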
EDIT: after increasing log level to Trace, I noticed that successful attempts to curl result in the following lines:
...rvices/auth/basic.go:67:Verify() [T] [638d16c4] Basic Authorization: Attempting login for: scubbo
...rvices/auth/basic.go:112:Verify() [T] [638d16c4] Basic Authorization: Attempting SignIn for scubbo
...rvices/auth/basic.go:125:Verify() [T] [638d16c4] Basic Authorization: Logged in user 1:scubbo
whereas attempts to docker login result in:
...es/container/auth.go:27:Verify() [T] [638d16d4] ParseAuthorizationToken: no token
This is the case even when doing docker login localhost:3000 from the Gitea container itself (that is - this is not due to some authentication getting dropped by the Kubernetes Ingress).
I'm not sure what could be causing this - I'll start up a fresh Gitea registry to compare.
EDIT: in this GitHub issue, the Gitea team pointed out that standard Docker authentication involves creating a Bearer token via a URL that references the ROOT_URL, which explains this issue.
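To make that concrete, here's a sketch of the token handshake - the header and URL values below are illustrative rather than copied from my instance. The registry answers the first unauthenticated request with a WWW-Authenticate header whose realm is built from ROOT_URL, so the client's follow-up token request goes to the primary domain rather than to whichever address it originally dialled:
$ curl -ki https://gitea.avril/v2/
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="https://gitea.scubbo.org/v2/token",service="container_registry"
# docker login then fetches its Bearer token from the realm URL - i.e. from ROOT_URL:
$ curl -k -u "scubbo:$(cat /tmp/gitea-password)" "https://gitea.scubbo.org/v2/token"
{"token":"..."}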
Text below preserved for posterity:
...Huh. I have a fix, and I think it indicates some incorrect (or, at least, unexpected) behaviour; but in fairness it only comes about because I'm doing some pretty unexpected things as well...
TL;DR: attempting to docker login to Gitea via an alternative domain name can fail if the primary domain name is unavailable - apparently because, while handling the request, Gitea itself makes a call to ROOT_URL rather than to localhost.
Background
Gitea has a configuration variable called ROOT_URL. This is, among other things, used to generate the copiable "HTTPS" links from repo pages. This is presumed to be the "main" URL on which users will access Gitea.
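For context, ROOT_URL is just a line in app.ini - the path below is the default inside the Gitea container image, and the value is my public name (at the time this problem occurred, anyway):
$ grep ROOT_URL /data/gitea/conf/app.ini
ROOT_URL = https://gitea.scubbo.org/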
I use Cloudflared Tunnels to make some of my Kubernetes services (including Gitea) available externally (on <foo>.scubbo.org addresses) without opening ports to the outside world. Since Cloudflared tunnels do not automatically update DNS records when a new service is added, I have written a small tool[0] which can be run as an initContainer "before" restarting the Cloudflared tunnel, to refresh DNS[1].
Cold-start problem
However, now there is a cold-start problem:
(Unless I temporarily disable this initContainer) I can't start Cloudflared tunnels if Gitea is unavailable (because it's the source for the initContainer's image)
Gitea('s public address) will be unavailable until Cloudflared tunnels start up.
To get around this cold-start problem, in the Cloudflared initContainers definition I reference the image by a Kubernetes Ingress name, gitea.avril (which is DNS-aliased by my router), rather than by the public (Cloudflared tunnel) name gitea.scubbo.org - a sketch of that image reference follows the list below. The cold-start startup sequence then becomes:
Cloudflared tries to start up, fails to find a registry at gitea.avril, continues to attempt
Gitea (Pod and Ingress) start up
Cloudflared detects that gitea.avril is now responding, pulls the Cloudflared initContainer image, and successfully deploys
gitea.scubbo.org is now available (via Cloudflared)
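To be concrete about what "reference the image by a Kubernetes Ingress name" means, the initContainer's image field points at the internal name - the deployment and image names below are illustrative rather than copied from my manifests:
$ kubectl get deployment cloudflared -o jsonpath='{.spec.template.spec.initContainers[0].image}'
gitea.avril/scubbo/cloudflaredtunneldns:latest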
So far, so good. Except that testing now indicates[2] that trying to docker login (or docker pull, or presumably many other docker commands) against a Gitea instance results in a call to the ROOT_URL domain - which, if Cloudflared isn't up yet, results in an error[3].
So what?
My particular usage of this is clearly an edge case, and I could easily get around this in a number of ways (including moving my "Cloudflared tunnel startup" to a separately-initialized, only-privately-available registry). However, what this reduces to is that "docker API calls to a Gitea instance will fail if the ROOT_URL for the instance is unavailable", which seems like unexpected behaviour to me - if the API call can get through to the Gitea service in the first place, it should be able to succeed in calling itself?
However, I totally recognize that the complexity of fixing this (going through and replacing $ROOT_URL with localhost:$PORT throughout Gitea) might not be worth the value. I'll open an issue with the Gitea team, but I'd be perfectly content with a WILLNOTFIX.
Footnotes
[0]: Note - depending on when you follow that link, you might see a red warning banner indicating "Your ROOT_URL in app.ini is https://gitea.avril/ but you are visiting https://gitea.scubbo.org/scubbo/cloudflaredtunneldns". That's because of this very issue!
[1]: Note from the linked issue that the Cloudflared team indicate that this is unexpected usage - "We don't expect the origins to be dynamically added or removed services behind cloudflared".
[2]: I think this is new behaviour, as I'm reasonably certain that I've done a successful "cold start" before. However, I wouldn't swear to it.
[3]: After I've , the error is instead error parsing HTTP 404 response body: unexpected end of JSON input: "" rather than the 530-related errors I got before. This is probably a quirk of Cloudflared's caching or DNS behaviour. I'm working on a minimal reproducing example that circumvents Cloudflared.
Recently got a new Mac, and now I am struggling to push Docker images to GCR - I keep receiving the error:
unauthorized: You don't have the needed permissions to perform this operation, and you
may have invalid credentials. To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication
Commands that led to this error:
docker build -t our-node-container ./
docker tag our-node-container gcr.io/our-gcp-project/our-grc-images-directory
docker push gcr.io/our-gcp-project/our-grc-images-directory
Confirming that:
I have a GCP account with billing, have enabled the Container Registry API and installed Cloud SDK, and have Docker installed.
I have authenticated with gcloud auth login, which opened a window where I selected my email address associated with the GCP account. It led to this page.
and afterwards, I ran gcloud config set project our-gcp-project. I have closed my terminal window and attempted to docker push again, but continue to get this unauthorized error. How else can I troubleshoot this in an effort to solve the problem?
As is standard, we solved the issue just moments after posting the question. Rather than deleting the question, I'll post an answer in case anyone runs into the same issue.
We simply missed the last step, which was to run gcloud auth configure-docker to update the Docker config file at ~/.docker/config.json.
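For completeness, a minimal sketch of that last step and the entry it writes (the rest of config.json is trimmed; the credHelpers mapping for gcr.io is the part that matters):
$ gcloud auth configure-docker
$ cat ~/.docker/config.json
{
  "credHelpers": {
    "gcr.io": "gcloud"
  }
}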
I'm trying to push to a Harbor registry 2.2.
It works with SSL, and the storage is on a locally mounted NFS share.
The error I get is: unauthorized to access repository: test/flask, action: push: unauthorized to access repository: test/flask, action push.
I tried to push with the admin user to the project that I created.
I tried changing the permissions on the NFS share, but that didn't help.
The registry runs on Compose, not on Kubernetes.
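For reference, the failing sequence looks roughly like this - registry.example.com stands in for my actual Harbor hostname, the local flask:latest tag is illustrative, and test/flask is the project/repository from the error above:
docker login registry.example.com
docker tag flask:latest registry.example.com/test/flask:latest
docker push registry.example.com/test/flask:latest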
I had the same inexplicable issue; it just started happening one day after several months with no problems. Fixing it required me to explicitly log out of the Harbor registry and then log back in:
docker logout registry.example.com
docker login registry.example.com
After this sequence, the "unauthorized to access" error went away, and pushes began working again.
I had a similar problem, and the solution was docker login registry.example.com.
I had the same issue. In my case, the problem was that the username and password used in the GitLab pipeline were protected variables, meaning they are only shared with pipelines running from a protected branch such as master. Since I was testing my changes in a feature branch, all I had to do was go to the variable settings and uncheck the protected flag for the Harbor user and password so they would be available to pipelines running from feature branches.
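To illustrate, the pipeline step itself is just a scripted login and push using those CI variables - HARBOR_USER and HARBOR_PASSWORD are my own variable names, not GitLab built-ins, and registry.example.com again stands in for the Harbor host:
docker login -u "$HARBOR_USER" -p "$HARBOR_PASSWORD" registry.example.com
docker push registry.example.com/test/flask:latest
With the variables marked protected they are simply not exposed to feature-branch pipelines, which is why the push came back as unauthorized.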
I'm having problems pushing images to my Docker repo in Artifactory. Pulling the images works as expected, but pushing them gives me an error. I can see the progress bar pushing the image, but somehow it times out with an "I/O Timeout".
My setup consists of an Artifactory instance running in my k8s cluster, with an F5 in front of it for SSL offloading. I followed these instructions for using the repository path method.
In the HTTP settings I've tried using the nginx/HTTP reverse proxy as well as just the embedded Tomcat. I get either the "I/O timeout" or a "503 Service Unavailable" (when using the embedded Tomcat).
I know that network-wise everything is OK, since I can push other items (files, npm packages, etc.). It's a bit frustrating that I'm able to pull but not push. Has anyone seen this before?
Run the docker push command again with the Artifactory UI open (Admin -> System Logs -> Request Log).
You should see a few requests coming in with '/api/docker' in the path. What are the return code and full path shown in the request log?
Pushing to the Docker registry requires docker login, so you may need to get credentials for the registry before you can push. For example, if you have your password saved in a file:
cat /path/to/password-file | docker login --username=yourhubusername --password-stdin
And then try the push again.
I am trying to push an image to docker hub.
I created an account "kaffeekaethe" and created a public repository "reservierung".
I built the image "kaffeekaethe/reservierung" (no typos, I double-checked that).
Afterwards I logged into Docker Hub and tried to push the image, but I always get the error "access to the requested resource is not authorized". Everyone else who had this problem wasn't logged in prior to running the push command, and logging in was the only solution I've found. Is there any other reason for this error?
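For reference, a minimal sketch of the sequence described above (the tag defaults to latest, and the Dockerfile is assumed to be in the current directory):
docker build -t kaffeekaethe/reservierung .
docker login
docker push kaffeekaethe/reservierung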