Issues using skopeo copy to GCR, unable to authenticate on Google Cloud - docker

I'm trying to pull and push images in a GitLab pipeline while avoiding the docker-in-docker approach, so I'm trying to use skopeo for that.
But right now I'm having issues authenticating skopeo against GCR: we use key-file authentication for service accounts, which doesn't seem to be supported by skopeo (at least I couldn't make it work), and we don't want to use a username and password for that.
The error message is this one:
unable to retrieve auth token: invalid username/password: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
What are the possibilities I can explore to make authentication work?

I just found a way to solve the authentication issue: use the --dest-registry-token flag like this:
skopeo copy --dest-registry-token "$(gcloud auth print-access-token)" docker://nginx:1.23.1 docker://us.gcr.io/project/nginx:1.23.1
And make sure you activate the service account before doing that:
gcloud auth activate-service-account --key-file $SOME_FILE
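Putting it together, a minimal sketch of the pipeline script (where $GCP_SA_KEY_FILE is a placeholder of mine for wherever your CI stores the key file):
# Activate the service account from the key file first
gcloud auth activate-service-account --key-file "$GCP_SA_KEY_FILE"
# Then copy, passing a short-lived access token for the destination registry
skopeo copy \
  --dest-registry-token "$(gcloud auth print-access-token)" \
  docker://nginx:1.23.1 \
  docker://us.gcr.io/project/nginx:1.23.1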

Related

Pushing docker image to gitlab fails: unauthorized

I'm trying to push an image to my GitLab registry, which I previously built successfully.
docker login registry.gitlab.com
I give the credentials and it returns "Login Succeeded".
Then, as always, I do a
docker push registry.gitlab.com/username/registry/base:latest
And it ends with
unauthorized: authentication required
I already tried to
docker logout registry.gitlab.com
and login again.
The process can be found here; it's pretty simple:
link to github/gitlabhq
I'm used to doing it like that; it's the first time I've faced this issue, and I don't understand it.
Any help appreciated!
Ensure that your account has read/write access to the registry you are trying to access. You might need to create a new access token, as there is a difference between API/access tokens and your "normal" user password. Use this access token as described in the documentation (https://github.com/gitlabhq/gitlabhq/blob/master/doc/user/packages/container_registry/index.md#authenticate-with-the-container-registry):
docker login registry.example.com -u <username> -p <token>
The token can be created by going to Edit Profile -> Access Tokens -> Select Scopes and ticking 'Read registry' & 'Write registry'.
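To keep the token out of your shell history, you can also pipe it in on stdin (assuming it is exported in an environment variable, here called GITLAB_TOKEN as a placeholder):
# Read the password from stdin instead of the command line
echo "$GITLAB_TOKEN" | docker login registry.gitlab.com -u <username> --password-stdin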
GitLab support told me this is due to a built-in limit on the token duration.
You cannot customize this duration in SaaS mode.
So pushing a large image results in an automatic logout.

Docker login: access denied you must use a personal access token

Trying to log in from Docker to GitLab using the command:
sudo docker login registry.gitlab.com?private_token=XXX
But I still get the following error message:
Error response from daemon: Get https://registry.gitlab.com/v2/: unauthorized: HTTP Basic: Access denied\nYou must use a personal access token with 'api' scope for Git over HTTP.\nYou can generate one at https://gitlab.com/-/profile/personal_access_tokens
The token has the right access, I double-checked... I am rather new to Docker, any hint/help? Thanks!
The correct command line (that works in my case at least) was:
docker login registry.example.com -u <your_username> -p <your_personal_access_token>
If you are using two-factor authentication, then personal access tokens are required.
More information is on the following webpage:
https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html
According to https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html, your username actually gets ignored:
Though required, GitLab usernames are ignored when authenticating with a personal access token. There is an issue for tracking to make GitLab use the username.
So, if you're not able to connect, it might not be because of the username.

GCP docker authentication: How is using gcloud more secure than just using a JSON keyfile?

Setting up authentication for Docker | Artifact Registry Documentation suggests that gcloud is more secure than using a JSON file with credentials. I disagree. In fact, I'll argue the exact opposite is true. What am I misunderstanding?
Setting up authentication for Docker | Artifact Registry Documentation says:
gcloud as credential helper (Recommended)
Configure your Artifact Registry credentials for use with Docker directly in gcloud. Use this method when possible for secure, short-lived access to your project resources. This option only supports Docker versions 18.03 or above.
followed by:
JSON key file
A user-managed key-pair that you can use as a credential for a service account. Because the credential is long-lived, it is the least secure option of all the available authentication methods
The JSON key file contains a private key and other goodies giving a hacker long-lived access. The keys to the kingdom. But only to the Artifact Registry in this instance, because the service account that the JSON file is for has only those specific rights.
Now gcloud has two auth options:
gcloud auth activate-service-account ACCOUNT --key-file=KEYFILE
gcloud auth login
Let's start with gcloud and a service account: here it stores KEYFILE unencrypted in ~/.config/gcloud/credentials.db. Using the JSON file directly boils down to docker login -u _json_key --password-stdin https://some.server < KEYFILE, which stores the KEYFILE contents in ~/.docker/config.json. So using gcloud with a service account or just using the JSON file directly should be equivalent, security-wise. They both store the same KEYFILE unencrypted in a file.
gcloud auth login requires a login in the browser, where I consent to giving gcloud access to my user account in its entirety. It is not limited to the Artifact Registry like the service account is. Looking with sqlite3 ~/.config/gcloud/credentials.db .dump, I can see that it stores an access_token but also a refresh_token. If a hacker has access to ~/.config/gcloud/credentials.db with the access and refresh tokens, doesn't he own the system just as much as if he had access to the JSON file? Actually, this is worse, because my user account is not limited to just accessing the Artifact Registry - now the attacker has access to everything my user has access to.
So all in all: gcloud auth login is at best security-wise equivalent to using the JSON file. But because the access is not limited to the Artifact Registry, it is in fact worse.
Do you disagree?
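For reference, the short-lived flow the docs recommend ultimately boils down to handing Docker a temporary OAuth access token instead of the key itself, something like this (gcr.io standing in for your registry host):
# Log in with a short-lived token; only that expiring token ends up in config.json
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io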

Can login into docker-registry but not push image (github)

So I want to use the docker registry from GitHub.
I do the following:
docker login docker.pkg.github.com --username username
docker build . --tag docker.pkg.github.com/user-name/repo/IMAGENAME:snapshot
docker push docker.pkg.github.com/user-name/repo/IMAGENAME:snapshot
Note that the repository is private and not mine, but I have write access to it.
When I go to the packages tab I can also see the instructions on how to get started, and I follow them (kind of; I tag the Docker image in one go).
But when I run the 3 commands at the top I get the following output (the push command fails):
unauthorized: Your token has not been granted the required scopes to execute this query. The 'id' field requires one of the following scopes: ['read:packages'], but your token has only been granted the: [''] scopes. Please modify your token's scopes at: https://github.com/settings/tokens.
When I visit the referenced site, there is nothing there, only unrelated tokens.
Any ideas what I could try or what may cause this...?
You need to use an API token to log in, as shown in the docs. Logging in via password is not possible.
https://help.github.com/en/packages/using-github-packages-with-your-projects-ecosystem/configuring-docker-for-use-with-github-packages
You must use a personal access token.
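Concretely, the login from the docs looks something like this (assuming the token is exported as GITHUB_TOKEN, a placeholder name, and has the read:packages and write:packages scopes):
# Log in with a personal access token read from stdin
echo "$GITHUB_TOKEN" | docker login docker.pkg.github.com -u USERNAME --password-stdin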

Permission issues while docker push

I'm trying to push my Docker image to the Google Container Registry, but I get an error saying I do not have the needed permissions to perform this operation.
I have already tried gcloud auth configure-docker, but it doesn't work for me.
I first build the image using:
docker build -t gcr.io/trynew/hello-world-image:v1 .
Then I'm trying to attach a tag and push it:
docker push gcr.io/trynew/hello-world-image:v1
This is my output:
The push refers to repository [gcr.io/trynew/hello-world-image]
e62774cdb1c2: Preparing
0f6265b750f3: Preparing
f82351274ce3: Preparing
31a16430afc8: Preparing
67298499a3ed: Preparing
62d5f39c8fe4: Waiting
9f8566ee5135: Waiting
unauthorized: You don't have the needed permissions to perform this
operation, and you may have invalid credentials.
To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication
Google Cloud has specific documentation on how to grant permissions for docker push; this is the first thing you should look at, I think: https://cloud.google.com/container-registry/docs/access-control
After checking that you have sufficient permissions you should proceed with authentication with something like:
gcloud auth configure-docker
See more here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
If you are running docker as root (i.e. with sudo docker), then make sure to configure the authentication as root. You can run for example:
sudo -s
gcloud auth login
gcloud auth configure-docker
...that will create (or update) a file under /root/.docker/config.json.
(Are there any security implications of gcloud auth login as root? Let me know in the comments.)
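As a quick sanity check (my addition, not part of the original answer), you can confirm the credential helper actually landed in root's config:
# gcr.io should now appear under "credHelpers"
sudo cat /root/.docker/config.json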
In order to be able to push images to the private registry you need two things: API access scopes and authentication of your VM with the registry.
For the API access scopes (https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform), we can read in the official documentation:
For GKE:
By default, new Google Kubernetes Engine clusters are created with read-only permissions for Storage buckets. To set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the --scopes option.
For GCE:
By default, a Compute Engine VM has the read-only access scope configured for storage buckets. To push private Docker images, your instance must have read-write storage access scope configured as described in Access scopes.
So first, verify if your GKE cluster or GCE instance actually has the proper scopes set.
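One way to check a GCE instance's scopes is sketched below (INSTANCE and ZONE are placeholders; you want to see devstorage.read_write or cloud-platform rather than devstorage.read_only):
# List the access scopes attached to the VM's service account
gcloud compute instances describe INSTANCE --zone ZONE \
  --format="flattened(serviceAccounts[].scopes)"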
The next step is to authenticate to the registry:
a) If you are using a Linux-based image, you need to use gcloud auth configure-docker (https://cloud.google.com/container-registry/docs/advanced-authentication).
b) For Container-Optimized OS (COS), the command is docker-credential-gcr configure-docker (https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_google_container_registry)
Windows / PowerShell
I got this error on Windows when I was trying to run docker push from a normal PowerShell window after authenticating in the Google Cloud SDK shell that had opened when I installed the SDK.
The solution was simple:
Start a new PowerShell window to run docker push after running the gcloud auth configure-docker command.
Make sure you've enabled the Container Registry service too:
gcloud services enable containerregistry.googleapis.com
Also Google has a tendency to jump to a default account (maybe your personal gmail) which may or may not be the one you want (your business email). Make sure if you're opening any links in a browser that you're in the correct Google account.
I'm not exactly sure what's going on yet because I'm brand new to Docker, but something got refreshed when starting a new PowerShell instance.
As noted in https://stackoverflow.com/a/59799035/26283371, there appears to be a bug in the Linux version of the Cloud SDK where authentication fails using the standard authentication method (gcloud auth configure-docker). Instead, create a JSON key file per this, and that tends to work.
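The key-file login itself looks like this (KEYFILE.json standing in for your downloaded service-account key):
# _json_key is the literal username GCR expects for key-file logins
docker login -u _json_key --password-stdin https://gcr.io < KEYFILE.json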
I still can't get the gcloud auth configure-docker helper to work. What did work was authenticating with an access token, like so:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io. (Be sure to include https://, otherwise it won't work).
You can view options for print-access-token here.
First, make sure you have covered all the points listed in the following official documentation:
https://cloud.google.com/container-registry/docs/advanced-authentication
This error occurs mostly due to a Docker config update, which you can check with cat ~/.docker/config.json
Now update the GCR credentials with the following command:
gcloud auth configure-docker
Just in case anyone else is banging their head against a wall: my PIA VPN caused this behavior.
"unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication"
Turning my VPN off makes it work fine; turning it back on breaks it again.
This is the only way that worked for me. I found it in a kubernetes/kompose GitHub issue.
Remove the credsStore key in ~/.docker/config.json
This will force Docker to write the auth into the JSON when you use docker login. You can't untick "Securely store Docker logins in macOS keychain" in Docker Desktop any more, and the current credsStore is no longer the macOS keychain; it's desktop.
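If you have jq installed (an assumption on my part; editing the file by hand works just as well), removing the key is a one-liner:
# Strip credsStore so docker login writes auth straight into the JSON
jq 'del(.credsStore)' ~/.docker/config.json > /tmp/config.json && mv /tmp/config.json ~/.docker/config.json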
Auth with gcloud (just to be explicit):
gcloud auth login
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io
You should see this:
WARNING! Your password will be stored unencrypted in /Users/andrew/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Source: https://github.com/kubernetes/kompose/issues/1043#issuecomment-609019141
The fix is as follows: run gcloud auth login (the browser will open and allow you to authenticate), then run gcloud auth configure-docker and select Y, then redo the push. It should work like a charm.
I also had the same issue in a Linux environment. I just set Docker up to run as a non-root user (https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user), and it works.
In my case, the DOCKER_CONFIG env variable was defined with an invalid value (not pointing to a Docker config JSON).
I had the same issue, but for me the problem was with different users on my Linux system. I had authenticated gcloud as my personal Linux user, but when pushing I was doing it as root. So I had to authenticate my root user with gcloud as well:
sudo gcloud init
This issue happens to me when I switch between service accounts pointing to different GCP projects. Even though the service account has permission to push, it says it does not have the permission. To resolve this, delete the config.json file in ~/.docker.
Once this is done, run the commands below and you should be able to push the image.
gcloud auth configure-docker
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, asia.gcr.io, etc.
