Somewhat of a GCR newbie question.
I have not been able to find any documentation on whether it is possible to push signed Docker images to GCR. So I attempted it, but it fails with the error below.
I first built a docker image, then tagged it to point to my project in GCR with "docker tag image-name:tag gcr.io/my-project/image-name:tag"
Then attempted signing using
"docker trust sign gcr.io/my-project/image-name:tag"
Error: error contacting notary server: denied: Token exchange failed for project 'gcr.io:my-project'. Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=gcr.io:my-project before performing this operation.
GCR API for my project is enabled and I have permissions to push to it.
Do I need to do something more in my GCP project to be able to push signed images, or is it just not supported?
If the latter, how does one (as an image consumer) verify the integrity of the image?
thanks,
J
This is currently not supported in Google Cloud Platform.
You can file a feature request for it here.
To verify an image's integrity, use image digests. These are cryptographic hashes associated with the image: you compare the digest of the image you pulled with the digest you are expecting. Command reference here
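For example, a minimal sketch using the placeholder image name from the question:

# Pull by tag; Docker prints the digest it resolved
docker pull gcr.io/my-project/image-name:tag

# List the digests recorded for local images
docker images --digests gcr.io/my-project/image-name

# Pull pinned to an exact digest, so you get exactly that content
docker pull gcr.io/my-project/image-name@sha256:<expected-digest>

Pulling by digest guarantees you receive exactly the content that hashes to that value, regardless of where the tag currently points.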
Google now implements the concept of Binary Authorization and "attestations" based on Kritis. The intention is for this to be used within your CI/CD pipeline to ensure images have been created and validated correctly.
Full docs are here, but the process basically consists of signing an image via a PKIX signature and then using the gcloud tool to create an attestation.
You then specify a Binary Authorization policy on your GKE cluster to enforce which attestations are required before an image is allowed to be used within the cluster.
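As a rough sketch (the attestor, key ring, and key names below are made up for illustration), the flow looks something like this:

# Sign the image digest with a KMS key and create the attestation in one step
gcloud container binauthz attestations sign-and-create \
    --artifact-url="gcr.io/my-project/image-name@sha256:<digest>" \
    --attestor="my-attestor" \
    --attestor-project="my-project" \
    --keyversion="projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key/cryptoKeyVersions/1"

# Import a policy that requires the attestation before GKE will admit the image
gcloud container binauthz policy import policy.yaml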
Related
We have a Docker Registry running that uses native basic authentication with nginx, so images can only be pushed to the Registry after authentication. Is it possible to get the user who pushed the image to the Registry?
It's not part of the registry API. You would need to check the logs of that registry and auth server. It's possible the user may self report who they are by setting a label on the image (or the legacy maintainer field), but I wouldn't depend on that for any security critical tasks.
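As an illustration of the self-reported label approach (the names here are hypothetical):

# Build with a label identifying the pusher -- purely advisory and trivially forgeable
docker build --label pushed-by="jane@example.com" -t registry.example.com/team/app:1.0 .

# Anyone who pulls the image can read the label back
docker inspect --format '{{ index .Config.Labels "pushed-by" }}' registry.example.com/team/app:1.0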
For more on the registry API, see: https://github.com/opencontainers/distribution-spec
Docker also has their API (which predates OCI) documented at: https://docs.docker.com/registry/spec/api/
This one is a real head-scratcher, because everything had worked fine for years until yesterday. I have a google cloud account and the billing is set up correctly. I have private images in my GCR registry which I can 'docker pull' and 'docker push' from my laptop (MacBook Pro with Big Sur 11.4) with no problems.
The problem I detail here started happening yesterday after I deleted a project in the google cloud console, then created it again from scratch with the same name. The previous project had no problem pulling GCR images, the new one couldn't pull the same images. I have now used the cloud console to create new, empty test projects with a variety of names, with new clusters using default GKE values. But this new problem persists with all of them.
When I use kubectl to create a deployment on GKE that uses any of the GCR images in the same project, I get ErrImagePull errors. When I 'describe' the pod that won't load the image, the error (with project id obscured) is:
Failed to pull image "gcr.io/test-xxxxxx/test:1.0.0": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/test-xxxxxx/test:1.0.0": failed to resolve reference "gcr.io/test-xxxxxx/test:1.0.0": unexpected status code [manifests 1.0.0]: 401 Unauthorized.
This happens when I use kubectl from my laptop (including after wiping out and creating a new .kube/config file with proper credentials), but happens exactly the same when I use the cloud console to set up a deployment by choosing 'Deploy to GKE' for the GCR image... no kubectl involved.
If I ssh into a node in any of these new clusters and try to 'docker pull' a GCR image (in the same project), I get a similar error:
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
My understanding from numerous articles is that no special authorization needs to be set up for GKE to pull GCR images from within the same project, and I've NEVER had this issue in the past.
I hope I'm not the only one on this deserted island. Thanks in advance for your help!
I tried implementing the setup and faced the same error on both the GKE cluster and the cluster's nodes. This was caused by access to the Cloud Storage API being "Disabled" on the cluster nodes, which can be verified from the node (VM instance) details under the "Cloud API access scopes" section.
We can rectify this by changing "Access scopes" to "Set access for each API" and modifying access to the specific API in the Node Pools -> default-pool -> Security section when creating the cluster. In this case we need at least "Read Only" access to the Storage API, so the nodes can reach the Cloud Storage bucket where the image is stored. See Changing the service account and access scopes for an instance for more information.
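For example, a sketch of doing the same from the command line when creating the cluster (the cluster name and zone are placeholders):

gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --scopes=https://www.googleapis.com/auth/devstorage.read_only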
I am trying to push a Docker image from Jenkins, which is configured on Compute Engine with the default service account. But the push fails with this error:
[Docker] ERROR: failed to push image gcr.io/project-id/sms-impl:work ERROR: Build step failed with exception com.github.dockerjava.api.exception.DockerClientException: Could not push image: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
What do I need to do?
To authenticate to Container Registry, use gcloud as a Docker credential helper. To do so, run the following command:
gcloud auth configure-docker
You only need to run this command once to authenticate to Container Registry. We strongly recommend that you use this method when possible; it provides secure, short-lived access to your project resources. Please follow the steps in link 1.
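After running it once, the push from the question should authenticate through the gcloud credential helper automatically:

# One-time setup, then push as usual
gcloud auth configure-docker
docker push gcr.io/project-id/sms-impl:work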
At the bottom of the page that was linked, you will see a further link to Using GCR with GCP; in particular, this section describes what you need to do.
To summarize, the service account needs permission to write to the storage bucket backing GCR. Since you mentioned you are using the default service account, the access scopes for that instance also need to be set. The default grants only 'read' unless you have specified all scopes.
A few ways to do this:
When you create the instance using gcloud, specify --scopes https://www.googleapis.com/auth/devstorage.read_write
In the console, select the scope specifically or select "all scopes", e.g.:
(... many lines of scopes omitted ...)
You can also add the scopes after the fact, if needed, by editing the instance while it is stopped (see the sketch below).
Note that the first push for a project may additionally require "admin" rights, in order to create the bucket.
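For instance (the VM name is hypothetical), the scopes can be set at creation time or changed later while the instance is stopped:

# At creation time
gcloud compute instances create jenkins-vm \
    --scopes=https://www.googleapis.com/auth/devstorage.read_write

# Or on an existing instance, which must be stopped first
gcloud compute instances stop jenkins-vm
gcloud compute instances set-service-account jenkins-vm \
    --scopes=https://www.googleapis.com/auth/devstorage.read_write
gcloud compute instances start jenkins-vm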
I'm having trouble uploading my docker image to GCP Container registry. I was following the instructions here.
As you can see in the screenshot below, I've:
Logged into my google cloud shell and built a docker image via a dockerfile
Tagged my image correctly (I think)
Tried to push the image using the correct command (I think)
However, I'm getting this error:
denied: Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=bentestproject-184220 before performing this operation.
When I follow that link, it takes me to the wrong project:
When I select my own project, I can see that "Google Container Registry API" is indeed enabled:
How do I upload my docker images?
It seems that you mistyped your project ID. Your project name is BensTestsProject, but its ID is bentestproject-184220.
I had the same issue and solved it. In my case the project name in the image tag was wrong: re-check that "bentestproject-184220" in your image tag is really your project ID.
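For example, with the project ID (not the display name) in the tag, the commands should look like this (the image name is a placeholder):

docker tag my-image gcr.io/bentestproject-184220/my-image
docker push gcr.io/bentestproject-184220/my-image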
I'm having trouble pushing to GitLab Container Registry.
I can login successfully using my username and a personal access token but when I try to push the image to the registry, I get the following error:
$ docker push registry.gitlab.com/[groupname]/dockerfiles/nodemon
The push refers to a repository
[registry.gitlab.com/[groupname]/dockerfiles/nodemon]
15d2ea6e1aeb: Preparing
2260f979a949: Preparing
f8e848bb8c20: Preparing
740a5345706a: Preparing
5bef08742407: Preparing
denied: requested access to the resource is denied
I assume the issue is not with authentication because when I run a docker login registry.gitlab.com, I get a Login Succeeded message.
Where is the problem?
How should I push my images to GitLab Container Registry?
I got it working by including the api scope in my personal access token.
The docs state the minimal scope needed is read_registry, but that presumably applies only to read access.
Reference: https://gitlab.com/gitlab-com/support-forum/issues/2370#note_44796408
In my case it was really dumb, maybe even a GitLab bug:
I renamed the GitLab project after the container registry had been created, so the container registry URL still used the old name...
The project name under GitLab had the typo corrected, but the registry link did not, which led to this error.
Had a similar issue; it was caused by the URL used for tagging and pushing the image.
It should be
docker push registry.gitlab.com/[account or group-name]/[reponame]/imagename
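For example, using the repository from the question:

docker tag nodemon registry.gitlab.com/[groupname]/dockerfiles/nodemon
docker push registry.gitlab.com/[groupname]/dockerfiles/nodemon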
It was previously a correct answer to say that the personal access token needs to include the api permission, and several answers on this page say exactly that.
Recently, GitLab appears to have improved the granularity of its permission system. So if you want to push container images to the GitLab Docker registry, you can create a token with only the read_registry and write_registry permissions. This is likely to be a lot safer than granting full permissions.
I have tested this successfully today.
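A sketch of the login and push with such a token (the username and token are placeholders):

echo "<personal-access-token>" | docker login registry.gitlab.com -u <gitlab-username> --password-stdin
docker push registry.gitlab.com/[groupname]/dockerfiles/nodemon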
Enable the personal access token by adding the api scope as per these guidelines. After creating the token and username, use those credentials to log into the Docker environment or to push.
Deploy tokens created under the CI/CD setup are not sufficient for pushing an image to a Docker registry.
I had the same issue.
In my case, the issue was that I had AutoDevOps enabled before, which seems to generate a deploy token automatically.
Deploy tokens are basically just API keys for deployment.
GitLab has special handling for the token named gitlab-deploy-token, which you can then access via the predefined variables $CI_DEPLOY_USER and $CI_DEPLOY_PASSWORD.
However, I did not double-check the default token: in my case it only had read_registry, though of course it also needs write_registry permission.
If you do this, then you can follow the official documentation.
Alternatively, you can apparently also switch to $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD, which are ephemeral, however.
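A minimal sketch of a CI job script using those predefined variables (which pair you use depends on the token setup described above):

# Inside a GitLab CI job: log in with the deploy-token variables, then build and push
docker login -u "$CI_DEPLOY_USER" -p "$CI_DEPLOY_PASSWORD" "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE:latest" .
docker push "$CI_REGISTRY_IMAGE:latest"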