docker push to Google Cloud (GCP) fails with "name unknown: Buckets"

Trying to push a local image to a Google Cloud project, which fails with the error below. Any help?
$ docker push gcr.io/myprojectID/myrstudio:latest
The push refers to repository [gcr.io/myprojectID/myrstudio]
07fc541c7837: Preparing
5f40edd3a036: Preparing
8243f7003c86: Preparing
55903d33bbd7: Preparing
6f15325cc380: Preparing
1e77dd81f9fa: Preparing
030309cad0ba: Preparing
1e77dd81f9fa: Waiting
030309cad0ba: Waiting
6f15325cc380: Layer already exists
1e77dd81f9fa: Layer already exists
030309cad0ba: Layer already exists
55903d33bbd7: Pushed
07fc541c7837: Pushed
5f40edd3a036: Pushed
8243f7003c86: Pushed
name unknown: Buckets(myprojectID,artifacts.myprojectID.appspot.com)
Looks like something was pushed, but then it fails at some point...
EDIT: running Windows 10 version 20H2 (OS build 19042.1288)

When you perform a docker push to Google Container Registry, your image is stored inside a bucket in your project:
Container Registry uses Cloud Storage buckets as the underlying storage for container images. You control access to your images by granting permissions to the bucket for a registry.
If this is the first time you're pushing an image, the account you're using must have the Storage Admin role. That's because the bucket that will store your images has to be created first.
Have a look at the documentation that describes all the steps to grant the necessary permissions.
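For example, a minimal sketch of granting that role with gcloud (the project ID is from the question; the email is a placeholder):
# Grant Storage Admin so the registry bucket can be created on first push
gcloud projects add-iam-policy-binding myprojectID \
    --member="user:someone@example.com" \
    --role="roles/storage.admin"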
If you're not familiar with GCP Cloud Storage, have a look at the guide describing how to use GCP's storage buckets.
You may also consider trying out Artifact Registry, which gives you more control over your images.

Related

Docker push to GCP registry fails due to "unknown buckets"

I am trying to push my own docker image to the GCP registry but when I do that I get the following output:
docker push gcr.io/<my-project-id>/<image>
Using default tag: latest
The push refers to repository [gcr.io/<my-project-id>/<image>]
f15150e56202: Pushed
f4d9210559a2: Pushed
740ec60a06f1: Pushed
5f70bf18a086: Layer already exists
707ca84b07d6: Pushed
cd120726f64b: Layer already exists
033eaa4a923c: Layer already exists
3f6108380787: Layer already exists
1f8751be0506: Layer already exists
59b0c7a2fe4d: Layer already exists
7372faf8e603: Layer already exists
9be7f4e74e71: Layer already exists
36cd374265f4: Layer already exists
5bdeef4a08f3: Layer already exists
name unknown: Buckets(<my-project-id>,artifacts.<my-project-id>.appspot.com)
This is the same behaviour as the one from this question.
The bucket artifacts.<my-project-id>.appspot.com is created, and some stuff is pushed to it, so there is no access problem. It just seems that the docker push suddenly does not recognize the bucket.
I already double-checked that I have all the permissions I need on my service account, that I am logged in to the gcloud CLI, and that Docker is configured to use the GCP auth.
Any clues on why this might happen?
Thanks!
Recreating the GCP project seems to have fixed this issue.

Is there any way to submit an image with a specific stage to GCR?

I want to submit an image with a specific stage that I specified in the Dockerfile to GCR. How can I do this?
Or maybe there is a way to push a local image to GCR?
something like this:
docker build --target local -t my-local-image .
gcloud builds submit --tag gcr.io/PROJECT-ID/my-local-image --image my-local-image
To push any local image to Container Registry using Docker or another third-party tool, you need to first tag it with the registry name and then push the image.
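For example, reusing the stage and names from the question (PROJECT-ID is a placeholder), the sequence could look like this:
# Build only the "local" stage, then tag it for Container Registry and push
docker build --target local -t my-local-image .
docker tag my-local-image gcr.io/PROJECT-ID/my-local-image
docker push gcr.io/PROJECT-ID/my-local-image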
The following factors might impact uploads for large images:
Upload time: Any request sent to Container Registry has a 2 hour timeout limit. If you authenticate to Container Registry using an access token, the token expires after 60 minutes. If you expect your upload time to exceed 60 minutes, use a different authentication method.
Image size: Container Registry uses Cloud Storage for each registry's underlying storage. Cloud Storage quotas and limits apply to each registry, including the 5 TB maximum size for an object in storage.
Pushing an image requires one of the following Cloud Storage roles, or a role with the same permissions:
1. Pushing the first image to a registry in your project:
Role: Storage Admin (roles/storage.admin) at the Google Cloud project level. The predefined Owner role includes these permissions. The first time you push an image to a registry host in your project (such as gcr.io), Container Registry creates a storage bucket for the registry. The Storage Admin role has the necessary permissions to create the storage bucket.
2. Pushing images to an existing registry in your project:
Role: Storage Legacy Bucket Writer (roles/storage.legacyBucketWriter) on the registry's bucket. This role has permission to push and pull images for existing registry hosts in your project. For example, if your project only contains the gcr.io registry, a user with the Storage Legacy Bucket Writer role can push images to gcr.io but cannot push images to asia.gcr.io.
Refer to the documentation on pushing and pulling images for more detailed information.
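As an illustration, granting push access to an existing registry's bucket might look like this (the email and project ID are placeholders; check the linked docs for the exact syntax):
# Grant Storage Legacy Bucket Writer on the registry's underlying bucket
gsutil iam ch \
    user:someone@example.com:legacyBucketWriter \
    gs://artifacts.myprojectID.appspot.com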

Deploy image to Kubernetes without storing the image in Docker Hub

I'm trying to migrate from docker-maven-plugin to kubernetes-maven-plugin for a test setup for local development and Jenkins builds. The point of the setup is to eliminate differences between local development and the Jenkins server. Since Docker built the image, the image is stored in the local repository and doesn't have to be uploaded to a central server where the base images are located. So we can basically verify our build without uploading anything to the server, and the image is discarded after the task is done (running integration tests).
Is there a similar way to trick Kubernetes into storing the image in the local repository without having to take the round trip to a central repository? E.g., behave as if the image is already downloaded? Note that I still need to fetch the base image from the central repository.
If you don't want to use any Docker repo (public or private), you can use what are called pre-pulled images.
This is a bit annoying, as you need to make sure all the Kubernetes nodes have the images present and also set imagePullPolicy to Never in every Kubernetes manifest.
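A minimal sketch of what that looks like for a single pod (the pod and image names are hypothetical):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: integration-test
spec:
  containers:
  - name: app
    image: my-local-image:latest   # must already be present on the node
    imagePullPolicy: Never         # never contact a registry for this image
EOF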
In your case, if what you call the local repository is some private Docker registry, you just need to store the credentials for the private registry in a Kubernetes secret, and either patch your default service account with imagePullSecrets or add them to your actual deployment/pod manifest. More details here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
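A sketch of that approach, following the linked docs (the registry address and credentials are placeholders):
# Store the registry credentials in a Kubernetes secret
kubectl create secret docker-registry regcred \
    --docker-server=registry.example.com \
    --docker-username=myuser \
    --docker-password=mypassword
# Attach the secret to the default service account so pods can pull
kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "regcred"}]}'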

pushing signed docker images to GCR

Somewhat of a GCR newbie question.
I have not been able to find any documentation on whether it is possible to push signed Docker images to GCR. So I attempted it, but it fails with the error below.
I first built a docker image, then tagged it to point to my project in GCR with "docker tag <image> gcr.io/my-project/image-name:tag"
Then attempted signing using
"docker trust sign gcr.io/my-project/image-name:tag"
Error: error contacting notary server: denied: Token exchange failed for project 'gcr.io:my-project'. Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=gcr.io:my-project before performing this operation.
The GCR API for my project is enabled and I have permissions to push to it.
Do I need to do something more in my GCP project to be able to push signed images, or is it just not supported?
If the latter, how does one (as an image consumer) verify the integrity of the image?
thanks,
J
This is currently not supported in Google Cloud Platform.
You can file a feature request to request its implementation here.
To verify an image's integrity, use image digests. Basically, they are cryptographic hashes associated with the image. You can compare the hash of the image you pulled with the hash you are expecting. Command reference here.
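For example (image name taken from the question above; the digest is a placeholder):
# Pull by tag, then read back the digest Docker recorded for it
docker pull gcr.io/my-project/image-name:tag
docker inspect --format='{{index .RepoDigests 0}}' gcr.io/my-project/image-name:tag
# Pulling by digest guarantees you get exactly that image
docker pull gcr.io/my-project/image-name@sha256:<digest>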
Google now implements the concept of Binary Authorization and "attestations", based on Kritis. The intention is for this to be used within your CI/CD pipeline to ensure images have been created and validated correctly.
Full docs are here, but the process basically consists of signing an image via a PKIX signature and then using the gcloud tool to create an attestation.
You then specify a Binary Authorization policy on your GKE cluster to enforce which attestations are required before an image is allowed to be used within the cluster.
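Roughly, creating an attestation with gcloud looks like this (all names, paths, and the digest are placeholders; see the docs for the full signing flow):
gcloud container binauthz attestations create \
    --artifact-url="gcr.io/my-project/image-name@sha256:<digest>" \
    --attestor="projects/my-project/attestors/my-attestor" \
    --signature-file=/tmp/signature.pgp \
    --public-key-id="<key-fingerprint>"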

Access denied pushing images to gcr repository

No matter what I do, I can't push images to the Google repository. I followed this guide and I run these commands directly from the Google Cloud Shell:
docker build -t eu.gcr.io/[project-id]/[imagename]:[tag] ~/[folder]
docker tag eu.gcr.io/[project-id]/[imagename]:[tag] eu.gcr.io/[project-id]/[imagename]:[tag]
docker push eu.gcr.io/[project-id]/[imagename]:[tag]
I get this output when pushing
4d1ea31bd998: Preparing
03b6a2b0817c: Preparing
104044bed4c7: Preparing
2222fefcbbfc: Preparing
75166708bd17: Preparing
5eefc1b802bb: Waiting
5c33df241050: Waiting
ffc4c11463ee: Waiting
denied: Unable to access the repository, please check that you have permission to access it.
I've searched for this online but everyone seems to have authentication issues. Since I can't execute this from either my local machine or the Google Cloud Shell, I don't think there's an authentication problem, since on the shell I'm using the owner account [owner]@[project-id].
From my understanding, pushing should create a bucket for this, but I even tried creating a bucket myself, and I have no idea if and how to configure it to be used as an image repository. I have billing and the Container Registry API activated.
You probably did not authenticate with the registry. Please try to log in before pushing. Just type this in the console and enter your credentials:
docker login eu.gcr.io
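Alternatively (not part of the original answer, but the approach the GCR docs recommend), let gcloud configure Docker's credential helper, or log in with a short-lived access token:
# Register gcloud as a Docker credential helper for gcr.io hosts
gcloud auth configure-docker
# Or authenticate with an access token (expires after 60 minutes)
gcloud auth print-access-token | \
    docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io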
