Can't push the docker image to gcp-cluster - docker

So I followed a tutorial on TensorFlow Serving and Kubernetes. All steps work fine except pushing the Docker image to the cluster.
This is the tutorial I followed:
https://www.tensorflow.org/tfx/serving/serving_kubernetes
When I try to push the Docker image, it gives an error like this,
I have tried creating the cluster with scopes as well, but the result is the same as above.
The command I use to create a cluster with scopes:
gcloud container clusters create resnet-serving-cluster --num-nodes 5 --scopes=storage-rw
So what is wrong here? Have I done something wrong?

OK, found the answer. My project ID and the registry name in my image tag did not match. I re-tagged the Docker image with a registry path containing my project ID and pushed it. It works.
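The fix can be sketched as follows; the project ID and image name below are placeholders, not values from the question:

```shell
# Placeholders -- substitute your real project ID and local image name.
PROJECT_ID="my-gcp-project"
IMAGE="resnet-serving"

# The registry path must contain the project ID, not the project name.
TARGET="gcr.io/${PROJECT_ID}/${IMAGE}"

docker tag "$IMAGE" "$TARGET"
docker push "$TARGET"
```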

There may be a variety of reasons.
1) I'd recommend starting by checking whether full API access has been granted.
2) Update your gcloud components: gcloud components update
3) Use gsutil to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>

You are trying to push your image into your private registry on gcloud. Please verify that you can access your private registry:
gcloud container images list-tags gcr.io/"your-project"/"image"
All the information about the gcloud private registry can be found here:
Additional helpful information can be found here
Please note that:
By default, project Owners and Editors have push and pull permissions
for that project's Container Registry bucket.
Project Viewers have pull permission only.
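If a specific account needs pull access beyond those defaults, one way is to grant it on the registry's backing bucket with gsutil. The bucket name pattern below (artifacts.PROJECT-ID.appspot.com) is the bucket Container Registry stores images in; the service-account address and project ID are placeholders:

```shell
# Grant read-only (pull) access on the registry's backing bucket.
gsutil iam ch \
  serviceAccount:puller@my-project.iam.gserviceaccount.com:objectViewer \
  gs://artifacts.my-project.appspot.com
```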

Related

Pulling images from private repository in kubernetes without using imagePullSecrets

I am new to Kubernetes deployments, so I wanted to know: is it possible to pull images from a private repo without using imagePullSecrets in the deployment YAML files, or is it mandatory to create a docker-registry secret and pass that secret in imagePullSecrets?
I also looked at adding imagePullSecrets to a service account, but that is not the requirement. I would love to know whether, if I set up credentials in variables, Kubernetes can use them to pull those images.
I would also like to know how this can be achieved; a reference to a document would help.
Thanks in advance.
As long as you're using Docker on your Kubernetes nodes (please note that Docker support has itself recently been deprecated in Kubernetes), you can authenticate the Docker engine on your nodes itself against your private registry.
Essentially, this boils down to running docker login on your machine and then copying the resulting credentials JSON file directly onto your nodes. This, of course, only works if you have direct control over your node configuration.
See the documentation for more information:
If you run Docker on your nodes, you can configure the Docker container runtime to authenticate to a private container registry.
This approach is suitable if you can control node configuration.
Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If you put the same file in the search paths list below, kubelet uses it as the credential provider when pulling images.
{--root-dir:-/var/lib/kubelet}/config.json
{cwd of kubelet}/config.json
${HOME}/.docker/config.json
/.docker/config.json
{--root-dir:-/var/lib/kubelet}/.dockercfg
{cwd of kubelet}/.dockercfg
${HOME}/.dockercfg
/.dockercfg
Note: You may have to set HOME=/root explicitly in the environment of the kubelet process.
Here are the recommended steps for configuring your nodes to use a private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json on your PC.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes; for example:
if you want the names: nodes=$( kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}' )
if you want to get the IP addresses: nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )
Copy your local .docker/config.json to one of the search paths list above.
for example, to test this out: for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done
Note: For production clusters, use a configuration management tool so that you can apply this setting to all the nodes where you need it.
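The steps above can be condensed into a short sketch; the registry host and the root login on the nodes are assumptions about your environment:

```shell
# 1. Log in once on your workstation; this writes ~/.docker/config.json.
docker login registry.example.com

# 2. Collect the node names and copy the credentials file into one of
#    kubelet's search paths on every node.
nodes=$( kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name} {end}' )
for n in $nodes; do
  scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json
done
```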
If the Kubernetes cluster is private, you can deploy your own, private (and free) JFrog Container Registry using its Helm Chart in the same cluster.
Once it's running, you should allow anonymous access to the registry to avoid the need for a login in order to pull images.
If you prevent external access, you can still access the internal k8s service created and use it as your "private registry".
Read through the documentation and see the various options.
Another benefit is that JCR (JFrog Container Registry) is also a Helm repository and a generic file repository, so it can be used for more than just Docker images.
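A minimal install might look like the following; the chart name artifactory-jcr and the release/namespace names are assumptions, so check the JFrog Helm chart documentation for the current coordinates:

```shell
# Add JFrog's chart repository and install JCR into its own namespace.
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm install jcr jfrog/artifactory-jcr --namespace jcr --create-namespace
```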

Kubernetes: Id or size of image "XXX.dkr.ecr.eu-west-1.amazonaws.com/msg/my_image:v1.0" is not set

Today I tried to run a CentOS-based container as a second container in my pod.
While deploying my deployment.yaml I got this message:
ImageInspectError: Failed to inspect image "XXX.dkr.ecr.eu-west-1.amazonaws.com/msg/ym_image:v1.0":
Id or size of image "XXX.dkr.ecr.eu-west-1.amazonaws.com/msg/my_image:v1.0" is not set
Does somebody know how to set this ID or size?
Kind regards
Markus
I am not familiar with AWS repositories, but at first glance it seems you are trying to pull an image with an improper name:tag.
Example of a well-tagged repository:
docker tag hello-world aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-repository
Optionally you can add a version: "hello-repository:latest"
You can log in to your AWS account or list your repositories and verify them against the settings in your deployment.
Could you please verify whether your repository doesn't actually start at "msg":
XXX.dkr.ecr.eu-west-1.amazonaws.com/msg/ym_image:v1.0"
All the information about repositories in AWS can be found here:
https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html
Try to pull the mentioned image using Docker and share your findings.
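With a recent AWS CLI, that pull test could look like this; XXX stands for the account ID, as in the question, and the region matches the error message:

```shell
# Authenticate Docker against the ECR registry, then try pulling the
# exact reference from the error message.
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin XXX.dkr.ecr.eu-west-1.amazonaws.com
docker pull XXX.dkr.ecr.eu-west-1.amazonaws.com/msg/my_image:v1.0
```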

GCP container push not working - "denied: Please enable Google Container Registry API in Cloud Console at ..."

I'm having trouble uploading my docker image to GCP Container registry. I was following the instructions here.
As you can see in the screenshot below, I've:
Logged into my google cloud shell and built a docker image via a dockerfile
Tagged my image correctly (I think)
Tried to push the image using the correct command (I think)
However, I'm getting this error:
denied:
Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=bentestproject-184220 before performing this operation.
When I follow that link, it takes me to the wrong project:
When I select my own project, I can see that "Google Container Registry API" is indeed enabled:
How do I upload my docker images?
It seems that you mistyped your project ID. Your project name is BensTestsProject, but its ID is bentestproject-184220.
I had the same issue and solved it. In my case the project name in the image tag was wrong. You should re-check that "bentestproject-184220" in your image tag is really your project ID.

Sharing docker registry images among gcloud projects

We're hoping to use a Google project to share Docker images containing microservices across projects.
I was thinking I could do it using the kubernetes run command and pull an image from a project other than the current one:
kubectl run gdrive-service --image=us.gcr.io/foo/gdrive-service
My user credentials have access to both projects. However, it seems like the run command can only pull images from the current project.
Is there an approach for doing this? It seems like an obvious use case.
There are a few options here.
Use _json_key auth described here with Kubernetes pull secrets.
This describes how to add robots across projects as well, still without needing pull secrets.
In my answer here I describe a way to do this by granting the GKE service account user Storage Object Viewer permission under the project that contains the registry.
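For the first option, a pull secret built from a service-account key might look like this; the key file, secret name, and email are placeholders:

```shell
# Create a docker-registry secret from a GCP service-account key file,
# using _json_key as the username.
kubectl create secret docker-registry gcr-pull-secret \
  --docker-server=us.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=any@example.com

# Reference the secret when running the image from the other project.
kubectl run gdrive-service --image=us.gcr.io/foo/gdrive-service \
  --overrides='{"apiVersion":"v1","spec":{"imagePullSecrets":[{"name":"gcr-pull-secret"}]}}'
```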

Google Container Registry access denied when pushing docker container

I'm trying to push my Docker container to the Google Container Registry, following this tutorial, but when I run
gcloud docker push b.gcr.io/my-bucket/image-name
I get the error :
The push refers to a repository [b.gcr.io/my-bucket/my-image] (len: 1)
Sending image list
Error: Status 403 trying to push repository my-bucket/my-image: "Access denied."
I couldn't find any more explanation (no -D, --debug, --verbose arguments were recognized), gcloud auth list and docker info tell me I'm connected to both services.
Anything I'm missing ?
You need to make sure the VM instance has enough access rights. You can set these at the time of creating the instance, or if you have already created the instance, you can also edit it (but first, you'll need to stop the instance). There are two ways to manage this access:
Option 1
Under the Identity and API access, select Allow full access to all Cloud APIs.
Option 2 (recommended)
Under the Identity and API access, select Set access for each API and then choose Read Write for Storage.
Note that you can also change these settings even after you have already created the instance. To do this, you'll first need to stop the instance, and then edit the configuration as mentioned above.
Use gsutil to check the ACL to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You'll need to check which group the account you are using is in ('owners', 'editors', 'viewers' etc.)
EDIT: I have experienced a very similar problem to this myself recently and, as @lampis mentions in his post, it's because the correct permission scopes were not set when I created the VM I was trying to push the image from. Unfortunately there's currently no way of changing the scopes once a VM has been created, so you have to delete the VM (making sure the disks are set to auto-delete!) and recreate the VM with the correct scopes ('compute-rw', 'storage-rw' seems sufficient). It doesn't take long though ;-).
See the --scopes section here: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
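Recreating the VM with those scopes might look like this; the instance name and zone are placeholders:

```shell
# Create a replacement VM with the scopes needed to push to the registry.
gcloud compute instances create my-dev-vm \
  --zone us-central1-a \
  --scopes compute-rw,storage-rw
```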
I am seeing this but on an intermittent basis. e.g. I may get the error denied: Permission denied for "latest" from request "/v2/...."., but when trying again it will work.
Is anyone else experiencing this?
For me, I had forgotten to prepend gcloud to the line (and I was wondering how Docker would authenticate):
$ gcloud docker push <image>
In your terminal, run the code below
$ sudo docker login -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://[HOSTNAME]
Where
[HOSTNAME] is your container registry location (it is either gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io). Check your tagged images to be sure by running $ sudo docker images.
If this doesn't fix it, try reviewing the VM's access scopes.
If you are using Docker 1.7.0, there was a breaking change to how they handle authentication, which affects users who are using a mix of gcloud docker and docker login.
Be sure you are using the latest version of gcloud via: gcloud components update.
So far this seems to affect gcloud docker, docker-compose and other tools that were reading/writing the Docker auth file.
Hopefully this helps.
Same problem here; the troubleshooting section at https://cloud.google.com/tools/container-registry/#access_denied wasn't very helpful. I have Docker and gcloud fully updated. Don't know what else to do.
BTW, I'm trying to push to "gcr.io".
Fixed. I was using a VM in Compute Engine as my development machine, and it looks like I hadn't given it enough rights in Storage.
I had the same problem with access denied, and I resolved it by creating a new image using tag:
docker tag IMAGE_WITH_ACCESS_DENIED gcr.io/my-project/my-new-image:test
After that I could push it to the Container Registry:
gcloud docker -- push gcr.io/my-project/my-new-image:test
Today I also got this error inside Jenkins running on Google Kubernetes Engine when pushing the Docker container. The reason was a node pool version upgrade from 1.9.6-gke.1 to 1.9.7-gke.0 in GCP that I had done before. It worked again after the downgrade.
You need to log in to gcloud from the machine you are on:
gcloud auth login
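On newer gcloud versions, you can also register gcloud as a Docker credential helper after logging in, so a plain docker push works against gcr.io:

```shell
# Log in, then wire gcloud into Docker's credential-helper config.
gcloud auth login
gcloud auth configure-docker
```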
