Sharing docker registry images among gcloud projects - docker

We're hoping to use a Google project to share docker images containing microservices across projects.
I was thinking I could do it using the Kubernetes run command to pull an image from a project other than the current one:
kubectl run gdrive-service --image=us.gcr.io/foo/gdrive-service
My user credentials have access to both projects. However, it seems like the run command can only pull images from the current project.
Is there an approach for doing this? It seems like an obvious use case.

There are a few options here.
Use _json_key auth (described in the Container Registry advanced authentication docs) with Kubernetes image pull secrets; see the sketch after this list.
There is also documentation describing how to add robots across projects, still without needing pull secrets.
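A minimal sketch of the pull-secret route, assuming a service account key file key.json that can read the registry project (the secret name is a placeholder):
kubectl create secret docker-registry gcr-pull-secret \
  --docker-server=https://us.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=unused@example.com
# let the default service account use the secret for image pulls
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-pull-secret"}]}'
kubectl run gdrive-service --image=us.gcr.io/foo/gdrive-service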

In my answer here I describe a way to do this by granting the GKE node service account the Storage Object Viewer role in the project that contains the registry.
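A sketch of that grant, assuming the registry lives in registry-project and the cluster nodes run as the Compute Engine default service account (both identifiers below are placeholders):
gcloud projects add-iam-policy-binding registry-project \
  --member=serviceAccount:CLUSTER_PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role=roles/storage.objectViewer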

Related

Prebuilding a docker image *within* a Github Codespace when the image relies on the organization's other private repositories?

I'm exploring how best to use Github Codespaces for my organization. Our dev environment consists of a Docker dev environment that we run on local machines. It relies on pulling other private repos we maintain via the local machine's ssh-agent. I'd ideally like to keep things as consistent as possible and have our Codespaces solution use the same Docker dev environment from within the codespace.
There's a naive solution of just building a new codespace with no devcontainer.json and going through all the setup for a dev environment each time you create a new one... but I'd like to avoid this. Ideally, I keep the same dev experience and am able to get the codespace to prebuild by building the docker image and somehow getting access to our other private repos.
An extremely hacky-feeling solution that works for automated building is creating an ssh key and storing it as a user codespace secret, then setting up the ssh-agent with that key as part of the postCreateCommand. My understanding is that this would not work with the onCreateCommand because "it will not typically have access to user-scoped assets or secrets." To reiterate, this works for automated building, but not pre-building.
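A minimal sketch of that setup, assuming the key is stored in a user Codespaces secret exposed as the environment variable DEV_SSH_KEY (the variable name is hypothetical):
# postCreateCommand script: materialize the key from the secret and load it into an agent
mkdir -p ~/.ssh
printf '%s\n' "$DEV_SSH_KEY" > ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519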
From this Github issue it looks like cloning via ssh is a complete no-go with prebuilds because ssh will need a user-defined ssh key, which isn't available from the onCreateCommand. The only potential workaround I can see for this is having an organization-wide read-only ssh-key... which seems potentially even sketchier than having user-created ssh keys as user secrets.
The other possibility I can think of is switching to https for the git clones. This would require adding access to the other repos, which is no big deal. BUT I can't quite see how to get access from within the docker image. When I tried this, I was getting errors because I was asked for a username and password when I ran a git clone from within docker... even though git clone worked fine in the base codespace. Is there a way to forward whatever tokens Github uses for access to other repos into the docker build process? Is there a way to have user-generated tokens get passed into the docker build process and use that for access instead?
Thoughts and roasts welcome.
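On the token-forwarding question above: one general-purpose mechanism (not Codespaces-specific, offered only as a sketch) is BuildKit build secrets, which expose a token to a RUN step without baking it into an image layer. The GITHUB_TOKEN variable and gh_token id below are assumptions:
# pass the token as a BuildKit secret instead of a build arg
DOCKER_BUILDKIT=1 docker build --secret id=gh_token,env=GITHUB_TOKEN -t dev-env .
# the matching Dockerfile step would look roughly like:
#   RUN --mount=type=secret,id=gh_token \
#       git clone https://$(cat /run/secrets/gh_token)@github.com/my-org/private-repo.git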

Can't push the docker image to gcp-cluster

So I did a tutorial based on TensorFlow Serving and Kubernetes. All steps work fine except pushing the docker image for the cluster.
This is the tutorial that I have tried:
https://www.tensorflow.org/tfx/serving/serving_kubernetes
And when I try to push the docker image, it gives an error.
I have also tried creating the cluster with scopes, but the result is the same as above.
The command I use to create a cluster with scopes:
gcloud container clusters create resnet-serving-cluster --num-nodes 5 --scopes=storage-rw
So what is wrong here? Have I done something wrong?
OK, found the answer. My project ID and the registry name were not the same. I re-tagged the docker image with a new registry name containing my project ID and pushed it. It works.
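For anyone hitting the same thing, the fix amounts to re-tagging and pushing with a registry path that contains your project ID; a sketch with placeholder names:
docker tag my-local-image gcr.io/my-project-id/my-image
gcloud docker -- push gcr.io/my-project-id/my-image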
There may be a variety of reasons.
1) I'd recommend starting by checking whether full API access has been granted.
2) Update the gcloud components: gcloud components update
3) Use gsutil to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You are trying to push your image into your private registry on gcloud. Please verify that you can access your private registry:
gcloud container images list-tags gcr.io/"your-project"/"image"
You can find all information about the gcloud private registry in the Container Registry documentation, along with additional helpful troubleshooting information.
Please note that:
By default, project Owners and Editors have push and pull permissions for that project's Container Registry bucket. Project Viewers have pull permission only.
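If an account outside those roles needs pull access, one option is to grant it read access on the registry's underlying storage bucket directly; a sketch, assuming the default gcr.io bucket name artifacts.<project-id>.appspot.com (member and project ID are placeholders):
gsutil iam ch serviceAccount:puller@my-project.iam.gserviceaccount.com:objectViewer gs://artifacts.my-project.appspot.com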

How to use private quay.io images with fleet and CoreOS

I've been trying to deploy containers with fleet on a CoreOS cluster. However, some of the docker images are stored privately on quay.io, which requires a login.
Now I could add a docker login as a precondition to every relevant unit file, but that doesn't seem right. I'm sure there must be a way to store the respective registry credentials somewhere docker can find it when trying to download the image.
Any ideas?
The best way to do this is with a Quay "robot account", which is a separate set of credentials from your regular account. This is helpful for two reasons:
they can be revoked if needed
they can be limited to a subset of your repositories
When you make a new robot account, if you click "view credentials", you will get the credentials pre-formatted for common use-cases, such as Docker and Kubernetes.
In this case, you want "Docker Configuration", which is placed at ~/.docker/config.json on the server(s). Docker will automatically use this to authenticate with Quay.io.
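For example, logging a host in with a robot account looks roughly like this (the robot name and token below are placeholders):
docker login -u="myorg+deploy" -p="ROBOT_TOKEN" quay.io
Running that writes the credentials into ~/.docker/config.json, which is the same file the "Docker Configuration" download provides.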

Google Container Registry access denied when pushing docker container

I'm trying to push my docker container to the Google Container Registry, following this tutorial, but when I run
gcloud docker push b.gcr.io/my-bucket/image-name
I get the error :
The push refers to a repository [b.gcr.io/my-bucket/my-image] (len: 1)
Sending image list
Error: Status 403 trying to push repository my-bucket/my-image: "Access denied."
I couldn't find any more explanation (no -D, --debug, or --verbose arguments were recognized). gcloud auth list and docker info tell me I'm connected to both services.
Anything I'm missing?
You need to make sure the VM instance has enough access rights. You can set these at the time of creating the instance, or if you have already created the instance, you can also edit it (but first, you'll need to stop the instance). There are two ways to manage this access:
Option 1
Under Identity and API access, select Allow full access to all Cloud APIs.
Option 2 (recommended)
Under Identity and API access, select Set access for each API, then choose Read Write for Storage.
Note that you can also change these settings even after you have already created the instance. To do this, you'll first need to stop the instance, and then edit the configuration as mentioned above.
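If the instance already exists, the same change can also be made from the command line once the instance is stopped; a sketch, with the instance name as a placeholder (depending on your gcloud version you may need to pass --service-account explicitly as well):
gcloud compute instances stop my-instance
gcloud compute instances set-service-account my-instance --scopes=storage-rw
gcloud compute instances start my-instance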
Use gsutil to check the ACL to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You'll need to check which group the account you are using is in ('owners', 'editors', 'viewers' etc.)
EDIT: I have experienced a very similar problem to this myself recently and, as @lampis mentions in his post, it's because the correct permission scopes were not set when I created the VM I was trying to push the image from. Unfortunately there's currently no way of changing the scopes once a VM has been created, so you have to delete the VM (making sure the disks are set to auto-delete!) and recreate the VM with the correct scopes ('compute-rw', 'storage-rw' seems sufficient). It doesn't take long though ;-).
See the --scopes section here: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
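A sketch of that create command with the scopes mentioned (instance name and zone are placeholders):
gcloud compute instances create my-build-vm --zone=us-central1-a --scopes=compute-rw,storage-rw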
I am seeing this too, but on an intermittent basis, e.g. I may get the error denied: Permission denied for "latest" from request "/v2/....", but when trying again it will work.
Is anyone else experiencing this?
In my case, I forgot to prepend gcloud to the command (and I was wondering how docker would authenticate):
$ gcloud docker push <image>
In your terminal, run the code below
$ sudo docker login -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://[HOSTNAME]
Where
[HOSTNAME] is your container registry location (either gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io). Check your tagged images to be sure by running $ sudo docker images.
If this doesn't fix it, try reviewing the VM's access scopes.
If you are using Docker 1.7.0, there was a breaking change to how they handle authentication, which affects users who are using a mix of gcloud docker and docker login.
Be sure you are using the latest version of gcloud via: gcloud components update.
So far this seems to affect gcloud docker, docker-compose and other tools that were reading/writing the Docker auth file.
Hopefully this helps.
Same problem here; the troubleshooting section from https://cloud.google.com/tools/container-registry/#access_denied wasn't very helpful. I have Docker and gcloud fully updated. Don't know what else to do.
BTW, I'm trying to push to "gcr.io".
Fixed. I was using a VM in Compute Engine as my development machine, and it looks like I didn't give it enough rights for Storage.
I had the same access denied problem and I resolved it by creating a new image with a tag:
docker tag IMAGE_WITH_ACCESS_DENIED gcr.io/my-project/my-new-image:test
After that I could push it to the Container Registry:
gcloud docker -- push gcr.io/my-project/my-new-image:test
Today I also got this error inside Jenkins running on Google Kubernetes Engine when pushing the docker container. The reason was a node pool version upgrade from 1.9.6-gke.1 to 1.9.7-gke.0 in GCP that I had done before. It worked again after the downgrade.
You need to log in to gcloud from the machine you are on:
gcloud auth login

How to have login and access settings with a docker image registry

I am not new to LXC or Docker, but I do not have much knowledge about image registries.
So I decided to get started and followed the tutorials and installation instructions.
And things are working fine in terms of pushing to and pulling from my custom registry.
My questions:
The registry does not seem to come with a login/access management system.
1st - What are the overall steps to follow to implement login (and possibly access) management for a custom registry?
2nd - If this mechanism is implemented, is there a way to use docker login to use that mechanism instead of https://hub.docker.com's?
To the 2nd: By using docker login yourregistry, you can use Docker's login mechanism to log in to a specific registry. The credentials are saved as well;
Docker Hub is just the default. Unfortunately I don't know how to set up your own registry; personally, I'm just using this at my company to pull from our Artifactory.
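For the 1st question, a common approach for a self-hosted registry is to put basic (htpasswd) authentication in front of the registry:2 image, which docker login then works against. A rough sketch with placeholder credentials (TLS setup is omitted here):
# create an htpasswd entry (the registry expects bcrypt)
mkdir -p auth
docker run --rm --entrypoint htpasswd httpd:2 -Bbn myuser mypassword > auth/htpasswd
# run the registry with basic auth enabled
docker run -d -p 5000:5000 --name registry \
  -v "$(pwd)/auth:/auth" \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2
# clients then authenticate with the normal docker login flow
docker login localhost:5000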
