So I set up a minimal Google Cloud Compute instance with terraform and want to use a docker image on it. The desired image is pushed to an artifact repository in the same project.
Error
The issue is that whatever I do, when trying to pull with the pull command specified in the artifact repo, I get:
sudo docker pull europe-west3-docker.pkg.dev/[project]/[repo]/[image]:latest
Error response from daemon:
Head "https://europe-west3-docker.pkg.dev/v2/[project]/[repo]/api/manifests/latest": denied:
Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/[project]/locations/europe-west3/repositories/[repo]" (or it may not exist)
Debugging attempts
What I've tried:
The default service account should have access without any additional setup. To rule that out, I also tried creating a service account with the necessary role myself (see the sketch after this list).
Tried to debug access with the Policy Troubleshooter; it reports that access should be possible
Made sure to enable Docker auth with gcloud auth configure-docker europe-west3-docker.pkg.dev and checked the active account with gcloud auth list.
Tried the pull command on my local machine, works flawlessly
Tried to access the repo via gcloud artifacts docker images list [repo], works fine as well
Tried to run gcloud init
Tried pulling images from the official docker repo, also works flawlessly
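For reference, a minimal sketch of granting the reader role on the repository with gcloud instead of the console; the member shown is the default compute service account, and the project number is a placeholder:
gcloud artifacts repositories add-iam-policy-binding [repo] \
  --location=europe-west3 \
  --member="serviceAccount:[project-number]-compute@developer.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"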
Terraform code for reference
#
# INSTANCE
#
resource "google_compute_instance" "mono" {
  name                      = "mono"
  machine_type              = "n1-standard-1"
  allow_stopping_for_update = true # allows hard updating

  service_account {
    scopes = ["cloud-platform"]
  }

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }
}

#
# REPO
#
resource "google_artifact_registry_repository" "repo" {
  location      = var.location
  repository_id = var.project_name
  format        = "DOCKER"
}
Solution
After wasting too many hours on this, I found that once I added my user to the docker group and ran docker without sudo, it suddenly worked.
So as laid out here: https://docs.docker.com/engine/install/linux-postinstall/
sudo groupadd docker
sudo usermod -aG docker $USER
# exit and login again via ssh
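After logging back in, the same pull now goes through without sudo:
docker pull europe-west3-docker.pkg.dev/[project]/[repo]/[image]:latest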
Explanation
As per the docs:
Note: If you normally run Docker commands on Linux with sudo, Docker looks for Artifact Registry credentials in /root/.docker/config.json instead of $HOME/.docker/config.json. If you want to use sudo with docker commands instead of using the Docker security group, configure credentials with sudo gcloud auth configure-docker instead.
So if you use docker with sudo, you also need to run
sudo gcloud auth configure-docker
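To sanity-check which credentials a pull will actually see, it helps to compare the two config files; after the sudo-ed configure-docker call, root's config should contain a credHelpers entry for the registry host (the expected content below is a sketch):
cat $HOME/.docker/config.json       # used by docker without sudo
sudo cat /root/.docker/config.json  # used by docker with sudo
# expected after "sudo gcloud auth configure-docker europe-west3-docker.pkg.dev":
# { "credHelpers": { "europe-west3-docker.pkg.dev": "gcloud" } }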
I thought I had tried this, but it is what it is...
Related
I want to check if a container on gitlab is built properly with the right content. As a first step, I'm trying to login to the registry by running the following command:
sudo docker login -u "ci-registry-user" -p "some-token" "registry.gitlab.com/some-registry:container"
However, I run into Get "https://registry.gitlab.com/v2/": unauthorized: HTTP Basic: Access denied errors.
My question is twofold:
How do I access the hosted containers on gitlab? My goal is to access the container and run docker exec -it container_name bash && cat /some/path/to_container.py
Is there an alternative way to achieve this without logging in to the registry?
Check your GitLab PAT scope to make sure it is api or at least read_registry.
Read-only (pull) for Container Registry images if a project is private and authorization is required.
And make sure you have access to that project with that token, if thesekyi/paalup is a private project.
Avoid sudo, as it changes the execution environment from your logged-in user to root.
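A minimal sketch of a login that follows that advice, with the token read from stdin rather than passed with -p (the environment variable name is just an example):
echo "$GITLAB_TOKEN" | docker login registry.gitlab.com -u ci-registry-user --password-stdin
# note: log in to the registry host itself, not to a repository path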
I am trying to pull a docker container from our private GCP container registry on a regular VM instance (i.e. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user#test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image)
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the vm instance, and it works perfectly.
I have also created vm instances and enabled the option "Deploy a container image to this VM instance", providing the container address (example.io/docker-dev/name:v01), and that works as well. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations
Question:
Why can't I pull docker images from my private container registry on an Ubuntu or Debian VM, even though docker works fine pulling images from other repositories (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then run
sudo tar xzf ./docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz -C /usr/bin/ docker-credential-gcr && sudo chmod +x /usr/bin/docker-credential-gcr
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
VM instances, including those in Google Kubernetes Engine clusters, must have the correct storage access scopes configured to push or pull images. By default, VMs can pull images when Container Registry is in the same project.
If you run gcloud auth configure-docker, the auth information is saved under your personal directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under the root user's directory and doesn't find anything there.
You should run both commands either with sudo or both without it; don't mix the two.
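A sketch of the two consistent variants:
# Variant 1: everything as root
sudo gcloud auth configure-docker
sudo docker pull example.io/docker-dev/name:v01

# Variant 2: everything as your own user (requires membership in the docker group)
gcloud auth configure-docker
docker pull example.io/docker-dev/name:v01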
I am trying to push a docker image to GCP, but I am still getting this error:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed this https://cloud.google.com/container-registry/docs/quickstart step by step and everything works fine until docker push.
It's a clean GCP project.
I've already tried:
use gcloud as a Docker credential helper:
gcloud auth configure-docker
reinstall Cloud SDK and gcloud init
add Storage Admin role to my account
What am I doing wrong?
Thanks for any suggestions
If it can help those in the same situation as me:
Docker 19.03
Google cloud SDK 288.0.0
Important: my user is not in the docker group, so I have to prepend sudo to every docker command.
When gcloud and docker are not using the same config.json
When I use gcloud credential helper:
gcloud auth configure-docker
it updates the JSON config file in my $HOME: /home/{username}/.docker/config.json. However, when logging out and logging back in from the Docker CLI,
sudo docker login
The warning shows a different path, which makes sense as I sudo-ed:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
sudo everywhere
To fix it, I did the following steps:
# Clear everything
sudo docker logout
sudo rm /root/.docker/config.json
rm /home/{username}/.docker/config.json
# Re-login
sudo docker login
sudo gcloud auth login --no-launch-browser # --no-launch-browser is optional
# Check both Docker CLI and gcloud credential helper are here
sudo vim /root/.docker/config.json
# Just in case
sudo gcloud config set project {PROJECT_ID}
I can now push my Docker images to both GCR and Docker Hub.
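As a quick check that the sudo-ed credentials work, a sketch of tagging and pushing an image to GCR (the image name is hypothetical):
sudo docker tag my-image gcr.io/{PROJECT_ID}/my-image:latest
sudo docker push gcr.io/{PROJECT_ID}/my-image:latest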
So I have my docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I can list my images if I define "eu.gcr.io/..." as my project path. However, the machine seems to use ".gcr.io" and is therefore unable to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a way to change this (either move the image to gcr.io, or move the machine to eu.gcr.io). However, I'm not sure if this is the issue.
Maybe it is an authentication issue with docker?
A VM can't really be "on" ".gcr.io"; it may run in a non-European region/zone, but that shouldn't be a problem.
From a GCP access control point of view, the registry is just a bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
check if the VM has a scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - ...
This scope should be in place to read from the registry:
https://www.googleapis.com/auth/devstorage.read_only
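If the scope is missing, it can only be changed while the instance is stopped; a hedged sketch with gcloud, keeping the default compute service account (the project number is a placeholder):
gcloud compute instances stop <instance-name>
gcloud compute instances set-service-account <instance-name> \
  --service-account=<project-number>-compute@developer.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only
gcloud compute instances start <instance-name>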
Alternatively, if you don't have such a scope on the VM but do have gcloud configured there, you can use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command which needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the 'gcloud docker -- pull ...' command to get the image from the repository for use within my VM.
After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either with the install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After installing the SDK, run gcloud init to initialize it: just open a terminal, type gcloud init, and configure your profile. After that, install Docker:
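# note: docker-ce comes from Docker's own apt repository; on a stock Debian/Ubuntu image
# you need to add that repository first, or install the distro's "docker.io" package instead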
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries which you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper. To do so, run the command:
gcloud auth configure-docker
After that you can pull or push images on your registry using gcloud together with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
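With the credential helper configured, plain docker should work against the same image as well (a sketch using the example image from above):
docker pull gcr.io/google-containers/example-image:latest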
I have been working with google's machine learning platform, cloudML.
Big picture:
I'm trying to figure out the cleanest way to get their docker environment up and running on Google Compute instances, with access to the cloudML API and my storage bucket.
Starting locally, I have my service account configured
C:\Program Files (x86)\Google\Cloud SDK>gcloud config list
Your active configuration is: [service]
[compute]
region = us-central1
zone = us-central1-a
[core]
account = 773889352370-compute@developer.gserviceaccount.com
disable_usage_reporting = False
project = api-project-773889352370
I boot a compute instance with the google container image family
gcloud compute instances create gci --image-family gci-stable --image-project google-containers --scopes 773889352370-compute@developer.gserviceaccount.com="https://www.googleapis.com/auth/cloud-platform"
EDIT: Need to explicitly set scope for communicating with cloudML.
I can then ssh into that instance (for debugging)
gcloud compute ssh benweinstein2010@gci
On the compute instance, I can pull the cloudML docker from GCR and run it
docker pull gcr.io/cloud-datalab/datalab:local
docker run -it --rm -p "127.0.0.1:8080:8080" \
--entrypoint=/bin/bash \
gcr.io/cloud-datalab/datalab:local
I can confirm I have access to my desired bucket. No credential problems there
root#cd6cc28a1c8a:/# gsutil ls gs://api-project-773889352370-ml
gs://api-project-773889352370-ml/Ben/
gs://api-project-773889352370-ml/Cameras/
gs://api-project-773889352370-ml/MeerkatReader/
gs://api-project-773889352370-ml/Prediction/
gs://api-project-773889352370-ml/TrainingData/
gs://api-project-773889352370-ml/cloudmldist/
But when I try to mount the bucket
root#139e775fcf6b:~# gcsfuse api-project-773889352370-ml /mnt/gcs-bucket
Using mount point: /mnt/gcs-bucket
Opening GCS connection...
Opening bucket...
Mounting file system...
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: failed to open /dev/fuse: Operation not permitted
It must be that I am required to activate my service account from within the docker container? I have had similar (unsolved) issues elsewhere.
gcloud auth activate-service-account
I could pass docker a credentials .json file, but I'm not sure where/if gcloud ssh passes those files to my instance.
I have access to cloud platform more broadly, for example I can post a request to the cloudML API.
gcloud beta ml predict --model ${MODEL_NAME} --json-instances images/request.json > images/${outfile}
which succeeds, so some credentials are being passed. I guess I could pass them to Compute Engine, and then from Compute Engine to the docker instance? It feels like I'm not using the tools as intended; I thought gcloud would handle this once I authenticated locally.
This was a docker issue, not a gcloud permissions issue. Docker needs to be run with --privileged to allow fuse to mount.
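For reference, a sketch of the same datalab container run from above with the flag added:
docker run -it --rm --privileged -p "127.0.0.1:8080:8080" \
  --entrypoint=/bin/bash \
  gcr.io/cloud-datalab/datalab:local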