I have made a Dockerfile for deploying my Node.js application to Google Container Engine. It looks like this:
FROM node:0.12
COPY google-cloud-sdk /google-cloud-sdk
RUN /google-cloud-sdk/bin/gcloud init
COPY bpe /bpe
CMD cd /bpe;npm start
I need to run gcloud init inside the Dockerfile because my Node.js application uses the gcloud-node module to create buckets in GCS.
When I build with the above Dockerfile, docker build fails with the following errors:
sudo docker build -t gcr.io/[PROJECT_ID]/test-node:v1 .
Sending build context to Docker daemon 489.3 MB
Sending build context to Docker daemon
Step 0 : FROM node:0.12
---> 57ef47f6c658
Step 1 : COPY google-cloud-sdk /google-cloud-sdk
---> f102b82812f5
Removing intermediate container 4433b0f3627f
Step 2 : RUN /google-cloud-sdk/bin/gcloud init
---> Running in 21aead97cf65
Welcome! This command will take you through the configuration of gcloud.
Your current configuration has been set to: [default]
To continue, you must log in. Would you like to log in (Y/n)?
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute&access_type=offline
ERROR: There was a problem with web authentication.
ERROR: (gcloud.auth.login) invalid_grant
ERROR: (gcloud.init) Failed command: [auth login --force --brief] with exit code [1]
I got it working by hard-coding the authentication key inside the google-cloud-sdk source code. Please let me know the proper way to solve this issue.
gcloud init is a wrapper command which runs
gcloud config configurations create MY_CONFIG
gcloud config configurations activate MY_CONFIG
gcloud auth login
gcloud config set project MY_PROJECT
which lets the user choose a configuration, log in (via the browser), and choose a project.
For your use case you probably do not want to use gcloud init. Instead, download a service account key file from https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=MY_PROJECT, make it accessible inside the Docker container, and activate it via
gcloud auth activate-service-account --key-file my_service_account.json
gcloud config set project MY_PROJECT
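Applied to the Dockerfile from the question, the non-interactive flow could look roughly like this (the key file name my_service_account.json and its path are placeholders, not something from the original post):

```dockerfile
FROM node:0.12

# Copy the SDK and the service account key into the image.
# my_service_account.json is a placeholder name.
COPY google-cloud-sdk /google-cloud-sdk
COPY my_service_account.json /my_service_account.json

# Non-interactive authentication instead of `gcloud init`.
RUN /google-cloud-sdk/bin/gcloud auth activate-service-account \
      --key-file /my_service_account.json \
 && /google-cloud-sdk/bin/gcloud config set project MY_PROJECT

COPY bpe /bpe
CMD cd /bpe;npm start
```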
A better way to use GCS from Container Engine is to grant the permission to the cluster itself.
For example, if you had created your VM with devstorage.read_only scope, trying to write to a bucket would fail, even if your service account has permission to write to the bucket. You would need devstorage.full_control or devstorage.read_write.
While creating the cluster, we can use the following command:
gcloud container clusters create catch-world \
--num-nodes 1 \
--machine-type n1-standard-1 \
--scopes https://www.googleapis.com/auth/devstorage.full_control
I am trying to run a docker image from a private GCR using the KubernetesPodOperator in Cloud Composer, but I am getting the following error:
ERROR: Pod launching failed : Pod took too long to start
I have tried the following till now:
At first I tried increasing the "startup_timeout_seconds" but it didn't help.
Looking at the Composer-created GKE cluster logs gave me the following error:
Failed to apply default image tag "docker pull us.gcr.io/my-proj-name/myimage-
name:latest": couldn't parse image reference "docker pull us.gcr.io/my-proj-
name/myimage-name:latest": invalid reference format: InvalidImageName
I tried pulling the same docker image on my local machine from my private GCR and it worked fine; I'm not sure where the issue is.
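One thing worth noting in the quoted kubelet error above: the string it failed to parse as an image reference is "docker pull us.gcr.io/...", i.e. the literal words docker pull ended up inside the image field, which by itself makes the reference invalid. A minimal sketch of the check (plain shell pattern matching as a stand-in for the real reference parser):

```shell
bad_ref='docker pull us.gcr.io/my-proj-name/myimage-name:latest'
good_ref='us.gcr.io/my-proj-name/myimage-name:latest'

# An image reference must not contain spaces; "docker pull" belongs on the
# command line, not in the image field of the pod spec.
for ref in "$bad_ref" "$good_ref"; do
  case "$ref" in
    *' '*) echo "invalid reference format: $ref" ;;
    *)     echo "ok: $ref" ;;
  esac
done
```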
This link https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod tells me that
"All pods in a cluster will have read access to images in this registry.
The kubelet will authenticate to GCR using the instance’s Google service account. The service
account on the instance will have a https://www.googleapis.com/auth/devstorage.read_only, so
it can pull from the project’s GCR, but not push"
which means the pod should be able to pull the image from GCR. FYI, I am using a service account
to provision my Composer env and it has sufficient permission to read from the GCS bucket.
Also, I did the following steps to add the secret:
gcloud container clusters get-credentials <cluster_name>
kubectl create secret generic gc-storage-rw-key --from-file=key.json=<path_to_serv_accnt_key>
secret_file = secret.Secret(
    deploy_type='volume',
    deploy_target='/tmp/secrets/google',
    secret='gc-storage-rw-key',
    key='<path of serv acct key file>.json')
and referenced it as secrets=[secret_file] inside the KubernetesPodOperator in the DAG.
I have added image_pull_policy='Always' in my DAG as well, but it is not working.
For reference: my CircleCI config.yml contains the following:
- run: echo ${GOOGLE_AUTH} > ${HOME}/gcp-key.json
- run: docker build --rm=false -t us.gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:latest .
- run: gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
- run: gcloud --quiet config set project ${GCP_PROJECT}
- run: gcloud docker -- push us.gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:latest
Could anyone please guide me?
I am trying to pull a docker container from our private GCP container registry on a regular VM instance (i.e. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., run the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image)
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the VM instance and it works perfectly.
I have also created VM instances with the option "Deploy a container image to this VM instance" enabled, providing the container address (example.io/docker-dev/name:v01), and that also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull docker images from my private container registry on an Ubuntu or Debian VM, even though docker pulls images from other repositories (Docker Hub) without problems?
I did this yesterday. Just run gcloud auth configure-docker then run
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that, download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then run
tar xzf ./docker-credential-gcr_linux_amd64-2.0.0.tar.gz --to-stdout docker-credential-gcr | sudo tee /usr/bin/docker-credential-gcloud > /dev/null && sudo chmod +x /usr/bin/docker-credential-gcloud
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
VM instances, including those in Google Kubernetes Engine clusters,
must have the correct storage access scopes configured to push or pull
images. By default, VMs can pull images when Container Registry is in
the same project.
If you run gcloud auth configure-docker, the auth information is saved under your personal home directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, docker looks for auth info under root's home directory and doesn't find anything there.
You should run both commands the same way: either both with sudo or both without.
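A quick sketch of the path mismatch (these are the conventional default locations, assumed rather than taken from the post):

```shell
# docker looks for credential helpers in the invoking user's Docker config;
# sudo switches the effective home directory to /root.
docker_config_for() {
  echo "$1/.docker/config.json"
}

docker_config_for "/home/alice"   # where plain `docker pull` looks (run as alice)
docker_config_for "/root"         # where `sudo docker pull` looks (run as root)
```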
I've discovered a flow that works through GCP console but not through the gcloud CLI.
Minimal Repro
The following bash snippet creates a fresh GCP project and attempts to push an image to gcr.io, but fails with "access denied" even though the user is project owner:
gcloud auth login
PROJECT_ID="example-project-20181120"
gcloud projects create "$PROJECT_ID" --set-as-default
gcloud services enable containerregistry.googleapis.com
gcloud auth configure-docker --quiet
mkdir ~/docker-source && cd ~/docker-source
git clone https://github.com/mtlynch/docker-flask-upload-demo.git .
LOCAL_IMAGE_NAME="flask-demo-app"
GCR_IMAGE_PATH="gcr.io/${PROJECT_ID}/flask-demo-app"
docker build --tag "$LOCAL_IMAGE_NAME" .
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH"
docker push "$GCR_IMAGE_PATH"
Result
The push refers to repository [gcr.io/example-project-20181120/flask-demo-app]
02205dbcdc63: Preparing
06ade19a43a0: Preparing
38d9ac54a7b9: Preparing
f83363c693c0: Preparing
b0d071df1063: Preparing
90d1009ce6fe: Waiting
denied: Token exchange failed for project 'example-project-20181120'. Access denied.
The system is Ubuntu 16.04 with the latest version of gcloud 225.0.0, as of this writing. The account I auth'ed with has role roles/owner.
Inconsistency with GCP Console
I notice that if I follow the same flow through GCP Console, I can docker push successfully:
Create a new GCP project via GCP Console
Create a service account with roles/owner via GCP Console
Download JSON key for service account
Enable container registry API via GCP Console
gcloud auth activate-service-account --key-file key.json
gcloud config set project $PROJECT_ID
gcloud auth configure-docker --quiet
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH" && docker push "$GCR_IMAGE_PATH"
Result: Works as expected. Successfully pushes docker image to gcr.io.
Other attempts
I also tried using gcloud auth login as my @gmail.com account, then using that account to create a service account with gcloud, but that gets the same denied error:
SERVICE_ACCOUNT_NAME=test-service-account
gcloud iam service-accounts create "$SERVICE_ACCOUNT_NAME"
KEY_FILE="${HOME}/key.json"
gcloud iam service-accounts keys create "$KEY_FILE" \
--iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member "serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
--role roles/owner
gcloud auth activate-service-account --key-file="${HOME}/key.json"
docker push "$GCR_IMAGE_PATH"
Result: denied: Token exchange failed for project 'example-project-20181120'. Access denied.
I tried to reproduce the same error using the bash snippet you provided, but it successfully built the 'flask-demo-app' container registry image for me. I used the following steps to reproduce the issue:
Step 1: Use an account which has 'role: roles/owner' and 'role: roles/editor'
Step 2: Create a bash script from your given snippet
Step 3: Add 'gcloud auth activate-service-account --key-file skey.json' to the script to authenticate the account
Step 4: Run the bash script
Result: It created the 'flask-demo-app' container registry image
This leads me to believe that there might be an issue with your environment which is causing this error for you. To troubleshoot this you could try running your code on a different machine, a different network or even on the Cloud Shell.
In my case, project IAM permissions were the issue. Make sure the proper permissions are granted and that the Cloud Resource / Container Registry APIs are enabled.
GCP Access Control
So I have my docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM on this project and on this one use docker to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this op
eration, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.goo
gle.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
Please see the image attached. I can list my images if I define "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io" and is therefore not able to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a method to change this (either move the image to gcr.io or move the machine to eu.gcr.io). However, I'm not sure if this is the issue.
Maybe it is an authentication issue with docker?
A VM basically cannot be "on .gcr.io"; it may run in a non-European region/zone, but that shouldn't be a problem.
From the GCP access-control point of view, the registry is just a bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
check if VM has scope to read from devstorage:
serviceAccounts:
- email: ...-compute#developer.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/devstorage.read_only
- ...
This scope should be in place to read from registry:
https://www.googleapis.com/auth/devstorage.read_only
If you don't have such a scope on the VM but do have gcloud configured there, you can use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
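For reference, what gcloud auth configure-docker does is add credHelpers entries to the Docker client config (~/.docker/config.json). The exact registry list varies by SDK version, but the result looks roughly like:

```json
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}
```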
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command that needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the 'gcloud docker -- pull ...' command to get the image from the repository for use within my VM.
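With the registry path from the question, that looks like:

```shell
gcloud docker -- pull eu.gcr.io/my-project-name/example001
```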
After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either with the install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After installing the SDK, run gcloud init to initialize it: open a terminal, type gcloud init, and configure your profile. After that, install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries which you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper by running the command:
gcloud auth configure-docker
After that you can pull or push images on your registry using the gcloud command with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
I am a bit confused about how I can authenticate the gcloud sdk on a docker container. Right now, my docker file includes the following:
#Install the google SDK
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud
RUN tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz
RUN /usr/local/gcloud/google-cloud-sdk/install.sh
RUN /usr/local/gcloud/google-cloud-sdk/bin/gcloud init
However, I am confused how I would authenticate? When I run gcloud auth application-default login on my machine, it opens a new tab in chrome which prompts me to login. How would I input my credentials on the docker container if it opens a new tab in google chrome in the container?
You might consider using deb packages when setting up your docker container as it is done on docker hub.
That said, you should NOT run gcloud init, gcloud auth application-default login, or gcloud auth login... those are interactive commands which launch a browser. To provide credentials to the container, supply it with a service account key file.
You can download one from cloud console: https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=YOUR_PROJECT or create it with gcloud command
gcloud iam service-accounts keys create
see reference guide.
Either way once you have the key file ADD it to your container and run
gcloud auth activate-service-account --key-file=MY_KEY_FILE.json
You should now be set, but if you want to use it as Application Default Credentials (ADC), that is, in the context of other libraries and tools, you need to set the following environment variable to point to the key file:
export GOOGLE_APPLICATION_CREDENTIALS=/the/path/to/MY_KEY_FILE.json
One thing to point out here is that the gcloud tool does not use ADC, so if you later change your account to something else, for example via
gcloud config set core/account my_other_login#gmail.com
other tools and libraries will continue using the old account via the ADC key file, but gcloud will now use the different account.
You can map your local Google SDK credentials into the image. [Source].
Begin by signing in using:
$ gcloud auth application-default login
Then add the following to your docker-compose.yaml:
volumes:
- ~/.config/gcloud:/root/.config/gcloud
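Put together, a minimal docker-compose.yaml sketch (the service and image names are placeholders):

```yaml
services:
  app:
    image: my-app            # placeholder image name
    volumes:
      # Mount the host's gcloud config (written by `gcloud auth
      # application-default login`) into the container for root.
      - ~/.config/gcloud:/root/.config/gcloud
```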