I am trying to run a docker image from a private GCR using the KubernetesPodOperator in Cloud Composer, but I am getting the following error:
ERROR: Pod launching failed : Pod took too long to start
I have tried the following till now:
At first I tried increasing the "startup_timeout_seconds" but it didn't help.
Looking at the Composer-created GKE cluster logs gave me the following error:
Failed to apply default image tag "docker pull us.gcr.io/my-proj-name/myimage-
name:latest": couldn't parse image reference "docker pull us.gcr.io/my-proj-
name/myimage-name:latest": invalid reference format: InvalidImageName
I tried pulling the same docker image on my local machine from my private GCR and it worked fine, so I am not sure where the issue is.
This link https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod tells me that
"All pods in a cluster will have read access to images in this registry.
The kubelet will authenticate to GCR using the instance’s Google service account. The service
account on the instance will have a https://www.googleapis.com/auth/devstorage.read_only, so
it can pull from the project’s GCR, but not push"
which means the pod should be able to pull images from GCR. FYI, I am using a service account
to provision my Composer env, and it has sufficient permissions to read from the GCS bucket.
Also, I did the following steps to add the secret:
gcloud container clusters get-credentials <cluster_name>
kubectl create secret generic gc-storage-rw-key --from-file=key.json=<path_to_serv_accnt_key>
from airflow.contrib.kubernetes import secret  # Airflow 1.x contrib module used by Composer

secret_file = secret.Secret(
    deploy_type='volume',
    deploy_target='/tmp/secrets/google',
    secret='gc-storage-rw-key',
    key='<path of serv acct key file>.json')
I refer to it as secrets=[secret_file] inside the KubernetesPodOperator in my DAG.
I have added image_pull_policy='Always' in my DAG as well, but it is not working.
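Note that the kubelet error quoted above is trying to parse "docker pull us.gcr.io/my-proj-name/myimage-name:latest" as the image reference, so it is worth checking that the literal text "docker pull" has not crept into the image argument passed to the operator. As a quick sketch for testing the bare reference outside of Composer (assuming kubectl is still pointed at the Composer-created cluster; image-pull-test is a placeholder pod name):
kubectl run image-pull-test --image=us.gcr.io/my-proj-name/myimage-name:latest --restart=Never -- sleep 30
kubectl describe pod image-pull-test   # inspect Events for InvalidImageName / ErrImagePull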
For reference: my CircleCI config.yml contains the following:
- run: echo ${GOOGLE_AUTH} > ${HOME}/gcp-key.json
- run: docker build --rm=false -t us.gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:latest .
- run: gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
- run: gcloud --quiet config set project ${GCP_PROJECT}
- run: gcloud docker -- push us.gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:latest
Could anyone please guide me?
I am very lost on the steps with gcloud versus docker. I have some Gradle code that built a docker image, and I see it in my images like so:
(base) Deans-MacBook-Pro:stockstuff-all dean$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/prod-stock-bot/stockstuff latest b041e2925ee5 27 minutes ago 254MB
I am unclear if I need to run docker push or not, or if I can go straight to gcloud run deploy, so I try a docker push like so:
(base) Deans-MacBook-Pro:stockstuff-all dean$ docker push gcr.io/prod-stockstuff-bot/stockstuff
Using default tag: latest
The push refers to repository [gcr.io/prod-stockstuff-bot/stockstuff]
An image does not exist locally with the tag: gcr.io/prod-stockstuff-bot/stockstuff
I have no idea why it says the image doesn't exist locally when I keep listing the image. I move on to just trying gcloud run deploy like so:
(base) Deans-MacBook-Pro:stockstuff-all dean$ gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stockstuff-bot/stockstuff --platform managed
Deploying container to Cloud Run service [stockstuff] in project [prod-stock-bot] region [us-west1]
X Deploying... Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
X Creating Revision... Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
. Routing traffic...
Deployment failed
ERROR: (gcloud.run.deploy) Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
I am doing this all as a playground project and can't seem to even get a cloud run deploy up and running.
I tried the spring example but that didn't even create a docker image, and it failed anyway with:
ERROR: (gcloud.run.deploy) Missing required argument [--image]: Requires a container image to deploy (e.g. `gcr.io/cloudrun/hello:latest`) if no build source is provided.
This error occurs when an image is not tagged locally/correctly. Here are steps you can try on your side.
Create the image locally with the name stockstuff (do not prefix it with gcr.io and the project name while creating).
Tag the image with the GCR repo details and push it (push step shown below):
$ docker tag stockstuff:latest gcr.io/prod-stockstuff-bot/stockstuff:latest
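Push the tagged image so it actually exists in GCR; this step is implied above but easy to miss:
$ docker push gcr.io/prod-stockstuff-bot/stockstuff:latest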
Check if your image is available in GCR (you must see your image here before deploying to Cloud Run).
$ gcloud container images list --repository gcr.io/prod-stockstuff-bot
If you can see your image in the list, you can then try to deploy to Cloud Run with the command below (same as yours):
gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stockstuff-bot/stockstuff --platform managed
There are 3 contexts that you need to be aware of:
Your local station, with your own docker.
The cloud-based Google Container Registry: https://console.cloud.google.com/gcr/
The Cloud Run product from GCP
So the steps would be:
Build your container either locally or using Cloud Build
Push the container to the GCR registry,
if you built locally
docker tag busybox gcr.io/my-project/busybox
docker push gcr.io/my-project/busybox
Deploy the container from Google Container Registry to Cloud Run, as sketched below.
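Completing the third step with the same busybox placeholder (the service name and region here are assumptions; adjust them to your project):
gcloud run deploy busybox-service --image gcr.io/my-project/busybox --region us-west1 --platform managed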
Note that gcr.io/prod-stockstuff-bot/stockstuff is not the image you see when you list images on the local system; the local image is gcr.io/prod-stock-bot/stockstuff. You can create a new image by tagging the local one as gcr.io/prod-stock-bot/stockstuff, push it, and re-run the gcloud run command with that name.
For context, I am using Flask (Python).
I solved this by:
updating gcloud-sdk to the latest version:
gcloud components update
adding a .dockerignore (I'm guessing because of the Python cache):
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
exposing the port via the $PORT env variable:
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app
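For reference, a minimal Dockerfile sketch around that CMD line (the python:3.9-slim base image, requirements.txt, and an app.py exposing a Flask object named app are assumptions, not from the original setup):
FROM python:3.9-slim
WORKDIR /app
# install dependencies first so they cache separately from the app code
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Cloud Run injects $PORT at runtime, so bind gunicorn to it instead of a fixed port
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app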
I am trying to pull a docker container from our private GCP container registry on a regular VM instance (e.g. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image)
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the VM instance, and it works perfectly.
I have also created VM instances and enabled the option "Deploy a container image to this VM instance", providing the container address (example.io/docker-dev/name:v01), and that also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull docker images from my private container registry on an Ubuntu or Debian VM, even though docker works very well pulling images from other repositories (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set the variables for your platform:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then extract the binary and install it:
tar xzf ./docker-credential-gcr_linux_amd64-2.0.0.tar.gz && sudo mv docker-credential-gcr /usr/bin/docker-credential-gcloud && sudo chmod +x /usr/bin/docker-credential-gcloud
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
VM instances, including those in Google Kubernetes Engine clusters,
must have the correct storage access scopes configured to push or pull
images. By default, VMs can pull images when Container Registry is in
the same project.
If you run gcloud auth configure-docker, the auth information is saved under your personal directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under the root directory and doesn't find anything there.
You should run both commands consistently: either both with sudo or both without.
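A sketch of the two consistent variants (the image path is the placeholder from the question; the config locations are Docker's defaults):
# as your user: credentials land in ~/.docker/config.json
gcloud auth configure-docker
docker pull example.io/docker-dev/name:v01
# or entirely as root: credentials land in /root/.docker/config.json
sudo gcloud auth configure-docker
sudo docker pull example.io/docker-dev/name:v01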
I've discovered a flow that works through GCP console but not through the gcloud CLI.
Minimal Repro
The following bash snippet creates a fresh GCP project and attempts to push an image to gcr.io, but fails with "access denied" even though the user is project owner:
gcloud auth login
PROJECT_ID="example-project-20181120"
gcloud projects create "$PROJECT_ID" --set-as-default
gcloud services enable containerregistry.googleapis.com
gcloud auth configure-docker --quiet
mkdir ~/docker-source && cd ~/docker-source
git clone https://github.com/mtlynch/docker-flask-upload-demo.git .
LOCAL_IMAGE_NAME="flask-demo-app"
GCR_IMAGE_PATH="gcr.io/${PROJECT_ID}/flask-demo-app"
docker build --tag "$LOCAL_IMAGE_NAME" .
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH"
docker push "$GCR_IMAGE_PATH"
Result
The push refers to repository [gcr.io/example-project-20181120/flask-demo-app]
02205dbcdc63: Preparing
06ade19a43a0: Preparing
38d9ac54a7b9: Preparing
f83363c693c0: Preparing
b0d071df1063: Preparing
90d1009ce6fe: Waiting
denied: Token exchange failed for project 'example-project-20181120'. Access denied.
The system is Ubuntu 16.04 with the latest version of gcloud 225.0.0, as of this writing. The account I auth'ed with has role roles/owner.
Inconsistency with GCP Console
I notice that if I follow the same flow through GCP Console, I can docker push successfully:
Create a new GCP project via GCP Console
Create a service account with roles/owner via GCP Console
Download JSON key for service account
Enable container registry API via GCP Console
gcloud auth activate-service-account --key-file key.json
gcloud config set project $PROJECT_ID
gcloud auth configure-docker --quiet
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH" && docker push "$GCR_IMAGE_PATH"
Result: Works as expected. Successfully pushes docker image to gcr.io.
Other attempts
I also tried using gcloud auth login as my @gmail.com account, then using that account to create a service account with gcloud, but that gets the same denied error:
SERVICE_ACCOUNT_NAME=test-service-account
gcloud iam service-accounts create "$SERVICE_ACCOUNT_NAME"
KEY_FILE="${HOME}/key.json"
gcloud iam service-accounts keys create "$KEY_FILE" \
--iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member "serviceAccount:${SERVICE_ACCOUNT_NAME}#${PROJECT_ID}.iam.gserviceaccount.com" \
--role roles/owner
gcloud auth activate-service-account --key-file="${HOME}/key.json"
docker push "$GCR_IMAGE_PATH"
Result: denied: Token exchange failed for project 'example-project-20181120'. Access denied.
I tried to reproduce the same error using the bash snippet you provided; however, it successfully built the 'flask-demo-app' container registry image for me. I used the steps below to try to reproduce the issue:
Step 1: Used an account which has roles/owner and roles/editor
Step 2: Created a bash script from your snippet
Step 3: Added gcloud auth activate-service-account --key-file skey.json to the script to authenticate the account
Step 4: Ran the bash script
Result: It created the 'flask-demo-app' container registry image
This leads me to believe that there might be an issue with your environment which is causing this error for you. To troubleshoot this, you could try running your code on a different machine, a different network, or even in Cloud Shell.
In my case, project IAM permissions were the issue. Make sure the proper permissions are granted and that the Cloud Resource / Container Registry APIs are enabled.
GCP Access Control
So I have my docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this op
eration, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.goo
gle.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I can list my images if I define "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io", so is it therefore not able to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a method to change this (either move the image to gcr.io or move the machine to eu.gcr.io). However, I'm not sure if this is the issue.
Maybe it is an authentication issue with docker?
A VM basically cannot be "on .gcr.io"; it can run in a non-European region/zone, but that shouldn't be a problem.
From the GCP access control point of view, the registry is just a bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
check if the VM has the scope to read from devstorage:
serviceAccounts:
- email: ...-compute#developer.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/devstorage.read_only
- ...
This scope should be in place to read from registry:
https://www.googleapis.com/auth/devstorage.read_only
If you don't have such a scope on the VM but have gcloud configured there, you can use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
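Alternatively, the missing scope can be added to the VM itself; a sketch with placeholder instance name and zone (the instance must be stopped first, and depending on your gcloud version you may also need to pass --service-account explicitly):
gcloud compute instances stop my-instance --zone us-central1-a
gcloud compute instances set-service-account my-instance --zone us-central1-a --scopes storage-ro
gcloud compute instances start my-instance --zone us-central1-a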
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command which needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the gcloud docker -- pull ... command to get the image from the repository for use within my VM.
After you create a Linux VM on GCP and SSH to it, you have to install the Google Cloud SDK, either using the install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After installing the Google Cloud SDK, you have to run gcloud init to initialize the SDK: just open a terminal, type gcloud init, and configure your profile. After that you have to install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries which you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper. To use gcloud as the credential helper, run the command:
gcloud auth configure-docker
After that you can pull or push images on your registry using the gcloud command with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
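Once the credential helper is configured, plain docker also works with the same example path:
docker push gcr.io/google-containers/example-image:latest
docker pull gcr.io/google-containers/example-image:latest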
I am trying to push a Docker image to Google Container Registry from a CircleCI build, as per their instructions. However, pushing to GCR fails due to an apparent authentication error:
Using 'push eu.gcr.io/realtimemusic-147914/realtimemusic-test/realtimemusic-test' for DOCKER_ARGS.
The push refers to a repository [eu.gcr.io/realtimemusic-147914/realtimemusic-test/realtimemusic-test] (len: 1)
Post https://eu.gcr.io/v2/realtimemusic-147914/realtimemusic-test/realtimemusic-test/blobs/uploads/: token auth attempt for registry: https://eu.gcr.io/v2/token?account=oauth2accesstoken&scope=repository%3Arealtimemusic-147914%2Frealtimemusic-test%2Frealtimemusic-test%3Apush%2Cpull&service=eu.gcr.io request failed with status: 403 Forbidden
Prior to pushing the Docker image, I authenticated the service account against Google Cloud:
echo $GCLOUD_KEY | base64 --decode > ${HOME}/client-secret.json
gcloud auth activate-service-account --key-file ${HOME}/client-secret.json
gcloud config set project $GCLOUD_PROJECT_ID
Then I build the image and push it to GCR:
docker build -t $EXTERNAL_REGISTRY_ENDPOINT/realtimemusic-test -f docker/test/Dockerfile .
gcloud docker push -- $EXTERNAL_REGISTRY_ENDPOINT/realtimemusic-test
What am I doing wrong here?
Have you tried using the _json_key method for authenticating with Docker?
https://cloud.google.com/container-registry/docs/advanced-authentication
After that, please use naked 'docker' (without 'gcloud').
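A sketch of that method, reusing the client-secret.json key file from the question (_json_key is the documented username for key-file logins):
docker login -u _json_key --password-stdin https://eu.gcr.io < ${HOME}/client-secret.json
docker push eu.gcr.io/realtimemusic-147914/realtimemusic-test/realtimemusic-test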
If you are pushing a docker image using the Google Cloud SDK, you can use temporary authorization with the following command:
gcloud docker --authorize-only
The above command gives you temporary authorization for pushing and pulling images using docker.
You can refer to this link for details: gcloud docker.
Hope it helps to solve your issue.
After many retries... I solved it using an access token:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://[HOSTNAME]
The service account requires permission to write to the Cloud Storage bucket containing the container registry. Granting the service account either the project editor role or write access to the bucket (via ACL) solves the issue. The latter should be preferable since the account doesn't receive wider permissions than it needs.
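A sketch of the bucket-level grant (the service account email is a placeholder, and the bucket name follows GCR's artifacts.<project-id>.appspot.com convention; objectAdmin is one reasonable choice of write-level role):
gsutil iam ch serviceAccount:pusher@my-project.iam.gserviceaccount.com:objectAdmin gs://artifacts.my-project.appspot.com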