Access Docker Container from project registry - docker

So I have my Docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use Docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I can list my images if I define "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io" and is therefore not able to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a method to change this (either move the image to gcr.io or move the machine to eu.gcr.io). However, I'm not sure if this is the issue.
Maybe it is an authentication issue with docker?

The VM basically cannot be "on" ".gcr.io"; it can run in a non-European region/zone, but that shouldn't be a problem.
From a GCP access-control point of view, the registry is just a Cloud Storage bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
and check whether the VM has the scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/devstorage.read_only
- ...
This scope should be in place to read from the registry:
https://www.googleapis.com/auth/devstorage.read_only
If you don't have such a scope on the VM but do have gcloud configured there, you can use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
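For example, you can check the scopes and, if needed, reassign them like this (a sketch; the instance name and zone are placeholders, and the VM must be stopped before its scopes can be changed):
gcloud compute instances describe my-instance --zone europe-west1-b --format='value(serviceAccounts[].scopes)'
gcloud compute instances stop my-instance --zone europe-west1-b
gcloud compute instances set-service-account my-instance --zone europe-west1-b --scopes storage-ro
gcloud compute instances start my-instance --zone europe-west1-b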

The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command which needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the 'gcloud docker -- pull ...' command to get the image from the repository for use within my VM.
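With the image path from the question, that is (run both commands as the same user, so they talk to the same Docker credentials):
gcloud docker -- pull eu.gcr.io/my-project-name/example001
docker run eu.gcr.io/my-project-name/example001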

After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either with the install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After installing the SDK, run gcloud init in a terminal to initialize it and configure your profile. After that you have to install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries which you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper by running the command:
gcloud auth configure-docker
After that you can pull or push images to your registry using gcloud together with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
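You can verify that the credential helper registration took effect by inspecting Docker's config file (a quick check; the exact host list varies):
cat ~/.docker/config.json
It should contain a credHelpers section mapping gcr.io, eu.gcr.io, us.gcr.io, etc. to gcloud.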

Related

Google Cloud Composer KubernetesPodOperator InvalidImage error

I am trying to run a docker image from private GCR using KubernetesPodOperator in Cloud Composer, but getting the following error:
ERROR: Pod launching failed : Pod took too long to start
I have tried the following till now:
At first I tried increasing the "startup_timeout_seconds" but it didn't help.
Looking at the Composer-created GKE cluster logs gave me the following error:
Failed to apply default image tag "docker pull us.gcr.io/my-proj-name/myimage-name:latest": couldn't parse image reference "docker pull us.gcr.io/my-proj-name/myimage-name:latest": invalid reference format: InvalidImageName
I tried pulling the same docker image on my local machine from my private GCR and it worked fine, so I am not sure where the issue is.
This link https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod tells me that
"All pods in a cluster will have read access to images in this registry. The kubelet will authenticate to GCR using the instance's Google service account. The service account on the instance will have a https://www.googleapis.com/auth/devstorage.read_only, so it can pull from the project's GCR, but not push"
which means the pod should be able to pull the image from GCR. FYI, I am using a service account to provision my Composer env and it has sufficient permission to read from the GCS bucket.
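For reference, the scopes actually attached to the cluster's nodes can be checked like this (a sketch; the cluster name and zone are placeholders):
gcloud container clusters describe my-composer-cluster --zone us-central1-a --format='value(nodeConfig.oauthScopes)'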
Also, I did the following steps to add a secret:
gcloud container clusters get-credentials <cluster_name>
kubectl create secret generic gc-storage-rw-key --from-file=key.json=<path_to_serv_accnt_key>
secret_file = secret.Secret(
    deploy_type='volume',
    deploy_target='/tmp/secrets/google',
    secret='gc-storage-rw-key',
    key='<path of serv acct key file>.json')
Refer to it as secrets=[secret_file] inside the KubernetesPodOperator in the DAG.
I have added image_pull_policy='Always' in my DAG as well, but it is not working...
For reference: my CircleCI config.yml contains the following:
- run: echo ${GOOGLE_AUTH} > ${HOME}/gcp-key.json
- run: docker build --rm=false -t us.gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:latest .
- run: gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
- run: gcloud --quiet config set project ${GCP_PROJECT}
- run: gcloud docker -- push us.gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:latest
Could anyone please guide me?

GCP: Unable to pull docker images from our GCP private container registry on ubuntu/debian VM instances

I am trying to pull a docker container from our private GCP container registry on a regular VM instance (i.e. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image)
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the VM instance and it works perfectly.
I have also created VM instances and enabled the option "Deploy a container image to this VM instance", providing the container address (example.io/docker-dev/name:v01), and that works too. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull docker images from my private container registry on an Ubuntu or Debian VM, even though docker works well pulling images from other repositories (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then extract the binary and install it:
tar xzf ./docker-credential-gcr_linux_amd64-2.0.0.tar.gz && sudo mv docker-credential-gcr /usr/bin/docker-credential-gcloud && sudo chmod +x /usr/bin/docker-credential-gcloud
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
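Combined with a pull, the whole flow on a Container-Optimized OS instance is just (image path reused from the question above):
docker-credential-gcr configure-docker
docker pull example.io/docker-dev/name:v01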
Note the default policy for compute instances:
"VM instances, including those in Google Kubernetes Engine clusters, must have the correct storage access scopes configured to push or pull images. By default, VMs can pull images when Container Registry is in the same project."
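Because the registry is backed by a Cloud Storage bucket in the project, you can sanity-check read access to it directly (a sketch; the bucket name follows GCR's artifacts.<project-id>.appspot.com convention, with a region prefix such as eu. for regional hosts, and the project ID is a placeholder):
gsutil ls gs://artifacts.my-project-name.appspot.com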
If you run gcloud auth configure-docker, the auth information is saved under your personal home directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under root's home directory and doesn't find anything there.
You should run both commands the same way: either both with sudo or both without.
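In other words, pick one of these two variants (a sketch; image path reused from the question):
sudo gcloud auth configure-docker
sudo docker pull example.io/docker-dev/name:v01
or, if your user is in the docker group:
gcloud auth configure-docker
docker pull example.io/docker-dev/name:v01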

Cannot access Google Container Registry from Google Compute Engine

I'm at my wits' end trying to get Docker images from Google Container Registry onto a Google Compute Engine instance. (The images I need have been successfully uploaded to GCR.)
I've logged in using gcloud auth login and then tried gcloud docker pull -- us.gcr.io/app-999/app, which results in ERROR: (gcloud.docker) Docker is not installed..
I've tried to authenticate using oauth and pulling via a normal docker call. I see my credentials when I look at the file at .docker/config.json. Doing that, it looks like it's going to work, but ultimately ends like this:
mbname#instance-1 ~ $ docker pull -- us.gcr.io/app-999/app
Using default tag: latest
latest: Pulling from app-999/app
b7f33cc0b48e: Pulling fs layer
43a564ae36a3: Pulling fs layer
b294f0e7874b: Pulling fs layer
eb34a236f836: Waiting
error pulling image configuration: unauthorized: authentication required
which looks like progress, because at least it attempted to download something.
I've tried both of these things on my local machine as well and both methods were successful.
Am I missing something?
Thanks for your help.
P.S. I've also tried loading a container from another registry (Docker Hub) and that worked fine, but I need more than one container and want to keep expenses down.
After I contacted Google support, they informed me that there is a bug in the CoreOS gcloud alias. This bug is fixed by overwriting the alias in the shell as follows:
alias gcloud='(docker images google/cloud-sdk || docker pull google/cloud-sdk) > /dev/null;docker run -t -i --net=host -v $HOME/.config:/.config -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker google/cloud-sdk gcloud'
I've tried this and it works now.
docker should be included in the latest versions of gcloud. You can update to the latest version by running gcloud components update.
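After updating, the wrapper syntax should work (note the -- comes before pull; image path taken from the question):
gcloud components update
gcloud docker -- pull us.gcr.io/app-999/app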

Unable to push my docker to Bluemix - Login error

I created my Docker image using a makefile, and checked that it was correct. In fact, I was able to run it and even upload it to Docker Hub without problems. I then followed the steps suggested to upload the image to Bluemix, and was unable to do it. I am getting an error telling me that my credentials are incorrect, although I am sure they are not (in fact, I was able to log in on the Bluemix website using the same credentials without problems).
See below the steps I did and the error obtained; any suggestions to solve this will be welcome:
$ cf login
API endpoint: https://api.eu-gb.bluemix.net
Email> agorostidi
Password>
Authenticating...
OK
Targeted org agorostidi
Targeted space dev
API endpoint: https://api.eu-gb.bluemix.net (API version: 2.40.0)
User: andres.gorostidi@gmail.com
Org: agorostidi
Space: dev
MacBook-Pro-de-Andres:apache-docker andres$ cf ic login
Client certificates are being retrieved from IBM Containers...
Client certificates are being stored in /Users/andres/.ice/certs/...
Client certificates are being stored in /Users/andres/.ice/certs/containers-api.eu-gb.bluemix.net/504cc61c-47e2-4528-914a-3def71277eea...
OK
Client certificates were retrieved.
Deleting old configuration file...
Checking local Docker configuration...
OK
Authenticating with registry at host name registry.eu-gb.bluemix.net
OK
Your container was authenticated with the IBM Containers registry.
Your private Bluemix repository is URL: registry.eu-gb.bluemix.net/goros
You can choose from two ways to use the Docker CLI with IBM Containers:
Option 1: This option allows you to use "cf ic" for managing containers on IBM Containers while still using the Docker CLI directly to manage your local Docker host.
Use this Cloud Foundry IBM Containers plug-in without affecting the local Docker environment:
Example Usage:
cf ic ps
cf ic images
Option 2: Use the Docker CLI directly. In this shell, override the local Docker environment to connect to IBM Containers by setting these variables. Copy and paste the following commands:
Note: Only Docker commands followed by (Docker) are supported with this option.
export DOCKER_HOST=tcp://containers-api.eu-gb.bluemix.net:8443
export DOCKER_CERT_PATH=/Users/andres/.ice/certs/containers-api.eu-gb.bluemix.net/504cc61c-47e2-4528-914a-3def71277eea
export DOCKER_TLS_VERIFY=1
Example Usage:
docker ps
docker images
MacBook-Pro-de-Andres:apache-docker andres$ docker push registry.ng.bluemix.net/eci_test/chargeback:latest
The push refers to a repository [registry.ng.bluemix.net/eci_test/chargeback] (len: 1)
Sending image list
Please login prior to push:
Username: agorostidi
Password:
Email: andres.gorostidi@gmail.com
Error response from daemon: Wrong login/password, please try again
You logged in to the Bluemix London region and are trying to push an image to the Bluemix US South region; that's why the docker push command is asking for your credentials again.
If you want to push your images to the Bluemix US South region you have to login to that region first.
Please point your API to the Bluemix US South region with the following command:
$ cf api https://api.ng.bluemix.net
Then proceed again with the commands you ran before, i.e.:
$ cf login
$ cf ic login
$ docker push registry.ng.bluemix.net/eci_test/chargeback:latest
Otherwise, if you want to push your image to the Bluemix London region, you have to re-tag the image to match the London registry:
$ docker tag chargeback:latest registry.eu-gb.bluemix.net/eci_test/chargeback:latest
Then you can run the docker push command specifying the newly tagged image.
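That is:
$ docker push registry.eu-gb.bluemix.net/eci_test/chargeback:latest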

Creating first image on bluemix docker

I'm trying to create a simple ubuntu image on docker within Bluemix.
I have the CLI set up (at the latest version) but keep getting a login prompt when trying to push the image.
My dockerfile is trivial:
FROM docker.io/ubuntu:latest
MAINTAINER My Name
RUN echo "Imaged" > /tmp/image.txt
I build it with
sudo docker build -t ubuntu .
then tag it with
sudo docker tag ubuntu registry.eu-gb.bluemix.net/MYNAMESPACE/ubuntu
I login with
cf login
Then push with
[ibmcloud#analyticsadmin docker]$ sudo docker push registry.ng.bluemix.net/MYNAMESPACE/ubuntu
The push refers to a repository [registry.ng.bluemix.net/MYNAMESPACE/ubuntu] (len: 1)
Sending image list
Please login prior to push:
Username:
I'm new to Bluemix/Docker so user error is highly likely. Can you spot my error? My DOCKER* environment variables are set as appropriate for my Bluemix container service.
It seems you missed the step of logging in to the IBM Containers registry; that's why docker push is asking you for the username.
After cf login you have to run the following command as well:
$ cf ic login
This will authenticate you to the IBM Containers registry so you can push your images.
Please note that ic is a plugin you have to install for the cf command-line interface. If you have not installed it yet, please see the instructions at the following link:
https://www.ng.bluemix.net/docs/containers/container_cli_cfic.html#container_cli_cfic_install
For example, to install the plugin on a Linux system, run the following command:
$ cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x64
One typo I see in your commands: you tag your container for the UK data center (eu-gb) but then try to push it to the US South one (ng); that's why I think the second command asks you to log in.
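With the names from the question, the consistent pair would be:
$ sudo docker tag ubuntu registry.eu-gb.bluemix.net/MYNAMESPACE/ubuntu
$ sudo docker push registry.eu-gb.bluemix.net/MYNAMESPACE/ubuntu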
