Gcloud and docker confusion

I am very lost on the steps with gcloud versus docker. I have some Gradle code that built a Docker image, and I can see it in my local images like so:
(base) Deans-MacBook-Pro:stockstuff-all dean$ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED          SIZE
gcr.io/prod-stock-bot/stockstuff   latest   b041e2925ee5   27 minutes ago   254MB
I am unclear whether I need to run docker push or whether I can go straight to gcloud run deploy, so I try a docker push like so:
(base) Deans-MacBook-Pro:stockstuff-all dean$ docker push gcr.io/prod-stockstuff-bot/stockstuff
Using default tag: latest
The push refers to repository [gcr.io/prod-stockstuff-bot/stockstuff]
An image does not exist locally with the tag: gcr.io/prod-stockstuff-bot/stockstuff
I have no idea why it says the image doesn't exist locally when I keep listing the image. I move on to just trying gcloud run deploy like so
(base) Deans-MacBook-Pro:stockstuff-all dean$ gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stockstuff-bot/stockstuff --platform managed
Deploying container to Cloud Run service [stockstuff] in project [prod-stock-bot] region [us-west1]
X Deploying... Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
X Creating Revision... Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
. Routing traffic...
Deployment failed
ERROR: (gcloud.run.deploy) Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
I am doing this all as a playground project and can't even seem to get a Cloud Run deploy up and running.
I tried the Spring example, but that didn't even create a Docker image, and it failed anyway with:
ERROR: (gcloud.run.deploy) Missing required argument [--image]: Requires a container image to deploy (e.g. `gcr.io/cloudrun/hello:latest`) if no build source is provided.

This error occurs when the image is not tagged locally or is tagged incorrectly. Here are steps you can try on your side.
Create the image locally with the name stockstuff (do not prefix it with gcr.io and the project name while creating it).
Tag the image with the GCR repository details:
$ docker tag stockstuff:latest gcr.io/prod-stockstuff-bot/stockstuff:latest
Push the tagged image to GCR:
$ docker push gcr.io/prod-stockstuff-bot/stockstuff:latest
Check that your image is available in GCR (you must see your image here before deploying to Cloud Run):
$ gcloud container images list --repository gcr.io/prod-stockstuff-bot
If you can see your image in the list, you can deploy to Cloud Run with the command below (the same as yours); a consolidated sketch follows after it.
gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stockstuff-bot/stockstuff --platform managed
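Putting those steps together, a minimal end-to-end sketch (the stockstuff image name and prod-stockstuff-bot project ID are the placeholders used above; substitute your real project ID, and note that in the original question the local image was tagged under prod-stock-bot instead):
$ docker build -t stockstuff .
$ docker tag stockstuff:latest gcr.io/prod-stockstuff-bot/stockstuff:latest
$ docker push gcr.io/prod-stockstuff-bot/stockstuff:latest
$ gcloud container images list --repository gcr.io/prod-stockstuff-bot
$ gcloud run deploy stockstuff --project prod-stockstuff-bot --region us-west1 --image gcr.io/prod-stockstuff-bot/stockstuff --platform managed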

There are three contexts that you need to be aware of:
Your local workstation, with your own Docker daemon.
The cloud-based Google Container Registry: https://console.cloud.google.com/gcr/
The Cloud Run product from GCP.
So the steps would be:
Build your container either locally or using Cloud Build
Push the container to the GCR registry; if you built locally:
docker tag busybox gcr.io/my-project/busybox
docker push gcr.io/my-project/busybox
Deploy to Cloud Run the container from Google Container Registry.
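If you choose the Cloud Build route for step 1, a single command performs both the build and the push (a sketch, assuming my-project is your project ID and the Dockerfile is in the current directory):
gcloud builds submit --tag gcr.io/my-project/busybox .
After that, the deploy step is the same, pointing --image at gcr.io/my-project/busybox.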

I don't see the image gcr.io/prod-stockstuff-bot/stockstuff when you list images on the local system; what exists locally is gcr.io/prod-stock-bot/stockstuff. You can re-run the gcloud run command with the image name that actually exists, or re-tag the local image to the name you want; see the sketch below.
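A sketch of that fix, using the image name that actually exists locally according to the docker images output in the question:
$ docker push gcr.io/prod-stock-bot/stockstuff
$ gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stock-bot/stockstuff --platform managed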

For context, I am using Flask (Python).
I solved this by:
updating the gcloud SDK to the latest version:
gcloud components update
adding a .dockerignore (I'm guessing because of the Python cache):
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
and exposing the port via the $PORT environment variable:
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app
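For reference, a minimal Dockerfile along these lines (a sketch only; the base image, file layout, and the app:app module path are assumptions that must match your project):
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt gunicorn
COPY . .
# Cloud Run injects PORT at runtime, so bind gunicorn to it rather than to a hard-coded port
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app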

Related

GCP: Unable to pull docker images from our GCP private container registry on ubuntu/debian VM instances

I am trying to pull a docker container from our private GCP container registry on a regular VM instance (i.e. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image)
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the VM instance and it works perfectly.
I have also created VM instances and enabled the option "Deploy a container image to this VM instance", providing the container address (example.io/docker-dev/name:v01), and that also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull Docker images from my private container registry on an Ubuntu or Debian VM, even though Docker works fine pulling images from other registries (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then extract the docker-credential-gcr binary from the archive and install it:
tar xzf "./docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" --to-stdout docker-credential-gcr | sudo tee /usr/bin/docker-credential-gcloud >/dev/null && sudo chmod +x /usr/bin/docker-credential-gcloud
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
VM instances, including those in Google Kubernetes Engine clusters, must have the correct storage access scopes configured to push or pull images. By default, VMs can pull images when Container Registry is in the same project.
If you run gcloud auth configure-docker, the auth information is saved under your personal home directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under root's home directory and doesn't find anything there.
You should run both commands consistently, either both with sudo or both without.
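For example, either pairing below keeps the credential helper and the pull in the same user context (a sketch using the image name from the question):
gcloud auth configure-docker
docker pull example.io/docker-dev/name:v01
or:
sudo gcloud auth configure-docker
sudo docker pull example.io/docker-dev/name:v01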

Access Docker Container from project registry

So I have my Docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use Docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I can list my images if I define "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io" and is therefore not able to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a method to change this (either move the image to gcr.io, or move the machine to eu.gcr.io). However, I'm not sure if this is the issue.
Maybe it is an authentication issue with docker?
A VM basically cannot be "on .gcr.io"; it can run in a non-European region/zone, but that shouldn't be a problem.
From the GCP access-control point of view, the registry is just a Cloud Storage bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
check if the VM has the scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - ...
This scope should be in place to read from registry:
https://www.googleapis.com/auth/devstorage.read_only
If you don't have such a scope on the VM but do have gcloud configured there, you can use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
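A quick way to check just the scopes is to filter the describe output (a sketch; the instance name and zone are placeholders):
gcloud compute instances describe my-instance --zone us-west1-a --format="flattened(serviceAccounts[].scopes)"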
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command that needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the gcloud docker -- pull ... command to get the image from the repository for use within my VM.
After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either using the installation scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After finishing the Cloud SDK install, run gcloud init to initialize the SDK: just open a terminal, type gcloud init, and configure your profile. After that you have to install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries which you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper by running the command:
gcloud auth configure-docker
After that you can pull or push images on your registry using the gcloud command with the docker as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
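Note that the gcloud docker wrapper is deprecated in newer SDK versions; since gcloud auth configure-docker has already been run above, plain docker commands should work against the registry as well, for example:
docker pull gcr.io/google-containers/example-image:latest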

kubectl run-ing a tarball'd image

I am trying to run a Nix-built Docker image in tarball form. With docker, docker load -i <path> followed by a docker run works fine. Now I've uploaded the tarball to Artifactory and am trying to run the image on K8s with something like:
$ kubectl run foo-service --image=<internal Artifactory>/foo-service/foo-service-latest.tar.gz
However all I see is:
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
foo-service   1         1         1            0           2h
Is it possible to load an image from a (remote) tarball in K8s? If yes, what is the command to do so?
There is no way to do that directly in Kubernetes.
You can docker load the tarball and then docker push it to a registry (you can host a private registry in Kubernetes or use a public one), and after that kubectl run it; see the sketch below.
Minikube also has a registry addon for local development.
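A sketch of that workflow (the registry host and image name are placeholders; check the docker load output for the name the image actually loads under):
docker load -i foo-service-latest.tar.gz
docker tag foo-service:latest my-registry.example.com/foo-service:latest
docker push my-registry.example.com/foo-service:latest
kubectl run foo-service --image=my-registry.example.com/foo-service:latest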

Jenkins docker setup

I am using Jenkins to build my project, but now my client wants the builds to run inside a Docker image. I have installed Docker on the server and it is running on 172.0.0.1:PORT. I have installed the Docker plugin and assigned this TCP URL as the Docker URL. I have also created an image with the name jenkins-1.
In the project configuration I use the "Build with Docker Container" build environment and provide the image name, then under Build I add an Execute Shell step and run the build.
But it gives the error:
Pull Docker image jenkins-1 from repository ...
$ docker pull jenkins-1
Failed to pull Docker image jenkins-1
FATAL: Failed to pull Docker image jenkins-1
java.io.IOException: Failed to pull Docker image jenkins-1
    at com.cloudbees.jenkins.plugins.docker_build_env.PullDockerImageSelector.prepareDockerImage(PullDockerImageSelector.java:34)
    at com.cloudbees.jenkins.plugins.docker_build_env.DockerBuildWrapper.setUp(DockerBuildWrapper.java:169)
    at hudson.model.Build$BuildExecution.doRun(Build.java:156)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
    at hudson.model.Run.execute(Run.java:1720)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:98)
    at hudson.model.Executor.run(Executor.java:404)
Finished: FAILURE
I have just run into the same issue. There is a 'Verbose' check-box in the build environment configuration, after selecting the 'Advanced...' link, which expands on the error details:
(screenshot: CloudBees plug-in Verbose option)
In my case I had run out of space downloading the build Docker images. Expanding the EC2 volume resolved the issue.
But there is ongoing trouble with space, as Docker does not auto-clean images, so I ended up adding a manual cleanup step to the build:
docker volume ls -qf dangling=true | xargs -r docker volume rm
Complete build script:
https://bitbucket.org/vk-smith/dotnetcore-api/src/master/ci-build.sh?fileviewer=file-view-default
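For dangling images, as opposed to volumes, an equivalent cleanup might be:
docker images -qf dangling=true | xargs -r docker rmi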

Unable to start Docker Container in Bluemix

I have created a simple Dockerfile to deploy a WAR in Tomcat:
FROM tomcat
ADD your.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
I tested it locally and it works. I tried building the image and pushed it to Bluemix. The Docker image got pushed and I can see it in the dashboard, but while trying to start it up (by clicking on it), the vulnerability assessment kicks off and gives a warning symbol with an "Incomplete" status, and I am unable to proceed further. I need pointers on how to go about this.
Steps followed, from my local machine where I have Docker running:
# docker tag testimagetom registry.ng.bluemix.net/shabsctr/testimagetom:image_tag
# docker push registry.ng.bluemix.net/shabsctr/testimagetom:image_tag
# cf ic run --name shabstrydocker registry.ng.bluemix.net/shabsctr/testimagetom:image_tag
You can build images from a Dockerfile with the cf command; it will create the image in your Bluemix registry:
cf ic build [--tag|-t TAG_NAME] [--no-cache] [--pull] [--quiet] LOCAL_DOCKER_FILE_PATH
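For example, assuming the Dockerfile from the question is in the current directory:
cf ic build -t registry.ng.bluemix.net/shabsctr/testimagetom:image_tag .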
