How to roll back after deploying to Google Cloud Run

I started playing with Google Cloud Run and, at least on the surface, it looks like a fantastic tool. One thing I can't figure out is how to do rollback efficiently.
I deploy my service via command line
gcloud beta run deploy my-service --image my-image
and ideally I'd like to have the option to rollback to the previous revision if I find a problem with my new deployment.
Is there a way to rollback or migrate the traffic to a specific revision?

This is a coming feature on the managed platform! Be patient!
For now, simply deploy a new revision with the previous image. You can browse the images with the CLI or through the UI. Get the image with its digest and deploy it.
To list the revisions, use: gcloud beta run revisions list --filter <service name> --platform managed
To get the image of your revision: gcloud beta run revisions describe <revision name> --platform managed --region <region> --format 'value(status.imageDigest)'
Take care of env vars if you change them between versions (you can also see these in the GUI or with the CLI).
To list the environment variables of a revision: gcloud beta run revisions describe <revision name> --platform managed --region <region> --format 'default(spec.containers)'
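Put together, a rollback-by-redeploy looks roughly like this (a sketch only; my-service, the region and the revision name are placeholders, and the commands simply mirror the ones above):
# list revisions and spot the last known-good one
gcloud beta run revisions list --filter my-service --platform managed --region us-central1
# grab the exact image digest that revision ran
gcloud beta run revisions describe <revision name> --platform managed --region us-central1 --format 'value(status.imageDigest)'
# redeploy that digest, which creates a new revision pointing at the old image
gcloud beta run deploy my-service --image <image digest from the previous command> --platform managed --region us-central1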
For Cloud Run on GKE, you can update the route by using YAML. Start by extracting the route from Cloud Run:
gcloud beta run routes describe <service name> > route.yaml
Change the revision referenced in the traffic block at the end of the file:
traffic:
- percent: 100
  revisionName: <revision name>
Then perform a kubectl apply -f route.yaml
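To confirm that traffic actually moved, you can read the route back from the cluster (a sketch; it assumes kubectl is pointed at the GKE cluster where Cloud Run is installed):
kubectl get routes.serving.knative.dev <service name> -o yaml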

Related

Gcloud and docker confusion

I am very lost on the steps with gcloud versus docker. I have some Gradle code that builds a docker image, and I see it in my local images like so
(base) Deans-MacBook-Pro:stockstuff-all dean$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/prod-stock-bot/stockstuff latest b041e2925ee5 27 minutes ago 254MB
I am unclear if I need to run docker push or not, or if I can go straight to gcloud run deploy, so I try a docker push like so
(base) Deans-MacBook-Pro:stockstuff-all dean$ docker push gcr.io/prod-stockstuff-bot/stockstuff
Using default tag: latest
The push refers to repository [gcr.io/prod-stockstuff-bot/stockstuff]
An image does not exist locally with the tag: gcr.io/prod-stockstuff-bot/stockstuff
I have no idea why it says the image doesn't exist locally when I keep listing the image. I move on to just trying gcloud run deploy like so
(base) Deans-MacBook-Pro:stockstuff-all dean$ gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stockstuff-bot/stockstuff --platform managed
Deploying container to Cloud Run service [stockstuff] in project [prod-stock-bot] region [us-west1]
X Deploying... Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
X Creating Revision... Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
. Routing traffic...
Deployment failed
ERROR: (gcloud.run.deploy) Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
I am doing this all as a playground project and can't seem to even get a cloud run deploy up and running.
I tried the Spring example, but that didn't even create a docker image, and it failed anyway with
ERROR: (gcloud.run.deploy) Missing required argument [--image]: Requires a container image to deploy (e.g. `gcr.io/cloudrun/hello:latest`) if no build source is provided.
This error occurs when an image is not tagged locally/correctly. Steps you can try on your side:
Create the image locally with the name stockstuff (do not prefix it with gcr.io and the project name when building it).
Tag the image with the GCR repository details:
$ docker tag stockstuff:latest gcr.io/prod-stockstuff-bot/stockstuff:latest
Check if your image is available in GCR (you must see your image here before deploying to Cloud Run):
$ gcloud container images list --repository gcr.io/prod-stockstuff-bot
If you can see your image in the list, you can try to deploy to Cloud Run with the command below (same as yours):
gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stockstuff-bot/stockstuff --platform managed
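If you want to double-check the exact tags and digests that landed in the registry before deploying, gcloud can list them too (a sketch; same repository name as above):
gcloud container images list-tags gcr.io/prod-stockstuff-bot/stockstuff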
There are 3 contexts that you need to be aware of:
Your local workstation, with your own Docker.
The cloud-based Google Container Registry: https://console.cloud.google.com/gcr/
The Cloud Run product from GCP.
So the steps would be:
Build your container either locally or using Cloud Build
Push the container to the GCR registry. If you built locally:
docker tag busybox gcr.io/my-project/busybox
docker push gcr.io/my-project/busybox
Deploy the container from Google Container Registry to Cloud Run.
The image gcr.io/prod-stockstuff-bot/stockstuff does not appear when you list images on the local system; the local image is tagged gcr.io/prod-stock-bot/stockstuff. Either re-tag the local image to match the name you are pushing and deploying, or keep the existing tag and re-run the gcloud run command with that image, as sketched below.
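For example, the simplest way to line the names up is to keep everything under the project name the local image already carries (a sketch only; it assumes the image shown in docker images is the one you want to deploy):
# push the tag that already exists locally
docker push gcr.io/prod-stock-bot/stockstuff
# deploy that same image name, keeping the project consistent
gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stock-bot/stockstuff --platform managed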
For context, I am using Flask (Python).
I solved this by:
updating gcloud-sdk to the latest version:
gcloud components update
adding a .dockerignore, I'm guessing because of the Python cache:
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
binding gunicorn to the port from the $PORT environment variable:
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app
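A quick way to check that the container really honours $PORT before deploying is to run it locally with the variable set (a sketch; my-flask-app is just a placeholder tag, and it assumes a Dockerfile ending in the CMD above):
# build the image locally; the tag name is arbitrary
docker build -t my-flask-app .
# Cloud Run injects PORT at runtime, so emulate that here
docker run --rm -e PORT=8080 -p 8080:8080 my-flask-app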

Facing a Problem While Installing Acumos using One Click Deploy method (Kubernetes)

I have followed the below process for installing Acumos on an Ubuntu 18 server.
Open a shell session (bash recommended) on the host on which (for single AIO deployment) or from which (for peer-test deployment) you want to install Acumos, and clone the system-integration repo:
> git clone https://gerrit.acumos.org/r/system-integration
If you are deploying a single AIO instance, run the following command, selecting docker or kubernetes as the target environment. Further instructions for running the script are included at the top of the script.
> bash oneclick_deploy.sh
I have done it using k8s as below
> bash oneclick_deploy.sh k8s
Everything was running smoothly, but at the end I am facing the below issue:
as docker API is not ready
Can anyone help me with this, please?
Note: I have checked in the Kubernetes console and everything is fine. A service is created and a namespace named acumos is also created successfully.
I'm the developer of that toolset, and I'll be happy to help you through this. Note that it's actively being developed and will be evolving a lot. But there are some easy things you can do to provide more details so I can debug your situation.
First, start with a clean env:
$ bash clean.sh
Then reattempt the deployment, piping the console log to a file:
$ bash oneclick_deploy.sh k8s 2>&1 | tee deploy.log
Review that file to be sure that there's nothing sensitive in it (e.g. passwords or other private info about the deployment that you don't want to share), and if possible attach it here so I can review it. That will be the quickest way to debug.
Also you can let me know some more about your deployment context:
Did you ensure the Prerequisites:
Ubuntu Xenial (16.04), Bionic (18.04), or Centos 7 hosts
All hostnames specified in acumos-env.sh must be DNS-resolvable on all hosts (entries in /etc/hosts or in an actual DNS server)
Did you customize acumos-env.sh, or did you use the default values?
Send the output of:
$ kubectl get svc -n acumos
$ kubectl get pods -n acumos
$ kubectl describe pods -n acumos
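If the script is stuck waiting on the docker API, the logs of the affected pod are usually the quickest clue; something along these lines (the pod name is a placeholder, take it from the get pods output above):
$ kubectl logs -n acumos <pod name>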

Access Docker Container from project registry

So I have my docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
Please see the image attached. I can list my images if I define "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io", so it is therefore not able to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a method to change this (either move the image to gcr.io or move the machine to eu.gcr.io). However, I'm not sure if this is the issue.
Maybe it is an authentication issue with docker?
The VM basically cannot be on ".gcr.io"; it can run in a non-European region/zone, but that shouldn't be a problem.
From the GCP access control point of view, the registry is just a bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
check if the VM has the scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - ...
This scope should be in place to read from registry:
https://www.googleapis.com/auth/devstorage.read_only
If you don't have such a scope on the VM but do have gcloud configured there, you can use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
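Once the credential helper is in place, the pull from the question should go through unchanged (a sketch; it assumes gcloud on the VM is authenticated as an account that can read the registry):
# register gcloud as the Docker credential helper for the gcr.io hosts
gcloud auth configure-docker
# then pull the image that failed before
docker pull eu.gcr.io/my-project-name/example001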
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command that needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the 'gcloud docker -- pull ...' command to get the image from the repository for use within my VM.
After you create a Linux VM on GCP and SSH to it, you have to install the Google Cloud SDK, either using the Cloud SDK install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After installing the Google Cloud SDK, you have to run gcloud init to initialize the SDK: just open a terminal, type gcloud init, and configure your profile. After that you have to install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries which you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper. To use gcloud as the credential helper, run the command:
gcloud auth configure-docker
After that, you can pull or push images to your registry using the gcloud command with docker as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest

Using GCloud CLI with Docker image

I'm using the official Google Cloud SDK Docker image (https://hub.docker.com/r/google/cloud-sdk/) to use the gcloud CLI on my workstation, since I have some restrictions on directly installing things on this machine. One of my main issues is that whenever I SSH into my instance, the SSH key generation process is repeated. I followed the instructions listed in the info section of the docker image. The command I'm using to log in is:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gcloud compute --project "dummy" ssh --zone "asia-southeast1-a" "test"
How do I make the SSH login persist, as it would if I were using the gcloud CLI on my host machine?
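One workaround, not from the thread but a sketch of a common approach: keep the container's /root/.ssh on a named volume so the google_compute_engine key pair that gcloud generates on first use is reused on later runs (gcloud-ssh below is an arbitrary volume name):
# the gcloud-ssh named volume persists the generated SSH keys between runs
docker run --rm -ti --volumes-from gcloud-config -v gcloud-ssh:/root/.ssh google/cloud-sdk gcloud compute --project "dummy" ssh --zone "asia-southeast1-a" "test"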

Docker commit doesn't make changes to containers union file system

This probably has been asked at some point, but I can't find it anywhere. I can't seem to be able (or can't figure out how) to commit changes to a docker image without losing file changes in the container that I'm committing. Here's my use case.
I use Boot2Docker on Windows,
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8
I pull the newest version of jenkins from Docker Hub.
docker pull jenkins
I run it, rerouting its web interface port:
docker run -dt -p 8010:8080 jenkins
I go to the interface to install some plugins. I install the plugins and then restart Jenkins. I then try to commit my changes.
docker commit $(docker ps -lq) <my_user_name_on_docker_hub>/<name_of_the_new_image>
docker ps -lq returns the id of the last running container. Since I'm running only this container at the moment, I'm sure that this returns the correct id (I also checked it by doing docker ps and actually looking up the container).
Then I push the changes.
docker push <my_user_name_on_docker_hub>/<name_of_the_new_image>
The push goes through all of the revisions that already exist on Docker Hub and skips them all until it hits the last one and uploads four megabytes to the registry. And yet, when I try to run this new image, it's just the base image of Jenkins without any changes, without the plugins that I installed. As far as I understand, the changes to the union file system of the image (Jenkins plugins are installed as binaries there) should be committed. I need to have a new image with my changes on it.
What am I doing wrong?
EDIT: I created a couple of test jobs, ran them, walked around the file system using docker exec -it <container> bash, and Jenkins creates a new directory for each job under /var/jenkins_home/jobs, but when I do docker diff, it shows that only temp files have been created. And after committing, pushing, stopping the container and running a new one from the image that just got pushed, the job folders disappear together with everything else.
EDIT2: I tried creating files in other folders, and docker diff seems to be seeing the changes everywhere else except for the /var/jenkins_home/ directory.
EDIT3: this seems to be related; from the Jenkins Docker Hub page:
How to use this image
docker run -p 8080:8080 jenkins
This will store the workspace in /var/jenkins_home. All Jenkins data lives in there - including plugins and configuration. You will probably want to make that a persistent volume:
docker run --name myjenkins -p 8080:8080 -v /var/jenkins_home jenkins
The volume for the “myjenkins” named container will then be persistent.
You can also bind mount in a volume from the host:
First, ensure that /your/home is accessible by the jenkins user in the container (jenkins user - uid 102 normally - or use -u root), then:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins
I tried running the command with the -v toggle, but that didn't really make my commit any more persistent.
It was my fault for not looking at the docs for the Jenkins Docker image:
How to use this image
docker run -p 8080:8080 jenkins
This will store the workspace in /var/jenkins_home. All Jenkins data lives in there - including plugins and configuration. You will probably want to make that a persistent volume:
docker run --name myjenkins -p 8080:8080 -v /var/jenkins_home jenkins
The volume for the “myjenkins” named container will then be persistent.
https://registry.hub.docker.com/_/jenkins/
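The underlying reason: the official Jenkins image declares /var/jenkins_home as a VOLUME in its Dockerfile, so everything written there lives outside the container's union file system, which is why neither docker diff nor docker commit picks it up. A named volume keeps plugins and jobs around between containers instead; a sketch reusing the port mapping from the question (jenkins_home is an arbitrary volume name):
# data under /var/jenkins_home lands in the jenkins_home volume and survives container removal
docker run --name myjenkins -dt -p 8010:8080 -v jenkins_home:/var/jenkins_home jenkins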
