Automatically deploy new container to Google Cloud Compute Engine from Google Container Registry - docker

I have a Docker container which I push to GCR with gcloud builds submit --tag gcr.io/<project-id>/<name>. When I deploy it on a GCE instance, every deployment creates a new instance and I have to remove the old one manually. The question is: is there a way to deploy containers and force the existing GCE instances to fetch the new container? I specifically need GCE, not Google Cloud Run or anything else, because it is not an HTTP service.
I deploy the container from the Google Cloud Console using the Deploy to Cloud Run button.

I'm posting this as a Community Wiki answer for better visibility. There were already a few good solutions in the comments; in the end the OP chose to use Cloud Run.
At first I'd like to clarify a few things.
I have a docker container which I push to GCR like gcloud builds submit
gcloud builds submit is a command to build using Google Cloud Build.
Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.
In this question the OP is referring to Container Registry; however, GCP recommends using Artifact Registry, which will soon replace Container Registry.
Pushing and pulling images from Artifact Registry is explained in the Pushing and pulling images documentation. It can be done with the docker push or docker pull commands, after you have tagged the image and created an Artifact Registry repository, for example as in the sketch below.
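For illustration, a minimal sketch of that flow (the location, project, repository, image and tag names are placeholders, assuming a Docker-format Artifact Registry repository):
# one-time: create a Docker-format repository and let Docker authenticate against it
gcloud artifacts repositories create <artifactRegistryName> --repository-format=docker --location=<location>
gcloud auth configure-docker <location>-docker.pkg.dev
# tag the local image and push it
docker tag <imageName> <location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>:<tag>
docker push <location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>:<tag>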
Deploying on different GCP products
Regarding deploying on GCE, GKE and Cloud Run: those are GCP products which are quite different from each other.
GCE is IaaS: you specify the amount of resources, and you maintain the installation of all software yourself (you would need to install Docker, Kubernetes, programming libraries, etc.).
GKE is a hybrid: you still specify the amount of resources you need, but the nodes are customized to run containers. After creation you already have Docker, Kubernetes and the other software needed to run containers.
Cloud Run is a serverless GCP product where you don't need to calculate the amount of resources or install software/libraries; it's a fully managed platform.
When you deploy a container app from Artifact Registry / Container Registry, by default you create another VM (GCE and GKE) or a new service (Cloud Run).
If you would like to deploy the new app on the same VM:
On GCE, you would need to pull the image and deploy it on that VM using Docker or Kubernetes (kubeadm); see the sketch after this list.
On GKE you would need to create a new deployment using a command like
kubectl create deployment test --image=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>
and delete the old one.
On Cloud Run you can deploy an app without concerns about resources or hardware; the steps are described here. You can create revisions for specific changes in the image. Cloud Run also allows CI/CD using GitHub, Bitbucket or Cloud Source Repositories. This process is also well described in the GCP documentation - Continuous deployment.
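As an illustration of the GCE case, a rough sketch of refreshing a container in place (container and image names are placeholders; it assumes Docker is already installed on the VM):
# on the GCE VM: pull the new image, then replace the running container
docker pull <location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>:<tag>
docker stop <containerName> && docker rm <containerName>
docker run -d --name <containerName> <location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>:<tag>
If the instance was created as a container VM (Container-Optimized OS), the image can instead be swapped without logging in, with gcloud compute instances update-container <instanceName> --container-image=<newImage>.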
Possible solutions:
Write a cloudbuild.yaml file that does that for you on each CI/CD pipeline run.
Write a small application on GCE that subscribes to the Pub/Sub notifications created by Cloud Build; you can then either pull the new container or launch a new instance (a rough sketch follows this list).
Use Cloud Run with CI/CD.
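As a rough sketch of the Pub/Sub approach (subscription, image and container names are placeholders; Cloud Build publishes build messages to the cloud-builds topic, and a real implementation should also check the build status and image tag inside the message):
# one-time: subscribe to Cloud Build's notification topic
gcloud pubsub subscriptions create <subName> --topic=cloud-builds
# on the GCE VM: poll for new builds and refresh the container
while true; do
  msg=$(gcloud pubsub subscriptions pull <subName> --auto-ack --limit=1 --format="value(message.data)")
  if [ -n "$msg" ]; then
    docker pull gcr.io/<project-id>/<name>:latest
    docker stop <containerName> && docker rm <containerName>
    docker run -d --name <containerName> gcr.io/<project-id>/<name>:latest
  fi
  sleep 30
done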
Based on one of the OP's comments, the chosen solution was to use Cloud Run with CI/CD.

Related

Deploying Multiple Microservices in different repositories to a single VM using bitbucket pipelines and docker-compose

I have a total of 8 Node.js services in 8 different repositories on Bitbucket. The services share some common code which is kept in another repository called Brain. I want to create Bitbucket pipelines in all of these repositories so that I can do the following things:
Build the Docker image for each service and store it in Google Container Registry
Use ssh-run or a similar runner to SSH into my GCE VM and run docker-compose pull and docker-compose up to deploy the latest versions (roughly as in the sketch after this list).
Perform a zero-downtime update, i.e. keep the old containers running until the new containers are ready.
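For context, the deploy step in each pipeline usually boils down to something like the following sketch (VM address, path and SSH key handling are placeholders; note that a plain docker-compose up -d by itself does not guarantee zero downtime):
# run from the pipeline after the new images are pushed to GCR
ssh <user>@<vm-ip> "cd /opt/<app> && docker-compose pull && docker-compose up -d --remove-orphans"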
What would be the best way of doing this? Currently I'm facing the following problems:
When I push changes to the Brain repository, I need to build images for all the different services. Most of the time I'm pushing to both the Brain repository and some other service repository, so multiple images are being built.
When I push changes to the Brain repository, as the different service images are built, all of them try to deploy using ssh-run. I don't know if this is sustainable; it may crash my VM.
Any suggestions would be appreciated. Thanks in advance!

Jenkins CI/CD deployment to AWS EKS without Docker registry

We are trying to set up a development CI/CD pipeline with Jenkins that builds the Docker images and deploys them directly to an AWS EKS cluster. Is this even possible?
Our existing system:
Jenkins as CI picks up the code from GitLab and builds the Docker image
After the build, Jenkins pushes the image to JFrog Artifactory (Professional)
We use Harness for CD, which picks the image from Artifactory and deploys it to AWS
Here, Artifactory and Harness incur costs for us, and we don't want to use them for development builds. So we have set up a Docker registry with Sonatype Nexus3 OSS (open source version).
I would like to know about two options here:
Can I use Jenkins to build the Docker image, push it to the Nexus Docker registry, and use Jenkins itself for CD to deploy it to AWS EKS?
Can I build Docker images with Jenkins and deploy them directly to AWS EKS without even having to store them in a Docker registry?
Any suggestions and help are highly appreciated!
The first option is much better, because one day you may need to roll back a Docker image on Kubernetes (even in a development environment).
Alternatively, you can use AWS ECR; it's easier to use with EKS, and I think ECR is cheaper than the operational cost of running Nexus.
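As an illustration of the first option, a minimal sketch of the Jenkins shell steps (registry host, cluster and resource names are placeholders, assuming docker login to Nexus and AWS credentials are already configured on the agent):
# build and push the image to the Nexus Docker registry
docker build -t <nexus-host>:<port>/<imageName>:<buildTag> .
docker push <nexus-host>:<port>/<imageName>:<buildTag>
# point kubectl at the EKS cluster and roll the deployment to the new tag
aws eks update-kubeconfig --name <clusterName> --region <region>
kubectl set image deployment/<deploymentName> <containerName>=<nexus-host>:<port>/<imageName>:<buildTag>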
You may be happy to know that Harness has created a free software version of its CD service, called Harness Continuous Delivery Community Edition, which should work nicely for your development builds.

How to manage my application in a container and deploy with no downtime on gcloud

I have a monolithic application that I am hosting on Google Cloud.
I am using Cloud Build, which builds my Docker image when I push to my repository.
Other than using Kubernetes, what options do I have to push my latest Docker image to my web instances in a rolling update without bringing my website down?
I can't seem to find any documentation other than Kubernetes-related material.
I believe I should be building an instance template that has my latest Docker image, but I'm not sure how to make this happen in an automated fashion.
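One non-Kubernetes option is a managed instance group of container VMs plus a rolling update; a rough sketch, assuming the group already exists and the names below are placeholders:
# create a new instance template that points at the latest image
gcloud compute instance-templates create-with-container <templateName>-v2 --container-image=gcr.io/<project-id>/<name>:<newTag>
# roll the managed instance group onto the new template without downtime
gcloud compute instance-groups managed rolling-action start-update <migName> --version=template=<templateName>-v2 --zone=<zone>
A Cloud Build step can run these same commands after the image is pushed, which automates the rollout.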

Using docker the way Openshift does?

I read this: How does docker compare to openshift?
But I have a question:
This is an extremely simplified description of what devs usually do with OpenShift:
Select a "pod" (let's say a JBoss/Wildfly container)
From within OpenShift you point to your GitHub repo
OpenShift clones the repo, builds it and deploys it
OpenShift presents you with a web URL to access the app on port 8080
There's of course a lot more going on but that's as simple as it gets
Is this setup doable on my own Linux box, a VM or a cloud instance (Docker container --> clone, build and deploy from a git repo)? What would I need, without messing too much with networking, domains, etc.?
From my research I see the following tools:
Kubernetes
Dokku: I see it described as "your own Heroku"
I also keep hearing about CaaS (Containers as a Service)
I understand I would need another tool or process for the build (CI/CD) capability and for triggering builds with git push.
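At its simplest, the clone/build/deploy loop OpenShift automates can be reproduced by hand on any Linux box with Docker installed; a minimal sketch (repo URL, app name and port are placeholders, assuming the repo contains a Dockerfile):
# clone, build and run the app, exposing it on port 8080
git clone https://github.com/<user>/<repo>.git && cd <repo>
docker build -t <appName>:latest .
docker run -d --name <appName> -p 8080:8080 <appName>:latest
Tools like Dokku essentially wrap this flow and add the git push trigger and routing on top.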

How to use VSTS Build/Release to continuously integrate/deploy Docker containers to Azure Service Fabric?

I'm asking this question here because Azure's documentation says a sample for Linux containers is 'coming soon'. Does anyone have any insight into when this tutorial might be available?
Meanwhile, I'm hoping someone can shed some light on how to effectively do this.
My use case is:
a microservices-based application (say Microservices A, B, and C); each microservice should run in its own Docker container
use Visual Studio Team Services Build capability to build container images and push them to Docker Hub
use the VSTS Release capability to individually deploy the microservices (containers) to a Service Fabric cluster, since the microservices are independently developed; that is, I don't want to update the entire application in Service Fabric, but only redeploy the changed microservice/container to the respective node(s)
There could be a custom solution for this where one can add Tasks to the Build and Release in VSTS (like Docker Build and Shell Script tasks), call some scripts to update the Application Manifest and Service Manifest to kick off the updates to the Service Fabric cluster, and so on.
Whether your containers are services in the same app or in different apps, you can still deploy them independently. Only changes are applied at deployment; you don't even have to include the unchanged services in the deployment package. Look here for an example with a Service Fabric service (not in containers), but deploying containers using a service manifest is conceptually the same: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-set-up-continuous-integration
