Is there any way in Google Cloud to automatically erase a past Docker image the moment you create a new one?

I made a Google Cloud Run trigger that creates an image the moment you make a git push to the repo. This creates an image that is stored in Google Cloud, but in the long run the registry will be full of Docker images. I'm looking for a way to do it automatically; I know it can be done manually, but that is not what I want. Anything is helpful at this moment.

You can integrate your code in Cloud Source Repositories. Then Cloud Build will create the desired Docker image, which will be pushed to Google Cloud Container Registry. As soon as you commit code, the whole pipeline will run, updating the image.
Click the Cloud Run instance, choose Set up continuous deployment, enter the GitHub repo, and enter the build configuration.
Then attach the existing Cloud Build trigger to the Cloud Run service.
GCP Documentation:
https://cloud.google.com/run/docs/continuous-deployment-with-cloud-build
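As a rough illustration, the build configuration such a trigger runs could be a cloudbuild.yaml along these lines (service name, region, and image path are hypothetical; the Cloud Build service account also needs permission to deploy to Cloud Run):
# cloudbuild.yaml - build the image, push it, then roll out a new Cloud Run revision
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'my-service',
           '--image', 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA',
           '--region', 'us-central1', '--platform', 'managed']
images:
  - 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA'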

Related

Automatically deploy new container to Google Cloud Compute Engine from Google Container Registry

I have a docker container which I push to GCR like gcloud builds submit --tag gcr.io/<project-id>/<name>, and when I deploy it on a GCE instance, every deploy creates a new instance and I have to remove the old instance manually. The question is: is there a way to deploy containers and force the GCE instances to fetch new containers? I need exactly GCE, not Google Cloud Run or anything else, because it is not an HTTP service.
I deploy the container from the Google Cloud Console using the Deploy to Cloud Run button.
I'm posting this Community Wiki for better visibility. In the comment section there were already a few good solutions; however, in the end the OP wants to use Cloud Run.
At first I'd like to clarify a few things.
I have a docker container which I push to GCR like gcloud builds submit
gcloud builds submit is a command to build using Google Cloud Build.
Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.
In this question, the OP is referring to Container Registry; however, GCP recommends using Artifact Registry, which will soon replace Container Registry.
Pushing and pulling images from Artifact Registry is explained in the Pushing and pulling images documentation. It can be done with the docker push or docker pull commands; beforehand you have to tag the image and create an Artifact Registry repository.
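As a rough example, tagging and pushing a local image to Artifact Registry could look like this (repository, region, project, and image names are placeholders):
# Create a Docker-format repository in Artifact Registry
gcloud artifacts repositories create my-repo --repository-format=docker --location=us-central1
# Let the local Docker client authenticate against that registry host
gcloud auth configure-docker us-central1-docker.pkg.dev
# Tag the local image with the Artifact Registry path and push it
docker tag my-image:latest us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest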
Deploying on different GCP products
Regarding deploying on GCE, GKE and Cloud Run: those are GCP products which are quite different from each other.
GCE is IaaS, where you specify the amount of resources and you maintain the installation of all software yourself (you would need to install Docker, Kubernetes, programming libraries, etc.).
GKE is more of a hybrid: you specify the amount of resources you need, but it is customized to run containers. After creation you already have Docker, Kubernetes and the other software needed to run containers.
Cloud Run is a serverless GCP product, where you don't need to calculate the amount of needed resources or install software/libraries; it's a fully managed serverless platform.
When you want to deploy a container app from Artifact Registry / Container Registry, you are creating another VM (GCE and GKE) or a new service (Cloud Run).
If you would like to deploy the new app on the same VM:
On GCE, you would need to pull the image and deploy it on that VM using Docker or Kubernetes (Kubeadm); see the sketch after this list.
On GKE you would need to create a new deployment with a command like
kubectl create deployment test --image=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>
and delete the old one.
In Cloud Run you can deploy an app without worrying about resources or hardware; the steps are described here. You can create revisions for specific changes in the image. However, Cloud Run also allows CI/CD using GitHub, Bitbucket or Cloud Source Repositories. This process is also well described in the GCP documentation - Continuous deployment.
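For the GCE case, a minimal sketch of replacing the running container on the VM (image path and container name are placeholders):
# On the GCE VM: fetch the new image and swap the running container
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest
docker stop my-app && docker rm my-app
docker run -d --name my-app us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest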
Possible solutions:
Write a cloudbuild.yaml file that does that for you at each CI/CD pipeline run.
Write a small application on GCE that subscribes to the Pub/Sub notifications created by Cloud Build; you can then either pull the new container or launch a new instance (see the sketch after this list).
Use Cloud Run with CI/CD.
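For the Pub/Sub option, a rough sketch of the subscription side: Cloud Build publishes build status messages to the cloud-builds topic, and a small worker on the VM could consume them (the subscription name is a placeholder).
# Subscribe to Cloud Build's build status topic
gcloud pubsub subscriptions create gce-redeploy --topic=cloud-builds
# Pull messages; on a successful build, the worker would docker pull / docker run the new image
gcloud pubsub subscriptions pull gce-redeploy --auto-ack --limit=1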
Based on one of the OP's comments, the chosen solution was to use Cloud Run with CI/CD.

How to deploy the docker image built by Cloud Build to Cloud Run automatically

Currently I trigger a Cloud Build each time a pull request is completed.
The image is built correctly, but we have to manually go to Edit and Deploy New Revision and select the most recent docker image to deploy.
How can we automate this process and have a container deployed from the image automatically?
You can do it with a pretty simple GitHub Action. I have followed this article:
https://towardsdatascience.com/deploy-to-google-cloud-run-using-github-actions-590ecf957af0
Cloud Run also natively integrates with Cloud Build. You can import a GitHub repository and it sets up a GCB trigger for your repository (on the specified branch or tag filter).
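If you prefer the CLI over the console flow, a rough equivalent (assuming the GitHub repository is already connected to Cloud Build; owner, repo, and branch pattern are placeholders) would be something like:
# Create a Cloud Build trigger for pushes to the main branch
gcloud builds triggers create github \
  --repo-owner=my-github-user \
  --repo-name=my-repo \
  --branch-pattern='^main$' \
  --build-config=cloudbuild.yaml
The referenced cloudbuild.yaml would then contain the docker build/push steps and a gcloud run deploy step.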
You can use GitLab CI to automate your Cloud Run deployment.
Here is a tutorial if you want to automate your deployment with GitLab CI: Link

CI/CD integration problem when using google-cloud-build with github push as trigger for Cloud Run

I am trying to set up a CI/CD pipeline using one of my public GitHub repositories as the source for a Cloud Run (fully managed) service using Cloud Build. I am using a Dockerfile placed in the root folder of the repository, with the source configuration parameter set to /Dockerfile when setting up the Cloud Build trigger (to continuously deploy new revisions from the source repository).
When I initialize the Cloud Run instance, I face the following error:
Moreover, when I try to run my Cloud Build trigger manually, it shows the following error:
I also tried editing the continuous deployment settings, setting them to automatically detect the Dockerfile/cloudbuild.yaml. After that, the build process succeeds but the revision is not getting updated. I've also tried deploying a new revision and then running the Cloud Build trigger, but it still isn't able to pick up the latest build from Container Registry.
I am positive that my Dockerfile and application code are working properly, since I've previously submitted the build to Container Registry using Google Cloud Shell and tested it manually after deploying it to Cloud Run.
Need help to fix the issue.
UPPERCASE letters in the image path aren't allowed. Change Toxicity-Detector to toxicity-detector.
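For example, resubmitting the build with an all-lowercase image path (project ID left as a placeholder) would look something like:
gcloud builds submit --tag gcr.io/<project-id>/toxicity-detector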

Can I have an OpenShift build trigger a sync of an ImageStream from an external Docker Repository?

I have an OpenShift BuildConfig that creates a Docker image stored in a remote repository. The DeploymentConfig uses an OpenShift ImageStream to trigger a deployment when a new image is available. I've managed to push to the remote repo and sync the ImageStream from the remote repo. However, when I run a fresh build, the ImageStream does not automatically sync up with the remote repo. New deployments use the old image until that occurs. I have found that I cannot have multiple spec.output.to fields in the BuildConfig, otherwise I'd just push it to both locations. I have set the importPolicy.scheduled: true field, but by default this only imports every 15 minutes, and at that point the deployment trigger fires and the new version is deployed. I'd like to not have to wait that long. I'd prefer not to make a global config change for this. I'd be willing to push to the local docker repo and push that to the external one if there is a way to accomplish this.
How can I be sure at build time that the ImageStream is sync'd so a deployment is triggered?
How about trying to adjust the Image Policy Configuration in /etc/origin/master/master-config.yaml?
This post is also helpful for your case; refer to Automatically Update Red Hat Container Images on OpenShift 3.11 for more details.
Or, when your images are pushed by CI/CD, you can consider configuring a post trigger to execute oc import-image <imagestream name>:<tag> --confirm for a manual sync.
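A minimal sketch of such a post-push step in a CI pipeline, with placeholder registry, stream, and tag names:
# After the CI job pushes the image to the external registry...
docker push my.registry/group/name:latest
# ...force an immediate import so the ImageStream update triggers the deployment
oc import-image my-stream:latest --confirm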
I've managed to push to the remote repo and sync the ImageStream from the remote repo
In order to give a good answer, some more information on how you did this would help greatly.
One way to solve your problem is to tag the image in your docker registry with several tags and listen to one of these tags as a scheduled imageStreamTag in your ImageStream. That will make it (eventually) get deployed. You can do this in your CI pipeline.
If you want to deploy it right away after you have created a build, you need to do something like
oc tag my.registry/group/name:<myBuildTag> isName:isTagName

Trigger VSTS Build after docker hub image update

I run a Docker image for data processing on a Windows Server 2016 machine (single VM, on premises). My image is stored in an Azure Container Registry. The code does not change often. To get security updates, I would like to get a rebuild and release after the microsoft/windowsservercore base image is updated.
Is there a best-practice way to do this?
I thought about 3 ways of solving this:
Run a scheduled build every 24h: pull microsoft/windowsservercore, pull my custom image, run PowerShell to get the build dates and compare them (or use some of the history IDs). If a rebuild is needed, build the new image and tag the build. Configure the release to run only on this tag.
Run a job to check the update time of the Docker image and trigger the build with a REST request.
Put a basic Dockerfile on GitHub. Set up an automated build with a trigger on microsoft/windowsservercore and configure the webhook to call a web service, which starts the build via REST.
But I really like none of these ideas. Is there a better option?
You can use Azure Container Registry webhooks directly; the simple workflow is:
Build a Web API project that queues a build for each incoming webhook request through the Queue a Build REST API.
Create an Azure Container Registry webhook to call the Web API from step 1 (see the sketch below).
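A rough sketch of both pieces, using hypothetical names (registry, webhook endpoint, Azure DevOps organization, project, and build definition ID) and a personal access token for authentication:
# Point an ACR webhook at the service that will queue the build
az acr webhook create \
  --registry myregistry \
  --name rebuild-on-push \
  --actions push \
  --uri https://my-webhook-service.example.com/acr-hook
# Inside that service, queue the build via the Azure DevOps REST API
curl -u user:$AZURE_DEVOPS_PAT \
  -H "Content-Type: application/json" \
  -d '{"definition": {"id": 42}}' \
  "https://dev.azure.com/my-org/my-project/_apis/build/builds?api-version=6.0"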
I chose option three. Therefore I set up a GitHub repository with a one-line Dockerfile:
FROM alpine
I used the alpine image and not windowsservercore because automated builds currently do not support Windows images. I configured an automated build on Docker Hub and added a linked repository to microsoft/windowsservercore.
Then I set up an MS Flow with an HTTP request trigger to start the build, and added the Flow URL as a new webhook on the automated build.
For me these are too many moving parts that have to be configured and work together, but I know of no better way.
