How do you extract/reuse Docker BuildKit caches with CI - docker

Docker introduces RUN --mount=type=cache, which works well for me locally, but I want to be able to leverage it in CI, specifically Azure DevOps.
But I can't find a way to save and load the cache between builds. Is there an option to do this?

Please refer to this doc:
In the current design of Microsoft-hosted agents, every job is dispatched to a newly provisioned virtual machine (based on the image generated from azure-pipelines-image-generation repository templates). These virtual machines are cleaned up after the job reaches completion, not persisted and thus not reusable for subsequent jobs. The ephemeral nature of virtual machines prevents the reuse of cached Docker layers.
Therefore, the local Docker cache on the VM cannot be reused by another build when you use Microsoft-hosted agents.
Here are some alternative methods:
You could use a self-hosted agent to execute the docker build process. Multiple builds can then share the local cache.
You can also use the Cache task together with the docker save/load commands to upload the saved Docker layers to the Azure DevOps server and restore them on a future run.
Use docker pull to pull the image from a remote repository, then use --cache-from to point the build at that image. You can push the built image to the remote repository for the next build (see the sketch after these suggestions).
You could refer to this blog and this ticket for more detailed info.
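As a rough sketch of the third option, assuming a hypothetical image name myregistry.azurecr.io/myapp and BuildKit enabled on the agent, the build step could look something like this:

# Pull the previously pushed image to seed the layer cache (tolerate failure on the first run)
docker pull myregistry.azurecr.io/myapp:latest || true

# Build with BuildKit, embedding inline cache metadata so the pushed image can serve as a cache source
DOCKER_BUILDKIT=1 docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --cache-from myregistry.azurecr.io/myapp:latest \
  -t myregistry.azurecr.io/myapp:latest .

# Push the image so the next build can reuse its layers
docker push myregistry.azurecr.io/myapp:latest

Note that this approach reuses layer caches; the contents of RUN --mount=type=cache mounts live on the build host and are not exported this way.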

Related

Automatically deploy new container to Google Cloud Compute Engine from Google Container Registry

I have a docker container which I push to GCR like gcloud builds submit --tag gcr.io/<project-id>/<name>, and when I deploy it on GCE instance, every time I deploy it creates a new instance and I have to remove the old instance manually. The question is, is there a way to deploy containers and force the GCE instances to fetch new containers? I need exactly GCE, not Google Cloud Run or other because it is not an HTTP service.
I deploy the container from Google Console using the Deploy to Cloud Run button
I'm posting this Community Wiki answer for better visibility. There were already a few good solutions in the comment section; however, in the end the OP wanted to use Cloud Run.
At first I'd like to clarify a few things.
I have a docker container which I push to GCR like gcloud builds submit
gcloud builds submit is a command to build using Google Cloud Build.
Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.
In this question, the OP is referring to Container Registry; however, GCP recommends using Artifact Registry, which will soon replace Container Registry.
Pushing and pulling images from Artifact Registry is explained in the Pushing and pulling images documentation. It can be done with the docker push or docker pull commands, after you have tagged the image and created an Artifact Registry repository.
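For illustration, a minimal sketch assuming a hypothetical project my-project, repository my-repo and region us-central1:

# Tag the locally built image with the Artifact Registry path
docker tag my-image us-central1-docker.pkg.dev/my-project/my-repo/my-image:v1

# Push it to Artifact Registry
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-image:v1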
Deploying on different GCP products
Regarding deploying on GCE, GKE and Cloud Run: those are GCP products which are quite different from each other.
GCE is IaaS, where you specify the amount of resources and maintain all the software installation yourself (you would need to install Docker, Kubernetes, programming libraries, etc.).
GKE is more of a hybrid: you specify the amount of resources you need, but it is customized to run containers. After creation you already have Docker, Kubernetes and the other software needed to run containers on it.
Cloud Run is a serverless GCP product where you don't need to calculate the amount of resources needed or install software/libraries; it's a fully managed serverless platform.
When you want to deploy a container app from Artifact Registry / Container Registry, you are creating another VM (GCE and GKE) or a new service (Cloud Run).
If you would like to deploy a new app on the same VM:
On GCE, you would need to pull the image and deploy it on that VM using Docker or Kubernetes (kubeadm); a rough sketch follows below.
On GKE you would need to create a new deployment using a command like
kubectl create deployment test --image=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>
and delete the old one.
In Cloud Run you can deploy an app without worrying about resources or hardware; the steps are described here. You can create revisions for specific changes in the image. Cloud Run also allows CI/CD using GitHub, Bitbucket or Cloud Source Repositories; this process is well described in the GCP documentation - Continuous deployment.
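Going back to the GCE case, a minimal sketch of redeploying on the same VM, assuming Docker is installed there and using a hypothetical image path and container name app:

# On the GCE VM: pull the new image version
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-image:v2

# Replace the running container with one based on the new image
docker rm -f app
docker run -d --name app us-central1-docker.pkg.dev/my-project/my-repo/my-image:v2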
Possible solutions:
Write a cloudbuild.yaml file that does that for you at each CI/CD pipeline run.
Write a small application on GCE that subscribes to Pub/Sub notifications created by Cloud Build. You can then either pull the new container or launch a new instance (a rough sketch follows after this list).
Use Cloud Run with CI/CD.
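A rough sketch of the second option, assuming a hypothetical subscription name gce-redeploy-sub and the standard cloud-builds topic to which Cloud Build publishes build status messages:

# One-time setup: subscribe to Cloud Build's status notifications
gcloud pubsub subscriptions create gce-redeploy-sub --topic=cloud-builds

# On the GCE VM: poll for build notifications; when one arrives,
# pull the new image and replace the running container (as sketched above)
while true; do
  if gcloud pubsub subscriptions pull gce-redeploy-sub --auto-ack --limit=1 | grep -q .; then
    docker pull gcr.io/my-project/my-image:latest
    docker rm -f app
    docker run -d --name app gcr.io/my-project/my-image:latest
  fi
  sleep 60
done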
Based on one of the OP's comments, the chosen solution was to use Cloud Run with CI/CD.

How to create a pipeline to build and release a Docker compose, with Azure Devops using the graphical interface (GUI)

Well, how can I create a pipeline to build and release a Docker Compose setup with Azure DevOps through the graphical interface (GUI)? I am not an expert in DevOps, but I have this challenge in my work.
I would point you toward a great guide by Microsoft; it's for Java applications, but you can get what you need out of it.
Solution in general:
Open the Azure Portal. Select + Create a resource and search for Container Registry. Select Create. In the Create Container Registry dialog, enter a name for the service, select the resource group and location, and click Review + Create. Once validation succeeds, click Create.
In your CI build you need to have two tasks: one for the build/compose step and another to publish the image to your Azure Container Registry. You will use the "same task" for both.
This container registry is where you store the outputs of your builds, similar to artifacts in traditional CI builds. This is where you publish your application from during a release to on-prem or cloud.
You can read more about the parameters you need to provide and the settings in details in the guide.
P.S. Here is an example of how to dockerize an existing .NET Core application.
How do you build and release your Docker Compose setup locally?
Normally, you can copy the docker-compose CLI and Docker CLI commands that you execute locally into shell script tasks (such as Bash, PowerShell, etc.) in the pipeline you set up on Azure DevOps.
Of course, the built-in Docker Compose task and Docker task are also available.
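As a minimal sketch of what such a script task might run, assuming a hypothetical registry myregistry.azurecr.io and a docker-compose.yml at the repository root:

# Log in to the Azure Container Registry
az acr login --name myregistry

# Build all services defined in the compose file and push their images
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml push

For docker-compose push to work, the image names in docker-compose.yml need to include the registry prefix (e.g. image: myregistry.azurecr.io/web:latest).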

Docker trigger jenkins job when image is pushed

I am trying to trigger a Jenkins job (trigger builds remotely) on a Docker image build, but all I am getting on Docker Hub is the following:
HISTORY
ID Status Date & Time
7345... ! ERROR 10/12/17 10:03
Reason (I assume): Docker is not authenticated to post to the Jenkins URL.
Question: How can I trigger the job automatically when an image gets pushed to docker hub?
Pull and run the Watchtower Docker image to poll any third-party public Docker image on Docker Hub or Quay that you need (typically a base image of your own containers). Here's how. "Polling" here does not imply crudely pulling the whole image every 5 minutes or so; we monitor periodically for changes in the image, downloading only the checksum (SHA digest) most of the time (when there are no changes relative to the locally cached image).
Install the Build Token Root Plugin in your Jenkins server and set it up to receive Slack-formatted notifications secured with a token to trigger builds remotely or - safer - locally (those triggers will be coming from Watchtower container, not Slack). Here's how.
Set up Watchtower to post Slack messages to your Jenkins endpoint upon every change in the image(s) (tags) that you want. Here's how.
Optionally, if your scale is so large that you could end up overloading and bringing down the entire Docker Hub with a flood of HTTP GET requests (should the time triggers go wrong and turn into a tight loop), make sure to build in some safety checks on top of Watchtower to "watch the watchman".
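A minimal sketch of step 1, assuming the containrrr/watchtower image and its Slack-style notification settings (the exact image name and variable names may differ between Watchtower versions), with the hook URL pointing at a hypothetical Jenkins build-token endpoint:

# Run Watchtower with access to the Docker socket so it can monitor local containers,
# sending Slack-formatted notifications to the configured URL on every image update
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=slack \
  -e "WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL=https://jenkins.example.com/buildByToken/build?job=my-job&token=MY_TOKEN" \
  containrrr/watchtower --interval 300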
You can try the following plugin: https://wiki.jenkins.io/display/JENKINS/CloudBees+Docker+Hub+Notification
Which claims to do what you're looking for.
You can configure a webhook in Docker Hub which will trigger the Jenkins build.
Docker Hub webhooks targeting your Jenkins server endpoint require making periodic copies of the image to another repo that you own [see my other answer describing the Docker Hub -> Watchtower -> Jenkins integration through Slack notifications].
More details
You need to set up a cron job that periodically polls (docker pull) the source repo to pull its 'latest' tag and, if a change is detected, re-tags it as your own and pushes (docker push) to a repo you own (e.g. a "clone" of the source Docker Hub repo) where you have set up a webhook targeting your Jenkins build endpoint.
Then and only then (in a repo you own) will Jenkins plugins such as Docker Hub Notification Trigger work for you.
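A minimal sketch of such a cron job, using hypothetical image names (upstream/some-image as the source and myuser/some-image-clone as your own repo):

#!/bin/sh
# Pull the latest tag of the upstream image you want to track
docker pull upstream/some-image:latest

# Only re-tag and push when the upstream digest changed since the last run;
# the push fires the Docker Hub webhook configured on your clone repo
new_digest=$(docker inspect --format '{{index .RepoDigests 0}}' upstream/some-image:latest)
if [ "$new_digest" != "$(cat /var/tmp/last_digest 2>/dev/null)" ]; then
  docker tag upstream/some-image:latest myuser/some-image-clone:latest
  docker push myuser/some-image-clone:latest
  echo "$new_digest" > /var/tmp/last_digest
fi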
Polling for Dockerfile / release changes
As a substitute for polling the registry for image changes (which need not generate much network traffic, thanks to the local cache of Docker images), you can also poll the source Dockerfile on GitHub using wget. For instance, the Dockerfiles of the official Docker Hub images are here. If the GitHub repo makes releases, you can also get push notifications of them using the GitHub Watch > Releases Only feature. However, even for projects with CI Docker builds, the Docker images will usually become available only with a delay after code releases, even with complete automation, so image polling is more reliable.
Other projects
There was also a proposal for a 2019 Google Summer of Code project called Polling Docker Registries for Image Changes that tried to solve this problem for Jenkins users (incl. apparently Google), but sadly it was not taken up by participants.
Run a cron job that periodically queries the registry to list all tags of the Docker image of interest (here's the script); a rough sketch follows after the cautions below. Note that this script requires the substitution of the jannis/jq image with an existing image (e.g. docker run --rm -i imega/jq).
Save resulting tags list to a file, and monitor it for changes (e.g. with inotifywait).
Fire a POST request using curl to your Jenkins server's endpoint using Generic Webhook Trigger plugin.
Cautions:
for efficiency reasons this tags listing script should be limited to a few (say, 3) top pages or simple repos with a few tags,
image tag monitoring relies on tags being updated correctly (automatically) after each image change, rather than being stuck in the past, as with some Ubuntu tags (e.g. trusty-20190515 was updated a few days ago, in late November, without a change to its mid-May date tag).
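A rough sketch of this polling approach, assuming a hypothetical repository library/ubuntu, the imega/jq image for JSON parsing, and the Generic Webhook Trigger plugin's /generic-webhook-trigger/invoke endpoint with a hypothetical token:

#!/bin/sh
# List the current tags from the Docker Hub API (first page only, for efficiency)
curl -s "https://hub.docker.com/v2/repositories/library/ubuntu/tags/?page_size=25" \
  | docker run --rm -i imega/jq -r '.results[].name' | sort > /tmp/tags.new

# Compare with the previous run and trigger the Jenkins job if the tag list changed
if ! diff -q /tmp/tags.old /tmp/tags.new >/dev/null 2>&1; then
  curl -X POST "https://jenkins.example.com/generic-webhook-trigger/invoke?token=MY_TOKEN"
  mv /tmp/tags.new /tmp/tags.old
fi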

Trigger VSTS Build after docker hub image update

I run a Docker image for data processing on a Windows Server 2016 machine (single VM, on premises). My image is stored in an Azure Container Registry. The code does not change often. To get security updates, I would like a rebuild and release after the microsoft/windowsservercore image is updated.
Is there a Best Practice Way to do this?
I thought about 3 ways of solving this:
Run a scheduled build every 24h: pull microsoft/windowsservercore, pull my custom image, run PowerShell to get the build dates and compare them (or use some of the history IDs). If a rebuild is needed, build the new image and tag the build. Configure the release to run only on this tag.
Run a job to check the update time of the Docker image and trigger the build with a REST request.
Put a basic Dockerfile on GitHub. Set up an automated build with a trigger on microsoft/windowsservercore and configure the webhook to a web service, which starts the build via REST.
But I really like none of these ideas. Is there a better option?
You can use Azure Container Registry webhooks directly; the simple workflow is:
Build a Web API project that queues a build for each incoming webhook request through the Queue a Build REST API (a sketch of the REST call follows below).
Create an Azure Container Registry webhook that calls the Web API (step 1).
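For reference, a minimal sketch of the Queue a Build REST call such a Web API would make, assuming a hypothetical organization, project, build definition ID and a personal access token in $AZURE_DEVOPS_PAT:

# Queue a build via the Azure DevOps REST API (definition id 42 is hypothetical)
curl -X POST "https://dev.azure.com/my-org/my-project/_apis/build/builds?api-version=6.0" \
  -u ":$AZURE_DEVOPS_PAT" \
  -H "Content-Type: application/json" \
  -d '{"definition": {"id": 42}}'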
I chose option three. Therefore I set up a GitHub repository with a one-line Dockerfile:
FROM alpine
I used the alpine image and not windowsservercore, because automated builds currently do not support Windows images. I configured an automated build on Docker Hub and added a linked repository to microsoft/windowsservercore.
Then I set up an MS Flow with an HTTP Request trigger to start the build, and added the Flow URL as a new webhook on the automated build.
For me these are too many moving parts that have to be configured and work together, but I know of no better way.

Speed up Gitlab CI reusing docker machine for stages

GitLab CI pulls the Docker image every time, for every task (stage). This operation wastes a lot of time, and I want to optimize it if possible.
I see two places to work with:
1. explicitly configure CI stages to reuse the same Docker machine.
2. use the Docker machine from the previous commit when building the next commit (if no changes were made to the configuration file).
This kind of configuration can be specified through the pull_policy setting on the runner itself.
As Jakub highlighted in the comments to the question, on the shared runners on GitLab.com the policy is set to always, so the runner will always download a fresh copy of the image, even if an identical copy exists locally.
This is due to security reasons.
You can find confirmation of that in the docs:
This pull policy should be used if your Runner is publicly available and configured as a shared Runner in your GitLab instance. It is the only pull policy that can be considered as secure when the Runner will be used with private images.
The security implication is that if the runner checked a local image first, an unauthorized user could get access to a private Docker image by guessing its name.
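On a self-hosted runner, where you control this setting, a minimal sketch of registering a Docker-executor runner with a more cache-friendly pull policy (the GitLab URL and registration token are hypothetical, and the flag maps to the pull_policy entry in the runner's config.toml):

# Register a runner whose docker executor reuses locally cached images when present
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token MY_TOKEN \
  --executor docker \
  --docker-image docker:stable \
  --docker-pull-policy if-not-present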
