So many CICD tools use a git trigger to start the pipeline, but I want to use a new image upload to Docker registry. I have my own self-hosted Docker registry. Whenever a new image is pushed to the registry, I want to then automatically deploy that image into a workload in Kubernetes. It seems like this would be simple enough, but so far I'm coming up short.
It seems like it might be possible, but I'd like to know whether it is before I spend too much time on it.
The sequence of events would be:
A new image is pushed to the Docker registry
Either the registry calls a webhook or some external process polls the registry and discovers the image
Based on the base image name, the CICD pipeline updates a specific workload in Kubernetes to pull the new image
A couple of other conditions: the CICD tool has to be self-hosted. I want everything to be self-contained within a VPC, so there would be no traffic leaving the network containing the registry, the CD tool, and the Kubernetes cluster.
Has anyone set up something like this or has actual knowledge of how to do so?
Sounds like the perfect job for Flux.
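A rough sketch of what the Flux side could look like, assuming Flux v2 with its image automation controllers installed; the registry host, image name, namespace and semver range below are placeholders, and the API version may differ between Flux releases:

kubectl apply -f - <<'EOF'
# Scan the self-hosted registry for new tags of the application image
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  image: registry.internal.example/my-app   # placeholder: your in-VPC registry/image
  interval: 1m
---
# Select the newest tag matching a semver range
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-app
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-app
  policy:
    semver:
      range: '>=1.0.0'
EOF

An ImageUpdateAutomation object (not shown) then writes the selected tag back into the workload manifest that Flux reconciles. The registry scanning happens in-cluster, so nothing needs to leave the VPC as long as the Git repository Flux tracks is also hosted inside it.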
There are a handful of other tools you can try:
Werf
Skaffold
Faros
Jenkins X
✌️
Related
I have a Docker container which I push to GCR like gcloud builds submit --tag gcr.io/<project-id>/<name>, and when I deploy it on a GCE instance, every deployment creates a new instance and I have to remove the old instance manually. The question is: is there a way to deploy containers and force the GCE instances to fetch the new containers? I specifically need GCE, not Google Cloud Run or anything else, because it is not an HTTP service.
I deploy the container from Google Console using the Deploy to Cloud Run button
I'm posting this Community Wiki for better visibility. There were already a few good solutions in the comment section; in the end, however, the OP wanted to use Cloud Run.
At first I'd like to clarify a few things.
I have a docker container which I push to GCR like gcloud builds submit
gcloud builds submit is a command to build using Google Cloud Build.
Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.
In this question, the OP is referring to Container Registry; however, GCP recommends using Artifact Registry, which will soon replace Container Registry.
Pushing and pulling images from Artifact Registry is explained in the Pushing and pulling images documentation. It is done with the docker push and docker pull commands; beforehand you have to tag the image and create an Artifact Registry repository.
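For example, a push could look roughly like this (location, project, repository, image and version are placeholders):

# one-time: let the local Docker client authenticate to Artifact Registry
gcloud auth configure-docker <location>-docker.pkg.dev

# tag the local image with the full Artifact Registry path, then push it
docker tag <imageName> <location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>:<version>
docker push <location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>:<version>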
Deploying on different GCP products
Regarding deploying on GCE, GKE and Cloud Run, those are GCP products which are quite different from each other.
GCE is IaaS: you specify the amount of resources and you maintain the installation of all software yourself (you would need to install Docker, Kubernetes, programming libraries, etc.).
GKE is a hybrid: you still specify the amount of resources you need, but the nodes are customized to run containers. After creation, Docker, Kubernetes and the other software needed to run containers are already installed.
Cloud Run is a serverless GCP product: you don't need to calculate the amount of resources needed or install software and libraries; it's a fully managed serverless platform.
When you want to deploy a container app from Artifact Registry / Container Registry, you are creating another VM (GCE and GKE) or new service (Cloud Run).
If you would like to deploy new app on the same VM:
On GCE, you would need to pull an image and deploy it on that VM using Docker or Kubernetes (Kubeadm).
On GKE you would need to deploy a new deployment using a command like
kubectl create deployment test --image=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>
and delete the old one.
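Alternatively, instead of creating a new deployment and deleting the old one, you can usually update the existing deployment in place; a sketch (the container name is whatever kubectl create deployment chose, so check it first with kubectl describe deployment test):

# roll the existing deployment to the new image; Kubernetes replaces the pods for you
kubectl set image deployment/test <containerName>=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>:<newVersion>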
In Cloud Run you can deploy an app without worrying about resources or hardware; the steps are described here. You can create revisions for specific changes in the image. Cloud Run also supports CI/CD using GitHub, Bitbucket or Cloud Source Repositories; this process is well described in the GCP documentation - Continuous deployment.
Possible solutions:
Write a cloudbuild.yaml file that does this for you on each CI/CD pipeline run (a minimal sketch follows this list).
Write a small application on GCE that subscribes to Pub/Sub notifications created by Cloud Build. You can then either pull the new container or launch a new instance.
Use Cloud Run with CI/CD.
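A minimal sketch of the first option, assuming the target is a Cloud Run service and using the standard Cloud Build substitutions ($SHORT_SHA is populated by build triggers); the service name, image name and region are placeholders:

cat > cloudbuild.yaml <<'EOF'
steps:
  # build and push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
  # deploy the freshly pushed image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'my-service',
           '--image', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA',
           '--region', 'us-central1', '--platform', 'managed']
EOF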
Based on one of the OP's comments, the chosen solution was to use Cloud Run with CI/CD.
We have a requirement to build custom Docker images from base Docker images, with some additional packages/customization. These custom Docker images then need to be deployed into Kubernetes. We are exploring various tools to figure out how a Docker build can be done in a Kubernetes cluster (without direct access to the Docker daemon). Open source tools like kaniko provide the capability to build Docker images within a container (hence in a Kubernetes cluster).
Is it good practice to build Docker images in a Kubernetes cluster where other containers will be run/executed? Are there any obvious challenges with kaniko?
Should separate dedicated VMs be created to manage the build process?
1. Is it good practice to build Docker images in a Kubernetes cluster where other containers will be run/executed?
Are there any obvious challenges with kaniko?
Yes, it is possible to build images inside Kubernetes containers, but it could be a bit of a challenge.
Some users use it to build a workflow for CI/CD with Jenkins. In fact, it is better to use tools to simplify the process.
Kubernetes also has guidelines for preparing a container development kit; they are described here.
Another way is to use Kaniko; this tool builds container images from a Dockerfile inside a container or a Kubernetes cluster.
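A minimal sketch of a Kaniko build pod, assuming the build context lives in a Git repository and a docker-registry secret named regcred holds the push credentials; the repository URL and image destination are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/<org>/<repo>.git   # placeholder build context
        - --destination=<registry>/<image>:<tag>        # where the built image is pushed
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker                    # Kaniko reads push credentials here
  volumes:
    - name: docker-config
      secret:
        secretName: regcred
        items:
          - key: .dockerconfigjson
            path: config.json
EOF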
I found this article interesting to read on this topic.
On the other hand, there have been successful attempts to build images without a running Docker daemon. You may be interested in the Bazel project and this story of how to use it.
2. Should separate dedicated VMs be created to manage the build process?
Regarding your second question: it is not necessary to set up a dedicated VM to run the Docker image creation workflow.
Finally, it may be interesting to have a private registry in the Kubernetes cluster and use it for building purposes.
It's possible to build images on Kubernetes nodes, but I wouldn't recommend it. An application build process is memory- and compute-intensive, and frequent image builds could disrupt the services scheduled on that Kubernetes node.
Use dedicated Jenkins server(s) instead, and create pipelines according to your requirements and delivery process.
You can get started here!
Hope that helps!
I am trying to trigger a Jenkins job ("trigger builds remotely") on each Docker image build, but all I am getting on Docker Hub is the following:
HISTORY
ID Status Date & Time
7345... ! ERROR 10/12/17 10:03
Reason (I assume): Docker is not authenticated to post to the Jenkins URL.
Question: How can I trigger the job automatically when an image gets pushed to docker hub?
Pull and run Watchtower docker image to poll any third-party public Docker image on Docker Hub or Quay that you need (typically as a base image of your own containers). Here's how. "Polling" here does not imply crudely pulling the whole image every 5 minutes or so - we are monitoring periodically for changes in the image, downloading only the checksum (SHA digest) most of the time (when there are no changes in the locally cached image).
Install the Build Token Root Plugin in your Jenkins server and set it up to receive Slack-formatted notifications secured with a token, to trigger builds remotely or - safer - locally (those triggers will be coming from the Watchtower container, not Slack). Here's how.
Set up Watchtower to post Slack messages to your Jenkins endpoint upon every change in the image(s) (tags) that you want. Here's how.
Optionally, if your scale is so large that you could end up overloading and bringing down the entire Docker Hub with a flood of HTTP GET requests (should the time triggers go wrong and turn into a tight loop), make sure to build in some safety checks on top of Watchtower to "watch the watchman".
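A rough sketch of the Watchtower side of steps 1 and 3, assuming the containrrr/watchtower image and its Slack-compatible notification variables; the endpoint URL and container name are placeholders:

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_POLL_INTERVAL=300 \
  -e WATCHTOWER_MONITOR_ONLY=true \
  -e WATCHTOWER_NOTIFICATIONS=slack \
  -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL="https://jenkins.example.com/<token-protected-endpoint>" \
  containrrr/watchtower my-app-container   # name of a running container whose image is monitored

Here WATCHTOWER_MONITOR_ONLY keeps Watchtower from restarting the container itself; it only sends the notification that triggers the Jenkins build.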
You can try the following plugin: https://wiki.jenkins.io/display/JENKINS/CloudBees+Docker+Hub+Notification
Which claims to do what you're looking for.
You can configure a webhook in Docker Hub which will trigger the Jenkins build.
Docker Hub webhooks targeting your Jenkins server endpoint require making periodic copies of the image to another repo that you own [see my other answer with the Docker Hub -> Watchtower -> Jenkins integration through Slack notifications].
More details
You need to set up a cron job that periodically polls (docker pull) the source repo's latest tag; if a change is detected, re-tag the image as your own and docker push it to a repo you own (e.g. a "clone" of the source Docker Hub repo) where you have set up a webhook targeting your Jenkins build endpoint.
Then and only then (in a repo you own) will Jenkins plugins such as Docker Hub Notification Trigger work for you.
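A sketch of such a cron job, assuming a mirror repo you own on Docker Hub with the webhook already configured; image names and paths are placeholders, and the digest comparison is what keeps the traffic low:

#!/bin/sh
# poll-and-mirror.sh - run from cron, e.g. */30 * * * *
SRC=library/nginx                         # upstream image you do not own (placeholder)
DST=<your-dockerhub-user>/nginx-mirror    # repo you own, with the Jenkins webhook configured
STATE=/var/tmp/mirror.digest

docker pull "$SRC:latest" >/dev/null
NEW=$(docker image inspect --format '{{index .RepoDigests 0}}' "$SRC:latest")

# only re-tag and push when the upstream digest actually changed
if [ "$NEW" != "$(cat "$STATE" 2>/dev/null)" ]; then
  docker tag "$SRC:latest" "$DST:latest"
  docker push "$DST:latest"               # this push fires the Docker Hub webhook -> Jenkins
  echo "$NEW" > "$STATE"
fi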
Polling for Dockerfile / release changes
As a substitute for polling the registry for image changes (which need not generate much network traffic, thanks to the local cache of Docker images), you can also poll the source Dockerfile on GitHub using wget. For instance, the Dockerfiles of the official Docker Hub images are here. If the GitHub repo makes releases, you can get push notifications of them using the GitHub Watch > Releases Only feature, provided they have CI Docker builds. Docker images will usually be available with a delay after code releases, even with complete automation, so image polling is more reliable.
Other projects
There was also a proposal for a 2019 Google Summer of Code project called Polling Docker Registries for Image Changes that tried to solve this problem for Jenkins users (incl. apparently Google), but sadly it was not taken up by participants.
Run a cron job with a periodic docker search to list all tags in the docker image of interest (here's the script). Note that this script requires the substitution of the jannis/jq image with an existing image (e.g. docker run --rm -i imega/jq).
Save resulting tags list to a file, and monitor it for changes (e.g. with inotifywait).
Fire a POST request using curl to your Jenkins server's endpoint using Generic Webhook Trigger plugin.
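The last step could look roughly like this, assuming the Generic Webhook Trigger plugin's standard invoke endpoint and a token configured in the job; the URL, token and payload file are placeholders:

# fire the Jenkins job whenever the tags file has changed
curl -s -X POST "https://jenkins.example.com/generic-webhook-trigger/invoke?token=<job-token>" \
     -H "Content-Type: application/json" \
     -d @tags.json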
Cautions:
for efficiency reasons this tags listing script should be limited to a few (say, 3) top pages or simple repos with a few tags,
image tag monitoring relies on tags being updated correctly (automatically) after each image change, rather than being stuck in the past, like, say, Ubuntu tags (e.g. trusty-20190515 was updated a few days ago, in late November, without a change to its mid-May date tag).
For the use case where a Dockerfile needs to be built for each platform it runs on (a bit niche, I know), is there any way for it to push itself to the registry, i.e. calling docker push from within the Dockerfile?
Currently, this is done:
docker build -t my-registry/<username>/<image>:<version> .
docker login my-registry
docker push my-registry/<username>/<image>:<version>
Could the login and push steps be directly or cleverly built into the Dockerfile being built or with a combination of others?
Note: This would operate in a secure environment of trustworthy users (so all users being able to push to the registry is fine).
Note: This is an irregular use of Docker, not a good idea for building/packaging software in general, rather I am using Docker to share environments between developers.
I am wondering why you can't have a wrapper script (say shell or bat) around the Dockerfile to do these steps:
docker build -t my-registry/<username>/<image>:<version> .
docker login my-registry
docker push my-registry/<username>/<image>:<version>
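In other words, something as small as this sketch (registry, username, image name and version are placeholders):

#!/bin/sh
# build-and-push.sh <version>
set -e
REGISTRY=my-registry
IMAGE="$REGISTRY/<username>/<image>:${1:?usage: build-and-push.sh <version>}"

docker build -t "$IMAGE" .
docker login "$REGISTRY"
docker push "$IMAGE"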
What is so specific about the Dockerfile? I know this is not addressing the question you asked, and I might have totally misunderstood your use case, but I am just curious.
As others pointed out, this can be easily achieved using a CD systems like Drone.io/Travis/Jenkins etc.
At first this sounds to me like the widely circulated "NASA space pen" myth. But as I said earlier, you may have a perfectly valid use case that I am not aware of yet.
docker build creates an image using the recipe provided in the Dockerfile. Each line in the Dockerfile creates a new temporary filesystem image with a different checksum. The image after execution of the last line of the Dockerfile is the final image of the build process, and it is tagged with the provided name.
So it is not possible to put a docker push command inside the Dockerfile, because at that point the image creation is not finished yet.
Having a Dockerfile push its own image will never work.
To explain a bit more:
What happens when you build an image: Docker spawns a container and does everything the Dockerfile specifies. You can even see this by running docker ps during the build. If the exit status of the container is 0 (no errors), an image is created from the container.
We don't really have much control over this process other than the build parameters. It's definitely a chicken and egg problem.
Build systems should do this stuff
It's even fairly easy to do this in Jenkins. The Jenkins setup I have uses a Docker plugin and executes build commands against remote Docker hosts, so the Jenkins nodes only pull the repo, run a build, then push the image to a private repo properly tagged (and then delete the local image). You can also run unit tests in Docker by making a separate Dockerfile (it gets a bit more complicated when you need external mock services).
Builds per branch/architecture are not too hard to set up. With remote hosts doing the build work, we can raise the job limit in Jenkins fairly high, and it can run on cheap hardware.
You can also run Jenkins in Docker and make it build images in the Docker engine it runs in. I do that through TLS, or the old trick of mapping the socket file into the Jenkins container might still work.
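The socket-mapping variant is roughly this (a sketch; note that the Docker CLI still has to be available inside the Jenkins image, e.g. via a custom image built on top of jenkins/jenkins):

# run Jenkins itself in Docker and let it drive the host's Docker engine
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts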
I think I started with the CloudBees Docker Build and Publish plugin and it was fairly easy to use, but now I use a custom plugin so I have no idea of the alternatives.
GitLab CI pulls the Docker image every time, for every job (stage). This operation wastes a lot of time, and I want to optimize it if possible.
I see two places to work with:
1. Explicitly configure CI stages to reuse the same Docker machine.
2. Reuse the Docker machine from the previous commit when building the next commit (if no changes were made to the configuration file).
This kind of configuration can be specified through the pull_policy setting on the runner itself.
As Jakub highlighted in the comments on the question, on the shared runners on GitLab.com the policy is set to always, so a fresh copy of the image is always downloaded, even if the same copy exists locally.
This is for security reasons.
You can find confirmation of that in the docs:
This pull policy should be used if your Runner is publicly available and configured as a shared Runner in your GitLab instance. It is the only pull policy that can be considered as secure when the Runner will be used with private images.
The security implication is that if the runner checks for a local image first, an unauthorized user could get hold of a private Docker image by guessing its name.
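For a self-hosted, trusted runner, the relevant excerpt of the runner's config.toml would look roughly like this; if-not-present reuses a locally cached image and only pulls when it is missing:

# excerpt from the runner's config.toml
[runners.docker]
  pull_policy = "if-not-present"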