We have a requirement to build custom Docker images from base Docker images with some additional packages/customization. These custom Docker images then need to be deployed into Kubernetes. We are exploring various tools to figure out how Docker builds can be done in a Kubernetes cluster (without direct access to the Docker daemon). Open source tools like kaniko provide the capability to build Docker images within a container (and hence in a Kubernetes cluster).
Is it good practice to build Docker images in a Kubernetes cluster where other containers will be run/executed? Are there any obvious challenges with kaniko?
Should separate dedicated VMs be created to manage the build process?
1. Is it good practice to build Docker images in a Kubernetes cluster where other containers will be run/executed?
Are there any obvious challenges with kaniko?
Yes, it is possible to build images inside Kubernetes containers, but it could be a bit of a challenge.
Some users do this to build a CI/CD workflow with Jenkins. In practice, it is better to use dedicated tools to simplify the process.
Kubernetes also has rules for preparing a container development kit; they are described here.
Another way is to use kaniko; this tool builds container images from a Dockerfile inside a container or Kubernetes cluster.
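As a minimal sketch (the Git repository, destination registry and the regcred secret below are placeholder names, not taken from the question), a kaniko build can be launched as a one-off pod:

```bash
# Minimal sketch: run a kaniko build as a one-off pod (all names are placeholders)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example-org/example-repo.git    # hypothetical Git context
        - --destination=registry.example.com/team/custom-image:1.0   # hypothetical target image
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: regcred            # docker-registry secret holding push credentials
        items:
          - key: .dockerconfigjson
            path: config.json
EOF
```

Because the build runs as an ordinary pod, you can attach resource requests/limits to it so that builds don't starve other workloads on the node.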
I found this article interesting to read on this topic.
On the other hand, there was a successful attempt to build images without a Docker daemon running. You may be interested in the Bazel project and the story of how to use it.
2. Should separate dedicated VMs be created to manage the build process?
Regarding your second question: it is not necessary to set up a dedicated VM to run the Docker image creation workflow.
Finally, it may be interesting to have a private registry in the Kubernetes cluster and use it for build purposes.
It's possible to build images on Kubernetes nodes, but I wouldn't recommend it. An application build process is memory- and compute-intensive, and frequent image builds could disrupt services scheduled on that Kubernetes node.
Use dedicated Jenkins server(s) instead, and create pipelines according to your requirements and delivery process.
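As a rough sketch of the steps such a Jenkins pipeline might run (the image, registry and deployment names below are placeholders):

```bash
# Hypothetical build-and-deploy steps executed by a Jenkins agent on the dedicated build server
IMAGE="registry.example.com/team/custom-image:${BUILD_NUMBER:-1}"

docker build -t "$IMAGE" .     # build the customized image from your Dockerfile
docker push "$IMAGE"           # publish it to your registry

# roll the new tag out to the cluster (deployment/container names are placeholders)
kubectl set image deployment/custom-app app="$IMAGE"
kubectl rollout status deployment/custom-app
```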
You can get started here!
Hope that helps!
I was following the [document][1] to run the Azure DevOps build agents in containers. I have created the VSTS Docker image by following the MS docs, but after that the Microsoft document is not clear on some parts where I am stuck.
Is Microsoft providing any official image for VSTS based on Linux?
Is it possible to create a RedHat-based VSTS custom image rather than the default Ubuntu image?
Also, we need to run these containers in Azure Container Instances. What are the steps to achieve that?
If we run the VSTS agents in Azure Container Instances, will on-demand autoscaling work according to the number of pipeline executions triggered at a time? What is the scaling behaviour of Azure Container Instances?
Which is the better option to select: Azure Container Instances or AKS?
Is Microsoft providing any official image for VSTS based on Linux?
Here's a Dockerfile for running a containerized Azure DevOps agent in the official documentation. The Dockerfile is based on Ubuntu 18.
Is it possible to create a RedHat-based VSTS custom image rather than the default Ubuntu image?
This is possible by replacing the base image with a RedHat one and making the necessary changes to the Dockerfile (for example, swapping apt-get for yum/dnf) to avoid errors.
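A minimal sketch of such a Dockerfile, assuming the start.sh entrypoint from the Microsoft documentation; the UBI base tag, package list and target registry are illustrative and may need adjusting for your environment:

```bash
# Hypothetical RedHat (UBI 8) variant of the Ubuntu-based agent Dockerfile from the MS docs
cat > Dockerfile.rhel <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi

# apt-get from the Ubuntu Dockerfile becomes dnf; package names/availability may differ
RUN dnf install -y git jq libicu curl tar gzip && dnf clean all

WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh

ENTRYPOINT ["./start.sh"]
EOF

docker build -f Dockerfile.rhel -t myregistry.azurecr.io/ado-agent:rhel .
```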
I don't have much experience with ACI, but this should provide a reasonable set of guidelines for you to start running your ADO agents on ACI.
Which is the better option to select: Azure Container Instances or AKS?
If all you want is to run the containerized ADO agents for your pipelines, then ACI can be the better choice. However, if you already have an AKS cluster for your application, then it's better to deploy your agents in a separate namespace within the same cluster. For auto-scaling of your agents based on demand, a CRD such as KEDA's ScaledObject can be used (a sketch follows the links below). Here are some blogs that you may find helpful:
https://moimhossain.com/2021/04/24/elastic-self-hosted-pool-for-azure-devops/
https://keda.sh/blog/2021-05-27-azure-pipelines-scaler/
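For reference, a heavily simplified sketch of the KEDA approach described in those posts; the deployment name, pool name and environment variable names are assumptions, and the exact trigger fields depend on your KEDA version:

```bash
# Hypothetical KEDA ScaledObject scaling a self-hosted agent deployment
cat <<'EOF' | kubectl apply -f -
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: ado-agent-scaler
spec:
  scaleTargetRef:
    name: ado-agent            # deployment running the containerized agent
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: azure-pipelines
      metadata:
        poolName: "self-hosted-pool"            # agent pool to watch
        organizationURLFromEnv: "AZP_URL"       # env var on the agent container
        personalAccessTokenFromEnv: "AZP_TOKEN" # env var holding the PAT
EOF
```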
I have a docker container which I push to GCR with gcloud builds submit --tag gcr.io/<project-id>/<name>, and when I deploy it on a GCE instance, every deployment creates a new instance and I have to remove the old one manually. The question is: is there a way to deploy containers and force the GCE instances to fetch the new containers? I specifically need GCE, not Google Cloud Run or another service, because it is not an HTTP service.
I deploy the container from Google Console using the Deploy to Cloud Run button
I'm posting this Community Wiki for better visibility. In the comment section there were already a few good solutions; however, in the end the OP wanted to use Cloud Run.
At first I'd like to clarify a few things.
I have a docker container which I push to GCR like gcloud builds submit
gcloud builds submit is a command to build using Google Cloud Build.
Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.
In this question, the OP is referring to Container Registry; however, GCP recommends using Artifact Registry, which will soon replace Container Registry.
Pushing and pulling images from Artifact Registry is explained in the Pushing and pulling images documentation. It can be done with the docker push or docker pull commands, after you have tagged the image and created an Artifact Registry repository.
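For example, assuming an Artifact Registry repository has already been created (the location, project and repository names below are placeholders):

```bash
# Authenticate Docker to Artifact Registry, then tag and push
gcloud auth configure-docker us-central1-docker.pkg.dev

docker tag my-image:latest \
  us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest

# Pull it back from another machine
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest
```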
Deploying on different GCP products
Regarding deploying on GCE, GKE and Cloud Run: these are GCP products which are quite different from each other.
GCE is IaaS, where you specify the amount of resources and you maintain the installation of all software yourself (you would need to install Docker, Kubernetes, programming libraries, etc.).
GKE is something of a hybrid: you specify the amount of resources you need, but the nodes come customized to run containers. After creation you already have Docker, Kubernetes and the other software needed to run containers.
Cloud Run is a serverless GCP product, where you don't need to calculate the amount of resources needed or install software/libraries; it's a fully managed serverless platform.
When you want to deploy a container app from Artifact Registry / Container Registry, you are creating another VM (GCE and GKE) or a new service (Cloud Run).
If you would like to deploy a new app on the same VM:
On GCE, you would need to pull the image and deploy it on that VM using Docker or Kubernetes (kubeadm); see the sketch after this list for refreshing the container in place.
On GKE you would need to deploy a new deployment using a command like
kubectl create deployment test --image=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>
and delete the old one.
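For the GCE case above, if the instance was created with a container declared on it (a container-optimized VM), the running container can usually be pointed at a new image in place instead of recreating the instance; the instance name, zone and image path below are placeholders:

```bash
# Point the existing instance at the new image and restart its container in place
gcloud compute instances update-container my-instance \
  --zone=us-central1-a \
  --container-image=gcr.io/my-project/my-image:latest

# On a plain VM running Docker, the manual equivalent is to pull the new image,
# then stop/remove the old container and `docker run` it again.
```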
In Cloud Run you can deploy an app without concerns about resources or hardware; the steps are described here. You can create revisions for specific changes in the image. Cloud Run also allows CI/CD using GitHub, Bitbucket or Cloud Source Repositories. This process is also well described in the GCP documentation - Continuous deployment.
Possible solutions:
Write a cloudbuild.yaml file that does that for you on each CI/CD pipeline run (a sketch follows this list)
Write a small application on GCE that subscribes to Pub/Sub notifications created by Cloud Build. You can then either pull the new container or launch a new instance.
Use Cloud Run with CI/CD.
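A rough sketch of the first option: a cloudbuild.yaml that builds and pushes the image, then refreshes the container on an existing GCE instance (the instance name, zone and image path are placeholders, and the Cloud Build service account needs permission to modify the instance):

```bash
# Hypothetical cloudbuild.yaml: build, push, then refresh the container on a GCE instance
cat > cloudbuild.yaml <<'EOF'
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image:$BUILD_ID', '.']
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/$PROJECT_ID/my-image:$BUILD_ID']
  - name: gcr.io/cloud-builders/gcloud
    args: ['compute', 'instances', 'update-container', 'my-instance',
           '--zone=us-central1-a',
           '--container-image=gcr.io/$PROJECT_ID/my-image:$BUILD_ID']
images:
  - 'gcr.io/$PROJECT_ID/my-image:$BUILD_ID'
EOF

gcloud builds submit --config cloudbuild.yaml .
```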
Based on one of the OP's comments, the chosen solution was to use Cloud Run with CI/CD.
So many CICD tools use a git trigger to start the pipeline, but I want to use a new image upload to Docker registry. I have my own self-hosted Docker registry. Whenever a new image is pushed to the registry, I want to then automatically deploy that image into a workload in Kubernetes. It seems like this would be simple enough, but so far I'm coming up short.
It seems like it might be possible, but I'd like to know whether it is before I spend too much time on it.
The sequence of events would be:
A new image is pushed to the Docker registry
Either the registry calls a webhook or some external process polls the registry and discovers the image
Based on the base image name, the CICD pipeline updates a specific workload in Kubernetes to pull the new image
A couple of other conditions: the CICD tool has to be self-hosted. I want everything to be self-contained within a VPC, so there would be no traffic leaving the network containing the registry, the CD tool, and the Kubernetes cluster.
Has anyone set up something like this or has actual knowledge of how to do so?
Sounds like the perfect job for Flux.
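As a rough sketch of how Flux's image automation can watch a self-hosted registry (the resource names are illustrative and the API versions depend on your Flux release; an ImageUpdateAutomation resource would then commit the new tag back to Git, from where Flux applies it to the cluster):

```bash
# Hypothetical Flux resources that poll a private registry for new tags
cat <<'EOF' | kubectl apply -f -
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  image: registry.internal.example.com/team/my-app   # self-hosted registry, no external traffic
  interval: 1m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-app
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-app
  policy:
    semver:
      range: '>=1.0.0'   # pick the newest semver tag
EOF
```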
There are a handful of other tools you can try:
Werf
Skaffold
Faros
Jenkins X
✌️
Our team is developing Kafka Connect Source Connector-plugins.
Do you have any ideas on how to install/upgrade the plugins? How is the flow (git -> Jenkins -> running Source Connector) supposed to look on-prem?
We use Confluent on Kubernetes which complicates things further.
PS. We are required by law to not use cloud solutions.
To store custom connectors, use Nexus, Artifactory, S3, or some plain HTTP/file server.
If you are using Kubernetes, then you probably have a release policy around your Docker images.
Therefore, you can extend the Confluent Connect Docker images by adding additional RUN statements to the Dockerfile, then build and tag your images with Jenkins, and upgrade your Kubernetes services to use the new image tag.
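A minimal sketch of that approach, assuming the connector artifacts come out of your Jenkins build; the Confluent base image tag and the registry are placeholders:

```bash
# Hypothetical extension of the Confluent Connect image with a custom connector plugin
cat > Dockerfile <<'EOF'
FROM confluentinc/cp-kafka-connect:7.4.0

# copy the connector jar(s) built by Jenkins into a directory on the worker's plugin.path
COPY target/my-source-connector/ /usr/share/confluent-hub-components/my-source-connector/
EOF

docker build -t registry.internal.example.com/kafka/connect-custom:1.0 .
docker push registry.internal.example.com/kafka/connect-custom:1.0
```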
The answer I would give for a bare-metal (or cloud) installation of Kafka Connect would be to use Ansible or another orchestration tool to push out the new plugin files and restart the services.
Having built, run and executed tests against a Docker image on a CI build server (TeamCity 2017), how should we deploy it to further machines?
How, for example, if we push it to a Docker registry, would our CI server instruct the target machine to pull and run the image? I.e. were it an application we would use Octopus for this deployment step, but our Octopus server doesn't support Docker deployments as yet.
Any guidance appreciated.
Michael McD.
I would use Octopus to deploy your images onto target machines. You'd need to use PowerShell scripts to have your machines run the images. Or you can use something like Rancher, which is a Docker Swarm manager. There is no feasible way to have TeamCity deploy your images; the software simply isn't built to do deployments.
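The script that Octopus runs on each target machine can be as simple as the following (shown here as shell; the PowerShell version uses the same docker commands, and the image and container names are placeholders):

```bash
IMAGE=registry.example.com/team/my-app:latest

docker pull "$IMAGE"                      # fetch the image built by TeamCity
docker stop my-app 2>/dev/null || true    # stop and remove the previous container, if any
docker rm my-app 2>/dev/null || true
docker run -d --name my-app -p 8080:80 "$IMAGE"
```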
The Rancher solution would not be automated, at least not to my knowledge. You would have to trigger upgrades when a new image is pushed to the docker registry.