How to run an Azure DevOps build agent in Azure Container Instances - Docker

I was following the Microsoft documentation to run Azure DevOps build agents in containers. I have created the VSTS Docker image by following the MS docs, but beyond that the Microsoft document is not clear on some parts where I am stuck.
Is Microsoft providing any official Linux-based image for VSTS?
Is it possible to create a Red Hat-based custom VSTS image instead of the default Ubuntu image?
Also, we need to run these containers in Azure Container Instances. What are the steps to achieve that?
If we run the VSTS agents in Azure Container Instances, will on-demand autoscaling work according to the number of pipeline executions triggered at a time? What is the scaling behaviour of Azure Container Instances?
Which is the better option: Azure Container Instances or AKS?

Is Microsoft providing any official Linux-based image for VSTS?
There's a Dockerfile for running a containerized Azure DevOps agent in the official documentation. The Dockerfile is based on Ubuntu 18.04.
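A minimal sketch of building and running that image, assuming you have created the Dockerfile and start.sh from the docs (the image name, organization URL, and PAT are placeholders):

```bash
# Build the agent image from the Dockerfile in the current directory
docker build -t dockeragent:latest .

# Run the agent; AZP_URL, AZP_TOKEN, AZP_AGENT_NAME and AZP_POOL are the
# environment variables the documented start.sh script expects
docker run -e AZP_URL="https://dev.azure.com/<org>" \
           -e AZP_TOKEN="<pat>" \
           -e AZP_AGENT_NAME="docker-agent-1" \
           -e AZP_POOL="Default" \
           dockeragent:latest
```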
Is it possible to create a Red Hat-based custom VSTS image instead of the default Ubuntu image?
This should be possible by replacing the base image with a Red Hat one and making the necessary changes to the Dockerfile (for example, the package installation commands) to avoid errors.
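A minimal sketch of such a swap, assuming the Red Hat UBI 8 base image; the package list is illustrative, and the real agent Dockerfile from the docs installs more dependencies:

```bash
# Write a UBI-based variant of the documented agent Dockerfile
cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi:latest

# dnf replaces Ubuntu's apt-get; package names may differ slightly on RHEL
RUN dnf install -y git jq libicu && dnf clean all

WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT ["./start.sh"]
EOF

docker build -t vsts-agent-rhel:latest .
```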
I don't have much experience with ACI, but this should provide a reasonable set of guidelines for you to start running your ADO agents on ACI.
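For example, a minimal sketch of creating a single agent container in ACI with the Azure CLI (the resource group, image, and placeholder values are assumptions):

```bash
az container create \
  --resource-group my-rg \
  --name ado-agent-1 \
  --image myregistry.azurecr.io/dockeragent:latest \
  --cpu 2 \
  --memory 4 \
  --environment-variables AZP_URL="https://dev.azure.com/<org>" AZP_POOL="Default" \
  --secure-environment-variables AZP_TOKEN="<pat>"
```

Note that ACI has no built-in autoscaler for container groups; scaling agents up and down with pipeline demand would need external orchestration (for example, the KEDA-based approach linked in the next answer, or a script that creates and deletes container groups).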
Which is the better option: Azure Container Instances or AKS?
If all you want is to run containerized ADO agents for your pipelines, then ACI can be the better choice. However, if you already have an AKS cluster for your application, then it's better to deploy your agents in a separate namespace within the same cluster. For auto-scaling your agents based on demand, a CRD can be used; see the sketch after the links below. Here are some blogs you may find helpful:
https://moimhossain.com/2021/04/24/elastic-self-hosted-pool-for-azure-devops/
https://keda.sh/blog/2021-05-27-azure-pipelines-scaler/
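For example, a minimal sketch of a KEDA ScaledObject using the azure-pipelines scaler (the Deployment name, pool ID, and the environment variable names on the agent container are assumptions):

```bash
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-pipelines-scaledobject
spec:
  scaleTargetRef:
    name: azdevops-agent          # assumed name of the agent Deployment
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: azure-pipelines
      metadata:
        poolID: "1"               # ID of your Azure DevOps agent pool
        organizationURLFromEnv: "AZP_URL"        # read from the agent container's env
        personalAccessTokenFromEnv: "AZP_TOKEN"  # read from the agent container's env
EOF
```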

Related

How to create a pipeline to build and release a Docker Compose application with Azure DevOps using the graphical interface (GUI)

Well, how can I create a pipeline to build and release a Docker Compose application with Azure DevOps through the graphical interface (GUI)? I am not an expert in DevOps, but I have this challenge at work.
I would point you toward a great guide by Microsoft; it's for Java applications, but you can get what you need out of it.
Solution in general:
Open the Azure portal. Select + Create a resource and search for Container Registry. Select Create. In the Create Container Registry dialog, enter a name for the service, select the resource group and location, and click Review + Create. Once validation succeeds, click Create.
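If you prefer the CLI over the portal, a rough equivalent would be (names and location are placeholders):

```bash
# Create a resource group and a Basic-tier container registry in it
az group create --name my-rg --location westeurope
az acr create --resource-group my-rg --name myregistry --sku Basic
```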
In your CI build you need to have two tasks: one for the build/compose step, where you provide the compose file, and another to publish the image to your Azure Container Registry. You will use the same task type for both.
The container registry is where you store the outputs of your builds, similar to artifacts in traditional CI builds. It is where you publish your application from during a release, whether to on-premises or the cloud.
You can read more about the parameters you need to provide and the settings in detail in the guide.
P.S. Here is an example of how to dockerize an existing .NET Core application.
How do you build and release your Docker Compose setup locally?
Normally, you can copy the docker-compose CLI and Docker CLI commands that you execute locally into shell script tasks (such as Bash, PowerShell, etc.) in the pipeline you set up in Azure DevOps.
Of course, the dedicated Docker Compose task and Docker task are also available.
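For instance, a minimal sketch of what such a Bash task could run, assuming the registry credentials are supplied as pipeline variables and the compose file tags images with the registry name:

```bash
# Log in to the registry (a Docker service connection can replace this step)
docker login myregistry.azurecr.io -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD"

# Build and push every service image defined in docker-compose.yml
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml push
```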

Best Practices for Installing Jenkins Instance / Pre-configured Jenkins from scratch

The Jenkins landscape is vast, and new developments are difficult to keep track of, especially if you are not a regular DevOps practitioner.
I am currently in the process of setting up a Jenkins CI system from scratch. I am looking for the best possible way to get the Jenkins instance up and running. I have looked at options such as running from the JAR, setting it up as a service, Docker, Blue Ocean, etc.
I was wondering if you could share your experience: is there a pre-configured setup or a scalable Jenkins solution already available on the market that is ready to be configured/deployed?
One of the key tenants of this Jenkins instance would be the test automation team running their Selenium tests (I am ideally looking at a Windows Server installation, although CentOS is an option), and I would like to make it as easy as possible for them to work with.
I'm a Jenkins admin. At my company I've set up Jenkins on our Kubernetes cluster using the Helm chart, with a custom Docker image preloaded with plugins (you don't want to rely on the plugin update site during startup). All configuration is done with the Configuration as Code plugin. We're using the Kubernetes plugin for horizontal scaling. No builds are allowed on the build controller; everything is done within agents, which are custom Docker images inspired by these images. This works very well, and I'm very happy with the setup. There is also a Jenkins Kubernetes Operator which looks promising, but I haven't tried it myself.
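A minimal sketch of such a Helm-based install (the namespace and values file are assumptions; the custom controller image, preloaded plugins, and Configuration as Code files would be referenced from values.yaml):

```bash
helm repo add jenkins https://charts.jenkins.io
helm repo update

# values.yaml points at the custom controller image and JCasC configuration
helm install jenkins jenkins/jenkins \
  --namespace jenkins \
  --create-namespace \
  -f values.yaml
```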
If you're not on Kubernetes, you can take a look at the Jenkins Evergreen project.
P.S. The Blue Ocean project is dead, but the folks over at CloudBees are currently in the process of overhauling the UX. They just released a weekly version where they got rid of all tables, so the design is slowly becoming more responsive, and a new set of icons is also coming.
Maybe the nearest you can get to a pre-configured Jenkins instance is using the Docker image (https://hub.docker.com/r/jenkins/jenkins). But even with the Docker image, you have to select plugins and so on. Maybe you want to raise an issue as a proposal in the Jenkins Docker repository to make it possible to pre-configure Jenkins (GitHub repo: https://github.com/jenkinsci/docker/issues)?
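For reference, a minimal sketch of running that image locally (the ports and volume follow the image's README):

```bash
# 8080 = web UI, 50000 = inbound agent port; JENKINS_HOME persists in a named volume
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```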

Building Docker images in a Kubernetes cluster

We have a requirement to build custom Docker images from base Docker images, with some additional packages/customization. These custom Docker images then need to be deployed into Kubernetes. We are exploring various tools to figure out how a Docker build can be done in a Kubernetes cluster (without direct access to the Docker daemon). Open-source tools like kaniko provide the capability to build Docker images within a container (hence in a Kubernetes cluster).
Is it good practice to build Docker images in the Kubernetes cluster where other containers will be run/executed? Are there any obvious challenges with kaniko?
Should separate dedicated VMs be created to manage the build process?
1. Is it good practice to build Docker images in the Kubernetes cluster where other containers will be run/executed? Are there any obvious challenges with kaniko?
Yes, it is possible to build images inside Kubernetes containers, but it could be a bit of a challenge.
Some users do this to build CI/CD workflows with Jenkins. In practice, it is better to use dedicated tools to simplify the process.
Kubernetes also has guidelines for preparing a container development kit; they are described here.
Another way is to use kaniko; this tool builds container images from a Dockerfile inside a container or Kubernetes cluster.
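A minimal sketch of a kaniko build Pod (the Git context URL, destination image, and Secret name are assumptions; a docker-registry Secret with push credentials must exist beforehand):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/<org>/<repo>.git    # assumed Git build context
        - --destination=myregistry.azurecr.io/myimage:latest
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: regcred            # docker-registry Secret with push credentials
        items:
          - key: .dockerconfigjson
            path: config.json
EOF
```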
I found this article interesting to read on this topic.
On the other hand, there have been successful attempts to build images without a running Docker daemon. You may be interested in the Bazel project and the story of how to use it.
2. Should separate dedicated VMs be created to manage the build process?
Regarding your second question: it is not necessary to set up a dedicated VM to run the Docker image creation workflow.
Finally, it may be interesting to run a private registry in the Kubernetes cluster and use it for build purposes.
It's possible to build images on Kubernetes nodes, but I wouldn't recommend it. The reason being that an application build process is memory- and compute-intensive; frequent image builds could disrupt the services scheduled on that Kubernetes node.
Use dedicated Jenkins server(s) instead, and create pipelines according to your requirements and delivery process.
You can get started here!
Hope that helps!

Building tenant-specific Docker images using Jenkins & deploying to Kubernetes

My Application Structure
I am developing a tenant-based application in a service-oriented architecture, deployed using Kubernetes and Jenkins. The application contains 15-20 microservices developed using Spring Boot. Each microservice needs to be deployed separately for the different customers. If I have 5 customers, I need to deploy the 15 microservices for these 5 customers. This is the description of my tenancy model.
Deployment Planning
For this application, I am planning to use Kubernetes and Jenkins for deployment and to implement the CI/CD pipeline.
My Findings
The nature of my application requires building the images for different customers from the same code, using Spring Cloud Config Server's active-profile functionality. That means in my Dockerfile I launch the particular image by defining which profile is active, like the following:
java -jar -Dspring.profiles.active=<Profile_Name> dbdata-0.0.1-SNAPSHOT.jar
Here I am configuring the profile in the config server, so I am using the same code to create multiple images, one belonging to each customer.
Confusion
If I follow this style, how can I create and launch different images from the same code repository using Jenkins? Is it possible to launch multiple images from the same code repository using Jenkins?
In summary, how should I approach creating and deploying multiple images for the application structure above?
As you have several microservices, it's better to use tools like Helm + ChartMuseum to simplify the management of these services. In this case you will have an individual release (and Kubernetes namespace) per tenant. You can use different Docker image tags if different Docker images per tenant are required.
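For instance, a sketch of per-tenant releases with Helm (the chart path, tenant names, and tag values are assumptions):

```bash
# One release and namespace per tenant; only the image tag (or values) differs
helm install customer-a ./charts/dbdata \
  --namespace customer-a --create-namespace \
  --set image.tag=customer-a

helm install customer-b ./charts/dbdata \
  --namespace customer-b --create-namespace \
  --set image.tag=customer-b
```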
As for the Jenkins part, I don't see any problems (you can build any number of Docker images from one repo):
create job to produce & upload docker image(s)
create job to produce & upload Helm chart(s)
create job(s) to deploy/update releases in Kubernetes
It's not required to build different Docker images if they differ only in the command line. The command line (or an environment variable) can be overridden in a Kubernetes resource description, as sketched below.
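A minimal sketch of overriding the Spring profile per tenant in a Deployment, using one shared image (the image name and profile value are assumptions; SPRING_PROFILES_ACTIVE is the standard Spring Boot variable):

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dbdata
  namespace: customer-a
spec:
  replicas: 1
  selector:
    matchLabels: { app: dbdata }
  template:
    metadata:
      labels: { app: dbdata }
    spec:
      containers:
        - name: dbdata
          image: myregistry.azurecr.io/dbdata:0.0.1-SNAPSHOT
          env:
            - name: SPRING_PROFILES_ACTIVE   # picked up by Spring Boot at startup
              value: customer-a
EOF
```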

How to use VSTS Build/Release to continuously integrate/deploy Docker containers to Azure Service Fabric?

I'm asking this question here because Azure's documentation says a sample for Linux containers is 'coming soon'. Does anyone have any insight into when this tutorial might be available?
Meanwhile, I'm hoping someone can shed some light on how to effectively do this.
My use case is:
a microservices-based application (say microservices A, B, and C); each microservice should run in its own Docker container
use the Visual Studio Team Services Build capability to build container images and push them to Docker Hub
use the VSTS Release capability to individually deploy the microservices (containers) to a Service Fabric cluster; as the microservices are independently developed, I don't want to update the entire application in Service Fabric, but only redeploy the changed microservice/container to the respective node(s)
There could be a custom solution for this where one can add Tasks to the Build and Release in VSTS (like Docker Build and Shell Script tasks), call some scripts to update the Application Manifest and Service Manifest to kick off the updates to the Service Fabric cluster, and so on.
Whether your containers are services in the same application or different applications, you can still deploy them independently. Only changes are applied at deployment; you don't even have to include the unchanged services in the deployment package. Look here for an example for a Service Fabric service (not in containers), but deploying containers using a service manifest is conceptually the same: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-set-up-continuous-integration
