My Application Structure
I am developing a tenant-based application with a service-oriented architecture, deployed using Kubernetes and Jenkins. The application contains 15-20 microservices built with Spring Boot, and each microservice needs to be deployed separately for each customer: if I have 5 customers, I need to deploy all 15 microservices for each of those 5 customers. This is my tenancy model.
Deployment Planning
For this application I am planning to use Kubernetes and Jenkins for deployment and to implement a CI/CD pipeline.
My Findings
My application builds images for different customers from the same code, using the active-profile functionality of Spring Cloud Config Server. That is, in my Dockerfile I launch a particular image by defining the active profile, like the following:
java -jar -Dspring.profiles.active=<Profile_Name> dbdata-0.0.1-SNAPSHOT.jar
The profiles themselves are configured in the config server. So I am using the same code to create multiple images, one per customer.
Confusion
If I follow this approach, how can I create and launch different images from the same code repository using Jenkins? Is it even possible to launch multiple images from the same code repository with Jenkins?
In summary, how should multi-image creation and deployment work for the application structure described above?
As you have several microservices, it's better to use tools like Helm + ChartMuseum to simplify management of these services. In this setup you will have an individual release (and Kubernetes namespace) per tenant. You can use different Docker image tags if different Docker images per tenant are required.
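For illustration, one release per tenant might look like the following; a minimal sketch, assuming a chart named myservice in a repo named myrepo and per-tenant values files (these names are not from the question):

# One Helm release and one namespace per tenant, same chart each time;
# tenant-specific overrides live in per-tenant values files.
for TENANT in customer-a customer-b; do
  helm upgrade --install "myservice-${TENANT}" myrepo/myservice \
    --namespace "${TENANT}" --create-namespace \
    --values "values-${TENANT}.yaml" \
    --set image.tag="${TENANT}-${BUILD_NUMBER}"  # only if per-tenant images are required
done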
As for the Jenkins part, I don't see any problems (you can build any number of Docker images from one repo); a rough sketch of the shell behind these jobs follows the list:
create job to produce & upload docker image(s)
create job to produce & upload Helm chart(s)
create job(s) to deploy/update releases in Kubernetes
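A hedged sketch of the shell behind those three jobs (the registry, chart name, and ChartMuseum host are placeholders):

# Job 1: build & push the service image
docker build -t registry.example.com/myrepo/dbdata:${BUILD_NUMBER} .
docker push registry.example.com/myrepo/dbdata:${BUILD_NUMBER}

# Job 2: package the chart and upload it via ChartMuseum's upload API
helm package charts/dbdata
curl --data-binary "@dbdata-0.1.0.tgz" http://chartmuseum.example.com/api/charts

# Job 3: deploy/update a release with the freshly pushed tag
helm upgrade --install dbdata myrepo/dbdata --set image.tag=${BUILD_NUMBER}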
It's not required to build different Docker images if they differ only in the command line: that command line (or an environment variable) can be overridden in the Kubernetes resource description.
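For instance, the same image can be pointed at a tenant's profile at deploy time rather than baking the profile in; a sketch, assuming a Deployment named dbdata in a per-tenant namespace:

# Spring Boot reads SPRING_PROFILES_ACTIVE from the environment,
# so one image can serve every tenant.
kubectl -n customer-a set env deployment/dbdata SPRING_PROFILES_ACTIVE=customer-a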
Related
I was following the [document][1] to run the Azure DevOps build agents in containers. I have created the VSTS Docker image by following the MS docs, but beyond that point the Microsoft document is not clear on some parts, and I am stuck.
Is Microsoft providing any official Linux-based image for VSTS?
Is it possible to create a RedHat-based custom VSTS image instead of the default Ubuntu one?
Also, we need to run these containers in Azure Container Instances. What are the steps to achieve that?
If we run the VSTS agents in Azure Container Instances, will on-demand autoscaling work according to the number of pipeline executions triggered at a time? What is the scaling behaviour of Azure Container Instances?
Which is the better option: Azure Container Instances or AKS?
Is Microsoft providing any official Linux-based image for VSTS?
Here's a Dockerfile for running a containerized Azure DevOps agent in the official documentation; it is based on Ubuntu 18.04.
Is it possible to create a RedHat-based custom VSTS image instead of the default Ubuntu one?
This is possible by replacing the base image with a RedHat one and making the necessary changes to the Dockerfile so everything still builds without errors.
I don't have much experience with ACI, but this should provide a reasonable set of guidelines for you to start running your ADO agents on ACI.
Which is the better option: Azure Container Instances or AKS?
If all you want is to run containerized ADO agents for your pipelines, then ACI can be the better choice. However, if you already have an AKS cluster for your application, it's better to deploy your agents in a separate namespace within the same cluster. For auto-scaling the agents based on demand, a CRD-based autoscaler such as KEDA can be used. Here are some blogs you may find helpful:
https://moimhossain.com/2021/04/24/elastic-self-hosted-pool-for-azure-devops/
https://keda.sh/blog/2021-05-27-azure-pipelines-scaler/
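For example, the KEDA approach from the second blog uses its azure-pipelines scaler to drive the agent count from the pool's queue; a minimal sketch, assuming an agent Deployment named azdevops-agent whose containers already define AZP_URL and AZP_TOKEN, and an agent pool ID of 1:

kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-pipelines-scaledobject
spec:
  scaleTargetRef:
    name: azdevops-agent            # the agent Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: azure-pipelines
      metadata:
        poolID: "1"                           # assumed agent pool ID
        organizationURLFromEnv: "AZP_URL"     # env vars set on the agent container
        personalAccessTokenFromEnv: "AZP_TOKEN"
EOF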
I'm currently working on setting up a CI/CD pipeline to my client's new Kubernetes clusters using Helm and Azure DevOps Server, with images hosted on Harbor, and I'm a bit stuck on how to deal with application variables (appsettings in .NET Core) in a streamlined way. Currently those variables are stored in Azure DevOps as release variables, and some of the Prod variables are managed by another organization.
My thinking has been that I want separate Build and Deploy pipelines: the Build is responsible for restoring, building, triggering unit tests, and finally pushing a Docker image to the on-prem registry; then multiple Deploy pipelines represent the various environments (Dev, Test, Stage, Prod) and applications that may want to use that Docker image, each utilizing its own configuration.
However, I haven't been able to find a way to inject the variables from the release pipeline into the dockerized application during the release steps, since by then what I have is not my raw application but the Docker image of it. I've previously used ConfigMaps to solve this, but since I can't keep files locally to represent the Prod environment, I need some way to override variables, or generate a ConfigMap, from Azure DevOps release variables.
I somehow feel this must be a common scenario, and yet most solutions I find either revolve around maintaining environment-oriented ConfigMaps or values files in the application's repo, or use features seemingly unique to Azure Cloud.
One solution would be to move the docker build/push step into the release pipelines and inject the variables into the application before it is dockerized. However, this feels like a hack that would just result in a myriad of Docker images of the same application, each with a tweaked appsettings file, that would somehow have to be versioned across multiple environments.
You could try handling this through the Kubernetes manifest task:
Use a Kubernetes manifest task in a build or release pipeline to bake and deploy manifests to Kubernetes clusters.
If your builds and deployments all run in Azure Pipelines, then you have a previous layer where you can do these replacements before applying the manifests to the cluster.
Variables can be defined at several scope levels, where the more immediate levels override the more distant ones.
It's also possible to apply the same manifest to two environments (staging and production) but with different settings.
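Outside the task itself, the same replace-then-apply idea can be sketched with plain variable substitution before kubectl; envsubst and the file names here are illustrative and not part of the Kubernetes manifest task:

# Pipeline variables arrive as environment variables; substitute them
# into a manifest template, then apply the rendered manifest.
export IMAGE_TAG="$BUILD_BUILDNUMBER" CONNECTION_STRING="$DB_CONN"  # DB_CONN is a made-up variable
envsubst < deployment.template.yaml > deployment.yaml
kubectl apply -f deployment.yaml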
You could also take a look at the blogs below:
How to inject variables in Kubernetes manifest with Azure Pipelines?
Deploying to Kubernetes with Azure DevOps: A first pass
According to 12 Factor, the best way to get environment-dependent config into a cluster is to use environment variables: security is better, and there are all sorts of tools for getting config into environment variables. If your app can read config from there rather than from a file, that is preferable.
However, if your app cannot, then this is what I do to inject customizable config and manifests into a release using Azure DevOps:
In the build pipeline, publish tokenized config and manifests into folder(s).
In the release, use Replace Tokens task(s) on the folder(s) to swap tokens for variable values.
Use a kubectl task to create a ConfigMap from the files holding the replaced config, and mount it as a volume in your deployment manifest (see the sketch below).
Your application should either expect the files at that location or copy them to the expected location; I do this in the Dockerfile, with a script as my entrypoint.
I use a kube deploy step to append a tag to my image in the deployment manifest, and deploy to the cluster.
When the pod starts up, the ConfigMap is mounted as a volume and the application starts with the latest config for that environment.
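The ConfigMap and tagging steps might look roughly like this (the ConfigMap, folder, image, and variable names are placeholders):

# After Replace Tokens has detokenized the published config folder,
# (re)create the ConfigMap from those files idempotently...
kubectl create configmap app-config --from-file=config/ \
  --dry-run=client -o yaml | kubectl apply -f -
# ...then pin the image tag in the deployment manifest and deploy.
sed -i "s|myregistry/myapp:.*|myregistry/myapp:${RELEASE_TAG}|" deployment.yaml
kubectl apply -f deployment.yaml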
We are developing a web application which has multiple microservices in the same repository.
For our existing monolithic application, we deploy to Kubernetes using Helm and Jenkins. When it comes to microservices, we are struggling to define our CI/CD pipeline strategy. Below are the unclear issues.
For the monolithic application I have one Dockerfile, one Jenkinsfile and one Helm chart. In the build stage I build the image using the following command:
docker.build("registry/myrepo/image:${env.BUILD_NUMBER}"
For the monolithic application, I have one chart with multiple values files, such as values.dev.yaml and values.prod.yaml, configured for the different environments.
So our questions are:
1. How should we build and push multiple containers for multiple Dockerfiles in a Jenkinsfile for the microservices? At present, every microservice has its own Dockerfile in its own root.
2. Is it possible for Jenkins to distinguish which microservices we would like to deploy? Sometimes we make changes only to a specific service and would like to deploy just those changes. Should we organize independent pipelines, or is there a way to handle this in the same pipeline?
3. How should we organize our Helm chart to deploy the microservices to Kubernetes? Should we create a chart per service, or multiple templates that refer to a single values.yaml?
Looks like you are almost there.
Have separate pipelines for the microservices; each would build, verify and push Docker images to a Docker registry. Have a separate pipeline for verifying and deploying the whole stack using Helm.
I assume you would be using git events to identify the changes? When there is a change in a microservice, it would be committed to the single repository, triggering a git event that you can use to kick off the pipeline of the respective microservice.
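If everything lives in one repository, that per-service trigger can also be approximated by diffing changed paths; a sketch, assuming each microservice sits in its own top-level directory:

# Build only the microservices whose directories changed in this commit.
CHANGED_SERVICES=$(git diff --name-only HEAD~1 HEAD | cut -d/ -f1 | sort -u)
for SVC in $CHANGED_SERVICES; do
  [ -f "$SVC/Dockerfile" ] || continue   # skip non-service paths
  docker build -t "registry/myrepo/${SVC}:${BUILD_NUMBER}" "$SVC"
  docker push "registry/myrepo/${SVC}:${BUILD_NUMBER}"
done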
As Helm represents your whole application stack, I would suggest keeping it as a single chart. If the complexity of the microservices increases, split them out as subcharts.
Multiple charts can be a future maturity level, once the teams associated with each microservice can deploy upgrades independently without affecting the availability of the whole stack.
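As a sketch, the umbrella chart's Chart.yaml simply lists each service as a dependency; the chart names, versions, and paths below are placeholders:

cat > mystack/Chart.yaml <<'EOF'
apiVersion: v2
name: mystack
version: 0.1.0
dependencies:                  # one subchart per microservice
  - name: orders
    version: 0.1.0
    repository: file://../orders
  - name: payments
    version: 0.1.0
    repository: file://../payments
EOF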
Have a separate job in Jenkins for each microservice.
Have a separate job to deploy the whole application using the Helm chart.
I am using Jenkins to build the binaries to be deployed on my production servers. The source code is managed in SVN, and in Jenkins I am using the parameterized build plugin to allow team members to select the tag they want to deploy.
Currently, the production setup has multiple instances running behind an ELB. So in order to deploy a build, I need to take out (deregister) the instances one by one and deploy the build on each server, in order to prevent downtime.
I am looking for a Jenkins plugin (if available) which could help me automate that task: take one instance out of the ELB, deploy the latest build, register that instance with the ELB again, and repeat the same steps for all the instances.
NOTE: The instance count can be dynamic, as autoscaling may increase or decrease the number of instances behind the ELB.
If your build produces a Docker container, then you can use EKS or ECS to automate the deployment. These services take care of rolling out the new version of your service side by side, without any downtime. Additionally, you can set scaling policies to increase or decrease the number of service instances based on load.
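If you stay on plain EC2 behind a classic ELB instead, the rolling flow from the question can be scripted with the AWS CLI rather than a dedicated plugin; a minimal sketch, where the load balancer name and the deploy script are placeholders (instances added by autoscaling mid-deploy are not re-read here):

LB=my-elb
for ID in $(aws elb describe-load-balancers --load-balancer-names "$LB" \
    --query 'LoadBalancerDescriptions[0].Instances[].InstanceId' --output text); do
  aws elb deregister-instances-from-load-balancer --load-balancer-name "$LB" --instances "$ID"
  ./deploy_build.sh "$ID"   # placeholder: copy the new build to the instance and restart
  aws elb register-instances-with-load-balancer --load-balancer-name "$LB" --instances "$ID"
done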
I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. I plan to use Jenkins and Kubernetes to build the CI/CD pipeline, and I have one SVN repository for version control.
Nature Of Application
The nature of my application is that one microservice needs to be deployed for multiple tenants. The code is the same, but the database configuration differs per tenant, and I manage that configuration using Spring Cloud Config Server.
My Requirement
My requirement is that when I commit code to my SVN repository, Jenkins needs to pull the code, build the project (Maven), create Docker images for the multiple tenants, and deploy them.
The point is that a commit to one code repository needs to build multiple Docker images from that same repo: one code repo, multiple Docker image builds. The Dockerfile contains different config for each Docker image, i.e. for each tenant. So my requirement is to build multiple Docker images, for different tenants with different configuration added in the Dockerfile, from one code repo using Jenkins.
My Analysis
I am currently planning to do this by adding multiple Jenkins pipeline jobs connected to the same code repo. Within each Jenkins pipeline job I can add a different configuration, because the image name needs to be different for each tenant and the images need to be pushed to Docker Hub.
My Confusion
Here is my confusion:
Can I add multiple pipeline jobs for the same code repository in Jenkins?
If I can, how do I deploy the image for every tenant to Kubernetes? Do I need to add separate jobs for deployment, or is one single job enough?
You seem to be going about it a bit wrong.
Since your code is the same for all tenants and only the config differs, you are better off creating a single Docker image and deploying it along with the tenant-specific configuration when deploying to Kubernetes.
So a change in your repository will trigger one Jenkins build and produce one Docker image. Then you can have either multiple Jenkins jobs, or multiple steps in one pipeline, that deploy that Docker image with tenant-specific config to Kubernetes.
If you don't want to heed the above, here are the answers to your questions:
You can create multiple pipelines from the same repository in Jenkins (select New Item > Pipeline multiple times).
You can keep a list of tenants and loop through them, or run all deployments in parallel, in a single pipeline stage.
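For example, the deploy stage could iterate over the tenants using the single image built earlier; the tenant list and Deployment name are assumptions:

# One image, N tenant deployments: each namespace gets the new tag
# plus its own active profile. Background the loop body to parallelize.
for TENANT in tenant-a tenant-b tenant-c; do
  kubectl -n "$TENANT" set image deployment/dbdata \
    dbdata="registry/myrepo/dbdata:${BUILD_NUMBER}"
  kubectl -n "$TENANT" set env deployment/dbdata SPRING_PROFILES_ACTIVE="$TENANT"
done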