Jenkins migration to new server

For organizational reasons we have to move Jenkins to new servers. Since we are on an old version, an upgrade is also needed at the same time. What should we consider? We are also not sure whether we need to configure all jobs on the new instances manually, or whether there is a faster way to clone them from the existing instance. We have around 300 jobs, one master and 7 slaves. We need to set up three masters, one with four slaves and two with three slaves each. The 300 jobs will be split between the three masters depending on their category.
Thanks !!

If I wanted to move Jenkins jobs to 3 different servers with their own plugins - I would:
Create those 3 instances of Jenkins and configure them separately. Make sure the new (or re-set-up) slaves are ready to handle the new requirements.
Create 3 separate lists of jobs (split from the original list).
Determine which jobs should be run by which Jenkins
Install all the common plugins used by all/most jobs on all 3 Jenkins instances.
Go to the original ${JENKINS_HOME}/jobs and archive the job directories from each list, e.g.
tar czvf jobs.tgz <job directories from the list>
running it 3 times, once for each new Jenkins instance (see the sketch below this answer).
Finally, unpack the job archives into the corresponding new ${JENKINS_HOME}/jobs directories.
Run tests and install missing plugins after that, if needed. In my opinion, access permissions should be set separately on each Jenkins instance.
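For illustration, a minimal sketch of that copy step, assuming GNU tar and that jobs_group1.txt lists the job directory names destined for the first new master (the file names, host name and paths are placeholders, not your actual values):

    # On the old master, inside ${JENKINS_HOME}/jobs:
    cd "${JENKINS_HOME}/jobs"
    tar czvf /tmp/jobs_group1.tgz -T jobs_group1.txt   # -T reads the job directory names from the list file

    # Copy the archive to the first new master and unpack it into its jobs directory
    # (assumed here to be /var/lib/jenkins/jobs), then fix ownership:
    scp /tmp/jobs_group1.tgz new-master-1:/tmp/
    ssh new-master-1 'tar xzvf /tmp/jobs_group1.tgz -C /var/lib/jenkins/jobs && chown -R jenkins:jenkins /var/lib/jenkins/jobs'

    # Repeat with jobs_group2.txt and jobs_group3.txt for the other two masters,
    # then use "Reload Configuration from Disk" (or restart Jenkins) on each new master.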

Related

Azure DevOps: YAML Pipeline for independent Deployment of Single Tenant .Net MVC App for different clients

I need suggestions for creating a YAML pipeline for the independent deployment of the Single Tenant .Net MVC App for different clients.
Application Type: .Net MVC Web API
Web Server: IIS 10
Database: MS SQL Server
Environment: Private Data Center
Number of Clients/tenant: 150+
Deployment: For each client/tenant, a separate IIS Web App is created. Also, a separate database is created for each client.
Expected Deployment: Manual mode (Not considering CD because CI and test suite are not available yet).
Could you guide me on the following points?
How should the pipeline be created so that I can use different configuration parameters per client/tenant (e.g. database name and connection string), while still using a common script to deploy the generated release?
Should I create a single pipeline, or should there be multiple?
How should I use releases, stages, and jobs effectively in such a scenario?
If there are good articles on manual, independent deployment for each client, I would like to study them.
Generally, if you want to deploy to different environments, you can set up a stage for each environment in a pipeline. However, considering that you have 150+ different configurations, it would be quite tedious to set up 150+ stages in the pipeline.
If all the deployments have the same deployment steps (same scripts, same input parameters) but different values for those parameters, you can try using a multi-job configuration (the matrix strategy) in the pipeline.
This way you do not need to set up a stage or a job for each configuration; you just set up one stage or job with all the common deployment steps and enumerate the 150+ configurations you require. When the pipeline runs, it will generate 150+ matrix jobs with the same deployment steps but different values for the input parameters.
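A minimal sketch of what that could look like in the pipeline YAML (the client names, variable names, pool and script path below are assumptions for illustration, not your actual values):

    jobs:
    - job: DeployTenants
      pool: 'Default'            # assumed self-hosted agent pool in your private data center
      strategy:
        maxParallel: 5           # how many tenants deploy at the same time
        matrix:
          client_001:
            dbName: 'Client001Db'
            webAppName: 'client001'
          client_002:
            dbName: 'Client002Db'
            webAppName: 'client002'
          # ... one entry per tenant, 150+ in total
      steps:
      - powershell: ./deploy.ps1 -WebApp "$(webAppName)" -DbName "$(dbName)"
        displayName: 'Run common deployment script'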
[UPDATE]
Just curious: in this multi-job configuration case, all 150+ installations will be triggered in one go, right?
After the pipeline run is triggered, all 150+ matrix jobs will be triggered and queued. However, normally not all 150+ jobs will start running in parallel at the same time; it depends on the maxParallel value you set and how many available agents can be assigned to the run.
I can't see a way to start the deployment for, let's say, only 5 of the clients.
If you want the deployment to be executed first for some clients and then for the others, you can try using stages.
For example, in stage_1, execute the deployment job (multi-job configuration) for the first 5 clients. After stage_1, start stage_2 for the next group of clients, then stage_3 for the others, and so on.
You can use the dependsOn key to set the execution order of the stages, and the condition key to make a stage run only when a specified condition is met.
To view more details, you can see "Add stages, dependencies, & conditions".
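A rough sketch of that staged layout (stage and job names, client grouping, and the script are again assumptions); each stage can contain its own small matrix for its batch of clients:

    stages:
    - stage: stage_1                 # first batch of clients
      jobs:
      - job: DeployBatch1
        strategy:
          matrix:                    # enumerate only the first 5 clients here
            client_001: { dbName: 'Client001Db' }
            client_002: { dbName: 'Client002Db' }
        steps:
        - powershell: ./deploy.ps1 -DbName "$(dbName)"

    - stage: stage_2                 # next batch, runs only after stage_1
      dependsOn: stage_1
      condition: succeeded()         # only run when stage_1 succeeded
      jobs:
      - job: DeployBatch2
        strategy:
          matrix:
            client_006: { dbName: 'Client006Db' }
        steps:
        - powershell: ./deploy.ps1 -DbName "$(dbName)"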

Run multiple Jenkins jobs from a single Jenkinsfile?

I have two jobs that upload a file to a server. Both jobs are the same except that they upload to different remote URLs.
Currently I am using two Jenkinsfiles in the same git repository for these jobs, but the files are almost identical; only the server URLs differ.
I tried using a single Jenkinsfile and passing the server URL as a parameter from a new parent job. The two child jobs run concurrently, and one of them succeeds while the other fails.
So my question is: can we run multiple jobs pointing to a single Jenkinsfile?
My approach works fine; I was making the silly mistake of not passing the workspace name as a parameter, so both child jobs were sharing the same workspace.
After setting the workspace name through a parameter, this approach works fine.
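A minimal Jenkinsfile sketch of that setup, assuming the parameter names, agent label, and upload command shown here (they are illustrations, not the asker's actual values):

    // Shared Jenkinsfile; each child job passes its own SERVER_URL and WORKSPACE_NAME.
    pipeline {
        agent {
            node {
                label 'linux'                                      // assumed agent label
                customWorkspace "ws/${params.WORKSPACE_NAME}"      // separate workspace per child job
            }
        }
        parameters {
            string(name: 'SERVER_URL', defaultValue: '', description: 'Remote URL to upload to')
            string(name: 'WORKSPACE_NAME', defaultValue: 'default', description: 'Unique workspace name per child job')
        }
        stages {
            stage('Upload') {
                steps {
                    // hypothetical upload command; replace with the real upload step
                    sh "curl -fT build/artifact.zip '${params.SERVER_URL}'"
                }
            }
        }
    }
    // A parent job could then trigger both children, e.g.:
    //   build job: 'upload-to-server-a', parameters: [
    //       string(name: 'SERVER_URL', value: 'https://server-a.example.com/upload'),
    //       string(name: 'WORKSPACE_NAME', value: 'server-a')]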

Jenkins Load Balancing - Automatic routing on other free system

I would like to describe the following problem.
We have several test systems that previously had the problem of Jenkins jobs starting on them simultaneously. I would like to avoid this with some kind of detection; it is about distributing the started Jenkins jobs across our test machines.
Example:
Test 1 is busy running for a customer; Test 2 should recognize this.
For example, if test-1 is occupied by Job 1, this should be recognized when Job 2 starts, and Job 2 should then be automatically routed to one of the free test machines.
Manage Jenkins > Manage Nodes > Node > Configure
Set the same label name on the different test nodes.
In each job, set "Restrict where this project can be run" to that new label name.
(You must install the 'Least Load' plugin, so that builds are routed to the least loaded node carrying the label.)
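For a Pipeline job, the equivalent of "Restrict where this project can be run" is a sketch like the following, where 'test-pool' is an assumed label applied to all interchangeable test machines; with the Least Load plugin installed, each build then goes to the least busy node carrying that label:

    pipeline {
        agent { label 'test-pool' }       // any free node with this label may take the build
        stages {
            stage('Run test') {
                steps {
                    sh './run-tests.sh'   // hypothetical test command
                }
            }
        }
    }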

one version number to unite them all

I have multiple build jobs for a project, e.g.:
projectA is built with different parameters for SIT, UAT, Staging, Prod DC1 and Prod DC2.
I use the build ID within the code for cache busting JS and CSS files.
However, there is a little problem here.
I have multiple build IDs for Prod DC1 and DC2.
for example:
DC1: apple.com/me.js?v=45
DC2: apple.com/me.js?v=78
I need one ID to unite them all, so that my apple.js?v=blah won't differ between DC1 and DC2. I am also using a CDN, so this might become a bigger problem.
How can I do this in Jenkins?
If all the jobs are connected in an upstream/downstream chain, create a version label parameter in the first job and pass this label as a parameter to each downstream job, down to the last job.
You can use this as the unique label from the first job to the last.
Use the Build Name Setter plugin to set the build name to this unique label in all the jobs, so that it is easy to identify which build belongs to which label.
To get full visibility of the jobs, use the Delivery Pipeline plugin.
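In Pipeline form the same idea could look roughly like this (job names and the label format are assumptions; with FreeStyle jobs you would pass a string parameter downstream and let the Build Name Setter plugin set the name):

    // Upstream job: create one label and hand it to every downstream deployment job.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    script {
                        env.VERSION_LABEL = "1.0.${env.BUILD_NUMBER}"   // one label shared by all environments
                        currentBuild.displayName = env.VERSION_LABEL     // what Build Name Setter does for FreeStyle jobs
                    }
                    // each downstream job receives the same label and can use it for cache busting, e.g. me.js?v=${VERSION_LABEL}
                    build job: 'deploy-prod-dc1', parameters: [string(name: 'VERSION_LABEL', value: env.VERSION_LABEL)]
                    build job: 'deploy-prod-dc2', parameters: [string(name: 'VERSION_LABEL', value: env.VERSION_LABEL)]
                }
            }
        }
    }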

versioning and deployment of application configuration files to server

We use VisualSVN for version control. I have a few cloud web servers where my websites are running.
I would like to create some repositories for the websites' content. I check the files out, edit them in a local editor (Notepad++), and check them back in to SVN. When I check in to VisualSVN, I would like the files to be deployed to the web servers' docroot, and in some cases I would also like to restart the web server.
Is this possible using Jenkins plus deployment plugins? I am very new to Jenkins; can somebody help me with some information on how to achieve this?
It is one of the scenarios Jenkins is designed for (Continuous Delivery, a.k.a. CD). Your plan might look like this:
Get a new instance of Jenkins up/running (for experiments) (if you're familiar with Docker it is one of the best ways to experiment with Jenkins);
Configure Subversion Plugin in Jenkins (integration with SVN);
Set up your first FreeStyle job in Jenkins that polls your VisualSVN server for changes (the things you check in to SVN) and learn how that works (a poll schedule of * * * * * checks your source control every minute, great for experiments);
Set up your second FreeStyle job that connects to one of your web servers (probably via SSH) and creates a file (a simple "touch hello_world.log" is great to start with) in a special folder dedicated to that kind of test (DO NOT MESS WITH YOUR PRODUCTION CONTENT FOLDER(s));
Set up your third FreeStyle job that combines what you learned from the first two jobs, still writing only to a test folder;
Compare the job output with your production deployment expectations (e.g. files are in place, content is processed the right way, configuration files look good, etc.);
Try it out on one of the production web servers, one folder/site at a time;
Apply your newly crafted delivery pipeline to the rest of servers/sites;
Learn how to back up your Jenkins instance and actually make your first backup;
Try to restore your Jenkins instance from the backup made in the previous step;
Decide whether it is okay for you to maintain your own Jenkins instance or whether you would be better off with a hosted version (e.g. from CloudBees Inc.);
Learn more about Pipeline in Jenkins and possibly (because it is not immediately obvious) migrate your FreeStyle job(s) to the Pipeline DSL and/or a Jenkinsfile (a rough sketch follows this list);
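As a rough illustration of steps 3-5 (and of what the later Pipeline migration could look like), here is a sketch where the repository URL, credentials ID, host, and target folder are all placeholder assumptions:

    pipeline {
        agent any
        triggers {
            pollSCM('* * * * *')               // poll SVN every minute, for experiments only
        }
        stages {
            stage('Checkout') {
                steps {
                    checkout([$class: 'SubversionSCM',
                              locations: [[remote: 'https://visualsvn.example.com/svn/website/trunk',
                                           credentialsId: 'svn-creds']]])
                }
            }
            stage('Deploy to test folder') {
                steps {
                    // copy content to a dedicated test folder, NOT the production docroot
                    sh 'rsync -av --delete ./ deploy@web01.example.com:/var/www/deploy-test/'
                    // optional restart once the pipeline is trusted:
                    // sh 'ssh deploy@web01.example.com "sudo systemctl reload apache2"'
                }
            }
        }
    }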
At times you might need to go back to the "Get Started with Jenkins" manual and look up ideas or answers; that is okay. Do not give up, and feel free to post your questions here at SO.
Hope these ideas will help you to get started.
