How to replicate Jenkins setup via automation

I have a Jenkins setup running in production, and I want to automate the Jenkins setup (installation) along with all the jobs that are configured in it.
One crude way I can think of is to copy the whole jobs directory to the new Jenkins setup.
I want to know how other people in the industry deal with this problem.

I have used the ThinBackup plugin to move jobs, users, and plugins. You can make a full backup and restore it to the new server. The plugin is not perfect and is up for adoption. I had issues with the restore. I ended up using the plugin only for creating the archive, but then I manually copied the folders (users, jobs, plugins, nodes, email-templates, secrets, JENKINS_HOME files) from the archive to the new server.
Before creating the archive or copying the jobs, ensure that no more than 30 builds per job are kept; this will keep your archive small. I have seen 5000+ builds per job, which were totally unnecessary and were blocking the creation of the archive.
When you create or restore the archive, or copy files, the server should be in quiet mode, with no builds executing; you can enable it via:
http://<jenkins.server>/quietDown
After you copy the files or restore the archive, you should restart Jenkins or, even better, restart the server.
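As a rough sketch of that sequence in Python (the host, credentials, and paths below are placeholders, and depending on your security settings the POSTs may also need a CSRF crumb):

import shutil
from pathlib import Path
import requests

JENKINS_URL = "http://jenkins.example.com"   # placeholder
AUTH = ("admin", "api-token")                # placeholder user + API token
SRC = Path("/backup/thinbackup-archive")     # extracted archive or old JENKINS_HOME
DST = Path("/var/lib/jenkins")               # new server's JENKINS_HOME

# 1. Stop scheduling new builds on the target server.
requests.post(f"{JENKINS_URL}/quietDown", auth=AUTH).raise_for_status()

# 2. Copy the folders listed above (plus top-level JENKINS_HOME *.xml files if needed).
for folder in ("users", "jobs", "plugins", "nodes", "email-templates", "secrets"):
    shutil.copytree(SRC / folder, DST / folder, dirs_exist_ok=True)

# 3. Restart Jenkins; /safeRestart waits for running builds to finish.
requests.post(f"{JENKINS_URL}/safeRestart", auth=AUTH).raise_for_status()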
Another option is to use rsync, as mentioned here. I am not sure what the OS of your Jenkins server is; if it is Linux, you can check out this guide that I have written.
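If rsync fits your setup (Linux to Linux), the transfer itself could be driven like this; the host and paths are placeholders, and the server should already be in quiet mode as described above:

import subprocess

# Mirror the old JENKINS_HOME to the new server over SSH.
# -a keeps permissions/timestamps, -z compresses, --delete makes an exact mirror.
subprocess.run(
    ["rsync", "-az", "--delete",
     "/var/lib/jenkins/",
     "new-jenkins.example.com:/var/lib/jenkins/"],
    check=True,
)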

Related

DevOps Build and Pipeline Design Pattern - Need some advice on deploying many individual files

We have a PoC on deploying a file to an old mainframe. There are many types of deployments that we do but this question focuses on individual files. We are able to SSH into the mainframe and we have a deployment pipeline with the steps needed to get one file into the correct location.
The problem is we have over 54,000 of these individual files. During a release we may deploy as few as 1-5 files, or a large deployment may be 250 files. Each of them will have a different source and target destination. Some of them may be sourced from the same folder and deployed to the same folder, but that is not guaranteed.
We can make the assumption that the files are immutable. There are issues on both build and release to consider:
Build - What is the artifact? Do we use one artifact for each release that could contain 1-250 files? We don't want to have 250 build scripts for a release, that much we know.
Release - How do we use the pipelines? If you batch them together, is it a one-click deploy to that environment? How would you determine if someone added a file to the release? I guess we would need a new build that would create a new pipeline?
There are a few other things that come up, like needing to check the status in our change management system to confirm that the ticket for that file is in an approvable status. That is currently a deployment step.
I'm not sure whether this is the "answer", but this is our take on it so far:
The Artifact
We are going to create a "release" data file. In this file there will be a list of the files going with each deployment. We will organize the files by product line and create a branch of all files for a specific product. Then the build will read the data file and create the artifact from the list of files related to that release. We will also include the data file in the artifact.
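The exact format is not pinned down above, but as a sketch, each row of the release data file could carry a source path, target destination, deployment time, and change ticket, for example:

source_path,target_path,deploy_time,ticket
products/productA/member1.src,PROD.LIBA.SRC(MEMBER1),13:00,CHG0012345
products/productB/member7.src,PROD.LIBB.SRC(MEMBER7),19:00,CHG0012399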
Deployment
We will create a Parent/Child release process. The Parent script will loop through the data file and call the Child script. The Child script will deploy an individual file, which is represented by a row in the data file. To deploy to Production, only the Parent will be deployed; the Child will never be deployed individually.
Multiple Deployment Times/Dependencies
We have a requirement to deploy certain files at certain times. One production file deployment may be at 1 PM and another at 7 PM in the same release. To accommodate this, we will include the deployment time in the data file. After each file is deployed, we will somehow keep track that it has been deployed.
Change Management
We will do our change management system check in each Child script to make sure the file is ready to deploy. If an individual file is not approved, we will not stop processing; we will finish the deployment for any other files in the list that are approved, and then, as the last step in the deployment, we will fail the deploy. We need to make the "tracking" available to the teams so they can see what caused the deploy to fail.
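A minimal sketch of that Parent loop in Python; the data-file columns, the change-management lookup, and the file copy are stand-ins for whatever your tooling actually provides:

import csv
import sys

def ticket_is_approved(ticket):
    # Stand-in for the change management system check (e.g. a REST call).
    return True

def deploy_file(source_path, target_path):
    # Stand-in for the Child step: copy one file to the mainframe over SSH/SFTP.
    print(f"deploying {source_path} -> {target_path}")

failures = []
with open("release.csv", newline="") as f:
    for row in csv.DictReader(f):
        if not ticket_is_approved(row["ticket"]):
            failures.append((row["source_path"], "ticket not approved"))
            continue                      # keep going; fail the deploy at the end
        try:
            deploy_file(row["source_path"], row["target_path"])
        except Exception as err:
            failures.append((row["source_path"], str(err)))

# Fail as the last step and surface what was skipped or broken.
if failures:
    for path, reason in failures:
        print(f"NOT DEPLOYED: {path} ({reason})", file=sys.stderr)
    sys.exit(1)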
Making some assumptions here, and this is the happy path, but perhaps it will help get you to the ultimate solution.
Have a master branch that has a products folder. This folder would then have subfolders for each product, which hold the files:
master/
  products/
    productA
    productB
    productN
The dev team would work on files in separate fix branches, then merge into master via pull requests. You can set up policies and gates for auditing.
Create a build pipeline with a PowerShell script task that checks for deltas (possible example) in master and copies/publishes only those changes to an artifact destination folder with the same product subfolder layout (see the sketch after these steps).
Create a release pipeline that has a stage for each product and/or destination path on the mainframe. Each stage would have a custom task that copies the files from the appropriate product folder to the destination via SSH. You could even create a task group that gets re-used, then just use variables for folder paths, etc. NOTE: There will be quite a few stages, but that's what release pipelines are for :)
Schedule the release pipeline to run at the desired times. You can set up notifications on failures so someone or some process can investigate/retry, etc.
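For the delta step in the build pipeline above, a rough Python equivalent (the original suggests PowerShell) might look like this, assuming the build runs from a git checkout and HEAD~1 stands in for however your pipeline records the last built commit:

import shutil
import subprocess
from pathlib import Path

STAGING = Path("artifact-staging")   # published as the build artifact

# Names of files changed since the previous build.
changed = subprocess.run(
    ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for name in changed:
    if not name.startswith("products/") or not Path(name).is_file():
        continue                                   # product files only; skip deletions
    target = STAGING / name                        # keep the same subfolder layout
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(name, target)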

Jenkins job build history lost after migrating workspace to a new hard drive

I am currently running Jenkins ver. 2.121.2. I am in charge of migrating all Jenkins builds and workspaces to a new location, since we have many projects filling our system's SSD, and we decided to put the workspace and build locations on an HDD. Migrating all the data wasn't hard, but the unexpected side effect is that all Jenkins jobs have lost their build history. I thought it would be as simple as copying all files over to the new location.
I changed only 2 parameters in the /var/lib/jenkins/config.xml file. These are from:
<workspaceDir>${ITEM_ROOTDIR}/workspace</workspaceDir>
<buildsDir>${ITEM_ROOTDIR}/builds</buildsDir>
To...
<workspaceDir>/wd-red1/jenkins_workspace/${ITEM_FULLNAME}</workspaceDir>
<buildsDir>/wd-red1/jenkins_builds/${ITEM_FULL_NAME}</buildsDir>
Then I copied all files from /var/lib/jenkins/workspace and /var/lib/jenkins/jobs, respectively, to the locations above. All went well except for the job build histories.
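With that buildsDir setting, Jenkins looks for each job's build records under /wd-red1/jenkins_builds/<job full name>/, so a copy along these lines is what the history migration amounts to (a sketch, assuming a flat job layout, default directory names, and Jenkins stopped during the copy):

import shutil
from pathlib import Path

OLD_JOBS = Path("/var/lib/jenkins/jobs")
NEW_BUILDS = Path("/wd-red1/jenkins_builds")

# jobs/<name>/builds/* moves to /wd-red1/jenkins_builds/<name>/*
# (jobs inside folders would need their full name as the subpath).
for job in OLD_JOBS.iterdir():
    builds = job / "builds"
    if builds.is_dir():
        shutil.copytree(builds, NEW_BUILDS / job.name, dirs_exist_ok=True)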
My question is: Can I somehow import the job build history too?
If your old Jenkins instance is still running, install "ThinBackup" and take a full backup. Copying the backup to the new server and restoring it will give you all the configuration as it is on the old server. I tried this some time back and it worked like a charm.
Note: The ThinBackup plugin supports both backup and restore.

Sharing Directory between builds aka sharing node_modules

yarn takes a lot of time on the VSTS hosted agent due to more than a few dependencies.
Our monorepo contains three somewhat similar but distinct apps which share a lot of node dependencies.
Each app is huge and takes considerable time to build, so we build each app individually based on a path filter.
The release contains artifacts from all three builds.
What I need:
download node modules once
use the same downloaded dependencies in three different conditional builds
release the app after all or any build, with the latest artifacts for each build
Any pointers on how to configure this?
There isn't any way to do this with the Hosted Agent. The Hosted Agent is a group of virtual machines hosted on Azure. Every time you queue a new build, it will initialize an available agent from these machines with a clean environment, so the build machine you use may be different for every build. When the build is finished, the files downloaded/generated during the build are also cleared. So there isn't any way to share files between builds.

Jenkins Upgrade: What configuration should I be concerned about in the Jenkins WAR directory?

I am trying to automate Jenkins upgrades so they do not have to be hands-on. Some documentation recommends creating a batch file with instructions on the machine running Jenkins and creating a scheduled task to run the batch job. The site I found with a batch file is here, where it says:
It does delete the complete exploded war file from the deployment location, so be careful if you save any configuration files to that directory.
What configuration files would I have to worry about? No one I've talked to at my company knows of any configuration files held there, and they seem to think we have a pretty default setup, so what could I look for manually that would tell me whether or not I should be concerned?
We are running Jenkins on a Windows virtual box, I believe with Jenkins running as a service.
Alternatively, if the above method is not the easiest or best way to automate Jenkins upgrades, does anyone know a better way?
You can ignore this warning. I've never seen anything storing configuration files in that directory. It is intended to be used as a cache only.
If unsure, check your existing war directory for any files with timestamps newer than the installation time.
Here, on a busy Jenkins master, no files have been added or modified there over a period of several months (since the initial war file explosion at installation time).
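A quick way to run that check, as a sketch (the war directory path is a placeholder; on a Windows service install it usually sits inside the Jenkins home/installation directory):

from pathlib import Path

WAR_DIR = Path(r"C:\Program Files\Jenkins\war")    # placeholder; point at your exploded war dir

# Treat the oldest file as the explosion time and flag anything written much later.
mtimes = {p: p.stat().st_mtime for p in WAR_DIR.rglob("*") if p.is_file()}
explosion = min(mtimes.values())
for path, mtime in mtimes.items():
    if mtime > explosion + 3600:                   # one-hour grace window
        print(path, "was modified after installation")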

Restore Shift+Deleted builds in Jenkins

Is there a way to recover builds that were shift+deleted in Jenkins?
I saw the other thread, but I think it was only for delete and not for Shift+Delete.
Please help. I lost a few builds from my production job.
Even if the build was deleted from disk, you should still have the build log, and you can get the changelist at which the build was made and rebuild it.
Assuming you're running Jenkins on Windows and you performed the Shift+Delete on the job folder itself in Windows Explorer:
No.* Jenkins stores all job data in that directory and Windows deletes the files immediately using that method. You will need to restore the jobs from backup if you have them.
*Note: Unless you use a file system level recovery tool, which is out of scope for this exchange. You could ask over on Superuser.
