I would like to describe the following problem.
We have several test systems, and previously multiple Jenkins jobs would start on the same machine simultaneously. I would like to avoid this with some kind of detection: the goal is to distribute started Jenkins jobs across our test machines.
Example:
Test machine 1 is busy at the customer; Test machine 2 should recognize this.
For example, if test machine 1 is occupied by Job 1, this should be detected when Job 2 starts, and Job 2 should automatically be routed to one of the free test machines.
Manage Jenkins > Manage Nodes > Node > Configure
Assign the same label name to the different nodes.
In the job configuration, set "Restrict where this project can be run" to the new label name.
(You must install the Least Load plugin, which assigns jobs to the least-loaded node instead of the default behaviour of preferring the node used last time.)
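For pipeline jobs, the same routing can be expressed directly in the Jenkinsfile. A minimal sketch, where the shared label `shared-test-pool` and the script `run-tests.sh` are hypothetical names:

```groovy
// Declarative Jenkinsfile sketch. Any node carrying the 'shared-test-pool'
// label can be picked; with the Least Load plugin installed, the least
// busy node is chosen instead of the last node used.
pipeline {
    agent { label 'shared-test-pool' }   // hypothetical shared label
    stages {
        stage('Test') {
            steps {
                sh './run-tests.sh'      // hypothetical test entry point
            }
        }
    }
}
```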
I need suggestions for creating a YAML pipeline for independently deploying a single-tenant .NET MVC app for different clients.
Application Type: .Net MVC Web API
Web Server: IIS 10
Database: MS SQL Server
Environment: Private Data Center
Number of Clients/tenant: 150+
Deployment: For each client/tenant, a separate IIS Web App is created. Also, a separate database is created for each client.
Expected Deployment: Manual mode (CD is not being considered because CI and a test suite are not available yet).
Could you guide me on the following points?
How should the pipeline be created so that I can use different configuration parameters per client/tenant (e.g. database name and connection string), while keeping a common script for deploying the generated release?
Should I create a single pipeline, or should there be multiple?
How should I use releases, stages, and jobs effectively for such a scenario?
If there are good articles on manual, independent deployment for each client, I would like to study them.
Generally, if you want to deploy to different environments, you can set up a stage for each environment in a pipeline. However, with 150+ different configurations, setting up 150+ stages in the pipeline would be painfully tedious.
If all the deployments have the same deployment steps (same scripts, same input parameters) but different values for those parameters, you can try the multi-job configuration (the matrix strategy) in the pipeline.
This way, you do not need a stage or a job for each configuration; you just need one stage or job containing the common deployment steps. You do, however, need to enumerate all 150+ configurations. When the pipeline runs, it generates 150+ matrix jobs with the same deployment steps but different input parameter values.
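As a sketch, such a matrix job in Azure Pipelines YAML might look like the following; the client names, the variable names (clientDb, connString), and the deploy.ps1 script are illustrative assumptions:

```yaml
jobs:
- job: DeployTenant
  strategy:
    maxParallel: 5          # cap on how many tenants deploy concurrently
    matrix:
      client_001:
        clientDb: Client001Db
        connString: 'Server=sql01;Database=Client001Db'
      client_002:
        clientDb: Client002Db
        connString: 'Server=sql01;Database=Client002Db'
      # ...one entry per client, 150+ in total
  steps:
  # the same common steps run for every tenant; only the values differ
  - script: powershell -File deploy.ps1 -DbName "$(clientDb)" -Conn "$(connString)"
    displayName: Deploy one tenant
```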
[UPDATE]
Just curious: with this multi-job configuration, all 150+ installations will be triggered in one go, right?
After the pipeline run is triggered, all 150+ matrix jobs will be queued. However, normally not all 150+ jobs start running in parallel at the same time; it depends on the maxParallel value you set and how many available agents can be assigned to the run.
With that approach I can't choose to start the deployment for, say, only 5 of the clients.
If you want the deployment to be executed first for some clients and then for others, you can try using stages.
For example, in stage_1, execute the deployment job (multi-job configuration) for the first 5 clients. After stage_1, start stage_2 for the next group of clients, then stage_3, and so on.
You can use the dependsOn key to set the execution order of the stages, and the condition key so that a stage only runs when a specified condition is met.
For more details, see "Add stages, dependencies, & conditions".
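A sketch of that staged rollout, with hypothetical stage names and client groupings:

```yaml
stages:
- stage: Wave1                    # first 5 clients
  jobs:
  - job: Deploy
    strategy:
      matrix:
        client_001: { clientDb: Client001Db }
        client_002: { clientDb: Client002Db }
        # ...clients 3-5
    steps:
    - script: echo "deploying $(clientDb)"
- stage: Wave2                    # next group, only after Wave1
  dependsOn: Wave1
  condition: succeeded('Wave1')   # run only if Wave1 succeeded
  jobs:
  - job: Deploy
    strategy:
      matrix:
        client_006: { clientDb: Client006Db }
        # ...remaining clients of this wave
    steps:
    - script: echo "deploying $(clientDb)"
```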
I'm trying to deploy to a node using Jenkins, and even though the job recognises the node, when I attempt to run it, the job goes to pending and searches every environment for the node.
I've recently set up a new Jenkins job to deploy a Spring Batch project onto a server. We already have a job for another project that deploys to the same node, so the node is recognised, and when viewing that build it does list three jobs.
However, when trying to run this new job, it attempts to find the node against all of our existing labels (see the example output below), but doesn't find the actual node it should be running on.
The example I've used is NEW_BATCH_DEPLOYMENT; it is listed as having 3 jobs on the environment: two are new jobs that haven't been run, and one is a job that ran just before attempting the batch job and succeeded.
For debugging, we've attempted to deploy with NEW_BATCH_DEPLOYMENT_2, which gives us the error "can't find node with label NEW_BATCH_DEPLOYMENT_2", and if we remove the node name, it simply runs on one of our default nodes.
Has anyone seen something similar to this, or have any idea of a solution? I've compared the new job against the working job and the only differences are the file paths for where to deploy to, and the git url to pull the projects down.
Jenkins version : 2.181
(pending — ‘Env_1’ doesn’t have label ‘NEW_BATCH_DEPLOYMENT’; ‘Env_2’ doesn’t have label ‘NEW_BATCH_DEPLOYMENT’; ‘Env_2’ doesn’t have label ‘NEW_BATCH_DEPLOYMENT’; …)
I'd expect it to deploy to the node, but it just hangs as pending and never reaches the stage where it would output to the Jenkins console.
As mentioned, the other job with a similar configuration works.
So just to confirm: is "NEW_BATCH_DEPLOYMENT" the exact label you are using, or is it the name of the node? The label should be set in the node configuration, under the "Labels" section, with no extra characters other than the label name.
I've had issues where Jenkins can't find the node label when there are spaces in the label (either on the job side or on the node configuration side).
If the labels are correctly set up, it could also be that the node assigned to "NEW_BATCH_DEPLOYMENT" is offline.
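If you have Script Console access (Manage Jenkins > Script Console), you can list every node with its labels to verify the label really exists somewhere; a small sketch:

```groovy
// Print each configured node with its label string and whether it is
// online, so a missing or misspelled label shows up immediately.
Jenkins.instance.nodes.each { node ->
    def online = node.toComputer()?.isOnline()
    println "${node.nodeName} | labels: '${node.labelString}' | online: ${online}"
}
```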
Okay, we fixed it.
It turns out that at the node level you can restrict which jobs a node is allowed to run, and when this node was set up, it was restricted to running only the one job. The catch is that the only way to see this is with an admin login.
If anyone else has this issue, I'd highly recommend checking the settings on the node for restrictions, rather than the job. You will need a Jenkins admin to do this.
I have multiple build jobs for a project, e.g.:
projectA is built with different parameters for SIT, UAT, Staging, Prod DC1, and Prod DC2.
I use the build ID within the code for cache busting JS and CSS files.
However, there is a little problem here.
I have multiple build IDs for Prod DC1 and DC2.
for example:
DC1: apple.com/me.js?v=45
DC2: apple.com/me.js?v=78
I need one ID to unite them all, so that my me.js?v=blah won't differ between DC1 and DC2. I am also using a CDN, so this could become an even bigger problem.
How can I do this in Jenkins?
If all the jobs are connected in an upstream/downstream chain, create a version label parameter in the first job and pass it as a parameter to each downstream job, through to the last job.
You can use this as the unique label from the first job to the last.
Use the Build Name Setter plugin to set the build name to this unique label in all the jobs, so it is easy to identify which build belongs to which label.
For full visibility of the jobs, use the Delivery Pipeline plugin.
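In pipeline form, the hand-off can be sketched like this; the downstream job names (deploy-prod-dc1/dc2) and the VERSION_LABEL parameter name are assumptions:

```groovy
// Upstream Jenkinsfile sketch: one VERSION_LABEL is defined here and the
// same value is passed to both DC deployment jobs, so DC1 and DC2 end up
// with identical cache-busting versions.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    // hypothetical: use the upstream build number as the shared version
                    env.VERSION_LABEL = "${env.BUILD_NUMBER}"
                }
            }
        }
        stage('Deploy both DCs') {
            steps {
                build job: 'deploy-prod-dc1',
                      parameters: [string(name: 'VERSION_LABEL', value: env.VERSION_LABEL)]
                build job: 'deploy-prod-dc2',
                      parameters: [string(name: 'VERSION_LABEL', value: env.VERSION_LABEL)]
            }
        }
    }
}
```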
I'd like to run several builds concurrently in Jenkins for the same job. I run at maximum 3 builds concurrently. I want each build to run with a parameter that must be unique from a pool for parameters. For instance, pool=[1, 2, 3]: The 1st build picks "1", 2nd picks "2" and the 3rd picks "3".
I must ensure that different builds can't pick the same parameter.
After building, the parameter is available again.
How can I do it?
Alternatively: how can I count the number of builds currently running in this project and pass that as a parameter?
First, enable the "Execute concurrent builds if necessary" checkbox so that the same job can build concurrently; it is worth reading its inline help carefully first.
Because builds run in isolated environments, they cannot easily share data with each other.
One solution is to manage the pool in a program of your own and trigger the builds through the buildWithParameters endpoint of the Jenkins REST API:
Add a string parameter in the job's config.
POST the string parameter to http://$JENKINS_SERVER_URL/job/$JOB_NAME/buildWithParameters.
This may be the most convenient way if no suitable plugin is available.
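A sketch of that trigger call; the server URL, job name, credentials, and the POOL_ID parameter are all assumptions to be replaced with your own:

```shell
# Trigger a concurrent build with a value picked from an external pool.
JENKINS_URL="http://jenkins.example.com"   # hypothetical server
JOB_NAME="my-concurrent-job"               # hypothetical job
POOL_ID="2"                                # unique value handed out by your pool manager

TRIGGER_URL="${JENKINS_URL}/job/${JOB_NAME}/buildWithParameters"
echo "Would POST to: ${TRIGGER_URL} with POOL_ID=${POOL_ID}"

# The real call (commented out; needs credentials and, depending on your
# security setup, a CSRF crumb):
# curl -X POST -u user:apitoken "${TRIGGER_URL}" --data-urlencode "POOL_ID=${POOL_ID}"
```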
I found a plugin on GitHub and asked the author to publish it. It works well and solves my problem:
Jenkins Parameter Pool Plugin
For organizational reasons we have to move Jenkins to new servers. Since we are on an old version, an update is also needed at the same time. What should we consider? Also, I am not sure whether we need to configure all jobs on the new instances manually or whether there is a faster way to clone them from the existing instance. We have around 300 jobs, one master, and 7 slaves. We need to set up three masters: one with four slaves and two with three slaves each. The 300 jobs will be split between the three masters depending on their category.
Thanks !!
If I wanted to move Jenkins jobs to 3 different servers with their own plugins - I would:
Create those 3 instances of Jenkins and configure them separately. Make sure the new or re-set-up slaves are ready to handle the new requirements.
Create 3 separate lists of jobs (split from the original list).
Determine which jobs should be run by which Jenkins
Install all the common plugins used by all/most jobs on all 3 Jenkins instances.
Go to the original ${JENKINS_HOME}/jobs and run
tar czvf jobs.tgz -T jobs_list.txt
3 times, once per new Jenkins instance, where jobs_list.txt contains that instance's job directories.
Finally, unpack each job archive into the corresponding new ${JENKINS_HOME}/jobs directory.
Run tests and install any missing plugins afterwards, if needed. In my opinion, access permissions should be set separately on each Jenkins instance.
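Assuming GNU tar, the archive-and-unpack round trip for one instance can be sketched as follows; the job names and the throwaway temp directories are illustrative stand-ins for the real ${JENKINS_HOME} paths:

```shell
# Demo of the per-instance job migration using throwaway directories.
OLD_HOME=$(mktemp -d)   # stands in for the old ${JENKINS_HOME}
NEW_HOME=$(mktemp -d)   # stands in for a new master's ${JENKINS_HOME}

# Fake two job directories as they would exist under jobs/
mkdir -p "$OLD_HOME/jobs/projectA" "$OLD_HOME/jobs/projectB"
echo '<project/>' > "$OLD_HOME/jobs/projectA/config.xml"
echo '<project/>' > "$OLD_HOME/jobs/projectB/config.xml"

# This instance should receive projectA only
printf 'jobs/projectA\n' > "$OLD_HOME/jobs_list.txt"

# Archive the selected jobs, then unpack them on the "new master"
tar -czf "$OLD_HOME/jobs.tgz" -C "$OLD_HOME" -T "$OLD_HOME/jobs_list.txt"
tar -xzf "$OLD_HOME/jobs.tgz" -C "$NEW_HOME"

ls "$NEW_HOME/jobs"    # prints: projectA
```

The -T flag reads the member names from the list file, so each instance only gets its own share of the 300 jobs.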