I would like to share my problem and ask for help.
I need to extend an existing script so that its launches are separated by content: based on the information in the files, the script must be run with either the 'master' or the 'track' argument.
The Jenkins job runs on Mondays for the 'master' argument. I use:
H H(0-5) * * 1
For 'track', it should be launched on Wednesdays.
Here are my questions:
How can I add another scheduled run of the same Jenkins job on a different day (one schedule for 'master' and another for 'track')?
And how can I add something like a switch in the script so that the same Jenkins job can start either variant? A sketch of one option follows.
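One hedged option (my assumption, not part of the original setup): if the job takes a parameter, say TARGET, and the Parameterized Scheduler plugin is installed, a declarative pipeline can carry two cron lines that pass different values on different days:

// Minimal sketch, assuming a hypothetical TARGET parameter, the Parameterized
// Scheduler plugin (parameterizedCron), and a placeholder script name.
pipeline {
    agent any
    parameters {
        choice(name: 'TARGET', choices: ['master', 'track'], description: 'Which content to process')
    }
    triggers {
        parameterizedCron('''
            H H(0-5) * * 1 %TARGET=master
            H H(0-5) * * 3 %TARGET=track
        ''')
    }
    stages {
        stage('Run') {
            steps {
                sh "./my_script.sh ${params.TARGET}"   // placeholder script; it branches on the argument
            }
        }
    }
}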
I would like to execute my job on a remote node, passing the domain name as the node argument.
Does anyone know how to build this Jenkinsfile?
I can't get it to execute the following way:
node('jenkins.mydomain.com') {
    build 'remote_exec'
}
There are actually two major issues within your two lines of code :-)
node('jenkins.mydomain.com') {
This will run on a build agent with the label jenkins.mydomain.com. If you have exactly one build agent carrying that label, it will work. But the label is not the hostname! (Note: I'm not entirely sure dots are allowed in labels, but you could just as well call it whateverserver.)
So this would allocate an executor slot (to run the code within the closure) on a build agent matching the given label...
build 'remote_exec'
and then trigger yet another build for the job called remote_exec. This job (assuming it exists and you don't have this as a third issue^^) will then be built on an agent matching its own labels, ignoring the one given in the node(label) step.
If you want the remote_exec job to run on a specific build agent only, then add the node step there!
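A minimal sketch of what that could look like, assuming remote_exec is itself a pipeline job and jenkins.mydomain.com is really configured as a label on the desired agent:

// Jenkinsfile of the remote_exec job itself: pin its work to the agent
// labeled 'jenkins.mydomain.com' (a label, not a hostname).
node('jenkins.mydomain.com') {
    echo "Running on ${env.NODE_NAME}"
    // ... the actual remote work goes here ...
}

The upstream job then only needs build 'remote_exec', without wrapping it in a node step.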
We are using Jenkins Pipeline Multibranch Plugin with Blue Ocean.
Through my reading, I believe it is quite common to tie your project's build number to the Jenkins run, as this allows traceability from an installed application through to the CI system, then to the change in source control, and then on to the issue that prompted the change.
The problem is that the run number restarts at 1 for each branch. For a project with multiple branches, it therefore seems impossible to guarantee a unique build number.
You can get the Git branch name from $GIT_BRANCH and combine it with $BUILD_NUMBER to make an ID that is unique across branches (as long as your company doesn't, say, get taken over by a large corporation that migrates you to another Jenkins server and resets all the build numbers; to protect against that, you might want to use $BUILD_URL instead).
The only snag is that $GIT_BRANCH contains the / character, plus whatever characters you used when naming the branch, and these may or may not be permitted in all the places where you want to use the ID ($BUILD_URL will also contain characters like : and /). If this is an issue, one workaround is to delete the unwanted characters with tr:
export MY_ID=$(echo "$GIT_BRANCH-$BUILD_NUMBER" | tr -dc 'A-Za-z0-9-')
(-dc means delete the complement of these characters, so A-Z, a-z, 0-9 and - are the characters you want to keep.)
Maybe instead of a unique (global numeric) build number you might want to try a unique (global) build display name?
According to "pipeline syntax: global variables reference" currentBuild.displayName is a writable property. So you could e.g. add additional information to the build number (in order to make it globally unique) and use that string in subsequent artifact/application build steps (to incorporate that in the application's version output for your desired traceability), e.g. something like:
currentBuild.displayName = "${env.BRANCH_NAME}-${currentBuild.id}"
Using the build's schedule or start time (currentBuild.timeInMillis) formatted as a readable date, or using the SCM revision, might also be useful, resulting in e.g. "20180119-091439-rev149923".
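A small sketch of building such a display name (the date format and the use of env.GIT_COMMIT are my assumptions; adapt them to your SCM):

// Scripted pipeline snippet: derive a readable, globally unique display name.
def stamp = new Date(currentBuild.timeInMillis).format('yyyyMMdd-HHmmss', TimeZone.getTimeZone('UTC'))
def rev   = env.GIT_COMMIT ? env.GIT_COMMIT.take(7) : 'norev'   // GIT_COMMIT is only set after a Git checkout
currentBuild.displayName = "${stamp}-rev${rev}"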
See also:
https://groups.google.com/forum/#!msg/jenkinsci-users/CDuWAYLz2zI/NLxwOku4AwAJ
https://support.cloudbees.com/hc/en-us/articles/220860347-How-to-set-build-name-in-Pipeline-job
One way is to have a job that is called from all branches and to use its build number. That job can be just a normal pipeline job with a dummy Jenkinsfile such as echo "hello". Then call it like this:
def job = build job: 'build number generator', quietPeriod: 0, parameters: [
    string(value: "${BRANCH_NAME}-${BUILD_NUMBER}", name: 'UID')
]
def BNUMBER = job.getNumber().toString()
currentBuild.displayName = "build #" + BNUMBER
echo BNUMBER
I'm not sure whether the UID parameter is strictly needed, but it forces every call into the "build number generator" job to be unique, so Jenkins won't coalesce builds that happen at the same time into a single "build number generator" run.
You can use an external service to manage a unique build number for your project. It helps to get unique build numbers across branches and across CI servers too.
https://www.nextbuildnumber.net/
I need to know if there is a plugin of some sort that lets you select a node from a Jenkins job and use that node's name as a parameter to be passed to a Windows batch command.
I have played with the Configuration Matrix using the Elastic Axis or Slaves axis plugins, where you can tick the node names.
But these all go and execute the Windows batch command on that selected node.
I don't want to execute it on that server, but rather on the main node, and only pass the value of the slave/label to the Windows batch command.
I was able to do it as described here, but that involves two jobs and a Groovy script to interrogate the slave/node configuration, write it to a properties file, and pass that properties file to the next job:
Jenkins: How to get node name from label to use as a parameter
I need to do this for about 30 jobs, so I would like to do it all in one job; with the solution in the link above, 30 jobs would double to 60 jobs and maintenance would become a nightmare.
I also would not like to use a plain string parameter and hard-code the slave/node name, as that would not restrict the choice to the available slaves/nodes; any server name could be entered, and someone could, for example, mistype a name and point at a production server instead of a test server.
https://wiki.jenkins-ci.org/display/JENKINS/NodeLabel+Parameter+Plugin
After installing this plugin you will have the option to add a Node parameter to a Jenkins job. It will show a list of all slaves available on the current master.
You can use the Active Choices parameter plugin
https://plugins.jenkins.io/uno-choice/
with a little Groovy script that lists the node names in a parameter selection box.
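A minimal sketch of such a script (the parameter name NODE_NAME is my assumption); it offers only the agents actually attached to the controller, so nobody can type in a production server by mistake:

// Groovy script for an Active Choices parameter (hypothetical name: NODE_NAME):
// return the names of all agents known to this Jenkins instance.
import jenkins.model.Jenkins
return Jenkins.get().nodes.collect { it.nodeName }

The job itself can then stay on the main node and simply pass %NODE_NAME% to the Windows batch command as a plain value.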
I've had a dig around but can't find an elegant solution for what I want to do, so I hope some of you may be able to offer some suggestions. I've also asked this question on a jenkins forum, but no takers.
I want to be able to run a jenkins parent job with parameters that will feed down to triggered jobs, and then group all the job run results in a view dynamically.
The use case I'm trying to cover is: we have 10+ different Jenkins jobs that run suites of tests. I want to manage a run of all those jobs against a specific code branch on a specific test environment, and see the results (in one view) for only that run. The complication is that the same Jenkins job may also be run against another release or test environment, and I don't want to see those results.
We already have the parent job triggering children with parameters, but I can't figure out how best to group the results.
I know I can create filters for views, but the names of Jenkins jobs are static, and I want the view created at runtime, without having to build it myself. We do use the 'Set Build Description' plugin, so I could create a view that filters on a unique build description or something similar, but there doesn't seem to be a way to create views with filters programmatically.
Another consideration is clean-up: I wouldn't want a year's worth of views clogging the view list, so I need a way to clear out old runs too.
Any ideas to kick me off?
For grouping of reports you can just use simple logic instead of looking for a Jenkins plugin. Place all the result files (preferably XMLs) in a common folder/file server, and at the end of the execution of all the suites (jobs) trigger a common job which processes all the XML files and generates a common report. This way you get consolidated as well as individual reports.
I have done it using the Perf Publisher plugin, which processes the XMLs and gives a beautiful aggregated report.
Job 1 ----> Report 1 ----> Move report to the report folder
Job 2 ----> Report 2 ----> Move report to the report folder
Job 3 ----> Report 3 ----> Move report to the report folder
.
.
.
Job n ----> Report n ----> Move report to the report folder
So after completion of job n, trigger the report job, which operates on the "report" folder containing all the reports!
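A minimal sketch of that flow (the shared folder path and the aggregator job name are my assumptions):

// In each suite job, after the tests have produced their XML results:
node {
    sh 'cp target/reports/*.xml /mnt/shared/ci-reports/'   // assumed result and share locations
}

// In the orchestrating job, once all suites have finished:
build job: 'report-aggregator', wait: true                  // assumed name of the common report job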
Hope it helps!
I have a partial solution:
All jobs accept a parameter called VIEW_IDENTIFIER.
Parent job is kicked off with a unique VIEW_IDENTIFIER being set, and all the child jobs have that passed into them when run.
After all jobs have run, I edit a Jenkins view that has a Job Filter -> Parameterized Jobs Filter -> Name = VIEW_IDENTIFIER, Value = the unique ID set for that run.
This results in all jobs run with that unique ID being grouped in one single view for review.
The shame is that I have to edit that job filter manually. A rough sketch of automating it is below.
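One way the manual edit could be avoided (e.g. from the Jenkins script console or a system Groovy step); the view name and the child job names are placeholders:

// Create a per-run list view and add the child jobs to it explicitly,
// so no job-filter editing is needed.
import jenkins.model.Jenkins
import hudson.model.ListView

def viewName = "run-${System.currentTimeMillis()}"    // placeholder for the unique VIEW_IDENTIFIER
def view = new ListView(viewName)
Jenkins.get().addView(view)
['suite-a', 'suite-b', 'suite-c'].each { jobName ->    // placeholder child job names
    view.add(Jenkins.get().getItem(jobName))
}

Old views can later be removed with Jenkins.get().deleteView(view), which would cover the clean-up concern.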
I have two jobs which check out code from two different repositories (A and B respectively).
How can I have a single job which checks out the code from either A or B depending on a parameter, so that I can reduce the number of jobs in Jenkins? I tried the Subversion Release plugin, but it did not do what I need.
Thanks in Advance
Why don't you write a script and run it? Use a parameterized build and configure the parameter; then, based on that parameter, clone the appropriate repository from the script. A sketch of that idea follows.
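A minimal sketch (the repository URLs, the parameter name, and the use of the svn command line are my assumptions):

pipeline {
    agent any
    parameters {
        // hypothetical parameter that decides which repository to check out
        choice(name: 'REPO', choices: ['A', 'B'], description: 'Repository to check out')
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    // placeholder URLs; replace with the real repository locations
                    def url = (params.REPO == 'A') ? 'https://example.com/svn/repoA'
                                                   : 'https://example.com/svn/repoB'
                    sh "svn checkout ${url} source"   // plain svn CLI; the Subversion plugin's checkout step would also work
                }
            }
        }
    }
}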