I have tried the following to make parallel jobs wait in Jenkins using a Groovy script, but it does not wait.
def jobs = [:]
for (int i = 1; i <= 5; i++) {
    def component = i
    jobs[component] = {
        sleep(i) {
            echo "Waiting for ${i} seconds"
        }
    }
}
parallel jobs
Am I missing something, or is it totally wrong? I couldn't figure it out.
Thanks
All parallel blocks will start at the same time, but each one will sleep "i" seconds before the actual logic executes (the echo command will run "i" seconds after the parallel call).
Please try:
def jobs = [:]
for (int i = 1; i <= 5; i++) {
    def component = i // copy the loop variable so each closure captures its own value
    jobs[component] = {
        sleep(component)
        echo "Waiting for ${component} seconds"
    }
}
parallel jobs
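Note that the closure body references component rather than i. In a classic Groovy for loop all closures capture the same loop variable, so by the time parallel runs them they would all see its final value; copying it into a per-iteration local (def component = i) gives each closure its own value.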
The sleep step does not take a Closure as a parameter.
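For reference, a minimal sketch of how the sleep step is actually invoked; it takes a duration and an optional unit rather than a closure:

node {
    sleep 5                                // pauses 5 seconds (SECONDS is the default unit)
    sleep(time: 500, unit: 'MILLISECONDS') // named-parameter form
    echo 'done waiting'
}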
I'd like to start a pipeline job manually. This job should then run daily and after seven days stop automatically. Is there any way to do this?
AFAIK there is no OOB solution for this, but you can implement something with Groovy to achieve what you need. For example, check the following pipeline: I add a cron expression so the job runs every day once it has been manually triggered, and then remove the cron expression after a predefined number of runs has elapsed. You should be able to fine-tune this to achieve what you need.
def expression = getCron()

pipeline {
    agent any
    triggers { cron(expression) }
    stages {
        stage('Example') {
            steps {
                script {
                    echo "Build"
                }
            }
        }
    }
}
def getCron() {
    def runEveryDayCron = "0 9 * * *" // runs every day at 9
    def numberOfRunsToCheck = 7       // will run 7 times
    def currentBuildNumber = currentBuild.getNumber()
    def job = Jenkins.getInstance().getItemByFullName(env.JOB_NAME)
    for (int i = currentBuildNumber; i > currentBuildNumber - numberOfRunsToCheck; i--) {
        def build = job.getBuildByNumber(i)
        // guard against missing builds (e.g. deleted builds, or build numbers below 1)
        if (build != null && build.getCause(hudson.model.Cause$UserIdCause) != null) { // a manually triggered build
            return runEveryDayCron
        }
    }
    return ""
}
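Note that calling Jenkins.getInstance() and walking build objects from a sandboxed pipeline normally requires administrator script approval (or running the job outside the sandbox), so whether this approach is viable depends on your setup.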
I have a dynamic scripted pipeline in Jenkins that has many parallel stages, but within each stage, there are multiple serial steps. I have wasted several days trying to make it work: no matter what I try, all serial substages are lumped into one stage!
Here is what I have now:
node() {
    stage("Parallel Demo") {
        // Canonical example to run steps in parallel
        // The map we'll store the steps in
        def stepsToRun = [:]
        for (int i = 1; i < 5; i++) {
            stepsToRun["Step${i}"] = {
                node {
                    echo "start"
                    sleep 1
                    echo "done"
                }
            }
        }
        // Actually run the steps in parallel
        // parallel takes a map as an argument
        parallel stepsToRun
    }
}
This gets me a beautiful parallel pipeline.
However, the moment I add serial stages, i.e.:
node() {
    stage("Parallel Demo") {
        // Run steps in parallel
        // The map we'll store the steps in
        def stepsToRun = [:]
        for (int i = 1; i < 5; i++) {
            stepsToRun["Step${i}"] = {
                node {
                    stage("1") {
                        echo "start 1"
                        sleep 1
                        echo "done 1"
                    }
                    stage("2") {
                        echo "start 2"
                        sleep 1
                        echo "done 2"
                    }
                }
            }
        }
        // Actually run the steps in parallel
        // parallel takes a map as an argument
        parallel stepsToRun
    }
}
I get an ugly result that looks exactly the same as before.
To add insult to injury, I can see the sub-steps being executed. How can I get my sub-steps to show up as stages?
Also, if there is a way to have dynamic stages (sequential and parallel) with the declarative pipeline, I'm all for it. I found that you can do static sequential stages, but I have little clue how to make them dynamic without going back to scripted pipelines.
Here is how you can do something like what you want:
def stepsToRun = [:]

pipeline {
    agent none
    stages {
        stage("Prepare Stages") {
            steps {
                script {
                    for (int i = 1; i < 5; i++) {
                        stepsToRun["Step${i}"] = prepareStage("Step${i}")
                    }
                    parallel stepsToRun
                }
            }
        }
    }
}
def prepareStage(def name) {
    return {
        stage(name) {
            stage("1") {
                echo "start 1"
                sleep 1
                echo "done 1"
            }
            stage("2") {
                echo "start 2"
                sleep 1
                echo "done 2"
            }
        }
    }
}
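One detail worth noting about this design: prepareStage("Step${i}") is invoked eagerly inside the loop and returns a closure, so each closure captures its own name value. Building the closures inline in the loop body would risk the classic Groovy pitfall of every closure sharing the same loop variable.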
The output of this Python one-liner looks like it could be stages in a Jenkins pipeline:
$ python3 -c 'print("\n".join(["stage({val}) {{ do something with {val} }}".format(val=i) for i in range(3)]))'
stage(0) { do something with 0 }
stage(1) { do something with 1 }
stage(2) { do something with 2 }
Is it possible for Jenkins to use output like this to create steps or stages in a pipeline, so that the running Python script is able to update Jenkins? The point of this would be to have the Blue Ocean pipeline show a stage dot that was made by an external script running separate jobs.
To elaborate on the example: suppose this demo.py script, which outputs the uptime wrapped in a stage,
#!/bin/env python3.6
import subprocess, time

def uptime():
    return (subprocess.run('uptime', stdout=subprocess.PIPE, encoding='utf8')).stdout.strip()

for i in range(3):
    print("stage({val}) {{\n echo \"{output}\" \n}}".format(val=i, output=uptime()))
    time.sleep(1)
were set up in a Jenkins pipeline like this:
node {
    stage("start demo") {
        sh "/tmp/demo.py"
    }
}
As is, this demo just outputs the text and does not create any stages in Blue Ocean:
[Pipeline] sh
+ /tmp/demo.py
stage(0) {
echo "03:17:16 up 182 days, 12:17, 8 users, load average: 0.00, 0.03, 0.05"
}
stage(1) {
echo "03:17:17 up 182 days, 12:17, 8 users, load average: 0.00, 0.03, 0.05"
}
stage(2) {
echo "03:17:18 up 182 days, 12:17, 8 users, load average: 0.00, 0.03, 0.05"
}
Again, the point of this would be to have the Blue Ocean pipeline show a stage dot with a log.
You can evaluate an expression and then call it.
node('') {
    Closure x = evaluate("{ it -> evaluate(it) }")
    x("stage('test') { script { echo 'hi' } }")
}
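Connecting this to the Python example, here is a hedged sketch (assuming the script emits valid pipeline Groovy and that script security permits evaluate) that captures the script's stdout and evaluates it. Note that the demo.py above would need to emit quoted stage names (stage('0') rather than stage(0)) for the stage step to accept them:

node('') {
    // Capture the generated Groovy emitted by the external script.
    def generated = sh(script: '/tmp/demo.py', returnStdout: true).trim()
    // Evaluate the generated code at runtime.
    Closure runner = evaluate("{ it -> evaluate(it) }")
    runner(generated)
}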
Since Jenkins converts your Groovy script into Java, compiles it, and then executes the result, it would be quite hard to use an external program to generate more Groovy to execute: that additional Groovy code would also need to be converted, but it is only produced at runtime, after the conversion has already been done.
Instead, you may want to programmatically build your stages in Groovy.
some_array = ["/tmp/demo.py", "sleep 10", "uptime"]

def getBuilders() {
    def builders = [:]
    some_array.eachWithIndex { it, index ->
        // name the stage
        def name = 'Stage #' + (index + 1)
        builders[name] = {
            stage(name) {
                def my_label = "jenkins_label" // can choose programmatically if needed
                node(my_label) {
                    try {
                        doSomething(it)
                    }
                    catch (err) {
                        println "Failed to run ${it}"
                        throw err
                    }
                    finally { }
                }
            }
        }
    }
    return builders
}

def doSomething(something) {
    sh "${something}"
}
And later in your main pipeline:

stage('Do it all') {
    steps {
        script {
            def builders = getBuilders()
            parallel builders
        }
    }
}
This will run three parallel stages, where one would be running /tmp/demo.py, the second sleep 10, and the third uptime.
I want to trigger several different pipeline jobs, depending on the input parameters of a Controller Pipeline job.
Within this job I build the names of the other pipelines I want to trigger from a list returned by a Python script.
node {
    stage('Get_Clusters_to_Build') {
        copyArtifacts filter: params.file_name_var_mapping, fingerprintArtifacts: true, projectName: 'UpdateConfig', selector: lastSuccessful()
        script {
            cmd_string = 'determine_ci_builds --jobname ' + env.JOB_NAME
            clusters = bat(script: cmd_string, returnStdout: true)
            output_array = clusters.split('\n')
            cluster_array = output_array[2].split(',')
        }
        echo "${clusters}"
    }

    jobs = Hudson.instance.getAllItems(AbstractProject.class)
    echo "$jobs"

    def builders = [:]
    for (i = 0; i < cluster_array.size(); i++) {
        def cluster = cluster_array[i]
        def job_to_build = "BuildCI_${cluster}".trim()
        echo "### branch${i}"
        echo "### ${job_to_build}"
        builders["${job_to_build}"] = {
            stage("${job_to_build}") {
                build "${job_to_build}"
            }
        }
    }
    parallel builders

    stage("TriggerTests") {
        echo "Done"
    }
}
My problem is that some of the jobs whose names I get from the Get_Clusters_to_Build stage might not exist, so they cannot be triggered and my job fails.
Now to my question: is there a way to get the names of all pipeline jobs, and how can I use them to check whether I can trigger a build?
I tried jobs = Hudson.instance.getAllItems(AbstractProject.class), but this only gives me the "normal" FreeStyleProject jobs.
I want to do something like this in the loop:
def builders = [:]
for (i = 0; i < cluster_array.size(); i++) {
    def cluster = cluster_array[i]
    def job_to_build = "BuildCI_${cluster}".trim()
    echo "### branch${i}"
    echo "### ${job_to_build}"
    // This part I only want to be executed if job_to_build is found in the jobs list, somehow like:
    if job_to_build in jobs: // I know, this is not proper groovy syntax
        builders["${job_to_build}"] = {
            stage("${job_to_build}") {
                build "${job_to_build}"
            }
        }
}
parallel builders
All pipeline jobs are instances of org.jenkinsci.plugins.workflow.job.WorkflowJob, so you can get the names of all pipeline jobs using the following function:
@NonCPS
def getPipelineJobNames() {
    Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)*.fullName
}
Then you can use it this way:

//...
def jobs = getPipelineJobNames()
if (job_to_build in jobs) {
    //....
}
Try this syntax to get both standard and pipeline jobs:
def jobs = Hudson.instance.getAllItems(hudson.model.Job.class)
As @Vitalii Vitrenko wrote, this works fine:
for (job in Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)) {
    println job.fullName
}
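One caveat (my addition, not from the answers above): iterating over live Jenkins model objects inside CPS-transformed pipeline code can run into serialization errors, so a common precaution is to do the listing inside a @NonCPS method:

@NonCPS
def printAllPipelineJobs() {
    // Runs outside the CPS transform, so the non-serializable job objects
    // never have to be persisted between pipeline steps.
    for (job in Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)) {
        println job.fullName
    }
}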
I run a series of stages in a Jenkins pipeline. Each stage represents a test.
Even if a stage (test) fails, I'd like to continue with the following stages (tests), but I don't know how.
The only solution I know is to enclose the stage steps in a try/catch clause, but that way I cannot easily tell whether a test failed or succeeded.
They cannot run in parallel; they must run sequentially.
Is there a better solution?
Related question: Jenkins continue pipeline on failed stage
This isn't optimal, because Jenkins doesn't know which stages have failed, but we collect this data manually:
def success = 0
def failed = []

for (int i = 0; i < tests.size(); i++) {
    def test = tests[i]
    stage(test) {
        try {
            sh "...."
            success++
        } catch (e) {
            failed << test
        }
    }
}
stage('Report') {
    def failCount = tests.size() - success
    if (failCount == 0) {
        echo "Executed ${success} tests successfully."
    } else {
        error """Executed ${success} tests successfully and ${failCount} failed:
${failed.join(', ')}"""
    }
}
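As a follow-up: if you want Jenkins itself to mark each failed stage while the build keeps going, a hedged alternative (assuming a recent enough "Pipeline: Basic Steps" plugin that supports catchError's stageResult parameter) is:

for (int i = 0; i < tests.size(); i++) {
    def test = tests[i]
    stage(test) {
        // Mark this stage FAILURE in the UI, set the build to UNSTABLE,
        // and let the remaining stages run.
        catchError(buildResult: 'UNSTABLE', stageResult: 'FAILURE') {
            sh "...."
        }
    }
}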