I have a DSL script that creates a job. As soon as I run the job, the config.xml is changed. Because of this, the job doesn't get updated when I run the seed job again.
I suspect some plugins do this. Can you tell me the best way to find out what changes the config when the job runs?
[
[name: "Sonar/co", repo: "repo.git", pomPath: "pom.xml", branch: "development", mvnGoal: "-am -P dev -pl project clean test"]
].each { Map config ->
mavenJob(config.name) {
description "Sonar job für ${config.name}"
logRotator {
numToKeep(1)
}
label "sonar"
scm {
git {
branch "*/${config.branch}"
remote {
url "git#repository:${config.repo}"
}
browser {
gitLab("https://gitlab.DOMAIN.de/", "9.0")
}
}
}
mavenInstallation "maven339"
goals config.mvnGoal
rootPOM config.pomPath
configure { node ->
node / settings (class: 'jenkins.mvn.DefaultSettingsProvider') {
}
node / globalSettings (class: 'jenkins.mvn.DefaultGlobalSettingsProvider') {
}
}
}
}
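Not part of the original question, but one common way to see what changes is to dump the job's config.xml before and after a build and diff the two outputs. A minimal script console sketch, assuming the job name from the DSL above:
// Run in Manage Jenkins > Script Console, once before and once after a build,
// then diff the two dumps to see which plugin rewrote the configuration.
import jenkins.model.Jenkins

def job = Jenkins.instance.getItemByFullName('Sonar/co')
println job.getConfigFile().asString()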
Please bear with me; the description might be long, but it should give a clear picture of the intent and the issue.
I have used the Job DSL Plugin to create a seed job, which in turn creates two new jobs. I have two separate repositories:
one for maintaining the Jenkins pipeline scripts, and
one for the actual code to build.
First I created a pipeline job in Jenkins which in turn creates a view and the two jobs. The config is shown below:
The Jenkinsfile given below uses the Job DSL plugin API, reads the Groovy script, and creates the required two jobs.
node('master') {
checkout scm
jobDsl targets: ['dsl/seedJobBuilder.groovy'].join('\n'),
removedJobAction: 'IGNORE',
removedViewAction: 'IGNORE',
lookupStrategy: 'SEED_JOB'
}
seedJobBuilder.groovy creates a DSL pipeline job whose task is to build the actual codebase.
listView('Build Pipelines') {
description('All build and deploy jobs')
jobs {
names(
'build',
'deploy',
)
}
columns {
status()
weather()
name()
lastSuccess()
lastFailure()
lastDuration()
buildButton()
}
}
def buildCommerce = pipelineJob('build') {
properties {
githubProjectUrl("${projectRepo}") // url of actual code repo not the jenkins script repo
}
definition {
cpsScm {
scm {
git {
remote {
url("${pipelineRepo}") // jenkins script repo url
credentials("somecredentials")
}
branch('${JENKINS_SCRIPT_BRANCH}')
}
scriptPath('pipelines/pipelineBuildEveryDay.groovy')
lightweight(false)
}
}
}
triggers {
githubPush()
}
}
Config of the above job created by Job DSL:
This job reads the pipelineBuildEveryDay Groovy script, checks out the actual codebase, and builds and deploys it.
Where I am struggling is how to trigger a build on this second job through a GitHub hook or through ghprb, since I don't want to manipulate the second job manually, and the Git URL of the job is the script repo URL, not the codebase URL. Is this even possible? If yes, what am I missing?
I have the webhook configured.
pipelineBuildEveryDay.groovy
pipeline {
libraries {
lib("shared-library#${params.JENKINS_SCRIPT_BRANCH}")
}
agent {
node {
label 'master'
}
}
options {
skipDefaultCheckout(true) // No more 'Declarative: Checkout' stage
}
stages {
stage('Crazy Build Pipeline') {
tools {
jdk 'java11'
}
stages {
stage('Prepare build name') {
steps{
script{
currentBuild.displayName = "${currentBuild.number}-build"
}
}
}
stage('Checkout') {
steps {
cleanWs()
script {
checkoutRepository("${projectDir}", "${params.PROJECT_TAG}", "${params.PROJECT_REPO}")
}
}
}
stage('Run Tests') {
steps {
echo "Running test coming soon..."
}
}
}
}
}
// post build actions
post {
success {
echo "success"
}
failure {
echo "failure"
}
}
}
Well, the suffering comes to an end. Posting this answer for anyone struggling with a similar sort of issue.
Make sure you uncheck all other trigger types; the only one checked should be the pull request builder.
The part that tripped me up was the Project URL. In my case, the GitHub URL in the SCM section was the Jenkins-scripts repository URL, not the URL of the codebase I want to build. So I used my codebase repository URL in the GitHub Project URL textbox.
But the real problem was using the repository URL in the format 'https://code-base-repo-url.git' when it should be 'https://code-base-repo-url'. Sounds stupid? Yeah, I know!
Finally, the complete job config pipeline script, if it helps:
def pipelineRepo = 'https://jenkins-script-repo'
def projectRepo = 'https://code-base-repo-url'
def projectTag = '${GIT_BRANCH}'
def buildCommerce = pipelineJob('build') {
properties {
githubProjectUrl("${projectRepo}")
}
definition {
cpsScm {
scm {
git {
remote {
url("${pipelineRepo}")
credentials("use-your-own-user-pass-cred")
}
branch('${JENKINS_SCRIPT_BRANCH}')
}
scriptPath('pipelines/pipelineBuildEveryDay.groovy')
lightweight(false)
}
}
}
triggers {
githubPullRequest {
admin('use_your_own_admin')
triggerPhrase('build please')
useGitHubHooks()
permitAll()
displayBuildErrorsOnDownstreamBuilds()
extensions {
commitStatus {
context('Jenkins')
completedStatus('SUCCESS', 'All is well')
completedStatus('FAILURE', 'Something went wrong. Investigate!')
completedStatus('ERROR', 'Something went really wrong. Investigate!')
}
}
}
}
}
I have a parallel stage setup, and would like to know if it's possible to run a script prior to the nested stages, so something like this:
stage('E2E-PR-CYPRESS') {
when {
allOf {
expression {
return fileExists("cypress.json")
}
branch "PR-*"
}
}
steps {
script {
stash name: 'cypress-dir', includes: 'cypress/**/*'
}
}
parallel {
stage('Cypress Tests 1') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
stage('Cypress Tests 2') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
}
post {
always {
e2eAfterCypressRun(this, true)
}
}
}
I know the above is wrong; I get the error: Only one of "matrix", "parallel", "stages", or "steps" allowed for stage "E2E-PR-CYPRESS".
I already have the stash script in a setup stage at the beginning of my pipeline, but I'd like to be able to restart from the stage above in Jenkins, so I need the stash part in this stage because the parallel stages need to unstash its contents.
Updated Answer:
After playing a bit with the Restart from a Stage option, there seems to be a nice feature designed exactly for your needs, called Preserving stashes for Use with Restarted Stages:
Normally, when you run the stash step in your Pipeline, the resulting
stash of artifacts is cleared when the Pipeline completes, regardless
of the result of the Pipeline. Since stash artifacts aren’t accessible
outside of the Pipeline run that created them, this has not created
any limitations on usage. But with Declarative stage restarting, you
may want to be able to unstash artifacts from a stage which ran before
the stage you’re restarting from.
To enable this, there is a job property that allows you to configure a
maximum number of completed runs whose stash artifacts should be
preserved for reuse in a restarted run. You can specify anywhere from
1 to 50 as the number of runs to preserve.
This job property can be configured in your Declarative Pipeline’s options section, as below:
options {
preserveStashes()
// or
preserveStashes(buildCount: 5)
}
This built-in feature is exactly what you need to solve your issue without any special modifications to your code, as it will allow you to rerun the pipeline from any stage and still use the files that were previously stashed.
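For example, with preserveStashes enabled, each parallel stage from the question could simply unstash what the earlier setup stage saved, even in a restarted run. A small sketch using the stash name from the question:
stage('Cypress Tests 1') {
    agent { label 'aws_micro_slave_e2e' }
    options { skipDefaultCheckout() }
    steps {
        unstash 'cypress-dir'
        runE2eTests()
    }
}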
Original Answer:
You can actually achieve this quite simply using the scripted syntax for the parallel command, which will also let you avoid duplicating code in the parallel stages.
parallel: Execute in parallel
Takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch:
parallel firstBranch: {
// do something
}, secondBranch: {
// do something else
},
failFast: true|false
In your case it can look like:
stage('E2E-PR-CYPRESS') {
when {
allOf {
expression {
return fileExists("cypress.json")
}
branch "PR-*"
}
}
steps {
script {
stash name: 'cypress-dir', includes: 'cypress/**/*'
// Define the parallel execution stages
def stages = ['Cypress Tests 1', 'Cypress Tests 2']
// Create the parallel executions and run them
parallel stages.collectEntries {
["Running ${it}": {
node('aws_micro_slave_e2e') {
skipDefaultCheckout()
runE2eTests()
}
}]
}
}
}
post {
always {
e2eAfterCypressRun(this, true)
}
}
}
This way you can easily add more parallel steps by updating the stages list, or even receive it as an input parameter. In addition, you can create the parallel executions by different labels or test suites instead of by stage name.
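For instance, a rough sketch of the by-label variant; the two agent labels here are illustrative, not from the original setup:
def agentLabels = ['aws_micro_slave_e2e_a', 'aws_micro_slave_e2e_b'] // hypothetical labels
parallel agentLabels.collectEntries { agentLabel ->
    ["Cypress on ${agentLabel}": {
        node(agentLabel) {
            runE2eTests()
        }
    }]
}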
You can add a Prepare stage at the top like this:
stages{
stage('Preparation'){
when {
allOf {
expression {
return fileExists("cypress.json")
}
branch "PR-*"
}
}
steps {
script {
stash name: 'cypress-dir', includes: 'cypress/**/*'
}
}
}
stage('E2E-PR-CYPRESS') {
parallel {
stage('Cypress Tests 1') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
stage('Cypress Tests 2') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
}
}
}
post {
always {
e2eAfterCypressRun(this, true)
}
}
An out-of-the-box concept
I propose splitting the job into two parts, taking the following into consideration:
You currently use the EC2 plugin, as the current agents are EC2 instances.
The parallel stages should run with the same stashed content ready to unstash.
Create Jenkins pipeline job 1:
This job will check out the workspace with any type of agent.
It creates a Packer JSON template to build a customised AMI for EC2.
The customised AMI will stash the contents and move them to a directory that will be present on the EC2 instance when the agent is built.
It outputs the AMI ID and runs a Groovy job to update the EC2 plugin's AMI ID with the customised one, temporarily setting the AMI in memory on Jenkins.
pipeline {
agent {
docker {
image 'test-container' // image name placeholder from the original
}
}
options {
buildDiscarder(
logRotator(
numToKeepStr: '10',
artifactNumToKeepStr: '10'
)
)
ansiColor('xterm')
gitConnection("git")
}
stages {
stage('Run Stash Cypress Functional Test') {
steps {
dir('functional-test') {
// develop branch is canary build, all other branches are stable builds
script {
sh """
# script to stash cypress tests
"""
}
}
}
}
stage('Functional Test AMI Build') {
steps {
dir('functional-test/packer') {
withAWS(role: 'PackerBuild', roleAccount: '123456789012', roleSessionName: 'Jenkins-Workflow-FunctionalTest-Packer') {
script {
sh """
# packer json script will require to copy contents from workspace, run the script to stash content
# packer json script will require to capture new AMI ID
# https://discuss.devopscube.com/t/how-to-get-the-ami-id-after-a-packer-build/36
# https://www.packer.io/docs/post-processors/manifest
packer validate FunctionalTestPacker.json
packer build -debug FunctionalTestPacker.json
# grab AMI ID and export as jenkins env variable
"""
}
}
}
}
}
stage('run groovy script to update AMI ID on EC2 plugin') {
steps {
dir('<groovy job dir>') { // placeholder path
script {
sh """
# run groovy job to update AMI on Jenkins EC2 plugin
# https://gist.github.com/vrivellino/97954495938e38421ba4504049fd44ea
"""
}
}
}
}
stage('Kickoff Functional Test Deploy') {
// pipeline checkbox parameter, when ticked it will automatically kick off the functional test pipeline
when {
expression {params.RUN_TESTS.toBoolean()}
}
steps {
script{
env.branch = params.BRANCH
sh """
echo "Branch is ${branch}"
"""
}
build job: 'workflow/CypressFunctionaTestDeployAndRun',
parameters: [
string(name: 'BRANCH', value: env.branch)
],
wait : false
}
}
}
post {
always {
cleanWs()
}
}
}
Create Jenkins pipeline job 2:
This job will create the EC2 agents via the plugin from the customised AMI produced by pipeline job 1.
This means your agents will have the same workspace ready to unstash, so you can execute a parallel run.
Also, you could move much of the user data script that currently lives in the EC2 plugin into the customised AMI build, cutting down the time each EC2 agent needs before it is ready to execute.
pipeline {
stages {
stage('E2E-PR-CYPRESS') {
when {
allOf {
expression {
return fileExists("cypress.json")
}
branch "PR-*"
}
}
// parallel must be nested inside the stage, not directly under stages
parallel {
stage('Cypress Tests 1') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
stage('Cypress Tests 2') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
}
}
}
post {
always {
e2eAfterCypressRun(this, true)
}
}
}
This question ties in with one of my earlier questions here
Tl;dr of the linked question:
Basically I want a generic pipeline to generate distributable bundles (zip files etc) of any of my applications. An application can have multiple components (almost all components are Java/Spring or NodeJS projects).
The plan was to store a pipeline descriptor of each application in a JSON file like such:
{
"app": "MyApp",
"components": [
{
"name": "Component1",
"scmUrl": "https://git.mycompany.com/app/component1.git",
"buildCmd": "mvn clean install"
},
{
"name": "Component2",
"scmUrl": "https://git.mycompany.com/app/component2.git",
"buildCmd": "npm run build"
}
]
}
There will be a descriptor for each application, and it will be checked into a separate repository.
When the pipeline is run, the required application name will be an input parameter; the above repo will be cloned and the respective JSON descriptor loaded.
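As a side note, that loading step could look roughly like the sketch below, assuming the Pipeline Utility Steps plugin is available; the descriptors/ path and APP_NAME parameter are illustrative, not from the original setup:
// Sketch: read the application's JSON descriptor after cloning the descriptor repo.
def descriptor = readJSON file: "descriptors/${params.APP_NAME}.json"
def components = descriptor.components
echo "Loaded ${components.size()} components for ${descriptor.app}"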
This is where things start to get tricky. All components have some common stages (Checkout, Build, Docker Build), so I am trying to loop over the components array and run the stages in parallel:
def parallelCheckoutStages = components.collectEntries {
[ "Checkout ${it.name}", generateCheckoutStage(it) ]
}
def generateCheckoutStage(component) {
return {
stage("Stage: ${component.name}") {
script {
git(url: component.scmUrl, branch: component.branch)
}
}
}
}
def parallelBuildStages = components.collectEntries {
[ "Build ${it.name}", generateBuildStage(it) ]
}
def generateBuildStage(component) {
return {
stage("Stage: ${component.name}") {
script {
sh script "${component.buildCmd}"
}
}
}
}
pipeline {
agent any
stages {
.
. // clone repo and load json
.
stage("Checkout Components") {
steps {
script {
parallel parallelCheckoutStages
}
}
}
stage("Build Components") {
steps {
script {
parallel parallelBuildStages
}
}
}
}
}
Sometimes I need to run the build command inside a Docker container (only for some components). To accomplish this I want to change the generateBuildStage method to something like this:
def generateBuildStage(component) {
if(component.requiresDocker) {
return {
stage("Stage: ${component.name}") {
agent {
docker {
image 'jdk11-mvn3.6'
}
}
script {
sh script "${component.buildCmd}"
}
}
}
} else {
return {
stage("Stage: ${component.name}") {
script {
sh script "${component.buildCmd}"
}
}
}
}
}
When I run the above code I get the error: java.lang.NoSuchMethodError: No such DSL method 'agent' found among steps.
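Not from the original post, but for context: the agent directive is declarative-only, so it is not available inside a scripted closure like the one above. A common scripted-pipeline equivalent is the Docker Pipeline plugin's docker.image(...).inside; a rough sketch under that assumption:
// Sketch only: scripted alternative to 'agent { docker { ... } }' inside a closure.
def generateBuildStage(component) {
    return {
        stage("Stage: ${component.name}") {
            if (component.requiresDocker) {
                docker.image('jdk11-mvn3.6').inside {
                    sh script: "${component.buildCmd}"
                }
            } else {
                sh script: "${component.buildCmd}"
            }
        }
    }
}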
As a sort of second part to my question: does my pipeline seem hacky? I could replace the parallel stages entirely by creating individual jobs for each component and calling them from the pipeline. Which approach is better?
I'm trying to use the Configuration as Code (JCasC) plugin to create a pipeline job that takes in build parameters, but I can't find the syntax for this anywhere online. I'm writing the configuration in YAML.
On the GUI, the field is called "This build is parameterized" and it is under the 'General' heading. I need to define two string parameters: CLUSTER_ID=cluster_id and OPENSHIFT_ADMINSTRATION_BRANCH=develop.
This is the yaml file that I am trying to edit:
jobs:
- script: >
folder('test1'){
pipelineJob('test1/seedJobTest') {
description 'seedJobTest'
logRotator {
daysToKeep 10
}
definition {
cpsScm {
scm {
git {
remote {
credentials "xxx"
url 'xxx'
}
branches 'refs/heads/master'
scriptPath 'Jenkinsfile'
extensions { }
}
}
}
}
configure { project ->
project / 'properties' / 'EnvInjectJobProperty' {
'on'('true')
}
project / 'properties' / 'org.jenkinsci.plugins.workflow.job.properties.DisableConcurrentBuildsJobProperty' {}
}
}
}
Thanks for your help!
Solution
jobs:
- script: >
folder('test1'){
pipelineJob('test1/seedJobTest') {
description 'seedJobTest'
logRotator {
daysToKeep 10
}
parameters {
stringParam("CLUSTER_ID", "cluster_id", "your description here")
stringParam("OPENSHIFT_ADMINSTRATION_BRANCH", "develop", "your description here")
}
definition {
cpsScm {
scm {
git {
remote {
credentials "xxx"
url 'xxx'
}
branches 'refs/heads/master'
scriptPath 'Jenkinsfile'
extensions { }
}
}
}
}
configure { project ->
project / 'properties' / 'EnvInjectJobProperty' {
'on'('true')
}
project / 'properties' / 'org.jenkinsci.plugins.workflow.job.properties.DisableConcurrentBuildsJobProperty' {}
}
}
}
How to Figure this Stuff Out in the Future - XML Job To DSL (Jenkins Plugin)
Here's how I would go about figuring this kind of thing out:
Manually create a temporary pipeline job with the things you want in your seed job (the one you want to automate).
Install (if only temporarily) the "XML Job To DSL" Jenkins plugin.
Go to the main Jenkins Dashboard
In the left navigation, you'll find "XML Job To DSL." Click it.
Select the temporary job you created and click "Convert selected to DSL"
When I went about getting the params snippet for this answer, I did as I described above, but simply created two parameters. I ended up with this:
pipelineJob("test") {
description()
keepDependencies(false)
parameters {
stringParam("CLUSTER_ID", "cluster_id", "your description here")
stringParam("OPENSHIFT_ADMINSTRATION_BRANCH", "develop", "your description here")
}
definition {
cpsScm {
"" }
}
disabled(false)
}
Read-Only Parameter Option
One more thing, in case it's useful to you (as it was to me): if you want to create a parameterized seed job but you don't want those parameters to be editable at build time, you can install the "Readonly Parameter" Jenkins plugin; then you'll be able to do this kind of thing:
jobs:
- script: >
pipelineJob("Param Example") {
description()
keepDependencies(false)
parameters {
wHideParameterDefinition {
name('AGENT')
defaultValue('docker-host')
description('Node on which to run.')
}
wHideParameterDefinition {
name('ENV_FILE_DIR')
defaultValue('local2')
description('Name of environment directory which houses .env')
}
booleanParam("include_search_stack", false, "Build/run the local Fess, Elasticsearch, and Kibana containers.")
booleanParam("SKIP_404_GENERATION", false, "Helpful sometimes during local development.")
}
definition {
cpsScm {
scm {
git {
remote {
url("https://myrepo/blah.git")
credentials("scm")
}
branch("master")
}
}
scriptPath("pipeline/main/Jenkinsfile")
}
}
disabled(false)
}
In this example, the top two params, AGENT and ENV_FILE_DIR, are sort of "hard-coded" from CasC, because those parameters are not editable at build time. However, the include_search_stack and SKIP_404_GENERATION parameters are editable. I used this mixed example to show that either or both are usable in the same job.
Read-only parameters have been useful in some of my use cases.
I have installed the Pipeline Plugin, which used to be called the Workflow Plugin.
https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin
I want to know how I can use Job DSL to create and configure a job of type Pipeline.
You should use pipelineJob:
pipelineJob('job-name') {
definition {
cps {
script('logic-here')
sandbox()
}
}
}
You can define the logic by inlining it:
pipelineJob('job-name') {
definition {
cps {
script('''
pipeline {
agent any
stages {
stage('Stage 1') {
steps {
echo 'logic'
}
}
stage('Stage 2') {
steps {
echo 'logic'
}
}
}
}
'''.stripIndent())
sandbox()
}
}
}
or load it from a file located in the workspace:
pipelineJob('job-name') {
definition {
cps {
script(readFileFromWorkspace('file-seedjob-in-workspace.jenkinsfile'))
sandbox()
}
}
}
Example:
Seed-job file structure:
jobs
\- productJob.groovy
logic
\- productPipeline.jenkinsfile
then productJob.groovy content:
pipelineJob('product-job') {
definition {
cps {
script(readFileFromWorkspace('logic/productPipeline.jenkinsfile'))
sandbox()
}
}
}
I believe this question is asking how to use the Job DSL to create a pipeline job which references the Jenkinsfile for the project, without combining the job creation with the detailed step definitions as the answers to date have done. This makes sense: Jenkins job creation and metadata configuration (description, triggers, etc.) could belong to Jenkins admins, but the dev team should have control over what the job actually does.
@meallhour, is the below what you're after? (Works as of Job DSL 1.64.)
pipelineJob('DSL_Pipeline') {
def repo = 'https://github.com/path/to/your/repo.git'
triggers {
scm('H/5 * * * *')
}
description("Pipeline for $repo")
definition {
cpsScm {
scm {
git {
remote { url(repo) }
branches('master', '**/feature*')
scriptPath('misc/Jenkinsfile.v2')
extensions { } // required as otherwise it may try to tag the repo, which you may not want
}
// the single line below also works, but it
// only covers the 'master' branch and may not give you
// enough control.
// git(repo, 'master', { node -> node / 'extensions' << '' } )
}
}
}
}
Ref the Job DSL pipelineJob: https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob, and hack away at it on http://job-dsl.herokuapp.com/ to see the generated config.
The example above worked for me. Here's another example based on it:
pipelineJob('Your App Pipeline') {
def repo = 'https://github.com/user/yourApp.git'
def sshRepo = 'git@git.company.com:user/yourApp.git'
description("Your App Pipeline")
keepDependencies(false)
properties{
githubProjectUrl (repo)
rebuild {
autoRebuild(false)
}
}
definition {
cpsScm {
scm {
git {
remote { url(sshRepo) }
branches('master')
scriptPath('Jenkinsfile')
extensions { } // required as otherwise it may try to tag the repo, which you may not want
}
}
}
}
}
If you build the pipeline first through the UI, you can use the config.xml file and the Jenkins documentation https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob to create your pipeline job.
In Job DSL, pipeline is still called workflow; see workflowJob.
The next Job DSL release will contain some enhancements for pipelines, e.g. JENKINS-32678.
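A minimal sketch for those older Job DSL versions, mirroring the pipelineJob examples above but using the workflowJob name (the script path is illustrative):
// Same structure as pipelineJob; only the job type name differs on older Job DSL releases.
workflowJob('job-name') {
    definition {
        cps {
            script(readFileFromWorkspace('logic/productPipeline.jenkinsfile'))
        }
    }
}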
First you need to install the Job DSL plugin, then create a freestyle project in Jenkins and select Process Job DSLs from the dropdown in the Build section.
Select Use the provided DSL script and provide the following script.
pipelineJob('job-name') {
definition {
cps {
script('''
pipeline {
agent any
stages {
stage('Stage name 1') {
steps {
// your logic here
}
}
stage('Stage name 2') {
steps {
// your logic here
}
}
}
}
''')
}
}
}
Or you can create your job by pointing to the Jenkinsfile located in a remote Git repository.
pipelineJob("job-name") {
definition {
cpsScm {
scm {
git {
remote {
url("<REPO_URL>")
credentials("<CREDENTIAL_ID>")
}
branch('<BRANCH>')
}
}
scriptPath("<JENKINS_FILE_PATH>")
}
}
}
If you are using a Git repo, add a file called Jenkinsfile to the root directory of your repo. This should contain your job DSL.