Groovy script for a Jenkins multijob pipeline

I have a multijob pipeline that triggers several other builds to run. It takes a couple of choice parameters (PRODUCT and BRANCH) to separate the builds into different groups. The UI was easy to set up and works well. Now I need to move the same functionality into a Groovy script instead of using the UI. I have the script below, but it's not working; I'm getting the following error message:
Caught: groovy.lang.MissingMethodException: No signature of method...
I'm pretty sure my syntax is way off:
pipeline {
    parameters {
        choice(
            choices: ['A', 'B'],
            description: 'Select which set of artifacts to trigger',
            name: 'PRODUCT')
        choice(
            choices: ['develop', 'release'],
            description: 'Select which branch to build the artifacts from',
            name: 'BRANCH')
    }
    stages {
        stage ('Build') {
            when {
                expression {
                    env.PRODUCT == 'A' && env.BRANCH == 'release'
                }
            }
            steps {
                parallel (
                    build (job: '../../Builds/artifact1/release'),
                    build (job: '../../Builds/artifact2/release'),
                    build (job: '../../Builds/artifact3/release'),
                )
            }
            when {
                expression {
                    env.PRODUCT == 'A' && env.BRANCH == 'develop'
                }
            }
            steps {
                parallel (
                    build (job: '../../Builds/artifact1/develop'),
                    build (job: '../../Builds/artifact2/develop'),
                    build (job: '../../Builds/artifact3/develop'),
                )
            }
            when {
                expression {
                    env.PRODUCT == 'B' && env.BRANCH == 'release'
                }
            }
            steps {
                parallel (
                    build (job: '../../Builds/artifact4/release'),
                    build (job: '../../Builds/artifact5/release'),
                    build (job: '../../Builds/artifact6/release'),
            }
            when {
                expression {
                    env.PRODUCT == 'B' && env.BRANCH == 'develop'
                }
            }
            steps {
                parallel (
                    build (job: '../../Builds/artifact4/develop'),
                    build (job: '../../Builds/artifact5/develop'),
                    build (job: '../../Builds/artifact6/develop'),
                )
            }
        }
    }
}

Inside the steps block, run the build inside a script block; it might help:
script {
    j1BuildResult = build job: "Compile", propagate: true, wait: true, parameters: [
        //booleanParam(name: 'portal', value: env.PORTAL),
        string(name: 'BRANCH', value: params.BRANCH),
        booleanParam(name: 'portal', value: false),
        string(name: 'db', value: params.SCHEME_NAME),
    ]
}
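Building on that, here is a minimal sketch (not the original poster's code) of how the first PRODUCT/BRANCH combination from the question could be restructured: one stage per combination, a single when and steps block per stage, and the parallel builds wrapped in a script block. The job paths are taken from the question; the stage name is illustrative.
pipeline {
    agent any
    parameters {
        choice(choices: ['A', 'B'], description: 'Select which set of artifacts to trigger', name: 'PRODUCT')
        choice(choices: ['develop', 'release'], description: 'Select which branch to build the artifacts from', name: 'BRANCH')
    }
    stages {
        stage('Build A release') {
            when {
                expression { params.PRODUCT == 'A' && params.BRANCH == 'release' }
            }
            steps {
                script {
                    // the scripted parallel step takes a map of branch name -> closure
                    parallel(
                        artifact1: { build job: '../../Builds/artifact1/release' },
                        artifact2: { build job: '../../Builds/artifact2/release' },
                        artifact3: { build job: '../../Builds/artifact3/release' }
                    )
                }
            }
        }
        // ... repeat for the other PRODUCT/BRANCH combinations ...
    }
}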

Related

Is it possible to get build number even if build is unstable but not failed?

When building a job in a scripted pipeline, I would like to capture the downstream build's number even if that build is unstable but not failed.
pipeline {
agent any
stages {
stage('Job1') {
steps {
script {
Job1 = build job: 'Job1'
}
}
}
stage('Job2') {
steps {
build job: 'Job2',
parameters: [
string(
name: 'Job1_ID',
value: "${Job1.number}"
)
]
}
}
}
}
I have tried with catchError() around the Job1 build, but I still have that problem if the build is unstable.
I have also tried the propagate: false parameter, but then I can never see the actual status of the build visually; also, I don't want the second build to be triggered if the first one failed.
Is there any solution for that?
What you can do is set propagate: false and then conditionally execute your second Job. Please see the pipeline below.
pipeline {
    agent any
    stages {
        stage('Job1') {
            steps {
                script {
                    Job1 = build job: 'Job1', propagate: false
                }
            }
        }
        stage('Job2') {
            when { expression { return Job1.resultIsBetterOrEqualTo("SUCCESS") } }
            steps {
                build job: 'Job2',
                    parameters: [
                        string(name: 'Job1_ID', value: "${Job1.number}")
                    ]
            }
        }
    }
}
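Since the question specifically wants Job2 to run when Job1 is unstable but not failed, a small variation on the same condition (a sketch, not verified against the original setup) is to compare against UNSTABLE instead of SUCCESS:
// UNSTABLE is better than FAILURE, so an unstable (but not failed) Job1
// still triggers Job2 and still exposes Job1.number for the parameter.
when { expression { return Job1.resultIsBetterOrEqualTo("UNSTABLE") } }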

How to use a matrix section in a declarative pipeline

I have the following pipeline. I need this pipeline to run on 4 different nodes at the same time. I have read that using a matrix section within the declarative pipeline is key to making this work. How can I go about doing that with the pipeline below?
pipeline
{
stages
{
stage ('Test')
{
steps
{
script
{
def test_proj_choices = ['AD', 'CD', 'DC', 'DISP_A', 'DISP_PROC', 'EGI', 'FD', 'FLT', 'FMS_C', 'IFF', 'liblO', 'libNGC', 'libSC', 'MISCMP_MP', 'MISCMP_GP', 'NAV_MGR', 'RADALT', 'SYS', 'SYSIO15', 'SYSIO42', 'SYSRED', 'TACAN', 'VOR_ILS', 'VPA', 'WAAS', 'WCA']
for (choice in test_proj_choices)
{
stage ("${choice}")
{
echo "Running ${choice}"
build job: "UH60Job", parameters: [string(name: "TEST_PROJECT", value: choice), string(name: "SCADE_SUITE_TEST_ACTION", value: "all"), string(name: "VIEW_ROOT", value: "myview")]
}
}
}
}
}
}
}
One helpful article can be found here: https://www.jenkins.io/blog/2019/11/22/welcome-to-the-matrix/
The official documentation is here: https://www.jenkins.io/doc/book/pipeline/syntax/#declarative-matrix
Accordingly, the syntax should be:
pipeline {
    agent none
    stages {
        stage('Tests') {
            matrix {
                agent any
                axes {
                    axis {
                        name 'CHOICE'
                        values 'AD', 'CD', 'DC', 'DISP_A', 'DISP_PROC', 'EGI', 'FD', 'FLT', 'FMS_C', 'IFF', 'liblO', 'libNGC', 'libSC', 'MISCMP_MP', 'MISCMP_GP', 'NAV_MGR', 'RADALT', 'SYS', 'SYSIO15', 'SYSIO42', 'SYSRED', 'TACAN', 'VOR_ILS', 'VPA', 'WAAS', 'WCA'
                    }
                }
                stages {
                    stage("Test") {
                        steps {
                            echo "Running ${CHOICE}"
                            build job: "UH60Job", parameters: [string(name: "TEST_PROJECT", value: CHOICE), string(name: "SCADE_SUITE_TEST_ACTION", value: "all"), string(name: "VIEW_ROOT", value: "myview")]
                        }
                    }
                }
            }
        }
    }
}
Note that your inner stage cannot be named dynamically; you'd get a syntax error trying to expand "${CHOICE}".
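If the goal is specifically to spread the cells across four particular nodes at the same time (as the question asks), the matrix agent can be driven by a label axis instead of agent any. A sketch, assuming node labels node1 through node4 exist (they are placeholders, not taken from the question):
matrix {
    agent { label "${NODE}" }   // each cell runs on the node named by its axis value
    axes {
        axis {
            name 'NODE'
            values 'node1', 'node2', 'node3', 'node4'
        }
    }
    stages {
        stage('Test') {
            steps {
                echo "Running on ${NODE}"
            }
        }
    }
}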

Jenkins: Trigger another job with branch name

I am using a declarative Jenkins Pipeline and I would like to trigger another job with a branch name.
For instance, I have two different pipelines (PipelineA and PipelineB) with stages JobA and JobB.
One of the stages of JobA should trigger JobB via a parameter using env.GIT_BRANCH. What I mean is: if JobA is triggered via origin/develop, it should trigger JobB and run the stages that have an origin/develop condition.
Meanwhile, we are also making separate changes on JobB, and it has its own GIT_BRANCH expression, so I could not find a way to manage this separately without affecting JobA. To clarify: when JobA triggers JobB with the origin/stage parameter, JobB's own GIT_BRANCH is origin/development (because the latest changes on JobB are on origin/development), so I cannot run the stages that have the stage condition.
Here is my script.
stage ('Job A') {
steps {
script {
echo "Triggering job for branch ${env.GIT_BRANCH}"
ret = build(job: "selenium_tests",
parameters: [
string(name: "projectName", value: "Project1"),
string(name: "branchName", value: "env.GIT_BRANCH")
],
propagate: true,
wait: true)
echo ret.result
currentBuild.result = ret.result
}
}
}
parameters {
string(defaultValue: "project1", description: 'Which project do you want to test?', name: 'projectName')
string(defaultValue: "origin/development", description: 'Environment for selenium tests', name:'branchName')
}
stage ('Job B') {
when {
beforeAgent true
expression { params.projectName == 'Project1' }
expression { params.branchName == "origin/stage"}
expression{ return env.GIT_BRANCH == "origin/stage"}
}
steps {
script {
//Do something
}
}
}
Pass down one more parameter for the branch when triggering Job B:
stage('Trigger Job A') {}
stage('Trigger Job B') {
    when {
        beforeAgent true
        allOf {
            expression { params.projectName == 'Project1' }
            expression { return env.GIT_BRANCH == "origin/stage" }
        }
    }
    steps {
        build(job: "selenium_tests/Job B",
            parameters: [
                string(name: "projectName", value: "Project1"),
                string(name: "branchName", value: "${env.GIT_BRANCH}")
            ],
            propagate: true,
            wait: true)
    }
}
In Job B's Jenkinsfile, add one stage as the first stage to switch to the desired branch:
pipeline {
    parameters {
        string(name: 'branchName', defaultValue: 'develop')
    }
    stages {
        stage('Switch branch') {
            steps {
                sh "git checkout ${params.branchName}"
            }
        }
        // other stages
    }
}
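One caveat worth noting (an assumption based on the question passing env.GIT_BRANCH values such as origin/stage): checking out a remote-tracking ref like origin/develop leaves the workspace in a detached HEAD state, so stripping the origin/ prefix first is one way to end up on a local branch:
// hypothetical tweak to the checkout above
sh "git checkout ${params.branchName.replaceFirst('^origin/', '')}"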

Jenkins - matrix jobs - variables on different slaves overwrite each other?

I think I don't get how matrix builds work. When I set some variable in a stage depending on which node I run on, then in the rest of the stages that variable is sometimes set as it should be and sometimes gets values from other nodes (axes). In the example below, the job that runs on ub18-1 sometimes has VARIABLE1 = 'Linux node' and sometimes VARIABLE1 = 'Windows node'. Likewise, gitmethod is sometimes created from LinuxGitInfo and sometimes from WindowsGitInfo.
The source I based this on:
https://jenkins.io/doc/book/pipeline/syntax/#declarative-matrix
The script is almost exactly the same as the real one:
@Library('firstlibrary') _
import mylib.shared.*
pipeline {
parameters {
booleanParam name: 'AUTO', defaultValue: true, description: 'Auto mode sets some parameters for every slave separately'
choice(name: 'SLAVE_NAME', choices:['all', 'ub18-1','win10'],description:'Run on specific platform')
string(name: 'BRANCH',defaultValue: 'master', description: 'Preferably common label for entire group')
booleanParam name: 'SONAR', defaultValue: false, description: 'Scan and gateway'
booleanParam name: 'DEPLOY', defaultValue: false, description: 'Deploy to Artifactory'
}
agent none
stages{
stage('BuildAndTest'){
matrix{
agent {
label "$NODE"
}
when{ anyOf{
expression { params.SLAVE_NAME == 'all'}
expression { params.SLAVE_NAME == env.NODE}
}}
axes{
axis{
name 'NODE'
values 'ub18-1', 'win10'
}
}
stages{
stage('auto mode'){
when{
expression { return params.AUTO }
}
steps{
echo "Setting parameters for each slave"
script{
nodeLabelsList = env.NODE_LABELS.split()
if (nodeLabelsList.contains('ub18-1')){
println("Setting params for ub18-1");
VARIABLE1 = 'Linux node'
}
if (nodeLabelsList.contains('win10')){
println("Setting params for Win10");
VARIABLE1 = 'Windows node'
}
if (isUnix()){
gitmethod = new LinuxGitInfo(this,env)
} else {
gitmethod = new WindowsGitInfo(this, env)
}
}
}
}
stage('GIT') {
steps {
checkout scm
}
}
stage('Info'){
steps{
script{
sh 'printenv'
echo "branch: " + env.BRANCH_NAME
echo "SLAVE_NAME: " + env.NODE_NAME
echo VARIABLE1
gitinfo = new GitInfo(gitmethod)
gitinfo.init()
echo gitinfo.author
echo gitinfo.id
echo gitinfo.msg
echo gitinfo.buildinfo
}
}
}
stage('install'){
steps{
sh 'make install'
}
}
stage('test'){
steps{
sh 'make test'
}
}
}
}
}
}
}
OK, I solved the problem by defining variable maps with node/slave names as keys (the underlying issue being that plain script-level variables like VARIABLE1 are shared by all matrix cells, so cells running in parallel overwrite each other's values). A friend even suggested defining the variables in a YAML/JSON file in the repository and parsing it. Maybe I will, but so far this works well.
Example:
Before the pipeline:
def DEPLOYmap = [
    'ub18-1': false,
    'win10': true
]
In the stages:
when {
    equals expected: true, actual: DEPLOYmap[NODE]
}
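Following the same idea, a minimal sketch for VARIABLE1 (node names taken from the pipeline above): keep the per-node values in a map keyed by the axis value and look them up inside each cell, instead of assigning to one shared script variable.
// defined before the pipeline block, next to DEPLOYmap
def VARIABLE1map = [
    'ub18-1': 'Linux node',
    'win10' : 'Windows node'
]
// inside a matrix stage: NODE is the axis value, so each cell reads its own entry
steps {
    script {
        echo VARIABLE1map[NODE]
    }
}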

How to trigger a Jenkins Job on multiple nodes from a Pipeline (only one job executing)

I have a Jenkins Job, configured as a Scripted Jenkins Pipeline, which:
Checks out the code from GitHub
merges in developer changes
builds a debug image
it is then supposed to split into 3 separate parallel processes - one of which builds the release version of the code and unit tests it.
The other 2 processes are supposed to be identical, with the debug image being flashed onto a target and various tests running.
The targets are identified in Jenkins as slave_1 and slave_2 and are both allocated the label 131_ci_targets
I am using 'parallel' to trigger the release build, and the multiple instances of the test job. I will post a (slightly redacted) copy of my Scripted pipeline below for full reference, but for the question I have tried all 3 of the following options.
1: Using a single build call with LabelParameterValue and allNodesMatchingLabel set to true. In this, TEST_TARGETS is the label 131_ci_targets.
parallel_steps = [:]
parallel_steps["release"] = { // Release build and test steps
}
parallel_steps["${TEST_TARGETS}"] = {
stage("${TEST_TARGETS}") {
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'LabelParameterValue',
name: 'RUN_NODE', label: "${TEST_TARGETS}",
allNodesMatchingLabel: true,
nodeEligibility: [$class: 'AllNodeEligibility']]]
}
} // ${TEST_TARGETS}
stage('Parallel'){
parallel parallel_steps
} // Parallel
2: Using a single build call with NodeParameterValue and a list of all nodes. In this, TEST_TARGETS is again the label, while test_nodes is a list of 2 strings: [slave_1, slave_2].
parallel_steps = [:]
parallel_steps["release"] = { // Release build and test steps
}
test_nodes = hostNames("${TEST_TARGETS}")
parallel_steps["${TEST_TARGETS}"] = {
stage("${TEST_TARGETS}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: test_nodes,
nodeEligibility: [$class: 'AllNodeEligibility']]]
}
} // ${TEST_TARGETS}
stage('Parallel'){
parallel parallel_steps
} // Parallel
3: Using multiple stages, each with a single build call with NodeParameterValue and a list containing only one slave id.
Here test_nodes is the list of strings [slave_1, slave_2]; the first call passes slave_1 and the second slave_2.
for ( tn in test_nodes ) {
parallel_steps["${tn}"] = {
stage("${tn}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: [tn],
nodeEligibility: [$class: 'IgnoreOfflineNodeEligibility']]],
wait: false
}
} // ${tn}
}
All of the above will trigger only a single run of the 'Trial_Test_Pipe' on slave_2 assuming that both slave_1 and slave_2 are defined, online and have available executors.
The Trial_Test_Pipe job is another Jenkins Pipeline job, and has the checkbox "Do not allow concurrent builds" unchecked.
Any thoughts on:
Why the job will only trigger one of the runs, not both?
What the correct solution may be?
For reference now: here is my full(ish) scripted Jenkins job:
import hudson.model.*
import hudson.EnvVars
import groovy.json.JsonSlurperClassic
import groovy.json.JsonBuilder
import groovy.json.JsonOutput
import java.net.URL
def BUILD_SLAVE=""
// clean the workspace before starting the build process
def clean_before_build() {
bat label:'',
script: '''cd %GITHUB_REPO_PATH%
git status
git clean -x -d -f
'''
}
// Routine to build the firmware
// Can build Debug or Release depending on the environment variables
def build_the_firmware() {
return
def batch_script = """
REM *** Build script here
echo "... Build script here ..."
"""
bat label:'',
script: batch_script
}
// Copy the hex files out of the Build folder and into the Jenkins workspace
def copy_hex_files_to_workspace() {
return
def batch_script = """
REM *** Copy HEX file to workspace:
echo "... Copy HEX file to workspace ..."
"""
bat label:'',
script: batch_script
}
// Updated from stackOverflow answer: https://stackoverflow.com/a/54145233/1589770
@NonCPS
def hostNames(label) {
nodes = []
jenkins.model.Jenkins.instance.computers.each { c ->
if ( c.isOnline() ){
labels = c.node.labelString
labels.split(' ').each { l ->
if (l == label) {
nodes.add(c.node.selfLabel.name)
}
}
}
}
return nodes
}
try {
node('Build_Slave') {
BUILD_SLAVE = "${env.NODE_NAME}"
echo "build_slave=${BUILD_SLAVE}"
stage('Checkout Repo') {
// Set a description on the build history to make for easy identification
currentBuild.setDescription("Pull Request: ${PULL_REQUEST_NUMBER} \n${TARGET_BRANCH}")
echo "... checking out dev code from our repo ..."
} // Checkout Repo
stage ('Merge PR') {
// Merge the base branch into the target for test
echo "... Merge the base branch into the target for test ..."
} // Merge PR
stage('Build Debug') {
withEnv(['LIB_MODE=Debug', 'IMG_MODE=Debug', 'OUT_FOLDER=Debug']){
clean_before_build()
build_the_firmware()
copy_hex_files_to_workspace()
archiveArtifacts "${LIB_MODE}\\*.hex, ${LIB_MODE}\\*.map"
}
} // Build Debug
stage('Post Build') {
if (currentBuild.resultIsWorseOrEqualTo("UNSTABLE")) {
echo "... Send a mail to the Admins and the Devs ..."
}
} // Post Merge
} // node
parallel_steps = [:]
parallel_steps["release"] = {
node("${BUILD_SLAVE}") {
stage('Build Release') {
withEnv(['LIB_MODE=Release', 'IMG_MODE=Release', 'OUT_FOLDER=build\\Release']){
clean_before_build()
build_the_firmware()
copy_hex_files_to_workspace()
archiveArtifacts "${LIB_MODE}\\*.hex, ${LIB_MODE}\\*.map"
}
} // Build Release
stage('Unit Tests') {
echo "... do Unit Tests here ..."
}
}
} // release
test_nodes = hostNames("${TEST_TARGETS}")
if (true) {
parallel_steps["${TEST_TARGETS}"] = {
stage("${TEST_TARGETS}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'LabelParameterValue',
name: 'RUN_NODE', label: "${TEST_TARGETS}",
allNodesMatchingLabel: true,
nodeEligibility: [$class: 'AllNodeEligibility']]]
}
} // ${TEST_TARGETS}
} else if ( false ) {
parallel_steps["${TEST_TARGETS}"] = {
stage("${TEST_TARGETS}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: test_nodes,
nodeEligibility: [$class: 'AllNodeEligibility']]]
}
} // ${TEST_TARGETS}
} else {
for ( tn in test_nodes ) {
parallel_steps["${tn}"] = {
stage("${tn}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: [tn],
nodeEligibility: [$class: 'IgnoreOfflineNodeEligibility']]],
wait: false
}
} // ${tn}
}
}
stage('Parallel'){
parallel parallel_steps
} // Parallel
} // try
catch (Exception ex) {
if ( manager.logContains(".*Merge conflict in .*") ) {
manager.addWarningBadge("Pull Request ${PULL_REQUEST_NUMBER} Experienced Git Merge Conflicts.")
manager.createSummary("warning.gif").appendText("<h2>Experienced Git Merge Conflicts!</h2>", false, false, false, "red")
}
echo "... Send a mail to the Admins and the Devs ..."
throw ex
}
So ... I have a solution for this ... as in, I understand what to do, and why one of the above solutions wasn't working.
The winner is Option 3. The reason it wasn't working is that the code inside the closure (the stage part) isn't evaluated until the stage is actually run. As a result the strings aren't expanded until then, and since tn is fixed at slave_2 by that point, that's the value used on both parallel streams.
In the Jenkins examples here ... [https://jenkins.io/doc/pipeline/examples/#parallel-from-grep] ... the closures are returned from a function transformIntoStep, and by doing this I was able to force early evaluation of the strings and so get parallel steps running on both slaves.
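In isolation, the difference looks something like this (a stripped-down sketch; the node names are placeholders):
// Broken: the closure only reads 'tn' when its parallel branch runs, and by
// then the loop has finished, so every branch sees the last value (slave_2).
def broken = [:]
for (tn in ['slave_1', 'slave_2']) {
    broken[tn] = { echo "running for ${tn}" }
}
// Working: returning the closure from a function captures the value passed in,
// so each branch keeps its own node id.
def makeStep(nodeId) {
    return { echo "running for ${nodeId}" }
}
def steps_by_node = [:]
for (tn in ['slave_1', 'slave_2']) {
    steps_by_node[tn] = makeStep(tn)
}
parallel steps_by_node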
If you're here looking for answers, I hope this helps. If you are, and it has, please feel free to give me an uptick. Cheers :)
My final scripted jenkinsfile looks something like this:
import hudson.model.*
import hudson.EnvVars
import groovy.json.JsonSlurperClassic
import groovy.json.JsonBuilder
import groovy.json.JsonOutput
import java.net.URL
BUILD_SLAVE=""
parallel_steps = [:]
// clean the workspace before starting the build process
def clean_before_build() {
bat label:'',
script: '''cd %GITHUB_REPO_PATH%
git status
git clean -x -d -f
'''
}
// Routine to build the firmware
// Can build Debug or Release depending on the environment variables
def build_the_firmware() {
def batch_script = """
REM *** Build script here
echo "... Build script here ..."
"""
bat label:'',
script: batch_script
}
// Copy the hex files out of the Build folder and into the Jenkins workspace
def copy_hex_files_to_workspace() {
def batch_script = """
REM *** Copy HEX file to workspace:
echo "... Copy HEX file to workspace ..."
"""
bat label:'',
script: batch_script
}
// Updated from stackOverflow answer: https://stackoverflow.com/a/54145233/1589770
@NonCPS
def hostNames(label) {
nodes = []
jenkins.model.Jenkins.instance.computers.each { c ->
if ( c.isOnline() ){
labels = c.node.labelString
labels.split(' ').each { l ->
if (l == label) {
nodes.add(c.node.selfLabel.name)
}
}
}
}
return nodes
}
def transformTestStep(nodeId) {
return {
stage(nodeId) {
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: TARGET_BRANCH),
string(name: 'FRAMEWORK_VERSION', value: FRAMEWORK_VERSION),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: [nodeId],
nodeEligibility: [$class: 'IgnoreOfflineNodeEligibility']]],
wait: false
}
}
}
def transformReleaseStep(build_slave) {
return {
node(build_slave) {
stage('Build Release') {
withEnv(['LIB_MODE=Release', 'IMG_MODE=Release', 'OUT_FOLDER=build\\Release']){
clean_before_build()
build_the_firmware()
copy_hex_files_to_workspace()
archiveArtifacts "${LIB_MODE}\\*.hex, ${LIB_MODE}\\*.map"
}
} // Build Release
stage('Unit Tests') {
echo "... do Unit Tests here ..."
}
}
}
}
try {
node('Build_Slave') {
BUILD_SLAVE = "${env.NODE_NAME}"
echo "build_slave=${BUILD_SLAVE}"
parallel_steps["release"] = transformReleaseStep(BUILD_SLAVE)
test_nodes = hostNames("${TEST_TARGETS}")
for ( tn in test_nodes ) {
parallel_steps[tn] = transformTestStep(tn)
}
stage('Checkout Repo') {
// Set a description on the build history to make for easy identification
currentBuild.setDescription("Pull Request: ${PULL_REQUEST_NUMBER} \n${TARGET_BRANCH}")
echo "... checking out dev code from our repo ..."
} // Checkout Repo
stage ('Merge PR') {
// Merge the base branch into the target for test
echo "... Merge the base branch into the target for test ..."
} // Merge PR
stage('Build Debug') {
withEnv(['LIB_MODE=Debug', 'IMG_MODE=Debug', 'OUT_FOLDER=Debug']){
clean_before_build()
build_the_firmware()
copy_hex_files_to_workspace()
archiveArtifacts "${LIB_MODE}\\*.hex, ${LIB_MODE}\\*.map"
}
} // Build Debug
stage('Post Build') {
if (currentBuild.resultIsWorseOrEqualTo("UNSTABLE")) {
echo "... Send a mail to the Admins and the Devs ..."
}
} // Post Merge
} // node
stage('Parallel'){
parallel parallel_steps
} // Parallel
} // try
catch (Exception ex) {
if ( manager.logContains(".*Merge conflict in .*") ) {
manager.addWarningBadge("Pull Request ${PULL_REQUEST_NUMBER} Experienced Git Merge Conflicts.")
manager.createSummary("warning.gif").appendText("<h2>Experienced Git Merge Conflicts!</h2>", false, false, false, "red")
}
echo "... Send a mail to the Admins and the Devs ..."
throw ex
}
