Jenkins Groovy - copy file based on string in params

I have a Jenkins job that needs to copy a file to a specific server per the user's choice. Until today everything worked, since I needed to copy the same file to whichever server the user chose.
Now I need to copy a specific file per server. In case the user chooses to deploy Server_lab1-1.1.1.1, the file lab1.file.conf should be copied; in case the user chooses to deploy Server_lab2-2.2.2.2, lab2.file.conf should be copied.
I'm guessing that I need to extend the function:
check if the Servers parameter includes lab1 and, if so, copy lab1.file.conf; if it includes lab2, copy lab2.file.conf.
parameters {
    extendedChoice(name: 'Servers', description: 'Select servers for deployment', multiSelectDelimiter: ',',
            type: 'PT_CHECKBOX', value: 'Server_lab1-1.1.1.1,Server_lab2-2.2.2.2', visibleItemCount: 5)
}
stage ('Copy nifi.flow.properties file') {
    steps { copy_file() }
}
def copy_file() {
    params.Servers.split(',').each { item ->
        def server = item.split('-').last()
        sh "scp **lab1.file.conf or lab2.file.conf** ${ssh_user_name}@${server}:${spath}"
    }
}

Are you looking for something like below?
def copy_file() {
    params.Servers.split(',').each { item ->
        def server = item.split('-').last()
        def fileName = item.contains('lab1') ? 'lab1.file' : 'lab2.file'
        sh "scp ${fileName} ${ssh_user_name}@${server}:${spath}"
    }
}
Update: the classic if-else version:
def copy_file() {
    params.Servers.split(',').each { item ->
        def server = item.split('-').last()
        def fileName = "default"
        if (item.contains('lab1')) {
            fileName = 'lab1.file'
        } else if (item.contains('lab2')) {
            fileName = 'lab2.file'
        } else if (item.contains('lab3')) {
            fileName = 'lab3.file'
        }
        sh "scp ${fileName} ${ssh_user_name}@${server}:${spath}"
    }
}
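If more labs keep appearing, a map lookup can replace the growing if-else chain. A sketch, assuming the same lab-prefix naming convention as above:
def copy_file() {
    // map each lab prefix to its config file; extend as new labs are added
    def filesByLab = [lab1: 'lab1.file.conf', lab2: 'lab2.file.conf', lab3: 'lab3.file.conf']
    params.Servers.split(',').each { item ->
        def server = item.split('-').last()
        // first map entry whose key appears in the selected item, or null
        def entry = filesByLab.find { lab, file -> item.contains(lab) }
        if (entry) {
            sh "scp ${entry.value} ${ssh_user_name}@${server}:${spath}"
        } else {
            echo "No config file mapped for ${item}, skipping"
        }
    }
}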

Related

How to integrate Jenkins pipeline jobs and pass dynamic variables using Groovy?

I want to integrate Jenkins jobs using Groovy by passing dynamic variables based on the projects for which the job is triggered.
Can anyone please suggest how to proceed with this?
It looks like you would like to persist data between two Jenkins jobs, or between two runs of the same Jenkins job. In both cases I was able to do this using files: you can use the writeFile step from Groovy, or the redirection operator (>) if you prefer plain bash.
In the first job, you can write to the file like so:
node {
    // write to file
    writeFile(file: 'variables.txt', text: 'myVar:value')
    sh 'ls -l variables.txt'
}
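Or, using the bash redirection operator mentioned above instead of the writeFile step:
node {
    // plain-shell equivalent of the writeFile step
    sh 'echo "myVar:value" > variables.txt'
}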
In the second job, you can read from that file and empty its contents after you read it:
stage('read file contents') {
    // read from the file
    println readFile(file: 'variables.txt')
}
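The snippet above only reads; to actually empty the file afterwards, a minimal sketch would overwrite it with an empty string:
stage('read and clear file') {
    println readFile(file: 'variables.txt')
    // clear the contents so the next run starts fresh
    writeFile(file: 'variables.txt', text: '')
}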
The file can be anywhere on the filesystem. An example with a file created in the /tmp folder follows; you should be able to run this pipeline by copy-pasting it.
node {
    def fileName = "/tmp/hello.txt"
    stage('Preparation') {
        sh 'pwd && rm -rf *'
    }
    stage('write to file') {
        writeFile(file: fileName, text: "myvar:hello", encoding: "UTF-8")
    }
    stage('read file contents') {
        println readFile(file: fileName)
    }
}
You could also use this file as a properties file: update a property that exists and append ones that don't. A quick code sample for that looks like the below.
node {
    def fileName = "/tmp/hello.txt"
    stage('Preparation') {
        sh 'pwd && rm -rf *'
    }
    stage('write to file') {
        writeFile(file: fileName, text: "myvar:hello", encoding: "UTF-8")
    }
    stage('read file contents') {
        println readFile(file: fileName)
    }
    // Add property
    stage('Add property') {
        def existingContents = ''
        if (fileExists(fileName)) {
            existingContents = readFile(fileName)
        }
        def newProperty = "newvar:newValue"
        writeFile(file: fileName, text: existingContents + "\n" + newProperty)
        println readFile(file: fileName)
    }
}
You could also easily delete a line containing a property if you wanted to get rid of it.
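A minimal sketch of such a deletion, assuming the key:value format used above (removeProperty and its arguments are illustrative names):
def removeProperty(String fileName, String key) {
    // keep every line except the one that defines the given key
    def remaining = readFile(file: fileName).readLines().findAll { !it.startsWith(key + ':') }
    writeFile(file: fileName, text: remaining.join('\n'))
}
// e.g. removeProperty('/tmp/hello.txt', 'myvar')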

How to run a Jenkins pipeline on all or some servers, based on user choice

I have a Jenkins pipeline that copies a file to a server. In the job, I have defined 3 servers with their IPs.
What I need to achieve is that a user can choose which servers to deploy the copy to, by typing yes or no under deploy_on_server_x.
In my original pipeline I'm using a list of IPs, but the request is as I mentioned above.
How can I implement this request?
Thanks
server_1_IP - '1.1.1.1'
server_2_IP - '1.1.1.2'
server_3_IP - '1.1.1.3'
deploy_on_server_1 = 'yes'
deploy_on_server_2 = 'yes'
deploy_on_server_3 = 'no'
pipeline {
    agent { label 'client-1' }
    stages {
        stage('Connect to git') {
            steps {
                git branch: 'xxxx', credentialsId: 'yyy', url: 'https://zzzz'
            }
        }
        stage ('Copy file') {
            when { deploy == yes }
            steps {
                dir('folder_a') {
                    file_copy(server_list)
                }
            }
        }
    }
}
def file_copy(list) {
    list.each { item ->
        sh "echo Copy file"
        sh "scp 11.txt user@${item}:/data/"
    }
}
How about using checkboxes instead?
You can use the Extended Choice Parameter plugin to create a checkbox list from the server values. When the user builds the job, they select the relevant servers, and the list of selected values is propagated to the job, where you can use it in your logic.
Something like:
pipeline {
    agent { label 'client-1' }
    parameters {
        extendedChoice(name: 'Servers', description: 'Select servers for deployment', multiSelectDelimiter: ',',
                type: 'PT_CHECKBOX', value: '1.1.1.1,1.1.1.2,1.1.1.3', visibleItemCount: 5)
    }
    stages {
        stage('Connect to git') {
            steps {
                git branch: 'xxxx', credentialsId: 'yyy', url: 'https://zzzz'
            }
        }
        stage ('Copy files') {
            steps {
                dir('folder_a') {
                    script {
                        params.Servers.split(',').each { server ->
                            sh "echo Copy file to ${server}"
                            sh "scp 11.txt user@${server}:/data/"
                        }
                    }
                }
            }
        }
    }
}
In the UI it will be displayed as a checkbox list of the configured servers.
You can also use a multi-select list instead of checkboxes; or, if you want to allow only a single value, radio buttons or a single-select list.
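For example, a radio-button variant only changes the parameter type (a sketch, assuming the same Extended Choice Parameter plugin; PT_RADIO is its radio-button type):
parameters {
    extendedChoice(name: 'Server', description: 'Select a single server for deployment',
            type: 'PT_RADIO', value: '1.1.1.1,1.1.1.2,1.1.1.3', visibleItemCount: 5)
}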
If you want the user to see different values than those used in the code, that is also possible, because you can manipulate the input value with Groovy before using it.
For example, if you want the user-facing options to be <Hostname>-<IP>, you can update the parameter value to something like value: 'server1-1.1.1.1,server2-2.2.2.2', then extract the relevant IP from the given values in your code:
script {
    params.Servers.split(',').each { item ->
        def server = item.split('-').last()
        sh "echo Copy file to ${server}"
        sh "scp 11.txt user@${server}:/data/"
    }
}

How to pass parameters and variables from a file to jenkinsfile?

I'm trying to convert my Jenkins pipeline to a shared library, since it can be reused across most of our applications. As part of that I created a Groovy file in the vars folder, kept the pipeline in a Jenkinsfile in GitHub, and was able to call it from Jenkins successfully.
As part of improving this, I want to pass params, variables, and node labels through a file, so that we never have to touch the Jenkins pipeline itself; if we want to modify any vars or params, we do it in the Git repo.
pipeline {
    agent {
        node {
            label 'jks_deployment'
        }
    }
    environment {
        ENV_CONFIG_ID = 'jenkins-prod'
        ENV_CONFIG_FILE = 'test.groovy'
        ENV_PLAYBOOK_NAME = 'test.tar.gz'
    }
    parameters {
        string (
            defaultValue: 'test.x86_64',
            description: 'Enter app version',
            name: 'app_version'
        )
        choice (
            choices: ['10.0.0.1','10.0.0.2','10.0.0.3'],
            description: 'Select a host to be deployed',
            name: 'host'
        )
    }
    stages {
        stage("reading properties from properties file") {
            steps {
                // Use a script block to do custom scripting
                script {
                    def props = readProperties file: 'extravars.properties'
                    env.var1 = props.var1
                    env.var2 = props.var2
                }
                echo "The variable 1 value is $var1"
                echo "The variable 2 value is $var2"
            }
        }
    }
}
In the above code I used the Pipeline Utility Steps plugin and was able to read variables from the extravars.properties file. Can we do the same for Jenkins parameters? Or is there a suitable method for passing these parameters via a file from the Git repo?
Also, is it possible to pass a variable for the node label?
=====================================================================
Below are the improvements I have made in this project:
I used the node label plugin to pass the node name as a variable.
Below is my vars/sayHello.groovy file content:
def call(body) {
    // evaluate the body block, and collect configuration into the object
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()
    pipeline {
        agent {
            node {
                label "${pipelineParams.slaveName}"
            }
        }
        stages {
            stage("reading properties from properties file") {
                steps {
                    // Use a script block to do custom scripting
                    script {
                        readProperties(file: 'extravars.properties').each { key, value -> env[key] = value }
                    }
                    echo "The variable 1 value is $var1"
                    echo "The variable 2 value is $var2"
                }
            }
            stage ('stage2') {
                steps {
                    sh "echo ${var1}"
                    sh "echo ${var2}"
                    sh "echo ${pipelineParams.appVersion}"
                    sh "echo ${pipelineParams.hostIp}"
                }
            }
        }
    }
}
Below is my vars/params.groovy file
properties([
    parameters([
        choice(choices: ['10.80.66.171','10.80.67.6','10.80.67.200'], description: 'Select a host to be deployed', name: 'host'),
        string(defaultValue: 'fxxxxx.x86_64', description: 'Enter app version', name: 'app_version')
    ])
])
Below is my Jenkinsfile:
def _hostIp = params.host
def _appVersion = params.app_version
sayHello {
slaveName = 'master'
hostIp = _hostIp
appVersion = _appVersion
}
Is there anything we can still improve? If you have any suggestions, let me know.

Creating XML file using StreamingMarkupBuilder() in Jenkins

I have a Groovy method that creates XML files. I have verified it using the groovyConsole, but when I use this snippet in my Jenkinsfile, the XML file is not created in the workspace, although the job completes successfully.
Question: how do I make sure that the XML file is generated in the workspace? I will be using this XML in subsequent stages of the Jenkinsfile.
Here is what the Jenkinsfile looks like:
import groovy.xml.*

node('master') {
    deleteDir()
    stage('Checkout') {
        // checks out the code
    }
    generateXML("deploy.xml") // This calls the method to generate the XML file
    // stage for packaging
    // stage to publish
    // stage to deploy
}
@NonCPS
def generateXML(file1) {
    println "Generating the manifest XML........"
    def workflows = [
        [ name: 'A', file: 'fileA', objectName: 'wf_A', objectType: 'workflow', sourceRepository: 'DEV2', folderNames: [ multifolder: '{{multifolderTST}}', multifolder2: '{{multifolderTST2}}' ]],
        [ name: 'B', file: 'fileB', objectName: 'wf_B', objectType: 'workflow', sourceRepository: 'DEV2', folderNames: [ multifolder3: '{{multifolderTST3}}', multifolder4: '{{multifolderTST4}}' ]]
    ]
    def builder = new StreamingMarkupBuilder()
    builder.encoding = 'UTF-8'
    new File(file1).newWriter() << builder.bind {
        mkp.xmlDeclaration()
        mkp.declareNamespace(udm: 'http://www.w3.org/2001/XMLSchema')
        mkp.declareNamespace(powercenter: 'http://www.w3.org/2001/XMLSchema')
        delegate.udm.DeploymentPackage(version: '$BUILD_NUMBER', application: "informaticaApp") {
            delegate.deployables {
                workflows.each { item ->
                    delegate.powercenter.PowercenterXml(name: item.name, file: item.file) {
                        delegate.scanPlaceholders(true)
                        delegate.sourceRepository(item.sourceRepository)
                        delegate.folderNameMap {
                            item.folderNames.each { name, value ->
                                it.entry(key: name, value)
                            }
                        }
                        delegate.objectNames {
                            delegate.value(item.objectName)
                        }
                        delegate.objectTypes {
                            delegate.value(item.objectType)
                        }
                    }
                }
            }
            delegate.dependencyResolution('LATEST')
            delegate.undeployDependencies(false)
        }
    }
}
I found the file in the / directory, as I hadn't given any path to the file writer.
UPDATE:
This is not the right solution for a distributed environment: the java.io file operations only work on the master, not on agent machines.
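One way around that limitation is to have the builder return the markup as a String and write it with the writeFile step, which targets the current node's workspace. Since pipeline steps cannot be called from @NonCPS code, a minimal sketch (generateXMLString is an illustrative rename) splits the work like this:
node {
    // writeFile runs in the node's workspace, unlike java.io.File
    def xml = generateXMLString()
    writeFile(file: 'deploy.xml', text: xml, encoding: 'UTF-8')
}

@NonCPS
def generateXMLString() {
    def builder = new groovy.xml.StreamingMarkupBuilder()
    builder.encoding = 'UTF-8'
    return builder.bind {
        // ... same markup as in generateXML above ...
    }.toString()
}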

How to tell Jenkins "Build every project in folder X"?

I have set up some folders (Using Cloudbees Folder Plugin).
It seems like it should be the simplest possible command to tell Jenkins: build every job in Folder X.
I do not want to have to manually create a comma-separated list of every job in the folder. I do not want to add to this list whenever I want to add a job to this folder. I simply want it to find all the jobs in the folder at run time, and try to build them.
I'm not finding a plugin that lets me do that.
I've tried using the Build Pipeline Plugin, the Bulk Builder Plugin, the MultiJob plugin, and a few others. None seem to support the use case I'm after. I simply want any Job in the folder to be built. In other words, adding a job to this build is as simple as creating a job in this folder.
How can I achieve this?
I've been using Jenkins for some years and I've not found a way of doing what you're after.
The best I've managed is:
I have a "run every job" job (which contains a comma-separated list of all the jobs you want).
Then I have a separate job that runs periodically and updates the "run every job" job as new projects come and go.
One way to do this is to create a Pipeline job that runs a Groovy script to enumerate all jobs in the current folder and then launches them.
The version below requires the sandbox to be disabled (so it can access Jenkins.instance).
def names = jobNames()
for (i = 0; i < names.size(); i++) {
    build job: names[i], wait: false
}
@NonCPS
def jobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    def childItems = project.parent.items
    def targets = []
    for (i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == project.fullName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
If you use Pipeline libraries, then the following is much nicer (and does not require you to allow a Groovy sandbox escape).
Add the following to your library:
package myorg;

public String runAllSiblings(jobName) {
    def names = siblingProjects(jobName)
    for (def i = 0; i < names.size(); i++) {
        build job: names[i], wait: false
    }
}

@NonCPS
private List siblingProjects(jobName) {
    def project = Jenkins.instance.getItemByFullName(jobName)
    def childItems = project.parent.items
    def targets = []
    for (def i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == jobName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
And then create a pipeline with the following code:
(new myorg.JobUtil()).runAllSiblings(currentBuild.fullProjectName)
Yes, there are ways to simplify this further, but it should give you some ideas.
I developed a Groovy script that does this, and it works very nicely. There are two jobs: initBuildAll, which runs the Groovy script, and the 'buildAllJobs' job that it launches. In my setup, I launch the initBuildAll script daily; you could trigger it another way that works for you. We aren't fully CI, so daily is good enough for us.
One caveat: these jobs are all independent of one another. If that's not your situation, this may need some tweaking.
These jobs live in a separate folder called MultiBuild; the jobs to be built are in a folder called Projects.
import com.cloudbees.hudson.plugins.folder.Folder
import javax.xml.transform.stream.StreamSource
import hudson.model.AbstractItem
import hudson.XmlFile
import jenkins.model.Jenkins

Folder findFolder(String folderName) {
    for (folder in Jenkins.instance.items) {
        if (folder.name == folderName) {
            return folder
        }
    }
    return null
}

AbstractItem findItem(Folder folder, String itemName) {
    for (item in folder.items) {
        if (item.name == itemName) {
            return item
        }
    }
    null
}

AbstractItem findItem(String folderName, String itemName) {
    Folder folder = findFolder(folderName)
    folder ? findItem(folder, itemName) : null
}

String listProjectItems() {
    Folder projectFolder = findFolder('Projects')
    StringBuilder b = new StringBuilder()
    if (projectFolder) {
        for (job in projectFolder.items.sort { it.name.toUpperCase() }) {
            b.append(',').append(job.fullName)
        }
        return b.substring(1) // dump the initial comma
    }
    return b.toString()
}

File backupConfig(XmlFile config) {
    File backup = new File("${config.file.absolutePath}.bak")
    FileWriter fw = new FileWriter(backup)
    config.writeRawTo(fw)
    fw.close()
    backup
}

boolean updateMultiBuildXmlConfigFile() {
    AbstractItem buildItemsJob = findItem('MultiBuild', 'buildAllProjects')
    XmlFile oldConfig = buildItemsJob.getConfigFile()
    String latestProjectItems = listProjectItems()
    String oldXml = oldConfig.asString()
    String newXml = oldXml;
    println latestProjectItems
    println oldXml
    def mat = newXml =~ '\\<projects\\>(.*)\\<\\/projects\\>'
    if (mat) {
        println mat.group(1)
        if (mat.group(1) == latestProjectItems) {
            println 'no Change'
            return false;
        } else {
            // there's a change
            File backup = backupConfig(oldConfig)
            def newProjects = "<projects>${latestProjectItems}</projects>"
            newXml = mat.replaceFirst(newProjects)
            XmlFile newConfig = new XmlFile(oldConfig.file)
            FileWriter nw = new FileWriter(newConfig.file)
            nw.write(newXml)
            nw.close()
            println newXml
            println 'file updated'
            return true
        }
    }
    false
}

void reloadMultiBuildConfig() {
    AbstractItem job = findItem('MultiBuild', 'buildAllProjects')
    def configXMLFile = job.getConfigFile();
    def file = configXMLFile.getFile();
    InputStream is = new FileInputStream(file);
    job.updateByXml(new StreamSource(is));
    job.save();
    println "MultiBuild Job updated"
}

if (updateMultiBuildXmlConfigFile()) {
    reloadMultiBuildConfig()
}
A slight variant on Wayne Booth's "run every job" approach: after a little head scratching, I was able to define a "run every job" job in Job DSL format.
The advantage is that I can maintain my job configuration in version control. E.g.:
job('myfolder/build-all') {
    publishers {
        downstream('myfolder/job1')
        downstream('myfolder/job2')
    }
}
Pipeline Job
When running as a Pipeline job you may use something like:
echo jobNames.join('\n')
jobNames.each {
    build job: it, wait: false
}
@NonCPS
def getJobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    project.parent.items.findAll {
        it.fullName != project.fullName && it instanceof hudson.model.Job
    }.collect { it.fullName }
}
Script Console
The following code snippet can be used from the Script Console to schedule all jobs in some folder:
import hudson.model.AbstractProject

Jenkins.instance.getAllItems(AbstractProject.class).each {
    if (it.fullName =~ 'path/to/folder') {
        (it as AbstractProject).scheduleBuild2(0)
    }
}
With some modification you'd be able to create a Jenkins shared library method (it requires running outside the sandbox and needs @NonCPS), like:
import hudson.model.AbstractProject

@NonCPS
def triggerItemsInFolder(String folderPath) {
    Jenkins.instance.getAllItems(AbstractProject.class).each {
        if (it.fullName =~ folderPath) {
            (it as AbstractProject).scheduleBuild2(0)
        }
    }
}
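Calling it from a pipeline would then look something like:
triggerItemsInFolder('path/to/folder')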
A reference pipeline script to run a parent job that triggers other jobs, as suggested by @WayneBooth:
pipeline {
    agent any
    stages {
        stage('Parallel Stage') {
            parallel {
                stage('Parallel 1') {
                    steps {
                        build(job: "jenkins_job_1")
                    }
                }
                stage('Parallel 2') {
                    steps {
                        build(job: "jenkins_job_2")
                    }
                }
            }
        }
    }
}
The best way to run an ad-hoc command like that is the Script Console (which can be found under Manage Jenkins).
The console allows running Groovy scripts that control Jenkins functionality; the API documentation can be found in the Jenkins JavaDoc.
A simple script that immediately triggers all Multi-Branch Pipeline projects under a given folder structure (in this example, folder/subfolder/projectName):
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
import hudson.model.Cause.UserIdCause

Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
    return it.fullName =~ '^folder/subfolder/'
}.each {
    it.scheduleBuild(0, new UserIdCause())
}
The script was tested against Jenkins 2.324.
