For loop to call multiple variables from a properties file - Jenkins

I have a properties file which I call inside my Jenkins Pipeline Script to get multiple variables.
BuildCounter = n
BuildName1 = Name 1
BuildName2 = Name 2
...
BuildNameN = Name n
I call my properties file with: def props = readProperties file: Path
Now I want to create a loop to print all my BuildNames
for (i = 0; i < BuildJobCounterInt; i++) {
    tmp = 'BuildName' + i+1
    println props.tmp
}
But of course this is not working: the last println call searches for a key literally named tmp in my properties file. Is there a way to do this, or is my approach completely wrong?
EDIT:
This is my .properties file:
BuildJobCounter = 1
BuildName1 = 'Win32'
BuildPath1 = '_Build/MBE3_Win32'
BuildName2 = 'empty'
BuildPath2 = 'empty'
TestJobCounter = '0'
TestName1 = 'empty'
TestPath1 = 'empty'
TestName2 = 'empty'
TestPath2 = 'empty'
In my Jenkins pipeline I want to be able to check the number of Build/Test jobs and call the jobs automatically (each BuildName/BuildPath pair identifies a Freestyle job). To call all these jobs I thought of reading the variables inside a for loop, so that for every step i I have the Name/Path pair.

Try the below:
Change from:
println props.tmp
To:
println props[tmp]
or
println props."$tmp"
EDIT: based on OP comment
change from:
tmp = 'BuildName' + i+1
To:
def tmp = "BuildName${(i+1).toString()}"

Related

Jenkins pass trigger block to Shared Library Pipeline

I have a Shared Library containing a declarative pipeline which numerous jobs are using to build with.
However I want to be able to pass the trigger block in from the Jenkinsfile to the Shared Library as some jobs I want to trigger via Cron, others I want to trigger via SNS and others with an upstream job trigger.
Is there a way I can do this? Everything I have tried so far fails.
I have tried
// Jenkinsfile
@Library('build') _
buildAmi {
    OS = "amazonlinux"
    owners = "amazon"
    filters = "\"Name=name,Values=amzn-ami-hvm-*-x86_64-gp2\""
    template = "linux_build_template.json"
    trigger = triggers {
        cron('0 H(06-07) * * *')
    }
}
// Shared library
pipeline {
    $buildArgs.trigger
which fails with
Not a valid section definition
I have also tried passing just the cron schedule into the shared library, e.g.
triggers {
    cron("${buildArgs.cron}")
}
but that gives the error
Method calls on objects not allowed outside "script" blocks
I have tried various other things, but it seems the declarative style requires a triggers block with only trigger definitions inside.
Does anyone know of a way to achieve what I am trying to do?
Too much code for a comment:
We wanted to merge some preset properties with Jenkinsfile properties.
This is the way to get this to work since the properties method can only be called once.
We used an additional pipeline parameter with the trigger info:
Jenkinsfile:
mypipeline(
    pipelineTriggers: [pollSCM(scmpoll_spec: '0 5,11,15,22 * * 1-5')],
)
Then in the shared library:
import groovy.transform.Field

@Field def pipelineProperties = [
    disableConcurrentBuilds(abortPrevious: true),
    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '0'))
]
def pipelineTriggersArgs = args.pipelineTriggers
def setJobProperties() {
def props = pipelineProperties
def str = ""
pipelineProperties.each { str += (blue + it.getClass().toString() + it.toString() + "\n") }
if (pipelineTriggersArgs) {
if (debugLevel > 29) {
def plTrig = pipelineTriggers(pipelineTriggersArgs)
str += (mag + "args: " + pipelineTriggersArgs.getClass().toString() + "\t\t" + pipelineTriggersArgs.toString() + "\n" + off)
str += (red + "expr: " + plTrig.getClass().toString() + "\t\t" + plTrig.toString() + "\n" + off)
echo str
}
props?.add(pipelineTriggers(pipelineTriggersArgs)) // pass param with list as the arg
}
properties(props)
}
That was the only way to merge the preset properties and the parameters, since properties cannot be overwritten. Not exactly the answer, but an approach to solving it, I think.
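For completeness, a hedged sketch of how the shared-library entry point could wire this together; the step name mypipeline, the args map, and setJobProperties come from the snippets above, while the node block and the rest are illustrative assumptions, not the poster's actual code:
// vars/mypipeline.groovy (illustrative sketch)
def call(Map args) {
    // capture the triggers handed over from the Jenkinsfile parameter
    pipelineTriggersArgs = args.pipelineTriggers
    node {
        setJobProperties() // merges the preset properties with the passed-in triggers
        // ... pipeline stages ...
    }
}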

JavaScript eval: how to use it in Groovy or Grails

I want to assign a value to an existing variable, but the name of the variable is dynamic. How do I do that?
def a1 = 0;
def b = 1;
eval("a${b} =1;");
print a1
No need for JavaScript here:
def name = 'someName';
def value = 'someValue';
new GroovyShell(this.binding).evaluate("${name} = '${value}'")
assert someName == value;
Though this doesn't answer your question exactly, an easy way around this is to store your dynamic variables as map keys instead, avoiding the need to eval them:
def b = 1
def map = [:]
map."a${b}" = 1
assert map."a${b}" == 1
println(map) // result is [a1:1]

Can anybody explain the magic of Groovy closures / Jenkins?

I want to understand a portion of my Jenkins pipeline code, which is based on the Groovy DSL and closures.
I have a Jenkins file as follow:
foo {
    var1 = "foo value 1"
    var2 = "foo value 2"
}
I have a Groovy script (foo.groovy in the vars directory) in my Jenkins shared library:
def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()
    println config.var1 // display foo value 1 : for me the magic is here !!
}
I want to understand the Groovy / Jenkins mechanism by which, when the closure is called, the config map is populated with the variables var1 and var2.
I understand (nearly) the closure mechanism and the delegate property, but how does assigning the config map to the closure's delegate field lead to the map being built from the variables declared in my Jenkinsfile?
I hope my question is clear! :)
Regards,
Stef
When a property is referenced within a closure, and that reference cannot be resolved within the closure itself, attempts are made to resolve it in various "places":
- this
- the closure's delegate property, which can be reassigned
- the closure's owner
In your example, var1 and var2 are examples of references that cannot be resolved within the closure.
The following assigns the closure's delegate to config and ensures that this is the first "place" used to resolve otherwise-unresolved references:
def config = [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = config
Therefore, when we set the properties var1 and var2 within the closure, they are resolved against config, i.e. set as key-value pairs of this Map.
If your example was changed to:
foo {
    def var3 = "some value"
    var1 = "foo value 1"
    var2 = "foo value 2"
    var3 = "some value"
}
var3 would not be resolved by config because it can be resolved within the closure.
Update
In response to your comment which (I think) is asking: why does setting the closure's delegate to a map cause a key-value pair to be added to that map?
When var1 = "foo value 1" can't be resolved within the closure, it is resolved instead against the map, because of this
def config = [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = config
so that effectively means we're calling
config.var1 = "foo value 1"
which is Groovy shorthand for
config.put("var1", "foo value 1")
Maybe it's a bit easier to understand if you change your code to call the put method directly, e.g.
def foo = {
    put('var1', "foo value 1")
}
def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()
    println config.var1 // display foo value 1 : for me the magic is here !!
}
call(foo)
If you run this code in the Groovy Console, you'll see that it also prints "foo value 1".

How to fix java.io.NotSerializableException when pushing each stage's data from a Jenkins pipeline to InfluxDB?

I'm trying to push stage data from a Jenkins pipeline to InfluxDB, and I get the issue below.
Issue: in the Jenkins build console output, after each stage, I see: java.io.NotSerializableException: sun.net.www.protocol.http.HttpURLConnection
Jenkins pipeline 2.150.2
InfluxDB 1.6.2
Any suggestions would be helpful. I'm new to this topic.
Note: I have commented out the @NonCPS annotation. If I uncomment it, it sends only the first stage's data and exits, failing to iterate the for loop over all 20 stages' data.
//Maps for Field type columns
myDataField1 = [:]
myDataField2 = [:]
myDataField3 = [:]
//Maps for Custom Field measurements
myCustomDataFields1 = [:]
myCustomDataFields2 = [:]
myCustomDataFields3 = [:]
//Maps for Tag type columns
myDataTag1 = [:]
myDataTag2 = [:]
myDataTag3 = [:]
//Maps for Custom Tag measurements
myCustomDataTags1 = [:]
myCustomDataTags2 = [:]
myCustomDataTags3 = [:]
//@NonCPS
def pushStageData() {
def url_string = "${JENKINS_URL}job/ENO_ENG_TP/job/R421/13/wfapi/describe"
def get = new URL(url_string).openConnection();
get.addRequestProperty ("User-Agent","Mozilla/4.0");
get.addRequestProperty("Authorization", "Basic dZXZvceDIwMTk=");
//fetching the contents of the endpoint URL
def jsonText = get.getInputStream().getText();
//converting the text into a JSON object using JsonSlurperClassic
def jsonObject = new JsonSlurperClassic().parseText(jsonText)
// Extracting the details of all the stages present in that particular build number
for (int i=0; i<jsonObject.stages.size()-1; i++){ //size-1 to ignore the post stage
//populating the field type columns of InfluxDB measurements and pushing them to the map called myDataField1
def size = jsonObject.stages.size()-1
myDataField1['result'] = jsonObject.stages[i].status
myDataField1['duration'] = jsonObject.stages[i].durationMillis
myDataField1['stage_name'] = jsonObject.stages[i].name
//populating the tag type columns of InfluxDB measurements and pushing them to the map called myDataTag1
myDataTag1['result_tag'] = jsonObject.stages[i].status
myDataTag1['stage_name_tag'] = jsonObject.stages[i].name
//assigning field type columns to the measurement called CustomData
myCustomDataFields1['CustomData'] = myDataField1
//assigning tag type columns to the measurement called CustomData
myCustomDataTags1['CustomData'] = myDataTag1
//Push the data into influx instance
try {
    step([$class: 'InfluxDbPublisher', target: 'jenkins_data', customPrefix: null, customDataMapTags: myCustomDataTags1])
} catch (err) {
    println("pushStageData exception: " + err)
}
}
}
Expected: I want to push each stage of the Jenkins pipeline's data to InfluxDB.
Try to use Jenkins' standard steps instead of creating objects manually. You can use httpRequest instead of URL; this should fix the exception you are getting. You can also use readJSON instead of JsonSlurperClassic (the latter change is optional, since the classic slurper is actually serializable).
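For illustration, a minimal sketch of that suggestion (assuming the HTTP Request and Pipeline Utility Steps plugins are installed; the URL and Authorization header are the placeholders from the question):
def pushStageData() {
    // httpRequest replaces the manual HttpURLConnection handling
    def response = httpRequest(
        url: "${JENKINS_URL}job/ENO_ENG_TP/job/R421/13/wfapi/describe",
        customHeaders: [[name: 'Authorization', value: 'Basic dZXZvceDIwMTk=']])
    // readJSON returns a serializable object, so the loop below survives
    // Jenkins' CPS transform without needing @NonCPS
    def jsonObject = readJSON text: response.content
    for (int i = 0; i < jsonObject.stages.size() - 1; i++) { // size-1 ignores the post stage
        myDataTag1['result_tag'] = jsonObject.stages[i].status
        myDataTag1['stage_name_tag'] = jsonObject.stages[i].name
        myCustomDataTags1['CustomData'] = myDataTag1
        step([$class: 'InfluxDbPublisher', target: 'jenkins_data',
              customPrefix: null, customDataMapTags: myCustomDataTags1])
    }
}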
You just need to explicitly set get to null to make sure no non-serializable object remains referenced when the pipeline state is saved.
def jsonText = get.getInputStream().getText();
get = null;

How to filter a Flat File Source using a Script Component

I have the following scenario:
I have thousands of text files in the format below. The column names are written on separate lines, whereas the row values are delimited by a pipe (|).
START-OF-FILE
PROGRAMNAME=getdata
DATEFORMAT=yyyymmdd
#Some Text
#Some Text
#Some Text
#Some Text
#Some Text
START-OF-FIELDS
Field1
Field2
Field3
------
FieldN
END-OF-FIELDS
TIMESTARTED=Tue May 12 16:04:42 JST 2015
START-OF-DATA
Field1Value|Field2value|Field3Value|...|Field N Value
Field1Value|Field2value|Field3Value|...|Field N Value
------|...........|----|-------
END-OF-DATA
DATARECORDS=30747
TIMEFINISHED=Tue May 12 16:11:53 JST 2015
END-OF-FILE
Now I have a corresponding SQL Server table as the destination, where I can easily load the data.
Since I am new to SSIS, I am having trouble writing the Script Component so that I can filter the source text files and load them into the SQL Server table.
Thanks in advance!
There are a few ways to do it. If the format of the files is constant, there are some useful properties in the flat file connection manager editor. For example, you can add a new flat file connection into the connection managers. There are properties such as "Rows to skip": for the above file, you could set this to 18, so reading would start at the line of column values containing the "|".
Another property of the flat file connection manager that may be useful: if you open the flat file connection manager and then click Columns in the side menu, you can set the column delimiter to the pipe "|".
But if the format of the file can change, e.g. a variable number of header rows, you can use a Script Task to remove any non-piped rows, e.g. the header and footer.
For example, you can use a method such as File.ReadAllLines, then edit or remove the lines as needed and save the file.
Info about that method is here:
https://msdn.microsoft.com/en-us/library/s2tte0y1%28v=vs.110%29.aspx
e.g. to remove the last line in a script task:
string[] lines = File.ReadAllLines( "input.txt" );
StringBuilder sb = new StringBuilder();
int count = lines.Length - 1; // all except last line
for (int i = 0; i < count; i++)
{
sb.AppendLine(lines[i]);
}
File.WriteAllText("output.txt", sb.ToString());
Use the VB script below in your SSIS Script Component task, configured as a source:
Imports System
Imports System.Data
Imports System.Math
Imports System.IO
Imports Microsoft.SqlServer.Dts.Runtime
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper
<Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute()> _
<CLSCompliant(False)> _
Public Class ScriptMain
Inherits UserComponent
'Private strSourceDirectory As String
'Private strSourceFileName As String
Private strSourceSystem As String
Private strSourceSubSystem As String
Private dtBusinessDate As Date
Public Overrides Sub PreExecute()
MyBase.PreExecute()
'
' Add your code here for preprocessing or remove if not needed
''
End Sub
Public Overrides Sub PostExecute()
MyBase.PostExecute()
'
' Add your code here for postprocessing or remove if not needed
' You can set read/write variables here, for example:
Dim strSourceDirectory As String = Me.Variables.GLOBALSourceDirectory.ToString()
Dim strSourceFileName As String = Me.Variables.GLOBALSourceFileName.ToString()
'Dim strSourceSystem As String = Me.Variables.GLOBALSourceSystem.ToString()
'Dim strSourceSubSystem As String = Me.Variables.GLOBALSourceSubSystem.ToString()
'Dim dtBusinessDate As Date = Me.Variables.GLOBALBusinessDate.Date
End Sub
Public Overrides Sub CreateNewOutputRows()
'
' Add rows by calling the AddRow method on the member variable named "<Output Name>Buffer".
' For example, call MyOutputBuffer.AddRow() if your output was named "MyOutput".
'
Dim sr As System.IO.StreamReader
Dim strSourceDirectory As String = Me.Variables.GLOBALSourceDirectory.ToString()
Dim strSourceFileName As String = Me.Variables.GLOBALSourceFileName.ToString()
'Dim strSourceSystem As String = Me.Variables.GLOBALSourceSystem.ToString()
'Dim strSourceSubSystem As String = Me.Variables.GLOBALSourceSubSystem.ToString()
'Dim dtBusinessDate As Date = Me.Variables.GLOBALBusinessDate.Date
'sr = New System.IO.StreamReader("C:\QRM_SourceFiles\BBG_BONDS_OUTPUT_YYYYMMDD.txt")
sr = New System.IO.StreamReader(strSourceDirectory & strSourceFileName)
Dim lineIndex As Integer = 0
While (Not sr.EndOfStream)
Dim line As String = sr.ReadLine()
If (lineIndex <> 0) Then 'remove header row
Dim columnArray As String() = line.Split(Convert.ToChar("|"))
If (columnArray.Length > 1) Then
Output0Buffer.AddRow()
Output0Buffer.Col0 = columnArray(0).ToString()
Output0Buffer.Col3 = columnArray(3).ToString()
Output0Buffer.Col4 = columnArray(4).ToString()
Output0Buffer.Col5 = columnArray(5).ToString()
Output0Buffer.Col6 = columnArray(6).ToString()
Output0Buffer.Col7 = columnArray(7).ToString()
Output0Buffer.Col8 = columnArray(8).ToString()
Output0Buffer.Col9 = columnArray(9).ToString()
Output0Buffer.Col10 = columnArray(10).ToString()
Output0Buffer.Col11 = columnArray(11).ToString()
Output0Buffer.Col12 = columnArray(12).ToString()
Output0Buffer.Col13 = columnArray(13).ToString()
Output0Buffer.Col14 = columnArray(14).ToString()
Output0Buffer.Col15 = columnArray(15).ToString()
Output0Buffer.Col16 = columnArray(16).ToString()
Output0Buffer.Col17 = columnArray(17).ToString()
Output0Buffer.Col18 = columnArray(18).ToString()
Output0Buffer.Col19 = columnArray(19).ToString()
Output0Buffer.Col20 = columnArray(20).ToString()
Output0Buffer.Col21 = columnArray(21).ToString()
Output0Buffer.Col22 = columnArray(22).ToString()
Output0Buffer.Col23 = columnArray(23).ToString()
Output0Buffer.Col24 = columnArray(24).ToString()
End If
End If
lineIndex = lineIndex + 1
End While
sr.Close()
End Sub
End Class
