Write a List of Strings into a CSV file - Jenkins

list[0] = "ilocpde406,Pass,Pass,w38,Used,oc-cd-22-02"
list[1] = "ilocpde406,Pass,Pass,w39,Available,NA"
list[2] = "ilocpde406,Pass,Pass,w40,Available,NA"
Output:
Cluster NTNET INTEG NodeName Status Namespace
ilocpde406 Pass Pass w38 Used oc-cd-22-02
ilocpde406 Pass Pass w39 Available NA
etc..
I thought about splitting on the comma and appending each line with all the elements, but I was wondering if there is a better approach for this in Jenkins.

Using split / replace is not a bad idea.
Below is one of a thousand ways to do it:
def list = []
list << 'Cluster,NTNET,INTEG,NodeName,Status,Namespace'
list << "ilocpde406,Pass,Pass,w38,Used,oc-cd-22-02"
list << "ilocpde406,Pass,Pass,w39,Available,NA"
list << "ilocpde406,Pass,Pass,w40,Available,NA"
StringWriter out = new StringWriter() // or File
list.eachWithIndex { item, ix ->
    if (ix) out << '\n'                                   // newline before every row except the first
    item.split(',').each { out << it.padRight(12, ' ') }  // pad each cell to a fixed width of 12
}
out
prints
Cluster     NTNET       INTEG       NodeName    Status      Namespace
ilocpde406  Pass        Pass        w38         Used        oc-cd-22-02
ilocpde406  Pass        Pass        w39         Available   NA
ilocpde406  Pass        Pass        w40         Available   NA
If you want to add some fanciness, you can scan the list for the maximum width of each column and padRight each item to that width.
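For example, a minimal sketch of that dynamic-width idea (untested, reusing the list defined above) could look like this:
def rows = list.collect { it.split(',') }
// widths[col] = length of the longest value in that column
def widths = (0..<rows[0].size()).collect { col ->
    rows.collect { it[col].size() }.max()
}
def out = new StringWriter()
rows.eachWithIndex { row, ix ->
    if (ix) out << '\n'
    row.eachWithIndex { cell, col -> out << cell.padRight(widths[col] + 2) }  // 2 extra spaces between columns
}
println out
And if the goal is an actual CSV artifact from a Jenkins Pipeline rather than an aligned report, the list itself is already comma-separated, so something like writeFile file: 'report.csv', text: list.join('\n') with the standard writeFile step should be enough (the file name here is just an example).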

Related

Groovy create map from lines of text

I have raw data that is contained in 3 separate lines. I want to build a single map record using parts of each line, then read the next 3 lines and create the next map record, and so on. All the Groovy examples I've found on maps show them being created from data on a single line, or possibly I am misunderstanding the examples. Here is what the raw data looks like:
snmp v2: data result = "Local1"
snmp v2: data result ip = "10.10.10.121"
snmp v2: data result gal = "899"
new
snmp v2: data result = "Local2"
snmp v2: data result ip = "192.168.10.2"
snmp v2: data result gal = "7777"
new
I want to put this data into a map. In this example, Local1 and Local2 would be keys and they would each have 2 associated values. I will show you my latest attempt, but it is little more than a guess that failed.
def data = RAW
def map = [:]
data.splitEachLine("=") {
    it.each { x ->
        map.put(it[0], it[1])
        map.each { k, v -> println "${k}:${v}" }
    }
}
The desired output is:
[ Local1 : [ ip: "10.10.10.121", gal: "899" ],
Local2: [ ip: "192.168.10.2", gal: "7777" ] ]
You can build a new data structure from an existing one using aggregate operations defined on collections; collect produces a list from an existing list, collectEntries creates a map from a list.
The question specifies there are always three lines for an entry, followed by a line with "new" on it. If I can assume they're always in the same order I can grab the last word off each line, use collate to group every four lines into a sublist, then convert each sublist to a map entry:
lines = new File('c:/temp/testdata.txt').readLines()
mymap = lines.collect { it.tokenize()[-1] }
.collate(4)
.collectEntries { e -> [(e[0].replace('"', '')): [ip: e[1], gal: e[2]]] }
which evaluates to
[Local1:[ip:"10.10.10.121", gal:"899"], Local2:[ip:"192.168.10.2", gal:"7777"]]
or remove all the quotes in the first step:
mymap = lines.collect { (it.tokenize()[-1]).replace('"', '') }
.collate(4)
.collectEntries { e-> [(e[0]) : [ip: e[1], gal: e[2]]] }
in order to get
[Local1:[ip:10.10.10.121, gal:899], Local2:[ip:192.168.10.2, gal:7777]]
If you want to get a nested map as suggested by dmahapatro try this:
def map = [:]
data.eachLine { line ->
    if (line.startsWith("new")) return
    tokens = line.replace("snmp v2: data", "").split("=")
    tokens = tokens.collect { it.trim().replace("result ", "").replaceAll(/"/, "") }
    if (tokens[0] == "result") {
        nested = [:]
        map[tokens[1]] = nested
    } else {
        nested[tokens[0]] = tokens[1]
    }
}
println("map: $map")
println("map: $map")
Here we:
iterate over the lines
skip lines that start with "new"
remove "snmp v2: data" from the text of the line
split each line into tokens, trim() each token, and remove "result " and the quotes
The tokens come in pairs and now look like:
result, Local1
ip, 10.10.10.121
gal, 899
Next, when the first token is "result", we build a nested map and place it in the main map at the key given by the value of tokens[1];
otherwise we populate the nested map with key=tokens[0] and value=tokens[1].
the result is:
map: [Local1:[ip:10.10.10.121, gal:899], Local2:[ip:192.168.10.2, gal:7777]]
edit: fixed to remove quotes

PIG parsing string inputs

I have two files,
one is titles.csv and has a movie ID and title with this format:
999: Title
734: Another_title
the other is a list of user IDs who link to the movie
categoryID: user1_id, ....
222: 120
227: 414 551
249: 555
The user lists are of different sizes (the minimum is one user per genre category).
The goal is to first parse the strings so that each line is split into two parts (for both files): everything before the ':' and everything after.
I have tried doing this:
movies = LOAD .... USING PigStorage('\n') AS (line: chararray)
users = LOAD .... USING PigStorage('\n') AS (line: chararray)
-- parse 'users'/outlinks, make a list and count fields
tokenized = FOREACH users GENERATE FLATTEN(TOKENIZE(line, ':')) AS parameter;
filtered = FILTER tokenized BY INDEXOF(parameter, ' ') != -1;
result = FOREACH filtered GENERATE SUBSTRING(parameter, 2, (int)SIZE(parameter)) AS number;
But this is where I got stuck/confused. Thoughts?
I'm also supposed to output the top 10 entries who have the most user IDs in the second part of the string.
Try it like this:
movies = LOAD 'file1' AS titleLine;
A = FOREACH movies GENERATE FLATTEN(REGEX_EXTRACT_ALL(titleLine,'^(.*):\\s+(.*)$')) AS (movieId:chararray,title:chararray);
users = LOAD 'file2' AS userLine;
B = FOREACH users GENERATE FLATTEN(REGEX_EXTRACT_ALL(userLine,'^(.*):\\s+(.*)$')) AS (categoryId:chararray,userId:chararray);
Output1:
(999,Title)
(734,Another_title)
Output2:
(222,120)
(227,414 551)
(249,555 )

Querying values from a Map Property in a Grails Domain Class

I found the solution Burt Beckwith offered in this question: add user defined properties to a domain class, and I think this could be a viable option for us in some situations. In testing this, I have a domain with a Map property as described in the referenced question. Here is a simplified version (I have more non-map properties, but they are not relevant to this question):
class Space {
    String spaceDescription
    String spaceType
    Map dynForm

    String toString() {
        return (!spaceType) ? id : spaceType + " (" + spaceDescription + ")"
    }
}
I have some instances of Space saved with some arbitrary data in the dynForm Map, like 'Test1':'abc' and 'Test2':'xyz'.
I am trying to query this data and have successfully used HQL to filter by doing the following:
String className = "Space"
Class clazz = grailsApplication.domainClasses.find { it.clazz.simpleName == className }.clazz
def res = clazz.executeQuery("select distinct space.id, space.spaceDescription from Space as space where space.dynForm['Test1'] = 'abc' " )
log.debug "" + res
I want to know if there is a way to select an individual item from the dynForm Map in a select statement. Something like this:
def res = clazz.executeQuery("select distinct space.id, elements(space.dynForm['Test1']) from Space as space where space.dynForm['Test1'] = 'abc' " )
I can select the entire map like this:
def res = clazz.executeQuery("select distinct elements(space.dynForm) from Space as space where space.dynForm['Test1'] = 'abc' " )
But I just want to get a specific entry based on the string index.
How about using Criteria? I don't have a SQL server available, so I haven't tested this yet:
def results = Space.createCriteria().list {
    dynForm {
        like('Test1', 'abc')
    }
}
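If you do want to pull a single entry out of the map in HQL rather than the whole thing, a hedged sketch (untested, and dependent on the Hibernate version backing your Grails install) would be to join the map and filter on its key with index():
// sketch only: the joined alias d refers to the map values, index(d) to the map keys
def res = clazz.executeQuery(
    "select distinct space.id, d " +
    "from Space as space join space.dynForm as d " +
    "where index(d) = 'Test1' and d = 'abc'")
Each result row should then be a [space.id, 'abc'] pair holding just the 'Test1' value instead of the full dynForm map.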

Inserting Key Pairs into Lua table

Just picking up Lua and trying to figure out how to construct tables.
I have done a search and found information on table.insert, but all the examples I have found seem to assume I only want numeric indices, while what I want to do is add key/value pairs.
So, I wonder if this is valid?
my_table = {}
my_table.insert(key = "Table Key", val = "Table Value")
This would be done in a loop and I need to be able to access the contents later in:
for k, v in pairs(my_table) do
...
end
Thanks
There are essentially two ways to create tables and fill them with data.
The first is to create and fill the table at once using a table constructor. This is done as follows:
tab = {
    keyone = "first value",      -- this will be available as tab.keyone or tab["keyone"]
    ["keytwo"] = "second value", -- this uses the full syntax
}
When you do not know what values you want there beforehand, you can first create the table using {} and then fill it using the [] operator:
tab = {}
tab["somekey"] = "some value" -- these two lines ...
tab.somekey = "some value" -- ... are equivalent
Note that you can use the second (dot) syntax sugar only if the key is a string that follows the "identifier" rules - i.e. it starts with a letter or underscore and contains only letters, numbers and underscores.
P.S.: Of course you can combine the two ways: create a table with the table constructor and then fill the rest using the [] operator:
tab = { type = 'list' }
tab.key1 = 'value one'
tab['key2'] = 'value two'
Appears this should be the answer:
my_table = {}
Key = "Table Key"
-- my_table.Key = "Table Value"
my_table[Key] = "Table Value"
Did the job for me.

Ruby array, convert to two arrays in my case

I have an array of strings in the "firstname.lastname?some.xx" format:
customers = ["aaa.bbb?q21.dd", "ccc.ddd?ew3.yt", "www.uuu?nbg.xcv", ...]
Now, I would like to use this array to produce two arrays, where:
each element of the 1st array contains only the string before the "?", with the "." replaced by a space;
each element of the 2nd array is the string after the "?", including the "?".
That is, I want to produce the following two arrays from the customers array:
arr1 = ["aaa bbb", "ccc ddd", "www uuu", ...]
arr2 = ["?q21.dd", "?ew3.yt", "?nbg.xcv", ...]
What is the most efficient way to do this if I pass the customers array as an argument to a method?
def produce_two_arr customers
  # What is the most efficient way to produce the two arrays?
  # What I did:
  arr1 = Array.new
  arr2 = Array.new
  customers.each do |el|
    str1, str2 = el.split('?')
    arr1 << str1.gsub(/\./, " ")
    arr2 << "?" + str2
  end
  p arr1
  p arr2
end
Functional approach: when you are generating results inside a loop but you want them split into different arrays, Array#transpose comes in handy:
ary1, ary2 = customers.map do |customer|
  a, b = customer.split("?", 2)
  [a.gsub(".", " "), "?" + b]
end.transpose
Anytime you're building one array from another, reduce (a.k.a. inject) can be a great help. But sometimes a good ol' map is all you need (in this case, either one works because you're building an array of the same size):
a, b = customers.map do |customer|
  a, b = customer.split('?')
  [a.tr('.', ' '), "?#{b}"]
end.transpose
This is very efficient since you're only iterating through customers a single time and you are making efficient use of memory by not creating lots of extraneous strings and arrays through the + method.
Array#collect is good for this type of thing:
arr1 = customers.collect{ |c| c.split("?").first.sub( ".", " " ) }
arr2 = customers.collect{ |c| "?" + c.split("?").last }
But you have to do the initial c.split("?") twice. So it's efficient from an amount-of-code point of view, but more CPU intensive.
arr1 = customers.collect{ |name| name.gsub(/\?.*\z/, '').gsub(/\./, ' ') }
arr2 = customers.collect{ |name| name.match(/\?.*\z/)[0] }
array1, array2 = customers.map { |el| el.sub('.', ' ').split(/(?=\?)/) }.transpose
Based on #Tokland's code, but it avoids the extra variables (by using 'sub' instead of 'gsub') and the re-attaching of '?' (by splitting on a zero-width lookahead so the '?' stays with the second part).
