I am using the Grails Excel import plugin to import an Excel file.
static Map propertyConfigurationMap = [
    name: [expectedType: ExcelImportService.PROPERTY_TYPE_STRING, defaultValue: null],
    age:  [expectedType: ExcelImportService.PROPERTY_TYPE_INT, defaultValue: 0]]

static Map CONFIG_USER_COLUMN_MAP = [
    sheet: 'Sheet1',
    startRow: 1,
    columnMap: [
        // Col, Map-Key
        'A': 'name',
        'B': 'age',
    ]
]
I am able to retrieve the list of rows by using this code snippet:
def usersList = excelImportService.columns(workbook, CONFIG_USER_COLUMN_MAP)
which results in
[[name: Mark, age: 25], [name: Jhon, age: 46], [name: Anil, age: 62], [name: Steve, age: 32]]
I'm also able to read an individual record, say [name: Mark, age: 25], by using usersList.get(0).
How do I read each column value?
I know I can read something like this
String[] row = usersList.get(0)
for (String s : row)
    println s
I wonder whether there is anything the plugin supports so that I can read a column value directly, rather than manipulating the row to get the desired result.
Your usersList is basically a List<Map<String, Object>> (list of maps). You can read a column using the name you gave it in the config. In your example, you named column A name and column B age. So using your iteration example as a basis, you can read each column like this:
Map row = usersList.get(0)
for (Map.Entry entry : row) {
    println entry.value
}
Groovy makes this easier to do with Object.each(Closure):
row.each { key, value ->
    println value
}
If you want to read a specific column value, here are a few ways to do it:
println row.name // One
println row['name'] // Two
println row.getAt('name') // Three
Hint: These all end up calling row.getAt('name')
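If you need every value from one column across all rows (rather than one row at a time), the same keys combine naturally with Groovy's spread and collect operators. A minimal sketch, assuming usersList is the list of maps shown above:
// All values from the 'name' column (column A)
def names = usersList*.name               // [Mark, Jhon, Anil, Steve]

// All values from the 'age' column (column B)
def ages = usersList.collect { it.age }   // [25, 46, 62, 32]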
I'm trying to use alasql to export a set of HTML tables into an excel document.
The documentation has code that looks similar to this:
var data1 = alasql('SELECT * FROM HTML("#dev-table",{headers:false})');
var data2 = alasql('SELECT * FROM HTML("#dev2-table",{headers:false})');
var data3 = alasql('SELECT * FROM HTML("#dev3-table",{headers:false})');
//var data4 = alasql('SELECT * FROM HTML("#dev2-table",{headers:true})');
var data = data1.concat(data2, data3);
alasql('SELECT * INTO XLS("data.xls",{headers:false}) FROM ?', [data]);
The problem is that this code concatenates data1, data2, and data3 so that all of the data ends up in the same column. This is not the result I want: I want data1 to go into column "A" and data2 to go into column "B".
I've looked through the documentation and am unsure how to get the desired result. I'm aware of the existence of "options" that include fields for specifying columns based on the data itself, but none of those examples are what I want. If this is not possible using alasql, I'm willing to use a different library or framework for this.
Based on this JSFiddle, http://jsfiddle.net/95j0txwx/7/
$scope.items = [{
    name: "John Smith",
    email: "j.smith@example.com",
    dob: "1985-10-10"
}, {
    name: "Jane Smith",
    email: "jane.smith@example.com",
    dob: "1988-12-22"
},
...
I would guess that your data is not formatted correctly to be inserted.
EDIT: The JSFiddle is from the documentation: https://github.com/agershun/alasql/wiki/XLSX
Hello, I need to perform domain class filtering in Groovy for the provided filter fields.
Here is code sample:
User.findAll(name: filter.name, age: filter.age, department: filter.department)
Is there some syntax sugar to help me skip a field that is not provided? For example, if filter.name is null or empty, do not filter by this field. Thanks.
Abs is right. Here is an example with if statements for filter fields that are not provided:
User.createCriteria().list {
    if (filter.name) eq("name", filter.name)
    if (filter.age) eq("age", filter.age)
    if (filter.department) eq("department", filter.department)
}
You can apply the sugar right on the arguments map (assuming filter is a Map):
def args = filter.collectEntries { k, v ->
    ['age', 'name', 'department'].contains(k) && v ? [(k): v] : Collections.EMPTY_MAP
}
def users = User.findAll args
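For illustration, here is how that behaves with a hypothetical filter map in which only name is provided (the other fields are null or empty):
def filter = [name: 'Smith', age: null, department: '']   // hypothetical input

def args = filter.collectEntries { k, v ->
    ['age', 'name', 'department'].contains(k) && v ? [(k): v] : Collections.EMPTY_MAP
}

assert args == [name: 'Smith']   // null/empty fields are dropped, so the query does not filter on them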
I'm trying to use the Aerospike bulk loader to seed a cluster with data from a tab-separated file.
The source data looks like this:
set      key   segments
segment  123   10,20,30,40,50
segment  234   40,50,60,70
The third column, 'segments', contains a comma separated list of integers.
I created a JSON template:
{
  "version": "1.0",
  "input_type": "csv",
  "csv_style": { "delimiter": " ", "n_columns_datafile": 3, "ignore_first_line": true },
  "key": { "column_name": "key", "type": "integer" },
  "set": { "column_name": "set", "type": "string" },
  "binlist": [
    {
      "name": "segments",
      "value": { "column_name": "segments", "type": "list" }
    }
  ]
}
... and ran the loader:
java -cp aerospike-load-1.1-jar-with-dependencies.jar com.aerospike.load.AerospikeLoad -c template.json data.tsv
When I query the records in aql, they seem to be a list of strings:
aql> select * from test
+--------------------------------+
| segments |
+--------------------------------+
| ["10", "20", "30", "40", "50"] |
| ["40", "50", "60", "70"] |
+--------------------------------+
The data I'm trying to store is a list of integers. Is there an easy way to convert the objects stored in this bin to a list of integers (possibly a Lua UDF) or perhaps there's a tweak that can be made to the bulk loader template?
Update:
I attempted to solve this by creating a Lua UDF to convert the list from strings to integers:
function convert_segment_list_to_integers(rec)
    for i = 1, table.maxn(rec['segments']) do
        rec['segments'][i] = math.floor(tonumber(rec['segments'][i]))
    end
    aerospike:update(rec)
end
... registered it:
aql> register module 'convert_segment_list_to_integers.lua'
... and then tried executing against my set:
aql> execute convert_segment_list_to_integers.convert_segment_list_to_integers() on test.segment
I enabled some more verbose logging and noticed that the UDF is throwing an error. Apparently, it expects a table but was passed userdata:
Dec 04 2015 23:23:34 GMT: DEBUG (udf): (udf_rw.c:send_result:527) FAILURE when calling convert_segment_list_to_integers convert_segment_list_to_integers ...rospike/usr/udf/lua/convert_segment_list_to_integers.lua:2: bad argument #1 to 'maxn' (table expected, got userdata)
Dec 04 2015 23:23:34 GMT: DEBUG (udf): (udf_rw.c:send_udf_failure:407) Non-special LDT or General UDF Error(...rospike/usr/udf/lua/convert_segment_list_to_integers.lua:2: bad argument #1 to 'maxn' (table expected, got userdata))
It seems that maxn can't be applied to a userdata object.
Can you see what needs to be done to fix this?
To convert your lists of string values into lists of integer values, you can run the following record UDF:
function convert_segment_list_to_integers(rec)
    local list_with_ints = list()
    for value in list.iterator(rec['segments']) do
        local int_value = math.floor(tonumber(value))
        list.append(list_with_ints, int_value)
    end
    rec['segments'] = list_with_ints
    aerospike:update(rec)
end
When you edit your existing Lua module, make sure to re-run register module 'convert_segment_list_to_integers.lua'.
The cause of this issue lies within the aerospike-loader tool: it always assumes/enforces strings, as you can see in the following Java code:
case LIST:
    /*
     * Assumptions
     * 1. Items are separated by a colon ','
     * 2. Item value will be a string
     * 3. List will be in double quotes
     *
     * No support for nested maps or nested lists
     */
    List<String> list = new ArrayList<String>();
    String[] listValues = binRawText.split(Constants.LIST_DELEMITER, -1);
    if (listValues.length > 0) {
        for (String value : listValues) {
            list.add(value.trim());
        }
        bin = Bin.asList(binColumn.getBinNameHeader(), list);
    } else {
        bin = null;
        log.error("Error: Cannot parse to a list: " + binRawText);
    }
    break;
Source on Github: http://git.io/vRAQW
If you prefer, you can modify this code and re-compile it to always assume integer list values. Change lines 266 and 270 to something like this (untested):
List<Integer> list = new ArrayList<Integer>();
list.add(Integer.parseInt(value.trim()));
I'm working with a Grails query service and I'm using these code blocks to retrieve database rows via a domain class.
adjustmentCodeList = AdjustmentCode.findAll {
    or {
        ilike('description', "%$filterText%")
        like('id', "%$filterText%")
    }
}
adjustmentCodeList = AdjustmentCode.list()
adjustmentCodeList = AdjustmentCode.list(max: count, offset: from)
It works fine, but there is a small problem. It returns the following list (some sensitive data are omitted):
[
{
"class": "rvms.maintenance.AdjustmentCode",
"id": ...,
"description": ...,
"lastUpdateBy": ...,
"lastUpdateDate": ...,
"status": ...,
"statusDate": ...,
"type": ...
},
{
"class": "rvms.maintenance.AdjustmentCode",
"id": ...,
"description": ...,
"lastUpdateBy": ...,
"lastUpdateDate": ...,
"status": ...,
"statusDate": ...,
"type": ...
},
...
{
"class": "rvms.maintenance.AdjustmentCode",
"id": ...,
"description": ...,
"lastUpdateBy": ...,
"lastUpdateDate": ...,
"status": ...,
"statusDate": ...,
"type": ...
}
]
It includes the domain class name. How can I remove the class key using some config? My current solution is to manually remove the class key by iterating over the list in a loop and removing that key one entry at a time. But maybe... there is another Grails-ly way.
If you want to see the domain, it looks like this:
package rvms.maintenance
import grails.util.Holders
import groovy.sql.Sql
import oracle.jdbc.OracleTypes
import java.sql.Connection
class AdjustmentCode implements Serializable {

    String id
    String description
    String type
    String status
    Date statusDate
    String lastUpdateBy
    Date lastUpdateDate

    static mapping = {
        table '...'
        version false
        id             column: '...'
        description    column: '...'
        type           column: '...'
        status         column: '...'
        statusDate     column: '...'
        lastUpdateBy   column: '...'
        lastUpdateDate column: '...'
    }

    Map getAdjustmentCodeValues() {
        Map values = [:]   // a map literal, not a list literal
        values << [id: this.getId()]
        values << [description: this.getDescription()]
        values << [type: this.getType()]
        values << [status: this.getStatus()]
        values << [statusDate: this.getStatusDate()]
        values << [lastUpdateBy: this.getLastUpdateBy()]
        values << [lastUpdateDate: this.getLastUpdateDate()]
        return values
    }
}
The Grails way to accomplish this is to customize the marshaller. I've explained how to do this with named marshallers in this answer and the same concept applies to your case as well (minus the named portion).
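For illustration, a minimal sketch of such a marshaller, assuming it is registered in BootStrap.groovy and that these are the properties you want exposed (adjust the list to your needs):
import grails.converters.JSON
import rvms.maintenance.AdjustmentCode

class BootStrap {

    def init = { servletContext ->
        // Render AdjustmentCode as a plain map, without the 'class' entry
        JSON.registerObjectMarshaller(AdjustmentCode) { AdjustmentCode code ->
            [
                id            : code.id,
                description   : code.description,
                type          : code.type,
                status        : code.status,
                statusDate    : code.statusDate,
                lastUpdateBy  : code.lastUpdateBy,
                lastUpdateDate: code.lastUpdateDate
            ]
        }
    }

    def destroy = { }
}
With this in place, rendering the list with as JSON (or respond) uses the registered marshaller, so no manual post-processing loop is needed.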
I have raw data that is contained in 3 separate lines. I want to build a single map record using parts of each line, then read the next 3 lines and create the next map record, and so on. All the Groovy examples I've found on maps show them being created from data on a single line, or possibly I am misunderstanding the examples. Here is what the raw data looks like.
snmp v2: data result = "Local1"
snmp v2: data result ip = "10.10.10.121"
snmp v2: data result gal = "899"
new
snmp v2: data result = "Local2"
snmp v2: data result ip = "192.168.10.2"
snmp v2: data result gal = "7777"
new
I want to put this data into a map. In this example Local1 and Local2 would be keys and they would each have 2 associated values. I will show my latest attempt, but it is little more than a guess that failed.
def data = RAW
def map = [:]
data.splitEachLine("="){
it.each{ x ->
map.put(it[0], it[1])
map.each{ k, v -> println "${k}:${v}" } }}
The desired output is:
[ Local1 : [ ip: "10.10.10.121", gal: "899" ],
Local2: [ ip: "192.168.10.2", gal: "7777" ] ]
You can build a new data structure from an existing one using aggregate operations defined on collections; collect produces a list from an existing list, collectEntries creates a map from a list.
The question specifies there are always three lines for an entry, followed by a line with "new" on it. If I can assume they're always in the same order I can grab the last word off each line, use collate to group every four lines into a sublist, then convert each sublist to a map entry:
lines = new File('c:/temp/testdata.txt').readLines()
mymap = lines.collect { it.tokenize()[-1] }
             .collate(4)
             .collectEntries { e -> [(e[0].replace('"', '')): [ip: e[1], gal: e[2]]] }
which evaluates to
[Local1:[ip:"10.10.10.121", gal:"899"], Local2:[ip:"192.168.10.2", gal:"7777"]]
or remove all the quotes in the first step:
mymap = lines.collect { it.tokenize()[-1].replace('"', '') }
             .collate(4)
             .collectEntries { e -> [(e[0]): [ip: e[1], gal: e[2]]] }
in order to get
[Local1:[ip:10.10.10.121, gal:899], Local2:[ip:192.168.10.2, gal:7777]]
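To try this without a file, the same pipeline can be run against the question's raw text held in a Groovy multiline string; readLines() then produces the same list of lines as reading the file would:
// The raw text from the question, inlined for a quick test instead of reading a file
def text = '''snmp v2: data result = "Local1"
snmp v2: data result ip = "10.10.10.121"
snmp v2: data result gal = "899"
new
snmp v2: data result = "Local2"
snmp v2: data result ip = "192.168.10.2"
snmp v2: data result gal = "7777"
new'''

lines = text.readLines()   // same shape as new File(...).readLines()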
If you want to get a nested map as suggested by dmahapatro, try this:
def map = [:]
data.eachLine { line ->
    if (line.startsWith("new")) return
    tokens = line.replace("snmp v2: data", "").split("=")
    tokens = tokens.collect { it.trim().replace("result ", "").replaceAll(/"/, "") }
    if (tokens[0] == "result") {
        nested = [:]                   // start a new nested map for this entry
        map[tokens[1]] = nested        // key it by the value, e.g. Local1
    } else {
        nested[tokens[0]] = tokens[1]  // ip / gal go into the current nested map
    }
}
println("map: $map")
println("map: $map")
here we:
iterate over lines
skip lines with "new" at the beginning
remove "snmp v2: data" from the text of the line
split each line in tokens, trim() each token, and remove "result " and quotes
tokens are in pairs and now look like:
result, Local1
ip, 10.10.10.121
gal, 899
next, when the first token is "result", we build a nested map and place it in the main map at the key given by tokens[1]
otherwise we populate the nested map with key tokens[0] and value tokens[1]
the result is:
map: [Local1:[ip:10.10.10.121, gal:899], Local2:[ip:192.168.10.2, gal:7777]]
Edit: fixed to remove quotes.
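Either way, once the nested map is built, individual values can be read back with plain map access, for example against the quote-stripped result above:
assert map['Local1'].ip == '10.10.10.121'
assert map.Local2.gal == '7777'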