In my GSP I have a form with a lot of text fields populated by a map that comes from the controller. My actual form is a lot more complicated, so let me put this into an example:
I use a list of users to populate a set of text fields where I can enter each person's age. I group the values into a map called data, and I want to process and save all of that information after submitting:
<g:form useToken="true" name='example_form' action='submit'>
<g:each in='${users}' var='user' status='i'>
<g:textField name="data.${user.id}.name" value="${user.name}" />
<g:field type="number" name="data.${user.id}.age" value="" />
</g:each>
<button>Submit</button>
</g:form>
But when I print out params.data in my submit controller action, I noticed that I am not only getting the data map I created; I am also getting a bunch of extra entries mixed in with it:
for (i in params.data) {
    println "key: ${i.key} value: ${i.value}"
}
output:
key: 0.name value: john
key: 0 value: [age: 35, name: john]
key: 1.name value: liz
key: 1 value: [age: 24, name: liz]
key: 2.name value: robert
key: 3.name value: david
key: 0.age value: 35
key: 1.age value: 24
key: 2 value: [age: 44, name: robert]
key: 3 value: [age: 23, name: david]
key: 3.age value: 23
key: 2.age value: 44
Am I doing something wrong?
expected output:
key: 0 value: [age: 35, name: john]
key: 1 value: [age: 24, name: liz]
key: 2 value: [age: 44, name: robert]
key: 3 value: [age: 23, name: david]
It should work exactly this way. When you submit data from your form, the body of your POST request looks like this:
data.0.name=john&data.0.age=35&data.1.name=liz&data.1.age=24&data.2.name=robert&data.2.age=44&data.3.name=david&data.3.age=23
So it's just a plain string representing a flat key-value map, and Grails could have parsed it just like that:
['data.0.name': 'john', 'data.0.age': '35', 'data.1.name': 'liz', 'data.1.age': '24', 'data.2.name': 'robert', 'data.2.age': '44', 'data.3.name': 'david', 'data.3.age': '23']
But the Grails developers wanted to simplify programmers' lives, so they decided that if a key contains a dot, the request may represent some kind of structured data. They therefore put the parsed map into params in addition to the raw request data. Thus a dot can be interpreted in two ways: as a plain character, or as a separator between a map name and a map key. It's up to the developer to decide which interpretation to use.
If you prefer to have cleaner params over easy access like def name = params.data.0.name, then you can use "_" instead of ".". In the controller you can then use split("_") in a loop.
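A minimal sketch of that underscore approach, with a hand-built map standing in for the real Grails params object:

```groovy
// Simulated params, as Grails would deliver them when the fields
// are named data_0_name, data_0_age, and so on:
def params = [
    data_0_name: 'john', data_0_age: '35',
    data_1_name: 'liz',  data_1_age: '24'
]

// Rebuild the nested structure by splitting each key on "_"
def data = [:].withDefault { [:] }
params.each { key, value ->
    def (prefix, index, field) = key.split('_')
    if (prefix == 'data') {
        data[index][field] = value
    }
}
```

Since "_" is never produced by Grails' own dot handling, the resulting map contains only the entries you built yourself.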
In a previous answer, Alexander Tokarev explained what is happening. The solution is to add an if statement, as shown below:
for (i in params.data) {
    if (i.key.isNumber()) {
        println "key: ${i.key} value: ${i.value}"
    }
}
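If you prefer a more idiomatic Groovy filter, findAll does the same in one expression. A sketch, with a hand-built map standing in for params.data:

```groovy
// Simulated params.data, mixing the nested entries with the raw
// dotted keys that Grails also keeps around:
def data = [
    '0'     : [name: 'john', age: '35'],
    '0.name': 'john', '0.age': '35',
    '1'     : [name: 'liz', age: '24'],
    '1.name': 'liz', '1.age': '24'
]

// Keep only the entries whose key is a plain numeric index
def rows = data.findAll { it.key.isNumber() }
rows.each { key, value ->
    println "key: ${key} value: ${value}"
}
```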
Neo.ClientError.Statement.SyntaxError: Invalid input ')': expected
whitespace or a relationship pattern (line 66, column 100 (offset:
1898)) "CREATE (z:Subscription{ subscriptionId: subs.subscriptionId,
startDate: subs.startDate, endDate:''})<-[r:ASSOCIATION]-(y:Person
{nationalIdentityNumber: subs.nationalIdentityNumber, name: subs.name,
surname: subs.surname, fathername: subs.fathername , nationality:
subs.nationality, passportNo: subs.passportNo, birthdate:
subs.birthdate})"
I want to create/merge nodes and relationships of types Person, Subscription, and Line.
If a subscription with the same ID already exists, I should check its start date: if the new data's start date is greater than the old one, I should create a new Subscription and also set the old subscription's end date.
UNWIND [{
msisdn:'99658321564',
name:'Lady',
surname:'Camble',
fatherName:'Aeron',
nationality:'EN',
passportNo:'PN-1234224',
birthDate:'12-05-1979',
nationalIdentityNumber:'112124224',
subscriptionId:'2009201999658321564',
startDate:'20-09-2019 12:00:12'
},{msisdn:'99658363275',
name:'John',
surname:'Mckeen',
fatherName:'Frank',
nationality:'EN',
passportNo:'PN-126587',
birthDate:'15-08-1998',
nationalIdentityNumber:'2548746542',
subscriptionId:'1506201999658363275',
startDate:'15-06-2019 13:00:12'},
{
msisdn:'99658321564',
name:'Lady',
surname:'Camble',
fatherName:'Aeron',
nationality:'EN',
passportNo:'PN-1234224',
birthDate:'12-05-1979',
nationalIdentityNumber:'112124224',
subscriptionId:'2009201999658321564',
startDate:'31-11-2019 12:00:12'
}
] as subs
MERGE (y:Person {nationalIdentityNumber: subs.nationalIdentityNumber, name: subs.name, surname: subs.surname, fathername: subs.fathername , nationality: subs.nationality, passportNo: subs.passportNo, birthdate: subs.birthdate })
MERGE (t:Subscription{subscriptionId:subs.subscriptionId })
MERGE (y)-[rel:ASSOCIATION]-(t)
ON MATCH SET
t.endDate = (case when t.startDate <subs.startDate then subs.startDate else ''
end)
MATCH (t:Subscription) where t.subscriprionId=subs.subscriprionId and
(CASE
WHEN t.endDate=subs.startDate then
CREATE (z:Subscription{ subscriptionId: subs.subscriptionId, startDate: subs.startDate, endDate:''})-[r:ASSOCIATION]-(y:Person {nationalIdentityNumber: subs.nationalIdentityNumber, name: subs.name, surname: subs.surname, fathername: subs.fathername , nationality: subs.nationality, passportNo: subs.passportNo, birthdate: subs.birthdate})
END)
RETURN y
UNWIND[...] as subs
MERGE (y:Person {nationalIdentityNumber: subs.nationalIdentityNumber, name: subs.name, surname: subs.surname, fatherName: subs.fatherName , nationality: subs.nationality, passportNo: subs.passportNo, birthDate: subs.birthDate })
MERGE (t:Subscription{subscriptionId:subs.subscriptionId,startDate:subs.startDate,endDate:''})
MERGE (y)-[rel:ASSOCIATION]-(t)
MERGE(x:Subscription{subscriptionId:subs.subscriptionId, endDate:''})
SET
x.endDate = (case when x.startDate < subs.startDate then subs.startDate else null end);
The CQL should look like this. Thanks to my co-worker.
You're trying to have conditional Cypher clauses through a CASE statement, and that won't work. You can't do a nested CREATE (or any other Cypher clause) in a CASE.
You can however use a trick with FOREACH and CASE to mimic an if conditional. That should work in your case, as you want to only execute a CREATE under certain conditions (though since you already matched to the y node for the person, just reuse (y) in that CREATE instead of trying to define the entire node again from labels and properties, that won't work properly).
If you need more advanced conditional logic, that's available via the conditional procedures in APOC Procedures.
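A sketch of that FOREACH trick applied to the query in the question (assuming subs, t, and y are already bound by the earlier UNWIND and MERGE clauses):

```cypher
// FOREACH iterates over a one-element list when the condition holds,
// and over an empty list (doing nothing) when it does not.
FOREACH (ignored IN CASE WHEN t.endDate = subs.startDate THEN [1] ELSE [] END |
  CREATE (z:Subscription {subscriptionId: subs.subscriptionId,
                          startDate: subs.startDate,
                          endDate: ''})
  CREATE (y)-[:ASSOCIATION]->(z)
)
```

Note that (y) is simply reused inside the CREATE rather than redefined from its labels and properties.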
In a Jenkins pipeline, it is possible to request input data using
def returnValue = input message: 'Need some input',
parameters: [string(defaultValue: 'adefval', description: 'a', name: 'aaa'),
string(defaultValue: 'bdefval', description: 'b', name: 'bbb')]
To build such a list dynamically, I tried something like
def list = ["foo","bar"]
def inputRequest = list.collect { string(defaultValue: "def", description: 'description', name: it) }
def returnValue = input message: 'Need some input', parameters: [inputRequest]
This does not work:
java.lang.ClassCastException: class org.jenkinsci.plugins.workflow.support.steps.input.InputStep.setParameters() expects class hudson.model.ParameterDefinition but received class java.util.ArrayList
Probably, Groovy can figure out dynamically in the first case which object is required, but in the second case it does not work anymore, as collect returns an ArrayList?
How to create such a list correctly?
Edit: maybe this question is not very useful by itself, but it may still serve as a code sample...
OK, it was quite a simple fix: since collect already returns an ArrayList, it should not be wrapped in another list when setting the parameters:
def returnValue = input message: 'Need some input', parameters: inputRequest
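The nesting mistake is easy to reproduce in plain Groovy; here plain maps stand in for the string(...) parameter definitions:

```groovy
def list = ["foo", "bar"]

// Stand-ins for the string(...) parameter definitions
def inputRequest = list.collect { [name: it, defaultValue: 'def'] }

// collect already returns a List of two elements...
assert inputRequest instanceof List
assert inputRequest.size() == 2

// ...so wrapping it in [ ] produces a one-element list *containing* a
// list, which is why the input step saw an ArrayList where it
// expected a ParameterDefinition.
def wrapped = [inputRequest]
assert wrapped.size() == 1
assert wrapped[0] instanceof List
```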
I struggled with this for a while, but it's not as complicated as I thought.
At its simplest, you can define an empty array and fill it with parameter types.
I'm honestly not clear on what the object type is, but it works.
ArrayList pa = []
if(<some conditional>) {
pa += choice (name: 'source_feed', choices: ['Development', 'Production'])
pa += string (name: 'deployVersion', defaultValue: 'Latest', description: 'version')
pa += extendedChoice (name: 'environments', value: 'one,two,three', description: 'envs', type: 'PT_CHECKBOX')
pa += booleanParam (name: 'dryRunMode', defaultValue: false, description: 'dry')
pa += booleanParam (name: 'skipPreChecks', defaultValue: false, description: 'skip')
}
else if (<some other conditional>) {
pa += string (name: 'CommandServiceVersion', defaultValue: '', description: 'commandservice')
}
def result = input message: "Enter Parameters",
ok: 'ok go!',
parameters: pa
echo result.toString()
As long as you can pass the array around, it works. I ended up creating the array outside the pipeline, as a global, so it can be passed around throughout the run, as opposed to an env var, which can only hold a string.
You can use the following code; it will generate the choices A, B, and C for you. This can be found by using the Pipeline Syntax option in Jenkins. I use it because it saves time and avoids typos.
def createInputParameters() {
    properties([
        [$class: 'RebuildSettings', autoRebuild: false, rebuildDisabled: false],
        parameters([choice(choices: ['A', 'B', 'C'], description: '', name: 'Test')]),
        [$class: 'ThrottleJobProperty',
         categories: [],
         limitOneJobWithMatchingParams: false,
         maxConcurrentPerNode: 0,
         maxConcurrentTotal: 0,
         paramsToUseForLimit: '',
         throttleEnabled: false,
         throttleOption: 'project']
    ])
}
Recently a colleague recommended Swagger to me for writing my API...
Now, after searching for a while, I haven't found a way to map my JSON easily.
This is what my response looks like:
{
  1: { name: "foo", age: 22 },
  2: { name: "bar", age: 14 },
  3: { name: "boo", age: 26 },
  4: { name: "far", age: 19 }
}
So basically I have an object, where the key is an ID, and the value is another object, which has normal key/value pairs.
Now, I'm sure someone before me has needed this, but I couldn't find a way to write it.
How would I write this in Swagger?
Thank you for any help / example / reference to another question!
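For reference, the usual way to describe a map with arbitrary keys in Swagger 2.0 is additionalProperties. A sketch (the schema names are made up):

```yaml
definitions:
  Person:
    type: object
    properties:
      name:
        type: string
      age:
        type: integer
  PersonMap:
    # A map whose keys are IDs and whose values are Person objects.
    # Swagger cannot constrain the key names themselves; object keys
    # are always treated as strings.
    type: object
    additionalProperties:
      $ref: '#/definitions/Person'
```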
I am using Grails excel import plugin to import an excel file.
static Map propertyConfigurationMap = [
name:([expectedType: ExcelImportService.PROPERTY_TYPE_STRING, defaultValue:null]),
age:([expectedType: ExcelImportService.PROPERTY_TYPE_INT, defaultValue:0])]
static Map CONFIG_USER_COLUMN_MAP = [
sheet:'Sheet1',
startRow: 1,
columnMap: [
//Col, Map-Key
'A':'name',
'B':'age',
]
]
I am able to retrieve the list by using this code snippet:
def usersList = excelImportService.columns(workbook, CONFIG_USER_COLUMN_MAP)
which results in
[[name: Mark, age: 25], [name: Jhon, age: 46], [name: Anil, age: 62], [name: Steve, age: 32]]
And I'm also able to read each record, say [name: Mark, age: 25], by using usersList.get(0).
How do I read each column value?
I know I can read something like this
String[] row = usersList.get(0)
for (String s : row)
println s
I wonder whether the plugin supports reading a column value directly, rather than manipulating the row to get the desired result.
Your usersList is basically a List<Map<String, Object>> (list of maps). You can read a column using the name you gave it in the config. In your example, you named column A name and column B age. So using your iteration example as a basis, you can read each column like this:
Map row = usersList.get(0)
for(Map.Entry entry : row) {
println entry.value
}
Groovy makes this easier to do with Object.each(Closure):
row.each { key, value ->
println value
}
If you want to read a specific column value, here are a few ways to do it:
println row.name // One
println row['name'] // Two
println row.getAt('name') // Three
Hint: These all end up calling row.getAt('name')
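As a side note, pulling a single column across all rows is also a one-liner, via the spread operator or collect. A sketch using the list from the question:

```groovy
def usersList = [
    [name: 'Mark', age: 25],
    [name: 'Jhon', age: 46],
    [name: 'Anil', age: 62],
    [name: 'Steve', age: 32]
]

// The spread operator reads one key from every map in the list
def names = usersList*.name

// collect does the same thing with an explicit closure
def ages = usersList.collect { it.age }
```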
I have just started with MongoDB and mongoid.
The biggest problem I'm having is understanding the map/reduce functionality to be able to do some very basic grouping and such.
Let's say I have a model like this:
class Person
include Mongoid::Document
field :age, type: Integer
field :name
field :sdate
end
That model would produce objects like these:
#<Person _id: 9xzy0, age: 22, name: "Lucas", sdate: "2013-10-07">
#<Person _id: 9xzy2, age: 32, name: "Paul", sdate: "2013-10-07">
#<Person _id: 9xzy3, age: 23, name: "Tom", sdate: "2013-10-08">
#<Person _id: 9xzy4, age: 11, name: "Joe", sdate: "2013-10-08">
Could someone show how to use Mongoid map/reduce to get a collection of those objects grouped by the sdate field, and to get the sum of the ages of those that share the same sdate?
I'm aware of this: http://mongoid.org/en/mongoid/docs/querying.html#map_reduce
But it would help to see that applied to a real example. Where does that code go (in the model, I guess)? Is a scope needed, etc.?
I can make a simple search with Mongoid, get the array, and manually construct anything I need, but I guess map/reduce is the way to go here. I imagine the JS functions mentioned on the Mongoid page are fed to the DB, which performs those operations internally. Coming from ActiveRecord, these new concepts are a bit strange.
I'm on Rails 4.0, Ruby 1.9.3, Mongoid 4.0.0, MongoDB 2.4.6 on Heroku (mongolab) though I have locally 2.0 that I should update.
Thanks.
Taking the examples from http://mongoid.org/en/mongoid/docs/querying.html#map_reduce, adapting them to your situation, and adding comments to explain:
map = %Q{
  function() {
    // here "this" is the record that map is executed on;
    // emitting this.sdate as the key groups records by sdate
    emit(this.sdate, { age: this.age, name: this.name });
  }
}
reduce = %Q{
  function(key, values) {
    // this is executed once for every group that shares
    // the same sdate value
    var result = { avg_of_ages: 0 };
    var sum = 0;  // sum of all ages in the group
    values.forEach(function(value) {
      sum += value.age;
    });
    result.avg_of_ages = sum / values.length;  // the average
    return result;
  }
}
results = Person.map_reduce(map, reduce).out(inline: 1) # an array of result documents
# each result looks like { "_id" => sdate, "value" => { "avg_of_ages" => ... } }
first_average = results.first["value"]["avg_of_ages"]
results.each do |result|
  # do whatever you want with result
end
Though I would suggest you use aggregation rather than map/reduce for such a simple operation. The way to do it is as follows:
results = Person.collection.aggregate([
  { "$group" => { "_id" => { "sdate" => "$sdate" },
                  "avg_of_ages" => { "$avg" => "$age" } } }
])
and the result will be almost identical to the map/reduce version, with a lot less code written.