Can I Do This In Grails 1: Create Data Entry Form Dynamically - grails

I'm considering using Grails for my current project. I have a couple of requirements that I'm hoping I can do in Grails.
First, I have the following database table:
TagType
---------
tag_type_id
tag_type
Sample Data: TagType
--------------------
1,title
2,author
Based on that data, I need to generate a data entry form like this which
will save its data to another table.
Title _ _ _ _ _ _ _ _ _ _ _
Author _ _ _ _ _ _ _ _ _ _ _
save cancel
Can I do that in Grails? Can you point me in the right direction?
Thanks!
More Details
I'm building a digital library system that supports OAI-PMH, which is a standard for sharing metadata about documents. The standard states that every element is optional and repeatable. To support this requirement I have the following database design.
I need to generate the user GUI (data entry form) based primarily on the contents
of the TagType table (see above). The data from the form then gets saved to
the Tags (if the tag is new) and Item_tags tables.
Items
---------
item_id
last_update
Tags
--------
tag_id
tag_type_id
tag
TagType
---------
tag_type_id
tag_type
Item_tags
---------
item_id
tag_id
Sample Data: Items
------------------
1,2009-06-15
Sample Data: TagType
--------------------
1,title
2,author
Sample Data: Tags
------------------
1,1,The Definitive Guide to Grails
2,2,Graeme Rocher
3,2,Jeff Brown
Sample Data: Item_tags
-----------------------
1,1
1,2
1,3

I am not completely sure what you are asking here with regard to "save its data to another table", but here are some thoughts.
For the table you have, the domain class you'd need is the following:
class Tag {
    String type
}
The ID field will be generated for you automatically when you create the scaffolding.
Please add more information to your question if this is insufficient.
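To point you toward the form-generation side: in Grails 1.x, dynamic scaffolding will produce working list/create/edit forms from a domain class with a one-line controller. A minimal sketch, assuming the Tag class above:
class TagController {
    // Dynamic scaffolding: Grails generates the list, show, create and
    // edit views for the Tag domain class at runtime.
    def scaffold = Tag
}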

I'm really liking Grails. When I first started playing with it a couple of weeks ago, I didn't realize that it's a full-fledged framework. In fact it's more than that: it's a complete web stack with a web server and a database included. Anyway, the short answer to my question is yes. You might even say yes, of course! Here's the code for the taglib I created:
import org.maflt.flashlit.pojo.Item
import org.maflt.flashlit.pojo.ItemTag
import org.maflt.flashlit.pojo.Metacollection
import org.maflt.flashlit.pojo.SetTagtype
import org.maflt.flashlit.pojo.Tag
import org.maflt.flashlit.pojo.Tagtype
/**
 * @return Input form fields for all fields in the given Collection's Metadataset. Does not return ItemTags where TagType is not in the Metadataset.
 *
 * In Hibernate, the
 *
 *   "from ItemTag b, Tag a where b.tag = a"
 *
 * query is a cross-join. The result of this query is a list of Object arrays where the first item is an ItemTag instance and the second is a Tag instance.
 *
 * You have to use e.g.
 *
 *   (ItemTag) theTags2[0][0]
 *
 * to access the first ItemTag instance.
 * (http://stackoverflow.com/questions/1093918/findall-not-returning-correct-object-type)
 **/
class AutoFormTagLib {
    def autoForm = { attrs, body ->
        //def masterList = Class.forName(params.attrs.masterClass,false,Thread.currentThread().contextClassLoader).get(params.attrs.masterId)
        def theItem = Item.get(attrs.itemId)
        def theCollection = Metacollection.get(attrs.collectionId)
        def masterList = theCollection.metadataSet.setTagtypes

        def theParams = null
        def theTags = null
        def itemTag = null
        def tag = null

        masterList.each {
            theParams = [attrs.itemId.toLong(), it.tagtype.id]
            theTags = ItemTag.findAll("from ItemTag d, Item c, Tag b, Tagtype a where d.item = c and d.tag = b and b.tagtype = a and c.id=? and a.id=?", theParams)
            for (int i = 0; i < it.maxEntries; i++) {
                out << "<tr>\n"
                out << " <td>${it.tagtype.tagtype}</td>\n"
                out << " <td>\n"
                if (theTags[i]) {
                    itemTag = (ItemTag) theTags[i][0]
                    //item = (Item) theTags[i][1]
                    tag = (Tag) theTags[i][2]
                    //itemTag = (Tagtype) theTags[i][3]
                    out << " <input name='${it.tagtype.tagtype}_${i}_${itemTag.id}' value='${tag.tag}' />\n"
                }
                else
                    out << " <input name='${it.tagtype.tagtype}_${i}' />\n"
                out << " </td>\n"
                out << "</tr>\n"
            }
        }
    }
}
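To round this out, here is roughly how the tag gets called from a GSP. This is only a sketch: the itemInstance and collectionInstance model names and the surrounding form markup are assumptions, not part of the taglib above.
<g:form action="save">
  <table>
    <%-- autoForm emits one <tr> per tag type entry --%>
    <g:autoForm itemId="${itemInstance.id}" collectionId="${collectionInstance.id}" />
  </table>
  <g:submitButton name="save" value="Save" />
  <g:link action="list">Cancel</g:link>
</g:form>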

Related

Insert elements into a table inside another table

I need help using table.insert to insert elements into a table that is inside another table. This is what I have:
Table = {} -- Main table

function InsertNewValues()
    local TIME   -- any value, string or integer
    local SIGNAL -- any value, string or integer
    table.insert(Table, {TIME, SIGNAL})
end
OK, this allows me to insert the values of TIME and SIGNAL every time I call that function, so the table will have:
Table[1][1] = TIME
Table[1][2] = SIGNAL
...
Table[...][1] = TIME
Table[...][2] = SIGNAL
BUT... I need to insert the values of TIME and SIGNAL into another table that sits inside the table "Table", and that inner table's key is how I refer to those TIME and SIGNAL values.
Therefore, the resulting table would be the following:
+Table
|
+-[1]othertable
|
+-+[1]TIME - [2]SIGNAL
+-+[1]TIME - [2]SIGNAL
+- ...
|
+-[2]othertable
|
+-+[1]TIME - [2]SIGNAL
+-+[1]TIME - [2]SIGNAL
+- ...
How can I do that?
----------------- EDIT -----------------
I have not explained myself well. What I need is:
Given a table called "Table", I need to be able to use "strings" as "keys" within that table. That would be:
-- Name of my "container" table
Table = {}

-- Values to be introduced
Time = '1 seconds' -- this value can change as desired
Value = 'logic'    -- this value can change as desired

-- Add to my master table "Table", using the string key "RandomName"
-- "RandomName" can change as desired
function AddNewValues()
    table.insert(Table[RandomName], {Time, Value})
end
Every time I call the function "AddNewValues ()", it should add the values present in "Time" and "Value" as a "NEW ENTRY" for that "RandomName".
So the resulting table should look like this:
+table -- that contains
+-RandomName-- string key to access
+--"Time,Value"
+--"Time,Value"
+--"Time,Value"
+...
And then, to be able to access the values that are inside that table, using "RandomName" as the key:
function Load()
    for key, v in pairs(Table) do
        a = Table[key][RandomName][1] -- reference to "Time"
        b = Table[key][RandomName][2] -- reference to "Value"
        print('Time: ' .. a .. '/' .. 'Value: ' .. b)
    end
end
You're just not starting deep enough into the table. When you want the sequence of values stored under the key whose value is RandomName, just do:
function Load()
    for _, v in ipairs(Table[RandomName]) do
        a = v[1] -- reference to "Time"
        b = v[2] -- reference to "Value"
        print('Time: ' .. a .. '/' .. 'Value: ' .. b)
    end
end
Answer to original question:
It appears that you want something like this:
Table = { ["[1]othertable"] = {}, ["[2]othertable"] = {} }
table.insert(Table["[1]othertable"], {TIME, SIGNAL})
The keys are "[1]othertable" and "[2]othertable".
You might prefer to use keys like "othertable1" and "othertable2". If you use valid identifiers you can drop some of the syntax:
Table = { othertable1 = {}, othertable2 = {} }
table.insert(Table.othertable1, {TIME, SIGNAL})
In fact, you might prefer to do something similar with TIME and SIGNAL. Instead of positive integer indices, you could use string keys:
Table = { othertable1 = {}, othertable2 = {} }
table.insert(Table.othertable1, {TIME = 12.42, SIGNAL = 3.2})
print(Table.othertable1[1].TIME)
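One caveat worth adding: table.insert(Table[RandomName], ...) raises an error if that keyed subtable does not exist yet. A common pattern, sketched here with the names reused from the question, is to create the subtable on first use:
Table = {} -- container table

-- Create the keyed subtable on demand, then append a {Time, Value} pair.
function AddNewValues(RandomName, Time, Value)
    Table[RandomName] = Table[RandomName] or {}
    table.insert(Table[RandomName], {Time, Value})
end

AddNewValues("RandomName", "1 seconds", "logic")
AddNewValues("RandomName", "2 seconds", "logic")

for _, entry in ipairs(Table["RandomName"]) do
    print("Time: " .. entry[1] .. " / Value: " .. entry[2])
end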

Spark join hangs

I have a table with n columns that I'll call A. In this table there are three columns that I'll need:
vat -> String
tax -> String
card -> String
vat or tax can be null, but not both at the same time.
For every unique couple of vat and tax there is at least one card.
I need to alter this table, adding a column card_count in which I put a label based on the number of cards every unique combination of tax and vat has.
So I've done this:
val cardCount = A.groupBy("tax", "vat").count
val sqlCard = udf((count: Int) => {
if (count > 1)
"MULTI"
else
"MONO"
})
val B = cardCount.withColumn(
"card_count",
sqlCard(cardCount.col("count"))
).drop("count")
In the table B I have three columns now:
vat -> String
tax -> String
card_count -> String
and every operation on this DataFrame is smooth.
Now, because I wanted to bring the new column into table A, I performed the following join:
val result = A.join(B,
B.col("tax")<=>A.col("tax") and
B.col("vat")<=>A.col("vat")
).drop(B.col("tax"))
.drop(B.col("vat"))
I expected to get the original table A with the card_count column added.
The problem is that the join hangs, consuming all system resources and locking up the PC.
Additional details:
Table A has ~1.5M rows and is read from a Parquet file;
Table B has ~1.3M rows.
The system has 8 threads and 30 GB of RAM.
Let me know what I'm doing wrong.
In the end, I didn't find out what the issue was, so I changed approach:
val cardCount = A.groupBy("tax", "vat").count
val cardCountSet = cardCount.filter(cardCount.col("count") > 1)
.rdd.map(r => r(0) + " " + r(1)).collect().toSet
val udfCardCount = udf((tax: String, vat:String) => {
if (cardCountSet.contains(tax + " " + vat))
"MULTI"
else
"MONO"
})
val result = A.withColumn("card_count",
udfCardCount(A.col("tax"), A.col("vat")))
If someone knows a better approach, let me know.
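For what it's worth, a commonly suggested variant is to keep the original join but mark the smaller, aggregated side for broadcast so the shuffle is avoided. This is only a sketch, not a verified fix for the hang; whether it helps depends on the Spark version and the memory available for broadcasting B.
import org.apache.spark.sql.functions.broadcast

// Sketch: hint Spark to broadcast the aggregated DataFrame B so the join
// becomes a broadcast hash join instead of a shuffled sort-merge join.
val result = A.join(
    broadcast(B),
    A.col("tax") <=> B.col("tax") && A.col("vat") <=> B.col("vat")
  )
  .drop(B.col("tax"))
  .drop(B.col("vat"))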

Can I share composite types across Postgres and PL/Python

I have a composite type called tt to be used by all my plpgsql and plpythonu
procedures. Is there some kind of plpy means of accessing the catalogue or
schema in a consistent way, so as to derive types or iterable structs to return
without having to define the class in the plpythonu procedure?
CREATE TYPE tt AS (id integer, name text);
CREATE OR REPLACE FUNCTION python_setof_type() RETURNS SETOF tt AS $$
#-- I want this to be dynamic, or to have its attributes pulled from the tt type
class tt:
    def __init__(self, id, name):
        plpy.info('constructed type')
        self.idx = 0
        self.id = id
        self.name = name
    def __iter__(self):
        return self
    def next(self):
        if (self.idx == 1):
            raise StopIteration
        self.idx += 1
        return (self)
return tt(3, 'somename')
#-- was hoping for something like
#-- give me 1 record
#-- return plpy.schema.tt(3, 'somename')
#-- give me 2 records
#-- return plpy.schema.tt([3, 'somename'], [1, 'someornothername'])
#-- give me no records
#-- return plpy.schema.tt([])
$$ LANGUAGE plpythonu;
There isn't anything built in like that, but it's a neat idea. You could probably write it yourself, by querying the system catalogs, as you indicated.
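A sketch of that do-it-yourself route: query pg_type/pg_attribute for the attribute names of tt, then return plain dicts, which PL/Python maps onto the composite columns by name. The function name and the sample data here are illustrative, and this relies on standard PL/Python behaviour rather than any plpy.schema feature.
CREATE OR REPLACE FUNCTION python_setof_type_dynamic() RETURNS SETOF tt AS $$
# Look up the attribute names of the composite type tt from the catalogs.
cols = plpy.execute("""
    SELECT a.attname
      FROM pg_type t
      JOIN pg_attribute a ON a.attrelid = t.typrelid
     WHERE t.typname = 'tt'
       AND a.attnum > 0
       AND NOT a.attisdropped
     ORDER BY a.attnum""")
names = [c['attname'] for c in cols]

# PL/Python maps dicts to composite rows by column name, so build each
# result row as a dict keyed by the discovered attribute names.
data = [(3, 'somename'), (1, 'someothername')]
return [dict(zip(names, row)) for row in data]
$$ LANGUAGE plpythonu;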

multi level join in solr

Hi, I have data in a 3-level tree structure. Can I use Solr JOIN to get the root node when the user searches a 3rd-level node?
For example:
PATIENT1
    -> FirstName1
    -> LastName1
    -> DOCUMENTS1_1
        -> document_type1_1
        -> document_description1_1
        -> document_value1_1
        -> CODE_ITEMS1_1_1
            -> Code_id1_1_1
            -> code1_1_1
        -> CODE_ITEMS1_1_2
            -> Code_id1_1_2
            -> code1_1_2
    -> DOCUMENTS1_2
        -> document_type1_2
        -> document_description1_2
        -> document_value1_2
        -> CODE_ITEMS1_2_1
            -> Code_id1_2_1
            -> code1_2_1
        -> CODE_ITEMS1_2_2
            -> Code_id1_2_2
            -> code1_2_2
PATIENT2
    -> FirstName2
    -> LastName2
    -> DOCUMENTS2_1
        -> document_type2_1
        -> document_description2_1
        -> document_value2_1
        -> CODE_ITEMS2_1_1
            -> Code_id2_1_1
            -> code2_1_1
        -> CODE_ITEMS2_1_2
            -> Code_id2_1_2
            -> code2_1_2
I want to search a CODE_ITEM and return all the patients that match the code item search criteria. How can this be done? Is it possible to apply the join twice: the first join gives all the documents for the code_item search, and the next join gives all the patients.
Something like this SQL query:
select * from patients where docID IN (select DOCID from DOCUMENTS where CODEID IN (select CODEID from CODE_ITEMS where CODE LIKE '%SEARCH_TEXT%'))
I really don't know how Solr joins work internally, but knowing that multiple RDB joins are extremely inefficient on large data sets, I'd probably end up writing my own org.apache.solr.handler.component.QueryComponent that would, after doing the normal search, get the root parent (of course, this approach requires that each child doc has a reference to its root patient).
If you choose to go this path I'll post some examples. I had similar (more complex - ontology) problem in one of my previous Solr projects.
The simpler way to go (simpler when it comes to solving this problem, not the whole approach) is to completely flatten this part of your schema, store all information (documents and code items) in its parent patient, and just do a regular search. This is more in line with Solr: you have to look at a Solr schema in a different way, nothing like your regular normalized RDB schema, because Solr encourages data redundancy so that you can search blindingly fast without joins.
Third approach would be to do some joins testing on representative data sets and see how search performance is affected.
In the end, it really depends on your whole setup and requirements (and test results, of course).
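As a rough illustration of the flattened option described above (all field names here are assumptions, and the multivalued fields would have to be declared in schema.xml), each patient becomes a single document that carries its documents' and code items' values, so a query like code:code1_2_1 returns the patient directly:
<add>
  <doc>
    <field name="patient_ID">PATIENT1</field>
    <field name="firstName">FirstName1</field>
    <field name="lastName">LastName1</field>
    <field name="document_type">document_type1_1</field>
    <field name="document_type">document_type1_2</field>
    <field name="code">code1_1_1</field>
    <field name="code">code1_1_2</field>
    <field name="code">code1_2_1</field>
    <field name="code">code1_2_2</field>
  </doc>
</add>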
EDIT 1:
I did this a couple of years back, so you'll have to figure out whether things have changed in the meantime.
1. Create custom request handler
To do a completely clean job, I suggest you define your own request handler (in solrconfig.xml) by simply copying the whole section that starts with
<requestHandler name="/select" class="solr.SearchHandler">
...
...
</requestHandler>
and then changing the name to something meaningful to your users, e.g. /searchPatients.
Also, add this part inside:
<arr name="components">
<str>patients</str>
<str>facet</str>
<str>mlt</str>
<str>highlight</str>
<str>stats</str>
<str>debug</str>
</arr>
2. Create custom search component
Add this to your solrconfig:
<searchComponent name="patients" class="org.apache.solr.handler.component.PatientQueryComponent"/>
Create PatientQueryComponent class:
The following source probably has errors, since I edited my original source in a text editor and posted it without testing, but the important thing is that you get the recipe, not finished source, right? I threw out caching, lazy loading and the prepare method, and left only the basic logic. You'll have to see how the performance is affected and then tweak the source if needed. My performance was fine, but I had a couple of million documents in total in my index.
public class PatientQueryComponent extends SearchComponent {
    ...

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        SolrQueryRequest req = rb.req;
        SolrQueryResponse rsp = rb.rsp;
        SolrParams params = req.getParams();
        if (!params.getBool(COMPONENT_NAME, true)) {
            return;
        }

        searcher = req.getSearcher();

        // -1 as flag if not set.
        long timeAllowed = (long) params.getInt(CommonParams.TIME_ALLOWED, -1);

        DocList initialSearchList = null;
        SolrIndexSearcher.QueryCommand cmd = rb.getQueryCommand();
        cmd.setTimeAllowed(timeAllowed);
        cmd.setSupersetMaxDoc(UNLIMITED_MAX_COUNT);

        // fire standard query
        SolrIndexSearcher.QueryResult result = new SolrIndexSearcher.QueryResult();
        searcher.search(result, cmd);
        initialSearchList = result.getDocList();

        // list that'll hold patient IDs
        List<String> patientIds = new ArrayList<String>();
        DocIterator iterator = initialSearchList.iterator();
        int id;

        // loop through search results
        while (iterator.hasNext()) {
            // add your if logic (doc type, ...)
            id = iterator.nextDoc();
            Document doc = searcher.doc(id); // you can try lazy field loading and load only the patientID field value into the doc
            String patientId = doc.get("patientID"); // field that's in the child doc and points to its root parent - the patient
            patientIds.add(patientId);
        }

        // add all unique patient IDs to a TermsFilter
        TermsFilter termsFilter = new TermsFilter();
        Term term;
        for (String pid : patientIds) {
            term = new Term("patient_ID", pid); // field that's unique (name) to the patient and holds the patientID
            termsFilter.addTerm(term);
        }

        // get all patients whose ID is in the TermsFilter
        DocList patientsList = searcher.getDocList(new MatchAllDocsQuery(), searcher.convertFilter(termsFilter), null, 0, 1000);

        long totalSize = initialSearchList.size() + patientsList.size();
        logger.info("Total: " + totalSize);

        SolrDocumentList solrResultList = SolrPluginUtils.docListToSolrDocumentList(patientsList, searcher, null, null);
        SolrDocumentList solrInitialList = SolrPluginUtils.docListToSolrDocumentList(initialSearchList, searcher, null, null);

        // add patients to the end of the list
        for (SolrDocument parent : solrResultList) {
            solrInitialList.add(parent);
        }

        // replace initial results in the response
        SolrPluginUtils.addOrReplaceResults(rsp, solrInitialList);
        rsp.addToLog("hitsRef", patientsList.size());
        rb.setResult(result);
    }
}
Take a look at this post: http://blog.griddynamics.com/2013/12/grandchildren-and-siblings-with-block.html
You can actually do this in Solr 4.5.

magento join table collection

I'm customizing the Magento FAQ extension to sort FAQ items by category. The collection below is
used to get all active FAQ items.
$collection = Mage :: getModel('flagbit_faq/faq')->getCollection()
->addStoreFilter(Mage :: app()->getStore())
->addIsActiveFilter();
There is a relation table "faq_category_item".
Table structure:
category_id  faq_id
1            1
2            2
1            3
So I decided to join the two tables, but I did not succeed.
What I tried is below.
$tbl_faq_item = Mage::getSingleton('core/resource')->getTableName('faq_category_item');
$collection = Mage :: getModel('flagbit_faq/faq')->getCollection()
->getSelect()
->join(array('t2' => $tbl_faq_item),'main_table.faq_id = t2.faq_id','t2.category_id')
->addStoreFilter(Mage :: app()->getStore())
->addIsActiveFilter();
What's wrong with this, and how can I filter the items of a particular category? Please share some good links for learning about Magento model collections.
Thanks in advance.
The returned type from getSelect() and join() is a select object, not the collection that addStoreFilter() and addIsActiveFilter() belong to. The select part needs to occur later in the chain:
$collection = Mage :: getModel('flagbit_faq/faq')->getCollection()
->addStoreFilter(Mage :: app()->getStore())
->addIsActiveFilter();
// Cannot append getSelect right here because $collection will not be a collection
$collection->getSelect()
->join(array('t2' => $tbl_faq_item),'main_table.faq_id = t2.faq_id','t2.category_id');
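To then restrict the result to a particular category, a condition can be added on the joined alias. This is a sketch: where() is plain Zend_Db_Select, and the category id 1 is just an example value.
$collection->getSelect()
    ->join(array('t2' => $tbl_faq_item),
           'main_table.faq_id = t2.faq_id',
           array('t2.category_id'))
    ->where('t2.category_id = ?', 1);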
Try this function from
Mage_Eav_Model_Entity_Collection_Abstract
/**
 * Join a table
 *
 * @param string|array $table
 * @param string $bind
 * @param string|array $fields
 * @param null|array $cond
 * @param string $joinType
 * @return Mage_Eav_Model_Entity_Collection_Abstract
 */
public function joinTable($table, $bind, $fields = null, $cond = null, $joinType = 'inner')
{
So to join tables you can do it like this:
$collection->joinTable(
    'table-to-join',
    'left.id = right.id',
    array('alias' => 'field'),
    'some condition or null',
    'left' // join type: left, right or inner
);
