Envers with NHibernate - nhibernate-envers

I am using Envers for auditing some of my DB Tables.
Auditing is working fine; I can see the data in the DB in the corresponding tables with my custom prefix, etc.
I can't query any data because I always get the following QueryException:
could not resolve property: originalId of: NaturalPerson [select e__, r__ from NaturalPerson e__, ExtendedRevisionEntity r__ where e__.originalId.RevisionID.id = r__.id order by e__.originalId.RevisionID.id asc]
This is the query code:
AuditReaderFactory.Get(session).CreateQuery().ForHistoryOf<NaturalPerson, ExtendedRevisionEntity>().Results();
Mappings for NaturalPerson
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping assembly="Domain" namespace="Domain" xmlns="urn:nhibernate-mapping-2.2">
<joined-subclass name="NaturalPerson" schema="MySchema" table="NaturalPersons">
<key column="PersonID" />
<property name="Name" type="AnsiString"/>
</joined-subclass>
</hibernate-mapping>
Envers configuration using the fluent API:
configuration.SetEnversProperty(ConfigurationKey.AuditTableSuffix, " ");
configuration.SetEnversProperty(ConfigurationKey.DefaultSchema, "aud");
configuration.SetEnversProperty(ConfigurationKey.StoreDataAtDelete, true);
configuration.SetEnversProperty(ConfigurationKey.RevisionFieldName, "RevisionID");
configuration.SetEnversProperty(ConfigurationKey.RevisionTypeFieldName, "RevisionTypeID");
enversConf.Audit<NaturalPerson>();

As stated in the comments above, the problem is the whitespace-only AuditTableSuffix.
The audited entity name in code is
audittableprefix + "original entity name" + audittablesuffix
When querying, a suffix consisting only of spaces is effectively ignored ("select a from b  a" is parsed the same as "select a from b a"), so the query resolves against the original, non-audited entity, which has no originalId property, and the wrong data is read.
Please add a JIRA issue about it here
https://nhibernate.jira.com/browse/NHE
preferably with a small, isolated failing test.
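As a workaround until it is fixed, using a suffix that is not whitespace-only avoids the problem. A minimal sketch, reusing the fluent configuration above ("_AUD" is just an example value):
// any non-whitespace suffix keeps the audit entity name distinct when the query is parsed
configuration.SetEnversProperty(ConfigurationKey.AuditTableSuffix, "_AUD");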

Related

Strip string from a field with defined separator in invoice report

First of all, I'm still a newbie in Odoo, so this may be explained wrongly, but I will try.
In the inherited invoice_report XML document I have a conditional field that needs to be shown if a column (field) in the DB is equal to another column. To be more precise: if invoice_origin from account_move is equal to name in sale_order.
This is its code:
<t t-foreach="request.env['sale.order'].search([('name', '=', o.invoice_origin)])" t-as="obj">
For example, in the database this invoice_origin is [{'invoice_origin': 'S00151-2022'}]
On invoices that are created from more than one sales order it is [{'invoice_origin': 'S00123-2022, S00066-2022'}]
How can I split this data so that the foreach can use the part [{'invoice_origin': 'S00123-2022'}] and the part [{'invoice_origin': 'S00066-2022'}] separately?
Thank you.
You can try to split up the invoice origin and use the result for your existing code:
<t t-set="origin_list" t-value="o.invoice_origin and o.invoice_origin.split(', ') or []" />
<t t-foreach="request.env['sale.order'].search([('name', 'in', origin_list)])" t-as="obj">
<!-- do something -->
</t>
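If you only need each origin string itself rather than the matching sale.order records, you can also iterate over the split list directly. A minimal sketch along the same lines (the span element is just a placeholder for whatever you want to render):
<t t-foreach="origin_list" t-as="origin">
<!-- 'origin' is one part of invoice_origin, e.g. 'S00123-2022' -->
<span t-esc="origin"/>
</t>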

Element type "foreach" must be declared - mybatis

Is using the foreach element in MyBatis/iBATIS for Oracle SQL updates a best practice? Below is my query in the SQL map.
<update id="updateFG" parameterClass="java.util.Map">
<foreach collection="entries.entrySet()" item="item" index="index" >
UPDATE <<tablename>>
SET description = #{item.value},
last_mod_date= SYSDATE
WHERE name = #{item.key}
</foreach>
</update>
When I try to run this piece of code, it throws the following error:
Error parsing XML. Cause: org.xml.sax.SAXParseException; lineNumber: 49; columnNumber: 72; Element type "foreach" must be declared.
Okay, so when I changed my DOCTYPE from sqlMap to mapper, it worked fine. I think foreach cannot be used in a sqlMap.
EDIT: I realized that foreach is not efficient for multiple rows, so I used a batch update instead.
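For reference, foreach is part of the MyBatis 3 dynamic SQL vocabulary and is only declared in the mapper DTD, so the file needs the mapper DOCTYPE (a sketch of the header; note that a MyBatis 3 mapper also uses parameterType rather than the iBATIS parameterClass attribute):
<!DOCTYPE mapper
PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">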

How to reduce XML reading cost in SQL Server 2008 R2

I am reading an XML value and storing it in a table, but its cost is 82% in the execution plan.
This is my query:
declare @StateNameList XML
set @StateNameList='<StatusList>
<Status>
<StatusName>All</StatusName>
<StatusID>1</StatusID>
</Status>
<Status>
<StatusName>test</StatusName>
<StatusID>2</StatusID>
</Status>
</StatusList>'
SELECT
Table1.Column1.value('(./StatusName)[1]', 'varchar(50)') AS StatusName
FROM
@StateNameList.nodes('/StatusList/Status') AS Table1(Column1)
Add the code below after the FROM clause, then check the execution plan again:
declare @StateNameList XML
set @StateNameList='<StatusList>
<Status>
<StatusName>All</StatusName>
<StatusID>1</StatusID>
</Status>
<Status>
<StatusName>test</StatusName>
<StatusID>2</StatusID>
</Status>
</StatusList>'
SELECT
Table1.Column1.value('(./StatusName)[1]','varchar(50)') AS StatusName
FROM
@StateNameList.nodes('/StatusList/Status') AS Table1(Column1)
OPTION (OPTIMIZE FOR ( @StateNameList = NULL ))
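If the XML reader cost still dominates, another common step (a sketch, not part of the original answer) is to shred the document once into a table variable, reading the text() node directly, and then reuse that table in later queries:
-- sketch: shred the XML once, then work with the table variable
declare @Status TABLE (StatusName varchar(50), StatusID int)
insert into @Status (StatusName, StatusID)
SELECT
Table1.Column1.value('(StatusName/text())[1]', 'varchar(50)'),
Table1.Column1.value('(StatusID/text())[1]', 'int')
FROM
@StateNameList.nodes('/StatusList/Status') AS Table1(Column1)
OPTION (OPTIMIZE FOR ( @StateNameList = NULL ))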

Transform portion of ETL using Scriptella?

I am trying out Scriptella to see if it will meet my needs. So far, it seems like a great tool. I've spent several hours studying sample scripts, searching forums, and trying to get the hang of nested queries/scripts.
This is an example of my ETL file, slightly cleaned up for brevity. Lines beginning with # are added here and are not part of the actual ETL file. I am trying to insert/retrieve IDs and then pass them on to later script blocks. The most promising way to do this appears to be using global variables, but I'm getting null when trying to retrieve the values. Later, I will be adding code in the script blocks that parses and significantly transforms fields before adding them into the DB.
There are no errors. I'm just not getting the OS ID and Category IDs that I'd expect. Thank you in advance.
<!DOCTYPE etl SYSTEM "http://scriptella.javaforge.com/dtd/etl.dtd">
<etl>
<connection id="in" driver="csv" url="mycsvfile.csv"/>
<connection id="dest" url="jdbc:mysql://localhost:3306/pvm3" user="user" password="password"/>
<connection id="js" driver="script"/>
<query connection-id="in">
<!-- all columns are selected, notably: OPERATINGSYSTEM, CATEGORY, QID, TITLE -->
<query connection-id="dest">
#Check to see if the OS already exists, and get the ID if it does
select max(os_id) as os_id, count(*) as os_cnt from etl_os where os = ?OPERATINGSYSTEM;
#If it doesn't exist, then add it and get the auto_increment value
<script if="os_cnt==0">
insert into etl_os(os) values(?OPERATINGSYSTEM);
<query connection-id="dest">
select last_insert_id() as os_id;
#Store in global so it can be accessed in later script blocks
<script connection-id="js">
etl.globals.put('os_id', os_id);
</script>
</query>
</script>
#Same style select/insert as above for category_id (excluded for brevity)
#See if KB record exists by qid, if not then add it with the OS ID and category ID we got earlier
<query connection-id="dest">
select max(qid) as existing_qid, count(*) as kb_cnt from etl_qids where qid = ?QID
<script if="kb_cnt==0">
insert into etl_qids(qid, category_id, os_id) values (?QID, ?{etl.globals.get('category_id')}, ?{etl.globals.get('os_id')});
</script>
</query>
</query>
</query>
</etl>
I found out how to do it. Essentially, just nest queries to modify the data before passing it to a script. The below is a quick type-up of the solution. I did not understand at first that queries could be nested directly to transform the row before passing it on for processing. My impression was also that only scripts could manipulate the data.
(Query) raw data -> (Query) manipulate data -> (Script) write new data.
.. in is a CSV file ..
.. js is a driver="script" block ..
<query connection-id="in">
<query connection-id="js">
//transform data as needed here
if (BASE_TYPE == '-') BASE_TYPE = '0';
if (SECONDARY_TYPE == '-') SECONDARY_TYPE = '0';
SIZES = SIZES.toLowerCase();
query.next(); //call nested scripts
<script connection-id="db">
INSERT IGNORE INTO sizes(size) VALUES (?SIZE);
INSERT IGNORE INTO test(base_type,secondary_type) VALUES (?BASE_TYPE, ?SECONDARY_TYPE);
</script>
</query>
</query>

Please explain the CDA entryRelationship element

I have access to the HL7 Clinical Document Architecture, Release 2.0, which states that the entryRelationship element is used essentially to link entries with each other in a CDA document. Specifically, it links between what are called the "source" and the "target" entries. I also read about the different types of relationships (CAUS, COMP, GEVL, MFST, REFR, RSON, SAS, SPRT, SUBJ, XCRPT) and somewhat understand those.
My main question: what are the "source" and "target" elements? Are they the element containing the entryRelationship, and the element contained by entryRelationship?
For example:
<entry typeCode="DRIV">
<act classCode="ACT" moodCode="EVN">
...
<entryRelationship typeCode="SUBJ">
<observation classCode="OBS" moodCode="EVN">
...
<entryRelationship typeCode="REFR">
<observation classCode="OBS" moodCode="EVN">
...
</observation>
</entryRelationship>
</observation>
</entryRelationship>
</act>
</entry>
In the above snippet, according to my understanding, there is a SUBJ relationship between the act and the first observation, and there is a REFR relationship between the two observations. Is this correct?
The source of an entryRelationship is the entry that contains the entryRelationship element in its body. In your example, the source entry is
<act classCode="ACT" moodCode="EVN">
and the target is
<observation classCode="OBS" moodCode="EVN">
It is possible to indicate an inverse relationship using the inversionInd attribute of the entryRelationship element. If this attribute is set to true, source and target are inverted.
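For illustration, a minimal sketch of what that looks like, reusing the element names from the snippet above (the content itself is hypothetical):
<!-- with inversionInd="true" the relationship is read in the opposite direction:
the contained observation is the source and the containing act is the target -->
<entryRelationship typeCode="SUBJ" inversionInd="true">
<observation classCode="OBS" moodCode="EVN">
...
</observation>
</entryRelationship>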
