How to reduce XML reading cost in SQL Server 2008 R2 - stored-procedures

I am reading an XML value and storing it in a table, but in the execution plan the XML read accounts for 82% of the cost.
This is my query:
declare @StateNameList XML
set @StateNameList='<StatusList>
<Status>
<StatusName>All</StatusName>
<StatusID>1</StatusID>
</Status>
<Status>
<StatusName>test</StatusName>
<StatusID>2</StatusID>
</Status>
</StatusList>'
SELECT
Table1.Column1.value('(./StatusName)[1]', 'varchar(50)') AS StatusName
FROM
@StateNameList.nodes('/StatusList/Status') AS Table1(Column1)

Add the OPTIMIZE FOR hint shown below after the FROM clause, then look at the execution plan:
declare @StateNameList XML
set @StateNameList='<StatusList>
<Status>
<StatusName>All</StatusName>
<StatusID>1</StatusID>
</Status>
<Status>
<StatusName>test</StatusName>
<StatusID>2</StatusID>
</Status>
</StatusList>'
SELECT
Table1.Column1.value('(./StatusName)[1]','varchar(50)') AS StatusName
FROM
@StateNameList.nodes('/StatusList/Status') AS Table1(Column1)
OPTION (OPTIMIZE FOR ( @StateNameList = NULL ))
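Independently of the hint, a commonly recommended way to make the value() calls themselves cheaper is to address the text() node directly in the XQuery path, which typically reduces the work done by the XML Reader operators when shredding each row. A minimal sketch against the same variable (the StatusID column is added only to show the pattern):
SELECT
    Table1.Column1.value('(StatusName/text())[1]', 'varchar(50)') AS StatusName,
    Table1.Column1.value('(StatusID/text())[1]', 'int') AS StatusID
FROM
    @StateNameList.nodes('/StatusList/Status') AS Table1(Column1);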

XSLT: how to pass different prefix values for a node and its fields

I have a requirement where container nodes need one prefix while the fields under them need a different prefix. How can this be achieved using XSLT? I have attached a sample input and the expected output. Can you please advise?
I expect the container nodes to use the prefix "cac", their fields to use "cbc", and the ns2 namespace to be bound to the "rl" prefix.
Input:
<?xml version="1.0" encoding="UTF-8"?>
<ns0:StandardBusinessDoc xmlns:ns0="http://www.unece.org/cefact/namespaces/StandardBusinessDocumentHeader">
<ns1:Invoice xmlns:ns1="urn:oasis:names:specification:ubl:schema:xsd:Invoice-2">
<ns1:CustomizationID>urn:cen.eu:en131:2017#compliant#urn:fdc:peppol.eu:2017:pocc:billing:3.0</ns1:CustomizationID>
<ns1:ProfileID>urn:fdc:peppol.eu:2017:poacc:billing:01:1.0</ns1:ProfileID>
<ns1:ID>80160238</ns1:ID>
<ns1:BuyerReference>202208_604</ns1:BuyerReference>
<ns1:BillingReference>
<ns1:InvoiceDocumentReference>
<ns1:ID>test</ns1:ID>
<ns1:IssueDate>2022-09-28</ns1:IssueDate>
</ns1:InvoiceDocumentReference>
</ns1:BillingReference>
<ns1:AdditionalDocumentReference>
<ns1:ID>06AB87FD6E1E1EED96F1653A13ADC23</ns1:ID>
<ns1:DocumentDescription>SupplierUID</ns1:DocumentDescription>
</ns1:AdditionalDocumentReference>
<ns1:AdditionalDocumentReference>
<ns1:ID>2M</ns1:ID>
<ns1:DocumentDescription>Series</ns1:DocumentDescription>
</ns1:AdditionalDocumentReference>
<ns2:Classification xmlns:ns2="rl:rl-einvoicing">
<ns2:Line>
<ns2:ID>000010</ns2:ID>
<ns2:VatCategory>
<ns2:VatRate>24</ns2:VatRate>
<ns2:IncomeClassification>
<ns2:Category>category1_2</ns2:Category>
<ns2:Type>E3_561_005</ns2:Type>
<ns2:Amount>112.33</ns2:Amount>
</ns2:IncomeClassification>
</ns2:VatCategory>
</ns2:Line>
</ns2:Classification>
</ns0:StandardBusinessDoc>
Expected Output:
<?xml-model href="http://www.unece.org/fileadmin/DAM/cefact/namespaces/StandardBusinessDocumentHeader/StandardBusinessDocumentHeader.xsd" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"?>
<StandardBusinessDoc xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:schemaLocation="http://www.unece.org/cefact/namespaces/StandardBusinessDocumentHeader http://www.unece.org/fileadmin/DAM/cefact/namespaces/StandardBusinessDocumentHeader/StandardBusinessDocumentHeader.xsd" xmlns:rl="rl:rl-einvoicing" xmlns="http://www.unece.org/cefact/namespaces/StandardBusinessDocumentHeader">
<Invoice xmlns:cac="urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2" xmlns:cbc="urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2" xmlns="urn:oasis:names:specification:ubl:schema:xsd:Invoice-2">
<cbc:CustomizationID>urn:cen.eu:en131:2017#compliant#urn:fdc:peppol.eu:2017:pocc:billing:3.0</cbc:CustomizationID>
<cbc:ProfileID>urn:fdc:peppol.eu:2017:poacc:billing:01:1.0</cbc:ProfileID>
<cbc:ID>80160238</cbc:ID>
<cbc:BuyerReference>202208_604</cbc:BuyerReference>
<cac:BillingReference>
<cac:InvoiceDocumentReference>
<cbc:ID>test</cbc:ID>
<cbc:IssueDate>2022-09-28</cbc:IssueDate>
</cac:InvoiceDocumentReference>
</cac:BillingReference>
<cac:AdditionalDocumentReference>
<cbc:ID>06AB87FD6E1E1EED96F1653A13ADC23</cbc:ID>
<cbc:DocumentDescription>SupplierUID</cbc:DocumentDescription>
</cac:AdditionalDocumentReference>
<cac:AdditionalDocumentReference>
<cbc:ID>2M</cbc:ID>
<cbc:DocumentDescription>Series</cbc:DocumentDescription>
</cac:AdditionalDocumentReference>
<rl:Classification>
<rl:Line>
<rl:ID>000010</rl:ID>
<rl:VatCategory>
<rl:VatRate>24</rl:VatRate>
<rl:IncomeClassification>
<rl:Category>category1_2</rl:Category>
<rl:Type>E3_561_005</rl:Type>
<rl:Amount>112.33</rl:Amount>
</rl:IncomeClassification>
</rl:VatCategory>
</rl:Line>
</rl:Classification>
</StandardBusinessDoc>
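A minimal XSLT 1.0 sketch of the renaming part: ns1 elements with child elements become cac:*, ns1 leaf elements become cbc:*, and ns2 elements keep their namespace but are emitted with the rl prefix (the cac/cbc URIs are the ones declared in the expected output). The outer StandardBusinessDoc element is just copied here; rewriting its prefix and adding the xsi:schemaLocation attributes and the xml-model processing instruction would need additional templates.
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:ns1="urn:oasis:names:specification:ubl:schema:xsd:Invoice-2"
    xmlns:ns2="rl:rl-einvoicing"
    xmlns:cac="urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2"
    xmlns:cbc="urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2"
    xmlns:rl="rl:rl-einvoicing"
    exclude-result-prefixes="ns1 ns2">

  <xsl:output method="xml" indent="yes"/>

  <!-- Identity copy: anything not matched below is copied unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Keep the Invoice element itself in its own namespace (no cac/cbc) -->
  <xsl:template match="ns1:Invoice" priority="1">
    <xsl:element name="Invoice" namespace="urn:oasis:names:specification:ubl:schema:xsd:Invoice-2">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>

  <!-- ns1 elements that contain child elements become cac:* -->
  <xsl:template match="ns1:*[*]">
    <xsl:element name="cac:{local-name()}">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>

  <!-- ns1 leaf elements become cbc:* -->
  <xsl:template match="ns1:*[not(*)]">
    <xsl:element name="cbc:{local-name()}">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>

  <!-- ns2 elements stay in the rl:rl-einvoicing namespace, emitted with the rl prefix -->
  <xsl:template match="ns2:*">
    <xsl:element name="rl:{local-name()}">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>

</xsl:stylesheet>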

Element type "foreach" must be declared - mybatis

Is using the foreach element in MyBatis/iBATIS for Oracle SQL updates a best practice? Below is my query in the SQL map.
<update id="updateFG" parameterClass="java.util.Map">
<foreach collection="entries.entrySet()" item="item" index="index" >
UPDATE <<tablename>>
SET description = #{item.value},
last_mod_date= SYSDATE
WHERE name = #{item.key}
</foreach>
</update>
When I try to run this piece of code it is throwing me an error:
Error parsing XML. Cause: org.xml.sax.SAXParseException; lineNumber: 49; columnNumber: 72; Element type "foreach" must be declared.
Okay, so when I changed my DOCTYPE from sqlMap to mapper it worked fine. I think foreach cannot be used with sqlMap.
EDIT: I realized the foreach approach is not efficient for multiple rows, so I used a batch update instead (see the sketch below).
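For reference, a minimal sketch of the batch approach mentioned in the edit, assuming MyBatis 3 with a mapper DOCTYPE, a hypothetical namespace "fg" and table name "fg_config"; the single parameterized UPDATE is executed once per map entry through a BATCH-mode session instead of concatenating updates with foreach.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
    "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="fg">
  <!-- One parameterized UPDATE; no foreach needed -->
  <update id="updateFG" parameterType="map">
    UPDATE fg_config
    SET description = #{value},
        last_mod_date = SYSDATE
    WHERE name = #{key}
  </update>
</mapper>
And the calling Java code:
import java.util.HashMap;
import java.util.Map;
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class FgUpdater {
    private final SqlSessionFactory sqlSessionFactory;

    public FgUpdater(SqlSessionFactory sqlSessionFactory) {
        this.sqlSessionFactory = sqlSessionFactory;
    }

    // Runs the single UPDATE once per entry, flushed as a JDBC batch
    public void updateAll(Map<String, String> entries) {
        try (SqlSession session = sqlSessionFactory.openSession(ExecutorType.BATCH)) {
            for (Map.Entry<String, String> e : entries.entrySet()) {
                Map<String, Object> param = new HashMap<>();
                param.put("key", e.getKey());
                param.put("value", e.getValue());
                session.update("fg.updateFG", param);
            }
            session.commit(); // executes the batched statements
        }
    }
}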

Ant script to extract an XML tag with a specific matching string in an attribute value from an XML file

I have an XML file as below:
<sca:composite xmlns:sca="http://www.osoa.org/xmlns/sca/1.0" xmlns:atleastonce="http://www.tibco.com/wrm/policy/atleastonce" xmlns:common="http://xsd.tns.tibco.com/n2/models/common" xmlns:compositeext="http://schemas.tibco.com/amx/3.0/compositeext" xmlns:jdbc="http://xsd.tns.tibco.com/amf/models/sharedresource/jdbc" xmlns:pbu="http://www.tibco.com/wrm/policy/pbu" xmlns:pfe="http://xsd.tns.tibco.com/n2/models/pfe/1.0" xmlns:scact="http://xsd.tns.tibco.com/amf/models/sca/componentType" xmlns:scaext="http://xsd.tns.tibco.com/amf/models/sca/extensions" xmlns:service="http://xsd.tns.tibco.com/bx/amx/model" xmlns:smtp="http://xsd.tns.tibco.com/amf/models/sharedresource/smtp" xmlns:soapbt="http://xsd.tns.tibco.com/amf/models/sca/binding/soap" xmlns:startservicefirst="http://www.tibco.com/wrm/policy/startservicefirst" xmlns:threading="http://www.tibco.com/wrm/policy/threading" xmlns:transactedoneway="http://www.tibco.com/wrm/policy/transactedoneway" xmlns:webapp="http://xsd.tns.tibco.com/amf/models/sca/implementationtype/webapp" xmlns:wrm="http://www.tibco.com/wrm" xmlns:xmi="http://www.omg.org/XMI" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" compositeext:formatVersion="2" compositeext:version="1.0.0.20180112132229840" name="za.co.rmb.dealamendmentsmaintenance" targetNamespace="http://www.example.com/za.co.rmb.dealamendmentsmaintenance" xmi:id="_4EfRQfeKEeeZRvktH3XIjg" xmi:version="2.0">
<sca:reference multiplicity="0..1" name="WorkListService_Consumer1" promote="dealAmendmentsMaintenanceProcessFlow/WorkListService_Consumer" wiredByImpl="false" xmi:id="_AR2UQPeLEeeZRvktH3XIjg">
<sca:interface.wsdl interface="http://services.brm.n2.tibco.com#wsdl.interface(WorkListService)" scaext:wsdlLocation=".processOut/process/dealAmendmentsMaintenance.xpdl/brm.wsdl" xmi:id="_AR2UQfeLEeeZRvktH3XIjg"/>
</sca:reference>
<sca:reference multiplicity="0..1" name="CreateDailyTasks_Consumer1" promote="dealAmendmentsMaintenanceProcessFlow/CreateDailyTasks_Consumer" wiredByImpl="false" xmi:id="_ATRQkPeLEeeZRvktH3XIjg">
<sca:interface.wsdl interface="http://www.tibco.com/bs3.0/_8uwIINbzEeWTpucOvGErRg#wsdl.interface(CreateDailyTasks)" scaext:wsdlLocation=".processOut/process/dealAmendmentsMaintenance.xpdl/dealAmendments_segregation.wsdl" xmi:id="_ATRQkfeLEeeZRvktH3XIjg"/>
</sca:reference>
</sca:composite>
With an Ant script I want to extract the value of the "interface" attribute under sca:interface.wsdl, by matching an input value against the "name" attribute of sca:reference.
So let's say:
if input will be : WorkListService_Consumer1
Expected Output : http://services.brm.n2.tibco.com#wsdl.interface(WorkListService)
Similarly, if
input will be : CreateDailyTasks_Consumer1
Expected Output : http://www.tibco.com/bs3.0/_8uwIINbzEeWTpucOvGErRg#wsdl.interface(CreateDailyTasks)
I tried various xmltask commands but have not been successful.
Thanks,
Shrijeet Sinha
You almost had the solution; however, text() is used to reference the inner text of an XML element, such as <element>This text here</element>. Here is the syntax for referencing an attribute's value:
<xmltask source="xmlfile.xml">
<copy path="sca:composite/sca:reference[@name='${input}']/sca:interface.wsdl/@interface" property="testproperty"/>
</xmltask>
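A complete build file around that copy could look like the sketch below. It assumes xmltask.jar sits at a local path (lib/xmltask.jar here), the usual com.oopsconsultancy.xmltask.ant.XmlTask taskdef, and the same xmlfile.xml source as above; the project and target names are hypothetical, and the input value is passed on the command line, e.g. ant -Dinput=WorkListService_Consumer1.
<project name="extract-interface" default="extract">
  <!-- xmltask is a third-party task; adjust the classpath to wherever xmltask.jar lives -->
  <taskdef name="xmltask" classname="com.oopsconsultancy.xmltask.ant.XmlTask"
           classpath="lib/xmltask.jar"/>
  <target name="extract">
    <xmltask source="xmlfile.xml">
      <copy path="sca:composite/sca:reference[@name='${input}']/sca:interface.wsdl/@interface"
            property="testproperty"/>
    </xmltask>
    <!-- prints the extracted interface value -->
    <echo message="interface = ${testproperty}"/>
  </target>
</project>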

Transform portion of ETL using Scriptella?

I am trying out Scriptella to see if it will meet my needs. So far, it seems like a great tool. I've spent several hours studying sample scripts, searching forums, and trying to get the hang of nested queries/scripts.
This is an example of my ETL file, slightly cleaned up for brevity. Lines beginning with # are added here for explanation and are not part of the actual ETL file. I am trying to insert/retrieve IDs and then pass them on to later script blocks. The most promising way to do this appears to be global variables, but I'm getting null when trying to retrieve the values. Later, I will be adding code to the script blocks that parses and significantly transforms fields before adding them to the DB.
There are no errors. I'm just not getting the OS ID and Category IDs that I'd expect. Thank you in advance.
<!DOCTYPE etl SYSTEM "http://scriptella.javaforge.com/dtd/etl.dtd">
<etl>
<connection id="in" driver="csv" url="mycsvfile.csv"/>
<connection id="dest" url="jdbc:mysql://localhost:3306/pvm3" user="user" password="password"/>
<connection id="js" driver="script"/>
<query connection-id="in">
<!-- all columns are selected, notably: OPERATINGSYSTEM, CATEGORY, QID, TITLE -->
<query connection-id="dest">
#Check to see if the OS already exists, and get the ID if it does
select max(os_id) as os_id, count(*) as os_cnt from etl_os where os = ?OPERATINGSYSTEM;
#If it doesn't exist, add it and get the AUTO_INCREMENT value
<script if="os_cnt==0">
insert into etl_os(os) values(?OPERATINGSYSTEM);
<query connection-id="dest">
select last_insert_id() as os_id;
#Store in global so it can be accessed in later script blocks
<script connection-id="js">
etl.globals.put('os_id', os_id);
</script>
</query>
</script>
#Same style select/insert as above for category_id (excluded for brevity)
#See if KB record exists by qid, if not then add it with the OS ID and category ID we got earlier
<query connection-id="dest">
select max(qid) as existing_qid, count(*) as kb_cnt from etl_qids where qid = ?QID
<script if="kb_cnt==0">
insert into etl_qids(qid, category_id, os_id) values (?QID, ?{etl.globals.get('category_id')}, ?{etl.globals.get('os_id')});
</script>
</query>
</query>
</query>
</etl>
Found out how to do it. Essentially, you just nest queries to modify the data before passing it to a script. The code below is a quick type-up of the solution. I did not understand at first that queries could be nested directly to transform a row before passing it on for processing; my impression was that only scripts could manipulate the data.
(Query) raw data -> (Query) manipulate data -> (Script) write new data.
.. in is a CSV file ..
.. js is a driver="script" block ..
<query connection-id="in">
<query connection-id="js">
//transform data as needed here
if (BASE_TYPE == '-') BASE_TYPE = '0';
if (SECONDARY_TYPE == '-') SECONDARY_TYPE = '0';
SIZES = SIZES.toLowerCase();
query.next(); //call nested scripts
<script connection-id="db">
INSERT IGNORE INTO sizes(size) VALUES (?SIZE);
INSERT IGNORE INTO test(base_type,secondary_type) VALUES (?BASE_TYPE, ?SECONDARY_TYPE);
</script>
</query>
</query>

Envers with Nhibernate

I am using Envers for auditing some of my DB Tables.
Auditing is working fine; I can see the data in the DB in the corresponding tables with my custom prefix, etc.
However, I can't query any data because I always get the following QueryException:
could not resolve property: originalId of: NaturalPerson [select e__, r__ from NaturalPerson e__, ExtendedRevisionEntity r__ where e__.originalId.RevisionID.id = r__.id order by e__.originalId.RevisionID.id asc]
This is the query code:
AuditReaderFactory.Get(session).CreateQuery().ForHistoryOf<NaturalPerson, ExtendedRevisionEntity>().Results();
Mappings for NaturalPerson
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping assembly="Domain" namespace="Domain" xmlns="urn:nhibernate-mapping-2.2">
<joined-subclass name="NaturalPerson" schema="MySchema" table="NaturalPersons">
<key column="PersonID" />
<property name="Name" type="AnsiString"/>
</joined-subclass>
</hibernate-mapping>
Envers config using fluent:
configuration.SetEnversProperty(ConfigurationKey.AuditTableSuffix, " ");
configuration.SetEnversProperty(ConfigurationKey.DefaultSchema, "aud");
configuration.SetEnversProperty(ConfigurationKey.StoreDataAtDelete, true);
configuration.SetEnversProperty(ConfigurationKey.RevisionFieldName, "RevisionID");
configuration.SetEnversProperty(ConfigurationKey.RevisionTypeFieldName, "RevisionTypeID");
enversConf.Audit<NaturalPerson>();
As stated in the comments above, the problem is the spaces-only AuditTableSuffix.
The audited entity name in code is
audittableprefix + "original entity name" + audittablesuffix
When querying, the trailing space is effectively invisible ("select a from b  a" reads the same as "select a from b a"), so the query resolves against the wrong entity and the wrong data will be read.
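A sketch of the obvious workaround on the configuration side, assuming a conventional non-blank suffix such as "_AUD" (any value without leading or trailing whitespace should do):
// Use a non-blank suffix so the audited entity name does not end in whitespace
configuration.SetEnversProperty(ConfigurationKey.AuditTableSuffix, "_AUD");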
Please add a JIRA issue about it here
https://nhibernate.jira.com/browse/NHE
preferably with a small, isolated failing test.
