Extract multiple substrings from XML stored in a table with datatype CLOB (Oracle 9i)

<!DOCTYPE PODesc SYSTEM "PODesc.dtd">
<PODesc>
  <doc_type>P</doc_type>
  <order_no>62249675</order_no>
  <order_type>N/B</order_type>
  <order_type_desc>N/B</order_type_desc>
  <supplier>10167</supplier>
  <qc_ind>N</qc_ind>
  <not_before_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></not_before_date>
  <not_after_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></not_after_date>
  <otb_eow_date><year>2016</year><month>09</month><day>25</day><hour>00</hour><minute>00</minute><second>00</second></otb_eow_date>
  <earliest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></earliest_ship_date>
  <latest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></latest_ship_date>
  <terms>10003</terms>
  <terms_code>45 days</terms_code>
  <freight_terms>SHIP</freight_terms>
  <cust_order>N</cust_order>
  <status>A</status>
  <exchange_rate>1</exchange_rate>
  <bill_to_id>BT</bill_to_id>
  <po_type>00</po_type>
  <po_type_desc>No Store Cluster</po_type_desc>
  <pre_mark_ind>N</pre_mark_ind>
  <currency_code>CZK</currency_code>
  <comment_desc>created by the Tesco Group Ordering System</comment_desc>
  <PODtl>
    <item>120000935</item>
    <physical_location_type>W</physical_location_type>
    <physical_location>207</physical_location>
    <physical_qty_ordered>625</physical_qty_ordered>
    <unit_cost>281.5</unit_cost>
    <origin_country_id>CZ</origin_country_id>
    <supp_pack_size>25</supp_pack_size>
    <earliest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></earliest_ship_date>
    <latest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></latest_ship_date>
    <packing_method>FLAT</packing_method>
    <round_lvl>C</round_lvl>
    <POVirtualDtl><location_type>W</location_type><location>507</location><qty_ordered>625</qty_ordered></POVirtualDtl>
  </PODtl>
  <PODtl>
    <item>218333522</item>
    <physical_location_type>W</physical_location_type>
    <physical_location>207</physical_location>
    <physical_qty_ordered>180</physical_qty_ordered>
    <unit_cost>230.94</unit_cost>
    <origin_country_id>CZ</origin_country_id>
    <supp_pack_size>18</supp_pack_size>
    <earliest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></earliest_ship_date>
    <latest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></latest_ship_date>
    <packing_method>FLAT</packing_method>
    <round_lvl>C</round_lvl>
    <POVirtualDtl><location_type>W</location_type><location>507</location><qty_ordered>180</qty_ordered></POVirtualDtl>
  </PODtl>
  <PODtl>
    <item>218333416</item>
Above is part of an XML file stored in a table column. I want to extract all the strings between the tags <item> and </item>; there are multiple <item> values in a single file. I am using Oracle 9i. Can anyone please provide a proper query for that?

Figure out the XPath of the values in your XML, then use ExtractValue:
http://docs.oracle.com/cd/B10501_01/appdev.920/a96620/xdb04cre.htm#1024805
e.g.
select <your_rowid>, extractvalue( xmltype(<your_column>), <your_xpath>) from <your_table>
For several distinct values, you can call extractvalue more than once in the same select. Note, however, that extractvalue expects its XPath to match a single node, so it will not by itself return every <item> from a document in which the tag repeats; for that, combine extract with XMLSequence and TABLE(), as sketched below.
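A minimal sketch of that unnesting approach, assuming a table named po_docs with the CLOB column xml_doc (both names are placeholders for your own table and column):

select t.rowid,
       extractvalue(value(x), '/item') as item
from   po_docs t,
       table(xmlsequence(
         extract(xmltype(t.xml_doc), '/PODesc/PODtl/item'))) x;

Each <item> node becomes its own row, so the sample document above would come back as one row per item (120000935, 218333522, 218333416, ...).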

Related

Deleting columns from InfluxDB using the influx command line

Is there any way to delete columns of an Influx time series? We accidentally injected data using the wrong data type (int instead of float). Alternatively, is there a way to change the data type instead?
Unfortunately, there is no way to delete a "column" (i.e. a tag or a field) from an Influx measurement so far. Here's the feature request for that but there is no ETA yet.
Three workarounds:
1. Use SELECT INTO to copy the desirable data into a different measurement, excluding the undesirable "columns", e.g.:
SELECT desirableTag1, desirableTag2, desirableField1, desirableField2 INTO new_measurement FROM measurement
2. Use CAST operations to "change the data type" from float to int, e.g.:
SELECT desirableTag1, desirableTag2, desirableField1, desirableField2, undesirableTag3::integer, undesirableField3::integer INTO new_measurement FROM measurement
3. "Update" the data with insert statements, which overwrite points having the same timestamp, the same tags, and the same field keys. Keep everything else equal except the "columns" you would like to update. To make a value an integer, remember to put a trailing i on the number (e.g. 42i):
insert measurement,desirableTag1=v1 desirableField1=fv1,desirableField2=fv2,undesirableField1=someValueA-i 1505799797664800000
insert measurement,desirableTag1=v21 desirableField1=fv21,desirableField2=fv22,undesirableField1=someValueB-i 1505799797664800000
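One caveat on the SELECT INTO route, to the best of my knowledge: tags are written into the destination measurement as fields unless the query groups by them. A hedged sketch that preserves the tags and, once the copy is verified, drops the original (the measurement names are the same placeholders as above):

SELECT desirableField1, desirableField2 INTO new_measurement FROM measurement GROUP BY *
DROP MEASUREMENT measurement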

Neo4j imports zero records from csv

I am new to Neo4j and graph databases. While trying to import a few relationships from a CSV file, I can see that no records are created, even though the file contains enough data.
LOAD CSV with headers FROM 'file:/graphdata.csv' as row WITH row
WHERE row.pName is NOT NULL
MERGE(transId:TransactionId)
MERGE(refId:RefNo)
MERGE(kewd:Keyword)
MERGE(accNo:AccountNumber {bName:row.Bank_Name, pAmt:row.Amount, pName:row.Name})
Followed by:
LOAD CSV with headers FROM 'file/graphdata.csv' as row WITH row
WHERE row.pName is NOT NULL
MATCH(transId:TransactionId)
MATCH(refId:RefNo)
MATCH(kewd:Keyword)
MATCH(accNo:AccountNumber {bName:row.Bank_Name, pAmt:row.Amount, pName:row.Name})
MERGE(transId)-[:REFERENCE]->(refId)-[:USED_FOR]->(kewd)-[:AGAINST]->(accNo)
RETURN *
Edit (table replica):
TransactionId Bank_Name RefNo Keyword Amount AccountNumber AccountName
12345 ABC 78 X 1000 5421 WE
23456 DEF X 2000 5471
34567 ABC 32 Y 3000 4759 HE
Is it the case that the nodes and relationships are not being created at all? How do I get all these desired relationships?
Neither file:/graphdata.csv nor file/graphdata.csv are legal URLs. You should use file:///graphdata.csv instead.
By default, LOAD CSV expects a "csv" file to consist of comma separated values. You are instead using a variable number of spaces as a separator (and sometimes as a trailer). You need to either:
use a single space as the separator (and specify an appropriate FIELDTERMINATOR option). But this is not a good idea for your data, since some bank names will likely also contain spaces.
use a comma separator (or some other character that will not occur in your data).
For example, this file format would work better:
TransactionId,Bank_Name,RefNo,Keyword,Amount,AccountNumber,AccountName
12345,ABC,78,X,1000,5421,WE
23456,DEF,,X,2000,5471
34567,ABC,32,Y,3000,4759,HE
Your Cypher query is attempting to use row properties that do not exist (since the file has no corresponding column headers). For example, your file has no pName or Name headers.
Your usage of the MERGE clause is probably not doing what you want: MERGE (transId:TransactionId), with no properties, matches any existing TransactionId node (and creates at most one bare node for the whole load) rather than one node per row. You should carefully read the documentation on MERGE, and this answer may also be helpful. A hedged sketch of the more usual pattern follows.
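Here is one way the full import could look against the corrected CSV above. The property keys id, no and word are illustrative choices rather than prescribed names, and coalesce guards the values that are empty on some rows (LOAD CSV turns empty fields into null, and MERGE cannot use a null property value):

LOAD CSV WITH HEADERS FROM 'file:///graphdata.csv' AS row
WITH row WHERE row.TransactionId IS NOT NULL
MERGE (t:TransactionId {id: row.TransactionId})
MERGE (r:RefNo {no: coalesce(row.RefNo, 'NONE')})
MERGE (k:Keyword {word: row.Keyword})
MERGE (a:AccountNumber {no: row.AccountNumber, bName: row.Bank_Name,
                        pAmt: row.Amount, pName: coalesce(row.AccountName, '')})
MERGE (t)-[:REFERENCE]->(r)
MERGE (r)-[:USED_FOR]->(k)
MERGE (k)-[:AGAINST]->(a)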

Setting dynamic properties for Node in neo4j

Assume a Node "Properties". I am using "LOAD CSV with headers..."
Following is the sample file format:
fields
a=100,b=110,c=120,d=500
How do I convert the fields column so that the node "Properties" has a, b, c, d as property keys with 100, 110, 120, 500 as the respective values?
LOAD CSV WITH HEADERS FROM 'file:/sample.tsv' AS row FIELDTERMINATOR '\t'
CREATE (:Properties {props: row.fields})
The above does not create individual properties; it just sets props to the single string value "a=100,b=110,c=120,d=500".
Also, different rows could have different sets of keys; that is, the keys need to be dynamic. (There are other columns as well; I trimmed them for SO.)
fields
a=100,b=110,c=120,d=500
X=300,y=210,Z=420,P=600
...
I am looking for a way to avoid splitting these key-value pairs into columns before loading, because they are dynamic: today the keys are a, b, c, d; tomorrow they may change to aa, bb, cc, dd, etc.
I don't want to keep changing my loader script to recognize new column headers.
Any pointers to solve this? I am using the latest 3.0.1 neo4j version.
First things first: Your file format currently defines a single header/property: fields:
fields
a=100,b=110,c=120,d=500
Since you defined a tab as the field terminator, that entire string (a=100,b=110,c=120,d=500) ends up in your node's props property.
To have properties loaded dynamically, first set up a proper header:
"a","b","x","y"
1,2,,
,,3,4
Then you can query with something like this:
LOAD CSV WITH HEADERS FROM 'file:///Users/David/overflow.csv' AS row
CREATE (:StackOverflow { a:row.a, b:row.b,x:row.x,y:row.y})
Then when you run something like:
MATCH (so:StackOverflow) RETURN so
you'll get the variable properties you wanted.
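If the keys genuinely cannot be known ahead of time, plain Cypher cannot set property keys from data, but the APOC procedure library can. A hedged sketch, assuming APOC is installed and that apoc.create.setProperties is available in your APOC build (the file name and label are placeholders, and the values arrive as strings such as "100", so cast them afterwards if you need numbers):

LOAD CSV WITH HEADERS FROM 'file:///sample.tsv' AS row FIELDTERMINATOR '\t'
CREATE (p:Properties)
WITH p, [pair IN split(row.fields, ',') | split(pair, '=')] AS pairs
CALL apoc.create.setProperties(p, [pair IN pairs | pair[0]], [pair IN pairs | pair[1]])
YIELD node
RETURN node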

Orbeon - how to set values of numerous fields with one query

I have created a database service that retrieves numerous columns. I have successfully created actions that call other queries, pass in a parameter, and display the output in a drop-down box or as checkboxes. However, with this new query I would like to set the values of 5 different fields on the form based on a single query call. What XPath expression syntax is needed in the 'Set Response Control Values' section to make this work? Or is this not the right place or way to do this?
Sounds like you're using Form Builder - in the "Set Response Control Values" section in the Actions Editor, you should set up one item for each form field to be updated, with the Destination Control drop-down specifying the form field. So in your case you'll have 5 rows pointing to your 5 fields.
Let's assume that your query returns a single row, with the values that will go into your form fields in separate columns. Your query results come from the database service looking like this:
<response>
  <row>
    <query-column-1>value</query-column-1>
    <query-column-2>value</query-column-2>
    ...
  </row>
</response>
So if the column name for your first item is "id", the "Set Response Control Values" entry would look like this:
/response/row/id
There is one gotcha: if a column name in the database includes an underscore, it will be converted to a hyphen in the results from the database service. So if your column name was "asset_id", you'd put /response/row/asset-id.
If your query returns multiple rows, you can refer to a specific row using a predicate, like so: /response/row[1]/id
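For example, suppose the hypothetical query returned five columns named cust_id, first_name, last_name, city and status. The five entries in 'Set Response Control Values' would each pair a Destination Control with one of these expressions:

/response/row/cust-id
/response/row/first-name
/response/row/last-name
/response/row/city
/response/row/status

Note the underscore-to-hyphen conversion in the first three names.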

Given the hexadecimal code of a character, how to convert it to the corresponding character in CL program?

I need to find a particular entry in a journal using a CL program. The way I locate it is to use DSPJRNE to put the journal entries into an output file, then OPNQRYF to filter the desired one. The file is uniquely keyed, so my plan is to compare the journal entry data with the key. The problem is that one of the keys is a packed decimal, so in the journal entry it is treated as the hexadecimal codes of characters and displayed as strange symbols. So in order to compare the strings, I need to convert the packed decimal key into the corresponding characters. How can I achieve this in CL? If it is not possible in CL, what about RPG?
To answer your immediate question, the CVTCH MI instruction will convert hex to char, but I would not go that route in either CL or RPG. Rather, I would take James' advice, with a few additional steps:
1. DSPJRNE OUTFILE(QTEMP/DSPJRNE)
2. QRY input file DSPJRNE, output file QRYJRNE, select only JOESD
3. CRTDUPOBJ PRODUCTION_FILE QTEMP/JRNF DATA(*NO)
4. CPYF QRYJRNE JRNF FMTOPT(*NOCHK)
This will give you an externally described file with the exact same layout as your production file. You can query that, etc.
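A hedged sketch of that final selection step, assuming the production layout contains a packed key field named CUSTNO (a placeholder) and the entry you want has key 1234:

OPNQRYF FILE((QTEMP/JRNF)) QRYSLT('CUSTNO *EQ 1234')

Because JRNF carries the externally described production layout, the packed field compares as an ordinary numeric value and no hex conversion is needed.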
If you are pulling journal entries for a specific file, you can dump them into an externally described file with a clever use of SQL:
CREATE TABLE QTEMP/QADSPJRN LIKE QSYS/QADSPJRN
ALTER TABLE QTEMP/QADSPJRN DROP COLUMN JOESD
CREATE TABLE QTEMP/DSPJRNE AS (SELECT * FROM QTEMP/QADSPJRN, FILE-LIB/FILE)
WITH NO DATA
DSPJRNE ... OUTPUT(*OUTFILE) OUTFILFMT(*TYPE1) OUTFILE(QTEMP/DSPJRNE)
ENDDTALEN(*CALC)
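Once DSPJRNE fills QTEMP/DSPJRNE, the production file's columns (including the packed key) sit alongside the standard journal columns, so the entry can be located with plain SQL. A hedged example, again using the placeholder key column CUSTNO (JOSEQN, JOCODE and JOENTT are standard QADSPJRN columns):

SELECT JOSEQN, JOCODE, JOENTT, CUSTNO
FROM QTEMP/DSPJRNE
WHERE CUSTNO = 1234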
