QBXML: Can there be multiple credit/debit line items in a journal entry query response?

I am retrieving journal entries and trying to determine whether there will only ever be one JournalCreditLine node and one JournalDebitLine node per JournalEntryRet, or whether there could be multiple line entries of each.
EDIT:
I have added multiple journal entries in one place with the same timestamp, but I always get multiple <JournalEntryRet> nodes and never multiple <JournalDebitLine> or <JournalCreditLine> nodes.
Query I am sending:
<?xml version="1.0" encoding="utf-8"?>
<?qbxml version="11.0"?>
<QBXML>
<QBXMLMsgsRq onError="stopOnError">
<JournalEntryQueryRq requestID="[request id from DB]">
<IncludeLineItems>1</IncludeLineItems>
</JournalEntryQueryRq>
</QBXMLMsgsRq>
</QBXML>
Example Response (with all customer data removed):
[response envelope redacted]
<QBXML>
<QBXMLMsgsRs>
<JournalEntryQueryRs [attributes redacted]>
<JournalEntryRet>
<TxnID>[data]</TxnID>
<TimeCreated>[data]</TimeCreated>
<TimeModified>[data]</TimeModified>
<EditSequence>[data]</EditSequence>
<TxnNumber>[data]</TxnNumber>
<TxnDate>[data]</TxnDate>
<RefNumber>[data]</RefNumber>
<IsAdjustment>[data]</IsAdjustment>
<JournalDebitLine>
<TxnLineID>[data]</TxnLineID>
<AccountRef>
<ListID>[data]</ListID>
<FullName>[data]</FullName>
</AccountRef>
<Amount>[data]</Amount>
<Memo>[data]</Memo>
</JournalDebitLine>
<JournalCreditLine>
<TxnLineID>[data]</TxnLineID>
<AccountRef>
<ListID>[data]</ListID>
<FullName>[data]</FullName>
</AccountRef>
<Amount>[data]</Amount>
<Memo>[data]</Memo>
</JournalCreditLine>
</JournalEntryRet>
<!-- more JournalEntryRet nodes -->
</JournalEntryQueryRs>
</QBXMLMsgsRs>
</QBXML>

There could be multiple journal credit lines, and multiple journal debit lines, in a single JournalEntry object. This mirrors the behavior of the QuickBooks GUI.
The business rule is that the sum of all credit lines must equal the sum of all debit lines.
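For illustration, here is a hedged sketch of what a multi-line JournalEntryRet could look like (all values are placeholders; a single 100.00 debit is balanced by 60.00 and 40.00 credits):
<JournalEntryRet>
<TxnID>[data]</TxnID>
<JournalDebitLine>
<TxnLineID>[data]</TxnLineID>
<AccountRef>
<ListID>[data]</ListID>
<FullName>[data]</FullName>
</AccountRef>
<Amount>100.00</Amount>
</JournalDebitLine>
<JournalCreditLine>
<TxnLineID>[data]</TxnLineID>
<AccountRef>
<ListID>[data]</ListID>
<FullName>[data]</FullName>
</AccountRef>
<Amount>60.00</Amount>
</JournalCreditLine>
<JournalCreditLine>
<TxnLineID>[data]</TxnLineID>
<AccountRef>
<ListID>[data]</ListID>
<FullName>[data]</FullName>
</AccountRef>
<Amount>40.00</Amount>
</JournalCreditLine>
</JournalEntryRet>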

Related

Need generic format for converting Collection to Tablerow

I am doing a transformation by reading a CSV file from a bucket and storing the result in BigQuery.
PCollection<Quote> quotes = ...; // read data and do transformation
// writing to an existing BQ table which has 2 columns, "source" and "quote"
quotes.apply(
MapElements.into(TypeDescriptor.of(TableRow.class))
.via(
(Quote elem) ->
new TableRow().set("source", elem.source).set("quote", elem.quote)))
.apply(
BigQueryIO.writeTableRows()
.to(tableSpecname)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
I need to replace the code that converts the PCollection to TableRow, because in some cases the table columns may vary, so these hardcoded column names will not work.
You can just add a ParDo step between your input PCollection and the BigQuery write step, with a DoFn class that formats the data into TableRow objects the way you want.
https://beam.apache.org/documentation/programming-guide/#pardo
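A minimal sketch of such a DoFn, assuming the upstream step already produces each element as a Map<String, String> of column names to values (that upstream step, and the MapToTableRowFn name, are assumptions, not Beam API):
import java.util.Map;
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.transforms.DoFn;

// Builds TableRow objects dynamically from a map of column names to
// values, so no column names are hardcoded.
class MapToTableRowFn extends DoFn<Map<String, String>, TableRow> {
  @ProcessElement
  public void processElement(@Element Map<String, String> fields,
                             OutputReceiver<TableRow> out) {
    TableRow row = new TableRow();
    fields.forEach(row::set); // copy every field, whatever the columns are
    out.output(row);
  }
}
You would then apply ParDo.of(new MapToTableRowFn()) in place of the MapElements step and feed the result to BigQueryIO.writeTableRows() as before.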

OR in whereField to fetch a document from Firestore

I'm implementing chat using Firestore. Here is the structure of Firestore:
|- Chats (collection)
|-- AutoID (document)
|--- user1ID (String)
|--- user2ID (String)
|--- thread (collection)
... and then thread has further fields.
In order to get chat between two users I'm fetching it like:
let db = Firestore.firestore().collection("Chats")
.whereField("user1ID", isEqualTo: Auth.auth().currentUser?.uid!)
.whereField("user2ID", isEqualTo: user2UID!)
It works fine if user 1 is the current user, but if I open the chat from the other account, where the current user is user 2, it doesn't fetch the document.
Upon searching I found that I can use arrayContains. So instead I made an array called users and added both IDs to it. So now the structure is:
|- Chats (collection)
|-- AutoID (document)
|--- users (Array)
|---- 0: user1ID (String)
|---- 1: user2ID (String)
|--- thread (collection)
... and then thread has further fields.
But when I do:
let db2 = Firestore.firestore().collection("Chats")
.whereField("users", arrayContains: Auth.auth().currentUser?.uid!)
.whereField("users", arrayContains: user2UID!)
It's going to fetch the first document it finds that has currentUser.uid (I haven't tested it; I'm saying this based on the documentation I've read).
So, how can I get this chat if an array contains both IDs?
Firstly, the document you outlined doesn't have any array type fields, so arrayContains isn't going to be helpful. arrayContains only matches items of a field that contains an array.
For your current document structure, a single query will not be able to get all the documents between both users, since Cloud Firestore doesn't offer any logical OR type queries. You are going to need two queries: one for getting all documents where user1 is the current user, and one for where user2 is the current user. Then you can merge the results of those two queries in the client code to build the entire list.
What I typically do is name the document for the two users that are having the chat. That way you don't need to do a query to find the document, but can just access it directly based on the UIDs.
To ensure the order in which the UIDs are specified doesn't matter, I then add them in lexicographical order. For an example of this, see my answer here: Best way to manage Chat channels in Firebase
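A minimal sketch of that naming scheme in Swift; the helper name and the currentUID/otherUID variables are illustrative, not Firebase API:
import FirebaseFirestore

// Build a deterministic chat document ID from the two participants' UIDs by
// sorting them lexicographically, so either user derives the same ID.
func chatDocumentID(between uid1: String, and uid2: String) -> String {
    return [uid1, uid2].sorted().joined(separator: "_")
}

// Usage: address the chat document directly instead of querying.
// let chatRef = Firestore.firestore()
//     .collection("Chats")
//     .document(chatDocumentID(between: currentUID, and: otherUID))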

Creating a graph using MBeans CompositeData in Zabbix

I'm exposing CompositeData[] via JMX from one of the services. The data has the form:
key1 : value 1 [String]
key2 : value 2 [Integer]
I am trying to consume this data in Zabbix. How can I generate a graph of key2 against key1, or a table of key1 & key2?
The documentation doesn't have enough information about this.
You would need those in separate items. Your item keys should look similar to this:
jmx["bean","attribute.key1"]

Extract multiple Substrings from XML stored in a table with datatype CLOB (Oracle 9i)

<!DOCTYPE PODesc SYSTEM "PODesc.dtd"><PODesc><doc_type>P</doc_type><order_no>62249675</order_no><order_type>N/B</order_type><order_type_desc>N/B</order_type_desc><supplier>10167</supplier><qc_ind>N</qc_ind><not_before_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></not_before_date><not_after_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></not_after_date><otb_eow_date><year>2016</year><month>09</month><day>25</day><hour>00</hour><minute>00</minute><second>00</second></otb_eow_date><earliest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></earliest_ship_date><latest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></latest_ship_date><terms>10003</terms><terms_code>45 days</terms_code><freight_terms>SHIP</freight_terms><cust_order>N</cust_order><status>A</status><exchange_rate>1</exchange_rate><bill_to_id>BT</bill_to_id><po_type>00</po_type><po_type_desc>No Store Cluster</po_type_desc><pre_mark_ind>N</pre_mark_ind><currency_code>CZK</currency_code><comment_desc>created by the Tesco Group Ordering System</comment_desc><PODtl><item>120000935</item><physical_location_type>W</physical_location_type><physical_location>207</physical_location><physical_qty_ordered>625</physical_qty_ordered><unit_cost>281.5</unit_cost><origin_country_id>CZ</origin_country_id><supp_pack_size>25</supp_pack_size><earliest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></earliest_ship_date><latest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></latest_ship_date><packing_method>FLAT</packing_method><round_lvl>C</round_lvl><POVirtualDtl><location_type>W</location_type><location>507</location><qty_ordered>625</qty_ordered></POVirtualDtl></PODtl><PODtl><item>218333522</item><physical_location_type>W</physical_location_type><physical_location>207</physical_location><physical_qty_ordered>180</physical_qty_ordered><unit_cost>230.94</unit_cost><origin_country_id>CZ</origin_country_id><supp_pack_size>18</supp_pack_size><earliest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></earliest_ship_date><latest_ship_date><year>2016</year><month>09</month><day>22</day><hour>00</hour><minute>00</minute><second>00</second></latest_ship_date><packing_method>FLAT</packing_method><round_lvl>C</round_lvl><POVirtualDtl><location_type>W</location_type><location>507</location><qty_ordered>180</qty_ordered></POVirtualDtl></PODtl><PODtl><item>218333416</item>
Above is part of an XML file stored in a table column. I want to extract all the strings between the tags <item> and </item>. There are multiple <item> values in a single file. I am using Oracle 9i. Can anyone please provide a proper query for that?
Figure out the XPath of the values in your XML, then use ExtractValue:
http://docs.oracle.com/cd/B10501_01/appdev.920/a96620/xdb04cre.htm#1024805
e.g.
select <your_rowid>, extractvalue( xmltype(<your_column>), <your_xpath>) from <your_table>
For multiple values from different tags, you can perform multiple extractvalue calls in the same select. For a tag that repeats, like <item>, extractvalue expects a single node, so you first need to break the document into one row per node, as sketched below.
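A hedged sketch for Oracle 9i (9.2), assuming the table is named po_docs and the CLOB column xml_clob; both names are placeholders:
-- Turn each <item> node into its own row, then extract its text value.
SELECT EXTRACTVALUE(VALUE(x), '/item') AS item
FROM   po_docs t,
       TABLE(XMLSEQUENCE(
         EXTRACT(XMLTYPE(t.xml_clob), '/PODesc/PODtl/item'))) x;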

How to extract data from mnesia backup file

Problem statement
I have a mnesia backup file and would like to extract values from it. There are 3 tables (to keep it simple): Employee, Skills, and Attendance, and the mnesia backup file contains all the data from these three tables.
The Employee table is:
Empid (Key)
Name
SkillId
AttendanceId
Skill table is
SkillId (Key)
Skill Name
Attendance table is
Code (Key)
AttendanceId
Percentage
What I have tried
I have used
ets:foldl(Fetch, OutputFile, Table)
Fetch: a separate function that traverses each fetched record to produce the desired output format.
OutputFile: the file it writes to.
Table: the name of the table.
Expecting
I am getting records with AttendanceId (as this is the key), whereas I want to get the Code only. It displays employee information and the attendance id.
Help me out.
Backup and restore is described in the mnesia user guide here.
To read an existing backup, without restoring it, use mnesia:traverse_backup/4.
1> mnesia:backup(backup_file).
ok
2> Fun = fun(BackupItems, Acc) -> {[], []} end.
#Fun<erl_eval.12.90072148>
3> mnesia:traverse_backup(backup_file, mnesia_backup, [], read_only, Fun, []).
{ok,[]}
Now add something to the Fun to get what you want.
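For example, a minimal sketch of a Fun that collects every Attendance record from the backup; the record shape {attendance, Code, AttendanceId, Percentage} mirrors the table described above and is an assumption:
%% Keep only tuples whose first element is the attendance table name;
%% schema entries are tagged with 'schema', so they are skipped.
Collect = fun(BackupItems, Acc) ->
              Found = [Item || Item <- BackupItems,
                               element(1, Item) =:= attendance],
              {BackupItems, Found ++ Acc}
          end.
{ok, Records} = mnesia:traverse_backup(backup_file, mnesia_backup,
                                       [], read_only, Collect, []).
Each tuple in Records is then a full attendance record, and the Code key can be taken with element(2, Record).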
