I am exporting QuickBooks item data to a CSV file using QBFC. I have noticed that a few fields share the same value (103):
ParentRefType = 103
SalesTaxCodeRefType = 103
ExpenseAccountType = 103
PrefVendorType = 103
PurchaseTaxCodeType = 103
Find the file here
Can anyone tell me why? I do not see these values directly in the QuickBooks application, so I assume they are coming from the background.
The short answer is that 103 refers to the FullName reference type. And yes, these values are coming from the "background" of QuickBooks and QBFC, so you will likely not see them anywhere in the QuickBooks UI.
All of the fields you listed above are reference types of a QuickBooks object (i.e. Parent, SalesTaxCode, ExpenseAccount, etc.). You can reference an object in one of two ways: through a ListID or through a FullName. The type of the reference indicates whether the object is being referenced by ListID or by FullName.
The integer 103 appears to be the internal identifier for a FullName reference type. Notice in your export file (Item.csv) that all of the reference objects use the FullName type to reference objects (see the columns ParentRefFullName, SalesTaxCodeRefFullName, ExpenseAccountRefFullName, etc). Notice also that the columns immediately after these are the Ref Type columns (i.e. ParentRefType, SalesTaxCodeRefType, etc). These Ref Type columns are set to 103 whenever the cell to the left (the FullName cell) contains a value. When there is no FullName reference, the Type column contains a zero (which I'm assuming means Ref Type Not Known or something similar).
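If you want to verify that pairing yourself, below is a minimal sketch in Python that scans the export for any row where a FullName column and its neighbouring ref type column disagree. The file name (Item.csv) and the column names are taken from the description above, so adjust them to match your actual export; 103 and 0 are the observed values, not documented constants.

import csv

# (FullName column, matching ref type column) pairs, per the question.
PAIRS = [
    ("ParentRefFullName", "ParentRefType"),
    ("SalesTaxCodeRefFullName", "SalesTaxCodeRefType"),
    ("ExpenseAccountRefFullName", "ExpenseAccountType"),
]

with open("Item.csv", newline="") as f:
    for row_no, row in enumerate(csv.DictReader(f), start=2):
        for fullname_col, type_col in PAIRS:
            has_fullname = bool(row.get(fullname_col, "").strip())
            ref_type = row.get(type_col, "").strip()
            # Expect 103 when a FullName is present, 0 otherwise.
            expected = "103" if has_fullname else "0"
            if ref_type != expected:
                print(f"Row {row_no}: {type_col}={ref_type!r} but "
                      f"{fullname_col}={row.get(fullname_col)!r}")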
The QBFC Quick Reference states the following (under the IQBBaseRef definition):
IQBBaseRef is used for all qbXML "object references," which refer to objects. For example, an AccountRef refers to an account in the chart of accounts. If a request specifies both ListID and FullName, QuickBooks will use only the ListID.
That last sentence is important to note: a ListID reference takes priority over a FullName reference. It appears, though, that no ListID references are used in your export file.
Currently, this seems to be the relationship output by the apoc.export function:
CALL apoc.export.csv.data( [], R, null, {stream:true}) YIELD data AS rdata
RETURN mdata, ndata, rdata
The format is:
_start _end _type
18701 19076 hasMember
The '18701' and '19076' are Neo4j's internal ids. Can I use my own id from the node's identifier as the relationship connector? My own node identifier is always guaranteed to be unique. I want to periodically export KG nodes and relationships as the KG grows over time. In that case, can the ids always be unique among all nodes in the entire graph?
The possibility of getting duplicate nodes and relationships when using apoc.import.csv (even when ignoreDuplicateNodes is false, which is the default) is a known issue (see issues 1046 and 1048).
Unfortunately, issue 1048 was closed by its submitter even though it was not fixed.
You may want to open a new issue.
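If all you need is the export side, you can sidestep the internal ids entirely by exporting relationships through a plain Cypher query keyed on your own identifier property. Here is a minimal sketch with the official neo4j Python driver; the property name uid, the connection URI, and the credentials are assumptions to replace with your own:

import csv
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Export relationships keyed by our own unique 'uid' property
# instead of Neo4j's internal node ids.
query = """
MATCH (a)-[r]->(b)
RETURN a.uid AS start_uid, b.uid AS end_uid, type(r) AS rel_type
"""

with driver.session() as session, open("rels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["_start", "_end", "_type"])
    for record in session.run(query):
        writer.writerow([record["start_uid"], record["end_uid"], record["rel_type"]])

driver.close()

As long as you maintain a uniqueness constraint on that property, the exported ids stay stable across exports, unlike the internal ids, which Neo4j may reuse after deletions.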
I'm working with the py2neo interface to access the Neo4j database from Python. I want to use the auto-generated id column in an OGM model as a property, but my idea doesn't work. Please look at the example:
from py2neo import Graph, Node, Relationship
from py2neo.ogm import GraphObject, Property, RelatedTo, RelatedFrom
class Material(GraphObject):
    id = Property()
    name = Property()
    description = Property()
I insert the values into the system:

graph = Graph()
mat_f01 = Node('Material', name='F01', description='Fert Product 01')
mat_f02 = Node('Material', name='F02', description='Fert Product 02')
graph.create(mat_f01 | mat_f02)
In the Neo4j browser the records are displayed as follows, with the id column:
<id>:178 description: Fert Product 02 name: F02
If I look at the same records in Flask, the id column contains the value None. It should contain 177 and 178.
description        id    name
Fert Product 01    None  F01
Fert Product 03    None  F03
Many thanks in advance.
The node ID has no correlation with the properties on that node. It is more of an internal attribute, closer to the address of a variable than an auto-generated ID. It is exposed by Neo4j as a convenience but should not be used for anything that requires a stable node reference as it can't provide those guarantees of stability.
If you want a unique identifier property then I recommend a UUID4 hex string instead. You can generate one of these in Python via the uuid module, and it should be guaranteed unique for all practical purposes.
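For example, here is a minimal sketch of generating such an identifier and storing it on the Material model from the question (the __primarykey__ line is my addition, so that py2neo merges on the id property rather than on the default key):

import uuid

from py2neo import Graph
from py2neo.ogm import GraphObject, Property

class Material(GraphObject):
    __primarykey__ = 'id'

    id = Property()
    name = Property()
    description = Property()

graph = Graph()

mat = Material()
mat.id = uuid.uuid4().hex  # e.g. '0d1f95b3c2a94e6fb1d3c5e8a7b4f0aa'
mat.name = 'F01'
mat.description = 'Fert Product 01'
graph.push(mat)

Unlike the internal node id, this property is part of the node's data, so it survives export/import and is visible to any client.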
Using Delphi 10.2, SQLite and TeeChart. My SQLite database has a table with two fields, created with:
CREATE TABLE HistoryRuntime ('DayTime' DateTime, Device1 INTEGER DEFAULT (0));
I access the table using a TFDQuery called qryGrpahRuntime with the following SQL:
SELECT DayTime AS TheDate, Sum(Device1) As DeviceTotal
FROM HistoryRuntime
WHERE (DayTime >= "2017-06-01") and (DayTime <= "2017-06-26")
Group by Date(DayTime)
Using the Field Editor in the Delphi IDE, I can add two persistent fields, getting TheDate as a TDateTimeField and DeviceTotal as a TLargeIntField.
I run this query in a program to create a TeeChart, which I created at design time. As long as the query returns some records, all this works. However, if there are no records for the requested dates, I get an EDatabaseError exception with the message:
qryGrpahRuntime: Type mismatch for field 'DeviceTotal', expecting: LargeInt actual: Widestring
I have done plenty of searching on the web for how to prevent this error on an empty query, but have had no luck with anything I found. From what I can tell, SQLite defaults to a wide string field when no data is returned. I have tried using CAST in the query, but it did not seem to make any difference.
If I remove the persistent fields, the query will open without problems on an empty return set. However, in order to use the TeeChart editor in the IDE, it appears I need persistent fields.
Is there a way I can make this work with persistent fields, or am I going to have to throw out the persistent fields and then add the TeeChart Series at runtime?
This behavior is described in the Adjusting FireDAC Mapping chapter of FireDAC's SQLite manual:
For an expression in a SELECT list, SQLite avoids type name information. When the result set is not empty, FireDAC uses the value data types from the first record. When empty, FireDAC describes those columns as dtWideString. To explicitly specify the column data type, append ::<type name> to the column alias:
SELECT count(*) as "cnt::INT" FROM mytab
So modify your command, e.g., this way (I used BIGINT, but you can use any pseudo data type that maps to a 64-bit signed integer data type and is not auto-incrementing, which corresponds to your persistent TLargeIntField field):
SELECT
DayTime AS "TheDate",
Sum(Device1) AS "DeviceTotal::BIGINT"
FROM
HistoryRuntime
WHERE
DayTime BETWEEN {d 2017-06-01} AND {d 2017-06-26}
GROUP BY
Date(DayTime)
P.S. I made a small optimization by using the BETWEEN operator (which evaluates the column value only once), and I used an escape sequence for the date constants (which, in reality, you will probably replace with parameters; so it's just for curiosity).
This data type hinting is parsed by the FDSQLiteTypeName2ADDataType procedure, which takes a column name in the format <column name>::<type name> in its AColName parameter and parses it.
I am using InfluxDB and its line protocol to insert a large set of data into the database. The data I am getting is in the form of key-value pairs, where the key is a long string containing hierarchical data and the value is a simple integer.
Sample key-value data:

/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/allocations
value = 500

/path/units/unit/subunits/subunit[name\='NAME2']/memory/chip/application/filter/allocations
value = 100
(Note: the name is NAME2)

/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/free
value = 700
(Note: instead of allocations, it is free at the leaf)

/path/units/unit/subunits/subunit[name\='NAME2']/memory/graphics/application/filter/swap
value = 600
(Note: instead of chip, graphics is in the path)

/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/size
value = 400
(Note: a different path, but it is the same up to subunit)

/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free
value = 100
(Note: the same path, but the last element is different)
Below is the line protocol I am using to insert the data:
interface, Key= /path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free, valueData= 500
I am using one measurement, namely interface, with one tag and one field set. But this DB design is causing issues when querying the data.
How can I design the database so that I can run queries like "get all records for the subunit named NAME1" or "get all size data for every hard disk"?
Thanks in advance.
The schema I'd recommend is the following:
interface,filename=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free value=500
Here filename is a tag and value is the field.

Given that the cardinality of filename is in the thousands, this schema should work well.
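For illustration, here is a minimal sketch of writing one such point from Python with the influxdb client package; the host, port, and database name are assumptions to adapt:

from influxdb import InfluxDBClient

client = InfluxDBClient(host='localhost', port=8086, database='metrics')

# Tag values in line protocol must escape commas, spaces and equals signs;
# the key already carries an escaped '=' inside subunit[name\='NAME2'].
filename = "/path/units/unit/subunits/subunit[name\\='NAME2']/harddisk/data/free"
line = "interface,filename={} value=500".format(filename)

# write_points accepts raw line-protocol strings when protocol='line'.
client.write_points([line], protocol='line')

Since filename is a tag, your example queries become regular-expression matches on it, e.g. SELECT * FROM interface WHERE filename =~ /NAME1/ for the first one.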
In QlikView, I have an Excel sheet that I use to map USERNAME to a TEAM value. But every time I refresh the dashboard, new USERNAME values come up, and since they are not in the Excel sheet, these USERNAME values show up as their own values in the TEAM column. How can I make any USERNAME that is not in the Excel sheet show up as 'Unidentified' (or another value) under the TEAM column instead of showing up as its own separate value?
First of all, when posting a question here, if possible always include the source code so everybody has a clearer picture of your problem. Just saying.
On the topic ...
Use a mapping load in this case and supply the third parameter to ApplyMap. For example:
TeamMapping:
Mapping
Load
    UserName,
    Team
From
    [User_to_Team_Mapping.xlsx] (ooxml, embedded labels, table is [Sheet1])
;

Transactions:
Load
    Id,
    Amount,
    ApplyMap( 'TeamMapping', User, 'Unidentified') as Team
From
    Transactions.qvd (qvd)
;
The third parameter in ApplyMap is the default value returned when the lookup value is not found in the mapping table (TeamMapping).