Duplicate Customers Returned in QuickBooks CustomerQueryRq

We have an application that synchronizes QuickBooks customers with a remote SQL Server database, and I am running into a bizarre issue with the QBXML CustomerQueryRs. I am using an iterator with a varying slice count, and the response XML intermittently returns the exact same customer two or more times.
I have verified that the customer is exactly the same (i.e., the FullName and ListID are identical). I have even run the query with a MaxReturned value of 1500, which is greater than the number of customers in the company file. Some customers are returned as many as 5 times in the QBXML response.
Below is a copy of my last request using the large MaxReturned value. The response is huge, so I will not include it, but the same CustomerRet element is returned more than once for about 133 customers out of around 1400. Any suggestions?
** QBXML REQUEST **
<?xml version="1.0"?><?qbxml version="12.0"?><QBXML><QBXMLMsgsRq onError="stopOnError"><CustomerQueryRq metaData="MetaDataAndResponseData" iterator="Start"><MaxReturned>1500</MaxReturned><ActiveStatus>ActiveOnly</ActiveStatus></CustomerQueryRq></QBXMLMsgsRq></QBXML>

You are probably getting back different notes or one of the other fields is changing.
Go to C:\ProgramData\Intuit\QuickBooks and edit the qbsdk.ini file.
If qbsdk.ini is not there, just create it.
Add or modify the following lines:
[Parser]
SdkAccelerator=N

Related

Get last page of GitHub Search Api result in iOS

I'm making a new iOS (Swift) app to test some concepts, and I'm using the GitHub Search API to retrieve a list of filtered repositories.
The request is working fine so far, but I'm having trouble understanding the pagination process and how to know when I have reached the end of the results.
From what I've seen, the Search API returns a maximum of 1,000 results, broken into pages of at most 100 results. But the total-count field in the returned JSON shows far more available results (I imagine it shows the total number of repositories that satisfy the query, not the maximum the API will return).
The only way I found so far to obtain information about the pages (and the pagination process) in GitHub Documentation comes in the header of the response, like:
Status: 200 OK
Link: <https://api.github.com/resource?page=2>; rel="next",
<https://api.github.com/resource?page=5>; rel="last"
X-RateLimit-Limit: 20
X-RateLimit-Remaining: 19
Can anyone suggest the best approach to detect the end of the pages in this case?
Should I try to parse the information from the header, or infer it somehow from the returned JSON? I even got the "Link" header value but don't know how to parse it.
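One way to handle this, sketched below in Python for brevity (the same logic carries over to Swift), is to follow the rel="next" entry of the Link header until it disappears. The requests library used here parses the Link header into response.links for you, which is exactly the parsing step you were asking about; a hand-rolled parser would split the header value on commas and semicolons instead.

import requests

url = "https://api.github.com/search/repositories"
params = {"q": "language:swift", "per_page": 100}

while url:
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    for repo in resp.json().get("items", []):
        pass  # process each repository here
    # requests exposes the parsed Link header as resp.links;
    # if there is no rel="next" entry, this was the last page
    next_link = resp.links.get("next")
    url = next_link["url"] if next_link else None
    params = None  # the next URL already carries its query string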

Missing or Invalid Url Parameter with Twitter API

I am using the Twitter API's Search Tweets endpoint (https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets) to search for tweets on a particular topic. My application is written in Java and uses Twitter4j. I follow the guidelines (i.e., no more than 180 calls per 15-minute window and no more than 500 characters per query); however, I do have a test case with 50 word exclusions (so the total character count is 397). My test program also runs 100 such searches in rapid succession, though, as I said, I observe the rate limits strictly.
The odd behavior I'm seeing is that the test runs fine and gets results initially, but after an arbitrary period of time, I start getting the following error:
Message: Missing or invalid url parameter. 403:The request is understood, but it has been refused. An accompanying error message will explain why. This code is used when requests are being denied due to update limits (https://support.twitter.com/articles/15364-about-twitter-limits-update-api-dm-and-following).
message - Missing or invalid url parameter.
code - 195
The error message confuses me because I'm not trying to perform any of the update actions listed in the link provided; I'm just searching tweets. I'm also not sure what "missing or invalid url parameter" means. Is a query parameter missing from my request? The only required parameter is the query itself, which I am definitely passing. The URL is definitely correct unless Twitter4j is generating incorrect URLs. So what does this message mean?
When I stop and restart the search (i.e., the max_id and since_id values get reset to -1), the searches start working again...for a while. As far as I know, nothing else about the search changes when it is restarted; just those two ids.
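For reference, the pagination pattern described above looks roughly like the following sketch (shown in Python against the raw v1.1 endpoint rather than Twitter4j; the bearer token and query string are placeholders). The max_id carried between calls is the value that gets reset to -1 on restart.

import requests

BEARER_TOKEN = "..."  # hypothetical credential
url = "https://api.twitter.com/1.1/search/tweets.json"
headers = {"Authorization": "Bearer " + BEARER_TOKEN}
max_id = -1  # reset to -1 on every restart, as described above

for _ in range(100):
    params = {"q": "some topic -excluded1 -excluded2", "count": 100}
    if max_id > 0:
        params["max_id"] = max_id - 1  # page backwards through older tweets
    statuses = requests.get(url, headers=headers, params=params).json().get("statuses", [])
    if not statuses:
        break
    max_id = min(s["id"] for s in statuses)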

MS Graph API response not returning all the data items it is supposed to

My intention is to build a Machine Learning program that will give recommendations for archiving email items by reading all previous email history.
For that, I am trying to read all the email items from:
https://graph.microsoft.com/beta/me/messages
First I get the total number of email items in my account using /messages?$count=true, which returns 1881 as the result.
Then I try to get all 1881 items using:
https://graph.microsoft.com/beta/me/messages?$top=1881
But that returns only 976 email items. Where are the rest of the email items, and how can I find them?
Are you getting an @odata.nextLink property in your response?
If that's the case, you need to request the next page: the @odata.nextLink value is a ready-made URL that already embeds the appropriate $skiptoken parameter.
On the "paging" documentation page - https://developer.microsoft.com/en-us/graph/docs/concepts/paging - it is specified that different APIs have different maximum page sizes. It's possible that the endpoint for fetching emails does not support a page size of 1881; in that case, you will need to fetch a second page of the results.
Another suggestion is to replace the beta endpoint with the v1.0 API, because me/messages is available there as well - https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/api/user_list_messages
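A minimal sketch of that paging loop in Python (the access token is a placeholder, and the v1.0 endpoint is used per the suggestion above):

import requests

ACCESS_TOKEN = "..."  # hypothetical OAuth access token
headers = {"Authorization": "Bearer " + ACCESS_TOKEN}
url = "https://graph.microsoft.com/v1.0/me/messages?$top=100"
messages = []

while url:
    page = requests.get(url, headers=headers).json()
    messages.extend(page.get("value", []))
    # keep following @odata.nextLink until the server omits it
    url = page.get("@odata.nextLink")

print(len(messages))  # should eventually reach the full count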

How to Query Sales Order by Order Number?

I've gotten frustrated with the .NET DevKit and am now considering switching my app over to XML. But I'm still having trouble figuring out how to perform very basic queries.
How can I retrieve a QBD Sales Order by the order number? This is the "Sales Order Number" in the QBD UI, "RefNumber" in the SDK, and "DocNumber" in IPP.
Just in case somebody needs me to explain the use case for looking up a record by the human-readable unique ID: I'm integrating with a system where we don't have the luxury of storing a QB transaction ID after importing a sales order. So if that system wants to query QB later to check the status of a sales order, it needs to do so by that system's unique order #.
I already have links to all the documentation; thanks. I just need to know how to perform this query. Similarly, I need to do it for Invoices and POs.
I need the same thing for Items, the use case being that if we're importing items from another system, we need to query the QB item list by name to see if we already have that item in QB.
For ITEMS, you can use ItemConsolidated and a NameContains filter. For example, the XML would look something like:
POST https://services.intuit.com/sb/itemconsolidated/v2/<realmID>
...
<?xml version="1.0" encoding="UTF-8"?>
<ItemConsolidatedQuery xmlns="http://www.intuit.com/sb/cdm/v2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.intuit.com/sb/cdm/v2 RestDataFilter.xsd ">
<NameContains>Your item name goes here</NameContains>
</ItemConsolidatedQuery>
It won't be perfect, because Intuit only supports NameContains (the item name contains the string you specify) rather than a NameEquals filter, but you can loop through what you get back and filter it client-side from there.
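For instance, a minimal client-side exact-match filter might look like this (Python; it assumes you have already parsed the response into dicts exposing each item's Name):

def exact_name_matches(items, name):
    # drop the partial matches that NameContains inevitably returns,
    # keeping only items whose Name equals the target exactly
    return [item for item in items if item["Name"] == name]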
For SALES ORDERS, unfortunately, Intuit Data Services doesn't support querying by DocNumber at this time.
Instead, a work-around for your situation might be to query for all sales orders and then cache the Id and DocNumber values of each in your application. When you need to look something up, find the Id in the cache, and then query by Id value. It's not pretty... but it's really the only way to do what you're describing.
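A rough sketch of that cache in Python, assuming you have already fetched and parsed the full sales order list (the field names here are placeholders for whatever your client returns):

doc_number_to_id = {}

def build_cache(all_sales_orders):
    # all_sales_orders: parsed results of a query-all SalesOrder request
    for order in all_sales_orders:
        doc_number_to_id[order["DocNumber"]] = order["Id"]

def lookup_order_id(doc_number):
    # resolve the human-readable order number locally, then use the
    # returned Id in a query-by-Id request against the service
    return doc_number_to_id.get(doc_number)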

how to send HL7 message using mirth by reading data from my database

I'm having a problem sending (creating) an HL7 message using Mirth.
I want to read data from my patient table in SQL Server 2008 and, using that data, send a message to my destination connector, a File Writer, so that my messages get saved in the File Writer's output directory.
So far I'm able to generate the message, but the size of the output file in my destination directory keeps increasing as the channel's polling goes on.
Have I done something wrong in the transformer mapping?
UPDATE:
The size of the output file in my destination directory IS increasing (my .txt file starts at 1 KB and grows to 900 KB and beyond). This is happening because the same data is being generated again and again, multiple times. For example, my generated message has one MSH, PID, PV1, and ORM segment group for each row of data in my database, and the same MSH, PID, PV1, and ORM segments are being generated multiple times.
If you are seeing the same data generated in your output directory multiple time, the most likely cause is that you are not doing anything to indicate to your database that a given record has been processed.
For example, if you have 1 record in your database: ["John", "Smith", "12134" ...] on the first poll, you will generate 1 message. If on the second poll you also have a second record ["Fred", "Jones", "98371" ...], you will generate TWO messages - one for John Smith and one for Fred Jones. And so on.
The key is to use the "Run On-Update Statement" of your Database Reader (Source) connector to update the database table you are polling with an indication that a given record has been processed. This ensures that the same record is not processed multiple times.
This requires that your source table have some kind of column to indicate the record has been processed. Mirth will not keep track of this for you - you must do it manually.
You can't have a file reader as a destination, so I assume you mean a File Writer. You say that "the size of my file in my destination is increasing." Is that a typo? Do you mean NOT increasing?
If it is increasing, then your messages are getting generated and you can view them to start your next round of troubleshooting...
If not, then you should look at the message log in the dashboard to see what is happening on a message-by-message basis - that would be the next place to troubleshoot.
You have to have a way of distinguishing which records to pull from the database, by filtering on some sort of status flag or possibly a time-stamp. Then you have to use some sort of On-Update statement to mark those same records as processed.
For example:
SELECT id, patient, result FROM results WHERE status_flag = 'N'
or
SELECT * FROM results WHERE status_flag = 'N' AND created_date >= '9/25/2012'
Then, in either a transformer step or the On-Update section of your Source, you would do something like:
UPDATE results SET status_flag = 'Y' WHERE id = ${id}
If you do not do something like this and you have Mirth polling at a certain interval, it will just keep pulling the same records over and over.
Set the connector type of your source to Database Reader.
Set the connector type of your destination to File Writer, pointed at a directory you have permission to write to.
While creating the HL7 template, use the following code in the outbound message template:
MSH|^~\&|||
Thanks,
Krishna
