We're currently using the Windows QBSDK to interact with QuickBooks, and we're evaluating IPP going forward. However, the QBSDK uses ListIDs / TransactionIDs to identify objects, and IPP uses a different scheme. Is there a way to determine the mapping between the two?
I asked the same question at the IDN and was told there is no way to perform this translation. However, with a little assistance from a partner and some careful watching of results between IPP and QuickBooks, I have a fairly decent answer.
A typical ID in QuickBooks looks like this:
80000001-1296949588
The first portion is the identifier and the second portion is some form of timestamp.
Treat the first portion as a 32-bit hexadecimal number and turn off the high bit. In this case, you will get 1.
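A minimal sketch of that transform in Python, based purely on the pattern above (this is observed behaviour, not a documented Intuit API):

```python
def qb_id_to_ipp_id(qb_id):
    """Translate a QuickBooks ID like '80000001-1296949588' into the
    integer IPP uses in <Id idDomain="QB">."""
    # The first portion is a 32-bit hex number; the second is a timestamp we discard.
    hex_part = qb_id.split("-")[0]
    # Turn off the high bit (0x80000000) to recover the plain identifier.
    return int(hex_part, 16) & 0x7FFFFFFF

print(qb_id_to_ipp_id("80000001-1296949588"))  # -> 1
print(qb_id_to_ipp_id("80000003-1299163737"))  # -> 3
```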
If you create the record first in QB then sync to IPP, you will find a record identified in this manner:
<Id idDomain="QB">1</Id>
If you create the record first in IPP, you will find a record identified in this manner:
<Id idDomain="NG">1</Id>
Once you sync, you will find an external record reference for example:
<ExternalKey idDomain="QB">3</ExternalKey>
This would match to a transaction id in QuickBooks:
80000003-1299163737
Unfortunately, this transform is not reversible because of the timestamp appended to the end of the QuickBooks identifier. Translating from a QuickBooks ID to an IPP-based ID is therefore trivial, but translating the other way requires enumerating the records in QuickBooks and matching up the IDs.
Our group is working on a sentiment analysis research project, and we are trying to use the Twitter API to collect tweets. Our target dataset involves a lot of query terms and filters. Since each of us has a developer account, we were wondering whether we can pool our API access tokens to accelerate the data collection. For example, we would make an app that reads a configuration file containing a list of our access tokens and uses them to search for tweets. This app would run on our local computers. Since the app uses our individual access tokens, we believe we are not actually bypassing or changing any Twitter limit, as the usage record is kept per access token. Are there any legal or technical problems that may arise from this methodology? Thank you! =D
Here is pseudocode for what we are trying to do:
1. Define a list of search terms such as 'apple', 'banana', and 'oranges' (we have 100 of these search terms; we are okay with the 100 limit per tweet).
2. Define a list of frequent emotional adjectives such as 'happy', 'sad', 'crazy', etc. (we have 100 of these), selected using TF-IDF.
3. Take the product of the search terms and emotional adjectives. In total we have 10,000 query terms, and we have computed from the rate-limit rules that we would need at least 55 runs of 15-minute sessions at 180 requests per 15-minute window: 55 * 15 = 825 minutes, or ~14 hours, to collect this amount of tweets.
4. We were thinking of improving the data collection by pooling access tokens so that we can trim the collection time from 14 hours to ~4 hours, e.g. by dividing the query terms into subsets and letting a specific access token work on each subset (see the sketch after this list).
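Here is a rough sketch of steps 1 to 3 in Python. The term lists are placeholders, and the 180-requests-per-15-minute figure is the search rate limit the math above is based on:

```python
import itertools
import math

# Placeholder lists; the real project has ~100 of each.
search_terms = ["apple", "banana", "oranges"]
adjectives = ["happy", "sad", "crazy"]

# Step 3: product of search terms and emotional adjectives -> query strings.
queries = [f"{term} {adj}" for term, adj in itertools.product(search_terms, adjectives)]

# Rate-limit math for the full 100 x 100 = 10,000 queries at
# 180 search requests per 15-minute window per token.
total_queries = 100 * 100
windows = math.ceil(total_queries / 180)                 # 56 windows
print(windows, "windows =", windows * 15 / 60, "hours")  # ~14 hours
```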
We are pushing for this because we think it is efficient if it is possible and permitted, and it might help future research as well.
The question is: are we actually breaking any Twitter rules or policies by doing this? By sharing one access token for each of the three of us and creating apps named as clones of the research project, we believe we are also giving something up, namely the headroom for one more app that we fully control.
I can't find a specific Twitter rule about this so far. For context, we plan to publish a paper along with the app we will build and use, for documentation purposes. Disclaimer: only the app's source code will be published, not the dataset, because of Twitter's explicit rules about datasets.
This is absolutely not allowed under the Twitter Developer Policy and Agreement.
Twitter developer policy 5a:
Do not do any of the following:
Use a single application API key for multiple use cases or multiple application API keys for the same use case.
Feel free to check with Twitter directly via the developer forums. StackOverflow is not really the best place for this question since it is not specifically a coding question.
I've skimmed through the Keywords Performance Report of the API documentation, and couldn't understand whether it would be possible for me to use this report to determine daily keyword costs.
What I want, basically, is to be able to look up a keyword in an API response and get the cost associated with it. Is such a thing possible? Am I looking in the right place?
Apparently it's not possible: costs for all Display Network items are reported under a special ID (3000000), which is meant to capture all GDN displays.
I'm trying to find a way to retrieve the available (i.e. in stock) serial numbers from Items in QuickBooks Enterprise using qbxml. I've looked through OSR and the various Item Query requests as well as read the docs for the C-Data QuickBooks drivers. I'm not seeing a way to pull the available serial numbers out.
Anyone know if this is possible? Maybe there is a report that contains it?
The OSR shows that you should be able to fetch them on InventoryTransactionQuery, but I'm currently running it against the sample "advanced inventory company" and it's returning "null" for the RetList.
Update: I parsed out the XML by calling toXmlString on the QBXML response. I was doing it wrong: the data is in the ret-list, and you have to sub-parse it for the things you need.
Only problem is that this does not include the bins, so your visibility of inventory is only to the warehouse level.
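In case it helps the next person, here is a rough Python sketch of the request and the sub-parsing described above, using the QBXMLRP2 request processor. The element names (InventoryTransactionQueryRq, InventoryTransactionRet, SerialNumber) and the connection/file-mode constants are from my reading of the OSR and SDK docs, so verify them against your qbXML version:

```python
import xml.etree.ElementTree as ET
import win32com.client

request = """<?xml version="1.0"?>
<?qbxml version="13.0"?>
<QBXML>
  <QBXMLMsgsRq onError="stopOnError">
    <InventoryTransactionQueryRq/>
  </QBXMLMsgsRq>
</QBXML>"""

rp = win32com.client.Dispatch("QBXMLRP2.RequestProcessor")
rp.OpenConnection2("", "SerialNumberQuery", 1)   # 1 = localQBD (check the SDK enum)
ticket = rp.BeginSession("", 2)                  # 2 = do-not-care file mode (check the SDK enum)
try:
    response = rp.ProcessRequest(ticket, request)
finally:
    rp.EndSession(ticket)
    rp.CloseConnection()

# The useful data is inside the ret-list; sub-parse each InventoryTransactionRet.
root = ET.fromstring(response)
for ret in root.iter("InventoryTransactionRet"):
    serial = ret.findtext(".//SerialNumber")
    if serial:
        print(serial)
```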
I'm finding information extremely sparse so I hope this is useful for someone, even if it's a necro for the OP.
I am using the Alchemy API (Bluemix) with its Rails wrapper and am getting back nil for blocks of text. For example, consider the text below:
"The Vancouver International Flamenco Festival presents renowned flamenco dancer Mercedes “La Winy” Amaya in an electrifying tribute to flamenco’s vibrant past, featuring the authentic Spanish Gypsy style of flamenco, from sumptuous sway to fierce flourish."
When I call the keyword endpoint, I only get keyword results about half the time. When I search the same block of text multiple times, I get results half the time and nil half the time.
I'm only making calls about once per second so rate capping is not an issue.
What is causing this to happen? Where should I start looking?
Since AlchemyAPI was acquired and integrated into IBM Watson (as accessed via Bluemix), I don't think this question can be answered in its current form. The AlchemyAPI services as used with the old Ruby wrapper mentioned above have been deprecated.
Instead, I suggest getting new credentials on the services for whichever aspect of AlchemyAPI you were using. The new products are mapped as follows:
AlchemyLanguage -> Watson Natural Language Understanding
AlchemyDataNews -> Watson Discovery
AlchemyVision -> Watson Visual Recognition
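If it helps, here is a minimal keyword-extraction sketch against the Natural Language Understanding replacement, using the ibm-watson Python SDK. The version string and service URL are placeholders; copy the real values from the service credentials in your Bluemix/IBM Cloud console:

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, KeywordsOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials, version, and URL; take the real values from your instance.
nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),
)
nlu.set_service_url("https://api.us-south.natural-language-understanding.watson.cloud.ibm.com")

text = ("The Vancouver International Flamenco Festival presents renowned flamenco "
        "dancer Mercedes 'La Winy' Amaya in an electrifying tribute to flamenco's "
        "vibrant past.")

result = nlu.analyze(text=text, features=Features(keywords=KeywordsOptions(limit=8))).get_result()
for kw in result.get("keywords", []):
    print(kw["text"], kw["relevance"])
```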
I want to be able to run queries locally comparing the latitude and longitude of locations, so I can query the addresses I've captured based on distance.
I found a free database that has this information for ZIP codes, but I want it for more specific addresses. I've looked at Google's geocoding service, and it appears it's against the TOS to store these values in my database or to use them for anything other than Google Maps. (If somebody has looked deeper into this and I'm incorrect, let me know.)
Am I likely to find any (free or pay) service that will let me store these lat/lon values locally? The number of addresses I need is currently pretty small but if my site becomes popular it could expand quite a bit over time to a large number. I just need to get the coordinates of each address entered once though.
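To be clear about what I mean by "based on distance": once the coordinates are stored locally, the comparison itself is just a haversine calculation, so all I really need from a service is the lat/lon per address. Something like:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# e.g. keep stored addresses within 10 km of a reference point:
# nearby = [a for a in addresses if haversine_km(a.lat, a.lon, ref_lat, ref_lon) <= 10]
```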
This question hasn't received enough attention...
You're correct -- it can't be done with Google's service and still conform to the TOS. Cheers to you for honestly seeking to comply with the TOS.
I work at a company called SmartyStreets, where we process, verify, and geocode addresses. Google's terms don't allow you to store the data returned from the API, and there are pretty strict usage limits before they throttle or cut off your access.
Screen scraping presents many challenges and problems which are both technical and ethical, and I don't suppose I'll get into them here. The Microsoft library linked to by Giorgio is for .NET only.
If you're still serious about doing this, we have a service called LiveAddress which is accessible from any platform or language. It's a RESTful API that can be called using GET or POST, and the output is JSON, which is easy to parse in pretty much every common language/platform.
Our terms allow you to store the data you collect as long as you don't re-manufacture our product or build your own database in an attempt to duplicate ours (or something of the like). For what you've described, though, it shouldn't be a problem.
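A minimal sketch of calling the street-address endpoint from Python. The exact host, parameter names, and response fields may differ by version, so treat this as illustrative and check the docs and the samples linked below before relying on it:

```python
import requests

# Illustrative endpoint and auth parameters; confirm against the LiveAddress docs.
resp = requests.get(
    "https://us-street.api.smartystreets.com/street-address",
    params={
        "auth-id": "YOUR_AUTH_ID",
        "auth-token": "YOUR_AUTH_TOKEN",
        "street": "1600 Amphitheatre Pkwy",
        "city": "Mountain View",
        "state": "CA",
    },
    timeout=10,
)
for candidate in resp.json():
    meta = candidate.get("metadata", {})
    print(meta.get("latitude"), meta.get("longitude"))
```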
Let me know if you have further questions about address geocoding; I'll be happy to help.
By the way, there's some sample code at our GitHub repo: https://github.com/smartystreets/LiveAddressSamples
You could use a screen scraper against http://www.zip-info.com/cgi-local/zipsrch.exe?ll=ll&zip=13206&Go=Go if you just need to get them once.
Microsoft also provides this service. Check whether this can help you: http://msdn.microsoft.com/en-us/library/cc966913.aspx