Is it possible to get the lot number from a sales order using the QuickBooks QBFC SDK?
I have the code running and working, but the lot number doesn't come through the QBFC SDK.
The lot number is there on the sales order in QuickBooks.
How do I get the lot number from a sales order via the SDK?
Thanks
Although the SDK has fields for the Serial / Lot numbers, they currently don't return any information on a SalesOrder. This functionality hasn't been implemented in the latest version of the SDK (13.0). The only way that I know of to get this information would be to do a CustomDetailReportQueryRq and filter the report with the following:
ReportTxnTypeFilter = SalesOrder
ReportDetailLevelFilter = AllExceptSummary
ReportPostingStatusFilter = NonPosting
SummarizeRowsBy = TotalOnly
IncludeColumn = SerialOrLotNumber
You will probably want to add in the columns for RefNumber, TxnID, and ItemName to help parse out the values that you need.
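If it helps, here is a rough Python sketch of that report request sent as raw qbXML through the QBXMLRP2 request processor that ships with the SDK (QBFC can build the equivalent request with its CustomDetailReportQuery objects). The qbXML element order, the date macro, and the numeric connection constants are from memory of the OSR, so treat them as assumptions to verify:
# Rough sketch: the report described above, sent as raw qbXML through the
# QBXMLRP2 request processor via pywin32. Element order, the date macro and the
# numeric connection/open-mode constants should be checked against the OSR.
import win32com.client

REQUEST = """<?xml version="1.0"?>
<?qbxml version="13.0"?>
<QBXML>
  <QBXMLMsgsRq onError="stopOnError">
    <CustomDetailReportQueryRq>
      <ReportDateMacro>ThisMonthToDate</ReportDateMacro>
      <ReportTxnTypeFilter>
        <TxnTypeFilter>SalesOrder</TxnTypeFilter>
      </ReportTxnTypeFilter>
      <ReportDetailLevelFilter>AllExceptSummary</ReportDetailLevelFilter>
      <ReportPostingStatusFilter>NonPosting</ReportPostingStatusFilter>
      <SummarizeRowsBy>TotalOnly</SummarizeRowsBy>
      <IncludeColumn>TxnID</IncludeColumn>
      <IncludeColumn>RefNumber</IncludeColumn>
      <IncludeColumn>ItemName</IncludeColumn>
      <IncludeColumn>SerialOrLotNumber</IncludeColumn>
    </CustomDetailReportQueryRq>
  </QBXMLMsgsRq>
</QBXML>"""

rp = win32com.client.Dispatch("QBXMLRP2.RequestProcessor")
rp.OpenConnection2("", "Lot number report", 1)   # 1 = local QuickBooks Desktop
ticket = rp.BeginSession("", 2)                  # 2 = open mode "do not care"
try:
    # The response contains ReportRet rows; pull the SerialOrLotNumber column out of each DataRow.
    response_xml = rp.ProcessRequest(ticket, REQUEST)
finally:
    rp.EndSession(ticket)
    rp.CloseConnection()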
I'm looking at ib_insync, a framework for the new Interactive Brokers Python API. One thing I can't figure out is how to get the price of the trade after placing a market order. Has anyone figured it out?
I've seen this question, but it's for the old API and IBPY.
If anyone wants to know, here's how to get the price at which you bought/sold:
# self.ib is a connected ib_insync.IB instance; contract, action ('BUY'/'SELL')
# and quantity are defined elsewhere.
from ib_insync.order import MarketOrder

order = MarketOrder(action, quantity)
trade = self.ib.placeOrder(contract, order)

# Wait for the trade to complete, e.g. by blocking on updates until it is done
while not trade.isDone():
    self.ib.waitOnUpdate()

filled_quantity = trade.filled()                 # number of shares/contracts filled
filled_price = trade.orderStatus.avgFillPrice    # average price of the fills
I'm going to collect tweets about an event that happened 3 years ago, but I read somewhere that Twitter only lets its API users collect tweets no older than a week. So I'd like to ask: if this is true, how can I collect tweets from 3 or more years ago?
1. Get tweets using:
time_line_statuses = api.GetUserTimeline(screen_name=screen_name, include_rts=True)
2. Loop through time_line_statuses using a for loop.
3. Check the "created_at" property of each item to see if it is younger than your cut-off date. Each item also has an "id" property whose value grows with time, so a lower ID means an older tweet.
4. Store the "id" of the oldest status from time_line_statuses as oldest_id.
5. Call:
time_line_statuses = api.GetUserTimeline(screen_name=screen_name, include_rts=True, max_id=oldest_id)
6. Store oldest_id as previous_oldest_id.
7. Repeat steps 1-6, checking that oldest_id is not equal to previous_oldest_id before continuing the loop.
You can only make 100 GET requests to Twitter per hour. You need to count your GET calls and have the program sleep for an hour when you've hit that limit. I don't know if their API has a limitation on how far back it can go. You may be able to save API calls if you can find the ID of the tweet at the start of your cut-off date and seed this process from there.
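Putting those steps together with python-twitter, here is a rough sketch; the credentials, screen name, cut-off date and hourly budget are placeholders you would replace:
import time
from datetime import datetime, timedelta

import twitter  # python-twitter

api = twitter.Api(consumer_key='...', consumer_secret='...',
                  access_token_key='...', access_token_secret='...')

screen_name = 'some_user'                              # placeholder account to walk back through
cutoff = datetime.utcnow() - timedelta(days=3 * 365)   # roughly three years back
MAX_GETS_PER_HOUR = 100                                # the hourly GET budget mentioned above

old_statuses = []
gets = 0
oldest_id = None
previous_oldest_id = None

while True:
    if oldest_id is None:
        time_line_statuses = api.GetUserTimeline(screen_name=screen_name, include_rts=True)
    else:
        time_line_statuses = api.GetUserTimeline(screen_name=screen_name,
                                                 include_rts=True, max_id=oldest_id)
    gets += 1
    if gets % MAX_GETS_PER_HOUR == 0:
        time.sleep(3600)                               # sleep for an hour once the budget is spent

    if not time_line_statuses:
        break

    for status in time_line_statuses:
        created = datetime.strptime(status.created_at, '%a %b %d %H:%M:%S +0000 %Y')
        if created <= cutoff:
            old_statuses.append(status)

    previous_oldest_id = oldest_id
    oldest_id = min(s.id for s in time_line_statuses)  # lower id = older tweet
    if oldest_id == previous_oldest_id:                # no older tweets left, stop
        break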
Your only option is to pay for a service such as Gnip. Gnip provides an API that will let you search for tweets older than one week.
I have an application that gets its data from our Google AdWords account.
In particular, it gets the list of ad campaigns along with their results (number of clicks, cost, etc.). The problem is that, although the Google API definition says that:
"The Campaigns Performance report includes all statistics aggregated
by default at the campaign level, one row per campaign".
I'm only getting the campaigns that have a non-zero value in their columns (for example, that have been displayed at least one time).
Do you have any idea how to include all the campaigns in the report, even when their values are zero?
Thanks a lot.
EDIT: It happened when the campaigns had exactly 0 impressions.
I have finally found a way to get it:
// Report Creation
$reportDefinition = new ReportDefinition();
$reportDefinition->selector = $selector;
$reportDefinition->reportName = 'Adgroup performance report';
$reportDefinition->reportType = 'CAMPAIGN_PERFORMANCE_REPORT';
$reportDefinition->downloadFormat = 'CSV';
// Include campaigns that haven't received any impressions over the date range.
$reportDefinition->includeZeroImpressions = TRUE;
The last line specifies that campaigns with zero impressions must also be included. By default its value is false; set it to true and you'll get all the info.
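For what it's worth, if you are on the Python side, the googleads client exposes (as far as I know) the same switch as a report download option rather than a report-definition field. A minimal sketch, assuming a googleads.yaml with your AdWords credentials; the report name and field list are just examples:
import sys

from googleads import adwords

# Assumes googleads.yaml holds the AdWords credentials.
client = adwords.AdWordsClient.LoadFromStorage()
report_downloader = client.GetReportDownloader(version='v201809')

report = {
    'reportName': 'Campaign performance report',
    'dateRangeType': 'LAST_30_DAYS',
    'reportType': 'CAMPAIGN_PERFORMANCE_REPORT',
    'downloadFormat': 'CSV',
    'selector': {
        'fields': ['CampaignId', 'CampaignName', 'Impressions', 'Clicks', 'Cost'],
    },
}

# include_zero_impressions plays the same role as includeZeroImpressions above.
report_downloader.DownloadReport(report, sys.stdout,
                                 skip_report_header=True,
                                 skip_report_summary=True,
                                 include_zero_impressions=True)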
I'm working on a research project which analyses closure patterns in social networks.
Part of my requirement is to collect followers and following IDs of thousands of users under scrutiny.
I have a problem with exceeding the rate limit of 350 requests/hour.
With just 4-5 requests my limit is exceeded - i.e., when the number of followers I have collected exceeds the 350 mark.
I.e., if I have 7 members, each having 50 followers, then by the time I collect the follower details of just those 7 members, my rate limit is exceeded (7*50 = 350).
I found a related question in stackoverflow here - What is the most effective way to get a list of followers using Twitter4j?
The resolution mentioned there was to use the lookupUsers(long[] ids) method, which returns a list of User objects... But I find no way in the API to find the screen names of the friends/followers of a particular "User" object. Am I missing something here? Is there a way to collect the friends/followers of thousands of users effectively?
(Right now, I'm using standard code: OAuth authentication (to get 350 requests/hour) followed by a call to twitter.getFollowersIDs.)
It's fairly straightforward to do this with a limited number of API calls.
It can be done with two API calls.
Let's say you want to get all my followers
https://api.twitter.com/1/followers/ids.json?screen_name=edent
That will return up to 5,000 user IDs.
You do not need 5,000 calls to look them up!
You simply post those IDs to users/lookup
You will then get back the full profile of all the users following me - including screen name.
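The question is about Twitter4j, but the same two-endpoint flow looks roughly like this in python-twitter (just as an illustration); users/lookup takes the IDs in batches of up to 100 per request:
import twitter  # python-twitter, used here only to illustrate the REST flow

api = twitter.Api(consumer_key='...', consumer_secret='...',
                  access_token_key='...', access_token_secret='...')

# followers/ids returns up to 5,000 follower IDs per page; GetFollowerIDs pages through them.
follower_ids = api.GetFollowerIDs(screen_name='edent')

# Hydrate the IDs with users/lookup, 100 at a time, to get screen names and full profiles.
screen_names = []
for i in range(0, len(follower_ids), 100):
    batch = follower_ids[i:i + 100]
    for user in api.UsersLookup(user_id=batch):
        screen_names.append(user.screen_name)

print(len(screen_names), "follower profiles fetched")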
I would like to get the adjusted price (adjusting for splits and dividends) for a group of stock symbols using Yahoo! Finance. It looks like the historical prices call is limited to one symbol at a time. Could you please let me know if there is a way to get multiple symbols in one call?
I would like to get this data so I can do some back-testing on it. Since I may need quite a few symbols (say 500-1000), it will be easier if I can make just a few batch calls to Yahoo!'s servers instead of making one call per symbol every day.
Another way of getting the adjusted price is to use their daily stock price API and adjust it manually using dividend and split information (they allow multiple symbols for their daily stock quotes). Unfortunately I cannot find any way to get split information from the HTTP call (guessing based on 50% or 200% is one option, but if you deal with penny stocks this can be dangerous, and it cannot handle uneven splits). Also, the dividend information returned by it is not easy to decode: they seem to return the total over 4 quarters, and the dividend date doesn't really correspond with the actual dividend date based on the historical price. The various options for the call can be found here: http://www.gummy-stuff.org/Yahoo-data.htm
Any suggestions on getting adjusted prices for multiple symbols? Or am I unnecessarily worrying about making hundreds of calls to Yahoo! every day? Ideally I would like to download all the required data within a couple of hours each day - that would be 10-20 calls per minute. Is that too much? I couldn't find any documentation on the permissible number of requests per second.
I am open to other places where I can get similar data. However, since I am just trying to learn the basics of quant trading and not trade, I would prefer free downloads.
Thanks
-e
This is an old question, but I did find a source where split data is available. Not sure how comprehensive these announcements are though:
http://biz.yahoo.com/c/09/s1.html
In the url, the "09" part is the year (2009), and the "s1" part is the month (s1 = Jan, s2 = Feb., s3 = Mar., etc.)
It isn't a nice clean CSV, but the format of the page is consistent and should be parseable. Just make a query each day for the current month, parse the page, and process any splits that you didn't see the day before.
ETA: And another source (probably less reliable than Yahoo, but can be queried by ticker):
http://getsplithistory.com/
I am not sure which language you are using, but I have a sample in C#. I think it will at least give you the idea, or maybe help someone else.
// BASE_URL is a YQL query that selects from yahoo.finance.quotes for a list of symbols.
private string BASE_URL = "http://query.yahooapis.com/v1/public/yql?q=" + "select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20in%20({0})" + "&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys";

// 'quotes' is assumed to be populated with the symbols you want (see the linked article),
// and Parse() fills each Quote from the returned XML.
Collection<Quote> quotes;
string symbolList = String.Join("%2C", quotes.Select(w => "%22" + w.Symbol + "%22").ToArray());
string url = string.Format(BASE_URL, symbolList);
XDocument doc = XDocument.Load(url);
Parse(quotes, doc);
What we are doing here is quoting each symbol and joining them with URL-encoded commas, then passing that symbol list to Yahoo. I have successfully fetched prices for 700 symbols in each call. Hitting Yahoo's servers for each ticker is a pain; I fetch stock prices for all of 6500+ tickers every day. It used to take 3 hours, now it is less than 2 minutes... sweet.
Source link for that code is here - http://www.jarloo.com/get-yahoo-finance-api-data-via-yql/
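If you'd rather do the same thing from Python, the same YQL query can be fetched with just the standard library. A small sketch (symbols shortened for illustration; the field names are whatever yahoo.finance.quotes returns):
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

symbols = ["AAPL", "MSFT", "GOOG"]   # illustrative; the same pattern scales to hundreds of tickers

# Build the same YQL query as the C# sample above and URL-encode it.
yql = 'select * from yahoo.finance.quotes where symbol in ({})'.format(
    ",".join('"{}"'.format(s) for s in symbols))
url = ("http://query.yahooapis.com/v1/public/yql?q=" + urllib.parse.quote(yql, safe="") +
       "&env=" + urllib.parse.quote("store://datatables.org/alltableswithkeys", safe=""))

# Each <quote symbol="..."> element in the response carries the fields for one ticker.
doc = ET.parse(urllib.request.urlopen(url))
for quote in doc.getroot().iter("quote"):
    print(quote.get("symbol"), quote.findtext("LastTradePriceOnly"))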
P.S. Please get an API key to work smoothly. The above URL is the public endpoint, where the tables time out most of the time. Once you get an API key, your URL will be (minus "public"):
http://query.yahooapis.com/v1/yql