Google Finance API Not Consistent - stockquotes

I'm writing some software to do charting and analysis of intraday stock data, and so far the only free (or even affordable) feed I've found which gives 15-minute data for the past week or so is Google Finance. But something I've noticed, which I don't understand and has caused many headaches, is that the responses from the API for 15-minute intervals seem to be very inconsistent.
So far I haven't seen this problem with the 30-minute interval; in that case the response is always correct. But if I specify an interval of 15 minutes (900 seconds), I get anywhere from 70 to 200 or more quotes back. The data is correct, but the responses seem to pretty much ignore the number of days I'm specifying. Also, this happens for a single stock, so it isn't a case of some stocks having missing data. Here's an example of an API request I'm sending:
https://www.google.com/finance/getprices?i=900&p=8d&f=d,o,h,l,c&q=INTC
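For reference, this is roughly how I'm fetching and parsing that response (a minimal sketch using Python's requests library; the COLUMNS header and the "a"-prefixed timestamp rows reflect what this undocumented endpoint has returned for me, so treat the format as an assumption rather than a documented contract):

```python
import requests

def get_intraday(ticker, interval_sec=900, period="8d"):
    # Fetch intraday bars from the undocumented getprices endpoint.
    url = "https://www.google.com/finance/getprices"
    params = {"q": ticker, "i": interval_sec, "p": period, "f": "d,o,h,l,c"}
    lines = requests.get(url, params=params, timeout=10).text.splitlines()

    columns, rows, anchor = [], [], 0
    for line in lines:
        if line.startswith("COLUMNS="):
            columns = line.split("=", 1)[1].split(",")
        elif columns and "," in line and "=" not in line:
            fields = line.split(",")
            if fields[0].startswith("a"):    # absolute Unix timestamp row
                anchor = int(fields[0][1:])
                timestamp = anchor
            else:                            # offset row: N intervals after the anchor
                timestamp = anchor + int(fields[0]) * interval_sec
            rows.append([timestamp] + [float(v) for v in fields[1:]])
    return columns, rows

cols, quotes = get_intraday("INTC")
print(cols, len(quotes))  # len(quotes) is where I see the inconsistency
```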
If anyone could help I'd appreciate it; this API doesn't seem to be documented, so it's been difficult to find any help with it.

Yes, Google is not consistent in providing stock data. For the same reason I switched over to the Yahoo API; their data is pretty consistent compared to Google's.

Related

Understanding the YouTube Data API quota limitations

I was wondering if I could get some help understanding the YouTube Data API. Specifically, Google's website says: "Each project starts with 10,000 units per day, an amount sufficient for the overwhelming majority of our API users."
Is this 10,000 per person who signs in through your project, then? Because 10,000 does not in any way seem sufficient! A YouTube search alone takes 100 units if I understand the documentation, so 5 people each doing 20 searches and your project is done for the day. This seems to make building a robust YouTube browser all but impossible for a user base bigger than 2.
As you can tell, I'm a bit frustrated and am sure I must be missing something. Any clarifications you can offer would be appreciated.
If you check the quotas screen in the Google API console, only one of the quotas says "per user"; the others are project-based quotas.
You have a quota of 10,000 units per day for the whole project.
Quota cost is calculated per request, based on the resource and method being called.
A YouTube search alone takes 100 units per request, so, as you say, 5 people each doing 20 searches and your project is done for the day.
10,000 (limit) / 100 (quota cost) = 100 search requests per day.
It sounds to me like you have understood the quota system perfectly. If you intend to make more requests than that you will need to request additional quota.
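To put that arithmetic in code form (a trivial sketch; the 10,000-unit default and the 100-unit search cost are the figures quoted above):

```python
DAILY_QUOTA = 10_000   # default project-wide quota (units per day)
SEARCH_COST = 100      # cost of one search.list request (units)

searches_per_day = DAILY_QUOTA // SEARCH_COST
print(searches_per_day)  # 100 search requests per day for the entire project
```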

What's the most efficient way to handle quota for the YouTube Data API when developing a chat bot?

I'm currently developing a chat bot for one specific YouTube channel, which can already fetch messages from the currently active live chat. However, I noticed my quota usage shooting up, so I took the liberty of calculating my quota cost.
My API call currently looks like this: https://www.googleapis.com/youtube/v3/liveChat/messages?liveChatId=some_livechat_id&part=snippet,authorDetails&pageToken=pageTokenIfProvided, which uses up 5 units. I checked this by running one API call and comparing the quota usage before and after (so apologies if this is inaccurate). The response contains pollingIntervalMillis set to 5086 milliseconds. Currently, my bot adds that interval to the current datetime and schedules the next fetch at that time (using Celery), so it fetches messages every 4-6 seconds. To keep the math simple, I'll assume it always waits 6 seconds.
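A stripped-down sketch of that polling loop (plain requests and time.sleep instead of Celery; the API key, liveChatId, and authentication details are placeholders, so this is illustrative rather than my exact code):

```python
import time
import requests

API_KEY = "YOUR_API_KEY"           # placeholder credential
LIVE_CHAT_ID = "some_livechat_id"  # placeholder chat id from the URL above
URL = "https://www.googleapis.com/youtube/v3/liveChat/messages"

def poll_live_chat():
    page_token = None
    while True:
        params = {"liveChatId": LIVE_CHAT_ID, "part": "snippet,authorDetails", "key": API_KEY}
        if page_token:
            params["pageToken"] = page_token
        data = requests.get(URL, params=params, timeout=10).json()

        for item in data.get("items", []):
            print(item["snippet"].get("displayMessage"))  # hand off to the bot here

        page_token = data.get("nextPageToken")
        # The response suggests a minimum wait; I round it up to 6 seconds.
        wait_ms = data.get("pollingIntervalMillis", 6000)
        time.sleep(max(wait_ms / 1000.0, 6.0))
```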
Calculating my API quota would result in a usage of 72,000 units per day:
10 requests per minute * 60 minutes * 24 hours = 14,400 requests per day
14,400 requests * 5 units per request = 72,000 units per day
This means that if I used pollingIntervalMillis as a guideline for how often to request, I'd easily reach the maximum quota of 10,000 units by running the bot for 3 hours and 20 minutes. In order not to use up the quota just by fetching chat messages, I would need to limit myself to about 1 API call per minute (approximately 1.39). This is very unfeasible for a chat bot, since this is only for fetching messages and does not include sending any messages to the chat.
So my question is: Is there maybe a more efficient way to fetch chat messages which won't use up the quota so much? Or will I only get this resolved by applying for a quota extension? And if this is only resolved by a quota extension, how much would I need to ask for reliably? Around 100k units? Even more?
I am also asking myself how something like Streamlabs Chatbot (previously known as AnkhBot) accomplishes this without hitting the quota limit despite thousands of users using their API client; their quota must be in the millions or billions.
And another question would be how I'd actually fill out the form, if the bot is still in this "early" state of development?
You pretty much hit the nail on the head. Services like Streamlabs are owned by larger companies, in their case Logitech. They not only have the money to throw around for things like increasing their API quota, but they also have professional relationships with companies like Google to decrease their per unit cost.
As for efficiency, the API costs are easily found in the documentation, but for live chat, as you've found, you're going to be paying 5 units per hit. The only way to improve your overall daily cost is to make the calls less frequently. While once per minute is clearly too infrequent, once every 15-18 seconds would reduce the size of the quota increase you need to request while keeping the chat bot adequately responsive.
Of course that all depends on how you intend to use the data, but it's a reasonable compromise while the bot is still in the realm of hobbyist usage.
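As a rough sanity check on those intervals (same assumed 5-unit cost per call as in the question):

```python
COST_PER_CALL = 5  # observed cost of one liveChat/messages request (from the question)

def units_per_day(interval_seconds):
    # Requests per day at a fixed polling interval, times the per-call cost.
    return 24 * 3600 / interval_seconds * COST_PER_CALL

for interval in (6, 15, 18, 60):
    print(f"{interval:>2}s -> {units_per_day(interval):,.0f} units/day")
# 6s -> 72,000 (the question's figure); 15s -> 28,800; 18s -> 24,000;
# 60s -> 7,200, which fits the default quota but is too slow for a chat bot.
```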

Will Yahoo Finance or Google Finance block me if I subscribe to all stocks?

I want to retrieve all stocks from a few exchanges, by taking the list of stocks in those exchanges from http://www.nasdaq.com/screening/company-list.aspx.
Then I will request quotes for all of those stocks from Google or Yahoo.
My question is: if I request quotes for all of them every 5 or 10 seconds, will they block me?
What is the correct way to get all stocks and their updated data?
Thanks!
David,
tl;dr - Yahoo Finance is OK (scraping 2,000 stocks) if you insert pauses in your code
I have some clumsy but working code (my first attempt at scraping) that pulls some data from Yahoo Finance. While I don't like the code and I will rewrite it for nasdaq.com in the following weeks, I can tell you that I'm not getting blocked.
I have a few-years-old list of stocks for the Russell 2000, so there are around 2,000 tickers I'm slowly going through, pulling some data from the balance sheet. I'm using Selenium (see my question history, there is only one, to see/get working code); the code loads the Chromium web browser (Linux), clicks on Balance Sheet, scrapes some data, clicks the Quarterly link, scrapes more data, and then closes the browser. For every ticker (stock).
Just to be on the safe side, I put several pauses into my code: for every scrape or navigation on the site I added a pause of between 5 and 10 seconds. That way I'm slowly scraping data and Yahoo seems to be OK with this :-) It takes about one minute per ticker. I'm running this scrape job (for the first time!) now for over 30 hours lol, and I'm currently at a ticker that starts with T, so I have a few more hours to go.
I have read somewhere that some sites can spot this kind of slow scraping too. So, as an idea, instead of a hard-coded pause of, say, 7 seconds, you could generate a random pause between, IDK, 7-15 seconds; that way the pauses will be more random and less prone to being spotted (something like the sketch below). Just a thought. Hope this helps a little bit, even if with delay.
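A tiny sketch of that idea (the random pause is the point here; scrape_ticker stands in for whatever per-ticker Selenium routine you use, so it's a hypothetical name):

```python
import random
import time

def polite_pause(low=7, high=15):
    # Sleep a random number of seconds so the request pattern is less regular.
    time.sleep(random.uniform(low, high))

for ticker in ["INTC", "MSFT", "AAPL"]:  # your full ticker list goes here
    scrape_ticker(ticker)                # hypothetical: your Selenium scraping routine
    polite_pause()
```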
Ah, and if this answer does help you, please be so kind as to mark it as solved and upvote it. Maybe I can get a point or two for it. My points are so low I can't even vote on other posts that I like and that have helped me.

Getting real-time Twitter search results using the streaming API

I have an application where I need to get complete, real-time search results from Twitter (preferably polling every 500 ms or less). Based on my understanding, doing this using the search API will run into rate limits very quickly. However, the streaming API doesn't seem to support getting complete results for anything (only a 5% sample).
More specifically, I have a search query term which typically comes up with <20 matching tweets per hour, and I would like to be informed of these new tweets within 1-2 seconds, and it is considered a failure if I am not notified within 5 seconds. Due to the relatively low frequency of posting, missing even one tweet is very undesirable.
Is there any way I can realistically do this using the Twitter API, or is my only choice to write a browser extension that repeatedly refreshes the search page?
The answer is "yes". Although you are rate limited (the limit is closer to 1% than 5%), that is only a cutoff based on your query results. Very roughly, you can stream about 60 tweets per second max. In your case, you say you expect under 20 tweets per hour, so you should have no problem getting all those tweets.
You also require a latency less than 5 seconds. In my experience latency has always been a second or two. I think you should be fine.
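If it helps, here is a sketch of a filtered stream along those lines, using tweepy's older (pre-v2) streaming interface; the keys are placeholders and the exact classes may differ in newer tweepy releases, so treat this as illustrative:

```python
import tweepy

# Placeholders; fill in your own application credentials.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

class SearchTermListener(tweepy.StreamListener):
    def on_status(self, status):
        # Matching tweets arrive here in near real time, usually within a second or two.
        print(status.created_at, status.text)

    def on_error(self, status_code):
        print("stream error:", status_code)
        return status_code != 420  # stop reconnecting if we're being rate limited

stream = tweepy.Stream(auth=auth, listener=SearchTermListener())
stream.filter(track=["your search term"])  # blocks and streams every match
```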

Getting as many tweets associated with a day's trends as possible

Every 30 minutes, I store Twitter's trending topics for a country Y in a database. No problem with that.
Now, I want to get as many tweets as possible matching those trending topics, for research purposes.
Since I would like to study the patterns of the trends, I would like continuous tweet data covering at least 3 days centered on the day the trend peak was detected, for every trending topic. In order to achieve that, I thought of doing the following:
Suppose I am on day X. I could retrieve the unique trends of day X-2, and for every trend, look for tweets matching the trend in the interval [X-3, X-1], that is, 3 days. However, the problem here is Twitter's rate limits. If I have 100 trending topics on day X-2 and I make 20 GET search requests per trend, I would end up doing a total of 2,000 requests, which exceeds Twitter's 350-requests-per-hour limit. At 300 requests/hour, it would take more than 6 hours to get the data for only one day...
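To spell out that arithmetic (numbers from the scenario above):

```python
TRENDS = 100              # unique trending topics on day X-2
REQUESTS_PER_TREND = 20   # GET search requests per trend
SAFE_RATE = 300           # requests per hour, staying under the 350/hour limit

total_requests = TRENDS * REQUESTS_PER_TREND
hours_needed = total_requests / SAFE_RATE
print(total_requests, round(hours_needed, 1))  # 2000 requests, ~6.7 hours per day of data
```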
Does anybody know any other (better) way for getting tweets associated with trends?
Thanks in advance
Twitter Streaming API?
The Twitter Streaming API doesn't deliver any past tweets; you only receive tweets starting from the time the server connection is established. The search API will return tweets matching the current query up to 7 days old in theory, but that is entirely up to Twitter's current load. (Note: at times this window has been as short as 24 hours. In addition, you are limited to receiving at most 1,500 tweets per query, regardless of how old they are.)
Is there any way to get more tweets from the streaming?
None that I know of. But do refer to the information below if you are considering switching between the search and streaming APIs.
Please choose your case:
If you need real-time data and your number of requests is high:
Go for Streaming API
The streaming API requires that you keep the connection active. This requires a server process with an infinite loop, to get the latest tweets.
Advantages
1) Lag in retrieving results: tweets delivered with this method are basically real-time, with a lag of a second or two at most between the time the tweet is posted and the time it is received from the API.
2) Not rate limited.
If you need aggregate data regardless of its time range and your number of requests is not high:
Go for Search API
The search API is the easier of the two methods to implement, but it is rate limited. Each request will return up to 100 tweets, and you can use a page parameter to request up to 15 pages, giving you a theoretical maximum of 1,500 tweets for a single query.
Advantages
1) Finding tweets in the past: the search API wins by default in this area, because the streaming API doesn't deliver any past tweets.
2) Easier to implement.
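A paged-search sketch matching that description (the endpoint and the rpp/page parameter names are from the API version this answer refers to and may not match the current API, so treat them as assumptions):

```python
import requests

def search_tweets(query, max_pages=15, per_page=100):
    # Walk the paged search results described above: up to 15 pages of up to
    # 100 tweets each, i.e. a theoretical maximum of ~1,500 tweets per query.
    tweets = []
    for page in range(1, max_pages + 1):
        resp = requests.get(
            "http://search.twitter.com/search.json",   # assumed v1-era endpoint
            params={"q": query, "rpp": per_page, "page": page},
            timeout=10,
        )
        results = resp.json().get("results", [])
        if not results:   # stop early once a page comes back empty
            break
        tweets.extend(results)
    return tweets
```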
