Yahoo finance.yahoo.com/quotes is redirecting

I use https://finance.yahoo.com/quotes/... for stock quotes and option data. It continues to work well when viewed in a Safari browser. I also load it in an Objective-C app via an NSURLSession dataTaskWithURL:. Last week (circa April 28, 2017) it started forcing a redirect (the delegate's willPerformHTTPRedirection: gets called), but the redirect loops back to itself and generates a "too many HTTP redirects" error.
Does anyone have a workaround for this?

I found that Yahoo Finance changed their website address format from (4/1) "http://finance.yahoo.com/q/hp?s=AAPL&g=w"
to (4/8) "http://finance.yahoo.com/quote/AAPL/history?interval=1wk".
Then, on 4/21, the weekly historical data imported differently: some advertisement lines between the stock symbol/pricing and the "Date / Open ... Adjusted Close" heading were excluded, and the number of columns from Date to Volume changed.
On 5/5, Yahoo Finance historical data would no longer open via the VBA code. Other financial sites like MSN, Google, MarketWatch, and Investopedia continue to open using the same VBA code.
Finally, when I paste the newer Yahoo link listed above into a spreadsheet and then click on it, the webpage redirects to Apple's general Summary page, not the Historical Data tab.
Will try a few other lines of code, but after that, will look at changing to Investopedia.

Here's the solution I finally came up with:
In the original NSURLRequest, make it an NSMutableURLRequest and add:
[theRequest setValue:@"Mozilla......." forHTTPHeaderField:@"User-Agent"];
Spoofing the browser this way often works with no redirects, but sometimes there is still a redirect, so I also added the delegate method:
- (void)URLSession:(NSURLSession *)session
              task:(NSURLSessionTask *)task
willPerformHTTPRedirection:(NSHTTPURLResponse *)redirectResponse
        newRequest:(NSURLRequest *)request
 completionHandler:(void (^)(NSURLRequest *))completionHandler
In that method I read [task.currentRequest valueForHTTPHeaderField:@"User-Agent"], set it to a different browser string, build a new NSURLRequest with that User-Agent, and pass the new request to the completion handler. It seems to be working.

Based on the work of two other people, I seem to have a solution to this problem. There are actually two aspects to the issue of downloading historical data from Yahoo:
The redirection issue
Establishing a working cookie/crumb combination.
The solution which enables me to continue using my Excel spreadsheets and the associated VBA code is based on:
The work of Dennis Lee, who demonstrates how to extract the cookie/crumb combination: https://github.com/dennislwy/YahooFinanceAPI
The work of Tim Hall, who has written VBA-Web:
https://vba-tools.github.io/VBA-Web/docs/#/WebClient/GetFullUrl
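For illustration, here is a minimal Python sketch of that cookie/crumb flow (the VBA solution follows the same two steps). The download endpoint and the "CrumbStore" regex reflect how Yahoo's pages behaved at the time and may well change:

import re
import requests

def get_history_csv(symbol, interval="1wk"):
    # Step 1: load the quote page once. Yahoo sets a cookie on the session
    # and embeds a matching "crumb" token in the page source; the two must
    # be sent together on the download request.
    session = requests.Session()
    session.headers["User-Agent"] = "Mozilla/5.0"  # plain clients get redirected
    page = session.get("https://finance.yahoo.com/quote/%s/history" % symbol)
    match = re.search(r'"CrumbStore":\{"crumb":"(.*?)"\}', page.text)
    if match is None:
        raise RuntimeError("crumb not found; the page layout may have changed")
    crumb = match.group(1)
    # Step 2: request the historical CSV with the cookie and crumb together.
    url = ("https://query1.finance.yahoo.com/v7/finance/download/%s"
           "?period1=0&period2=9999999999&interval=%s&events=history&crumb=%s"
           % (symbol, interval, crumb))
    return session.get(url).text  # CSV: Date,Open,High,Low,Close,Adj Close,Volume

print(get_history_csv("AAPL")[:200])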

Related

Yahoo Finance no longer works?

I used to get exchange rates from the Yahoo Finance API via JavaScript with the following URL:
http://finance.yahoo.com/webservice/v1/symbols/jpy=x,eur=x/quote?format=json
but now it no longer works! I have searched the net but had no luck.
Any help would be great, thanks!
Update: it works if opened with Chrome mobile.
Yes, it seems like Yahoo! has discontinued the (private, mostly-undocumented) Yahoo Finance API that many have been using for their currency data. All responses seem to be returning "Not a valid parameter". I suppose there's a chance they may switch it back on, but they don't officially support that API anywhere as far as I can tell.
I created Open Exchange Rates about five years ago, and our exchange rate API now supports a community of tens of thousands of developers - and their tens of millions of users - with accurate, up-to-date information.
Please feel welcome to check out our Forever Free service at https://openexchangerates.org.
Our API is in a simple, original JSON format, which has actually caught on as a standard method for displaying rates because it's so simple to work with (unlike the Yahoo API, which required you to parse the obscure nested objects to pull out the basic info you needed...)
If you need assistance porting from the deprecated Yahoo! API, we'll be happy to assist via email.
(I am the founder of Open Exchange Rates.)
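To give an idea of the format, here is a minimal Python sketch of a call to our latest.json endpoint (the app_id value is a placeholder; you get a real one by signing up):

import requests

# "YOUR_APP_ID" is a placeholder obtained by registering at openexchangerates.org.
resp = requests.get("https://openexchangerates.org/api/latest.json",
                    params={"app_id": "YOUR_APP_ID"})
data = resp.json()
# The payload is a flat object: {"base": "USD", "rates": {"JPY": 111.2, ...}},
# so a single dictionary lookup replaces the nested parsing Yahoo required.
print(data["base"], data["rates"]["JPY"])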
indrakula is right, and their response helped me, but...
I also needed to retrieve exchange rate tickers (i.e. USDGBP=X). This was not trivial and took some searching. The URL format in this case is http://www.google.com/finance/info?q=CURRENCY%3aUSDGBP. This URL returns a JSON body, unlike the alternative URLs mentioned in one of the comments on that reply. Also note that the link with the parameter descriptions seems to be out of date, so don't rely on it; most of the parameters are self-explanatory anyway.
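For illustration, a minimal Python sketch of reading that URL as it behaved at the time. Treat the details as assumptions: the endpoint prefixed its JSON with "//", and "t"/"l" are the commonly reported ticker and last-price field names:

import json
import requests

url = "http://www.google.com/finance/info?q=CURRENCY%3aUSDGBP"
raw = requests.get(url).text
# The endpoint prefixed its JSON array with "// ", so cut everything
# before the first "[" before parsing.
quotes = json.loads(raw[raw.index("["):])
print(quotes[0]["t"], quotes[0]["l"])  # ticker symbol and last price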
Note: I wanted to post this as a comment to indrakula's answer, but one needs 50 reputation to comment! I'm new! I tried to submit this as an edit to their answer but it was (rightly) refused.
Use Google: http://www.google.com/finance/info?q=GOOGL
Parameter descriptions here: http://www.networkerror.org/component/content/44.html?task=view

Google spreadsheet importHTML Could not fetch URL

Can someone confirm it for me?
I'm helping someone with an importHTML problem in a Google spreadsheet. I'm not familiar with importHTML, but I thought it should work.
=importhtml("http://www.stockq.org/","table",1)
I don't care which table it imports, so long as it imports something. It gives the error message "Error: Could not fetch url: http://www.stockq.org/", but the website is accessible in my browser. That's really bizarre.
My Google Spreadsheet can't cope with the Chinese characters, but numbers recognisable by me on the web page are happily imported, at least for the middle table of the three, with:
=importhtml("http://www.stockq.org/","table",A12)
This is much what I think was mentioned by @DigitalSeraphim way back in September. To quote from an answer that was deleted (as not an answer?):
So, I have been building a page to help me keep up with mod updates for my Minecraft server, using importxml heavily. I have found that I get the same error for some sites that load absolutely fine in the browser. Looking into it further, I found that the sites are reporting a 404 error but actually returning the data requested. According to https://drupal.stackexchange.com/questions/110651/how-to-show-a-node-but-return-http-404-response, this is used to remove pages from search engines, as I had assumed. I don't think there is any way around this without some hackery... namely, setting up a "proxy" server that would "fix" the status.
However, it appears that the example you gave is now working, so maybe give it another try.
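If you do hit a site that returns a 404 status with a good body, the "proxy" hack from the quote above could look something like this minimal Flask sketch. The /fetch route and host are hypothetical placeholders, and the proxy must be publicly reachable so Google's servers can fetch from it:

from flask import Flask, Response, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    # Fetch the target page ourselves, then re-serve the body with a clean
    # 200 status so IMPORTHTML/IMPORTXML stops rejecting it.
    upstream = requests.get(request.args["url"])
    return Response(upstream.content, status=200,
                    content_type=upstream.headers.get("Content-Type", "text/html"))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

You would then import through the proxy, e.g. =IMPORTHTML("http://your-server:8080/fetch?url=http://www.stockq.org/","table",1).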
TL;DR
Use IMPORTXML with XPaths.
I encountered a similar problem and tried switching between http and https. The workaround worked occasionally, but the results were not consistent (either way failed a lot).
Later I noticed there is another function named IMPORTXML (XML, not HTML). With it you can query the content from the same URL and apply an XPath instead.
Therefore I would suggest switching to IMPORTXML. For example, the following formula
=IMPORTXML("http://www.stockq.org/index/IBOV.php", "//table[@class='indexpagetable']")
will give you all the tables that have the class indexpagetable from the page at the given URL.
Note that XPath syntax is slightly different in spreadsheets; refer to the documentation for the specifics.

twitter api 1.1 url count alternative

I've been using the old URL API (v1) to get the count for a given URL; lately I also needed the retweets, and started searching about that.
This is the exact URL I'm using right now:
http://urls.api.twitter.com/1/urls/count.json?url=http://google.com
From what I've read, the v1 API is deprecated, but at least it's still working.
I found some questions on the Twitter dev pages:
https://dev.twitter.com/discussions/12643
Those are somewhat old questions with no specific solution to the problem. The nearest thing was the Search API (search/tweets), which could be good but is not an exact replacement for the urls/count method.
Please note that Twitter's search service and, by extension, the Search API is not meant to be an exhaustive source of Tweets. Not all Tweets will be indexed or made available via the search interface.
Also, it has a limit of at most 100 results per "page". It does return a link to the next set of objects, which is good, but when a search reaches a million results I would need to fetch page after page just to know how many tweets I got, making far too many requests to the API...
Some questions on the Twitter dev pages suggested using the Streaming API. I tried statuses/filter, but that doesn't work very well given a URL as the track parameter (which they said is the keyword to track).
So, has anyone who's been using the old urls/count found a reliable alternative in the new API v1.1, specifically to get the tweets and retweets for a given URL?
The official suggestion from Twitter staff is to use either the search/tweets endpoint (which only has the last 7 days of data) or the Streaming API (keeping the counters yourself, which makes everything far too complicated for a d*mn counter).
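If you go the search/tweets route anyway, a minimal Python sketch of a counter could look like this. The bearer token is a placeholder, and remember the search index only reaches back about 7 days and is rate-limited:

import requests

BEARER = "YOUR_BEARER_TOKEN"  # placeholder: application-only auth token
HEADERS = {"Authorization": "Bearer " + BEARER}
ENDPOINT = "https://api.twitter.com/1.1/search/tweets.json"

def count_mentions(url):
    # Walk the search index 100 tweets at a time, following the
    # next_results cursor, keeping separate tweet/retweet tallies.
    tweets = retweets = 0
    next_page = ENDPOINT
    params = {"q": url, "count": 100}
    while True:
        resp = requests.get(next_page, params=params, headers=HEADERS).json()
        for status in resp.get("statuses", []):
            if "retweeted_status" in status:
                retweets += 1
            else:
                tweets += 1
        more = resp.get("search_metadata", {}).get("next_results")
        if not more:
            return tweets, retweets
        next_page = ENDPOINT + more  # next_results is a ready-made query string
        params = None  # the cursor already carries q, count, and max_id

print(count_mentions("http://google.com"))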
As an extra warning, the old endpoint (http://urls.api.twitter.com/1/urls/count.json?url=YOUR_URL) will stop working on November 20th, and according to this blog post from Twitter there are no plans to replace it with anything in the short term; they are even removing the count from their own buttons.

Labview to google spreadsheet information transfer

I have been using LabVIEW to collect measurement data, and I would like to know if it is possible for LabVIEW to communicate the results to a Google spreadsheet. If so, where could I find resources to learn how to make LabVIEW transmit information to the Google spreadsheet?
Thanks!
EDIT AND FOLLOW-UP - I used Jonathan's suggestion below and experimented with the LabVIEW HTTP Post.vi. It's very simple: all you need to do is enter the URL of the Google form (replacing the final "viewform" with "formResponse") and a string with the data you want to enter (with rough syntax field1=value1&field2=value2). A big thanks for that answer, it was really helpful!
However, when I try to use this method for a Google form with more than one page, the data isn't read properly... The form is still sent, but every field not present on the first page of the form remains blank in the spreadsheet. I suspect this is linked to the fact that the URLs of all pages after page 1 are the URL of page 1 with the final "viewform" replaced with "formResponse". Is this what is causing the error, or is it something else altogether, and how can I fix it?
I can think of two ways to do this:
You can create a form in Google Spreadsheets. The form appears as an HTML document with standard tags. From there, I would use LabVIEW's HTTP functionality to submit data to that form using a POST request (see the sketch after this list). This would be the easiest way to get data in there.
Using the Google Apps API, you can manipulate Google spreadsheets and dump data in there directly. This is more complicated in terms of development time, but more configurable in the long run: https://developers.google.com/google-apps/spreadsheets/#what_can_this_api_do There are .NET and Java code examples throughout the documentation, so it would take some work to port this to LabVIEW, but it could be done.
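For illustration, here is a minimal Python sketch of the form POST from option 1; LabVIEW's Post.vi sends the same request. The form key and entry.N field names are placeholders you read out of the form's HTML source, and the pageHistory field is a commonly reported fix for multi-page forms that I have not verified:

import requests

# "viewform" in the form's URL is replaced with "formResponse";
# FORM_KEY is a placeholder for your form's ID.
FORM_URL = "https://docs.google.com/forms/d/FORM_KEY/formResponse"

payload = {
    "entry.1000001": "25.3",    # one entry.N key per form field (IDs are placeholders)
    "entry.1000002": "volts",
    # For multi-page forms, a pageHistory field listing every page
    # (e.g. "0,1,2") reportedly lets a single POST fill all pages at once.
    "pageHistory": "0,1",
}
resp = requests.post(FORM_URL, data=payload)
print(resp.status_code)  # 200 means the row landed in the linked spreadsheet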

how to bundle google API calls for Place Search Requests?

Hi, I am using Google's "Place Search" requests:
A Place Search request is an HTTP URL of the following form:
https://maps.googleapis.com/maps/api/place/search/output?parameters
But the problem is I have to make hundreds of queries to Google, and this makes the app very, very slow.
Could I somehow bundle the requests? For example, could I send all the place names to Google in one request and get the results back in a single response?
Regards, Yashu
You can only do what the documentation says will work.
Side note: Bulk downloads of data are forbidden by the Terms. It's open to question why you need to make hundreds of queries, and indeed whether your use case is allowed.
