How to get Yahoo Finance Historical Closing Price from URL - yahoo-finance

I need to fetch the closing price of a given stock on specified dates from the Yahoo Finance API. However, there does not seem to be any good documentation on how to achieve this.

Install the following packages:
pandas_datareader
fix_yahoo_finance
Example script:
import pandas_datareader.data as web
import fix_yahoo_finance as yf
# Patch pandas_datareader so that get_data_yahoo() is routed through fix_yahoo_finance
yf.pdr_override()
# Daily OHLC data for the date range; the Close column holds the closing prices
print(web.get_data_yahoo('AAPL', start="2018-01-01", end="2018-01-07"))
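The fix_yahoo_finance package has since been renamed to yfinance; a minimal sketch using the renamed package directly (assuming it is installed as yfinance) would be:
import yfinance as yf
# Download daily data for the date range; the 'Close' column holds the closing prices
data = yf.download('AAPL', start="2018-01-01", end="2018-01-07")
print(data['Close'])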

Related

Import data from steam price history

I have this link with a lot of info:
https://steamcommunity.com/market/pricehistory/?currency=1&appid=570&market_hash_name=Exalted%20Fractal%20Horns%20of%20Inner%20Abysm
I want to import all the data into a Google Sheet.
I am using the IMPORTDATA function, but I think it doesn't support the full length of the data.
This is the code:
=IMPORTDATA("https://steamcommunity.com/market/pricehistory/?currency=1&appid=570&market_hash_name=Fractal%20Horns%20of%20Inner%20Abysm")
I am getting this error:
Error Could not fetch url: https://steamcommunity.com/market/pricehistory/?currency=1&appid=570&market_hash_name=Fractal%20Horns%20of%20Inner%20Abysm
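For diagnosis outside of Sheets, here is a rough Python sketch of fetching the same endpoint. It assumes the endpoint returns JSON with a "prices" list and that it may require a logged-in Steam session cookie; either point could explain the Could not fetch url error, so treat both as assumptions to verify:
import requests
url = ("https://steamcommunity.com/market/pricehistory/"
       "?currency=1&appid=570&market_hash_name=Fractal%20Horns%20of%20Inner%20Abysm")
resp = requests.get(url)  # may fail without a logged-in Steam session cookie (assumption)
resp.raise_for_status()
data = resp.json()  # the endpoint returns JSON, not the CSV/TSV that IMPORTDATA expects
print(data.get("prices", [])[:5])  # first few price-history entries (assumed key)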

Unable to import text using IMPORTXML and XPath inside div

I'm using Google Sheets with IMPORTXML to scrape download count information from a Japanese website via XPath. I want to save the number/text inside this red box.
Here's the link:
https://www.photo-ac.com/main/detail/4465781?title=%E3%82%A2%E3%82%B2%E3%83%8F%E8%9D%B6%E3%81%A8%E3%83%92%E3%83%A3%E3%82%AF%E3%83%8B%E3%83%81%E3%82%BD%E3%82%A6
Here's my function:
=IMPORTXML("https://www.photo-ac.com/main/detail/4465781?title=アゲハ蝶とヒャクニチソウ", "/html/body/div[17]/div/div/div/div[2]/div[7]/div[1]/div[1]/div/div[3]/div[2]/div[1]//text()")
The function doesn't work. Why?
Thank you.
When I tested your formula, I confirmed that a Could not fetch url: error occurred. Fortunately, when Google Apps Script is used, the URL can be requested using UrlFetchApp. So, in this answer, I would like to propose using Google Apps Script. The sample script is as follows.
Sample script:
Please copy and paste the following script into the script editor of the Google Spreadsheet, save it, and put the formula =SAMPLE("URL") in a cell. If the function name is not found, please reopen the Google Spreadsheet and test again. This script is used as a custom function.
function SAMPLE(url) {
  // Fetch the page HTML and extract the text beginning with "ダウンロード:" (the download count)
  const value = UrlFetchApp.fetch(url).getContentText().match(/ダウンロード:.+/);
  if (!value) throw new Error("Value was not retrieved.");
  return value;
}
Result:
When the above script is used, the following result is obtained.
Note:
This sample script is written for the current HTML of https://www.photo-ac.com/main/detail/4465781?title=アゲハ蝶とヒャクニチソウ. If the structure of the HTML at that URL changes, the script above might no longer work. Please be careful about this.
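For reference, the same fetch-and-match idea can be sketched outside of Sheets in Python; this is only a rough equivalent of the Apps Script above and assumes the page is reachable by a plain HTTP client, which it may not be:
import re
import requests
# Percent-encoded form of the page URL from the question
url = ("https://www.photo-ac.com/main/detail/4465781"
       "?title=%E3%82%A2%E3%82%B2%E3%83%8F%E8%9D%B6%E3%81%A8%E3%83%92%E3%83%A3%E3%82%AF%E3%83%8B%E3%83%81%E3%82%BD%E3%82%A6")
# Fetch the HTML and look for the text beginning with "ダウンロード:" (the download count),
# mirroring the regex used in the Apps Script sample
html = requests.get(url).text
match = re.search(r"ダウンロード:.+", html)
print(match.group(0) if match else "Value was not retrieved.")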
References:
Custom Functions in Google Sheets
fetch(url)

How to read google spreadsheet using google colab

We can list the spreadsheets present in Google Drive using the command below:
from google.colab import drive
drive.mount('/content/drive')
!ls -l /content/drive/'Shared drives'
but I am unable to read a spreadsheet using the command below:
import gspread
from oauth2client.client import GoogleCredentials
gc = gspread.authorize(GoogleCredentials.get_application_default())
gc.open('/content/drive/'Shared drives/data.gsheet').data available
One more problem: the sheet name contains a space (data available) and we don't have access to change the sheet name.
I have referred to this link: https://colab.research.google.com/notebooks/io.ipynb
Kindly help with this.
You can use gc.open_by_url('gsheets_url') to open the document (no need to mount the drive). For the sheet name, you can use gsheets.worksheet('sheet name').
So in your case it would go something like:
from google.colab import auth
auth.authenticate_user()
import gspread
import pandas as pd  # needed for the DataFrame below
from oauth2client.client import GoogleCredentials
# setup
gc = gspread.authorize(GoogleCredentials.get_application_default())
# read data and put it in a dataframe
gsheets = gc.open_by_url('your-link')
sheets = gsheets.worksheet('data available').get_all_values()
df = pd.DataFrame(sheets[1:], columns=sheets[0])
(credits to this post)
Update to the answer by Murilo Cunha, as it gives errors for authentication
from google.colab import auth
auth.authenticate_user()
import gspread
from google.auth import default
creds, _ = default()
gc = gspread.authorize(creds)
import pandas as pd
# read data and put it in a dataframe
gsheets = gc.open_by_url('Your link')
sheets = gsheets.worksheet('data available').get_all_values()
df = pd.DataFrame(sheets[1:], columns=sheets[0])
It would be helpful if you could say what exact error you get when you run this script. However, I notice there are redundant quotes in the following line:
gc.open('/content/drive/'Shared drives/data.gsheet')
Replace it with the following:
gc.open('/content/drive/Shared drives/data.gsheet')
Another option is to try gc.open_by_url() instead of gc.open().

Read site HTML from a site in a different geo region

I am using Python and Beautiful Soup to read HTML pages. Unfortunately, some sites redirect to my geo region (AU), so I can't retrieve the target country's version, e.g. UK, US, FR, NZ...
I have tried using a VPN service, but this requires me to manually change the region, so I can't automate the process. I have also tried using the Python quartz.Coregraphics library to click the options on screen, but this is temperamental.
Is there a way I can achieve this programmatically?
I have managed to work this one out myself. It is best answered by example, here for reading a UK-based site.
import urllib2
url = 'Some-uk-url'
req = urllib2.Request(url)
# Ask for the UK version of the page
req.add_header('Accept-Language', 'en-gb')
# Make the request appear to come from a UK address (placeholder value)
req.add_header('X-Forwarded-For', '[a uk proxy ipaddress here]')
htmltext = urllib2.urlopen(req).read()
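Note that urllib2 is Python 2 only. A rough Python 3 equivalent of the same idea (the proxy address is still a placeholder) would be:
import urllib.request
url = 'Some-uk-url'
req = urllib.request.Request(url, headers={
    'Accept-Language': 'en-gb',  # ask for the UK version of the page
    'X-Forwarded-For': '[a uk proxy ipaddress here]',  # placeholder, as above
})
htmltext = urllib.request.urlopen(req).read()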

Need an API to find a full company name given a ticker symbol [closed]

I need a way, from within client-side JavaScript, to find a full company name given a ticker symbol. I am aware of Yahoo Finance's interface at:
http://finance.yahoo.com/d/quotes.csv?s=TKR&f=n
and am able to access that via YQL (since this is cross-domain). However, that doesn't return the full company name, even though Yahoo Finance clearly has it: it appears in their charts for the company and on their pages about the company.
I don't need the solution to be via Yahoo Finance... I just mention it here as I already know about it (and am accessing it for other data).
One of the community-provided YQL tables looks like it will work for you: yahoo.finance.stocks.
Example YQL query:
select CompanyName from yahoo.finance.stocks where symbol="TKR"
Update 2012-02-10: As firebush points out in the comments, this YQL community table (yahoo.finance.stocks) doesn't seem to be working correctly any more, probably because the HTML page structures on finance.yahoo.com have changed. This is a good example of the downside of any YQL tables that rely on HTML scraping rather than a true API. (Which for Yahoo Finance doesn't exist, unfortunately.)
It looks like the community table for Google Finance is still working, so this may be an alternative to try: select * from google.igoogle.stock where stock='TRK';
I have screen scraped this information in the past using either Yahoo Finance or MSN Money. For instance, you can get this information for ExxonMobil by going to (link). As far as an API goes, you might need to build one yourself. For an API, check out Xignite.
You can use the "Company Search" operation in the Company Fundamentals API here: http://www.mergent.com/servius/
You can use Yahoo's lookup service via Jonathan Christian's .NET API, which is available on NuGet under "Yahoo Stock Quotes".
https://github.com/jchristian/yahoo_stock_quotes
//Create the quote service
var quote_service = new QuoteService();
//Get a quote
var quotes = quote_service.Quote("MSFT", "GOOG").Return(QuoteReturnParameter.Symbol,
                                                        QuoteReturnParameter.Name,
                                                        QuoteReturnParameter.LatestTradePrice,
                                                        QuoteReturnParameter.LatestTradeTime);
//Get info from the quotes
foreach (var quote in quotes)
{
    Console.WriteLine("{0} - {1} - {2} - {3}", quote.Symbol, quote.Name, quote.LatestTradePrice, quote.LatestTradeTime);
}
EDIT: After posting this I tried this exact code and it was not working for me, so instead I used the Yahoo Finance Managed API; however, it's not available via NuGet. A good example of its use is here:
QuotesDownload dl = new QuotesDownload();
DownloadClient<QuotesResult> baseDl = dl;
QuotesDownloadSettings settings = dl.Settings;
settings.IDs = new string[] { "MSFT", "GOOG", "YHOO" };
settings.Properties = new QuoteProperty[] { QuoteProperty.Symbol,
                                            QuoteProperty.Name,
                                            QuoteProperty.LastTradePriceOnly };
SettingsBase baseSettings = baseDl.Settings;
Response<QuotesResult> resp = baseDl.Download();
Also, if you just want to download the data, the StockTwits API has a download link for the symbology and industries under "Resources": http://stocktwits.com/developers/docs
It is also possible to use Quandl.com resources. Their WIKI database contains 3339 major stocks and can be fetched via the secwiki_tickers.csv file. For a plain file portfolio.lst storing the list of your tickers (stocks in US markets), e.g.:
AAPL
IBM
JNJ
MSFT
TXN
you can scan the .csv file for the names, e.g.:
import pandas as pd
# Quandl's WIKI metadata file mapping tickers to company names
df = pd.read_csv('secwiki_tickers.csv')
# your portfolio file, one ticker per line
dp = pd.read_csv('portfolio.lst', names=['pTicker'])
pTickers = dp.pTicker.values  # converts into a list
for i in range(len(pTickers)):
    test = df[df.Ticker == pTickers[i]]
    if not test.empty:
        print("%-10s%s" % (pTickers[i], list(test.Name.values)[0]))
which returns:
AAPL Apple Inc.
IBM International Business Machines Corporation
JNJ Johnson & Johnson
MSFT Microsoft Corporation
TXN Texas Instruments Inc.
It is possible to combine more stocks from Quandl's other resources. See the documentation online.
