google sheets importxml resource at url not found - Yahoo Finance - google-sheets

I tried to get Walgreen's number of full-time employees from Yahoo Finance using importxml like so:
=importxml("https://finance.yahoo.com/quote/WBA/profile", "/html/body/div[1]/div/div/div[1]/div/div[3]/div[1]/div/div[1]/div/div/section/div[1]/div/div/p[2]/span[6]/span")
I have used the function successfully in getting other figures from Yahoo Finance. Example (market cap):
=mid(importxml("https://finance.yahoo.com/quote/WBA", "/html/body/div[1]/div/div/div[1]/div/div[3]/div[1]/div/div[1]/div/div/div/div[2]/div[2]/table/tbody/tr[1]/td[2]/span"),1,6)+0
But with the number of employees (and, by the way, also the trailing-twelve-months (ttm) revenue) I get this error.
Without VBA, with which I am not familiar, how can this be fixed?
Thanks!

This page is rendered client side by JavaScript, not server side, so Google Sheets' native import functions (IMPORTXML, IMPORTHTML, etc.) can't see the rendered values.
You have to extract the JSON embedded in the page source and parse it yourself.
The object is assigned to root.App.main in the source.
To get the number of employees with Google Apps Script, for instance:
function fullTimeEmployees(url = 'https://finance.yahoo.com/quote/WBA/profile') {
  // Fetch the raw HTML of the profile page
  var source = UrlFetchApp.fetch(url).getContentText();
  // Grab the JSON blob assigned to root.App.main in the page source
  var jsonString = source.match(/root\.App\.main = ([\s\S\w]+?);\n/);
  if (!jsonString || jsonString.length == 1) return;
  var data = JSON.parse(jsonString[1].trim());
  Logger.log(data.context.dispatcher.stores.QuoteSummaryStore.assetProfile.fullTimeEmployees);
}
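If you want the figure in a cell rather than in the execution log, one option is to return the value instead of only logging it and then call the function from the sheet as a custom function, e.g. =fullTimeEmployees("https://finance.yahoo.com/quote/WBA/profile"); UrlFetchApp is among the services permitted in custom functions.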

Related

How to get link to third party site in 'about channel' section via python

I want to extract the links shown in a YouTube channel's 'About' section and write them to a text document. I tried to do it with the requests library, but Google only returned links to its privacy and security pages, and I couldn't find anything about this in the YouTube API documentation. Does anyone know how to do this?
This isn't possible with the YouTube API. I found myself needing to do the same thing and couldn't, because the API lacks the necessary functionality (hopefully it will be added soon!).
I see you mentioned Python; my solution is in Node, but I'll explain it in detail so you can base your own code on it (a Python sketch of the same approach follows the step-by-step explanation below). To get the banner links without the YouTube API we have to scrape the data, and since YouTube uses client-side rendering, we have to pull the JSON configuration out of the page source.
There's a variable defined inside a script tag called ytInitialData, a big JSON string with a massive amount of information about the channel, the viewer, and YouTube configuration. We can find the banner links by parsing this JSON.
const request = require("request-promise").defaults({
    simple: false,
    resolveWithFullResponse: true
})

const getBannerLinks = async () => {
    return request("https://www.youtube.com/user/pewdiepie").then(res => {
        if (res.statusCode === 200) {
            // Pull the ytInitialData JSON blob out of the page source
            const parsed = res.body.split("var ytInitialData = ")[1].split(";</script>")[0]
            const data = JSON.parse(parsed)
            const links = data.header.c4TabbedHeaderRenderer.headerLinks.channelHeaderLinksRenderer
            // Primary links show their text, secondary links don't; merge them
            const allLinks = links.primaryLinks.concat(links.secondaryLinks || [])
            const parsedLinks = allLinks.map(l => {
                // The real target URL sits in the "q" parameter of YouTube's redirect URL
                const url = new URLSearchParams(l.navigationEndpoint.commandMetadata.webCommandMetadata.url)
                return {
                    link: url.get("q"),
                    name: l.title.simpleText,
                    icon: l.icon.thumbnails[0].url
                }
            })
            return parsedLinks
        } else {
            // Error/ratelimit - Handle here
        }
    })
}
The way the links are scraped is as follows:
We make an HTTP request to the channel's URL.
We split the response body to extract the JSON string that contains the banner links.
We parse that string into a JSON object.
We extract the links from their section of the object (data.header.c4TabbedHeaderRenderer.headerLinks.channelHeaderLinksRenderer).
Because there are two types of links (primary links, which show their text, and secondary links, which don't), we concatenate them so we can map over them together.
We then map over the links and use URLSearchParams to extract the q query parameter, since YouTube wraps outgoing links in a redirect URL (most likely for security reasons), and we pull the name and icon from their respective objects.
This isn't a perfect solution: if YouTube updates or changes anything on their front end, it could easily break your program. YouTube also rate limits these pages, so if you try to mass scrape you'll run into 429/403 errors.
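Since the question asked for Python, here is a rough sketch of the same approach using requests. It mirrors the Node example above (same channel URL, same JSON path) and, like it, will break whenever YouTube changes its front end.
import json
from urllib.parse import urlparse, parse_qs

import requests

def get_banner_links(channel_url="https://www.youtube.com/user/pewdiepie"):
    res = requests.get(channel_url)
    res.raise_for_status()

    # Cut the ytInitialData JSON blob out of the page source
    raw = res.text.split("var ytInitialData = ")[1].split(";</script>")[0]
    data = json.loads(raw)

    links = data["header"]["c4TabbedHeaderRenderer"]["headerLinks"]["channelHeaderLinksRenderer"]
    # Primary links show their text, secondary links don't; merge them
    all_links = links.get("primaryLinks", []) + links.get("secondaryLinks", [])

    parsed = []
    for l in all_links:
        # The real target URL sits in the "q" parameter of YouTube's redirect URL
        redirect = l["navigationEndpoint"]["commandMetadata"]["webCommandMetadata"]["url"]
        q = parse_qs(urlparse(redirect).query).get("q", [None])[0]
        parsed.append({
            "link": q,
            "name": l["title"]["simpleText"],
            "icon": l["icon"]["thumbnails"][0]["url"],
        })
    return parsed

if __name__ == "__main__":
    print(get_banner_links())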

Yahoo Finance Current Quotes Error 999 No Definition Found [duplicate]

For months I've been using a URL like this, from Perl:
http://finance.yahoo.com/d/quotes.csv?s=$s&f=ynl1 #returns yield, name, price;
Today, 11/1/17, it suddenly returns a 999 error.
Is this a glitch, or has Yahoo terminated the service?
I get the error even if I enter the URL directly into a browser, e.g.:
http://finance.yahoo.com/d/quotes.csv?s=INTC&f=ynl1
so it doesn't seem to be a 'crumb' problem.
Note: this is NOT a question that has been answered in the past!
It was working yesterday. That it happened on the first of the month is suspicious.
As noted in the other answers and elsewhere (e.g. https://stackoverflow.com/questions/47076404/currency-helper-of-yahoo-sorry-unable-to-process-request-at-this-time-erro/47096766#47096766), Yahoo has indeed ceased operation of the Yahoo Finance API. However, as a workaround, you can access a trove of financial information in JSON format for a given ticker symbol by making an HTTPS GET request to https://finance.yahoo.com/quote/SYMBOL (e.g. https://finance.yahoo.com/quote/MSFT). The response contains the financial data embedded as JSON. The following Python 3 script shows how you can parse out individual values you may be interested in:
import requests
import json
symbol = 'MSFT'
url = 'https://finance.yahoo.com/quote/' + symbol
resp = requests.get(url)
# parse the section from the html document containing the raw json data that we need
# you can write jsonstr to a file, then open the file in a web browser to browse the structure of the json data
r = str(resp.content, 'utf-8')
i1 = 0
i1 = r.find('root.App.main', i1)
i1 = r.find('{', i1)
i2 = r.find("\n", i1)
i2 = r.rfind(';', i1, i2)
jsonstr = r[i1:i2]
# load the raw json data into a python data object
data = json.loads(jsonstr)
# pull the values that we are interested in
name = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['shortName']
price = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['regularMarketPrice']['raw']
change = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['regularMarketChange']['raw']
shares_outstanding = data['context']['dispatcher']['stores']['QuoteSummaryStore']['defaultKeyStatistics']['sharesOutstanding']['raw']
market_cap = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['marketCap']['raw']
trailing_pe = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['trailingPE']['raw']
earnings_per_share = data['context']['dispatcher']['stores']['QuoteSummaryStore']['defaultKeyStatistics']['trailingEps']['raw']
forward_annual_dividend_rate = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['dividendRate']['raw']
forward_annual_dividend_yield = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['dividendYield']['raw']
# print the values
print('Symbol:', symbol)
print('Name:', name)
print('Price:', price)
print('Change:', change)
print('Shares Outstanding:', shares_outstanding)
print('Market Cap:', market_cap)
print('Trailing PE:', trailing_pe)
print('Earnings Per Share:', earnings_per_share)
print('Forward Annual Dividend Rate:', forward_annual_dividend_rate)
print('Forward_annual_dividend_yield:', forward_annual_dividend_yield)
Yahoo confirmed that they terminated the service:
It has come to our attention that this service is being used in violation of the Yahoo Terms of Service. As such, the service is being discontinued. For all future markets and equities data research, please refer to finance.yahoo.com .
There is still a way to get this data by querying some of the APIs used by the finance.yahoo.com page itself. I'm not sure whether Yahoo will support them long term the way the previous API was supported (hopefully they will).
I adapted the method used by https://github.com/pstadler/ticker.sh into the following Python hack, which takes a list of symbols on the command line and prints some of the fields as CSV:
#!/usr/bin/env python
import sys
import time

import requests

if len(sys.argv) < 2:
    print("missing parameters: <symbol> ...")
    exit()

apiEndpoint = "https://query1.finance.yahoo.com/v7/finance/quote"
fields = [
    'symbol',
    'regularMarketVolume',
    'regularMarketPrice',
    'regularMarketDayHigh',
    'regularMarketDayLow',
    'regularMarketTime',
    'regularMarketChangePercent']
fields = ','.join(fields)
symbols = sys.argv[1:]
symbols = ','.join(symbols)

payload = {
    'lang': 'en-US',
    'region': 'US',
    'corsDomain': 'finance.yahoo.com',
    'fields': fields,
    'symbols': symbols}

r = requests.get(apiEndpoint, params=payload)
for i in r.json()['quoteResponse']['result']:
    if 'regularMarketPrice' in i:
        a = []
        a.append(i['symbol'])
        a.append(i['regularMarketPrice'])
        a.append(time.strftime(
            '%Y-%m-%d %H:%M:%S', time.localtime(i['regularMarketTime'])))
        a.append(i['regularMarketChangePercent'])
        a.append(i['regularMarketVolume'])
        a.append("{0:.2f} - {1:.2f}".format(
            i['regularMarketDayLow'], i['regularMarketDayHigh']))
        print(",".join([str(e) for e in a]))
Sample Run:
$ ./getquotePy.py AAPL GOOGL
AAPL,174.5342,2017-11-07 17:21:28,0.1630961,19905458,173.60 - 173.60
GOOGL,1048.6753,2017-11-07 17:21:22,0.5749836,840447,1043.00 - 1043.00
Note: calling this endpoint from browser-side JavaScript, e.g.
var API = "https://query1.finance.yahoo.com/v7/finance/quote?symbols=AAPL";
$.getJSON(API, function (json) {...});
throws this error: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://www.microplan.at/sar' is therefore not allowed access.
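That CORS error comes from the browser, not from the endpoint itself: server-side requests, like the Python script above, are not subject to CORS. If you need the data in a web page, one workaround is to proxy the call through your own backend. A minimal sketch with Flask (my choice of framework here; any server-side stack works):
# Minimal proxy sketch: the browser calls /quote?symbols=AAPL on your own
# origin, and the server forwards the request to Yahoo.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/quote")
def quote():
    symbols = request.args.get("symbols", "AAPL")
    r = requests.get(
        "https://query1.finance.yahoo.com/v7/finance/quote",
        params={"symbols": symbols},
    )
    # Relay Yahoo's JSON back to the browser from our own origin
    return jsonify(r.json())

if __name__ == "__main__":
    app.run()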


Get Google Sheets Last Edit date using Sheets API v4 (Java)

I'm using the Google Sheets API v4 in Android.
https://developers.google.com/sheets/api/quickstart/android
I need to know when the last modification to the sheet was made (and by which user), i.e. the 'Last edit was ...' information that Sheets shows in its own UI.
I'd like to do something like this:
Spreadsheet spreadsheet = sheetsService.spreadsheets().get(spreadsheetId).setIncludeGridData(true).execute();
Date date = spreadsheet.getProperties().getLastEditDate();
But, of course, no such getLastEditDate() property method exists. Is there a parameter or another API method to call to get this data?
Even better would be to get the modified date for each cell... but I'd settle for the date of the entire spreadsheet or sheet.
This is not available in the Sheets API, but you may be able to use the Drive API's files.get method, which includes a 'modifiedTime' in the response. (Note that by default it will not include the modified time; you have to explicitly ask for it in the 'fields' parameter.)
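For example, with the Drive API v3 Python client, a minimal sketch assuming you already have authorized credentials (the question itself is Android/Java; a Java version of the same call appears in the next answer):
from googleapiclient.discovery import build

def get_last_modified(creds, spreadsheet_id):
    # A spreadsheet is just a Drive file, so ask Drive for its metadata.
    drive = build("drive", "v3", credentials=creds)
    meta = drive.files().get(
        fileId=spreadsheet_id,
        # modifiedTime (and lastModifyingUser) are only returned when
        # explicitly requested via the fields parameter.
        fields="id, modifiedTime, lastModifyingUser",
    ).execute()
    return meta.get("modifiedTime"), meta.get("lastModifyingUser", {}).get("displayName")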
It looks like this cannot be done with Sheets API v4.
However...it does look like it can be done with the compatible Google Drive API v3.
Note: the best part about this solution was that I could use the same authentication and credential-gathering code for both APIs; once I had the code for getting the credentials, I could use it for both APIs interchangeably and consecutively.
Here's what I did:
Added this to my build.gradle (shown below my Sheets API declaration)
compile('com.google.apis:google-api-services-sheets:v4-rev468-1.22.0') {
    exclude group: 'org.apache.httpcomponents'
}
compile('com.google.apis:google-api-services-drive:v3-rev69-1.22.0') {
    exclude group: 'org.apache.httpcomponents'
}
I was already using the EasyPermissions method for getting account and credentials. Great example here.
Then...
import com.google.api.services.drive.Drive;
...
protected Drive driveService = new Drive.Builder(transport, jsonFactory, credential)
        .setApplicationName("My Application Name")
        .build();
... async:
private DateTime getSheetInformation() throws IOException {
    String spreadsheetId = settings.getSpreadsheetId();
    Drive.Files.Get fileRequest = driveService.files().get(spreadsheetId).setFields("id, modifiedTime");
    File file = fileRequest.execute();
    if (file != null) {
        return file.getModifiedTime();
    }
    return null;
}
The Sheets API v3 will be deprecated in March 2020; when that happens, your best bet is to use the Drive API.
https://developers.google.com/drive/api/v3/reference/files/list
you can pass

Getting top twitter trends by country

I know how to get trends using the API, but I want the top 10 trends by country. How can I do that? Is it possible?
I tried this one, but it's not working:
http://api.twitter.com/1/trends/current.json?count=50
Note that Twitter API v1 is no longer functional (see Twitter's retirement announcement), so you should use Twitter API 1.1.
The REST method is GET trends/place:
https://dev.twitter.com/docs/api/1.1/get/trends/place
You have to authenticate with access tokens to reach this data.
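For instance, a minimal sketch with Python's requests, assuming you already have an app-only bearer token for your Twitter app (the UK WOEID 23424975 comes from the next answer):
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder: generate one for your own app

def top_trends(woeid=23424975, count=10):
    # GET trends/place returns the trends for a single WOEID
    r = requests.get(
        "https://api.twitter.com/1.1/trends/place.json",
        params={"id": woeid},
        headers={"Authorization": "Bearer " + BEARER_TOKEN},
    )
    r.raise_for_status()
    # The response is a one-element list; its "trends" key holds the entries
    return [t["name"] for t in r.json()[0]["trends"][:count]]

print(top_trends())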
Yes, you can.
First, figure out which countries you want to get data for.
Calling
https://dev.twitter.com/docs/api/1/get/trends/available
will give you a list of all the countries Twitter has trends for.
Suppose you want the trends for the UK. The above tells us that the WOEID is 23424975.
To get the top ten trends for the UK, call
https://api.twitter.com/1/trends/23424975.json
You need to figure out the WOEIDs first; use this tool: http://sigizmund.info/woeidinfo/ . After that it becomes as easy as a simple function:
function get_trends($woeid){
    return json_decode(file_get_contents("http://api.twitter.com/1/trends/".$woeid.".json?exclude=hashtags", true), false);
}
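To get the UK's top ten, for example, you would call get_trends(23424975) (the UK WOEID mentioned above) and take the first ten entries of the trends array in the decoded response. Note that this helper still points at the retired v1 endpoint; as the first answer says, you would need to switch to the 1.1 equivalent and add authentication for it to keep working.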
Here you go, I wrote a simple sample script for you; check it out here: Twitter Trends Getter.
Hope it helps!
I'm a 'bit' late to the party on this one but you can use:
npm i twit --save
then,
const Twit = require('twit');
const config = require('./config');

const T = new Twit(config);

const params = {
    // trends/place takes one WOEID per request
    // (e.g. 23424829, 23424975, or 23424977)
    id: '23424977'
    // count: 3
};

T.get('trends/place', params, gotData);

function gotData(err, data, response) {
    var tweets = data;
    console.log(JSON.stringify(tweets, undefined, 2));
}
You have to complete authentication with an API key to fetch results as JSON.
Another thing to keep in mind: the Twitter API is rate limited.
If you are making a website for top Twitter trends, you can visit https://twitter-trends.vlivetricks.com/, view the page source, and replace just the trend names with your own JSON variable.
