I'm helping an organization out, and they want a website where you input some numbers and get back a link to a Google Sheet in which the numbers have been run through a formula to give a result. For example, if you put in 5, 5, 3 with the formula (Num1 + Num2) * Num3, it would show a table in which the first two numbers are added and then multiplied by the third:
Num1 Num2 Num3 Result
5 5 3 30
I have searched GitHub and the internet, but I cannot seem to find a library that can create a Google Sheet from a website with the given data fed into a formula. Most of what I found used the Google Sheets API and only modified an existing sheet. I did find a Python library called XlsxWriter, and I was wondering whether there is any way to convert an .xlsx file into an online Google Sheet or, if possible, to just create a Google Sheet directly from my website.
(My website is in Flask right now, but since I have essentially no backend yet, I am willing to switch if someone knows of a library for another web framework.)
Thank you, John D
Maybe I am missing something here, but the regular Google Sheets API (which is available for Python) allows you to create a blank spreadsheet with a specified title:
spreadsheet = {
    'properties': {
        'title': title
    }
}
spreadsheet = service.spreadsheets().create(body=spreadsheet,
                                            fields='spreadsheetId').execute()
print('Spreadsheet ID: {0}'.format(spreadsheet.get('spreadsheetId')))
https://developers.google.com/sheets/api/guides/create
https://github.com/gsuitedevs/python-samples/blob/master/sheets/snippets/spreadsheet_snippets.py
https://developers.google.com/sheets/api/quickstart/python
Here is a more complete sample based on the Google Quickstarts examples:
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

# If modifying these scopes, delete the file token.pickle. Adjust scopes as needed.
SCOPES = ['https://www.googleapis.com/auth/spreadsheets']


def main():
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    service = build('sheets', 'v4', credentials=creds)

    # Call the Sheets API to create a new spreadsheet
    spreadsheet = {
        'properties': {
            'title': 'New Test Sheet 2'
        }
    }
    spreadsheet = service.spreadsheets().create(body=spreadsheet,
                                                fields='spreadsheetId').execute()
    # print('Spreadsheet ID: {0}'.format(spreadsheet.get('spreadsheetId')))
    spreadsheet_id = spreadsheet.get('spreadsheetId')

    # Write values, including formulas, into the new sheet
    range_name = 'Sheet1!A1:D5'
    body = {
        "majorDimension": "ROWS",
        "values": [
            ["Item", "Cost", "Stocked", "Ship Date"],
            ["Wheel", "$20.50", "4", "3/1/2016"],  # new row
            ["Door", "$15", "2", "3/15/2016"],
            ["Engine", "$100", "1", "3/20/2016"],
            ["Totals", "=SUM(B2:B4)", "=SUM(C2:C4)", "=MAX(D2:D4)"]
        ],
    }
    result = service.spreadsheets().values().update(
        spreadsheetId=spreadsheet_id,
        range=range_name,
        body=body,
        valueInputOption='USER_ENTERED'
    ).execute()
    print(result)


if __name__ == '__main__':
    main()
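To tie this back to the original question: below is a minimal sketch (not from the answer above) of a Flask endpoint that accepts three numbers, creates a sheet, writes a (Num1 + Num2) * Num3 formula, and returns the new sheet's URL. The /make-sheet route and the get_sheets_service() helper are hypothetical names; credential handling is assumed to work as in the sample above.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/make-sheet', methods=['POST'])  # hypothetical route name
def make_sheet():
    # Assumes a helper that returns an authorized Sheets service,
    # built the same way as in the sample above.
    service = get_sheets_service()

    n1 = request.form['num1']
    n2 = request.form['num2']
    n3 = request.form['num3']

    created = service.spreadsheets().create(
        body={'properties': {'title': 'Calculation'}},
        fields='spreadsheetId').execute()
    sheet_id = created['spreadsheetId']

    # USER_ENTERED makes Sheets parse the numbers and evaluate the formula
    values = [
        ['Num1', 'Num2', 'Num3', 'Result'],
        [n1, n2, n3, '=(A2+B2)*C2'],
    ]
    service.spreadsheets().values().update(
        spreadsheetId=sheet_id,
        range='Sheet1!A1:D2',
        valueInputOption='USER_ENTERED',
        body={'values': values}).execute()

    return jsonify(url=f'https://docs.google.com/spreadsheets/d/{sheet_id}')

Note that a sheet created with a service account (rather than with the user's own OAuth token) lives in the service account's Drive, so you would also have to share it via the Drive API before the returned link is viewable by others.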
Related
Is it possible to scrape the "likes number" and "post number" from this website and import the data into a Google Sheet?
When I try, I get empty data, since the spans holding those values are basically all the same…
Thanks for the help.
Edited:
Since you also want to push that data to a Google Sheet and read it back from there, I came up with the solution below, which you can modify to your needs.
First, install the gspread library and follow this tutorial, https://gspread.readthedocs.io/en/latest/oauth2.html, to get the credentials for accessing Google Sheets via the API, then use the updated code below.
Your sheet should look like this: usernames in column A, with column B left empty for the counts to be written into.
Code:
import requests
import gspread

headers = {'Accept': 'application/json', 'app-token': '33d57ade8c02dbc5a333db99ff9ae26a'}

gc = gspread.service_account(filename="credentials.json")
sh = gc.open("data")

for rownumber, rowvalues in enumerate(sh.sheet1.get_all_values(), 1):
    if len(rowvalues) == 2:
        if rowvalues[1] == '':
            cookies = requests.post("https://onlyfans.com/api2/v2/init", headers=headers)
            data = requests.get(f"https://onlyfans.com/api2/v2/users/{rowvalues[0]}", headers=headers, cookies=cookies)
            if data.status_code == 200:
                data = data.json()
                sh.sheet1.update_cell(rownumber, 2, data["postsCount"])
        else:
            print(f"Check : {rowvalues}")
    else:
        cookies = requests.post("https://onlyfans.com/api2/v2/init", headers=headers)
        data = requests.get(f"https://onlyfans.com/api2/v2/users/{rowvalues[0]}", headers=headers, cookies=cookies)
        if data.status_code == 200:
            data = data.json()
            sh.sheet1.update_cell(rownumber, 2, data["postsCount"])
    print(f"{rownumber} Processed")
Once you run this code, you will see that the data has been updated in the Google Sheet, but before running the script make sure you have followed the URL provided above, or you will end up with errors.
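One caveat that is not from the original answer: update_cell makes one API call per cell, which can run into Google's rate limits on larger sheets. A rough sketch of collecting the counts first and writing column B in a single call, assuming the same sheet layout; fetch_posts_count is a hypothetical helper wrapping the requests calls shown above:

counts = []
for rowvalues in sh.sheet1.get_all_values():
    username = rowvalues[0]
    counts.append([fetch_posts_count(username)])  # hypothetical helper, see the requests calls above

# One update for the whole column instead of one call per cell.
# Note that the argument order of update() changed between gspread versions,
# so check the docs for the version you have installed.
sh.sheet1.update(f'B1:B{len(counts)}', counts)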
Updated Gsheets:
Old:
Looking at the network logs of that website, I was able to extract your desired data with the requests library and some of the site's own API calls; check the data.json() dictionary if you need other fields.
The code is below.
import requests

headers = {'Accept': 'application/json', 'app-token': '33d57ade8c02dbc5a333db99ff9ae26a'}
cookies = requests.post("https://onlyfans.com/api2/v2/init", headers=headers)
data = requests.get("https://onlyfans.com/api2/v2/users/elettra_pink", headers=headers, cookies=cookies)
if data.status_code == 200:
    data = data.json()
    print(f'Posts:{data["postsCount"]}\nPhotosCount:{data["photosCount"]}\nVideosCount:{data["videosCount"]}\nFavoritedCount:{data["favoritedCount"]}\nSubscribersCount:{data["subscribersCount"]}')
Output:
Let me know if you have any questions :)
import pandas as pd
from google.cloud import bigquery
import google.auth
# from google.cloud import bigquery

# Create credentials with Drive & BigQuery API scopes
# Both APIs must be enabled for your project before running this code
credentials, project = google.auth.default(scopes=[
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/spreadsheets',
    'https://www.googleapis.com/auth/bigquery',
])
client = bigquery.Client(credentials=credentials, project=project)

# Configure the external data source and query job
external_config = bigquery.ExternalConfig('GOOGLE_SHEETS')

# Use a shareable link or grant viewing access to the email address you
# used to authenticate with BigQuery (this example Sheet is public)
sheet_url = (
    'https://docs.google.com/spreadsheets'
    '/d/1uknEkew2C3nh1JQgrNKjj3Lc45hvYI2EjVCcFRligl4/edit?usp=sharing')
external_config.source_uris = [sheet_url]
external_config.schema = [
    bigquery.SchemaField('name', 'STRING'),
    bigquery.SchemaField('post_abbr', 'STRING')
]
external_config.options.skip_leading_rows = 1  # optionally skip header row

table_id = 'BambooHRActiveRoster'
job_config = bigquery.QueryJobConfig()
job_config.table_definitions = {table_id: external_config}

# Get Top 10
sql = 'SELECT * FROM workforce.BambooHRActiveRoster LIMIT 10'
query_job = client.query(sql, job_config=job_config)  # API request
top10 = list(query_job)  # Waits for query to finish

print('There are {} states with names starting with W.'.format(
    len(top10)))
The error I get is:
BadRequest: 400 Error while reading table: workforce.BambooHRActiveRoster, error message: Failed to read the spreadsheet. Errors: No OAuth token with Google Drive scope was found.
I can pull data in from a BigQuery table created from CSV upload, but when I have a BigQuery table created from a linked Google Sheet, I continue to receive this error.
I have tried to replicate the sample in Google's documentation (Creating and querying a temporary table):
https://cloud.google.com/bigquery/external-data-drive
You are authenticating as yourself, which is generally fine for BigQuery if you have the correct permissions. Using tables linked to Google Sheets, however, often requires a service account. Create one (or have your BI/IT team create one), then share the underlying Google Sheet with the service account. Finally, modify your Python script to use the service account credentials rather than your own.
The quick way around this is to use the BigQuery interface: select * from the Sheets-linked table, save the results to a new table, and query that new table directly in your Python script. This works well if this is a one-time upload/analysis. If the data in the Sheet will change regularly and you need to query it routinely, this is not a long-term solution.
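A minimal sketch of what the Python side can look like after that one-time materialization; the project, dataset, and table names below are placeholders for wherever you saved the copy:

from google.cloud import bigquery

# Default credentials are enough here: a native table needs no Drive scope.
client = bigquery.Client(project='your-project-id')  # placeholder project id

sql = 'SELECT * FROM `your-project-id.workforce.BambooHRActiveRoster_copy` LIMIT 10'  # placeholder table
rows = list(client.query(sql))  # runs the query and waits for the result
print('Fetched {} rows.'.format(len(rows)))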
I solved the problem by adding the scopes to the credentials used by the client.
from google.cloud import bigquery
import google.auth

credentials, project = google.auth.default(scopes=[
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/bigquery',
])
CLIENT = bigquery.Client(project='project', credentials=credentials)
https://cloud.google.com/bigquery/external-data-drive
import pandas as pd
from google.oauth2 import service_account
from google.cloud import bigquery
# from oauth2client.service_account import ServiceAccountCredentials

SCOPES = ['https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/bigquery']
SERVICE_ACCOUNT_FILE = 'mykey.json'

credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
delegated_credentials = credentials.with_subject('myserviceaccountt@domain.iam.gserviceaccount.com')

project = 'your-project-id'  # your GCP project id
client = bigquery.Client(credentials=delegated_credentials, project=project)

sql = 'SELECT * FROM `myModel`'
DF = client.query(sql).to_dataframe()
You can try to update your application default credentials from the command line:
gcloud auth application-default login --scopes=https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/cloud-platform
For months I've been using a URL like this, from Perl:
http://finance.yahoo.com/d/quotes.csv?s=$s&f=ynl1 #returns yield, name, price;
Today, 11/1/17, it suddenly returns a 999 error.
Is this a glitch, or has Yahoo terminated the service?
I get the error even if I enter the URL directly into a browser, e.g.:
http://finance.yahoo.com/d/quotes.csv?s=INTC&f=ynl1
so it doesn't seem to be a 'crumb' problem.
Note: This is NOT a question which has been answered in the past!
It was working yesterday. That it happened on the first of the month is suspicious.
As noted in the other answers and elsewhere (e.g. https://stackoverflow.com/questions/47076404/currency-helper-of-yahoo-sorry-unable-to-process-request-at-this-time-erro/47096766#47096766), Yahoo has indeed ceased operation of the Yahoo Finance API. However, as a workaround, you can access a trove of financial information, in JSON format, for a given ticker symbol by doing an HTTPS GET request to https://finance.yahoo.com/quote/SYMBOL (e.g. https://finance.yahoo.com/quote/MSFT); the financial data is embedded in the response as JSON. The following Python 3 script shows how you can parse the individual values you may be interested in:
import requests
import json
symbol = 'MSFT'
url ='https://finance.yahoo.com/quote/' + symbol
resp = requests.get(url)
# parse the section from the html document containing the raw json data that we need
# you can write jsonstr to a file, then open the file in a web browser to browse the structure of the json data
r = str(resp.content, 'utf-8')
i1 = 0
i1 = r.find('root.App.main', i1)
i1 = r.find('{', i1)
i2 = r.find("\n", i1)
i2 = r.rfind(';', i1, i2)
jsonstr = r[i1:i2]
# load the raw json data into a python data object
data = json.loads(jsonstr)
# pull the values that we are interested in
name = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['shortName']
price = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['regularMarketPrice']['raw']
change = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['regularMarketChange']['raw']
shares_outstanding = data['context']['dispatcher']['stores']['QuoteSummaryStore']['defaultKeyStatistics']['sharesOutstanding']['raw']
market_cap = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['marketCap']['raw']
trailing_pe = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['trailingPE']['raw']
earnings_per_share = data['context']['dispatcher']['stores']['QuoteSummaryStore']['defaultKeyStatistics']['trailingEps']['raw']
forward_annual_dividend_rate = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['dividendRate']['raw']
forward_annual_dividend_yield = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['dividendYield']['raw']
# print the values
print('Symbol:', symbol)
print('Name:', name)
print('Price:', price)
print('Change:', change)
print('Shares Outstanding:', shares_outstanding)
print('Market Cap:', market_cap)
print('Trailing PE:', trailing_pe)
print('Earnings Per Share:', earnings_per_share)
print('Forward Annual Dividend Rate:', forward_annual_dividend_rate)
print('Forward_annual_dividend_yield:', forward_annual_dividend_yield)
Yahoo confirmed that they terminated the service:
It has come to our attention that this service is being used in violation of the Yahoo Terms of Service. As such, the service is being discontinued. For all future markets and equities data research, please refer to finance.yahoo.com .
There is still a way to get this data by querying some of the APIs used by the finance.yahoo.com page. I'm not sure whether Yahoo will support it long term as it did the previous API (hopefully it will).
I adapted the method used by https://github.com/pstadler/ticker.sh into the following Python hack that takes a list of symbols from the command line and outputs some of the variables as CSV:
#!/usr/bin/env python
import sys
import time
import requests

if len(sys.argv) < 2:
    print("missing parameters: <symbol> ...")
    exit()

apiEndpoint = "https://query1.finance.yahoo.com/v7/finance/quote"
fields = [
    'symbol',
    'regularMarketVolume',
    'regularMarketPrice',
    'regularMarketDayHigh',
    'regularMarketDayLow',
    'regularMarketTime',
    'regularMarketChangePercent']
fields = ','.join(fields)
symbols = sys.argv[1:]
symbols = ','.join(symbols)
payload = {
    'lang': 'en-US',
    'region': 'US',
    'corsDomain': 'finance.yahoo.com',
    'fields': fields,
    'symbols': symbols}
r = requests.get(apiEndpoint, params=payload)
for i in r.json()['quoteResponse']['result']:
    if 'regularMarketPrice' in i:
        a = []
        a.append(i['symbol'])
        a.append(i['regularMarketPrice'])
        a.append(time.strftime(
            '%Y-%m-%d %H:%M:%S', time.localtime(i['regularMarketTime'])))
        a.append(i['regularMarketChangePercent'])
        a.append(i['regularMarketVolume'])
        a.append("{0:.2f} - {1:.2f}".format(
            i['regularMarketDayLow'], i['regularMarketDayHigh']))
        print(",".join([str(e) for e in a]))
Sample Run:
$ ./getquotePy.py AAPL GOOGL
AAPL,174.5342,2017-11-07 17:21:28,0.1630961,19905458,173.60 - 173.60
GOOGL,1048.6753,2017-11-07 17:21:22,0.5749836,840447,1043.00 - 1043.00
var API = "https://query1.finance.yahoo.com/v7/finance/quote?symbols=AAPL";
$.getJSON(API, function (json) {...});
This call throws the error: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://www.microplan.at/sar' is therefore not allowed access.
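That CORS error is expected: Yahoo's endpoint does not allow cross-origin requests from the browser, so the usual workaround is to call it from your own backend and have the page query that instead. A minimal sketch, not from the answers above, assuming a Flask backend and a hypothetical /quote route:

from flask import Flask, jsonify
import requests

app = Flask(__name__)

@app.route('/quote/<symbol>')  # hypothetical same-origin route for your page to call
def quote(symbol):
    r = requests.get(
        'https://query1.finance.yahoo.com/v7/finance/quote',
        params={'symbols': symbol})
    # Pass the upstream JSON through to the browser; add caching and error
    # handling as needed.
    return jsonify(r.json())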
I want to extract the title of a YouTube video. How can I do this?
Thanks.
The easiest way to obtain information about a YouTube video, as far as I know, is to parse the string retrieved from http://youtube.com/get_video_info?video_id=XXXXXXXX
Using something like PHP's parse_str(), you can obtain a nice array of nearly anything about the video:
$content = file_get_contents("http://youtube.com/get_video_info?video_id=".$id);
parse_str($content, $ytarr);
echo $ytarr['title'];
That will print the title for the video using $id as the video's id.
In Python 3, I found two ways.
1) Without an API key
import urllib.request
import json
import urllib
import pprint

# change to your VideoID, or change the url in params
VideoID = "SZj6rAYkYOg"

params = {"format": "json", "url": "https://www.youtube.com/watch?v=%s" % VideoID}
url = "https://www.youtube.com/oembed"
query_string = urllib.parse.urlencode(params)
url = url + "?" + query_string

with urllib.request.urlopen(url) as response:
    response_text = response.read()
    data = json.loads(response_text.decode())
    pprint.pprint(data)
    print(data['title'])
example results:
{'author_name': 'Google Developers',
'author_url': 'https://www.youtube.com/user/GoogleDevelopers',
'height': 270,
'html': '<iframe width="480" height="270" '
'src="https://www.youtube.com/embed/SZj6rAYkYOg?feature=oembed" '
'frameborder="0" allow="autoplay; encrypted-media" '
'allowfullscreen></iframe>',
'provider_name': 'YouTube',
'provider_url': 'https://www.youtube.com/',
'thumbnail_height': 360,
'thumbnail_url': 'https://i.ytimg.com/vi/SZj6rAYkYOg/hqdefault.jpg',
'thumbnail_width': 480,
'title': 'Google I/O 101: Google APIs: Getting Started Quickly',
'type': 'video',
'version': '1.0',
'width': 480}
Google I/O 101: Google APIs: Getting Started Quickly
2) Using the Google API (requires an API key)
import urllib.request
import json
import urllib
import pprint

APIKEY = "YOUR_GOOGLE_APIKEY"
VideoID = "YOUR_VIDEO_ID"

params = {'id': VideoID, 'key': APIKEY,
          'fields': 'items(id,snippet(channelId,title,categoryId),statistics)',
          'part': 'snippet,statistics'}
url = 'https://www.googleapis.com/youtube/v3/videos'
query_string = urllib.parse.urlencode(params)
url = url + "?" + query_string

with urllib.request.urlopen(url) as response:
    response_text = response.read()
    data = json.loads(response_text.decode())
    pprint.pprint(data)
    print("TITLE: %s " % data['items'][0]['snippet']['title'])
example results:
{'items': [{'id': 'SZj6rAYkYOg',
'snippet': {'categoryId': '28',
'channelId': 'UC_x5XG1OV2P6uZZ5FSM9Ttw',
'title': 'Google I/O 101: Google APIs: Getting '
'Started Quickly'},
'statistics': {'commentCount': '36',
'dislikeCount': '20',
'favoriteCount': '0',
'likeCount': '418',
'viewCount': '65783'}}]}
TITLE: Google I/O 101: Google APIs: Getting Started Quickly
Using JavaScript data API:
var loadInfo = function (videoId) {
    var gdata = document.createElement("script");
    gdata.src = "http://gdata.youtube.com/feeds/api/videos/" + videoId + "?v=2&alt=jsonc&callback=storeInfo";
    var body = document.getElementsByTagName("body")[0];
    body.appendChild(gdata);
};

var storeInfo = function (info) {
    console.log(info.data.title);
};
Then you just need to call loadInfo(videoId).
More information is available in the API documentation.
One way to do this would be to retrieve the video from YouTube as shown here, then extract the title out of the Atom feed sent by YouTube. A sample feed is shown here.
I'll lay out the process as outlined by the YouTube API v3 documentation.
Create or log in to the Google account that you want to be associated with your YouTube API use.
Create a new project at https://console.developers.google.com/apis/credentials.
On the upper left, next to the Google APIs logo, go to Select a project and Create project +.
Wait a moment for the creation to finish.
Make a new API key. You'll need it to access video info under v3.
If you're not already there, go to Credentials under the navigator on the left hand side, APIs and Services > Credentials.
Under the Credentials tab, click Create Credentials and select API key.
Copy the API key to your clipboard.
Providing a video ID and your newly created API key, go to this link to see your work in action: https://www.googleapis.com/youtube/v3/videos?id=<YOUR VIDEO ID HERE>&key=<YOUR API KEY HERE>%20&part=snippet (no angle brackets)
For more info on what you can access, see here: https://developers.google.com/youtube/v3/getting-started#partial. For convenience, I'll copy one of their examples here (Example 4). The fields and part parameters in the URL are key here.
Example
The URL is what you can go to in your browser to check it out. In return, you should get what is shown under "API response:" below.
URL: https://www.googleapis.com/youtube/v3/videos?id=7lCDEYXw3mM&key=YOUR_API_KEY
&fields=items(id,snippet(channelId,title,categoryId),statistics)&part=snippet,statistics
Description: This example modifies the fields parameter from example 3 so that, in the API response, each video resource's snippet object only includes the channelId, title, and categoryId properties.
API response:
{
  "videos": [
    {
      "id": "7lCDEYXw3mM",
      "snippet": {
        "channelId": "UC_x5XG1OV2P6uZZ5FSM9Ttw",
        "title": "Google I/O 101: Q&A On Using Google APIs",
        "categoryId": "28"
      },
      "statistics": {
        "viewCount": "3057",
        "likeCount": "25",
        "dislikeCount": "0",
        "favoriteCount": "17",
        "commentCount": "12"
      }
    }
  ]
}
This gives you the video info in JSON format. If your project accesses this info through JavaScript, you may want to go here next: How to get JSON from URL in Javascript?
I believe the best way is to use YouTube's gdata feed, and then grab the info from the XML that is returned:
http://gdata.youtube.com/feeds/api/videos/6_Ukfpsb8RI
Update:
There is a newer API out now which you should use instead
https://developers.google.com/youtube/v3/getting-started
URL: https://www.googleapis.com/youtube/v3/videos?id=7lCDEYXw3mM&key=YOUR_API_KEY
&fields=items(id,snippet(channelId,title,categoryId),statistics)&part=snippet,statistics
Description: This example modifies the fields parameter from example 3 so that in the API response, each video resource's snippet object only includes the channelId, title, and categoryId properties.
API response:
{
  "videos": [
    {
      "id": "7lCDEYXw3mM",
      "snippet": {
        "channelId": "UC_x5XG1OV2P6uZZ5FSM9Ttw",
        "title": "Google I/O 101: Q&A On Using Google APIs",
        "categoryId": "28"
      },
      "statistics": {
        "viewCount": "3057",
        "likeCount": "25",
        "dislikeCount": "0",
        "favoriteCount": "17",
        "commentCount": "12"
      }
    }
  ]
}
With bash, wget and lynx:
#!/bin/bash
read -e -p "Youtube address? " address
page=$(wget "$address" -O - 2>/dev/null)
title=$(echo "$page" | grep " - ")
title="$(lynx --dump -force-html <(echo "<html><body>
$title
</body></html>")| grep " - ")"
title="${title/* - /}"
echo "$title"
// This is the youtube video URL: http://www.youtube.com/watch?v=nOHHta68DdU
$code = "nOHHta68DdU";
// Get video feed info (xml) from youtube, but only the title | http://php.net/manual/en/function.file-get-contents.php
$video_feed = file_get_contents("http://gdata.youtube.com/feeds/api/videos?v=2&q=".$code."&max-results=1&fields=entry(title)&prettyprint=true");
// xml to object | http://php.net/manual/en/function.simplexml-load-string.php
$video_obj = simplexml_load_string($video_feed);
// Get the title string to a variable
$video_str = $video_obj->entry->title;
// Output
echo $video_str;
If a Python batch-processing script is of interest: I used BeautifulSoup to parse the title out of the HTML, urllib to download the HTML, and unicodecsv to preserve all the characters in the YouTube titles.
The only thing you need to do is place a CSV with a single column named url, containing the URLs of the YouTube videos, in the same folder as the script, name it yt-urls.csv, and run the script. You will get a file yt-urls-titles.csv containing the URLs and their titles.
#!/usr/bin/python
from bs4 import BeautifulSoup
import urllib
import unicodecsv as csv

with open('yt-urls-titles.csv', 'wb') as f:
    resultcsv = csv.DictWriter(f, delimiter=';', quotechar='"', fieldnames=['url', 'title'])
    with open('yt-urls.csv', 'rb') as f:
        inputcsv = csv.DictReader(f, delimiter=';', quotechar='"')
        resultcsv.writeheader()
        for row in inputcsv:
            soup = BeautifulSoup(urllib.urlopen(row['url']).read(), "html.parser")
            resultcsv.writerow({'url': row['url'], 'title': soup.title.string})
If you have youtube-dl, it's as simple as:
youtube-dl --get-title https://www.youtube.com/watch?v=dQw4w9WgXcQ
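If you would rather stay in Python than shell out, youtube-dl can also be used as a library. A minimal sketch, with options kept to the bare minimum (not from the original answer):

import youtube_dl  # pip install youtube_dl

def get_title(url):
    # Extract metadata only; download=False skips downloading the video itself.
    with youtube_dl.YoutubeDL({'quiet': True}) as ydl:
        info = ydl.extract_info(url, download=False)
    return info['title']

print(get_title('https://www.youtube.com/watch?v=dQw4w9WgXcQ'))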
Here's some cut and paste code for ColdFusion:
http://trycf.com/gist/f296d14e456a7c925d23a1282daa0b90
It works on CF9 (and likely earlier versions) using YouTube API v3, which requires an API key.
I left some comments and diag stuff in it, for anyone who wants to dig deeper. Hope it helps someone.
You can use JSON to get all the info about the video:
$jsonURL = file_get_contents("https://www.googleapis.com/youtube/v3/videos?id={Your_Video_ID_Here}&key={Your_API_KEY}&part=snippet");
$json = json_decode($jsonURL);
$vtitle = $json->{'items'}[0]->{'snippet'}->{'title'};
$vdescription = $json->{'items'}[0]->{'snippet'}->{'description'};
$vvid = $json->{'items'}[0]->{'id'};
$vdate = $json->{'items'}[0]->{'snippet'}->{'publishedAt'};
$vthumb = $json->{'items'}[0]->{'snippet'}->{'thumbnails'}->{'high'}->{'url'};
I hope it will solve your problem.
Using Python, I got it with pafy:
import pafy

url = "https://www.youtube.com/watch?v=bMt47wvK6u0"
video = pafy.new(url)
print(video.title)
If you are familiar with Java, try the Jsoup parser.
Document document = Jsoup.connect("http://www.youtube.com/ABDCEF").get();
document.title();
Try this: I am getting the name and URL of each video in a playlist; you can modify this code as per your requirements.
$Playlist = ((Invoke-WebRequest "https://www.youtube.com/watch?v=HKkRbc6W6NA&list=PLz9M61O0WZqSUvHzPHVVC4IcqA8qe5K3r&index=1").Links | Where {$_.class -match "playlist-video"}).href
$Fname = ((Invoke-WebRequest "https://www.youtube.com/watch?v=HKkRbc6W6NA&list=PLz9M61O0WZqSUvHzPHVVC4IcqA8qe5K3r&index=1").Links | Where {$_.class -match "playlist-video"}).outerText
$FinalText = ""
For($i=0; $i -lt $playlist.Length; $i++)
{
    Write-Output("'"+($Fname[$i].split("|")[0]).split("|")[0]+"'+"+"https://www.youtube.com"+$Playlist[$i])
}
JavaX now ships with this function. Showing a video's thumbnail and title, for example, is a two-liner:
SS map = youtubeVideoInfo("https://www.youtube.com/watch?v=4If_vFZdFTk");
showImage(map.get("title"), loadImage(map.get("thumbnail_url")));
Similarly to Matej M, but more simply:
import requests
from bs4 import BeautifulSoup


def get_video_name(id: str):
    """
    Return the name of the video as it appears on YouTube, given the video id.
    """
    r = requests.get(f'https://youtube.com/watch?v={id}')
    r.raise_for_status()
    soup = BeautifulSoup(r.content, "lxml")
    return soup.title.string


if __name__ == '__main__':
    js = get_video_name("RJqimlFcJsM")
    print('\n\n')
    print(js)
I reinvented Porto's excellent answer a little and wrote this snippet in Python:
import urllib, urllib.request, json

input = "C:\\urls.txt"
output = "C:\\tracks.csv"

urls = [line.strip() for line in open(input)]

for url in urls:
    ID = url.split('=')
    VideoID = ID[1]
    params = {"format": "json", "url": "https://www.youtube.com/watch?v=%s" % VideoID}
    url = "https://www.youtube.com/oembed"
    query_string = urllib.parse.urlencode(params)
    url = url + "?" + query_string
    with urllib.request.urlopen(url) as response:
        response_text = response.read()
        try:
            data = json.loads(response_text.decode())
        except ValueError as e:
            continue  # skip faulty url
        if data is not None:
            author = data['author_name'].split(' - ')
            author = author[0].rstrip()
            f = open(output, "a", encoding='utf-8')
            print(author, ',', data['title'], sep="", file=f)
It eats a plain text file with a list of YouTube URLs:
https://www.youtube.com/watch?v=F_Vfgdfgg
https://www.youtube.com/watch?v=RndfgdfN8
...
and returns a CSV file with Artist-Title pairs:
Beyonce,Pretty hurts
Justin Timberlake,Cry me a river
There are two modules that can help you: pafy and youtube-dl. First, install these modules using pip. Pafy uses youtube-dl in the background to fetch the video information; you can also download videos using pafy and youtube-dl.
pip install youtube_dl
pip install pafy
Now follow this code; I assume that you have the URL of a YouTube video.
import pafy


def fetch_yt_video(link):
    video = pafy.new(link)
    print('Video Title is: ', video.title)


fetch_yt_video('https://youtu.be/CLUsplI4xMU')
The output is
Video Title is: Make the perfect resume | For freshers & experienced | Step by step tutorial with free format