Get title from YouTube videos

I want to extract the title of YouTube videos. How can I do this?
Thanks.

The easiest way to obtain information about a YouTube video, as far as I know, is to parse the string retrieved from: http://youtube.com/get_video_info?video_id=XXXXXXXX
Using something like PHP's parse_str(), you can obtain a nice array of nearly everything about the video:
$content = file_get_contents("http://youtube.com/get_video_info?video_id=".$id);
parse_str($content, $ytarr);
echo $ytarr['title'];
That will print the title for the video using $id as the video's id.

In Python 3 I found two ways:
1) Without an API key
import urllib.request
import json
import urllib
import pprint

# change to your VideoID, or change the url in params
VideoID = "SZj6rAYkYOg"

params = {"format": "json", "url": "https://www.youtube.com/watch?v=%s" % VideoID}
url = "https://www.youtube.com/oembed"
query_string = urllib.parse.urlencode(params)
url = url + "?" + query_string

with urllib.request.urlopen(url) as response:
    response_text = response.read()
    data = json.loads(response_text.decode())
    pprint.pprint(data)
    print(data['title'])
Example results:
{'author_name': 'Google Developers',
 'author_url': 'https://www.youtube.com/user/GoogleDevelopers',
 'height': 270,
 'html': '<iframe width="480" height="270" '
         'src="https://www.youtube.com/embed/SZj6rAYkYOg?feature=oembed" '
         'frameborder="0" allow="autoplay; encrypted-media" '
         'allowfullscreen></iframe>',
 'provider_name': 'YouTube',
 'provider_url': 'https://www.youtube.com/',
 'thumbnail_height': 360,
 'thumbnail_url': 'https://i.ytimg.com/vi/SZj6rAYkYOg/hqdefault.jpg',
 'thumbnail_width': 480,
 'title': 'Google I/O 101: Google APIs: Getting Started Quickly',
 'type': 'video',
 'version': '1.0',
 'width': 480}
Google I/O 101: Google APIs: Getting Started Quickly
2) Using the Google API (requires an API key)
import urllib.request
import json
import urllib
import pprint

APIKEY = "YOUR_GOOGLE_APIKEY"
VideoID = "YOUR_VIDEO_ID"

params = {'id': VideoID, 'key': APIKEY,
          'fields': 'items(id,snippet(channelId,title,categoryId),statistics)',
          'part': 'snippet,statistics'}
url = 'https://www.googleapis.com/youtube/v3/videos'
query_string = urllib.parse.urlencode(params)
url = url + "?" + query_string

with urllib.request.urlopen(url) as response:
    response_text = response.read()
    data = json.loads(response_text.decode())
    pprint.pprint(data)
    print("TITLE: %s " % data['items'][0]['snippet']['title'])
Example results:
{'items': [{'id': 'SZj6rAYkYOg',
            'snippet': {'categoryId': '28',
                        'channelId': 'UC_x5XG1OV2P6uZZ5FSM9Ttw',
                        'title': 'Google I/O 101: Google APIs: Getting '
                                 'Started Quickly'},
            'statistics': {'commentCount': '36',
                           'dislikeCount': '20',
                           'favoriteCount': '0',
                           'likeCount': '418',
                           'viewCount': '65783'}}]}
TITLE: Google I/O 101: Google APIs: Getting Started Quickly

Using the JavaScript data API:
var loadInfo = function (videoId) {
    var gdata = document.createElement("script");
    gdata.src = "http://gdata.youtube.com/feeds/api/videos/" + videoId + "?v=2&alt=jsonc&callback=storeInfo";
    var body = document.getElementsByTagName("body")[0];
    body.appendChild(gdata);
};

var storeInfo = function (info) {
    console.log(info.data.title);
};
Then you just need to call loadInfo(videoId).
More information is available in the API documentation.

One way to do this would be to retrieve the video from YouTube as shown here, then extract the title out of the Atom feed sent by YouTube. A sample feed is shown here.
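For illustration, here is a minimal Python sketch of that approach. It assumes the legacy gdata v2 Atom feed (e.g. http://gdata.youtube.com/feeds/api/videos/6_Ukfpsb8RI) is still reachable; that feed has since been retired, so treat this as a sketch of the idea rather than working code:
# Sketch: fetch the legacy gdata v2 Atom feed for a video and read its <title>.
# Assumes the retired feed URL still responds; illustration only.
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = '{http://www.w3.org/2005/Atom}'

def title_from_atom_feed(video_id):
    url = 'http://gdata.youtube.com/feeds/api/videos/' + video_id
    with urllib.request.urlopen(url) as response:
        feed = ET.fromstring(response.read())
    return feed.find(ATOM_NS + 'title').text

print(title_from_atom_feed('6_Ukfpsb8RI'))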

I'll lay out the process as outlined by the YouTube API v3 documentation.
Create or log in to the Google account that you want to be associated with your YouTube API use.
Create a new project at https://console.developers.google.com/apis/credentials.
On the upper left, next to the Google APIs logo, go to Select a project and Create project +.
Wait a moment for the creation to finish.
Make a new API key. You'll need it to access video info under v3.
If you're not already there, go to Credentials under the navigator on the left hand side, APIs and Services > Credentials.
Under the Credentials tab, click Create Credentials and select API key.
Copy the API key to your clipboard.
Providing a video ID and your newly created API key, go to this link to see your work in action: https://www.googleapis.com/youtube/v3/videos?id=<YOUR VIDEO ID HERE>&key=<YOUR API KEY HERE>&part=snippet (no angle brackets)
For more info on what you can access, see here: https://developers.google.com/youtube/v3/getting-started#partial. For convenience, I'll copy one of their examples here (Example 4). The fields and part parameters in the URL are key here.
Example
The URL is what you can visit in your browser to check it out. In return, you should get what's shown under API response.
URL: https://www.googleapis.com/youtube/v3/videos?id=7lCDEYXw3mM&key=YOUR_API_KEY
&fields=items(id,snippet(channelId,title,categoryId),statistics)&part=snippet,statistics
Description: This example modifies the fields parameter from example 3 so that in the API response, each video resource's snippet object only includes the channelId, title, and categoryId properties.
API response:
{
  "videos": [
    {
      "id": "7lCDEYXw3mM",
      "snippet": {
        "channelId": "UC_x5XG1OV2P6uZZ5FSM9Ttw",
        "title": "Google I/O 101: Q&A On Using Google APIs",
        "categoryId": "28"
      },
      "statistics": {
        "viewCount": "3057",
        "likeCount": "25",
        "dislikeCount": "0",
        "favoriteCount": "17",
        "commentCount": "12"
      }
    }
  ]
}
This gives you the video info in JSON format. If your project is to access this info through JavaScript, you may be going here next: How to get JSON from URL in Javascript?.

I believe the best way is to use YouTube's gdata, and then grab the info from the XML that is returned:
http://gdata.youtube.com/feeds/api/videos/6_Ukfpsb8RI
Update:
There is a newer API out now which you should use instead
https://developers.google.com/youtube/v3/getting-started
URL: https://www.googleapis.com/youtube/v3/videos?id=7lCDEYXw3mM&key=YOUR_API_KEY
&fields=items(id,snippet(channelId,title,categoryId),statistics)&part=snippet,statistics
Description: This example modifies the fields parameter from example 3 so that in the API response, each video resource's snippet object only includes the channelId, title, and categoryId properties.
API response:
{
  "videos": [
    {
      "id": "7lCDEYXw3mM",
      "snippet": {
        "channelId": "UC_x5XG1OV2P6uZZ5FSM9Ttw",
        "title": "Google I/O 101: Q&A On Using Google APIs",
        "categoryId": "28"
      },
      "statistics": {
        "viewCount": "3057",
        "likeCount": "25",
        "dislikeCount": "0",
        "favoriteCount": "17",
        "commentCount": "12"
      }
    }
  ]
}

With bash, wget and lynx:
#!/bin/bash
read -e -p "Youtube address? " address
page=$(wget "$address" -O - 2>/dev/null)
title=$(echo "$page" | grep " - ")
title="$(lynx --dump -force-html <(echo "<html><body>
$title
</body></html>")| grep " - ")"
title="${title/* - /}"
echo "$title"

// This is the youtube video URL: http://www.youtube.com/watch?v=nOHHta68DdU
$code = "nOHHta68DdU";
// Get video feed info (xml) from youtube, but only the title | http://php.net/manual/en/function.file-get-contents.php
$video_feed = file_get_contents("http://gdata.youtube.com/feeds/api/videos?v=2&q=".$code."&max-results=1&fields=entry(title)&prettyprint=true");
// xml to object | http://php.net/manual/en/function.simplexml-load-string.php
$video_obj = simplexml_load_string($video_feed);
// Get the title string to a variable
$video_str = $video_obj->entry->title;
// Output
echo $video_str;

In case a Python batch-processing script is appreciated: I used BeautifulSoup to easily parse the title from the HTML, urllib to download the HTML, and the unicodecsv library in order to preserve all the characters of the YouTube title.
The only thing you need to do is place a CSV with a single (named) column url, containing the URLs of the YouTube videos, in the same folder as the script, name it yt-urls.csv, and run the script. You will get a file yt-urls-titles.csv containing the URLs and their titles.
#!/usr/bin/python
from bs4 import BeautifulSoup
import urllib
import unicodecsv as csv

with open('yt-urls-titles.csv', 'wb') as f:
    resultcsv = csv.DictWriter(f, delimiter=';', quotechar='"', fieldnames=['url', 'title'])
    with open('yt-urls.csv', 'rb') as f:
        inputcsv = csv.DictReader(f, delimiter=';', quotechar='"')
        resultcsv.writeheader()
        for row in inputcsv:
            soup = BeautifulSoup(urllib.urlopen(row['url']).read(), "html.parser")
            resultcsv.writerow({'url': row['url'], 'title': soup.title.string})

If you have youtube-dl, it's as simple as:
youtube-dl --get-title https://www.youtube.com/watch?v=dQw4w9WgXcQ
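If you'd rather stay in Python than shell out, the same library exposes this through its Python API; a minimal sketch, assuming the youtube_dl package is installed:
# Sketch: fetch only metadata with youtube_dl's Python API and print the title.
import youtube_dl

with youtube_dl.YoutubeDL({'skip_download': True, 'quiet': True}) as ydl:
    info = ydl.extract_info('https://www.youtube.com/watch?v=dQw4w9WgXcQ', download=False)
print(info['title'])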

Here's some cut and paste code for ColdFusion:
http://trycf.com/gist/f296d14e456a7c925d23a1282daa0b90
It works on CF9 (and likely earlier versions) using YouTube API v3, which requires an API key.
I left some comments and diag stuff in it, for anyone who wants to dig deeper. Hope it helps someone.

You can use JSON to get all the info about the video:
$jsonURL = file_get_contents("https://www.googleapis.com/youtube/v3/videos?id={Your_Video_ID_Here}&key={Your_API_KEY}&part=snippet");
$json = json_decode($jsonURL);
$vtitle = $json->{'items'}[0]->{'snippet'}->{'title'};
$vdescription = $json->{'items'}[0]->{'snippet'}->{'description'};
$vvid = $json->{'items'}[0]->{'id'};
$vdate = $json->{'items'}[0]->{'snippet'}->{'publishedAt'};
$vthumb = $json->{'items'}[0]->{'snippet'}->{'thumbnails'}->{'high'}->{'url'};
I hope it will solve your problem.

Using Python, I got it with pafy:
import pafy

url = "https://www.youtube.com/watch?v=bMt47wvK6u0"
video = pafy.new(url)
print(video.title)

If you are familiar with java, try the Jsoup parser.
Document document = Jsoup.connect("http://www.youtube.com/ABDCEF").get();
document.title();

Try this: it gets the name and URL of each video in a playlist; you can modify this code as per your requirements.
$Playlist = ((Invoke-WebRequest "https://www.youtube.com/watch?v=HKkRbc6W6NA&list=PLz9M61O0WZqSUvHzPHVVC4IcqA8qe5K3r&index=1").Links | Where {$_.class -match "playlist-video"}).href
$Fname = ((Invoke-WebRequest "https://www.youtube.com/watch?v=HKkRbc6W6NA&list=PLz9M61O0WZqSUvHzPHVVC4IcqA8qe5K3r&index=1").Links | Where {$_.class -match "playlist-video"}).outerText
$FinalText=""
For($i=0; $i -lt $Playlist.Length; $i++)
{
    Write-Output("'" + ($Fname[$i].split("|")[0]).split("|")[0] + "'+" + "https://www.youtube.com" + $Playlist[$i])
}

JavaX now ships with this function. Showing a video's thumbnail and title, for example, is a two-liner:
SS map = youtubeVideoInfo("https://www.youtube.com/watch?v=4If_vFZdFTk");
showImage(map.get("title"), loadImage(map.get("thumbnail_url")));
Example

Similarly to Matej M, but more simply:
import requests
from bs4 import BeautifulSoup

def get_video_name(id: str):
    """
    Return the name of the video as it appears on YouTube, given the video id.
    """
    r = requests.get(f'https://youtube.com/watch?v={id}')
    r.raise_for_status()
    soup = BeautifulSoup(r.content, "lxml")
    return soup.title.string

if __name__ == '__main__':
    js = get_video_name("RJqimlFcJsM")
    print('\n\n')
    print(js)

I did a bit of reinvention of Porto's excellent answer here, and wrote this snippet in Python:
import urllib, urllib.request, json

input = "C:\\urls.txt"
output = "C:\\tracks.csv"

urls = [line.strip() for line in open(input)]
for url in urls:
    ID = url.split('=')
    VideoID = ID[1]
    params = {"format": "json", "url": "https://www.youtube.com/watch?v=%s" % VideoID}
    url = "https://www.youtube.com/oembed"
    query_string = urllib.parse.urlencode(params)
    url = url + "?" + query_string
    with urllib.request.urlopen(url) as response:
        response_text = response.read()
        try:
            data = json.loads(response_text.decode())
        except ValueError as e:
            continue  # skip faulty url
        if data is not None:
            author = data['author_name'].split(' - ')
            author = author[0].rstrip()
            f = open(output, "a", encoding='utf-8')
            print(author, ',', data['title'], sep="", file=f)
It eats a plain-text file with a list of YouTube URLs:
https://www.youtube.com/watch?v=F_Vfgdfgg
https://www.youtube.com/watch?v=RndfgdfN8
...
and returns a CSV file with Artist-Title pairs:
Beyonce,Pretty hurts
Justin Timberlake,Cry me a river

There are two modules that can help you: pafy and youtube-dl. First install these modules using pip. pafy uses youtube-dl in the background to fetch the video information; you can also download videos using pafy and youtube-dl.
pip install youtube_dl
pip install pafy
Now you need to follow this code; I assume that you have the URL of a YouTube video.
import pafy

def fetch_yt_video(link):
    video = pafy.new(link)
    print('Video Title is: ', video.title)

fetch_yt_video('https://youtu.be/CLUsplI4xMU')
The output is
Video Title is: Make the perfect resume | For freshers & experienced | Step by step tutorial with free format

Related

Scrape from website with all same span name

Is it possible to scrape the "likes number" and "post number" from this website and import the data into a Google Sheet?
When I try, I get empty data, since the spans for those values are basically all the same…
Thanks for the help.
Edited:
Since you also want to push that data to a Google Sheet and read it back from there, I came up with the solution below, which you can modify according to your needs.
First you need to install the gspread library and follow this tutorial https://gspread.readthedocs.io/en/latest/oauth2.html to get the credentials to access Google Sheets via the API, then follow the updated code below.
Your sheet should be like this:
Code:
import requests
import gspread

headers = {'Accept': 'application/json', 'app-token': '33d57ade8c02dbc5a333db99ff9ae26a'}
gc = gspread.service_account(filename="credentials.json")
sh = gc.open("data")

for rownumber, rowvalues in enumerate(sh.sheet1.get_all_values(), 1):
    if len(rowvalues) == 2:
        if rowvalues[1] == '':
            # pass along the cookie jar from the init call
            cookies = requests.post("https://onlyfans.com/api2/v2/init", headers=headers).cookies
            data = requests.get(f"https://onlyfans.com/api2/v2/users/{rowvalues[0]}", headers=headers, cookies=cookies)
            if data.status_code == 200:
                data = data.json()
                sh.sheet1.update_cell(rownumber, 2, data["postsCount"])
            else:
                print(f"Check : {rowvalues}")
    else:
        cookies = requests.post("https://onlyfans.com/api2/v2/init", headers=headers).cookies
        data = requests.get(f"https://onlyfans.com/api2/v2/users/{rowvalues[0]}", headers=headers, cookies=cookies)
        if data.status_code == 200:
            data = data.json()
            sh.sheet1.update_cell(rownumber, 2, data["postsCount"])
    print(f"{rownumber} Processed")
Once you run this code you will see the data has been updated in the Google Sheet, but before running this script, follow the URL provided or else you will end up with errors.
Updated Gsheets:
Old:
Looking at the network logs of that website, I was able to extract your desired data with the requests library and some of their API calls; you can check the data.json() dictionary for other data if required.
Follow the code below.
import requests

headers = {'Accept': 'application/json', 'app-token': '33d57ade8c02dbc5a333db99ff9ae26a'}
# pass along the cookie jar from the init call
cookies = requests.post("https://onlyfans.com/api2/v2/init", headers=headers).cookies
data = requests.get("https://onlyfans.com/api2/v2/users/elettra_pink", headers=headers, cookies=cookies)
if data.status_code == 200:
    data = data.json()
    print(f'Posts:{data["postsCount"]}\nPhotosCount:{data["photosCount"]}\nVideosCount:{data["videosCount"]}\nFavoritedCount:{data["favoritedCount"]}\nSubscribersCount:{data["subscribersCount"]}')
Output:
Let me know if you have any questions :)

How to make a flask website that generates dynamic google sheets

I'm helping an organization out, and they wish to have a website in which you input some numbers, and a Google Sheets link should appear in which the numbers are put into some formula to give a result. For example, if you put in 5, 5, 3 with the formula (Num1 + Num2) * Num3, it would show a table in which the first two numbers are added and then multiplied by the third:
Num1 Num2 Num3 Result
5 5 3 30
I have searched GitHub and the internet but I cannot seem to find a library that can create a Google Sheet from a website with given data input into a formula. Most stuff I found used the gsheet API and only modified an existing sheet. I found a Python (Flask) library called xlsxwriter and I was wondering if there is any way to convert an .xlsx to an online Google Sheet or if possible to just make a Google Sheet from my website.
(My website is in Flask as of right now, but I literally have no backend yet, so if someone knows of a library in another web framework, I am willing to switch.)
Thank you, John D
Maybe I am missing something here but the regular Google Sheets API (which is available for Python) allows you to create blank spreadsheets with a specified title.
spreadsheet = {
    'properties': {
        'title': title
    }
}
spreadsheet = service.spreadsheets().create(body=spreadsheet,
                                            fields='spreadsheetId').execute()
print('Spreadsheet ID: {0}'.format(spreadsheet.get('spreadsheetId')))
https://developers.google.com/sheets/api/guides/create
https://github.com/gsuitedevs/python-samples/blob/master/sheets/snippets/spreadsheet_snippets.py
https://developers.google.com/sheets/api/quickstart/python
Here is a more complete sample based on the Google Quickstarts examples:
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

# If modifying these scopes, delete the file token.pickle! Adjust scopes as needed
SCOPES = ['https://www.googleapis.com/auth/spreadsheets']

def main():
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    service = build('sheets', 'v4', credentials=creds)

    # Call the Sheets API
    spreadsheet = {
        'properties': {
            'title': 'New Test Sheet 2'
        }
    }
    spreadsheet = service.spreadsheets().create(body=spreadsheet,
                                                fields='spreadsheetId').execute()
    # print('Spreadsheet ID: {0}'.format(spreadsheet.get('spreadsheetId')))
    spreadsheet_id = spreadsheet.get('spreadsheetId')

    range_name = 'Sheet1!A1:D5'
    body = {
        "majorDimension": "ROWS",
        "values": [
            ["Item", "Cost", "Stocked", "Ship Date"],
            ["Wheel", "$20.50", "4", "3/1/2016"],  # new row
            ["Door", "$15", "2", "3/15/2016"],
            ["Engine", "$100", "1", "30/20/2016"],
            ["Totals", "=SUM(B2:B4)", "=SUM(C2:C4)", "=MAX(D2:D4)"]
        ],
    }
    result = service.spreadsheets().values().update(
        spreadsheetId=spreadsheet_id,
        range=range_name,
        body=body,
        valueInputOption='USER_ENTERED'
    ).execute()
    print(result)

if __name__ == '__main__':
    main()

Has Yahoo suddenly today terminated its finance download API?

For months I've been using a url like this, from perl:
http://finance.yahoo.com/d/quotes.csv?s=$s&f=ynl1 #returns yield, name, price;
Today, 11/1/17, it suddenly returns a 999 error.
Is this a glitch, or has Yahoo terminated the service?
I get the error even if I enter the URL directly into a browser as, eg:
http://finance.yahoo.com/d/quotes.csv?s=INTC&f=ynl1
so it doesn't seem to be a 'crumb' problem.
Note: This is NOT a question which has been answered in the past! It was working yesterday. That it happened on the first of the month is suspicious.
As noted in the other answers and elsewhere (e.g. https://stackoverflow.com/questions/47076404/currency-helper-of-yahoo-sorry-unable-to-process-request-at-this-time-erro/47096766#47096766), Yahoo has indeed ceased operation of the Yahoo Finance API.
However, as a workaround, you can access a trove of financial information, in JSON format, for a given ticker symbol, by doing an HTTPS GET request to: https://finance.yahoo.com/quote/SYMBOL (e.g. https://finance.yahoo.com/quote/MSFT). If you do a GET request to the above URL, you'll see that the financial data is contained within the response in JSON format. The following Python 3 script shows how you can parse individual values that you may be interested in:
import requests
import json
symbol = 'MSFT'
url ='https://finance.yahoo.com/quote/' + symbol
resp = requests.get(url)
# parse the section from the html document containing the raw json data that we need
# you can write jsonstr to a file, then open the file in a web browser to browse the structure of the json data
r = str(resp.content, 'utf-8')
i1 = 0
i1 = r.find('root.App.main', i1)
i1 = r.find('{', i1)
i2 = r.find("\n", i1)
i2 = r.rfind(';', i1, i2)
jsonstr = r[i1:i2]
# load the raw json data into a python data object
data = json.loads(jsonstr)
# pull the values that we are interested in
name = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['shortName']
price = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['regularMarketPrice']['raw']
change = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['regularMarketChange']['raw']
shares_outstanding = data['context']['dispatcher']['stores']['QuoteSummaryStore']['defaultKeyStatistics']['sharesOutstanding']['raw']
market_cap = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['marketCap']['raw']
trailing_pe = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['trailingPE']['raw']
earnings_per_share = data['context']['dispatcher']['stores']['QuoteSummaryStore']['defaultKeyStatistics']['trailingEps']['raw']
forward_annual_dividend_rate = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['dividendRate']['raw']
forward_annual_dividend_yield = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['dividendYield']['raw']
# print the values
print('Symbol:', symbol)
print('Name:', name)
print('Price:', price)
print('Change:', change)
print('Shares Outstanding:', shares_outstanding)
print('Market Cap:', market_cap)
print('Trailing PE:', trailing_pe)
print('Earnings Per Share:', earnings_per_share)
print('Forward Annual Dividend Rate:', forward_annual_dividend_rate)
print('Forward_annual_dividend_yield:', forward_annual_dividend_yield)
Yahoo confirmed that they terminated the service:
It has come to our attention that this service is being used in violation of the Yahoo Terms of Service. As such, the service is being discontinued. For all future markets and equities data research, please refer to finance.yahoo.com .
There is still a way to get this data by querying some APIs used by the finance.yahoo.com page. Not sure if Yahoo will support it long term as it did the previous API (hopefully they will).
I adapted the method used by https://github.com/pstadler/ticker.sh into the following python hack that takes a list of symbols from the command line and outputs some of the variables as a csv:
#!/usr/bin/env python
import sys
import time
import requests

if len(sys.argv) < 2:
    print("missing parameters: <symbol> ...")
    exit()

apiEndpoint = "https://query1.finance.yahoo.com/v7/finance/quote"
fields = [
    'symbol',
    'regularMarketVolume',
    'regularMarketPrice',
    'regularMarketDayHigh',
    'regularMarketDayLow',
    'regularMarketTime',
    'regularMarketChangePercent']
fields = ','.join(fields)
symbols = sys.argv[1:]
symbols = ','.join(symbols)

payload = {
    'lang': 'en-US',
    'region': 'US',
    'corsDomain': 'finance.yahoo.com',
    'fields': fields,
    'symbols': symbols}

r = requests.get(apiEndpoint, params=payload)

for i in r.json()['quoteResponse']['result']:
    if 'regularMarketPrice' in i:
        a = []
        a.append(i['symbol'])
        a.append(i['regularMarketPrice'])
        a.append(time.strftime(
            '%Y-%m-%d %H:%M:%S', time.localtime(i['regularMarketTime'])))
        a.append(i['regularMarketChangePercent'])
        a.append(i['regularMarketVolume'])
        a.append("{0:.2f} - {1:.2f}".format(
            i['regularMarketDayLow'], i['regularMarketDayHigh']))
        print(",".join([str(e) for e in a]))
Sample Run:
$ ./getquotePy.py AAPL GOOGL
AAPL,174.5342,2017-11-07 17:21:28,0.1630961,19905458,173.60 - 173.60
GOOGL,1048.6753,2017-11-07 17:21:22,0.5749836,840447,1043.00 - 1043.00
Note: calling this endpoint from the browser, e.g.
var API = "https://query1.finance.yahoo.com/v7/finance/quote?symbols=AAPL";
$.getJSON(API, function (json) {...});
throws this error: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://www.microplan.at/sar' is therefore not allowed access.

I am not able to insert full url in Kingsoft Writer

Here is the URL which I have to insert / make a hyperlink of:
http://192.168.10.183:8080/xyzadsfgrghrh//Request?Key=eform_renderer&OwnerType=Prj&OwnerID=79061&ItemType=Aci&ItemID=20032785&ActionToPerform=View&PopupMode=Y&WindowOpenOnSave=Y&ALLOWREADACCESS=Y&FromECRGetUrl=Y&GetUrlReqParams=&ParentItemId=20032785&childProcessId=qq0k5l04Gd92FFB384jODPfA93D93D.
The limit is 255 characters, so how can I insert such a long URL in Kingsoft Writer?
To get a shortened URL dynamically from the Google API, first go here and create your key, or alternatively create one from the credentials page for authorization, so that Google can verify you and send back the shortened URL.
Here is the sample code in Python for how to proceed:
import requests
url = "https://www.googleapis.com/urlshortener/v1/url?key=YOUR_KEY"
headers = { 'Content-Type': 'application/json'}
payload = {"longUrl": "YOUR URL"}
r = requests.post(url, headers= headers, json=payload)
After success you will receive a response like the one mentioned by Google:
{
  "kind": "urlshortener#url",
  "id": "YOUR SHORTENED URL",
  "longUrl": "YOUR LONG URL"
}
Hope it helps.

YouTube API to fetch all videos on a channel

We need a video list by channel name of YouTube (using the API).
We can get a channel list (only channel name) by using the below API:
https://gdata.youtube.com/feeds/api/channels?v=2&q=tendulkar
Below are direct links to channels:
https://www.youtube.com/channel/UCqAEtEr0A0Eo2IVcuWBfB9g
Or
WWW.YouTube.com/channel/HC-8jgBP-4rlI
Now, we need videos of channel >> UCqAEtEr0A0Eo2IVcuWBfB9g or HC-8jgBP-4rlI.
We tried
https://gdata.youtube.com/feeds/api/videos?v=2&uploader=partner&User=UC7Xayrf2k0NZiz3S04WuDNQ
https://gdata.youtube.com/feeds/api/videos?v=2&uploader=partner&q=UC7Xayrf2k0NZiz3S04WuDNQ
But, it does not help.
We need all the videos posted on the channel. Videos uploaded to a channel can be from multiple users thus I don't think providing a user parameter would help...
You need to look at the YouTube Data API. You will find there documentation about how the API can be accessed. You can also find client libraries.
You could also make the requests yourself. Here is an example URL that retrieves the latest videos from a channel:
https://www.googleapis.com/youtube/v3/search?key={your_key_here}&channelId={channel_id_here}&part=snippet,id&order=date&maxResults=20
After that you will receive a JSON with video ids and details, and you can construct your video URL like this:
http://www.youtube.com/watch?v={video_id_here}
First, you need to get the ID of the playlist that represents the uploads from the user/channel:
https://developers.google.com/youtube/v3/docs/channels/list#try-it
You can specify the username with the forUsername={username} param, or specify mine=true to get your own (you need to authenticate first). Include part=contentDetails to see the playlists.
GET https://www.googleapis.com/youtube/v3/channels?part=contentDetails&forUsername=jambrose42&key={YOUR_API_KEY}
In the result, "relatedPlaylists" will include the "likes" and "uploads" playlists. Grab that "uploads" playlist ID.
Also note the uploads playlist id is your channelId with its UC prefix swapped for UU (a quick sketch of this follows below).
Next, get a list of videos in that playlist:
https://developers.google.com/youtube/v3/docs/playlistItems/list#try-it
Just drop in the playlistId!
GET https://www.googleapis.com/youtube/v3/playlistItems?part=snippet%2CcontentDetails&maxResults=50&playlistId=UUpRmvjdu3ixew5ahydZ67uA&key={YOUR_API_KEY}
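As a quick sketch of that UU/UC note (the channel id here is just the example id from the question; no API call is needed for this step):
# Sketch: derive the uploads playlist id from a channel id by swapping
# the leading "UC" for "UU", per the note above. Example channel id only.
channel_id = 'UCqAEtEr0A0Eo2IVcuWBfB9g'
uploads_playlist_id = 'UU' + channel_id[2:]   # -> 'UUqAEtEr0A0Eo2IVcuWBfB9g'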
Here is a video from Google Developers showing how to list all videos in a channel in v3 of the YouTube API.
There are two steps:
Query Channels to get the "uploads" Id. eg https://www.googleapis.com/youtube/v3/channels?id={channel Id}&key={API key}&part=contentDetails
Use this "uploads" Id to query PlaylistItems to get the list of videos. eg https://www.googleapis.com/youtube/v3/playlistItems?playlistId={"uploads" Id}&key={API key}&part=snippet&maxResults=50
To get a channels list:
Get channels list by forUsername:
https://www.googleapis.com/youtube/v3/channels?part=snippet,contentDetails,statistics&forUsername=Apple&key=
Get channels list by channel id:
https://www.googleapis.com/youtube/v3/channels/?part=snippet,contentDetails,statistics&id=UCE_M8A5yxnLfW0KghEeajjw&key=
Get channel sections:
https://www.googleapis.com/youtube/v3/channelSections?part=snippet,contentDetails&channelId=UCE_M8A5yxnLfW0KghEeajjw&key=
To get playlists:
Get playlists by channel id:
https://www.googleapis.com/youtube/v3/playlists?part=snippet,contentDetails&channelId=UCq-Fj5jknLsUf-MWSy4_brA&maxResults=50&key=
Get playlists by channel id with pageToken:
https://www.googleapis.com/youtube/v3/playlists?part=snippet,contentDetails&channelId=UCq-Fj5jknLsUf-MWSy4_brA&maxResults=50&key=&pageToken=CDIQAA
To get playlist items:
Get playlist items list by playlistId:
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet,contentDetails&maxResults=25&playlistId=PLHFlHpPjgk70Yv3kxQvkDEO5n5tMQia5I&key=
To get videos:
Get videos list by video id:
https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,statistics&id=YxLCwfA1cLw&key=
Get videos list by multiple video ids:
https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,statistics&id=YxLCwfA1cLw,Qgy6LaO3SB0,7yPJXGO2Dcw&key=
To get a comments list:
Get comment list by video id:
https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&videoId=el****kQak&key=A**********k
Get comment list by channel id:
https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&channelId=U*****Q&key=AI********k
Get comment list by allThreadsRelatedToChannelId:
https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&allThreadsRelatedToChannelId=UC*****ntcQ&key=AI*****k
All of these APIs use the GET method.
Note that based on a channel id alone we can't get all videos directly; that's the important point here.
For integration: https://developers.google.com/youtube/v3/quickstart/ios?ver=swift
Here is the code that will return all video ids under your channel
<?php
$baseUrl = 'https://www.googleapis.com/youtube/v3/';
// https://developers.google.com/youtube/v3/getting-started
$apiKey = 'API_KEY';
// If you don't know the channel ID see below
$channelId = 'CHANNEL_ID';

$params = [
    'id' => $channelId,
    'part' => 'contentDetails',
    'key' => $apiKey
];
$url = $baseUrl . 'channels?' . http_build_query($params);
$json = json_decode(file_get_contents($url), true);

$playlist = $json['items'][0]['contentDetails']['relatedPlaylists']['uploads'];

$params = [
    'part' => 'snippet',
    'playlistId' => $playlist,
    'maxResults' => '50',
    'key' => $apiKey
];
$url = $baseUrl . 'playlistItems?' . http_build_query($params);
$json = json_decode(file_get_contents($url), true);

$videos = [];
foreach($json['items'] as $video)
    $videos[] = $video['snippet']['resourceId']['videoId'];

while(isset($json['nextPageToken'])){
    $nextUrl = $url . '&pageToken=' . $json['nextPageToken'];
    $json = json_decode(file_get_contents($nextUrl), true);
    foreach($json['items'] as $video)
        $videos[] = $video['snippet']['resourceId']['videoId'];
}

print_r($videos);
Note: You can get the channel id at https://www.youtube.com/account_advanced after logging in.
Below is a Python alternative that does not require any special packages. By providing the channel id it returns a list of video links for that channel. Please note that you need an API Key for it to work.
import urllib
import json

def get_all_video_in_channel(channel_id):
    api_key = 'YOUR_API_KEY'

    base_video_url = 'https://www.youtube.com/watch?v='
    base_search_url = 'https://www.googleapis.com/youtube/v3/search?'

    first_url = base_search_url + 'key={}&channelId={}&part=snippet,id&order=date&maxResults=25'.format(api_key, channel_id)

    video_links = []
    url = first_url
    while True:
        inp = urllib.urlopen(url)
        resp = json.load(inp)

        for i in resp['items']:
            if i['id']['kind'] == "youtube#video":
                video_links.append(base_video_url + i['id']['videoId'])

        try:
            next_page_token = resp['nextPageToken']
            url = first_url + '&pageToken={}'.format(next_page_token)
        except KeyError:
            break

    return video_links
Thanks to the references shared here and elsewhere, I've made an online script / tool that one can use to obtain all videos of a channel.
It combines API calls to youtube.channels.list, playlistItems, videos. It uses recursive functions to make the asynchronous callbacks run the next iteration upon getting a valid response.
This also serves to limit the actual number of requests made at a time, hence keeping you safe from violating YouTube API rules. Sharing shortened snippets and then a link to the full code. I got around the 50 max results per call limitation by using the nextPageToken value that comes in the response to fetch the next 50 results and so on.
function getVideos(nextPageToken, vidsDone, params) {
    $.getJSON("https://www.googleapis.com/youtube/v3/playlistItems", {
        key: params.accessKey,
        part: "snippet",
        maxResults: 50,
        playlistId: params.playlistId,
        fields: "items(snippet(publishedAt, resourceId/videoId, title)), nextPageToken",
        pageToken: (nextPageToken || '')
    },
    function(data) {
        // commands to process JSON variable, extract the 50 videos info
        if (vidsDone < params.vidslimit) {
            // Recursive: the function is calling itself if
            // all videos haven't been loaded yet
            getVideos(data.nextPageToken, vidsDone, params);
        }
        else {
            // Closing actions to do once we have listed the videos needed.
        }
    });
}
This got a basic listing of the videos, including id, title, date of publishing and similar. But to get more detail of each video like view counts and likes, one has to make API calls to videos.
// Looping through an array of video id's
function fetchViddetails(i) {
    $.getJSON("https://www.googleapis.com/youtube/v3/videos", {
        key: document.getElementById("accesskey").value,
        part: "snippet,statistics",
        id: vidsList[i]
    }, function(data) {
        // Commands to process JSON variable, extract the video
        // information and push it to a global array
        if (i < vidsList.length - 1) {
            fetchViddetails(i + 1); // Recursive: calls itself if the
                                    // list isn't over.
        }
    });
}
See the full code here, and live version here. (Edit: fixed github link)
Edit: Dependencies: JQuery, Papa.parse
Short answer:
Here's a library called scrapetube that can help with that.
pip install scrapetube
import scrapetube
import simplejson as json

videos = scrapetube.get_channel("UC9-y-6csu5WGm29I7JiwpnA")

for video in videos:
    print(video['videoId'])
    print(video['title']['runs'][0]['text'])
    print(video['publishedTimeText']['simpleText'])
    print('\r\n')
    # DEBUG: print(json.dumps(video))
Long answer:
The module mentioned above was created by me due to a lack of any other solutions. Here's what I tried:
Selenium. It worked but had three big drawbacks: 1. it requires a web browser and driver to be installed; 2. it has big CPU and memory requirements; 3. it can't handle big channels.
Using youtube-dl. Like this:
import youtube_dl

youtube_dl_options = {
    'skip_download': True,
    'ignoreerrors': True
}
with youtube_dl.YoutubeDL(youtube_dl_options) as ydl:
    videos = ydl.extract_info(f'https://www.youtube.com/channel/{channel_id}/videos')
This also works for small channels, but for bigger ones I would get blocked by YouTube for making so many requests in such a short time (because youtube-dl downloads more info for every video in the channel).
So I made the library scrapetube, which uses the web API to get all the videos.
Try something like the following; it may help you.
https://gdata.youtube.com/feeds/api/videos?author=cnn&v=2&orderby=updated&alt=jsonc&q=news
Here, in author you can specify your channel name, and in "q" you can give your search keyword.
Since everyone answering this question has problems due to the 500-video limit, here's an alternative solution using youtube_dl in Python 3. Also, no API key is needed.
Install youtube_dl: sudo pip3 install youtube-dl
Find out your target channel's channel id. The ID is going to start with UC. Replace the C for Channel with U for Upload (i.e. UU...), this is the upload playlist.
Use the playlist downloader feature from youtube-dl. Ideally you do NOT want to download every video in the playlist, which is the default, but only the metadata.
Example (warning -- takes tens of minutes):
import youtube_dl, pickle

# UCVTyTA7-g9nopHeHbeuvpRA is the channel id (1517+ videos)
PLAYLIST_ID = 'UUVTyTA7-g9nopHeHbeuvpRA'  # Late Night with Seth Meyers

with youtube_dl.YoutubeDL({'ignoreerrors': True}) as ydl:
    playd = ydl.extract_info(PLAYLIST_ID, download=False)

with open('playlist.pickle', 'wb') as f:
    pickle.dump(playd, f, pickle.HIGHEST_PROTOCOL)

vids = [vid for vid in playd['entries'] if 'A Closer Look' in vid['title']]
print(sum('Trump' in vid['title'] for vid in vids), '/', len(vids))
Just in three steps:
Subscriptions: list ->
https://www.googleapis.com/youtube/v3/subscriptions?part=snippet&maxResults=50&mine=true&access_token={oauth_token}
Channels: list ->
https://www.googleapis.com/youtube/v3/channels?part=contentDetails&id={channel_id}&key={YOUR_API_KEY}
PlaylistItems: list ->
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&playlistId={playlist_id}&key={YOUR_API_KEY}
Recently I had to retrieve all videos from a channel, and according to YouTube developer documentation:
https://developers.google.com/youtube/v3/docs/playlistItems/list
function playlistItemsListByPlaylistId($service, $part, $params) {
    $params = array_filter($params);
    $response = $service->playlistItems->listPlaylistItems(
        $part,
        $params
    );
    print_r($response);
}

playlistItemsListByPlaylistId($service,
    'snippet,contentDetails',
    array('maxResults' => 25, 'playlistId' => 'id of "uploads" playlist'));
Where $service is your Google_Service_YouTube object.
So you have to fetch information from the channel to retrieve the "uploads" playlist that actually has all the videos uploaded by the channel: https://developers.google.com/youtube/v3/docs/channels/list
If you are new to this API, I highly recommend switching the code sample from the default snippet to the full sample.
So the basic code to retrieve all videos from a channel can be:
class YouTube
{
    const DEV_KEY = 'YOUR_DEVELOPER_KEY';

    private $client;
    private $youtube;
    private $lastChannel;

    public function __construct()
    {
        $this->client = new Google_Client();
        $this->client->setDeveloperKey(self::DEV_KEY);
        $this->youtube = new Google_Service_YouTube($this->client);
        $this->lastChannel = false;
    }

    public function getChannelInfoFromName($channel_name)
    {
        if ($this->lastChannel && $this->lastChannel['modelData']['items'][0]['snippet']['title'] == $channel_name)
        {
            return $this->lastChannel;
        }
        $this->lastChannel = $this->youtube->channels->listChannels('snippet, contentDetails, statistics', array(
            'forUsername' => $channel_name,
        ));
        return ($this->lastChannel);
    }

    public function getVideosFromChannelName($channel_name, $max_result = 5)
    {
        $this->getChannelInfoFromName($channel_name);
        $params = [
            'playlistId' => $this->lastChannel['modelData']['items'][0]['contentDetails']['relatedPlaylists']['uploads'],
            'maxResults' => $max_result,
        ];
        return ($this->youtube->playlistItems->listPlaylistItems('snippet,contentDetails', $params));
    }
}

$yt = new YouTube();
echo '<pre>' . print_r($yt->getVideosFromChannelName('CHANNEL_NAME'), true) . '</pre>';
Using API version 2, which is deprecated, the URL for uploads (of channel UCqAEtEr0A0Eo2IVcuWBfB9g) is:
https://gdata.youtube.com/feeds/users/UCqAEtEr0A0Eo2IVcuWBfB9g/uploads
There is an API version 3.
From https://stackoverflow.com/a/65440501/2585501:
This method is especially useful if a) the channel has more than 50 videos, or if b) you want the YouTube video ids formatted as a flat txt list:
Obtain a Youtube API v3 key (see https://stackoverflow.com/a/65440324/2585501)
Obtain the Youtube Channel ID of the channel (see https://stackoverflow.com/a/16326307/2585501)
Obtain the Uploads Playlist ID of the channel: https://www.googleapis.com/youtube/v3/channels?id={channel Id}&key={API key}&part=contentDetails (based on https://www.youtube.com/watch?v=RjUlmco7v2M)
Install youtube-dl (e.g. pip3 install --upgrade youtube-dl or sudo apt-get install youtube-dl)
Download the Uploads Playlist using youtube-dl: youtube-dl -j --flat-playlist "https://<yourYoutubePlaylist>" | jq -r '.id' | sed 's_^_https://youtu.be/_' > videoList.txt (see https://superuser.com/questions/1341684/youtube-dl-how-download-only-the-playlist-not-the-files-therein)
Posting long after the original question was asked, but I made a python package that does this using a very simple API. It gets all the videos uploaded to a channel, but I'm not sure about this part (included in the original question):
Videos uploaded to a channel can be from multiple users thus I don't think providing a user parameter would help...
Maybe YouTube changed in the 8 years since this question was posted, but if it didn't, the package I made might not cover this case.
To use the API:
pip3 install -U yt-videos-list # macOS
pip install -U yt-videos-list # Windows
# if that doesn't work, try
python3 -m pip install -U yt-videos-list # macOS
python -m pip install -U yt-videos-list # Windows
Then open up a python interpreter
python3 # macOS
python # Windows
and run the program:
from yt_videos_list import ListCreator
lc = ListCreator()
help(lc) # display API information - shows available parameters and functions
my_url = 'https://www.youtube.com/user/1veritasium'
lc.create_list_for(url=my_url)
Python documentation (will be updated most frequently, so check this page for updates!)
Repository homepage
PyPI page
Sample solution in Python. Help taken from this video: video.
Like many other answers, the uploads id is to be retrieved from the channel id first.
import urllib.request
import json

key = "YOUR_YOUTUBE_API_v3_BROWSER_KEY"

# List of channels: state whether you are passing a channel id or a username - "id" or "forUsername"
ytids = [["bbcnews", "forUsername"], ["UCjq4pjKj9X4W9i7UnYShpVg", "id"]]

newstitles = []
for ytid, ytparam in ytids:
    urld = "https://www.googleapis.com/youtube/v3/channels?part=contentDetails&" + ytparam + "=" + ytid + "&key=" + key
    with urllib.request.urlopen(urld) as url:
        datad = json.loads(url.read())
    uploadsdet = datad['items']
    # get upload id from channel id
    uploadid = uploadsdet[0]['contentDetails']['relatedPlaylists']['uploads']
    # retrieve list
    urld = "https://www.googleapis.com/youtube/v3/playlistItems?part=snippet%2CcontentDetails&maxResults=50&playlistId=" + uploadid + "&key=" + key
    with urllib.request.urlopen(urld) as url:
        datad = json.loads(url.read())
    for data in datad['items']:
        ntitle = data['snippet']['title']
        nlink = data['contentDetails']['videoId']
        newstitles.append([nlink, ntitle])

for link, title in newstitles:
    print(link, title)
That's my Python solution, using Google API.
Observations:
Create a .env file to store your API developer key, and put it in your .gitignore file.
The parameter "forUsername" should be set to the name of the YouTube channel (username). Alternatively, you can use the channel id, setting the parameter "id" instead of "forUsername".
The object "playlistItem" gives you access to each video. I'm showing only its title, but there are many other properties.
import os
import googleapiclient.discovery
from decouple import config

def main():
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"

    api_service_name = "youtube"
    api_version = "v3"
    DEVELOPER_KEY = config('API_KEY')

    youtube = googleapiclient.discovery.build(
        api_service_name, api_version, developerKey=DEVELOPER_KEY)

    request = youtube.channels().list(
        part="contentDetails",
        forUsername="username",
        # id="oiwuereru8987",
    )
    response = request.execute()

    for item in response['items']:
        playlistId = item['contentDetails']['relatedPlaylists']['uploads']

        nextPageToken = ''
        while nextPageToken is not None:
            playlistResponse = youtube.playlistItems().list(
                part='snippet',
                playlistId=playlistId,
                maxResults=25,
                pageToken=nextPageToken
            )
            playlistResponse = playlistResponse.execute()
            print(playlistResponse.keys())
            for idx, playlistItem in enumerate(playlistResponse['items']):
                print(idx, playlistItem['snippet']['title'])
            if 'nextPageToken' in playlistResponse.keys():
                nextPageToken = playlistResponse['nextPageToken']
            else:
                nextPageToken = None

if __name__ == "__main__":
    main()
Example for the .env file
API_KEY=<Key_Here>
Using the gapi JavaScript API, you can do this
<script src="https://apis.google.com/js/api.js"></script>
const start = () => {
  gapi.client
    .init({
      apiKey: "your_youtubeApiKey",
      discoveryDocs: ["https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest"],
      scope: "https://www.googleapis.com/auth/youtube.readonly",
    })
    .then(() => {
      console.log("gapi.client initiated");
    })
    .then(() =>
      gapi.client.youtube.channels.list({
        part: "snippet,contentDetails,statistics",
        id: "youtube_channelId",
        // forUsername: 'Bankless',
      })
    )
    .then(
      (res) =>
        // get the youtube related playlist id
        res.result.items[0].contentDetails.relatedPlaylists.uploads
    )
    .then((playlistId) =>
      gapi.client.youtube.playlistItems.list({
        part: "snippet",
        playlistId,
        maxResults: 50,
      })
    )
    .then((res) =>
      // get youtube videos snippets
      res.result.items.map((item) => item.snippet)
    )
    .then((snippets) =>
      snippets.map((snippet) => {
        const { title, description, resourceId } = snippet;
        const { videoId } = resourceId;
        return { title, description, videoId };
      })
    )
    .then((videos) => {
      console.log(videos);
    })
    .catch((err) => console.error(err));
};

gapi.load("client", start);
Docs:
https://github.com/google/google-api-javascript-client
https://developers.google.com/youtube/v3/guides/auth/client-side-web-apps#callinganapi
You have to get the channel_id of the video you want to get the data from.
To get the channel_id using the video_id, you can use the videos:list endpoint of the YouTube Data API - add the video_id in the id parameter. Example.
Then, with the channel_id, change its second character to "U": this modified id is the uploads playlist of that YouTube channel.
With this uploads playlist_id, you can use the playlistItems:list endpoint of the YouTube Data API to retrieve all the uploaded videos from the channel. In the part parameter add "id,snippet,contentDetails,status", in playlistId add the modified channel id, and then execute.
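A short Python sketch of that flow, assuming a valid API key (placeholders below) and the requests library; the video id is the example id used earlier in this thread:
# Sketch: video id -> channel id (videos.list), flip the second character to "U"
# to get the uploads playlist id, then list its items (playlistItems.list).
import requests

API_KEY = 'YOUR_API_KEY'
VIDEO_ID = 'YxLCwfA1cLw'

video = requests.get('https://www.googleapis.com/youtube/v3/videos',
                     params={'id': VIDEO_ID, 'key': API_KEY, 'part': 'snippet'}).json()
channel_id = video['items'][0]['snippet']['channelId']

uploads_id = channel_id[0] + 'U' + channel_id[2:]

uploads = requests.get('https://www.googleapis.com/youtube/v3/playlistItems',
                       params={'playlistId': uploads_id, 'key': API_KEY,
                               'part': 'id,snippet,contentDetails,status',
                               'maxResults': 50}).json()
for item in uploads['items']:
    print(item['snippet']['title'])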
