I'm helping an organization out, and they want a website where you input some numbers and a Google Sheets link appears in which the numbers are put into some formula to give a result (e.g. if you put in 5, 5, 3 and the formula is (Num1 + Num2) * Num3, it would show a table in which the first two numbers are added and then multiplied by the third):
Num1 Num2 Num3 Result
5 5 3 30
I have searched GitHub and the internet, but I cannot seem to find a library that can create a Google Sheet from a website with the given data fed into a formula. Most of what I found used the Google Sheets API and only modified an existing sheet. I found a Python library called XlsxWriter, and I was wondering if there is any way to convert an .xlsx file to an online Google Sheet or, if possible, to just make a Google Sheet from my website.
(My website is in Flask right now, but I have essentially no backend yet, so if someone knows of a suitable library in another web framework, I am willing to switch.)
Thank you, John D
Maybe I am missing something here, but the regular Google Sheets API (which is available for Python) allows you to create blank spreadsheets with a specified title.
spreadsheet = {
    'properties': {
        'title': title
    }
}
spreadsheet = service.spreadsheets().create(body=spreadsheet,
                                            fields='spreadsheetId').execute()
print('Spreadsheet ID: {0}'.format(spreadsheet.get('spreadsheetId')))
https://developers.google.com/sheets/api/guides/create
https://github.com/gsuitedevs/python-samples/blob/master/sheets/snippets/spreadsheet_snippets.py
https://developers.google.com/sheets/api/quickstart/python
Here is a more complete sample based on the Google Quickstarts examples:
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

# If modifying these scopes, delete the file token.pickle. Adjust scopes as needed.
SCOPES = ['https://www.googleapis.com/auth/spreadsheets']


def main():
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    service = build('sheets', 'v4', credentials=creds)

    # Call the Sheets API to create a new spreadsheet
    spreadsheet = {
        'properties': {
            'title': 'New Test Sheet 2'
        }
    }
    spreadsheet = service.spreadsheets().create(body=spreadsheet,
                                                fields='spreadsheetId').execute()
    # print('Spreadsheet ID: {0}'.format(spreadsheet.get('spreadsheetId')))
    spreadsheet_id = spreadsheet.get('spreadsheetId')

    # Write a small table, including formulas, into the new sheet
    range_name = 'Sheet1!A1:D5'
    body = {
        "majorDimension": "ROWS",
        "values": [
            ["Item", "Cost", "Stocked", "Ship Date"],
            ["Wheel", "$20.50", "4", "3/1/2016"],
            ["Door", "$15", "2", "3/15/2016"],
            ["Engine", "$100", "1", "3/20/2016"],
            ["Totals", "=SUM(B2:B4)", "=SUM(C2:C4)", "=MAX(D2:D4)"]
        ],
    }
    result = service.spreadsheets().values().update(
        spreadsheetId=spreadsheet_id,
        range=range_name,
        body=body,
        valueInputOption='USER_ENTERED'
    ).execute()
    print(result)


if __name__ == '__main__':
    main()
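To connect this back to the question: the same values().update call can write the user's numbers together with a spreadsheet formula, which Sheets will evaluate because of valueInputOption='USER_ENTERED'. A minimal sketch for the (Num1 + Num2) * Num3 example, reusing the service and spreadsheet_id from the sample above (the cell layout is just an assumption):
# Sketch only: writes the asker's example (5, 5, 3) and a formula computing
# (Num1 + Num2) * Num3 into the sheet created above.
body = {
    "majorDimension": "ROWS",
    "values": [
        ["Num1", "Num2", "Num3", "Result"],
        [5, 5, 3, "=(A2+B2)*C2"],
    ],
}
service.spreadsheets().values().update(
    spreadsheetId=spreadsheet_id,
    range='Sheet1!A1:D2',
    body=body,
    valueInputOption='USER_ENTERED'
).execute()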
For months I've been using a url like this, from perl:
http://finance.yahoo.com/d/quotes.csv?s=$s&f=ynl1 #returns yield, name, price;
Today, 11/1/17, it suddenly returns a 999 error.
Is this a glitch, or has Yahoo terminated the service?
I get the error even if I enter the URL directly into a browser as, eg:
http://finance.yahoo.com/d/quotes.csv?s=INTC&f=ynl1
so it doesn't seem to be a 'crumb' problem.
Note: This is NOT a question which has been answered in the past!
It was working yesterday. That it happened on the first of the month is suspicious.
As noted in the other answers and elsewhere (e.g. https://stackoverflow.com/questions/47076404/currency-helper-of-yahoo-sorry-unable-to-process-request-at-this-time-erro/47096766#47096766), Yahoo has indeed ceased operation of the Yahoo Finance API. However, as a workaround, you can access a trove of financial information, in JSON format, for a given ticker symbol by making an HTTPS GET request to https://finance.yahoo.com/quote/SYMBOL (e.g. https://finance.yahoo.com/quote/MSFT).
If you make a GET request to the above URL, you'll see that the financial data is contained within the response in JSON format. The following Python 3 script shows how you can parse the individual values you may be interested in:
import requests
import json
symbol = 'MSFT'
url ='https://finance.yahoo.com/quote/' + symbol
resp = requests.get(url)
# parse the section from the html document containing the raw json data that we need
# you can write jsonstr to a file, then open the file in a web browser to browse the structure of the json data
r = str(resp.content, 'utf-8')
i1 = 0
i1 = r.find('root.App.main', i1)
i1 = r.find('{', i1)
i2 = r.find("\n", i1)
i2 = r.rfind(';', i1, i2)
jsonstr = r[i1:i2]
# load the raw json data into a python data object
data = json.loads(jsonstr)
# pull the values that we are interested in
name = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['shortName']
price = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['regularMarketPrice']['raw']
change = data['context']['dispatcher']['stores']['QuoteSummaryStore']['price']['regularMarketChange']['raw']
shares_outstanding = data['context']['dispatcher']['stores']['QuoteSummaryStore']['defaultKeyStatistics']['sharesOutstanding']['raw']
market_cap = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['marketCap']['raw']
trailing_pe = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['trailingPE']['raw']
earnings_per_share = data['context']['dispatcher']['stores']['QuoteSummaryStore']['defaultKeyStatistics']['trailingEps']['raw']
forward_annual_dividend_rate = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['dividendRate']['raw']
forward_annual_dividend_yield = data['context']['dispatcher']['stores']['QuoteSummaryStore']['summaryDetail']['dividendYield']['raw']
# print the values
print('Symbol:', symbol)
print('Name:', name)
print('Price:', price)
print('Change:', change)
print('Shares Outstanding:', shares_outstanding)
print('Market Cap:', market_cap)
print('Trailing PE:', trailing_pe)
print('Earnings Per Share:', earnings_per_share)
print('Forward Annual Dividend Rate:', forward_annual_dividend_rate)
print('Forward_annual_dividend_yield:', forward_annual_dividend_yield)
Yahoo confirmed that they terminated the service:
It has come to our attention that this service is being used in violation of the Yahoo Terms of Service. As such, the service is being discontinued. For all future markets and equities data research, please refer to finance.yahoo.com .
There is still a way to get this data by querying some of the APIs used by the finance.yahoo.com page. It is not clear whether Yahoo will support it long term the way the previous API was supported (hopefully they will).
I adapted the method used by https://github.com/pstadler/ticker.sh into the following python hack that takes a list of symbols from the command line and outputs some of the variables as a csv:
#!/usr/bin/env python
import sys
import time
import requests

if len(sys.argv) < 2:
    print("missing parameters: <symbol> ...")
    exit()

apiEndpoint = "https://query1.finance.yahoo.com/v7/finance/quote"
fields = [
    'symbol',
    'regularMarketVolume',
    'regularMarketPrice',
    'regularMarketDayHigh',
    'regularMarketDayLow',
    'regularMarketTime',
    'regularMarketChangePercent']
fields = ','.join(fields)
symbols = sys.argv[1:]
symbols = ','.join(symbols)

payload = {
    'lang': 'en-US',
    'region': 'US',
    'corsDomain': 'finance.yahoo.com',
    'fields': fields,
    'symbols': symbols}

r = requests.get(apiEndpoint, params=payload)

for i in r.json()['quoteResponse']['result']:
    if 'regularMarketPrice' in i:
        a = []
        a.append(i['symbol'])
        a.append(i['regularMarketPrice'])
        a.append(time.strftime(
            '%Y-%m-%d %H:%M:%S', time.localtime(i['regularMarketTime'])))
        a.append(i['regularMarketChangePercent'])
        a.append(i['regularMarketVolume'])
        a.append("{0:.2f} - {1:.2f}".format(
            i['regularMarketDayLow'], i['regularMarketDayHigh']))
        print(",".join([str(e) for e in a]))
Sample Run:
$ ./getquotePy.py AAPL GOOGL
AAPL,174.5342,2017-11-07 17:21:28,0.1630961,19905458,173.60 - 173.60
GOOGL,1048.6753,2017-11-07 17:21:22,0.5749836,840447,1043.00 - 1043.00
Note: calling this endpoint directly from browser JavaScript, e.g.
var API = "https://query1.finance.yahoo.com/v7/finance/quote?symbols=AAPL";
$.getJSON(API, function (json) {...});
throws this error: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://www.microplan.at/sar' is therefore not allowed access.
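That error is a browser CORS restriction rather than a problem with the endpoint itself: the response carries no Access-Control-Allow-Origin header, so cross-origin XHR from a web page is blocked. The usual workaround is to make the request server-side and have the page call your own origin instead. A minimal sketch, assuming Flask and a hypothetical /quote route (not part of any answer above):
# Hypothetical Flask proxy: the browser calls /quote?symbols=AAPL on your own
# origin, and the server forwards the request to Yahoo, sidestepping CORS.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/quote')
def quote():
    symbols = request.args.get('symbols', 'AAPL')
    r = requests.get('https://query1.finance.yahoo.com/v7/finance/quote',
                     params={'symbols': symbols})
    return jsonify(r.json())

if __name__ == '__main__':
    app.run()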
We need a list of a channel's videos from YouTube (using the API).
We can get a channel list (only channel name) by using the below API:
https://gdata.youtube.com/feeds/api/channels?v=2&q=tendulkar
Below are direct links to channels:
https://www.youtube.com/channel/UCqAEtEr0A0Eo2IVcuWBfB9g
Or
WWW.YouTube.com/channel/HC-8jgBP-4rlI
Now, we need videos of channel >> UCqAEtEr0A0Eo2IVcuWBfB9g or HC-8jgBP-4rlI.
We tried
https://gdata.youtube.com/feeds/api/videos?v=2&uploader=partner&User=UC7Xayrf2k0NZiz3S04WuDNQ
https://gdata.youtube.com/feeds/api/videos?v=2&uploader=partner&q=UC7Xayrf2k0NZiz3S04WuDNQ
But, it does not help.
We need all the videos posted on the channel. Videos uploaded to a channel can be from multiple users, so I don't think providing a user parameter would help...
You need to look at the YouTube Data API. There you will find documentation about how the API can be accessed. You can also find client libraries.
You could also make the requests yourself. Here is an example URL that retrieves the latest videos from a channel:
https://www.googleapis.com/youtube/v3/search?key={your_key_here}&channelId={channel_id_here}&part=snippet,id&order=date&maxResults=20
After that you will receive a JSON with video ids and details, and you can construct your video URL like this:
http://www.youtube.com/watch?v={video_id_here}
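For illustration, here is a rough Python sketch of that request, assuming the requests library and placeholder YOUR_API_KEY / CHANNEL_ID values:
# Sketch: fetch the latest uploads for a channel via search.list and print watch URLs.
import requests

params = {
    'key': 'YOUR_API_KEY',
    'channelId': 'CHANNEL_ID',
    'part': 'snippet,id',
    'order': 'date',
    'maxResults': 20,
}
resp = requests.get('https://www.googleapis.com/youtube/v3/search', params=params).json()
for item in resp.get('items', []):
    if item['id']['kind'] == 'youtube#video':
        print('http://www.youtube.com/watch?v=' + item['id']['videoId'])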
First, you need to get the ID of the playlist that represents the uploads from the user/channel:
https://developers.google.com/youtube/v3/docs/channels/list#try-it
You can specify the username with the forUsername={username} param, or specify mine=true to get your own (you need to authenticate first). Include part=contentDetails to see the playlists.
GET https://www.googleapis.com/youtube/v3/channels?part=contentDetails&forUsername=jambrose42&key={YOUR_API_KEY}
In the result, "relatedPlaylists" will include the "likes" and "uploads" playlists. Grab that "uploads" playlist ID.
Also note that the uploads playlist ID is simply your channelId with the UC prefix replaced by UU.
Next, get a list of videos in that playlist:
https://developers.google.com/youtube/v3/docs/playlistItems/list#try-it
Just drop in the playlistId!
GET https://www.googleapis.com/youtube/v3/playlistItems?part=snippet%2CcontentDetails&maxResults=50&playlistId=UUpRmvjdu3ixew5ahydZ67uA&key={YOUR_API_KEY}
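As a rough Python sketch of those two steps (using the requests library; YOUR_API_KEY and the username are placeholders, and the UU shortcut mentioned above is shown as a comment):
# Sketch of the two-step flow: channels.list -> uploads playlist -> playlistItems.list.
import requests

KEY = 'YOUR_API_KEY'
BASE = 'https://www.googleapis.com/youtube/v3/'

ch = requests.get(BASE + 'channels', params={
    'part': 'contentDetails', 'forUsername': 'jambrose42', 'key': KEY}).json()
uploads_id = ch['items'][0]['contentDetails']['relatedPlaylists']['uploads']
# Equivalent shortcut: uploads_id = 'UU' + channel_id[2:]  (UC... -> UU...)

items = requests.get(BASE + 'playlistItems', params={
    'part': 'snippet,contentDetails', 'maxResults': 50,
    'playlistId': uploads_id, 'key': KEY}).json()
for item in items['items']:
    print(item['contentDetails']['videoId'], item['snippet']['title'])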
Here is a video from Google Developers showing how to list all videos in a channel in v3 of the YouTube API.
There are two steps:
Query Channels to get the "uploads" Id. eg https://www.googleapis.com/youtube/v3/channels?id={channel Id}&key={API key}&part=contentDetails
Use this "uploads" Id to query PlaylistItems to get the list of videos. eg https://www.googleapis.com/youtube/v3/playlistItems?playlistId={"uploads" Id}&key={API key}&part=snippet&maxResults=50
To get a channels list:
Get Channels list by forUserName:
https://www.googleapis.com/youtube/v3/channels?part=snippet,contentDetails,statistics&forUsername=Apple&key=
Get channels list by channel id:
https://www.googleapis.com/youtube/v3/channels/?part=snippet,contentDetails,statistics&id=UCE_M8A5yxnLfW0KghEeajjw&key=
Get Channel sections:
https://www.googleapis.com/youtube/v3/channelSections?part=snippet,contentDetails&channelId=UCE_M8A5yxnLfW0KghEeajjw&key=
To get Playlists:
Get Playlists by Channel ID:
https://www.googleapis.com/youtube/v3/playlists?part=snippet,contentDetails&channelId=UCq-Fj5jknLsUf-MWSy4_brA&maxResults=50&key=
Get Playlists by Channel ID with pageToken:
https://www.googleapis.com/youtube/v3/playlists?part=snippet,contentDetails&channelId=UCq-Fj5jknLsUf-MWSy4_brA&maxResults=50&key=&pageToken=CDIQAA
To get PlaylistItems:
Get PlaylistItems list by PlayListId:
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet,contentDetails&maxResults=25&playlistId=PLHFlHpPjgk70Yv3kxQvkDEO5n5tMQia5I&key=
To get videos:
Get videos list by video id:
https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,statistics&id=YxLCwfA1cLw&key=
Get videos list by multiple videos id:
https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,statistics&id=YxLCwfA1cLw,Qgy6LaO3SB0,7yPJXGO2Dcw&key=
To get comments list:
Get Comment list by video ID:
https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&videoId=el****kQak&key=A**********k
Get Comment list by channel ID:
https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&channelId=U*****Q&key=AI********k
Get Comment list by allThreadsRelatedToChannelId:
https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&allThreadsRelatedToChannelId=UC*****ntcQ&key=AI*****k
All of the APIs above use GET requests.
Note that you can't get all of a channel's videos directly from the channel ID; that's the important point here.
For iOS integration: https://developers.google.com/youtube/v3/quickstart/ios?ver=swift
Here is the code that will return all video ids under your channel
<?php
$baseUrl = 'https://www.googleapis.com/youtube/v3/';
// https://developers.google.com/youtube/v3/getting-started
$apiKey = 'API_KEY';
// If you don't know the channel ID see below
$channelId = 'CHANNEL_ID';

$params = [
    'id' => $channelId,
    'part' => 'contentDetails',
    'key' => $apiKey
];
$url = $baseUrl . 'channels?' . http_build_query($params);
$json = json_decode(file_get_contents($url), true);

$playlist = $json['items'][0]['contentDetails']['relatedPlaylists']['uploads'];

$params = [
    'part' => 'snippet',
    'playlistId' => $playlist,
    'maxResults' => '50',
    'key' => $apiKey
];
$url = $baseUrl . 'playlistItems?' . http_build_query($params);
$json = json_decode(file_get_contents($url), true);

$videos = [];
foreach ($json['items'] as $video)
    $videos[] = $video['snippet']['resourceId']['videoId'];

while (isset($json['nextPageToken'])) {
    $nextUrl = $url . '&pageToken=' . $json['nextPageToken'];
    $json = json_decode(file_get_contents($nextUrl), true);
    foreach ($json['items'] as $video)
        $videos[] = $video['snippet']['resourceId']['videoId'];
}

print_r($videos);
Note: You can get your channel ID at https://www.youtube.com/account_advanced after logging in.
Below is a Python alternative that does not require any special packages. By providing the channel id it returns a list of video links for that channel. Please note that you need an API Key for it to work.
import urllib.request
import json

def get_all_video_in_channel(channel_id):
    api_key = 'YOUR_API_KEY'  # placeholder: supply your own key

    base_video_url = 'https://www.youtube.com/watch?v='
    base_search_url = 'https://www.googleapis.com/youtube/v3/search?'

    first_url = base_search_url + 'key={}&channelId={}&part=snippet,id&order=date&maxResults=25'.format(api_key, channel_id)

    video_links = []
    url = first_url
    while True:
        with urllib.request.urlopen(url) as inp:
            resp = json.load(inp)

        for i in resp['items']:
            if i['id']['kind'] == "youtube#video":
                video_links.append(base_video_url + i['id']['videoId'])

        try:
            next_page_token = resp['nextPageToken']
            url = first_url + '&pageToken={}'.format(next_page_token)
        except KeyError:
            break

    return video_links
Thanks to the references shared here and elsewhere, I've made an online script / tool that one can use to obtain all videos of a channel.
It combines API calls to youtube.channels.list, playlistItems, videos. It uses recursive functions to make the asynchronous callbacks run the next iteration upon getting a valid response.
This also serves to limit the actual number of requests made at a time, keeping you safe from violating YouTube API rules. I'm sharing shortened snippets here and then a link to the full code. I got around the 50-results-per-call limit by using the nextPageToken value that comes in the response to fetch the next 50 results, and so on.
function getVideos(nextPageToken, vidsDone, params) {
    $.getJSON("https://www.googleapis.com/youtube/v3/playlistItems", {
        key: params.accessKey,
        part: "snippet",
        maxResults: 50,
        playlistId: params.playlistId,
        fields: "items(snippet(publishedAt, resourceId/videoId, title)), nextPageToken",
        pageToken: (nextPageToken || '')
    },
    function (data) {
        // commands to process JSON variable, extract the 50 videos info
        if (vidsDone < params.vidslimit) {
            // Recursive: the function is calling itself if
            // all videos haven't been loaded yet
            getVideos(data.nextPageToken, vidsDone, params);
        }
        else {
            // Closing actions to do once we have listed the videos needed.
        }
    });
}
This gets a basic listing of the videos, including id, title, and date of publishing. But to get more details for each video, such as view counts and likes, one has to make API calls to videos.
// Looping through an array of video id's
function fetchViddetails(i) {
    $.getJSON("https://www.googleapis.com/youtube/v3/videos", {
        key: document.getElementById("accesskey").value,
        part: "snippet,statistics",
        id: vidsList[i]
    }, function (data) {
        // Commands to process JSON variable, extract the video
        // information and push it to a global array
        if (i < vidsList.length - 1) {
            fetchViddetails(i + 1); // Recursive: calls itself if the
                                    // list isn't over.
        }
    });
}
See the full code here, and live version here. (Edit: fixed github link)
Edit: Dependencies: JQuery, Papa.parse
Short answer:
Here's a library called scrapetube that can help with that.
pip install scrapetube
import scrapetube
import simplejson as json

videos = scrapetube.get_channel("UC9-y-6csu5WGm29I7JiwpnA")

for video in videos:
    print(video['videoId'])
    print(video['title']['runs'][0]['text'])
    print(video['publishedTimeText']['simpleText'])
    print('\r\n')
    # DEBUG: print(json.dumps(video))
Long answer:
The module mentioned above was created by me due to a lack of any other solutions. Here's what I tried:
Selenium. It worked but had three big drawbacks: 1. it requires a web browser and driver to be installed, 2. it has big CPU and memory requirements, and 3. it can't handle big channels.
Using youtube-dl, like this:
import youtube_dl

youtube_dl_options = {
    'skip_download': True,
    'ignoreerrors': True
}
with youtube_dl.YoutubeDL(youtube_dl_options) as ydl:
    videos = ydl.extract_info(f'https://www.youtube.com/channel/{channel_id}/videos')
This also works for small channels, but for bigger ones I would get blocked by YouTube for making so many requests in such a short time (because youtube-dl fetches more info for every video in the channel).
So I made the library scrapetube, which uses the web API to get all the videos.
Try something like the following; it may help you.
https://gdata.youtube.com/feeds/api/videos?author=cnn&v=2&orderby=updated&alt=jsonc&q=news
Here, author is where you specify your channel name, and q is where you give your search keyword.
Since everyone answering this question has problems due to the 500-video limit, here's an alternative solution using youtube_dl in Python 3. Also, no API key is needed.
Install youtube_dl: sudo pip3 install youtube-dl
Find out your target channel's channel ID. The ID is going to start with UC. Replace the C (for Channel) with U (for Upload), i.e. UU...; this is the uploads playlist.
Use the playlist downloader feature from youtube-dl. Ideally you do NOT want to download every video in the playlist (which is the default), only the metadata.
Example (warning -- takes tens of minutes):
import youtube_dl, pickle

# UCVTyTA7-g9nopHeHbeuvpRA is the channel id (1517+ videos)
PLAYLIST_ID = 'UUVTyTA7-g9nopHeHbeuvpRA'  # Late Night with Seth Meyers

with youtube_dl.YoutubeDL({'ignoreerrors': True}) as ydl:
    playd = ydl.extract_info(PLAYLIST_ID, download=False)

with open('playlist.pickle', 'wb') as f:
    pickle.dump(playd, f, pickle.HIGHEST_PROTOCOL)

vids = [vid for vid in playd['entries'] if 'A Closer Look' in vid['title']]
print(sum('Trump' in vid['title'] for vid in vids), '/', len(vids))
Just in three steps:
Subscriptions: list ->
https://www.googleapis.com/youtube/v3/subscriptions?part=snippet&maxResults=50&mine=true&access_token={oauth_token}
Channels: list ->
https://www.googleapis.com/youtube/v3/channels?part=contentDetails&id={channel_id}&key={YOUR_API_KEY}
PlaylistItems: list ->
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&playlistId={playlist_id}&key={YOUR_API_KEY}
Recently I had to retrieve all videos from a channel, and according to YouTube developer documentation:
https://developers.google.com/youtube/v3/docs/playlistItems/list
function playlistItemsListByPlaylistId($service, $part, $params) {
    $params = array_filter($params);
    $response = $service->playlistItems->listPlaylistItems(
        $part,
        $params
    );
    print_r($response);
}

playlistItemsListByPlaylistId($service,
    'snippet,contentDetails',
    array('maxResults' => 25, 'playlistId' => 'id of "uploads" playlist'));
Where $service is your Google_Service_YouTube object.
So you have to fetch information from the channel to retrieve the "uploads" playlist that actually has all the videos uploaded by the channel: https://developers.google.com/youtube/v3/docs/channels/list
If you are new to this API, I highly recommend switching the code sample from the default snippet to the full sample.
So the basic code to retrieve all videos from a channel can be:
class YouTube
{
    const DEV_KEY = 'YOUR_DEVELOPPER_KEY';
    private $client;
    private $youtube;
    private $lastChannel;

    public function __construct()
    {
        $this->client = new Google_Client();
        $this->client->setDeveloperKey(self::DEV_KEY);
        $this->youtube = new Google_Service_YouTube($this->client);
        $this->lastChannel = false;
    }

    public function getChannelInfoFromName($channel_name)
    {
        if ($this->lastChannel && $this->lastChannel['modelData']['items'][0]['snippet']['title'] == $channel_name)
        {
            return $this->lastChannel;
        }
        $this->lastChannel = $this->youtube->channels->listChannels('snippet, contentDetails, statistics', array(
            'forUsername' => $channel_name,
        ));
        return ($this->lastChannel);
    }

    public function getVideosFromChannelName($channel_name, $max_result = 5)
    {
        $this->getChannelInfoFromName($channel_name);
        $params = [
            'playlistId' => $this->lastChannel['modelData']['items'][0]['contentDetails']['relatedPlaylists']['uploads'],
            'maxResults' => $max_result,
        ];
        return ($this->youtube->playlistItems->listPlaylistItems('snippet,contentDetails', $params));
    }
}

$yt = new YouTube();
echo '<pre>' . print_r($yt->getVideosFromChannelName('CHANNEL_NAME'), true) . '</pre>';
Using API version 2, which is deprecated, the URL for uploads (of channel UCqAEtEr0A0Eo2IVcuWBfB9g) is:
https://gdata.youtube.com/feeds/users/UCqAEtEr0A0Eo2IVcuWBfB9g/uploads
There is an API version 3.
From https://stackoverflow.com/a/65440501/2585501:
This method is especially useful if (a) the channel has more than 50 videos, or (b) you want the YouTube video IDs formatted in a flat .txt list:
Obtain a Youtube API v3 key (see https://stackoverflow.com/a/65440324/2585501)
Obtain the Youtube Channel ID of the channel (see https://stackoverflow.com/a/16326307/2585501)
Obtain the Uploads Playlist ID of the channel: https://www.googleapis.com/youtube/v3/channels?id={channel Id}&key={API key}&part=contentDetails (based on https://www.youtube.com/watch?v=RjUlmco7v2M)
Install youtube-dl (e.g. pip3 install --upgrade youtube-dl or sudo apt-get install youtube-dl)
Download the Uploads Playlist using youtube-dl: youtube-dl -j --flat-playlist "https://<yourYoutubePlaylist>" | jq -r '.id' | sed 's_^_https://youtu.be/_' > videoList.txt (see https://superuser.com/questions/1341684/youtube-dl-how-download-only-the-playlist-not-the-files-therein)
Posting long after the original question was asked, but I made a python package that does this using a very simple API. It gets all the videos uploaded to a channel, but I'm not sure about this part (included in the original question):
Videos uploaded to a channel can be from multiple users thus I don't think providing a user parameter would help...
Maybe YouTube changed in the 8 years since this question was posted, but if it didn't, the package I made might not cover this case.
To use the API:
pip3 install -U yt-videos-list # macOS
pip install -U yt-videos-list # Windows
# if that doesn't work, try
python3 -m pip install -U yt-videos-list # macOS
python -m pip install -U yt-videos-list # Windows
Then open up a python interpreter
python3 # macOS
python # Windows
and run the program:
from yt_videos_list import ListCreator
lc = ListCreator()
help(lc) # display API information - shows available parameters and functions
my_url = 'https://www.youtube.com/user/1veritasium'
lc.create_list_for(url=my_url)
Python documentation (will be updated most frequently, so check this page for updates!)
Repository homepage
PyPI page
Sample solution in Python. Help taken from this video.
As in many other answers, the uploads playlist ID is first retrieved from the channel ID.
import urllib.request
import json

key = "YOUR_YOUTUBE_API_v3_BROWSER_KEY"

# List of channels: specify whether each entry is a channel id or a username - "id" or "forUsername"
ytids = [["bbcnews", "forUsername"], ["UCjq4pjKj9X4W9i7UnYShpVg", "id"]]

newstitles = []
for ytid, ytparam in ytids:
    urld = "https://www.googleapis.com/youtube/v3/channels?part=contentDetails&" + ytparam + "=" + ytid + "&key=" + key
    with urllib.request.urlopen(urld) as url:
        datad = json.loads(url.read())
    uploadsdet = datad['items']
    # get the uploads playlist id from the channel details
    uploadid = uploadsdet[0]['contentDetails']['relatedPlaylists']['uploads']
    # retrieve the list of uploaded videos
    urld = "https://www.googleapis.com/youtube/v3/playlistItems?part=snippet%2CcontentDetails&maxResults=50&playlistId=" + uploadid + "&key=" + key
    with urllib.request.urlopen(urld) as url:
        datad = json.loads(url.read())
    for data in datad['items']:
        ntitle = data['snippet']['title']
        nlink = data['contentDetails']['videoId']
        newstitles.append([nlink, ntitle])

for link, title in newstitles:
    print(link, title)
Here's my Python solution, using the Google API client.
Observations:
Create a .env file to store your API Developer Key, and put it in your .gitignore file
The parameter "forUserName" should be set with the name of the Youtube Channel (username). Alternatively, you can use the channel id, setting the parameter "id", instead of "forUserName".
The object "playlistItem" gives you access to each video. I'm showing only its title but there are many other properties.
import os

import googleapiclient.discovery
from decouple import config


def main():
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"

    api_service_name = "youtube"
    api_version = "v3"
    DEVELOPER_KEY = config('API_KEY')

    youtube = googleapiclient.discovery.build(
        api_service_name, api_version, developerKey=DEVELOPER_KEY)

    request = youtube.channels().list(
        part="contentDetails",
        forUsername="username",
        # id="oiwuereru8987",
    )
    response = request.execute()

    for item in response['items']:
        playlistId = item['contentDetails']['relatedPlaylists']['uploads']

    nextPageToken = ''
    while nextPageToken is not None:
        playlistResponse = youtube.playlistItems().list(
            part='snippet',
            playlistId=playlistId,
            maxResults=25,
            pageToken=nextPageToken
        )
        playlistResponse = playlistResponse.execute()
        print(playlistResponse.keys())

        for idx, playlistItem in enumerate(playlistResponse['items']):
            print(idx, playlistItem['snippet']['title'])

        if 'nextPageToken' in playlistResponse.keys():
            nextPageToken = playlistResponse['nextPageToken']
        else:
            nextPageToken = None


if __name__ == "__main__":
    main()
Example for the .env file
API_KEY=<Key_Here>
Using the gapi JavaScript API, you can do this
<script src="https://apis.google.com/js/api.js"></script>
const start = () => {
  gapi.client
    .init({
      apiKey: "your_youtubeApiKey",
      discoveryDocs: ["https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest"],
      scope: "https://www.googleapis.com/auth/youtube.readonly",
    })
    .then(() => {
      console.log("gapi.client initiated");
    })
    .then(() =>
      gapi.client.youtube.channels.list({
        part: "snippet,contentDetails,statistics",
        id: "youtube_channelId",
        // forUsername: 'Bankless',
      })
    )
    .then(
      (res) =>
        // get the youtube related playlist id
        res.result.items[0].contentDetails.relatedPlaylists.uploads
    )
    .then((playlistId) =>
      gapi.client.youtube.playlistItems.list({
        part: "snippet",
        playlistId,
        maxResults: 50,
      })
    )
    .then((res) =>
      // get youtube videos snippets
      res.result.items.map((item) => item.snippet)
    )
    .then((snippets) =>
      snippets.map((snippet) => {
        const { title, description, resourceId } = snippet;
        const { videoId } = resourceId;
        return { title, description, videoId };
      })
    )
    .then((videos) => {
      console.log(videos);
    })
    .catch((err) => console.error(err));
};

gapi.load("client", start);
Docs:
https://github.com/google/google-api-javascript-client
https://developers.google.com/youtube/v3/guides/auth/client-side-web-apps#callinganapi
You have to get the channel_id of the video you want to get the data from.
To get the channel_id from a video_id, you can use the videos.list endpoint of the YouTube Data API: pass the video_id in the id parameter.
Then take the channel_id and change its second character to "U":
this modified ID is the uploads playlist of that YouTube channel.
With this uploads playlist ID, you can use the playlistItems.list endpoint of the YouTube Data API to retrieve all the videos uploaded to the channel.
In the part parameter, add "id,snippet,contentDetails,status"; in playlistId, add the modified channel ID; then execute. A sketch of these steps follows below.
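For illustration, a minimal Python sketch of those steps, assuming the requests library and placeholder YOUR_API_KEY / VIDEO_ID values:
# Sketch: video_id -> channel_id -> uploads playlist ("UU" + channel_id[2:]) -> uploaded videos.
import requests

KEY = 'YOUR_API_KEY'
BASE = 'https://www.googleapis.com/youtube/v3/'

video = requests.get(BASE + 'videos', params={
    'part': 'snippet', 'id': 'VIDEO_ID', 'key': KEY}).json()
channel_id = video['items'][0]['snippet']['channelId']

uploads_playlist = 'UU' + channel_id[2:]  # change the second character from C to U

page_token = None
while True:
    params = {'part': 'id,snippet,contentDetails,status',
              'playlistId': uploads_playlist, 'maxResults': 50, 'key': KEY}
    if page_token:
        params['pageToken'] = page_token
    page = requests.get(BASE + 'playlistItems', params=params).json()
    for item in page.get('items', []):
        print(item['contentDetails']['videoId'], item['snippet']['title'])
    page_token = page.get('nextPageToken')
    if not page_token:
        break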