I have a script that uses the YouTube API (v3) to find the video for a song from the artist's name and the track title.
This works; however, in some cases the first result (sorted by relevance) is not the official VEVO video.
I tried adding VEVO to my query (after the artist's name and the track title), but when there is no VEVO video, the API returns no results.
Is it possible to force it to choose VEVO videos when they exist?
Thank you.
Vincent
var request = gapi.client.youtube.search.list({
  q: artiste + ' ' + track,
  part: 'snippet',
  order: 'relevance'
});
request.execute(function (response) {
  idVideo = response.result.items[0].id.videoId;
});
This is the part that selects the ID of a video based on the artist's name and the track title.
UPDATE: I don't think the videoSyndicated suggestion I put below would work well, but I'll leave it there just in case you want to explore it. What might work better (again, not guaranteed, but it should be more accurate) is to simply sort by viewCount instead of relevance... Generally speaking, the VEVO videos have the most views.
Example: https://developers.google.com/apis-explorer/#p/youtube/v3/youtube.search.list?part=snippet&order=viewCount&q=nicki+minaj+anaconda&type=video&_h=3&
GET https://www.googleapis.com/youtube/v3/search?part=snippet&order=viewCount&q=nicki+minaj+anaconda&type=video&key={YOUR_API_KEY}
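The same search sketched in Python with google-api-python-client, in case that is easier to experiment with (the API key is a placeholder and the library choice is mine, not part of the original answer):

# Sketch of the viewCount-ordered search; the top result is usually the VEVO upload.
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

response = youtube.search().list(
    q="nicki minaj anaconda",
    part="snippet",
    type="video",
    order="viewCount",
    maxResults=5,
).execute()

top = response["items"][0]
print(top["id"]["videoId"], top["snippet"]["channelTitle"])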
--
ORIGINAL ANSWER
I haven't been able to test it yet, and it won't necessarily restrict results to ONLY VEVO videos, but you can try the videoSyndicated option: https://developers.google.com/youtube/v3/docs/search/list#videoSyndicated
The videoSyndicated parameter lets you restrict a search to only videos that can be played outside youtube.com. If you specify a value for this parameter, you must also set the type parameter's value to video.
Acceptable values are:
any – Return all videos, syndicated or not.
true – Only retrieve syndicated videos.
If that returns nothing, then do the same search without videoSyndicated and use the first result from that.
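A rough sketch of that fallback logic in Python, assuming google-api-python-client (the helper name first_video_id and the API key are placeholders of mine):

# Try a syndicated-only search first, then fall back to an unrestricted search.
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

def first_video_id(query, syndicated_only):
    params = dict(q=query, part="snippet", type="video", order="relevance", maxResults=5)
    if syndicated_only:
        params["videoSyndicated"] = "true"  # requires type="video"
    items = youtube.search().list(**params).execute().get("items", [])
    return items[0]["id"]["videoId"] if items else None

video_id = first_video_id("artist track", syndicated_only=True)
if video_id is None:
    video_id = first_video_id("artist track", syndicated_only=False)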
It is actually pretty easy. What you need to do is add 'VEVO' to your search query. This should make anything from a VEVO channel the first result. It should look something like this:
var request = gapi.client.youtube.search.list({
  q: artiste + ' ' + track + ' VEVO',
  part: 'snippet',
  order: 'relevance'
});
If you want to make sure you are getting a VEVO video, the easiest thing to do is parse the channel title to make sure it contains the word "VEVO". The code would then look something like this:
var request = gapi.client.youtube.search.list({
  q: artiste + ' ' + track + ' VEVO',
  part: 'snippet',
  order: 'relevance'
});

request.execute(function (response) {
  var topResult = response.result.items[0];
  var channelTitle = topResult.snippet.channelTitle;
  var isVevo = channelTitle.match(/VEVO/g); // checks whether this is VEVO content; we only want VEVO videos
  if (isVevo) { // true if "VEVO" is found in the channel title
    idVideo = topResult.id.videoId; // the video ID we want
  } else {
    idVideo = null;
  }
});
Related
I just want to fetch all my liked videos (~25k items). As far as my research goes, this is not possible via the YouTube v3 API.
I have already found multiple issues (issue, issue) on the same problem; some claim to have fixed it, but it only works for them because they have fewer than 5,000 items in their liked-videos list.
The playlistItems.list API endpoint with the playlist ID set to "liked videos" (LL) has a limit of 5,000.
The videos.list API endpoint has a limit of 1,000.
Unfortunately those endpoints don't provide parameters that I could use to paginate the requests myself (e.g. give me all the liked videos between date x and y), so I'm forced to take the provided order, which doesn't get me past the 5k entries.
Is there any possibility I can fetch all my likes via the API?
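For reference, a minimal sketch of the paginated call that runs into this cap, using google-api-python-client with an OAuth-authorized youtube client (the client setup itself is omitted here):

# Page through the special "LL" (liked videos) playlist; in practice this stops
# yielding new pages somewhere around 5,000 items.
def fetch_liked_video_ids(youtube):
    video_ids, page_token = [], None
    while True:
        response = youtube.playlistItems().list(
            part="contentDetails",
            playlistId="LL",  # the authenticated user's liked-videos playlist
            maxResults=50,
            pageToken=page_token,
        ).execute()
        video_ids += [item["contentDetails"]["videoId"] for item in response["items"]]
        page_token = response.get("nextPageToken")
        if not page_token:
            return video_ids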
Some more thoughts on the reply from #Yarin_007:
If there are deleted videos in the timeline, they appear as "Liked https://...url". The script doesn't like that format and fails, because the underlying elements don't have the same structure as existing videos.
This can easily be fixed with a try/catch:
function collector(all_cards) {
  var liked_videos = {};
  all_cards.forEach(card => {
    try {
      // ignore Dislikes
      if (card.innerText.split("\n")[1].startsWith("Liked")) {
        ....
      }
    }
    catch {
      console.log("error, prolly deleted video")
    }
  })
  return liked_videos;
}
To scroll down to the bottom of the page I've used this simple script; no need to spin up something big:
var millisecondsToWait = 1000;
setInterval(function() {
  window.scrollTo(0, document.body.scrollHeight);
  console.log("scrolling")
}, millisecondsToWait);
If more people want to retrieve this kind of data, one could think about building a proper script that is more convenient to use. If you check the network requests, you can find the desired data in the responses of requests called batchexecute. One could copy the authentication from one of them, provide it to a script that queries those endpoints, and have it prepare the data like the other script I currently inject manually.
Hmm. perhaps Google Takeout?
I have verified that the YouTube data contains a CSV called "liked videos.csv". The header is Video Id,Time Added, and the rows look like
dQw4w9WgXcQ,2022-12-18 23:42:19 UTC
prvXCuEA1lw,2022-12-24 13:22:13 UTC
for example.
So you would need to retrieve the video metadata per video ID. Not too bad, though.
Note: the export could take a while, especially with 25k videos (select only the YouTube data).
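A rough sketch of that follow-up step, assuming the Takeout CSV above and google-api-python-client (the API key and the exact file name are placeholders):

# Read video IDs from the Takeout CSV and fetch metadata in batches of 50,
# which is the maximum number of IDs videos.list accepts per call.
import csv
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

with open("liked videos.csv", newline="") as f:
    video_ids = [row["Video Id"] for row in csv.DictReader(f)]

metadata = {}
for i in range(0, len(video_ids), 50):
    batch = video_ids[i:i + 50]
    response = youtube.videos().list(part="snippet,contentDetails", id=",".join(batch)).execute()
    for item in response["items"]:
        metadata[item["id"]] = item["snippet"]["title"]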
I also had an idea that involves scraping the actual liked-videos page (which would save you 25k HTTP requests), but I'm unsure whether it breaks with more than 5,000 songs. Also, emulating the POST requests on that page may prove quite difficult, albeit not impossible: they fetch /browse?key=... and carry some kind of obfuscated/encrypted base64 strings in the request body, among other parameters.
EDIT:
Look, there's probably a normal way to get a complete dump of all your Google data (I mean, other than Takeout. Email them? I don't know).
Anyway, the following is the other idea...
Follow this deep link to your liked-videos history.
Scroll to the bottom... maybe with Selenium, maybe with AutoIt, maybe put something on the "End" key of your keyboard until you reach your first liked video.
Hit F12 and run this in the developer console:
// https://www.youtube.com/watch?v=eZPXmCIQW5M
// https://myactivity.google.com/page?utm_source=my-activity&hl=en&page=youtube_likes
// go over all "cards" in the activity webpage (after scrolling down to the absolute bottom of it)
// create a dictionary - the key is the video ID, the value is a list of the video's properties
function collector(all_cards) {
  var liked_videos = {};
  all_cards.forEach(card => {
    // ignore Dislikes
    if (card.innerText.split("\n")[1].startsWith("Liked")) {
      // horrible parsing. your mileage may vary. I tried to avoid using any gibberish class names.
      let a_links = card.querySelectorAll("a")
      let details = a_links[0];
      let url = details.href.split("?v=")[1]
      let video_length = a_links[3].innerText;
      let time = a_links[2].parentElement.innerText.split(" • ")[0];
      let title = details.innerText;
      let date = card.closest("[data-date]").getAttribute("data-date")
      liked_videos[url] = [title, video_length, date, time];
      // console.log(title, video_length, date, time, url);
    }
  })
  return liked_videos;
}
// https://stackoverflow.com/questions/57709550/how-to-download-text-from-javascript-variable-on-all-browsers
function download(filename, text, type = "text/plain") {
  // Create an invisible A element
  const a = document.createElement("a");
  a.style.display = "none";
  document.body.appendChild(a);
  // Set the HREF to a Blob representation of the data to be downloaded
  a.href = window.URL.createObjectURL(
    new Blob([text], { type })
  );
  // Use the download attribute to set the desired file name
  a.setAttribute("download", filename);
  // Trigger the download by simulating a click
  a.click();
  // Cleanup
  window.URL.revokeObjectURL(a.href);
  document.body.removeChild(a);
}
function main() {
  // gather relevant elements
  var all_cards = document.querySelectorAll("div[aria-label='Card showing an activity from YouTube']")
  var liked_videos = collector(all_cards)
  // download json
  download("liked_videos.json", JSON.stringify(liked_videos))
}
main()
Basically, it gathers all the liked videos' details and creates a key: video_ID, value: [title, video_length, date, time] entry for each liked video.
It then automatically downloads the json as a file.
I am using the YouTube Data API and trying to differentiate past livestreams from premiered content. liveStreamingDetails in videos.list is populated for both livestreams and premieres. Is there a way I can differentiate between the two?
Below is my Python code for getting the livestream start time. If it's not populated, then I know that the video is not a livestream. But the problem is that this value gets populated for premiered content as well.
vid_request = youtube.videos().list(
    part='contentDetails, statistics, snippet, liveStreamingDetails, status',
    id=','.join(vid_ids)
)
vid_response = vid_request.execute()

for videoitem in vid_response['items']:
    try:
        livestreamStartTime = videoitem['liveStreamingDetails']['actualStartTime']
    except:
        livestreamStartTime = ''
Any pointers on what could work would really help?
I have studied the documentation at https://developers.google.com/youtube/v3/revision_history#november-19-2015 closely, about how to set localized titles and descriptions.
But when you try it, it seems impossible. Even if you use the "Try it" tool of the API at https://developers.google.com/youtube/v3/docs/videos/update#prubalo, you always get the same error about the part parameter. I set that parameter to the value "snippet", like you have to do, but it doesn't work. I tried with the rest of the values and possible combinations and... it doesn't work.
Can someone give me an example of the code (I prefer Python) or of the HTTP request?
Please be sure your code or HTTP request really works... I even found mistakes in the examples in the documentation, like 5 opening parentheses and 4 closing parentheses...
Following is a PHP code example. The concept is the same; hope you can do it in Python.
Please make sure you set the default language of the video (snippet.defaultLanguage) before adding localisations.
// Call the API's videos.list method to retrieve the video resource.
// Part should be 'localizations', not 'snippet', because you are updating the localisation.
$listResponse = $youtube->videos->listVideos('localizations', array('id' => 'YOUR_VIDEO_ID'));

// Since the request specified a video ID, the response only contains one video resource.
$video = $listResponse[0];

// Set the localizations array for the video localisation.
// You can retrieve the language list from the following API - https://developers.google.com/youtube/v3/docs/i18nLanguages/list
$video['localizations'] = array(
    'ta' => array(
        'title' => 'TITLE_IN_GIVEN_LANG',
        'description' => 'DESC_IN_GIVEN_LANG'));

// Update the video resource by calling the videos.update() method.
$updateResponse = $youtube->videos->update('localizations', $video);
Update - Example of updating the localisation of a video using the Google Developer Console.
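And since the question asked for Python, here is a rough equivalent of the PHP above, sketched with google-api-python-client (untested; it assumes an OAuth-authorized youtube client, and the video ID and the 'ta' language code are placeholders):

# Rough Python equivalent of the PHP example above.
video = youtube.videos().list(part="localizations", id="YOUR_VIDEO_ID").execute()["items"][0]

video["localizations"] = {
    "ta": {
        "title": "TITLE_IN_GIVEN_LANG",
        "description": "DESC_IN_GIVEN_LANG",
    }
}

# part must be 'localizations' here as well; the body needs the video ID plus the updated part.
youtube.videos().update(
    part="localizations",
    body={"id": "YOUR_VIDEO_ID", "localizations": video["localizations"]},
).execute()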
$http.get("https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&playlistId=PLFgquLnL59alCl_2TQvOiD5Vgm1hCaGSI&key={mykey}&maxResults=10")
I used playlistItems but couldn't get the part that contains the duration of the video. Do I need to call twice, i.e. get the video ID and make another call? Or am I missing something in this case?
For whatever reason, playlistItems does not include some things like statistics or the category. You'll need to make a separate call using the video ID and https://developers.google.com/youtube/v3/docs/videos/list in order to get those fields.
This is how I do it (using Python, but you can adapt it to whatever language you are using with HTTP requests and JSON parsing):
import requests

url = ("https://www.googleapis.com/youtube/v3/videos?id=" + videoId
       + "&key=" + DEVELOPER_KEY + "&part=snippet,contentDetails")
r = requests.get(url)
metadata = r.json()["items"][0]

channelName = metadata["snippet"]["channelTitle"]
publishedTime = metadata["snippet"]["publishedAt"]
duration = metadata["contentDetails"]["duration"]
duration is in a strange format (an ISO 8601 duration) that looks like
PT4M11S
meaning 4 minutes 11 seconds. You will have to "parse" this.
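For example, a minimal sketch of that parsing with a regular expression (it assumes the plain PT#H#M#S shape, which covers normal videos):

# Convert an ISO 8601 duration like "PT4M11S" into a number of seconds.
import re

def parse_duration(iso_duration):
    match = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", iso_duration)
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds

print(parse_duration("PT4M11S"))  # 251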
When searching for a playlist item with a specific video ID, the YouTube API seems to search only among the 50 latest playlist items.
I have over 1,000 items on my playlist, and searching for an item with a certain video ID yields no results. I am therefore forced to fetch all the items on the playlist and iterate over them myself to find the specific item.
// This only searches among the first 50 playlist items
$videoId = 'ASDFQWER';
$playlistId = 'ASDFASDFQWERQWER';
$part = "contentDetails";
$response = $youtube->playlistItems->listPlaylistItems(
    $part,
    array(
        "playlistId" => $playlistId,
        "videoId" => $videoId,
    )
);
$items = $response->getItems();
$playlistItem = reset($items);
// This works, but is slow and awkward
$videoId = 'ASDFQWER';
$playlistId = 'ASDFASDFQWERQWER';
$part = "contentDetails";
$playlistItems = array();
$nextPageToken = NULL;
do {
    $response = $youtube->playlistItems->listPlaylistItems(
        $part,
        array(
            "playlistId" => $playlistId,
            "maxResults" => 50,
            "pageToken" => $nextPageToken,
        )
    );
    $nextPageToken = $response->getNextPageToken();
    $playlistItems = array_merge($playlistItems, $response->getItems());
} while ($nextPageToken);

$hasVideoId = function ($playlistItem) use ($videoId) {
    $idOnItem = $playlistItem->getContentDetails()->getVideoId();
    return ($videoId == $idOnItem);
};

$matching = array_filter($playlistItems, $hasVideoId);
$playlistItem = reset($matching);
I would prefer to have YouTube's end do the search using the video's ID, rather than fetching all the items in chunks of 50 and searching through them myself.
The point of all this is to automate the removal of old videos. If I only delete a video, there will be an invalid item left on my playlist. That's why I need to find these playlist items, so I can remove them along with the videos.
I first opened an issue on GitHub for the PHP client, but this really doesn't seem to be an issue with the client.
Any help is much appreciated.
You're right that the issue doesn't lie in the client; all the clients, as well as the API explorer, exhibit it. My guess is that this is a by-product of the fact that playlistItems, as opposed to most of the objects the API can access, have an order to them, and so this behavior implies that the underlying logic iterates sequentially through the playlist.
However, here are a couple of things to think about. First of all, if all you're doing is grabbing the contentDetails, I'm not sure I understand the purpose of the code snippet, as the contentDetails part of a playlistItem object only returns the videoId (which you already have). It may be, of course, that you've just simplified things for this post and you're actually going to retrieve other parts. But is there something unique that requires you to use the playlistItems endpoint? Since you already have the videoId, you could just use the videos->list endpoint and get nearly all the info that you get with playlistItems; in fact, the only thing you'd be missing is the position of the video in the playlist (but if THAT'S what you're after, then your second example above is probably your best bet right now).
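For what it's worth, a quick sketch of that videos->list lookup (here in Python with google-api-python-client rather than the PHP client; the API key and the video ID are placeholders):

# Look the video up directly by ID instead of searching the playlist for it.
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

response = youtube.videos().list(
    part="snippet,contentDetails",
    id="ASDFQWER",  # the videoId you already have
).execute()

video = response["items"][0] if response["items"] else None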