How to implement deep linking in the Roku VideoPlayer Sample channel

We have developed a Roku channel using the Roku VideoPlayer Sample Channel (https://github.com/rokudev/videoplayer-channel). However, a recent submission to Roku was rejected for not providing deep linking capability. The main.brs provides a means to parse a deep link request, which I've implemented to obtain my contentID and mediaType from a curl command as follows:
curl -d '' 'http://192.168.1.24:8060/launch/dev?MediaType=special&contentID=49479'
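For reference, the same ECP deep-link request can be scripted instead of typed by hand. Here is a minimal Python sketch; the IP, port, and parameter names come from the curl command above, the function names are my own, and actually sending the request of course requires a Roku device on your network:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_ecp_launch_url(roku_ip, content_id, media_type, app_id="dev"):
    """Build the External Control Protocol (ECP) launch URL for a deep link."""
    query = urlencode({"MediaType": media_type, "contentID": content_id})
    return "http://%s:8060/launch/%s?%s" % (roku_ip, app_id, query)

def send_deep_link(roku_ip, content_id, media_type):
    """POST the deep link to the Roku, like `curl -d '' <url>` does."""
    url = build_ecp_launch_url(roku_ip, content_id, media_type)
    return urlopen(Request(url, data=b""))  # empty body makes this a POST

# The URL for the curl command shown above:
print(build_ecp_launch_url("192.168.1.24", "49479", "special"))
# → http://192.168.1.24:8060/launch/dev?MediaType=special&contentID=49479
```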
The main.brs comments say to "launch/prep the content mapped to the contentID here."
We're using the xml files to provide the Roku "categories" screen and the screen for the listing after selecting an item from the categories screen (including the springboard screen). Within these xml files, we tag the contentID and mediaType of every video item.
I'm fairly new to Roku development. While we've been able to create channels previously using their video channel template, I don't know how to "launch/prep the content mapped to the contentID". I've searched and tried various other calls (e.g. playMedia(ContentID, Screen)), but I get errors in the debugger like "function call operator ( ) attempted on non-function".
I would appreciate some instruction on how to jump to the springboard of the video based on the value of the contentID passed using the deep linking command. Or a means to play the video based on the contentID in the xml file.
Here's my main.brs:
sub Main(input as Dynamic)
    print "################"
    print "Start of Channel"
    print "################"

    ' Add deep linking support here. Input is an associative array containing
    ' parameters that the client defines. Examples include "options, contentID, etc."
    ' See guide here: https://sdkdocs.roku.com/display/sdkdoc/External+Control+Guide
    ' For example, if a user clicks on an ad for a movie that your app provides,
    ' you will have mapped that movie to a contentID and you can parse that ID
    ' out from the input parameter here.
    ' Call the service provider API to look up the content details,
    ' or the right data from your feed for the id.
    if input <> invalid
        print "Received Input -- write code here to check it!"
        if input.reason <> invalid
            if input.reason = "ad" then
                print "Channel launched from ad click"
                ' do ad stuff here
            end if
        end if
        if input.contentID <> invalid
            m.contentID = input.contentID
            print "contentID is: " + input.contentID
            print "mediaType is: " + input.mediaType
            ' launch/prep the content mapped to the contentID here
        end if
    end if
    showHeroScreen(input)
end sub
' Initializes the scene and shows the main homepage.
' Handles closing of the channel.
sub showHeroScreen(input as object)
    print "main.brs - [showHeroScreen]"
    screen = CreateObject("roSGScreen")
    m.port = CreateObject("roMessagePort")
    screen.setMessagePort(m.port)
    scene = screen.CreateScene("VideoScene")
    m.global = screen.getGlobalNode()
    ' Deep link params
    m.global.addFields({ input: input })
    screen.show()
    while (true)
        msg = wait(0, m.port)
        msgType = type(msg)
        if msgType = "roSGScreenEvent"
            if msg.isScreenClosed() then return
        end if
    end while
end sub
I'm thinking that if I can get the deep linking params properly set prior to the screen.show call, it should work? I am able to output the contentID and mediaType values in the debugger when using curl to call the deep link, but the channel just goes to the home screen without cueing up the video that was deep linked.
Any help is appreciated.

Please check my GitHub repo for a simple deep linking example.
A little explanation:
First, I initialize my main.brs like this:
sub Main(args as Dynamic) as Void
    showSGScreen(args)
end sub
"args" will be provided by the Roku firmware!
In my showSGScreen Sub, you will find:
' DeepLinking
if args.contentId <> invalid AND args.mediaType <> invalid
    m.scene.contentDLId = args.contentId
    m.scene.mediaDPType = args.mediaType
    m.scene.deepLinkingLand = true
end if
Now check out my audiocontent.xml file:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<AudioContent>
<audio actors = "Artist - Unknown" album ="Album - Unknown" title = "KISS FM - LIVE STREAM" streamformat = "es.aac-adts" url = "http://80.86.106.143:9128/kissfm.aacp" HDPosterUrl = "pkg:/images/radio_stations_img/kiss_fm.png" Rating = "true" TrackIDAudio = "1" ShortDescriptionLine1 = "short-form"/>
<audio actors = "Artist - Unknown" album ="Album - Unknown" title = "TDI Radio - LIVE STREAM" streamformat = "mp3" url = "http://streaming.tdiradio.com:8000/tdiradiob" HDPosterUrl = "pkg:/images/radio_stations_img/tdi_radio.png" Rating = "true" TrackIDAudio = "2" ShortDescriptionLine1 = "short-form"/>
<audio actors = "Artist - Unknown" album ="Album - Unknown" title = "Polskie Radio - LIVE STREAM" streamformat = "hls" url = "http://stream85.polskieradio.pl/omniaaxe/k11.stream/playlist.m3u8" HDPosterUrl = "pkg:/images/radio_stations_img/polskie_radio.png" Rating = "true" TrackIDAudio = "3" ShortDescriptionLine1 = "short-form"/>
<audio actors = "Edgar Allan Yo and Shporky Pork" album ="Mony Pajton EP" title = "Niko kao Ja - MP3" streamformat = "mp3" url = "pkg:/mp3/Edgar_Allan_Yo_and_Shporky_Pork_Niko_Kao_Ja.mp3" HDPosterUrl = "pkg:/images/mp3_img/mony_pajton.png" Rating = "false" TrackIDAudio = "4" ShortDescriptionLine1 = "short-form"/>
<audio actors = "Edgar Allan Yo and Shporky Pork" album ="Mony Pajton EP" title = "Zatvaramo Krug - MP3" streamformat = "mp3" url = "pkg:/mp3/Edgar_Allan_Yo_and_Shporky_Pork_Zatvaramo_Krug.mp3" HDPosterUrl = "pkg:/images/mp3_img/mony_pajton.png" Rating = "false" TrackIDAudio = "5" ShortDescriptionLine1 = "short-form"/>
</AudioContent>
Notice "TrackIDAudio" property.
Now go to the HomeScreen.brs file and take a look at this:
if m.audiolist.content.getChild(index).TrackIDAudio = m.contentId
    m.audiolist.jumpToItem = index
    m.audioPlayer.itemContent = m.audiolist.content.getChild(index)
    m.audioPlayer.audioControl = "play"
    m.audioPlayer.setFocus(true)
    m.audioPlayer.visible = true
    exit for
end if
So here is where I check whether m.contentId (this is actually the args.contentId that we added to the HomeScene's top back in main.brs) is the same as the TrackIDAudio from my LabelList. If they are the same, the app will play that item!
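For clarity, that matching step can also be sketched outside BrightScript. A rough Python model, where the TrackIDAudio attribute mirrors audiocontent.xml above and the function name plus the trimmed sample XML are my own:

```python
import xml.etree.ElementTree as ET

def find_item_by_track_id(audio_xml, content_id):
    """Return the <audio> item whose TrackIDAudio matches the deep-link contentId."""
    root = ET.fromstring(audio_xml)
    for item in root.findall("audio"):
        if item.get("TrackIDAudio") == content_id:
            return item.attrib  # the item the player should jump to and play
    return None  # no match: fall back to the normal home screen

# Two items in the shape of audiocontent.xml above
sample = """<AudioContent>
  <audio title="KISS FM - LIVE STREAM" TrackIDAudio="1" streamformat="es.aac-adts"/>
  <audio title="TDI Radio - LIVE STREAM" TrackIDAudio="2" streamformat="mp3"/>
</AudioContent>"""

match = find_item_by_track_id(sample, "2")
print(match["title"])  # → TDI Radio - LIVE STREAM
```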
One thing: you can test your deep linking with this TOOL.
Also, you should know that you will need to send Roku an XML feed of your contentIDs and MediaTypes so that they can link those items with your app when it's published.
So when a user clicks on your item in Roku search, Roku sends the contentId and MediaType that you provided in the XML feed to the Roku firmware; the firmware then puts the contentId and MediaType in args, and after that you do what you need with them.
The deep linking tool mimics this.
You can also find some more info about deep linking on this link.
***One item in my audiocontent.xml has a bad link. I did this intentionally so I could show what happens if you play a corrupted link, so please don't worry about it when playing around with the repo.
***Please check this, regarding the changes in your question:
OK, you can do it like this as well:
Delete this line m.global.addFields({ input: input }) and add these two:
m.global.addFields({inputContentID: input.contentID})
m.global.addFields({inputMediaType: input.mediaType})
Now you can check whether these m.global variables are empty and, if not, start your video immediately. You can check this at the point where the content is ready to be played by the video player:
if Len(m.global.inputContentID) > 0 AND Len(m.global.inputMediaType) > 0
    ' Loop through your content and find the item whose contentID matches
    ' the one saved in m.global.inputContentID. If you find it, play the video.
    m.Video.control = "play"
    m.global.inputContentID = ""
    m.global.inputMediaType = ""
end if


YouTube API - retrieve more than 5k items

I just want to fetch all my liked videos, ~25k items. As far as my research goes, this is not possible via the YouTube v3 API.
I have already found multiple issues (issue, issue) on the same problem. Some claim to have fixed it, but it only works for them because they have fewer than 5,000 items in their liked videos list.
playlistItems list API endpoint with playlist id set to "liked videos" (LL) has a limit of 5000.
videos list API endpoint has a limit of 1000.
Unfortunately, those endpoints don't provide parameters that I could use to paginate the requests myself (e.g. give me all the liked videos between date x and y), so I'm forced to take the provided order (which I can't get past 5k entries with).
Is there any possibility I can fetch all my likes via the API?
Some more thoughts on the reply from @Yarin_007:
If there are deleted videos in the timeline, they appear as "Liked https://...url"; the script doesn't like that format and fails, as the underlying elements don't have the same structure as existing videos.
This can easily be fixed with a try/catch:
function collector(all_cards) {
  var liked_videos = {};
  all_cards.forEach(card => {
    try {
      // ignore Dislikes
      if (card.innerText.split("\n")[1].startsWith("Liked")) {
        ....
      }
    }
    catch {
      console.log("error, prolly deleted video")
    }
  })
  return liked_videos;
}
To scroll down to the bottom of the page, I've used this simple script; no need to spin up something big:
var millisecondsToWait = 1000;
setInterval(function() {
  window.scrollTo(0, document.body.scrollHeight);
  console.log("scrolling")
}, millisecondsToWait);
If more people want to retrieve this kind of data, one could think about building a proper script that is more convenient to use. If you check the network requests, you can find the desired data in the responses of requests called batchexecute. One could copy the authentication from one of them and provide it to a script that queries those endpoints and prepares the data like the script I currently inject manually.
Hmm. perhaps Google Takeout?
I have verified that the YouTube data contains a CSV called "liked videos.csv". The header is Video Id,Time Added, and the rows are
dQw4w9WgXcQ,2022-12-18 23:42:19 UTC
prvXCuEA1lw,2022-12-24 13:22:13 UTC
for example.
So you would need to retrieve video metadata per video ID. Not too bad though.
Note: the export could take a while, especially with 25k videos. (select only YouTube data)
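If you go the Takeout route, the CSV is straightforward to post-process. A small Python sketch; the filename, header, and sample rows come from the export described above, and batching by 50 reflects the v3 videos.list limit of 50 IDs per call:

```python
import csv, io

def video_ids_from_takeout(csv_text):
    """Extract video IDs from Takeout's 'liked videos.csv' (header: Video Id,Time Added)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Video Id"] for row in reader]

def chunked(ids, size=50):
    """The videos.list endpoint accepts up to 50 comma-separated IDs per call."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

sample = "Video Id,Time Added\ndQw4w9WgXcQ,2022-12-18 23:42:19 UTC\nprvXCuEA1lw,2022-12-24 13:22:13 UTC\n"
ids = video_ids_from_takeout(sample)
print(ids)           # → ['dQw4w9WgXcQ', 'prvXCuEA1lw']
print(chunked(ids))  # one batch, since we have fewer than 50 IDs
```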
I also had an idea that involves scraping the actual liked videos page (which would save you 25k HTTP requests), but I'm unsure if it breaks with more than 5,000 songs. (Also, emulating the POST requests on that page may prove quite difficult, albeit not impossible: they fetch /browse?key=..., and have some kind of obfuscated/encrypted base64 strings in the request body, among other parameters.)
EDIT:
Look, there's probably a normal way to get a complete dump of all your Google data (I mean, other than Takeout. Email them? I don't know).
Anyway, the following is the other idea...
Follow this deep link to your liked videos history.
Scroll to the bottom... maybe with Selenium, maybe with AutoIt, maybe put something on the End key of your keyboard until you reach your first liked video.
Hit F12 and run this in the developer console:
// https://www.youtube.com/watch?v=eZPXmCIQW5M
// https://myactivity.google.com/page?utm_source=my-activity&hl=en&page=youtube_likes
// go over all "cards" in the activity webpage (after scrolling down to the absolute bottom of it)
// create a dictionary - the key is the video ID, the value is a list of the video's properties
function collector(all_cards) {
  var liked_videos = {};
  all_cards.forEach(card => {
    // ignore Dislikes
    if (card.innerText.split("\n")[1].startsWith("Liked")) {
      // horrible parsing. your mileage may vary. I tried to avoid using any gibberish class names.
      let a_links = card.querySelectorAll("a")
      let details = a_links[0];
      let url = details.href.split("?v=")[1]
      let video_length = a_links[3].innerText;
      let time = a_links[2].parentElement.innerText.split(" • ")[0];
      let title = details.innerText;
      let date = card.closest("[data-date]").getAttribute("data-date")
      liked_videos[url] = [title, video_length, date, time];
      // console.log(title, video_length, date, time, url);
    }
  })
  return liked_videos;
}

// https://stackoverflow.com/questions/57709550/how-to-download-text-from-javascript-variable-on-all-browsers
function download(filename, text, type = "text/plain") {
  // Create an invisible A element
  const a = document.createElement("a");
  a.style.display = "none";
  document.body.appendChild(a);
  // Set the HREF to a Blob representation of the data to be downloaded
  a.href = window.URL.createObjectURL(
    new Blob([text], { type })
  );
  // Use the download attribute to set the desired file name
  a.setAttribute("download", filename);
  // Trigger the download by simulating a click
  a.click();
  // Cleanup
  window.URL.revokeObjectURL(a.href);
  document.body.removeChild(a);
}

function main() {
  // gather relevant elements
  var all_cards = document.querySelectorAll("div[aria-label='Card showing an activity from YouTube']")
  var liked_videos = collector(all_cards)
  // download json
  download("liked_videos.json", JSON.stringify(liked_videos))
}

main()
Basically, it gathers all the liked videos' details and creates a key: video_ID, value: [title, video_length, date, time] entry for each liked video.
It then automatically downloads the JSON as a file.

Google Workspace Add-ons Quickstart Cats stopped working on iOS gmail

I've started to receive reports that the add-on we have been using for a while stopped working on iOS mobile. I was able to confirm this and started to dig into the issue. I've been able to reproduce the error with the "Quickstart: Cats Google Workspace Add-on" using multiple Google accounts (Gmail and Workspace). It appears that cards, navigation, and notifications returned from a function called by a button in a card do not render in iOS Gmail. The code works on desktop. I was able to demonstrate this with the Quickstart tutorial: I removed the need for whitelisted URLs on mobile using the following createCatCard, which shows the current time instead of a cat picture. This project works on desktop but not in iOS Gmail.
I've reported this to google via their issue tracker (https://issuetracker.google.com/issues/210484310) but they state that "The Issue Tracker is intended for reporting issues and asking for features related to development with/on Google products, therefore it is not the best place to ask for support on this matter."
Does anyone know if iOS Gmail add-ons are still supported and if there is a workaround for this behavior?
/**
 * Creates a card with the current time, overlaid with the text.
 * @param {String} text The text to overlay on the image.
 * @param {Boolean} isHomepage True if the card created here is a homepage;
 *     false otherwise. Defaults to false.
 * @return {CardService.Card} The assembled card.
 */
function createCatCard(text, isHomepage) {
  // Explicitly set the value of isHomepage to false if null or undefined.
  if (!isHomepage) {
    isHomepage = false;
  }
  // Replace forward slashes in the text, as they break the CataaS API.
  var caption = text.replace(/\//g, ' ');
  // Show the current time instead of the quickstart's cat image, so no
  // whitelisted image URLs are needed on mobile.
  var d = new Date();
  var textLabel = CardService.newTextParagraph().setText("Current Time." + d.toLocaleTimeString());
  // Create a button that changes the cat image when pressed.
  // Note: Action parameter keys and values must be strings.
  var action = CardService.newAction()
      .setFunctionName('onChangeCat')
      .setParameters({text: text, isHomepage: isHomepage.toString()});
  var button = CardService.newTextButton()
      .setText('Change cat')
      .setOnClickAction(action)
      .setTextButtonStyle(CardService.TextButtonStyle.FILLED);
  var buttonSet = CardService.newButtonSet()
      .addButton(button);
  // Assemble the widgets and return the card.
  var section = CardService.newCardSection()
      .addWidget(textLabel)
      .addWidget(buttonSet);
  var card = CardService.newCardBuilder()
      .addSection(section);
  if (!isHomepage) {
    // Create the header shown when the card is minimized,
    // but only when this card is a contextual card. Peek headers
    // are never used by non-contextual cards like homepages.
    var peekHeader = CardService.newCardHeader()
        .setTitle('Contextual Cat')
        .setImageUrl('https://www.gstatic.com/images/icons/material/system/1x/pets_black_48dp.png')
        .setSubtitle(text);
    card.setPeekCardHeader(peekHeader);
  }
  return card.build();
}

Building a playlist to control playback options for each media file individually in VLC/Lua

I have a playlist containing video files. For each playlist item I want to control the playlist mode for whether each track should "repeat", "play-and-stop" etc. in VLC using a Lua script.
file:///data/video1.mp4,repeat
file:///data/video2.mp4,play-and-stop
The aim is that some video tracks should repeat indefinitely until the user manually advances to the next track. Other tracks in the playlist should play once and then advance to the next track, or play-and-stop and wait for the user to interact before play commences again.
I currently have the following code, adapted from here; however, I'm unable to apply the playlist options to each track individually (the options apply to the whole playlist). Is there any way I can extend my script to achieve this?
function probe()
    return string.match(vlc.path, "%.myplaylist$")
end

function parse()
    playlist = {}
    while true do
        playlist_item = {}
        line = vlc.readline()
        if line == nil then
            break
        end
        -- parse playlist line into two tokens, splitting on comma
        values = {}
        i = 0
        for word in string.gmatch(line, '([^,]+)') do
            values[i] = word
            i = i + 1
        end
        playlist_item.path = values[0]
        playback_mode = values[1]
        playlist_item.options = {}
        table.insert(playlist_item.options, "fullscreen")
        table.insert(playlist_item.options, playback_mode)
        -- add the item to the playlist
        table.insert(playlist, playlist_item)
    end
    return playlist
end
Adding "video options" to playlist_item.options is working, but adding "playlist options" on a per track basis does not. I'm unsure how to proceed as I can only return an entire playlist.
If you are interested, you can solve the repeat problem by creating a playlist in VLC and saving it as an XSPF file. Then you need to edit the file in a text editor and add this inside the extension tag of the track you want to repeat:
<vlc:option>input-repeat=9999</vlc:option>
Example:
<track>
  <location>file:///C:/Users/Notebook/Desktop/17-LOOP.mp4</location>
  <duration>10048</duration>
  <extension application="http://www.videolan.org/vlc/playlist/0">
    <vlc:id>1</vlc:id>
    <vlc:option>input-repeat=9999</vlc:option>
  </extension>
</track>
By doing this, the moment this file is played in the playlist it will repeat 9999 times (you can change this if the file is too short) or until you press next. Then the playlist will continue.
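With many tracks, hand-editing gets tedious, so the fragment can also be generated. A rough Python string-templating sketch; a real XSPF file declares XML namespaces, which this deliberately ignores, and the function names are my own (the application URL and input-repeat value come from the example above):

```python
def repeat_option(times=9999):
    """The <vlc:option> line to paste inside a track's <extension> element."""
    return "<vlc:option>input-repeat=%d</vlc:option>" % times

def track_fragment(location, track_id, repeat_times=None):
    """Build a <track> fragment in the shape of the example above (hand-editable XSPF)."""
    lines = ["<track>",
             "  <location>%s</location>" % location,
             '  <extension application="http://www.videolan.org/vlc/playlist/0">',
             "    <vlc:id>%d</vlc:id>" % track_id]
    if repeat_times:
        lines.append("    " + repeat_option(repeat_times))
    lines += ["  </extension>", "</track>"]
    return "\n".join(lines)

print(track_fragment("file:///C:/Users/Notebook/Desktop/17-LOOP.mp4", 1, 9999))
```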
Given a custom playlist
file:///data/video1.mp4,repeat
file:///data/video2.mp4,play-once
I completed the playlist script in the original question above by adding the repeat/play-once information to the track metadata.
playlist_item.meta = { ["Playback mode"] = playback_mode }
The final step was to create an extension (similar to the Song Tracker extension) that listens to the input_changed event and uses the "Playback mode" track metadata to toggle vlc.playlist.repeat_() accordingly.
function activate()
    update_playback_mode()
end

function input_changed()
    update_playback_mode()
end

function update_playback_mode()
    if vlc.input.is_playing() then
        local item = vlc.item or vlc.input.item()
        if item then
            local meta = item:metas()
            if meta then
                local repeat_track = meta["Playback mode"]
                if repeat_track == nil then
                    repeat_track = false
                elseif string.lower(repeat_track) == "repeat" then
                    repeat_track = true
                else
                    repeat_track = false
                end
                local player_mode = vlc.playlist.repeat_()
                -- toggle playlist.repeat_() as required
                if player_mode and not repeat_track then
                    vlc.playlist.repeat_()
                elseif not player_mode and repeat_track then
                    vlc.playlist.repeat_()
                end
                return true
            end
        end
    end
end

Youtube Channel Gallery getting no longer supported video from YouTube

Suddenly, the feed on our homepage shows the "This device is no longer supported" video from YouTube. Everyone sees this no matter what device they are on, so we think YouTube no longer accepts something about the plugin after YouTube's changes, since YouTube knew its videos would not be viewable on certain devices and televisions: https://youtube.com/devicesupport
What is the fix for this? Are you pushing out an update to address this? Thanks.
https://wordpress.org/plugins/youtube-channel-gallery/
I would recommend taking a read through this thread in the support forum for the plugin:
https://wordpress.org/support/topic/getting-no-longer-supported-video-from-youtube
There are some different solutions, depending on whether you are using the widget or the shortcode, but a few different approaches came up depending on what is easiest for you. Personally, I favor this one:
open the file wp-content/plugins/youtube-channel-gallery.php
go to line 622 and paste this (directly under the foreach line): if ($entry->title == 'https://youtube.com/devicesupport') { continue; }
let the plugin display 1 more video than before (maxitems)
What it does is just throw away the "device support" video from the video feed. There's one video less now, which is why you have to add 1 to maxitems.
*Credit goes to WordPress.org forums member "koem"
I had the same issue with the YouTube videos, and I fixed it as follows:
After installing the YouTube plugin on our site, you can see the folder containing the file named youtube-feeder.php.
There is a function named "getFeedUrl", and this function is called from the function
So the line is:
$dataURL = "To get the above URL please follow the below step";
https://developers.google.com/apis-explorer/#p/youtube/v3/youtube.channels.list
Set part to contentDetails and forUsername to your channel's username, e.g. Google,
then execute, and find "uploads": "UUK8sQmJBp8GCxrOtXWBpyEA" in the result (the ID will be different for your channel).
Then go to the following:
https://developers.google.com/apis-explorer/#p/youtube/v3/youtube.playlistItems.list
Set part to snippet and paste the copied uploads ID into playlistId, then execute. You can see the result, and you just copy the Request URL from it.
Note: you need to register your application for an API_KEY:
developers.google.com/youtube/registering_an_application
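The two API-explorer steps above boil down to two request URLs, which can also be built programmatically. A Python sketch; YOUR_API_KEY is a placeholder and the helper names are my own, while the endpoints and parameters are the v3 channels.list and playlistItems.list calls the steps describe:

```python
from urllib.parse import urlencode

API_BASE = "https://www.googleapis.com/youtube/v3"

def channels_url(username, api_key):
    """Step 1: look up the channel's contentDetails to find its 'uploads' playlist ID."""
    return API_BASE + "/channels?" + urlencode(
        {"part": "contentDetails", "forUsername": username, "key": api_key})

def playlist_items_url(uploads_id, api_key, max_results=50):
    """Step 2: list the videos in the uploads playlist."""
    return API_BASE + "/playlistItems?" + urlencode(
        {"part": "snippet", "playlistId": uploads_id,
         "maxResults": max_results, "key": api_key})

print(channels_url("Google", "YOUR_API_KEY"))
print(playlist_items_url("UUK8sQmJBp8GCxrOtXWBpyEA", "YOUR_API_KEY"))
```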
Second:
You also need to change the parsing section:
OLD code:
$entries = array();
if ($data && is_array($data['feed']['entry']))
{
    foreach ($data['feed']['entry'] as $vid)
    {
        $vid = $vid['media$group'];
        $url = $vid['media$content'][0]['url'];
        $url = substr($url, 0, stripos($url, '?'));
        $entries[] = array(
            'id' => $vid['yt$videoid']['$t'],
            'url' => $url,
            'title' => $vid['media$title']['$t'],
            'date' => $vid['yt$uploaded']['$t'],
            'content' => $vid['media$description']['$t']
        );
    }
}
New Modification:
$entries = array();
if ($data && is_array($data['items']))
{
    foreach ($data['items'] as $vid)
    {
        //var_dump($data['items']);
        $vid = $vid['snippet'];
        // Note: PHP string concatenation uses '.', not '+'
        $url = "https://www.youtube.com/watch?v=" . $vid['resourceId']['videoId'];
        //$url = substr($url, 0, stripos($url, '?'));
        $entries[] = array(
            'id' => $vid['resourceId']['videoId'],
            'url' => $url,
            'title' => $vid['title'],
            'date' => $vid['publishedAt'],
            'content' => $vid['description']
        );
    }
}
Then check it out. If the exact data isn't coming through, please also comment out the following line:
`if(!$entries = $this->getCachedYoutubeData($id, $cache, $type, $orderby))`
Please write here if you have any questions.
The YouTube API v3 is very different from API v2. I just switched plugins - wpyoutubevideogallery.com - easy to set up and fully functional with the YouTube API v3. Also free.

YouTube API - loop through huge data

I need to retrieve the 'view count' for each video in each channel, and I'm using this library.
This is my code.
The code works fine and prints the view count for each video, except that for some videos I get the warnings below without the view count being printed:
A PHP Error was encountered Severity: Warning Message:
simplexml_load_string() [function.simplexml-load-string]: Entity:
line 547: parser error : attributes construct error
Message:
simplexml_load_string() [function.simplexml-load-string]:
outube_gdata'/>
Message:
simplexml_load_string() [function.simplexml-load-string]: ^
Message:
simplexml_load_string() [function.simplexml-load-string]: Entity:
line 547: parser error : Couldn't find end of Start Tag link line 547
Message: simplexml_load_string() [function.simplexml-load-string]:
outube_gdata'/>
How can I deal with this large number of videos and channels without causing these warning messages and losing time? If I try the same code on one channel with fewer videos, I get no errors.
$channels = array('google','apple','mac','xyz','abc','test');
for ($j = 0; $j < count($channels); $j++)
{
    $JSON = file_get_contents("https://gdata.youtube.com/feeds/api/users/".$channels[$j]."/uploads?v=2&alt=jsonc&max-results=0");
    $JSON_Data = json_decode($JSON);
    $total_videos = $JSON_Data->{'data'}->{'totalItems'};
    for ($i = 1; $i <= $total_videos; )
    {
        $this->get_userfeed($channels[$j], $maxresult = 20, $start = $i);
        $i += 20;
    }
}

public function get_userfeed($ch_id, $maxresult = 10, $start = 0, $do = null)
{
    $output = $this->youtube->getUserUploads($ch_id, array('max-results' => $maxresult, 'start-index' => $start));
    $xml = simplexml_load_string($output);
    // single entry for testing
    foreach ($xml->entry as $entry)
    {
        foreach ($entry->id as $key => $val)
        {
            $id = explode('videos/', (string)$val);
            $JSON = file_get_contents("https://gdata.youtube.com/feeds/api/videos/".$id[1]."?v=2&alt=json");
            $JSON_Data = json_decode($JSON);
            $v_count = $JSON_Data->{'entry'}->{'yt$statistics'}->{'viewCount'};
            if ($v_count == NULL) $v_count = 0;
            echo $v_count;
            // store the v_count into database
        }
    }
}
You're doing a few things wrong.
First off, if you want to minimize the number of calls you're making to the API, you should be setting max-results=50, which is the largest value that the API supports.
Second, I don't understand why you're making individual calls to http://.../videos/VIDEO_ID to retrieve the statistics for each video, since that information is already returned as part of the video entries you're getting from the http://.../users/USER_ID/uploads feed. You can just store the values returned by that feed and avoid having to make all those additional calls to retrieve each video.
Finally, the underlying issue is almost certainly that you're running into quota errors, and you can read more about them at http://apiblog.youtube.com/2010/02/best-practices-for-avoiding-quota.html
Taking any of the steps I mention should cut down on the total requests that you're making and potentially get around any quota problems, but you should familiarize yourself with the quota system anyway.
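The two fixes (paging by the 50-item maximum and reading statistics straight out of the uploads feed, instead of one call per video) can be sketched like this in Python. The entry shape is an assumption loosely modeled on the v2 JSON output, and the GData API itself has since been retired, so this only illustrates the pagination and extraction logic:

```python
def page_ranges(total_videos, page_size=50):
    """start-index values for paging through a GData feed; 50 is the API's max page size."""
    return list(range(1, total_videos + 1, page_size))

def view_counts_from_feed(entries):
    """Pull viewCount straight out of the uploads-feed entries (no per-video calls).
    Entry shape is an assumption, loosely modeled on the v2 JSON."""
    counts = {}
    for entry in entries:
        stats = entry.get("yt$statistics") or {}
        counts[entry["id"]] = int(stats.get("viewCount", 0))
    return counts

print(page_ranges(120))  # → [1, 51, 101]
entries = [{"id": "a1", "yt$statistics": {"viewCount": "42"}}, {"id": "b2"}]
print(view_counts_from_feed(entries))  # → {'a1': 42, 'b2': 0}
```

Batching this way turns 120 videos into three feed requests instead of 120+ individual statistics calls, which is also what keeps you under the quota.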
