Get current title from MP3 stream on Infomaniak

I'm posting here because I'm working on the new website for a web radio and I have a problem: I want to get info (like the current title) from the radio's MP3 stream, but I have no idea how to do this.
The stream is hosted by Infomaniak.
This is the URL: http://radiomed.ice.infomaniak.ch/radiomed.mp3
Can you help me?
P.S.: Sorry for my English, I'm French.

You can use a tool like ffmpeg to dump the stream's metadata:
ffmpeg -i http://radiomed.ice.infomaniak.ch/radiomed.mp3 -f ffmetadata metadata.txt
Otherwise you can read the StreamTitle metadata directly from the stream's packets.
Example in PHP:
// Ask the server to interleave ICY metadata (StreamTitle) into the stream
$context = stream_context_create(array(
    'http' => array('header' => "Icy-MetaData: 1\r\n"),
));
$handle = fopen('http://radiomed.ice.infomaniak.ch/radiomed.mp3', 'r', false, $context);
while (!feof($handle)) {
    // Read the stream 1000 bytes at a time
    $buffer = stream_get_contents($handle, 1000);
    // Check whether StreamTitle appears in this chunk
    if (strpos($buffer, 'StreamTitle=') !== false) {
        // Slice out the quoted "Artist - Title" value
        $title = explode('StreamTitle=', $buffer)[1];
        return substr($title, 1, strpos($title, ';') - 2);
    }
}
Tip: pass a stream context to fopen() if a user agent is required or if the URL redirects.
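For reference, here is a sturdier variant of the same idea, as a sketch (assuming the server answers with regular HTTP headers, which PHP's http wrapper needs): the ICY protocol tells you exactly where the metadata lives. Send an Icy-MetaData: 1 request header, read the icy-metaint response header (the number of audio bytes between metadata blocks), skip exactly that many bytes, and parse the block that follows.
<?php
$url = 'http://radiomed.ice.infomaniak.ch/radiomed.mp3';
// Ask the server to interleave ICY metadata into the stream
$context = stream_context_create(array(
    'http' => array('header' => "Icy-MetaData: 1\r\n"),
));
$handle = fopen($url, 'r', false, $context);
// Find the metadata interval announced in the response headers
$interval = 0;
$meta = stream_get_meta_data($handle);
foreach ($meta['wrapper_data'] as $header) {
    if (stripos($header, 'icy-metaint:') === 0) {
        $interval = (int) trim(substr($header, 12));
    }
}
// Skip the audio bytes, then read the metadata block:
// one length byte (value x 16) followed by "StreamTitle='...';"
stream_get_contents($handle, $interval);
$length = ord(fread($handle, 1)) * 16;
$metadata = fread($handle, $length);
if (preg_match("/StreamTitle='(.*?)';/", $metadata, $m)) {
    echo $m[1]; // e.g. "Artist - Title"
}
fclose($handle);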

YouTube API - retrieve more than 5k items

I just want to fetch all my liked videos, ~25k items. As far as my research goes, this is not possible via the YouTube v3 API.
I have already found multiple issues (issue, issue) about the same problem. Some claim to have fixed it, but the fix only works for them because they have fewer than 5,000 items in their liked-videos list.
The playlistItems.list API endpoint with the playlist ID set to "liked videos" (LL) has a limit of 5,000.
The videos.list API endpoint has a limit of 1,000.
Unfortunately those endpoints don't provide parameters that would let me paginate the requests myself (e.g. "give me all the liked videos between date x and y"), so I'm forced to take the provided order, and I can't get past the first 5k entries.
Is there any possibility I can fetch all my likes via the API?
A few more thoughts on the reply from @Yarin_007:
If there are deleted videos in the timeline, they appear as "Liked https://...url". The script doesn't like that format and fails, because the underlying elements don't have the same structure as existing videos.
This can easily be fixed with a try/catch:
function collector(all_cards) {
  var liked_videos = {};
  all_cards.forEach(card => {
    try {
      // ignore Dislikes
      if (card.innerText.split("\n")[1].startsWith("Liked")) {
        ....
      }
    }
    catch {
      console.log("error, prolly deleted video")
    }
  })
  return liked_videos;
}
To scroll down to the bottom of the page, I used this simple script; no need to spin up anything big:
var millisecondsToWait = 1000;
setInterval(function() {
  window.scrollTo(0, document.body.scrollHeight);
  console.log("scrolling")
}, millisecondsToWait);
If more people want to retrieve this kind of data, one could think about building a proper script that is more convenient to use. If you check the network requests, you can find the desired data in the responses of requests called batchexecute. One could copy the authentication from one of those requests and provide it to a script that queries those endpoints and prepares the data like the script I currently inject manually.
Hmm, perhaps Google Takeout?
I have verified that the YouTube data contains a CSV called "liked videos.csv". The header is Video Id,Time Added, and the rows look like
dQw4w9WgXcQ,2022-12-18 23:42:19 UTC
prvXCuEA1lw,2022-12-24 13:22:13 UTC
for example.
So you would need to retrieve the video metadata per video ID. Not too bad, though; you can batch the lookups, as in the sketch below.
Note: the export can take a while, especially with 25k videos. (Select only the YouTube data.)
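If you go that route, the v3 videos.list endpoint accepts up to 50 comma-separated IDs per call, so ~25k liked videos come down to ~500 requests. A rough sketch in PHP (YOUR_API_KEY and the CSV path are placeholders; error handling omitted):
<?php
// Read the video IDs from Takeout's CSV (header: Video Id,Time Added)
$rows = array_map('str_getcsv', file('liked videos.csv', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES));
array_shift($rows); // drop the header row
$ids = array_column($rows, 0);

// videos.list accepts up to 50 IDs per request
foreach (array_chunk($ids, 50) as $chunk) {
    $url = 'https://www.googleapis.com/youtube/v3/videos'
         . '?part=snippet,contentDetails'
         . '&id=' . implode(',', $chunk)
         . '&key=YOUR_API_KEY';
    $response = json_decode(file_get_contents($url), true);
    foreach ($response['items'] as $item) {
        echo $item['id'], "\t", $item['snippet']['title'], "\n";
    }
}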
I also had an idea that involves scraping the actual liked-videos page (which would save you 25k HTTP requests), but I'm unsure whether it breaks with more than 5,000 videos. Also, emulating the POST requests on that page may prove quite difficult, albeit not impossible: they fetch /browse?key=..., with some kind of obfuscated/encrypted base64 strings in the request body, among other parameters.
EDIT:
Look, there's probably a normal way to get a complete dump of all your Google data. (I mean, other than Takeout. Email them? I don't know.)
Anyway, the following is the other idea...
Follow this deep link to your liked-videos history.
Scroll to the bottom: maybe with Selenium, maybe with AutoIt, maybe put something heavy on the End key of your keyboard, until you reach your first liked video.
Hit F12 and run this in the developer console:
// https://www.youtube.com/watch?v=eZPXmCIQW5M
// https://myactivity.google.com/page?utm_source=my-activity&hl=en&page=youtube_likes
// go over all "cards" in the activity webpage. (after scrolling down to the absolute bottom of it)
// create a dictionary - the key is the Video ID, the value is a list of the video's properties
function collector(all_cards) {
  var liked_videos = {};
  all_cards.forEach(card => {
    // ignore Dislikes
    if (card.innerText.split("\n")[1].startsWith("Liked")) {
      // horrible parsing. your mileage may vary. I tried to avoid using any gibberish class names.
      let a_links = card.querySelectorAll("a")
      let details = a_links[0];
      let url = details.href.split("?v=")[1]
      let video_length = a_links[3].innerText;
      let time = a_links[2].parentElement.innerText.split(" • ")[0];
      let title = details.innerText;
      let date = card.closest("[data-date]").getAttribute("data-date")
      liked_videos[url] = [title, video_length, date, time];
      // console.log(title, video_length, date, time, url);
    }
  })
  return liked_videos;
}
// https://stackoverflow.com/questions/57709550/how-to-download-text-from-javascript-variable-on-all-browsers
function download(filename, text, type = "text/plain") {
  // Create an invisible <a> element
  const a = document.createElement("a");
  a.style.display = "none";
  document.body.appendChild(a);
  // Set the href to a Blob representation of the data to be downloaded
  a.href = window.URL.createObjectURL(
    new Blob([text], { type })
  );
  // Use the download attribute to set the desired file name
  a.setAttribute("download", filename);
  // Trigger the download by simulating a click
  a.click();
  // Cleanup
  window.URL.revokeObjectURL(a.href);
  document.body.removeChild(a);
}
function main() {
  // gather relevant elements
  var all_cards = document.querySelectorAll("div[aria-label='Card showing an activity from YouTube']")
  var liked_videos = collector(all_cards)
  // download json
  download("liked_videos.json", JSON.stringify(liked_videos))
}
main()
Basically it gathers all the liked videos' details and creates an object where each key is a video ID and each value is [title, video_length, date, time].
It then automatically downloads the JSON as a file.

How to implement deep linking in Roku VideoPlayer Sample channel

We have developed a Roku channel using the Roku VideoPlayer Sample Channel (https://github.com/rokudev/videoplayer-channel). However, a recent submission to Roku was rejected for not providing deep linking capability. The main.brs provides a means to parse a deep link request, which I've been able to implement to obtain my contentID and mediaType based on a curl command as follows:
curl -d '' 'http://192.168.1.24:8060/launch/dev?MediaType=special&contentID=49479'
The main.brs comments say to
launch/prep the content mapped to the contentID here.
We're using the XML files to provide the Roku "categories" screen and the screen for the listing after selecting an item from the categories screen (including the springboard screen). Within these XML files, we tag the contentID and mediaType of every video item.
I'm fairly new to Roku development. While we've been able to create channels previously using their video channel template, I don't know how to "launch/prep the content mapped to the contentID". I've searched and tried various other calls (e.g. playMedia(ContentID, Screen)), but I get debugger errors like "function call operator ( ) attempted on non-function".
I would appreciate some instruction on how to jump to the video's springboard based on the contentID passed in the deep linking command, or a means to play the video based on the contentID in the XML file.
Here's my main.brs:
sub Main(input as Dynamic)
    print "################"
    print "Start of Channel"
    print "################"
    ' Add deep linking support here. Input is an associative array containing
    ' parameters that the client defines. Examples include "options, contentID, etc."
    ' See guide here: https://sdkdocs.roku.com/display/sdkdoc/External+Control+Guide
    ' For example, if a user clicks on an ad for a movie that your app provides,
    ' you will have mapped that movie to a contentID and you can parse that ID
    ' out from the input parameter here.
    ' Call the service provider API to look up
    ' the content details, or right data from feed for id
    if input <> invalid
        print "Received Input -- write code here to check it!"
        if input.reason <> invalid
            if input.reason = "ad" then
                print "Channel launched from ad click"
                'do ad stuff here
            end if
        end if
        if input.contentID <> invalid
            m.contentID = input.contentID
            print "contentID is: " + input.contentID
            print "mediaType is: " + input.mediaType
            'launch/prep the content mapped to the contentID here
        end if
    end if
    showHeroScreen(input)
end sub
' Initializes the scene and shows the main homepage.
' Handles closing of the channel.
sub showHeroScreen(input as object)
    print "main.brs - [showHeroScreen]"
    screen = CreateObject("roSGScreen")
    m.port = CreateObject("roMessagePort")
    screen.setMessagePort(m.port)
    scene = screen.CreateScene("VideoScene")
    m.global = screen.getGlobalNode()
    'Deep link params
    m.global.addFields({ input: input })
    screen.show()
    while(true)
        msg = wait(0, m.port)
        msgType = type(msg)
        if msgType = "roSGScreenEvent"
            if msg.isScreenClosed() then return
        end if
    end while
end sub
I'm thinking that if I can get the deep linking params properly set prior to the screen.show call, it should work? I am able to output the contentID and mediaType values in the debugger when using curl to call the deep link, but the channel just goes to the home screen without cueing up the video that was deep linked.
Any help is appreciated.
Please check my GitHub repo for a simple deep linking example.
A little explanation:
First, I init my main.brs like this:
sub Main(args as Dynamic) as Void
    showSGScreen(args)
end sub
"args" will be provided by Roku Firmware!
In my showSGScreen Sub, you will find:
'DeepLinking
if args.contentId <> invalid AND args.mediaType <> invalid
    m.scene.contentDLId = args.contentId
    m.scene.mediaDPType = args.mediaType
    m.scene.deepLinkingLand = true
end if
Now check out my audiocontent.xml file:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<AudioContent>
<audio actors = "Artist - Unknown" album ="Album - Unknown" title = "KISS FM - LIVE STREAM" streamformat = "es.aac-adts" url = "http://80.86.106.143:9128/kissfm.aacp" HDPosterUrl = "pkg:/images/radio_stations_img/kiss_fm.png" Rating = "true" TrackIDAudio = "1" ShortDescriptionLine1 = "short-form"/>
<audio actors = "Artist - Unknown" album ="Album - Unknown" title = "TDI Radio - LIVE STREAM" streamformat = "mp3" url = "http://streaming.tdiradio.com:8000/tdiradiob" HDPosterUrl = "pkg:/images/radio_stations_img/tdi_radio.png" Rating = "true" TrackIDAudio = "2" ShortDescriptionLine1 = "short-form"/>
<audio actors = "Artist - Unknown" album ="Album - Unknown" title = "Polskie Radio - LIVE STREAM" streamformat = "hls" url = "http://stream85.polskieradio.pl/omniaaxe/k11.stream/playlist.m3u8" HDPosterUrl = "pkg:/images/radio_stations_img/polskie_radio.png" Rating = "true" TrackIDAudio = "3" ShortDescriptionLine1 = "short-form"/>
<audio actors = "Edgar Allan Yo and Shporky Pork" album ="Mony Pajton EP" title = "Niko kao Ja - MP3" streamformat = "mp3" url = "pkg:/mp3/Edgar_Allan_Yo_and_Shporky_Pork_Niko_Kao_Ja.mp3" HDPosterUrl = "pkg:/images/mp3_img/mony_pajton.png" Rating = "false" TrackIDAudio = "4" ShortDescriptionLine1 = "short-form"/>
<audio actors = "Edgar Allan Yo and Shporky Pork" album ="Mony Pajton EP" title = "Zatvaramo Krug - MP3" streamformat = "mp3" url = "pkg:/mp3/Edgar_Allan_Yo_and_Shporky_Pork_Zatvaramo_Krug.mp3" HDPosterUrl = "pkg:/images/mp3_img/mony_pajton.png" Rating = "false" TrackIDAudio = "5" ShortDescriptionLine1 = "short-form"/>
</AudioContent>
Notice the "TrackIDAudio" property.
Now go to the HomeScene.brs file and take a look at this:
if m.audiolist.content.getChild(index).TrackIDAudio = m.contentId
    m.audiolist.jumpToItem = index
    m.audioPlayer.itemContent = m.audiolist.content.getChild(index)
    m.audioPlayer.audioControl = "play"
    m.audioPlayer.setFocus(true)
    m.audioPlayer.visible = true
    exit for
end if
Here I check whether m.contentId (this is actually args.contentId, which we added back in main.brs to the HomeScene's top) is the same as the TrackIDAudio from my LabelList. If they are the same, the app will play that item!
One thing: you can test your deep linking with this TOOL.
Also, you should know that you will need to send Roku an XML feed of your contentIDs and mediaTypes, so that they can link those items to your app when it is published.
So a user clicks on your item in Roku search; Roku sends the contentId and mediaType that you provided in the XML feed to the Roku firmware; the firmware then puts the contentId and mediaType in args, and after that you do what you need with them.
The deep linking tool mimics this.
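You can also mimic it yourself from a terminal with the same kind of ECP request shown in the question, pointed at a sideloaded (dev) channel on your Roku's IP (the IP and IDs below are just the question's example values):
curl -d '' 'http://192.168.1.24:8060/launch/dev?MediaType=special&contentID=49479'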
Some more info about deep linking is available at this link.
***One item in my audiocontent.xml has a bad link. I did this intentionally so I could show what happens when you play a corrupted link, so please do not worry about it when playing around with the repo.
***Please check this, regarding the changes in your question:
OK, you can do it like this as well:
Delete the line m.global.addFields({ input: input }) and add these two instead:
m.global.addFields({inputContentID: input.contentID})
m.global.addFields({inputMediaType: input.mediaType})
Now you can check whether these m.global variables are empty and, if not, start your video immediately.
You can do this check when the content for the video player is ready to be played:
if Len(m.global.inputContentID) > 0 AND Len(m.global.inputMediaType) > 0
    ' Loop through your content and find the item whose ID matches the one saved
    ' in m.global.inputContentID. If you find it, play the video.
    m.Video.control = "play"
    m.global.inputContentID = ""
    m.global.inputMediaType = ""
end if

Export SSRS report directly without rendering it on ReportViewer

I have a set of RDL reports hosted on a Report Server instance. Some of the reports render more than 100,000 records in the ReportViewer, so they take quite a long time to render. We therefore decided to export the content directly from the server, based on the user's input parameters for the report as well as the chosen export file format.
The main thing here: I do not want the user to wait until the export file is available for download. Rather, the user should be able to submit the action and proceed with other work. In the background, the program has to export the file to some physical location, and when the download is available the user will be informed with a notification about the exported file.
I found one approach in this link. I need to know what ways there are to achieve the functionality mentioned above, as well as how to pass the input parameters for the report. Please advise.
Note: I am using XML as the data source for the RDL reports.
EDIT
I found something useful and wrote the code below:
string path = ServerURL + "?" + _reportFolder + "ReportName&rs:Command=Render&rs:Format=PDF";
WebRequest req = WebRequest.Create(path);
string reportParametersQT = String.Empty;
req.Credentials = CredentialCache.DefaultNetworkCredentials;
WebResponse response = req.GetResponse();
Stream stream = response.GetResponseStream();
//screen.Response.Clear();
string enCodeFileName = HttpUtility.UrlEncode("fileName.pdf", System.Text.Encoding.UTF8);
// The word "attachment" in AddHeader is used to directly show the save dialog box in the browser
Response.AddHeader("content-disposition", "attachment; filename=" + enCodeFileName);
Response.BufferOutput = false; // to prevent buffering
Response.ContentType = response.ContentType;
byte[] buffer = new byte[1024];
int bytesRead = 0;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, bytesRead);
}
Response.End();
I am able to download the exported file, but I need to save the file to a physical location instead of downloading it, and I don't know how to do that.
Both of these are very easy to do. You essentially just pass the parameters in the URL that you're calling; for example, for a parameter called "LearnerList" you add &LearnerList=12345 to the URL. For exporting, add the additional parameter rs:Format=PDF (or whatever format you want the file in) to get the report to export as a PDF instead of rendering in the Report Viewer.
Here's an example URL:
https://reporting.MySite.net/ReportServer/Pages/ReportViewer.aspx?/Users+Folders/User/My+Reports/Learner+Details&rs:Format=PDF&LearnerList=202307
Read these two pages, and you should be golden:
https://msdn.microsoft.com/en-us/library/ms155391.aspx
https://msdn.microsoft.com/en-us/library/ms154040.aspx
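Regarding the edit in the question (saving to a physical location instead of streaming to the browser): the same URL works from any HTTP client, so you can fetch it server-side and write the response to disk. A minimal sketch in PHP with cURL, in case it helps; the URL, credentials, and output path are placeholders, and it assumes the report server accepts NTLM/Windows authentication:
<?php
$url = 'https://reporting.MySite.net/ReportServer/Pages/ReportViewer.aspx'
     . '?/Users+Folders/User/My+Reports/Learner+Details'
     . '&rs:Format=PDF&LearnerList=202307';

// Stream the rendered report straight into a file on disk
$fp = fopen('/var/reports/LearnerDetails.pdf', 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);               // write the body to the file
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_NTLM); // SSRS usually sits behind Windows auth
curl_setopt($ch, CURLOPT_USERPWD, 'DOMAIN\\user:password');
curl_exec($ch);
curl_close($ch);
fclose($fp);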

YouTube API - loop through huge data

I need to retrieve the view count for each video of a set of channels, and I'm using this library.
My code is below. It works fine and prints the view count for each video, except that for some videos I get the following warnings and no view count is printed:
A PHP Error was encountered
Severity: Warning
Message: simplexml_load_string() [function.simplexml-load-string]: Entity: line 547: parser error : attributes construct error
Message: simplexml_load_string() [function.simplexml-load-string]: outube_gdata'/>
Message: simplexml_load_string() [function.simplexml-load-string]: ^
Message: simplexml_load_string() [function.simplexml-load-string]: Entity: line 547: parser error : Couldn't find end of Start Tag link line 547
Message: simplexml_load_string() [function.simplexml-load-string]: outube_gdata'/>
How can I deal with this large number of videos and channels without causing these warning messages and losing time? If I try the same code on one channel with fewer videos, I get no errors.
$channels = array('google','apple','mac','xyz','abc','test');
for ($j = 0; $j < count($channels); $j++)
{
    $JSON = file_get_contents("https://gdata.youtube.com/feeds/api/users/".$channels[$j]."/uploads?v=2&alt=jsonc&max-results=0");
    $JSON_Data = json_decode($JSON);
    $total_videos = $JSON_Data->{'data'}->{'totalItems'};
    for ($i = 1; $i <= $total_videos; )
    {
        $this->get_userfeed($channels[$j], $maxresult = 20, $start = $i);
        $i += 20;
    }
}
public function get_userfeed($ch_id, $maxresult = 10, $start = 0, $do = null)
{
    $output = $this->youtube->getUserUploads($ch_id, array('max-results' => $maxresult, 'start-index' => $start));
    $xml = simplexml_load_string($output);
    // single entry for testing
    foreach ($xml->entry as $entry)
    {
        foreach ($entry->id as $key => $val)
        {
            $id = explode('videos/', (string)$val);
            $JSON = file_get_contents("https://gdata.youtube.com/feeds/api/videos/".$id[1]."?v=2&alt=json");
            $JSON_Data = json_decode($JSON);
            $v_count = $JSON_Data->{'entry'}->{'yt$statistics'}->{'viewCount'};
            if ($v_count == NULL) $v_count = 0;
            echo $v_count;
            // store the v_count into database
        }
    }
}
You're doing a few things wrong.
First off, if you want to minimize the number of calls you're making to the API, you should be setting max-results=50, which is the largest value the API supports.
Second, I don't understand why you're making individual calls to http://.../videos/VIDEO_ID to retrieve the statistics for each video, since that information is already returned as part of the video entries you get from the http://.../users/USER_ID/uploads feed. You can just store the values returned by that feed and avoid all those additional per-video calls, as in the sketch below.
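For illustration, the per-channel loop could look something like this (a sketch against the same v2 JSON-C feed the question already uses; field names follow that format, and error handling is omitted):
<?php
$channels = array('google', 'apple', 'mac', 'xyz', 'abc', 'test');
foreach ($channels as $channel) {
    $start = 1;
    while (true) {
        // 50 is the maximum page size the API allows
        $JSON = file_get_contents("https://gdata.youtube.com/feeds/api/users/"
            . $channel . "/uploads?v=2&alt=jsonc&max-results=50&start-index=" . $start);
        $data = json_decode($JSON);
        if (empty($data->data->items)) break; // no more pages
        foreach ($data->data->items as $item) {
            // viewCount is already part of each entry - no per-video call needed
            $v_count = isset($item->viewCount) ? $item->viewCount : 0;
            echo $v_count;
            // store the v_count into the database here
        }
        $start += 50;
    }
}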
Finally, the underlying issue is almost certainly that you're running into quota errors, and you can read more about them at http://apiblog.youtube.com/2010/02/best-practices-for-avoiding-quota.html
Taking any of the steps I mention should cut down on the total requests that you're making and potentially get around any quota problems, but you should familiarize yourself with the quota system anyway.

Using get_video on YouTube to download a video

I am trying to get the video URL of any YouTube video like this:
Open
http://youtube.com/get_video_info?video_id=VIDEOID
then take the account_playback_token token value and open this URL:
http://www.youtube.com/get_video?video_id=VIDEOID&t=TOKEN&fmt=18&asv=2
This should open a page with just the video or start a download of the video. But nothing happens; Safari's activity window says 'Not found', so there is something wrong with the URL. I want to integrate this into an iPad app, and the JavaScript method I use to get the video URL in the iPhone version of the app isn't working, so I need another solution.
YouTube changes all the time, and I think the URL is just outdated. Please help :)
Edit: It seems like the get_video method doesn't work anymore. I'd really appreciate it if anybody could tell me another way to find the video URL.
Thank you, I really need help.
Sorry, that is not possible anymore. They limit the token to the IP address that requested it.
Here's a workaround using the get_headers() function, which gives you an array containing the link to the video. I don't know anything about iOS, so hopefully you can rewrite this PHP code yourself.
<?php
if (empty($_GET['id'])) {
    echo "No id found!";
}
else {
    function url_exists($url) {
        if (file_get_contents($url, FALSE, NULL, 0, 0) === false) return false;
        return true;
    }
    $id = $_GET['id'];
    $page = @file_get_contents('http://www.youtube.com/get_video_info?&video_id='.$id);
    preg_match('/token=(.*?)&thumbnail_url=/', $page, $token);
    $token = urldecode($token[1]);
    $url_array = array(
        "http://youtube.com/get_video?video_id=".$id."&t=".$token,
        "http://youtube.com/get_video?video_id=".$id."&t=".$token."&fmt=18"
    );
    if (url_exists($url_array[1]) === true) {
        $file = get_headers($url_array[1]);
    }
    elseif (url_exists($url_array[0]) === true) {
        $file = get_headers($url_array[0]);
    }
    // Pull the redirect target out of the Location header
    // (the header index may vary between responses)
    $url = str_replace('Location: ', '', $file[19]);
    echo '<a href="'.$url.'">Download video</a>';
}
?>
I use this and it rocks: http://rg3.github.com/youtube-dl/
Just copy a YouTube URL from your browser and execute this command with the URL as the only argument; it will figure out how to find the best-quality video and download it for you.
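For example, with any watch URL (the one below is arbitrary):
youtube-dl "http://www.youtube.com/watch?v=dQw4w9WgXcQ"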
Great! I needed a way to grab a whole playlist of videos.
On Linux, this is what I used:
y=http://www.youtube.com
f="http://gdata.youtube.com/feeds/api/playlists/PLeHqhPDNAZY_3377_DpzRSMh9MA9UbIEN?start-index=26"
for i in $(curl -s $f | grep -o "url='$y/watch?v=[^']*'"); do
    d=$(echo $i | sed "s|url='$y/watch?v=\(.*\)&.*'|\1|")
    youtube-dl --restrict-filenames "$y/watch?v=$d"
done
You have to find the playlist ID in a common YouTube URL like:
https://www.youtube.com/playlist?list=PLeHqhPDNAZY_3377_DpzRSMh9MA9UbIEN
Also, this technique uses the GData API, which is limited to 25 records per page; hence the ?start-index=26 parameter (to get page 2 in my example).
This could use some cleaning up, plus extra logic to iterate through all the sets of 25.
Credits:
https://stackoverflow.com/a/8761493/1069375
http://www.commandlinefu.com/commands/view/3154/download-youtube-playlist (which itself didn't quite work)
