Online audio stream using ruby on rails - ruby-on-rails

I'm trying to write a small website that can stream audio online (a radio station) and have a few questions:
1. Do I have to index all my music files in a database, or can I randomly pick a file from the file system and play it?
2. When should I use Ajax to load the next song (right after the last one finishes, or a few seconds earlier, to get the response from the server with the link to the file)?
3. Is it worth using Ajax, or is it better to build a list that plays all the way through and then starts over?

I think you are asking the wrong questions.
You need to think about how you will play the audio in the browser (what player will you be using?).
You don't need Ruby on Rails to deliver the files to the client; they can be requested directly from the web server (Apache or Nginx).
Rails is only required for rendering the website alongside the player that will then play the audio. Depending on the player you use, you can either control it through JavaScript to request the next song from the server, or simply write a static playlist to the client that then gets played.

Regarding the streaming, I agree with @Tigraine.
Now:
Single song play:
1. If you want to play an individual song, have the action behind your Ajax call return a key/value pair in the JSON response containing the audio URL of the requested song.
For a playlist:
2. Return an array containing the audio URLs in the same order in which they were added to the playlist.
Then, in your JavaScript, declare the variables:
var current_song;
var total_songs;
JS audio players commonly provide a callback that fires when a song finishes, so on song completion you can increment the song counter:
current_song += 1;
When the counter reaches the end, reset it to zero.
Sample code follows. Here I am using jPlayer, one of the best and most customizable players, but you can use any other player.
var current_song = 0;
var total_songs;
var song_id_array;
$.ajax({
    url: "<%= url_for :controller => 'cont_x', :action => 'song_in_playlist' %>",
    data: 'playlist_id=' + encodeURIComponent(playlist_id),
    cache: false,
    success: function (data) {
        // data.array contains the array of audio URLs for the songs in the selected playlist, as returned by the action
        song_id_array = data.array;
        total_songs = song_id_array.length;
        current_song = 0;
        play_right_song(current_song); // start playback only once the URLs have arrived
    }
});
function play_right_song(current_song) {
    $("#jquery_jplayer_1").jPlayer("destroy");
    var audio_url_inside = song_id_array[current_song];
    $('#jquery_jplayer_1').jPlayer({
        ready: function (event) {
            $(this).jPlayer("setMedia", {
                mp3: audio_url_inside
            }).jPlayer("play");
        },
        ended: function () { // the $.jPlayer.event.ended event
            current_song += 1;
            if (current_song >= total_songs) {
                current_song = 0;
            }
            play_right_song(current_song);
        },
        swfPath: "<%= asset_path('Jplayer.swf') %>",
        supplied: "mp3"
    });
}
And if you want to handle next and previous songs in the playlist, just ask for the code; I will update the answer.
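The next/previous handling mentioned above mostly comes down to wrap-around index arithmetic. A minimal sketch, assuming the same current_song/total_songs counters as in the sample (the function names here are mine, not jPlayer's):

```javascript
// Wrap-around index helpers for "next" and "previous" buttons.
// `total` is the number of songs in the playlist.
function nextIndex(current, total) {
    return (current + 1) % total;
}

function prevIndex(current, total) {
    // add `total` before taking the modulo so the result is never negative
    return (current - 1 + total) % total;
}
```

Button click handlers would then just call play_right_song(nextIndex(current_song, total_songs)) or the prevIndex equivalent.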

Related

YouTube API - retrieve more than 5k items

I just want to fetch all my liked videos, roughly 25k items. As far as my research goes, this is not possible via the YouTube v3 API.
I have already found multiple issues (issue, issue) on the same problem; some claim to have fixed it, but it only works for them because they have fewer than 5,000 items in their liked-videos list.
The playlistItems.list API endpoint with the playlist ID set to "liked videos" (LL) has a limit of 5,000.
The videos.list API endpoint has a limit of 1,000.
Unfortunately those endpoints don't provide parameters I could use to paginate the requests myself (e.g. give me all the liked videos between dates x and y), so I'm forced to take the provided order (which I can't get past 5k entries with).
Is there any possibility to fetch all my likes via the API?
More thoughts on the reply from @Yarin_007:
If there are deleted videos in the timeline they appear as "Liked https://...url"; the script doesn't like that format and fails, as the underlying elements don't have the same structure as existing videos.
This can easily be fixed with a try/catch:
function collector(all_cards) {
    var liked_videos = {};
    all_cards.forEach(card => {
        try {
            // ignore Dislikes
            if (card.innerText.split("\n")[1].startsWith("Liked")) {
                ....
            }
        } catch {
            console.log("error, probably a deleted video");
        }
    });
    return liked_videos;
}
To scroll down to the bottom of the page I've used this simple script; no need to spin up something big:
var millisecondsToWait = 1000;
setInterval(function () {
    window.scrollTo(0, document.body.scrollHeight);
    console.log("scrolling");
}, millisecondsToWait);
If more people want to retrieve this kind of data, one could think about building a proper script that is more convenient to use. If you check the network requests you can find the desired data in the responses of requests called batchexecute. One could copy the authentication from one of them, provide it to a script that queries those endpoints, and prepare the data like the other script I currently inject manually.
Hmm, perhaps Google Takeout?
I have verified that the YouTube data contains a CSV called "liked videos.csv". The header is Video Id,Time Added, and the rows are
dQw4w9WgXcQ,2022-12-18 23:42:19 UTC
prvXCuEA1lw,2022-12-24 13:22:13 UTC
for example.
So you would need to retrieve video metadata per video ID. Not too bad though.
Note: the export could take a while, especially with 25k videos. (select only YouTube data)
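Retrieving the metadata for each video ID from that CSV can be done with the videos.list endpoint, which accepts up to 50 comma-separated IDs per request, so 25k likes come to about 500 requests rather than 25k. A sketch of the batching step (chunkIds is a hypothetical helper name):

```javascript
// Split a list of video IDs into batches of at most 50, the maximum
// number of IDs videos.list accepts per request.
function chunkIds(ids, size = 50) {
    const batches = [];
    for (let i = 0; i < ids.length; i += size) {
        batches.push(ids.slice(i, i + size));
    }
    return batches;
}

// Each batch then becomes one comma-separated `id` parameter, e.g.:
// GET https://www.googleapis.com/youtube/v3/videos?part=snippet&id=<id1,...,id50>&key=...
```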
I also had an idea that involves scraping the actual liked-videos page (which would save you 25k HTTP requests), but I'm unsure whether it breaks with more than 5,000 videos. (Also, emulating the POST requests on that page may prove quite difficult, albeit not impossible: they fetch /browse?key=..., and have some kind of obfuscated/encrypted base64 strings in the request body, among other parameters.)
EDIT:
Look, there's probably a normal way to get a complete dump of all your Google data (I mean, other than Takeout. Email them? I don't know.)
Anyway, here is the other idea:
Follow this deep link to your liked-videos history.
Scroll to the bottom; maybe with Selenium, maybe with AutoIt, maybe put something on the End key of your keyboard, until you reach your first liked video.
Hit F12 and run this in the developer console:
// https://www.youtube.com/watch?v=eZPXmCIQW5M
// https://myactivity.google.com/page?utm_source=my-activity&hl=en&page=youtube_likes
// go over all "cards" in the activity webpage (after scrolling down to the absolute bottom of it)
// create a dictionary - the key is the video ID, the value is a list of the video's properties
function collector(all_cards) {
    var liked_videos = {};
    all_cards.forEach(card => {
        // ignore Dislikes
        if (card.innerText.split("\n")[1].startsWith("Liked")) {
            // horrible parsing. your mileage may vary. I tried to avoid using any gibberish class names.
            let a_links = card.querySelectorAll("a");
            let details = a_links[0];
            let url = details.href.split("?v=")[1];
            let video_length = a_links[3].innerText;
            let time = a_links[2].parentElement.innerText.split(" • ")[0];
            let title = details.innerText;
            let date = card.closest("[data-date]").getAttribute("data-date");
            liked_videos[url] = [title, video_length, date, time];
            // console.log(title, video_length, date, time, url);
        }
    });
    return liked_videos;
}

// https://stackoverflow.com/questions/57709550/how-to-download-text-from-javascript-variable-on-all-browsers
function download(filename, text, type = "text/plain") {
    // create an invisible <a> element
    const a = document.createElement("a");
    a.style.display = "none";
    document.body.appendChild(a);
    // set the href to a Blob representation of the data to be downloaded
    a.href = window.URL.createObjectURL(
        new Blob([text], { type })
    );
    // use the download attribute to set the desired file name
    a.setAttribute("download", filename);
    // trigger the download by simulating a click
    a.click();
    // cleanup
    window.URL.revokeObjectURL(a.href);
    document.body.removeChild(a);
}

function main() {
    // gather the relevant elements
    var all_cards = document.querySelectorAll("div[aria-label='Card showing an activity from YouTube']");
    var liked_videos = collector(all_cards);
    // download the JSON
    download("liked_videos.json", JSON.stringify(liked_videos));
}

main();
Basically it gathers all the liked videos' details and creates an object where each key is a video ID and the value is [title, video_length, date, time].
It then automatically downloads the JSON as a file.
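If you would rather end up with a CSV like the Takeout export instead of JSON, the collected object can be flattened before download. A small sketch (toCsv is a hypothetical helper; the [title, video_length, date, time] value layout matches the collector above):

```javascript
// Flatten the { videoId: [title, video_length, date, time] } object the
// collector builds into CSV text, one row per liked video.
function toCsv(likedVideos) {
    var lines = ['Video Id,Title,Length,Date,Time'];
    Object.keys(likedVideos).forEach(function (id) {
        var v = likedVideos[id];
        // quote the title (doubling any embedded quotes) in case it contains commas
        var title = '"' + String(v[0]).replace(/"/g, '""') + '"';
        lines.push([id, title, v[1], v[2], v[3]].join(','));
    });
    return lines.join('\n');
}
```

Passing the result to the download() helper above with type "text/csv" would save it as a .csv file instead.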

YouTube API: differentiating between Premiered and Livestream

I am using the YouTube Data API and trying to differentiate prior livestreams from premiered content. The liveStreamingDetails field in the videos list is populated for both livestreams and premieres. Is there a way I can differentiate between the two?
Below is my Python code for getting the livestream start time. If it's not populated, then I know that video is not a livestream. But the problem is that this value is populated for premiered content as well.
vid_request = youtube.videos().list(
    part='contentDetails, statistics, snippet, liveStreamingDetails, status',
    id=','.join(vid_ids)
)
vid_response = vid_request.execute()
for videoitem in vid_response['items']:
    try:
        livestreamStartTime = videoitem['liveStreamingDetails']['actualStartTime']
    except KeyError:
        livestreamStartTime = ''
Any pointers on what could work would really help?

Building a playlist to control playback options for each media file individually in VLC/Lua

I have a playlist containing video files. For each playlist item I want to control whether that track should "repeat", "play-and-stop", etc. in VLC using a Lua script.
file:///data/video1.mp4,repeat
file:///data/video2.mp4,play-and-stop
The aim is that some video tracks should repeat indefinitely until the user manually advances to the next track. Other tracks in the playlist should play once and then advance to the next track, or play-and-stop and wait for the user to interact before play commences again.
I currently have the following code, adapted from here; however, I'm unable to apply the playlist options to each track individually (the options apply to the whole playlist). Is there any way I can extend my script to achieve this?
function probe()
    return string.match(vlc.path, "%.myplaylist$")
end

function parse()
    playlist = {}
    while true do
        playlist_item = {}
        line = vlc.readline()
        if line == nil then
            break
        end
        -- parse the playlist line into two tokens, splitting on the comma
        values = {}
        i = 0
        for word in string.gmatch(line, '([^,]+)') do
            values[i] = word
            i = i + 1
        end
        playlist_item.path = values[0]
        playback_mode = values[1]
        playlist_item.options = {}
        table.insert(playlist_item.options, "fullscreen")
        table.insert(playlist_item.options, playback_mode)
        -- add the item to the playlist
        table.insert(playlist, playlist_item)
    end
    return playlist
end
Adding "video options" to playlist_item.options works, but adding "playlist options" on a per-track basis does not. I'm unsure how to proceed, as I can only return an entire playlist.
If you are interested, you can solve the repeat problem by creating a playlist in VLC and saving it as an XSPF file. Then you need to edit the file with Notepad and add this inside the extension tag of the track you want to repeat:
<vlc:option>input-repeat=9999</vlc:option>
Example:
<track>
  <location>file:///C:/Users/Notebook/Desktop/17-LOOP.mp4</location>
  <duration>10048</duration>
  <extension application="http://www.videolan.org/vlc/playlist/0">
    <vlc:id>1</vlc:id>
    <vlc:option>input-repeat=9999</vlc:option>
  </extension>
</track>
By doing this, the moment this file is played in the playlist it will repeat 9999 times (you can change this if the file is too short) or until you press Next; then the playlist will continue.
Given a custom playlist
file:///data/video1.mp4,repeat
file:///data/video2.mp4,play-once
I completed the playlist script in the original question above by adding the repeat/play-once information to the track metadata.
playlist_item.meta = { ["Playback mode"] = playback_mode }
The final step was to create an extension (similar to the Song Tracker extension) that listens to the input_changed event and uses the "Playback mode" track metadata to toggle vlc.playlist.repeat_() accordingly.
function activate()
    update_playback_mode()
end

function input_changed()
    update_playback_mode()
end

function update_playback_mode()
    if vlc.input.is_playing() then
        local item = vlc.item or vlc.input.item()
        if item then
            local meta = item:metas()
            if meta then
                local repeat_track = meta["Playback mode"]
                if repeat_track == nil then
                    repeat_track = false
                elseif string.lower(repeat_track) == "repeat" then
                    repeat_track = true
                else
                    repeat_track = false
                end
                local player_mode = vlc.playlist.repeat_()
                -- toggle playlist.repeat_() as required
                if player_mode and not repeat_track then
                    vlc.playlist.repeat_()
                elseif not player_mode and repeat_track then
                    vlc.playlist.repeat_()
                end
                return true
            end
        end
    end
end

YouTube API Search - VEVO video in priority

I have a script that uses the YouTube API (v3) to find the video for a piece of music from the artist name and the track name.
This works; however, in some cases the first result (sorted by relevance) is not the official VEVO video.
I tried adding VEVO to my query (after the artist name and the track name), but when there is no VEVO video, the API returns no results.
Is it possible to prefer VEVO videos when they exist?
Thank you.
Vincent
var request = gapi.client.youtube.search.list({
    q: artiste + ' ' + track,
    part: 'snippet',
    order: 'relevance'
});
request.execute(function (response) {
    idVideo = response.result.items[0].id.videoId;
});
This is the part that selects the ID of a video based on the artist's name and the track name.
UPDATE: I don't think the syndicated-video suggestion I put below would work well, but I'll leave it there in case you want to explore it. What might work better (again not guaranteed, but it should be more accurate) is to simply sort by viewCount instead of relevance. Generally speaking, the VEVO videos have the most views.
Example: https://developers.google.com/apis-explorer/#p/youtube/v3/youtube.search.list?part=snippet&order=viewCount&q=nicki+minaj+anaconda&type=video&_h=3&
GET https://www.googleapis.com/youtube/v3/search?part=snippet&order=viewCount&q=nicki+minaj+anaconda&type=video&key={YOUR_API_KEY}
--
ORIGINAL ANSWER
I haven't been able to test it yet, and it won't necessarily restrict results to ONLY VEVO videos, but you can try the syndicated option: https://developers.google.com/youtube/v3/docs/search/list#videoSyndicated
The videoSyndicated parameter lets you restrict a search to only videos that can be played outside youtube.com. If you specify a value for this parameter, you must also set the type parameter's value to video.
Acceptable values are:
any – Return all videos, syndicated or not.
true – Only retrieve syndicated videos.
If that returns nothing, then do the same search without syndicated and use the first result from that.
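The two-step fallback just described (try a syndicated-only search, then repeat without the restriction) can be sketched like this. Here `search` is a stand-in parameter for the real gapi.client.youtube.search.list call, injected so the selection logic stands on its own:

```javascript
// Try a syndicated-only search first; if it comes back empty, fall back
// to an unrestricted search. `search(params)` stands in for the actual
// YouTube API call and is expected to return an array of result items.
function findVideoId(query, search) {
    var results = search({ q: query, type: 'video', videoSyndicated: 'true' });
    if (!results || results.length === 0) {
        results = search({ q: query, type: 'video' });
    }
    return (results && results.length > 0) ? results[0].id.videoId : null;
}
```

In the real script, `search` would issue the request and hand back response.result.items.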
It is actually pretty easy: just add 'VEVO' to your search query. This makes it very likely that a result from a VEVO channel comes first. It should look something like this:
var request = gapi.client.youtube.search.list({
    q: artiste + ' ' + track + ' VEVO',
    part: 'snippet',
    order: 'relevance'
});
If you want to make sure you are getting a VEVO video, the easiest thing to do is parse the channel title and check that it contains the word "VEVO". The code would then look something like this:
var request = gapi.client.youtube.search.list({
    q: artiste + ' ' + track + ' VEVO',
    part: 'snippet',
    order: 'relevance'
});
request.execute(function (response) {
    var channelTitle = response.result.items[0].snippet.channelTitle;
    var isVevo = channelTitle.match(/VEVO/g); // check whether this is VEVO content; we only want VEVO videos
    if (isVevo) { // true if "VEVO" is found in the channel title
        var youtubeVideoId = response.result.items[0].id.videoId; // the video ID
        // use youtubeVideoId here
    }
});

How to cache many images(in loop) properly using forge.file.cacheURL?

I have a list of products that I download as a JSON file from the server. Each item contains a link to the image stored on the server.
Now I want to be able to see the products when offline, so I store the downloaded JSON file in forge.prefs (http://docs.trigger.io/en/v1.3/modules/prefs.html) and pull it out to display the items on screen. It works nicely, but I also need to store the images locally so they are displayed correctly.
To achieve this I'm trying to use forge.file.cacheURL (http://docs.trigger.io/en/v1.3/features/cache.html), but I can't get the order of operations right. To cache the images I go through the JSON file and for each line call forge.file.cacheURL, storing the returned URL back in the JSON item. The problem is that forge.file.cacheURL runs asynchronously, so the loop that gathers the local images finishes, and my code continues to display the items, while forge.file.cacheURL is still caching the images in the background. I need to somehow detect that the last item has been cached and then refresh the view to use the correct image URLs... or something else that leads to what I need.
Hopefully you understand the concept. How should I handle this properly?
Since v1.4.26 you've been able to permanently store images (see http://docs.trigger.io/en/v1.4/release-notes.html#v1-4-26), rather than just cache them. Depending on your needs, that might be a better option than forge.file.cacheURL and forge.file.isFile.
I don't follow the situation you describe exactly, but something like this will let you wait for several asynchronous things to finish before doing something:
// e.g.
var jsonCache = {
    one: "http://example.com/one.jpg",
    two: "http://example.com/two.jpg",
    three: "http://example.com/three.jpg"
};
var cacheCount = 0;
for (var name in jsonCache) {
    if (jsonCache.hasOwnProperty(name)) {
        (function (name, imageURL) { // capture the key and URL per iteration so the callbacks don't all see the last key
            cacheCount += 1;
            forge.file.cacheURL(imageURL, function (file) {
                forge.prefs.set(name, file, function () {
                    cacheCount -= 1; // the callbacks run one at a time, so the counter is safe
                    if (cacheCount <= 0) {
                        alert('all cached');
                    }
                });
            });
        })(name, jsonCache[name]);
    }
}
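The same completion-counter idea can be written as a small generic helper, stripped of the forge-specific APIs so it runs anywhere (runAll is my name for it; in the real app, `task` would wrap forge.file.cacheURL):

```javascript
// Run an asynchronous task for every item and invoke `done` exactly once,
// after the last task has called back. `task(item, callback)` stands in
// for an asynchronous operation such as forge.file.cacheURL.
function runAll(items, task, done) {
    var pending = items.length;
    var results = {};
    if (pending === 0) {
        done(results);
        return;
    }
    items.forEach(function (item) {
        task(item, function (result) {
            results[item] = result;
            pending -= 1;
            if (pending === 0) {
                done(results); // the last callback finished: safe to refresh the view
            }
        });
    });
}
```

Here `done` would be the place to refresh the product view with the cached image URLs collected in `results`.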
