Is there an efficient way or a program to get all the segments from a YouTube video?
I was thinking about using youtube-dl to download the manifest file, but from what I have read it has no exact data on the segments.
I also tried googling it, but I constantly come across how to play certain segments or how to insert/delete them, which is not my interest. What I want is, at a certain point, a list of all the segments with start and end times.
I am using an MPEG-DASH MPD file to stream video using Video.js.
I am trying to display thumbnails of the video while the user drags the seek bar.
The adaptation set for images is present in the manifest file. Now I am trying to parse the MPD file and get the segments out of it. How can I achieve this using JavaScript?
I tried parsing the manifest file with the https://www.npmjs.com/package/mpd-parser plugin, but it only picks up segments for audio, video, subtitles and closed captions.
Is there a plugin which handles the same for the image adaptation set?
As I think you know, the images are in a separate adaptation set - from the DASH interoperability spec (https://dashif.org/docs/DASH-IF-IOP-v4.3.pdf):
For providing easily accessible thumbnails with timing, Adaptation Sets with the new @contentType="image" may be used in the MPD. A typical use case is for enhancing a scrub bar with visual cues. The actual asset referred to is a rectangular tile of temporally equidistant thumbnails combined into one jpeg or png image. A tile, therefore, is very similar to a video segment from an MPD timing point of view, but is typically much longer.
and
It is typically expected that the DASH client is able to process such Adaptation Sets by downloading the images and using browser-based processing to assign the thumbnails to the Media Presentation timeline.
It sounds like you want a tool or some code that lets you view the thumbnails - some players provide this at the user level, e.g. see the TheoPlayer info here:
https://www.theoplayer.com/blog/in-stream-thumbnail-support-dvr-dash-streams
You can also leverage, and possibly reuse, the parsing that is already built into an open source player - see this discussion in the Shaka Player support issues, which covers both the method to parse and retrieve thumbnails and the thumbnail format itself:
https://github.com/google/shaka-player/issues/3371#issuecomment-828819282
The above thread also contains some example code to extract the images.
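If you would rather not pull in a whole player, note that the MPD is plain XML, so the image adaptation set can be extracted with a few lines of DOM code. Below is a minimal sketch, assuming a simple MPD that addresses tiles via SegmentTemplate with $Number$ substitution; the tile count is hard-coded as an assumption, where real code would derive it from the MPD's mediaPresentationDuration:

```typescript
// Sketch: list the thumbnail tile URLs described by an MPD's image
// adaptation set. Assumes browser APIs (fetch, DOMParser) and a simple
// MPD using SegmentTemplate/$Number$ addressing.
async function listThumbnailTiles(mpdUrl: string): Promise<string[]> {
  const text = await (await fetch(mpdUrl)).text();
  const mpd = new DOMParser().parseFromString(text, "application/xml");
  const urls: string[] = [];

  for (const set of Array.from(mpd.querySelectorAll("AdaptationSet"))) {
    if (set.getAttribute("contentType") !== "image") continue;

    const tmpl = set.querySelector("SegmentTemplate");
    const rep = set.querySelector("Representation");
    if (!tmpl || !rep) continue;

    const media = tmpl.getAttribute("media") ?? "";
    const startNumber = Number(tmpl.getAttribute("startNumber") ?? "1");
    // Assumption: derive this from MPD@mediaPresentationDuration and
    // SegmentTemplate@duration/@timescale in a real implementation.
    const tileCount = 10;

    for (let n = startNumber; n < startNumber + tileCount; n++) {
      urls.push(
        media
          .replace("$RepresentationID$", rep.getAttribute("id") ?? "")
          .replace("$Number$", String(n))
      );
    }
  }
  return urls;
}
```

Each returned tile is itself a grid of thumbnails; the grid dimensions come from the EssentialProperty with the thumbnail_tile scheme on the representation, as described in the Shaka issue linked above.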
I am curious to know what would be the most efficient way to crawl the YouTube website. My goal is to eventually index all videos on YouTube (hypothetically), and the only way I can think of is to go channel by channel, indexing all of the videos. I am not very familiar with the v3 API, so if there is a better way to accomplish this, please let me know. This approach gives rise to a few problems I can think of:
Where to begin? Channels and videos are accessed using random string IDs, so if I simply start with IDs beginning with 'A', I am going to run into a lot of null values. I am not sure how IDs are assigned, but if assignment is based on the ID alphanumerics, this may also keep the indexing stuck in a certain segment/section of video types.
I am hoping to move methodically through the YouTube directory while trying to avoid accidentally indexing the same channel/video twice.
Should I somehow separate the videos into groups and request them based on other parameters? A grouped scheme may be easier to work with, update, etc.
I won't know whether a video has anything I am interested in indexing before accessing it.
First, you need to understand that there are far too many videos for you to do this without direct access to the stack, which you do not have and will not get.
As for automating the selection of videos, you can try to use the video IDs.
They are 11 characters long, drawn from a-z, A-Z, 0-9, _ and - (an alphabet of 64 characters). That narrows the indexing/scanning for whether a video exists, though the space is still 64 to the power of 11. Then save each ID that resolves (with its related info) and move on.
Not a perfect option, but the best I can see given your constraints.
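If you do go the brute-force route, the existence check itself can at least be done cheaply through the Data API rather than by loading watch pages. A rough sketch, assuming a Data API v3 key (placeholder); the videos.list endpoint simply returns an empty items array for IDs that don't exist:

```typescript
// Sketch: generate candidate 11-character IDs and ask the YouTube Data
// API v3 whether they exist. apiKey is a placeholder you must supply.
const ALPHABET =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

function randomVideoId(): string {
  let id = "";
  for (let i = 0; i < 11; i++) {
    id += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return id;
}

async function videoExists(id: string, apiKey: string): Promise<boolean> {
  const url =
    `https://www.googleapis.com/youtube/v3/videos?part=id&id=${id}&key=${apiKey}`;
  const body = await (await fetch(url)).json();
  // Nonexistent IDs are not an error; they just come back with no items.
  return Array.isArray(body.items) && body.items.length > 0;
}
```

videos.list accepts up to 50 comma-separated IDs per call, so batching candidates cuts the quota cost by 50x. Even so, 64^11 is on the order of 10^19 IDs, which is why the answer above says this can never be exhaustive.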
Is it allowed to save metadata like video lengths, keywords and specific event information at certain points in time in the video?
For example, there is a movie trailer and I want to save the point in the video where a certain event occurs in my own database.
If the information is being used in conjunction with the YouTube API/player, then there is no problem, as you are just creating shortcuts to an event.
If you are not sure, you can read the YouTube Terms of Service for an in-depth answer.
I would like to encode a date/time stamp in each frame of a video in a way that can be easily read back by a computer. On my system the frame rate is variable, so counting frames does not seem like a good solution. I already display the date and time in human-readable form (text) on the frame, but reading that back into the computer doesn't appear to be as trivial as I would like. The recorded videos are large (tens of GB) and long, so writing a separate text file also seems troublesome, besides being one more file to keep track of. Is there a way to store frame-by-frame information in a video?
There are several ways you can do this.
If your compression is not very strong, you may be able to encode the timestamp in the top or bottom row of your image. These rows usually do not carry much valuable information. You can add some form of error detection (e.g. a CRC) to catch any corruption done by the compressor.
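As a concrete illustration of this first approach (a sketch only; the block width and CRC polynomial are arbitrary choices of mine): write each bit of a 48-bit millisecond timestamp, plus a CRC-8, into the top row as 8-pixel black or white blocks, wide enough to survive moderate compression.

```typescript
// Sketch: stamp a 48-bit millisecond timestamp + CRC-8 into the top row
// of an RGBA frame (canvas-style ImageData). One bit = one 8px block.
const BLOCK = 8; // pixels per bit; widen this if compression is aggressive

function crc8(bytes: number[]): number {
  let crc = 0;
  for (const b of bytes) {
    crc ^= b;
    for (let i = 0; i < 8; i++) {
      crc = crc & 0x80 ? ((crc << 1) ^ 0x07) & 0xff : (crc << 1) & 0xff;
    }
  }
  return crc;
}

function stampFrame(frame: ImageData, timestampMs: number): void {
  // Split the timestamp into 6 bytes. JS bitwise shifts truncate to
  // 32 bits, so use division for the high bytes, then append the CRC.
  const bytes: number[] = [];
  for (let i = 5; i >= 0; i--) {
    bytes.push(Math.floor(timestampMs / 256 ** i) & 0xff);
  }
  bytes.push(crc8(bytes.slice(0, 6)));

  bytes.forEach((byte, byteIdx) => {
    for (let bit = 0; bit < 8; bit++) {
      const value = byte & (0x80 >> bit) ? 255 : 0; // 1 = white, 0 = black
      const x0 = (byteIdx * 8 + bit) * BLOCK;
      for (let x = x0; x < x0 + BLOCK && x < frame.width; x++) {
        const p = x * 4; // row 0, RGBA layout
        frame.data[p] = frame.data[p + 1] = frame.data[p + 2] = value;
        frame.data[p + 3] = 255;
      }
    }
  });
}
```

Reading it back is the mirror image: sample the centre pixel of each block, threshold against 128, reassemble the bytes, and verify the CRC before trusting the value.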
A more general solution (which I have used in the past) is to have the video file, e.g. an AVI, contain a separate text stream. Besides AVI, most formats support multiple streams, since these are used for stereo audio tracks, subtitles, etc. The drawback here is that there aren't many tools that let you write these streams, and you'll have to implement it yourself (using the relevant APIs) for each video format you want to support. In a way this is similar to keeping a text file next to your video, except that the file content is multiplexed inside the same video file as a separate stream.
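If implementing a container writer yourself is more than you want, one pragmatic variant of this idea (my assumption, not something the answer above prescribes) is to let ffmpeg do the multiplexing: write the per-frame timestamps out as a subtitle track, one cue per frame, and mux it into the same file. File names below are placeholders, and MKV is used because it accepts text streams readily, where AVI does not:

```typescript
// Sketch: mux a per-frame timestamp track (stamps.srt) into the same
// container as the video, without re-encoding, by shelling out to ffmpeg.
import { execFile } from "node:child_process";

execFile(
  "ffmpeg",
  [
    "-i", "recording.avi", // original video (placeholder name)
    "-i", "stamps.srt",    // one subtitle cue per frame, holding its timestamp
    "-map", "0",
    "-map", "1",
    "-c", "copy",          // copy video/audio streams untouched
    "-c:s", "srt",         // store the text track as SRT inside the MKV
    "out.mkv",
  ],
  (err) => {
    if (err) throw err;
    console.log("timestamp track muxed into out.mkv");
  }
);
```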
After finally finding a way to concatenate multiple voice files into one single audio file on the iPhone, I am now trying to superimpose an audio file over the length of the voice file.
So basically I have two .m4a files:
voice.m4a, which is about 10 seconds long, for example.
music.m4a, which is about 5 seconds long.
What I require is that the two files be combined in such a manner that the resulting single audio file contains the music playing in the background of the voice for its full length; so the output should have the 10 seconds of voice with the 5 seconds of music repeated twice underneath it. It is absolutely important to end up with a single file that contains all of this.
I am trying to get all of this done in an application on the iPhone.
Can anyone please help me out with this?
If you are looking to do this programmatically, you will need to go deeper down into Core Audio. For a simpler solution you could use Audio Queues, or, for more fine-grained control, Audio Units and an AUGraph. The MultiChannelMixer is the Audio Unit you are looking for. Unfortunately there is no space for an elaborate tutorial here (it would take a couple of days just to write the tutorial itself), but I hope I can point you in the right direction.
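To make the goal concrete before you dive into the Core Audio docs, here is the arithmetic the mixer ends up performing, sketched on decoded mono PCM buffers. This is plain TypeScript for illustration, not iOS code, and the gain value is an arbitrary assumption: the voice is summed sample by sample with the looped, gain-reduced music, and the result is clamped to avoid clipping.

```typescript
// Sketch: superimpose a looped background track under a voice track.
// Both inputs are assumed to be decoded mono PCM at the same sample rate.
function mixWithLoopedBackground(
  voice: Float32Array,
  music: Float32Array,
  musicGain = 0.4 // assumption: duck the music so the voice stays intelligible
): Float32Array {
  const out = new Float32Array(voice.length);
  for (let i = 0; i < voice.length; i++) {
    // i % music.length loops the 5 s music under the 10 s voice.
    const mixed = voice[i] + music[i % music.length] * musicGain;
    out[i] = Math.max(-1, Math.min(1, mixed)); // hard clamp to [-1, 1]
  }
  return out;
}
```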
If you decide to go down that path and want to do more audio programming than this one simple example, then I strongly suggest you buy "Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS" by Chris Adamson and Kevin Avila. You can find it on Amazon, in paperback or Kindle.