Divide YouTube Video into different hyperlinks? - youtube

I have a YouTube video (which was NOT uploaded by me) that is 50 minutes long.
The video covers different topics, and each topic starts at a different time, for example:
content_x starts at 0 minutes : 0 seconds
content_y starts at 10 minutes : 0 seconds
...etc.
Now I would like to turn these sections into hyperlinks, so that when I want to watch a particular section I can just click the link for its time (minutes & seconds).
I would prefer to do that in the description of the YouTube video or in the "About" section. Can you guide me on how to do that, or suggest a simpler way to jump to the different parts of a YouTube video in a dynamic and descriptive way?

You can append the time you want to the end of the YouTube link, e.g.
http://www.youtube.com/watch?v=XXX#t=31m08s
where 31m08s means 31 minutes 8 seconds.
You can make similar links for the rest of the sections you want.
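If you have many sections, a small helper can build these links for you. A minimal sketch (timedLink and its videoId/minutes/seconds arguments are just illustrative names):
function timedLink(videoId, minutes, seconds) {
    // videoId is the value after "v=" in the normal watch URL
    var sec = String(seconds).padStart(2, '0'); // pad so 8 becomes "08"
    return 'http://www.youtube.com/watch?v=' + videoId + '#t=' + minutes + 'm' + sec + 's';
}
// timedLink('XXX', 31, 8) -> 'http://www.youtube.com/watch?v=XXX#t=31m08s'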

Check this site: www.youtubetime.com; it will generate a YouTube link with a specific starting time. Alternatively, you can just write the times, separated by spaces, in the video's description, e.g.
very long description 0:00 part 1 1:00 part 2
etc., or you can write a comment with these time links and use it as an "index".
Hope that answers your question.

Right-click the video and select "Copy video URL at current time", then paste it anywhere.

Related

YouTube Data API search limit

I'm using the YouTube Data API v3 and I tried to find videos published within a certain period, as in the request below:
part : snippet
regionCode : KR
relevanceLanguage : KO
publishedAfter: 2020-01-01T00:00:00Z
publishedBefore: 2020-01-02T00:00:00Z
type : video
order : date
https://developers.google.com/youtube/v3/docs/search/list?apix_params=%7B%22part%22%3A%22snippet%22%2C%22maxResults%22%3A50%2C%22order%22%3A%22title%22%2C%22publishedAfter%22%3A%222020-01-01T00%3A00%3A00Z%22%2C%22publishedBefore%22%3A%222020-01-01T02%3A00%3A00Z%22%2C%22regionCode%22%3A%22KR%22%2C%22relevanceLanguage%22%3A%22KO%22%2C%22type%22%3A%22video%22%2C%22videoCategoryId%22%3A%2215%22%7D
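For reference, roughly the same request written as a plain HTTP call (API_KEY stands in for a real key; the other values are the ones listed above):
var params = new URLSearchParams({
    part: 'snippet', maxResults: '50', order: 'date', type: 'video',
    regionCode: 'KR', relevanceLanguage: 'KO',
    publishedAfter: '2020-01-01T00:00:00Z',
    publishedBefore: '2020-01-02T00:00:00Z',
    key: 'API_KEY'
});
fetch('https://www.googleapis.com/youtube/v3/search?' + params)
    .then(function (res) { return res.json(); })
    .then(function (data) {
        // total matches reported vs. items actually returned on this page
        console.log(data.pageInfo.totalResults, data.items.length);
    });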
But the result seems to be limited to 500. So I tried to see when the 500th video was published.
If I note that time, I can put it into publishedAfter and extract another 500.
But oddly enough, the 500th video's publish time was already almost at "2020-01-01T00:00:00Z", the publishedAfter value:
first video's publish time: 2020-01-01T23:53:57.000Z
500th (last) video's publish time: 2020-01-01T00:00:02.000Z
The total result count is more than 6000, yet the first 500 videos already span the whole range between publishedAfter and publishedBefore. So where did the remaining 5500+ videos go?

How to determine the mean duration of YouTube videos for specific search term?

I know YouTube is very closed and doesn't publish any detailed statistics, but I have a specific research interest in finding out the length of arbitrary how-to videos.
When I search for that term I get a few million results. Would it be possible to determine the playback duration for a portion of the search results? Since usage of the YouTube API is limited, one could grab a few videos per day, maybe with multiple API keys.
Besides using the API, there might be powerful scrapers I could use.
JS browser utility
I'd recommend a simple JS utility to use in the browser dev tools. Read here how to use it for counting; I've modified it to sum video durations.
The JavaScript code
Open a YouTube search results page and open your browser's dev tools (F12 on PCs; Preferences -> Advanced -> Show Develop menu on a Mac). Once they are open, go to the Console tab and enter the following code:
function domCounter(selector){
    // Sum the durations shown on the current search results page.
    // Assumes the duration badges are formatted as "minutes:seconds".
    var badges = document.querySelectorAll(selector);
    var min = 0, sec = 0;
    for (var i = 0; i < badges.length; i++) {
        var time = badges[i].innerHTML.trim().split(':');
        min += parseInt(time[0], 10);
        sec += parseInt(time[1], 10);
    }
    // total length in minutes, rounded
    return min + Math.round(sec / 60);
}
How to call it
In the browser console, pass the CSS selector of the element that holds the duration, e.g.:
domCounter('span.video-time')
Disclaimer
This utility only works on a single page of search results, though. You might improve it to traverse pagination.
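If you want the mean rather than the total, a small variation of the same utility could look like this (same assumption that the selector matches mm:ss duration badges):
function domMeanMinutes(selector) {
    // Average duration, in minutes, of the videos on the current results page
    var badges = document.querySelectorAll(selector);
    var totalSec = 0;
    for (var i = 0; i < badges.length; i++) {
        var parts = badges[i].innerHTML.trim().split(':');
        totalSec += parseInt(parts[0], 10) * 60 + parseInt(parts[1], 10);
    }
    return badges.length ? (totalSec / badges.length) / 60 : 0;
}
// domMeanMinutes('span.video-time')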
You won't be able to get the duration of videos returned from the search endpoint without looking up the duration of each one.
The search endpoint does, however, provide a videoDuration parameter you can pass in your request to only return videos in a specific duration range:
The videoDuration parameter filters video search results based on
their duration. If you specify a value for this parameter, you must
also set the type parameter's value to video.
Acceptable values are:
any – Do not filter video search results based on their duration. This is the default value.
long – Only include videos longer than 20 minutes.
medium – Only include videos that are between four and 20 minutes long (inclusive).
short – Only include videos that are less than four minutes long.
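A rough sketch of how the two pieces fit together: filter the search with videoDuration, then ask the videos endpoint for contentDetails to get each result's actual duration (API_KEY is a placeholder and 'how to' is just an example query):
var key = 'API_KEY'; // placeholder
// 1) Search, restricted here to videos longer than 20 minutes
fetch('https://www.googleapis.com/youtube/v3/search?' + new URLSearchParams({
    part: 'snippet', q: 'how to', type: 'video', videoDuration: 'long',
    maxResults: '50', key: key
}))
    .then(function (res) { return res.json(); })
    .then(function (search) {
        var ids = search.items.map(function (item) { return item.id.videoId; }).join(',');
        // 2) Look up the actual duration of each returned video
        return fetch('https://www.googleapis.com/youtube/v3/videos?' + new URLSearchParams({
            part: 'contentDetails', id: ids, key: key
        }));
    })
    .then(function (res) { return res.json(); })
    .then(function (videos) {
        videos.items.forEach(function (v) {
            // contentDetails.duration is ISO 8601, e.g. "PT25M3S"
            console.log(v.id, v.contentDetails.duration);
        });
    });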

How to determine maximum tag length with v3 api?

We allow updating video tags in our application. We've been having trouble determining how to correctly calculate the allowed length of tags. Does anyone have any information on how to calculate that correctly? For example, how do spaces between tag words count toward the limit, for something like "funny video"?
Thanks!

ios endless video recording

I'm trying to develop an iPhone app that will use the camera to record only the last few minutes/seconds.
For example, you record some footage for 5 minutes, click "save", and only the last 30 s are saved. I don't want to actually record five minutes and then chop off the last 30 s (that won't work for me). This idea is called "loop recording".
This results in endless video recording, where only the last part is kept.
The Precorder app does what I want to do (I want to use this feature in another context).
I think this could easily be simulated with a circular buffer.
I started a project with AVFoundation. It would be awesome if I could somehow redirect the video data into a circular buffer (which I will implement). I found information only on how to write it to a file.
I know I can chop the video into intervals and save them, but saving one and restarting the camera to record another part takes time, and it is possible to lose some important moments in the movie.
Any clues on how to redirect the data from the camera would be appreciated.
Important! As of iOS 8 you can use VTCompressionSession and have direct access to the NAL units instead of having to dig through the container.
Well luckily you can do this and I'll tell you how, but you're going to have to get your hands dirty with either the MP4 or MOV container. A helpful resource for this (though, more MOV-specific) is Apple's Quicktime File Format Introduction manual
http://developer.apple.com/library/mac/#documentation/QuickTime/QTFF/QTFFPreface/qtffPreface.html#//apple_ref/doc/uid/TP40000939-CH202-TPXREF101
First things first: you're not going to be able to start your saved movie from an arbitrary point 30 seconds before the end of the recording; you'll have to use some I-frame at approximately 30 seconds. Depending on what your keyframe interval is, it may be several seconds before or after that 30-second mark. You could use all I-frames and start from an arbitrary point, but then you'll probably want to re-encode the video afterward because it will be quite large.
So, knowing that, let's move on.
First step is when you set up your AVAssetWriter, you will want to set its AVAssetWriterInput's expectsMediaDataInRealTime property to YES.
In the captureOutput callback you'll be able to do an fread from the file you are writing to. The first fread will get you a little bit of MP4/MOV (whatever format you're using) header (i.e. 'ftyp' atom, 'wide' atom, and the beginning of the 'mdat' atom). You want what's inside the 'mdat' section. So the offset you'll start saving data from will be 36 or so.
Each read will get you 0 or more AVC NAL Units. You can find a listing of NAL unit types from ISO/IEC 14496-10 Table 7-1. They will be in a slightly different format than specified in Annex B, but it's fine. Additionally, there will only be IDR slices and non-IDR slices in the MP4/MOV file. IDR will be the I-Frame you're looking to hang onto.
The NAL unit format in the MP4/MOV container is as follows:
4 bytes - Size
[Size] bytes - NALU Data
data[0] & 0x1F - NALU Type
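A minimal sketch of walking that layout, assuming buf already holds complete NAL units read out of the 'mdat' section (shown as a Node-style Buffer purely for illustration):
function listNalUnits(buf) {
    var units = [];
    var offset = 0;
    while (offset + 4 <= buf.length) {
        var size = buf.readUInt32BE(offset);                  // 4 bytes - Size (big-endian)
        var data = buf.slice(offset + 4, offset + 4 + size);  // [Size] bytes - NALU Data
        var type = data[0] & 0x1F;                            // NALU Type; 5 = IDR slice, 1 = non-IDR slice
        units.push({ offset: offset, size: size, type: type });
        offset += 4 + size;
    }
    return units;
}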
So now you have the data you're looking for. When you go to save this file, you'll have to update the MP4/MOV container with the correct length and sample count; you'll have to update the 'stsz' atom with the correct sizes for each sample, and do things like updating the media headers and track headers with the correct duration of the movie, and so on. What I would probably recommend is creating a sample container on first run that you can more or less just overwrite/augment with the appropriate data for that particular movie. You'll want to do this because the encoders on the various iDevices don't all have the same settings, and the 'avcC' atom contains encoder information.
You don't really need to know much about the AVC stream in this case, so you'll probably want to concentrate your experimenting around updating the container format you choose correctly. Good luck.

How can I synchronize a text with audio/sound in XNA/XACT?

I want to display text while a sound is playing in the background. In short, if there is audio for "What is this", I want to display the text "What is this" in a text box synchronously. Is this possible with XNA/XACT, and can I use it in standard C#-based WPF or Silverlight applications?
I appreciate your help.
I'm not sure if XNA has any built-in support for this, but you could set up a second meta file that holds time and action information. For example, mark a time for each word or phrase spoken in the file and write out the text at the appropriate time.
The same way as when making subtitles for movies, for example:
00:02 Who are you?
00:05 An angel.
00:07 What's your name?
Compare the playback time against this list, and show each message in the text box for some duration.
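The timing logic itself is simple; a rough sketch of the idea (the cues list mirrors the example above, and playbackSeconds / showText are assumed hooks into whatever plays the audio and renders the text):
// Cue list parsed from the meta file above: [seconds, text]
var cues = [[2, 'Who are you?'], [5, 'An angel.'], [7, "What's your name?"]];
var next = 0;

// Call once per frame/tick with the current playback position in seconds
function updateCaption(playbackSeconds, showText) {
    while (next < cues.length && cues[next][0] <= playbackSeconds) {
        showText(cues[next][1]); // e.g. set the text box contents
        next++;
    }
}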
