Extracting SRT/etc from original video to use when re-encoding to H.265 - vlc

I'm using VLC to play back my H.265 videos.
I'm using CyberLink PowerDirector to re-encode my original H.264 videos to H.265. This saves a bunch of space, and since I'm the only one viewing the content, media-platform compatibility isn't an issue.
Currently the re-encode comes out as expected, except the subtitles no longer exist. The software can attach subs via an SRT file or the like, but I first need to extract the existing subs into a text file, which the program doesn't allow (to my knowledge). VLC apparently does?

PowerDirector does indeed have a built-in function to extract subtitles from a video. It can be reached from the timeline by right-clicking the video and selecting "Extract Subtitles/English" (or whatever the language is).
This imports the existing data into a locally stored SRT file, opened directly in the program's built-in subtitle editor page.
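If VLC proves fiddly, ffmpeg is another common way to pull a text-based subtitle track out into an SRT file. This is a minimal sketch assuming the source actually carries a text subtitle stream (not burned-in or bitmap subs); the file names are placeholders:

```sh
# See which streams the file contains (look for "Subtitle" entries)
ffprobe input.mp4

# Extract the first subtitle stream (0:s:0) and convert it to SRT
ffmpeg -i input.mp4 -map 0:s:0 subtitles.srt
```

The resulting subtitles.srt can then be attached in PowerDirector when re-encoding to H.265.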

Related

How to parse an MPD manifest video file and get segments of an image adaptation set?

I am using an MPEG-DASH MPD file to stream video using videoJS.
I am trying to display a thumbnail of the video while the user drags the seek bar.
The adaptation set for images is present in the manifest file. Now I am trying to parse the MPD file and get the segments out of it. How can I achieve this using JavaScript?
I tried parsing the manifest with the mpd-parser plugin (https://www.npmjs.com/package/mpd-parser), but it only picks up segments for audio, video, subtitles and closed captions.
Is there a plugin which handles the same for the image adaptation set?
As I think you know, the images are in a separate adaptation set - from the DASH interoperability spec (https://dashif.org/docs/DASH-IF-IOP-v4.3.pdf):
For providing easily accessible thumbnails with timing, Adaptation Sets with the new @contentType="image" may be used in the MPD. A typical use case is for enhancing a scrub bar with visual cues. The actual asset referred to is a rectangular tile of temporally equidistant thumbnails combined into one jpeg or png image. A tile, therefore, is very similar to a video segment from an MPD timing point of view, but is typically much longer.
and
It is typically expected that the DASH client is able to process such Adaptation Sets by downloading the images and using browser-based processing to assign the thumbnails to the Media Presentation timeline.
It sounds like you want a tool or some code to allow you to view the thumbnails - some players provide this at a user level, e.g. see the TheoPlayer info here:
https://www.theoplayer.com/blog/in-stream-thumbnail-support-dvr-dash-streams
You can also leverage, and possibly reuse, the parsing that is already built into an open source player - see this discussion in the Shaka Player support issues, which covers the method to parse and retrieve thumbnails as well as the thumbnail format itself:
https://github.com/google/shaka-player/issues/3371#issuecomment-828819282
The above thread also contains some example code to extract the images.
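If you would rather not wait on plugin support, a workable fallback is to parse the image adaptation set out of the MPD by hand. Below is a minimal sketch assuming a SegmentTemplate-based thumbnail track as described in the IOP text above; the manifest layout and the ten-tile limit are assumptions for illustration, not part of any plugin API:

```javascript
// Fetch an MPD and return { url, startTime } entries for the thumbnail tiles
// in any AdaptationSet with contentType="image".
async function getThumbnailSegments(mpdUrl) {
  const text = await (await fetch(mpdUrl)).text();
  const mpd = new DOMParser().parseFromString(text, 'application/xml');

  const segments = [];
  for (const set of mpd.querySelectorAll('AdaptationSet[contentType="image"]')) {
    const rep = set.querySelector('Representation');
    const tpl = set.querySelector('SegmentTemplate');
    const media = tpl.getAttribute('media');            // e.g. "thumbs/tile_$Number$.jpg"
    const duration = Number(tpl.getAttribute('duration'));
    const timescale = Number(tpl.getAttribute('timescale') || 1);
    const start = Number(tpl.getAttribute('startNumber') || 1);

    // Expand $RepresentationID$/$Number$ for the first ten tiles as an example;
    // a real player would derive the count from the Period duration.
    for (let n = start; n < start + 10; n++) {
      segments.push({
        url: media
          .replace('$RepresentationID$', rep.getAttribute('id'))
          .replace('$Number$', String(n)),
        startTime: ((n - start) * duration) / timescale,
      });
    }
  }
  return segments;
}
```

Each returned URL points at a tile image that you would then crop into individual thumbnails (using the grid dimensions the stream advertises, e.g. via an EssentialProperty) before drawing them on the scrub bar.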

Prevent DASH video streaming from YouTube

I have a Roku app and some of the videos come from YouTube. I have no problem retrieving the videos, but if I select a video with HD it automatically streams the DASH version. I can prevent DASH if I force a non-HD version, but who wants to watch an SD version?
So I am wondering: is there any way to force the MP4 stream as opposed to a DASH stream?
I have read that XP does not play DASH, so I tried using Windows NT 5.1 as the user-agent, but that did not work.
Any help would be greatly appreciated.
DASH and MP4 are not mutually exclusive - they perform different functions in the video delivery.
In simple terms you can view it like this:
Camera captures frames - 'raw video'
The 'raw video' is encoded in some way to store it, generally in a way that balances video size against quality. The video is then sometimes referred to by the encoder used (the codec) - for example, if an H.264 codec is used the video may be called an H.264 video.
The video stream, i.e. all the individual frames that make up the video, is packaged into a container. This container may hold video and audio streams, and it may even hold multiple video streams. The video is then often referred to by the container format - for example, if our H.264-encoded video above is packaged into an MP4 container, it is often referred to as an MP4 video, even though the MP4 'container' may contain several video and audio tracks.
To improve the quality of video streaming, a video may also use a streaming protocol like MPEG-DASH. The theory here is simple: multiple copies of the video are created with different bit rates, and hence different sizes and qualities. Each of these copies is broken up into, for example, 10-second chunks. An index file is created, called a manifest, and a pointer to each video and audio stream is included. A client playing the video, for example a browser, requests each 10-second chunk as it needs it, and chooses which copy of the video to take the next chunk from depending on the current network conditions. This means that if the network is good it can switch to a higher quality copy for the next chunk, and if there is a problem it can switch down to a lower quality one. If we take our example video, encoded with H.264 and put into an MP4 container, we can now package it using the DASH streaming format. A video packaged like this is often referred to as a DASH video.
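To make that concrete, here is what a heavily trimmed, hypothetical DASH manifest might look like - one video adaptation set with three copies of the video at different bitrates, each addressed in numbered chunks (all names and numbers here are made up for illustration):

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
  <Period>
    <AdaptationSet contentType="video" mimeType="video/mp4">
      <!-- Each 10-second chunk is requested as video_<id>_<number>.m4s -->
      <SegmentTemplate initialization="init_$RepresentationID$.mp4"
                       media="video_$RepresentationID$_$Number$.m4s"
                       duration="10" startNumber="1"/>
      <Representation id="360p"  bandwidth="800000"  width="640"  height="360"/>
      <Representation id="720p"  bandwidth="2400000" width="1280" height="720"/>
      <Representation id="1080p" bandwidth="4800000" width="1920" height="1080"/>
    </AdaptationSet>
  </Period>
</MPD>
```

The player compares the bandwidth values against its measured network throughput and picks which Representation to fetch the next chunk from.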
The above is a simplified overview, but it hopefully highlights that your videos may actually be both MP4 and DASH, and in fact commonly are.
As an additional note, different devices may support different codecs (and even codec profiles), packaging formats and streaming formats - for example, iOS devices tend to support HLS rather than DASH at the time of writing. This changes frequently as devices and standards evolve, and is one of the reasons it can be tricky to find a single format that will play on all devices and clients. For this reason servers will often provide the same video in multiple codec and streaming formats to support as many devices and clients as possible.

HTML5 and MP4 vs. M2TS containers

Problem:
To get an iOS app that streams video accepted into the App Store, we need to have an HLS version.
What’s the problem?
Android does not support HLS well, and for other reasons, we need to store MP4 and HLS files of the same content.
What’s the difference between MP4 and HLS and why do you need to store both?
MP4 is a container that stores H.264 video and AAC audio for best compatibility in HTML5 browsers - JS video players often have a Flash fallback, which uses the same MP4 file but plays it through Flash if the browser does not support MP4 video in HTML5.
HLS is a protocol where text files (.m3u8) contain references to playlists, which themselves reference .ts files (or .m2ts), which are MPEG-2 transport streams - not to be confused with MPEG-2 video. The .ts files are containers for the same H.264 video and AAC audio.
Why am I complaining?
It takes time to create the HLS files and playlists from the MP4 files
(Most importantly) We are now storing twice as much data on S3
Why should I care? If your S3 bill is $10K per month for storing both MP4 and HLS, storing only one would make it $5K. Or put another way, if you are paying $100K for storing data in MP4, it would cost $200K to store the same content in both MP4 and HLS.
What do I want?
I want to store only the .ts files and serve desktop users, iOS users, and Android users all with that single file.
Is it possible?
Doesn’t HLS require 5-10 second .ts segments instead of one big file?
As of IETF draft 7, and version 4 of the protocol, HLS supports the tag EXT-X-BYTERANGE which allows you to specify a media segment as a byte range (subrange) of a larger URL.
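For illustration, a version 4 playlist using byte ranges might look like this (the segment sizes and file name are invented; EXT-X-BYTERANGE takes a length in bytes and, after the @, an offset into the file):

```
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
#EXT-X-BYTERANGE:1572864@0
video.ts
#EXTINF:10.0,
#EXT-X-BYTERANGE:1441792@1572864
video.ts
#EXT-X-ENDLIST
```

Every segment points at the same video.ts, so only one copy of the media needs to live on S3 for HLS clients.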
Are .ts files compatible with HTML5 video?
Apparently not. They are a different container than MP4, yet contain the same video and audio content. It is worth looking into how to store the raw video/audio data once and simply use the correct container when necessary. If JS video players can convert HTML5 MP4 files into Flash video on the fly when the browser does not support HTML5 MP4, then why not be able to do the same with M2TS data?
I might be ignorant on some level, but maybe someone can shed some light on this issue, and possibly present a solution.
There currently is no good solution.
A little background.
Video streaming used to require custom protocols such as RTP/RTMP/RTSP etc. These protocols work fine, except we were basically building two separate networks: one HTTP-based for standard web traffic, and the other one. The idea came along to split video into little chunks and serve them to the player over HTTP. This way we do not need special servers/software, and we can take advantage of the giant HTTP CDNs that were being built. In addition, because the video is split into chunks, we can encode the same video into different qualities/file sizes. Then the player can choose the best quality video for its current bandwidth. This was the perfect solution for mobile because of the constantly changing network conditions.

Several competing standards were developed. Move Networks was the first to market [citation needed]. The design was copied by Microsoft (Smooth Streaming) and Apple (HTTP Live Streaming, aka HLS). Microsoft is phasing out Smooth Streaming in favor of DASH. DASH looks like it will become the default streaming solution of the future, except, because of its design-by-committee approach, it has basically been stuck in committee for a few years. Now, in those few years, Apple sold millions of iOS devices, so HLS can not just be discontinued.

Why doesn't everyone just use HLS then? I can think of three reasons: 1) it's Apple's standard, and people are haters; 2) transport streams are a complicated file format; and 3) transport streams are patent encumbered. MP4 is not patent encumbered, but it also does not have the adaptive abilities. This made the user experience poor on 2G networks - the only network supported by the iPhone 1. Also, AT&T at the time did not want full-bitrate video streamed over their woefully inadequate cellular network. HLS was the compromise. All of this predates HTML5, so the video tag was not even considered in its design.
Addressing your points:
1) It takes time to create the HLS files and playlists from the MP4 files
This is a programming website. Automate it.
2) We are now storing twice as much data on S3
I want to store only the .ts files and serve desktop users, iOS users, and Android users all with that single file.
You and me both man :).
Possible solutions:
1) What is specifically wrong with Android's implementation? (except for lack of support in older devices)
2) JW Player can play HLS (not sure about on Android)
3) Server-side transmux on demand (see the ffmpeg sketch below).
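On point 3, the transmux itself is cheap because nothing is re-encoded; only the container changes. With ffmpeg it is a stream copy (file names are placeholders):

```sh
# Rewrap an MPEG-2 transport stream as MP4 without touching the codecs
ffmpeg -i input.ts -c copy output.mp4
```

A small service doing this on demand, fronted by a cache, would let you keep only the .ts files at rest.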
Doesn’t HLS require 5-10 second .ts segments instead of one big file?
You can do byte-ranges, but you need to make sure all devices you are interested in support it.
If JS video players can convert HTML5 MP4 files into Flash video on the fly if the browser does not support HTML5 MP4, then why not be able to do the same with M2TS data?
They don't convert. Flash natively supports MP4. It is possible to convert TS in AS3/JS - I have done it. JW Player can convert TS in the browser; video.js may be able to as well.

Multiple HTML5 media elements on one page in iOS (iPad)

My research has led me to learn that Apple's media element handler is a singleton, meaning I can't have a video playing while audio is playing in the background. I'm tasked with building a slideshow presentation framework, and the client wants a background audio track, timed audio voice-overs which match bullet points, and variable media which can be either an image or a video - or a timed cycle of multiple media elements.
Of course, none of the media works on iOS. Each media element cancels out the previous.
My initial thought is to embed the voice-over audio into the video when there's a video present, but there's an existing Flash version of this setup which depends on existing assets so I pretty much have to use what's delivered.
Is there ANY work-around for this? I'm testing on iOS 4.3.5. The smartest devs in the world are on this site - we've got to be able to come up with something.
EDIT: Updated my iPad to iOS 5.0.1 and the issue remains.
How about using CSS to do the trick?
Maybe you know about a company called Vdopia that distributes video ads on mobile.
http://mobile.vdopia.com/index.php?page=mobilewebsolutions
They claim to have developed what they call the vdo video format, which is actually just a CSS sprite animation :D
I mean, you could have your "video" as a frame-by-frame image sprite, then attach an HTML5 audio tag to it.
I would like to know your response.
Are you working on a Web App or on a Native Application?
If you are working on a Web App you're in a world of hurt. This is because you simply do not have much control over things that Mobile Safari doesn't provide right away.
If this is the case I would come forth and be honest with the stakeholders.
If you are working on a Native Application you can resort to a mechanism that involves some back and forth communication between UIWebView and ObjC. It's actually doable.
The idea is the following:
Insert special <object> elements in your HTML5 documents, handcrafted according to your needs, taking special care to follow the data-* naming convention for non-standard attributes.
Here you could insert IDs, paths and other control variables in the multimedia artifacts that you want to play.
Then you could build some JavaScript (on top of jQuery, e.g.) that communicates with ObjC through the delegation mechanism on the UIWebView or through HTTP. I'll go over this choice down below.
Say that on $(document).ready() you go through all the objects that have a special class - a class that you carefully choose to identify all the special <object> elements.
You build a list of such objects and pass them on to the ObjC part of your application. You could easily serialize such a list using JSON.
Then in ObjC you can do what you want with them: play them through AVPlayer or some other framework whenever you want them played (again, you would resort to a JS - ObjC bridge to actually signal the native part to play a particular element).
You can "communicate" with ObjC through the delegation pattern in UIWebView or through HTTP.
You would then have a JS - ObjC bridge in place; a sketch of the JS side follows below.
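As a rough illustration of the JS side under the delegation approach - the class name, data-* attributes and bridge:// scheme are all invented for the example, and the native side would intercept and cancel the navigation in the UIWebView delegate's shouldStartLoadWithRequest::

```javascript
// Collect the special <object> elements and hand their metadata to ObjC
// by navigating a hidden iframe to a custom-scheme URL that the
// UIWebViewDelegate intercepts.
$(document).ready(function () {
  var items = $('object.native-media').map(function () {
    return {
      id: this.getAttribute('data-id'),
      src: this.getAttribute('data-src')
    };
  }).get();

  // A hidden iframe navigation avoids interrupting the main page load.
  var frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = 'bridge://registerMedia#' +
              encodeURIComponent(JSON.stringify(items));
  document.body.appendChild(frame);
});
```

The native side parses the JSON out of the URL fragment and queues the referenced assets for AVPlayer.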
The HTTP approach makes sense in some cases but it involves a lot of extra code and is resource hungry.
If you are building an ObjC application and want further details on how to actually build an ObjC - JS bridge that fits these needs get back to us :)
I'm halting this post as of now because it would be nice to know if it is in fact a Native App.
Cheers.
This is currently not possible. As you have noticed, when a video plays it takes over the full screen with QuickTime and moves the browser to the background. The only solution at this time is to merge the audio and video together into a single MP4 and play that one item.
If I understand you correctly, you are not able to merge the audio and video together because the setup relies on Flash? Since iOS can't play Flash, you should merge the audio and video together and use Flash as a backup. There are numerous HTML5 players which use JavaScript to try to play the HTML5 video first, then fall back to Flash.
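If you do end up merging the tracks yourself, it is a one-liner in ffmpeg, assuming the voice-over already exists as a separate audio file (the file names are placeholders):

```sh
# Mux the video with the voice-over into one MP4: video is stream-copied,
# audio is encoded to AAC, which iOS plays natively
ffmpeg -i slides.mp4 -i voiceover.mp3 -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac merged.mp4
```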
You mention there is an existing Flash setup of the video - is it an SWF file? Could you import it into video/audio editing software and add an audio track on top?
Something like this: http://www.youtube.com/watch?v=J2vvH7oi8m8&feature=youtube_gdata_player
Also, if it is a Flash file, will you be converting it to an AVI or the like for iOS? If you have to do that anyway, there is your chance to add an audio track.
Could you use a web service to merge the streams in real time with FFmpeg and then stream the single output to QuickTime?
To elaborate, maybe a library like http://directshownet.sourceforge.net/about.html could also work. It looks like they have a method:
DESCombine – A class library that uses DirectShow Editing Services to combine video and audio files (or pieces of files) into a single output file. A help file (DESCombine.chm) is provided for using the class.
This could then be used to return the resulting data as the response to the call and load it via the HTML5 player.

Delphi videos in any other format than flash?

Everyone knows about the instructional videos at http://delphi.wikia.com/wiki/Delphi_Videos, but I want to watch them on my iPad when I go on vacation.
The problem is the videos are in SWF and will not play on my iPad. Does anyone know of another source for these videos in another format?
Thanks.
For the moment, Flash video is the container of choice for most video content on the internet. Saif is right: if you want those specific videos you'll need to convert them yourself. There are several decent FLV-to-MP4 converters available for free (Miro comes to mind). SWF takes a bit more work to extract the video content.
Now, if you are looking for Delphi content that's already available in MPEG-4, you can try http://edn.embarcadero.com/tv. The content from the most recent CodeRage event is available as an MP4 download.
You can convert them into a proper format using a free video converter like Any Video Converter.
I've had reasonable success watching online Flash content on my iPad using iOSFlashVideo.
Not tried it on offline Flash files though.
--jeroen
