iOS: Choosing specific Closed Captions (EIA-608)

There is a lot of mystery around CC608 (EIA-608) closed caption usage under iOS.
Apple's "Using HTTP Live Streaming" documentation suggests declaring them in the manifest like this:
#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="cc",NAME="CC1",LANGUAGE="en",DEFAULT=YES,AUTOSELECT=YES,INSTREAM-ID="CC1"
#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="cc",NAME="CC2",LANGUAGE="sp",AUTOSELECT=YES,INSTREAM-ID="CC2"
#EXT-X-STREAM-INF:BANDWIDTH=1000000,SUBTITLES="subs",CLOSED-CAPTIONS="cc"
x.m3u8
But Apple's official sample stream does include CC608 captions embedded in the MPEG stream, and yet they are not listed in its manifest!
On that sample stream I can turn CC608 on using closedCaptionDisplayEnabled = YES, but that property does not allow selecting a specific language.
On Apple's developer forum I found a question with a promising answer:
Are you still calling "player?.closedCaptionDisplayEnabled=true"?
There's no need to do that. If you author your HLS playlist properly
with the appropriate language tags, the user can enable captions in
the language of their choice, or disable them completely as well.
I have failed to find an API in iOS that will allow me to:
Read the list of available CC608 streams
Activate CC for a specific language
Would appreciate your help with this!
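For concreteness, here is roughly the kind of API usage I was hoping for, sketched with AVFoundation's media selection API. I am assuming (and this is exactly what I am unsure about) that it only surfaces caption tracks declared in the manifest; the URL below is just a placeholder:

import AVFoundation

// Hypothetical master playlist URL, for illustration only.
let player = AVPlayer(url: URL(string: "https://example.com/master.m3u8")!)

// In real code, wait until the player item is .readyToPlay before querying.
if let item = player.currentItem,
   let group = item.asset.mediaSelectionGroup(forMediaCharacteristic: .legible) {

    // 1) Read the list of available CC608 (closed caption) tracks.
    let ccOptions = group.options.filter { $0.mediaType == .closedCaption }
    for option in ccOptions {
        print(option.displayName, option.extendedLanguageTag ?? "no language tag")
    }

    // 2) Activate CC for a specific language, e.g. Spanish.
    if let spanish = AVMediaSelectionGroup.mediaSelectionOptions(
            from: ccOptions, with: Locale(identifier: "es")).first {
        item.select(spanish, in: group)
    }
}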

Related

Swift: is there any way to download a language pack to provide an offline translation feature?

I'm trying to build a feature like the Google Translate app, where the user can download multiple languages and see translations in them.
More specifically, I need to implement offline language translation: a user writes some text and wants to translate it into some other language (Spanish or German) without internet access.
Is there any way to do that? I'm not able to find anything about this. Please point me in the right direction if you know about it.
Thanks.
I have not come across any other solution that provides this functionality. As with Google Translate, you do need to download the required language pack once; after that you can use it offline. Language packs can also be large, so you definitely cannot keep all of them bundled in your application at once.
If this matches your requirement, you can check out Google's ML Kit Translation for iOS. It is pretty neat, and so is the documentation.
https://developers.google.com/ml-kit/language/translation/ios
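A minimal sketch of the on-device flow with the MLKitTranslate pod (the language pair and download conditions here are just examples):

import MLKitTranslate

// Configure an English -> Spanish translator; pick the pair you need.
let options = TranslatorOptions(sourceLanguage: .english, targetLanguage: .spanish)
let translator = Translator.translator(options: options)

// Download the language pack once; after that, translation works offline.
let conditions = ModelDownloadConditions(allowsCellularAccess: false,
                                         allowsBackgroundDownloading: true)
translator.downloadModelIfNeeded(with: conditions) { error in
    guard error == nil else {
        print("Model download failed: \(error!)")
        return
    }
    // Translation now runs entirely on-device.
    translator.translate("Hello, world") { translatedText, error in
        if let translatedText = translatedText {
            print(translatedText)
        }
    }
}

ML Kit also provides a model manager for listing and deleting downloaded models, which helps since you should not keep every language pack on the device.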

Does the API provide the language code of the transcription?

I'm planning to transcribe speech where the language is unknown, so I am trying to detect the spoken language automatically by supplying multiple language codes. However, I can't seem to find an option that actually tells me which language the transcription is in.
I've looked through the documentation for the Speech-to-Text API, but I can't seem to find a way to output the language code of the transcribed text.
Can anyone help me with this?
Thank you.
In general, the language code is returned with the results. For example, see the sample code here, which shows how to retrieve the language code from the results.
However, see the issue mentioned here: the language code does not always get returned when multiple languages are specified. As reported in the comments, this is a problem in the Google Speech API itself, and it has been reported here.
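To show where the detected language actually surfaces, here is a minimal sketch that decodes a truncated, hypothetical recognize response (v1p1beta1 REST shape) and reads the languageCode field on each result:

import Foundation

// Only the fields relevant to language detection are modelled here.
struct RecognizeResponse: Decodable {
    struct Result: Decodable {
        struct Alternative: Decodable {
            let transcript: String
        }
        let alternatives: [Alternative]
        let languageCode: String?   // detected language, e.g. "es-es"
    }
    let results: [Result]
}

// Hypothetical response body, truncated for illustration.
let json = """
{ "results": [ { "alternatives": [ { "transcript": "hola mundo" } ],
                 "languageCode": "es-es" } ] }
""".data(using: .utf8)!

let response = try! JSONDecoder().decode(RecognizeResponse.self, from: json)
for result in response.results {
    // languageCode may be missing when multiple languages are specified (the issue above).
    print(result.languageCode ?? "no language code returned",
          result.alternatives.first?.transcript ?? "")
}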

Interactive transcript for a YouTube video

I am working on a YouTube video site and would like to implement an interactive transcript feature like this one: http://demo.jwplayer.com/iframes/interactive-transcript/ (I have video transcript and caption files in SRT and WebVTT format; I will not use YouTube's machine-transcribed transcript.)
I did some research online, and it appears there is no free plug-in/module that can do this. There are some paid options, such as Captionbox (http://speakertext.com/captionbox), 3rdMediea, and SubPly (http://www.subply.com/en/Products/InterActiveTranscript.htm; by the way, this is the best I have found so far: it loads transcripts in different languages on the fly). I am reluctant to use these paid options, primarily because I do not want to rely on a single provider.
Can someone please advise me on a better option?
Thank you.
You can always write your own solution:
Read the YouTube API docs:
https://developers.google.com/youtube/js_api_reference?csw=1
Check getCurrentTime()
Read the transcript from a file/database/hidden div and display it; when getCurrentTime() falls within a cue's time range from your transcript, highlight that cue (like in the CaptionBox example).
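On the page this would be JavaScript driving the YouTube player, but the matching step itself is language-agnostic. A minimal sketch of that step (shown here in Swift, with made-up cue values standing in for your parsed SRT/WebVTT data):

import Foundation

// A single transcript cue, as parsed from an SRT/WebVTT file.
struct Cue {
    let start: TimeInterval   // seconds
    let end: TimeInterval
    let text: String
}

// Index of the cue to highlight at `time`, or nil if no cue covers that moment.
func activeCueIndex(at time: TimeInterval, in cues: [Cue]) -> Int? {
    cues.firstIndex { time >= $0.start && time < $0.end }
}

// Hypothetical cues; in practice these come from your caption parser.
let cues = [
    Cue(start: 0.0, end: 2.5, text: "Hello and welcome."),
    Cue(start: 2.5, end: 6.0, text: "Today we look at transcripts."),
]

// Poll the player (say every 250 ms), pass in getCurrentTime(), highlight the result.
if let index = activeCueIndex(at: 3.1, in: cues) {
    print("Highlight cue \(index): \(cues[index].text)")
}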

YouTube embed code on iPad

I'm working on improving the experience of a site by adding iPad support, including support for videos. Our client is pushing for a YouTube model for storing and serving videos -- great for us! I originally planned to use YouTube's new HTML5-supporting <iframe> snippets. This offloads device detection to YouTube and makes embedding a video a cinch, as we don't need to worry about compatibility. It turns out that the CMS we're using, Sitecore CMS, strips <iframe>s out of our WYSIWYG editor, and after a lot of research it looks like this is rather hard to prevent.
Fast forward to now... I tested the old-style <embed> code and discovered that even though iOS doesn't support Flash, these embeds seem to work fine on the iPad. Some Stack Overflow research led me to a post which suggests it's because of the YouTube plug-in /System/Library/Internet Plug-Ins/YouTubePlugIn.webplugin on iPads that allows the playback.
My question is: is there any documentation that this is the exact reason? I'd like to cite it as the reason we can use the regular <embed> code, but I need to back it up with an official iOS document. Is this YouTube plug-in on every iPad by default, or do users need to install it manually? This seems like a great solution given our unfortunate incompatibility with <iframe>, but I need to justify the use of <embed> with hard facts. Thanks in advance.
The answer you are looking for is to be found in Apple's URL Scheme Reference. Basically, it's a mechanism on iDevices that detects and specially handles certain types of URLs, for instance Google Maps, iTunes, and YouTube.
Here are a few reference links:
https://developer.apple.com/library/archive/featuredarticles/iPhoneURLScheme_Reference/Introduction/Introduction.html#//apple_ref/doc/uid/TP40007899
https://developer.apple.com/library/archive/featuredarticles/iPhoneURLScheme_Reference/YouTubeLinks/YouTubeLinks.html#//apple_ref/doc/uid/TP40007895-SW1
And just for good measure, you might also want to take a look at the Safari Developer Library for the best practice recommendations on HTML5 Video and Audio embedding :-)

Video editing language

My next project will be all about language tools, parsing, and such. For that reason I've decided to write a simple language that can be used for video editing. So instead of using those desktop applications (Sony Vegas, Adobe Premiere, ...), it's basically a language in which you define the effects and so on, and it generates a video for you.
Since I have no experience in this kind of business, I need some help. The goal of the project is to create a simple language that can do some basic things (such as fading in text). I am looking for articles/projects/blogs/whatever related to this that could help me write this language. (Note that I don't need articles about language parsing, since I'm pretty familiar with that; just the video editing part.)
Thanks,
William v. Doorn
If I understand your goal correctly, you should take a look at Avisynth. I use it, and like it. It's Windows-only, but conceptually it seems to have what you are going after: a scripting language for non-linear video editing.
I'm having trouble understanding the purpose of writing such code.
If you are intending to create a tool that a user can use to edit a video by supplying a set of commands, how is forcing the user to write text better than the GUI video editors that are available? It's going to have pretty low usability in this situation.
If you are looking for a way to automate some kind of editing process, some video editors like VirtualDub already contain tools for batching and plugins to allow them to be automated.
Are you actually looking to make a tool that will be used by someone to edit video or is this for your own intellectual curiosity?
The best starting point might be to install trial versions of common video editing software and see what they offer.
When it comes to writing video editing software, I always see ffmpeg mentioned.
From the site:
FFmpeg is a complete, cross-platform solution to record, convert and stream audio and video. It includes libavcodec - the leading audio/video codec library.
