How are Protected Media Path and similar systems implemented?

Windows provides DRM functionality to applications that require it.
Some applications, however, get more protection than others.
As an example, take Edge (both Legacy and Chromium) or IE, which use Protected Media Path (PMP): they get to display Netflix content above 720p. Other browsers don't use PMP and are capped at 720p.
The difference in protection becomes noticeable when you try to capture the screen: Firefox/Chrome capture fine, but in Edge/IE a fixed black image takes the place of the media you are playing, while you still see the media control buttons (play/pause/etc.) that are normally overlaid (alpha blended) on the media content.
Example (not enough rep yet to post directly)
The question here is mainly conceptual, and could in fact also apply to systems with identical behavior, such as iOS, which also replaces the picture when you screenshot or capture the screen while playing Netflix.
How does the system manage to present two different images on two different outputs (capture APIs get no DRM content, while the attached physical monitor screen shows the DRM content)?
I'll make a guess, and I'll start by excluding HW overlays. The reason is that the play/pause buttons are still visible in the captured output. Since they are overlaid (alpha blended) on the media on screen, and alpha blending on HW overlays is not possible in DirectX 9 or later, nor with legacy DirectDraw, hardware overlays have to be ruled out. And by the way, neither d3d9.dll nor ddraw.dll is loaded by mfpmp.exe or iexplore.exe (version 11). Plus, hardware overlays are now considered a legacy feature, while Media Foundation (which Protected Media Path is part of) is very much alive and maintained.
So my guess is that DWM, which is in charge of screen composition, actually performs two compositions: either by forking the composition at the point where it encounters a DRM area, feeding one output to the screen (with the DRM-protected content) and the other to the various screen-capturing methods and APIs, or by doing two entirely separate compositions in the first place.
Is my guess correct? And could you please provide evidence to support your answer?
My interest is in understanding how composition software and DRM are implemented, primarily on Windows. But how many other ways could there be to do it in other OSes?
Thanks in advance.

According to this document, both options are available.
The modern PlayReady DRM that Netflix uses for playback in IE, Edge, and the UWP app uses the DWM method; you can notice this because the video area shows only a black screen when DWM is forcibly killed. This seems to be because modern PlayReady is only supported since Windows 8.1, which does not let users disable DWM easily.
I think both methods were used in Windows Vista–7, but I have no samples to test with. Since HW overlays don't play well with window previews, animations, and transparency, the method was probably switched depending on the DWM status.
For iOS, it seems that a mechanism similar to the DWM method is implemented at the display-server level (SpringBoard?) to present protected content, which is processed in the Secure Enclave Processor.
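The composition path itself isn't exposed, but on the app side iOS does at least tell you when capture or mirroring is active. A minimal Swift sketch (assuming iOS 11+ and a UIKit app; the view names are placeholders) of the app-visible half of this behavior:

```swift
import UIKit

/// Hides a "protected" view whenever the screen is being captured
/// (screen recording, mirroring, etc.). This is only the app-visible
/// side of the mechanism; the secure composition of real DRM video
/// happens below the public API.
final class ProtectedContentViewController: UIViewController {
    private let protectedView = UIImageView() // stands in for the video layer

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(protectedView)

        // UIScreen.isCaptured flips while any capture is active.
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(captureStateChanged),
            name: UIScreen.capturedDidChangeNotification,
            object: nil
        )
        captureStateChanged()
    }

    @objc private func captureStateChanged() {
        // Blank the content, similar to what the system does for
        // DRM-protected video surfaces during capture.
        protectedView.isHidden = UIScreen.main.isCaptured
    }
}
```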

Related

Converting a Canvas prototype to iOS native

I have developed a Canvas prototype of a game (kind of), and even though I have it running at a decent 30 FPS in a desktop browser, the performance on iOS devices is not what I hoped (lots of unavoidable pixel-level manipulation in nested x/y loops, already optimized as far as possible).
So, I'll have to convert it to a mostly native ObjC app.
I have no knowledge of ObjC or Cocoa Touch, but a solid generic C background. Now, I guess I have two options -- can anyone recommend one of them and tell me whether they are at all possible?
1) Put the prototype into a UIWebView, and JUST do the pixel buffer filling loops in C. Can I get a pointer to a Canvas pixel array living in a web view "into C", and would I be allowed to write to it?
2) Make it all native. The caveat here is that I use quite a few 2D drawing functions too (bezierCurveTo etc.), and I wouldn't want to recode those, or find drawing libraries. So, is there a Canvas-compatible drawing API available in iOS that can work outside a web view?
Put the prototype into a UIWebView, and JUST do the pixel buffer filling loops in C
Nah. Then just embed a web view into your app and continue coding in JavaScript. It's already JITted so you don't really have to worry about performance.
Can I get a pointer to a Canvas pixel array living in a web view "into C"
No, not directly. If you are a hardcore assembly hacker and reverse engineer, then you may be able to do it by looking at the memory layout and call stack of a UIWebView drawing to a canvas, but it's just nonsense.
and would I be allowed to write to it?
Once you program your way down to that, I'm sure you would.
Make it all native.
If you wish so...
The caveat here is that I use quite a few 2D drawing functions too (bezierCurveTo etc.), and I wouldn't want to recode those, or find drawing libraries.
You wouldn't have to do either of those, instead you could just read the documentation of the uber awsum graphix libz called CoreGraphics. It comes by default with iOS (as well as OS X, for the record).
So, is there a Canvas-compatible drawing API available in iOS that can work outside a web view?
No, I don't know of one.
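For reference, the canvas-style curve calls map fairly directly onto CoreGraphics. A minimal sketch (Swift shown here for brevity; the Objective-C calls are one-to-one) of drawing a cubic Bézier into an offscreen image, roughly the equivalent of moveTo/bezierCurveTo/stroke on a canvas:

```swift
import UIKit

// Draws a cubic Bézier curve into an offscreen bitmap, the CoreGraphics
// equivalent of canvas moveTo / bezierCurveTo / stroke.
func renderCurve(size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        let cg = ctx.cgContext
        cg.setStrokeColor(UIColor.black.cgColor)
        cg.setLineWidth(2)

        cg.move(to: CGPoint(x: 10, y: 100))           // ~ ctx.moveTo(10, 100)
        cg.addCurve(to: CGPoint(x: 190, y: 100),      // ~ ctx.bezierCurveTo(...)
                    control1: CGPoint(x: 60, y: 10),
                    control2: CGPoint(x: 140, y: 190))
        cg.strokePath()                               // ~ ctx.stroke()
    }
}
```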
Translating your HTML5/JavaScript code to Objective-C sounds like a daunting task. How about a third option where you don't have to change your JavaScript code?
http://impactjs.com/ejecta
Ejecta is like a Browser without the Browser. It's specially crafted for Games and Animations. It has no DIVs, no Tables, no Forms – only Canvas and Audio elements. This focus makes it fast.
Yes, it's open source: https://github.com/phoboslab/Ejecta

Interactive video in iOS : Is it possible to trigger specific actions in code by tapping discrete parts in the video?

I am asking this because I couldn't find the answer anywhere, at least with the keywords I could think of.
The most relevant question/answer I've found is (Create interactive videos in iPad - An app for product demo). The user Jano replied:
The easiest way to create interactive videos for iOS is to use Apple's HTTP Live Streaming technology. You have to create a video, embed metadata, play it using MPMoviePlayerController or AVPlayerItem, and then display clickable areas in response to metadata notifications.
Metadata should contain coordinates for the element you are tracking, e.g. a dress, and an identifier for the product. You overlay this info with a clickable subview that reveals more information about the product. There are several applications of this kind in iTunes; here is one.
Once you get a working product and weeks' worth of videos, the most difficult part is to perform motion tracking with as little human interaction as possible. One approach is to use Adobe After Effects, another is to code your own solution based on OpenCV.
The example I've found concerning this technology (http://vimeo.com/16455248) showed NSButtons being added automatically when the video reaches the embedded meta-tags. My client wants an interactive human-body video that pauses at a specific time (maybe using the meta-tags) and reacts to the user tapping an element in the video (e.g. imagine a pill inside the stomach; tapping this pill triggers another pre-rendered video, in a way that is not transparent to the user). I have thought about animations using Cocos2D or OpenGL ES, but I lack people who master these technologies.
I didn't quite understand the "motion tracking" reference in the quote above. Jano mentions Adobe After Effects and OpenCV. Is this motion tracking something like a UIGestureRecognizer? Does it track parts of the video itself, or motions initiated by the user, such as taps?
I hope I've stated the question as clearly as possible. Thank you in advance.
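For reference, here is a minimal Swift sketch of the metadata-driven approach described in the quoted answer: it listens for timed metadata cues on an AVPlayerItem and pauses playback when one arrives, so a tappable overlay could be shown. The cue payload format (a plain string identifier) and the class name are assumptions, not part of the original answer.

```swift
import AVFoundation
import UIKit

/// Listens for timed metadata cues embedded in an HLS stream and pauses
/// playback when one arrives, so the UI layer can place a tappable hotspot.
/// The cue payload format (a plain string identifier) is an assumption.
final class InteractiveVideoController: NSObject, AVPlayerItemMetadataOutputPushDelegate {
    let player: AVPlayer

    init(url: URL) {
        let item = AVPlayerItem(url: url)
        player = AVPlayer(playerItem: item)
        super.init()

        // Attach a metadata output so timed metadata is pushed to us.
        let output = AVPlayerItemMetadataOutput(identifiers: nil)
        output.setDelegate(self, queue: .main)
        item.add(output)
    }

    func metadataOutput(_ output: AVPlayerItemMetadataOutput,
                        didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup],
                        from track: AVPlayerItemTrack?) {
        for group in groups {
            for metadataItem in group.items {
                guard let cue = metadataItem.stringValue else { continue }
                // Pause at the cue; the UI can now overlay a tappable view
                // (e.g. over the "pill") for this identifier.
                player.pause()
                print("Reached interactive cue: \(cue)")
            }
        }
    }
}
```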
This question is a year old, but I can give you insight into the After Effects question. AE has a feature where you can define an area in a video frame and the software will track that area across the timeline, logging the coordinates at specific intervals. For example, in a video of a person riding a mountain bike, you could select an area around their helmet and AE will log coordinates of the helmet throughout the timeline.
Since Flash was the most likely target for interactive video, the typical workflow would encode this coordinate data into a Flash video as cue point events (this is the only method I have personally experienced). According to some googling, the data is stored in key frames and can be extracted using scripts.
More info: http://helpx.adobe.com/after-effects/using/tracking-stabilizing-motion-cs5.html
Here's a manual method for extracting the data:
In the timeline panel, select the footage and press the U key; all track point keyframes will show up. Here's the magic: select the Feature Center property of each track point and copy it (Cmd+C for Mac or Ctrl+C for PC). Now open any text editor such as TextMate or Notepad and paste the data (Cmd+V for Mac or Ctrl+V for PC).
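If you go this route, the pasted keyframe text still has to be turned into coordinates on the playback side. A small Swift sketch, assuming a simplified "frame x y" layout per line (the actual After Effects clipboard export has header lines and tab-separated columns you would need to skip, so treat the format here as an assumption):

```swift
import CoreGraphics
import Foundation

/// One tracked position at a given frame number.
struct TrackPoint {
    let frame: Int
    let position: CGPoint
}

/// Parses lines of the form "frame x y" into track points.
/// NOTE: the real After Effects export includes header/footer lines and
/// tab-separated columns; this simplified layout is an assumption.
func parseTrackPoints(from text: String) -> [TrackPoint] {
    text.components(separatedBy: .newlines).compactMap { line in
        let parts = line.split(separator: " ").compactMap { Double($0) }
        guard parts.count == 3 else { return nil }
        return TrackPoint(frame: Int(parts[0]),
                          position: CGPoint(x: parts[1], y: parts[2]))
    }
}
```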

Custom tabbar items in MonoTouch

I would like to create custom tabbar items similar to the ones shown here:
I assume these have to be designed and created first in Photoshop or a similar application. Are there any resources or tutorials available that demonstrate the creation of such items in Photoshop and how these are then used in MonoTouch?
Creating and bundling bitmaps is one option - and likely the most common one (googling should turn up several tutorials). To get optimal quality you need to supply multiple sets: for the old iPhone/iPod, the newer retina iPhone/iPod, the iPad (1, 2) and the retina iPad 3. Covering each case with beautiful icons can take a lot of space (or it won't look as good).
An alternative is to create the bitmaps at runtime, e.g. using the CoreGraphics API. This might seem counterproductive (and surely can be in many cases), but it has the advantage of requiring less (storage) space and/or giving better quality (see note).
Why? Because if you create them at runtime, you only create the ones for the specific device you're running on. You can even cache them and re-create them when missing (e.g. if iOS flushes your application cache).
If you're not an artist (and I'm not) you might want to look at easily licensable vector icons. The ones in your screenshot look monochrome and could even come from (bundling, or extracting the outlines of) a custom font - like the one provided by FontAwesome (CC BY 3.0) or similar sources.
Note: Maybe you noticed (I know I did) that some iPad applications looked beautiful (compared to others) on the iPad 3 even though they were released months before the hardware became available. Vector graphics wins ;-)
UPDATE: Someone already made a script to convert the FontAwesome characters to iOS tab bar icons. However, since it's done outside the app, you'll need multiple versions of each bitmap to get the best look on every device.
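As a rough illustration of the runtime approach (Swift shown here; the MonoTouch/C# bindings mirror these UIKit/CoreGraphics calls closely), here is a sketch that renders a glyph from a bundled icon font into a tab-bar-sized image at the device's own scale. The font name, glyph, and sizes are assumptions:

```swift
import UIKit

/// Renders a single glyph from an icon font (e.g. a bundled FontAwesome copy)
/// into a tab-bar sized image at the device's native scale, so no per-device
/// bitmap sets need to be shipped. Font name and sizes are assumptions.
func tabBarIcon(glyph: String, fontName: String = "FontAwesome",
                pointSize: CGFloat = 24, canvas: CGFloat = 30) -> UIImage? {
    guard let font = UIFont(name: fontName, size: pointSize) else { return nil }

    let size = CGSize(width: canvas, height: canvas)
    let renderer = UIGraphicsImageRenderer(size: size) // uses the screen scale by default
    let image = renderer.image { _ in
        let attributes: [NSAttributedString.Key: Any] = [
            .font: font,
            .foregroundColor: UIColor.black
        ]
        // Center the glyph in the canvas.
        let glyphSize = glyph.size(withAttributes: attributes)
        let origin = CGPoint(x: (canvas - glyphSize.width) / 2,
                             y: (canvas - glyphSize.height) / 2)
        glyph.draw(at: origin, withAttributes: attributes)
    }
    // Tab bar items use the image as a template (tinted) shape.
    return image.withRenderingMode(.alwaysTemplate)
}
```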

Recreating Theater Mode with DirectX

I need to simultaneously display a video that is playing in my application full screen on a larger monitor. On some video cards this is called Theater mode and is configured with a tool supplied by the card manufacturer.
I would like to do this in software only. Can I do this with DirectX?
My idea is to take the video currently playing via DirectShow and repaint it on a second display (as configured by the user) in full-screen mode.
What technologies or methods would I use for this?
The straightforward way is to split the still-encoded video into two branches and use two video renderers set to present video on different monitors. One renderer could be part of your application UI; the other could go full screen on the large secondary monitor.
Splitting the encoded video gives you the option to still leverage hardware-assisted decoding (DXVA) where available. You might prefer to use a software-only decoder and split the already-decoded video - this also works.
You might additionally want to implement a filter that can temporarily disable one renderer or the other, for example by stopping media samples from passing through.
Another thing you can do is use bridging to control the renderers even more flexibly and be able to detach them from the media source.

Programmatically determine available iPhone camera resolutions

It looks like when I shoot video with UIImagePickerControllerQualityTypeMedium, on an iPod Touch it comes out 480x360, but on an iPhone 4 it's something higher (can't say just what as I don't have one handy at the moment) and on an iPad 2 presumably the same as the 4, if not something different again.
I'd like to shoot the same quality on all devices -- I have to add some frames and titles, and it'll make my life a lot easier if I just have to code that for one resolution. Is there any way to determine what the different UIImagePickerControllerQualityType values correspond to at run time? (Apart from shooting video with each and then examining the result, that is.)
Or is my only choice to use UIImagePickerControllerQualityType640x480?
If you need more customization/power on iOS than you get with the higher-level objects such as UIImagePickerController, it is recommended to work at the next lower level: the AV Foundation framework. Apple has some excellent documentation on AV Foundation programming that should come in handy for this purpose.
Unfortunately, even there you are limited to capturing at 640x480 if you want a resolution that is standard across all devices. There is, however, a great chart available at the same link (the anchors are broken in the docs, so Ctrl+F to "Capturing Still Images") that lists the resolutions for various devices under each quality setting.
Your most solid bet, assuming 640x480 is too small, is to work out some sort of scaling algorithm that would allow you to scale according to the overall resolution.
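There is no public mapping from UIImagePickerControllerQualityType to pixel dimensions, but with AVFoundation you can at least ask the current device at runtime which presets and formats it supports. A minimal Swift sketch (function name is an assumption):

```swift
import AVFoundation
import CoreMedia

/// Prints which common session presets the default camera accepts, and the
/// raw pixel dimensions of every capture format the device offers.
func dumpCameraCapabilities() {
    guard let device = AVCaptureDevice.default(for: .video) else {
        print("No video capture device available")
        return
    }

    let session = AVCaptureSession()
    // Add the camera as an input so preset queries reflect this device.
    if let input = try? AVCaptureDeviceInput(device: device), session.canAddInput(input) {
        session.addInput(input)
    }

    let presets: [AVCaptureSession.Preset] = [.vga640x480, .medium, .high, .hd1280x720]
    for preset in presets where session.canSetSessionPreset(preset) {
        print("Supported preset: \(preset.rawValue)")
    }

    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        print("Format: \(dims.width)x\(dims.height)")
    }
}
```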
