Recording the screen of an iPad 2 - ios

First of all: This question is not directly programming related. However, the problem only exists for developers, so I'm trying to find an answer here anyway, since other people in this community may already have solved it.
I want to record the screen of the iPad 2 to be able to create demo videos of an app.
Since I'm using motion data, I cannot use the simulator to create the video and have to use the actual iPad itself.
I've seen various websites where different methods were discussed.
iPad 2 <==> Apple Digital AV Adapter <==> Blackmagic Design Intensity Pro <==> Playback software <==> TechSmith Camtasia screen recorder on the playback software to circumvent the HDCP flag
iPad 2 <==> Apple VGA Adapter <==> VGA2USB <==> Recording software
...
Everyone seems to have his own hacky solution to this problem.
My setup is the following:
iPad 2 (without Jailbreak)
Apple Mac mini with Lion Server
PC with a non-HDCP-compliant motherboard
Non-HDCP-compliant displays
It doesn't matter whether the recording happens on the Mac or on the PC.
My questions:
Is it possible to disable the HDCP flag programmatically from within the application itself?
HDMI offers better quality than VGA. Will the first method I've listed work with my setup even though I don't have a full HDCP chain?
What about the Intensity Extreme box? Can I connect it to the Thunderbolt port of the Mac mini and record from there?
Is the Thunderbolt port of the Mac mini bidirectional, and is it also suited for capturing? Is the Mac mini itself HDCP compliant? If it does not work because my screens are not HDCP compliant, will it work if I start the recording software and then disconnect the screens? Will it work if I use an iPad 2 over VNC as a screen, since it must be HDCP compliant if it sends HDCP streams?
If I have to fall back to the VGA solution: Will the VGA adapter mirror everything that's showing on the iPad 2 screen, or do I have to program a special presentation mode that sends everything to the VGA cable instead of the iPad screen (see the sketch after these questions)? Will the proposed VGA2USB setup give good enough quality, or would you recommend other tools?
What about the Apple Composite AV Cable? Maybe another approach?
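For reference, the usual way to drive an external display (VGA or HDMI adapter) from code on iOS is the UIScreen/UIWindow API; the iPad 2 can simply mirror its screen through the adapter, so code is only needed if the external output should show something different. A minimal sketch of such a presentation mode, written in modern Swift for brevity (the iPad 2 era used Objective-C, and on iOS 13+ you would use window scenes instead), with the root view controller left as a placeholder:

import UIKit

// Show dedicated content on an external (VGA/HDMI) screen instead of
// relying on plain mirroring.
final class ExternalDisplayController {
    private var externalWindow: UIWindow?
    private var observer: NSObjectProtocol?

    func startObserving() {
        observer = NotificationCenter.default.addObserver(
            forName: UIScreen.didConnectNotification,
            object: nil, queue: .main) { [weak self] note in
                guard let screen = note.object as? UIScreen else { return }
                self?.attach(to: screen)
        }
        // Handle the case where the adapter is already plugged in.
        if UIScreen.screens.count > 1 {
            attach(to: UIScreen.screens[1])
        }
    }

    private func attach(to screen: UIScreen) {
        let window = UIWindow(frame: screen.bounds)
        window.screen = screen                          // route this window to the adapter
        window.rootViewController = UIViewController()  // replace with your demo content
        window.isHidden = false
        externalWindow = window
    }
}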

I decided to use the Blackmagic Design Intensity Pro together with the Apple Digital AV adapter on a machine with a working HDCP chain.
This approach worked well: capturing is possible at 720p, with the iPad's screen centered within the captured video. Capturing therefore does not happen at the iPad's full native resolution, and black borders are introduced to fill the 720p frames.

I posted info about displaying HDMI HDTV output on a non-HDCP monitor; look for my posts, perhaps they will be of use. Or, how about just using a cell phone to record a video of your tablet's screen, with proper lighting, low reflectance, etc.? It won't be 100%, but it might be sufficient.

Related

How are Protected Media Path and similar systems implemented?

Windows provides DRM functionality to applications that require it.
Some of them, however, have more protection than others.
As an example, take Edge (both Legacy and Chromium) or IE that use Protected Media Path. They get to display >720p Netflix content. Other browsers don't use PMP and are capped at 720p.
The difference in the protections is noticeable when you try to capture the screen: while you have no problems on Firefox/Chrome, in Edge/IE a fixed black image takes the place of the media you are playing, but you still see media control buttons (play/pause/etc) that are normally overlaid (alpha blended) on the media content.
Example (not enough rep yet to post directly)
The question here is mainly conceptual, and in fact it could also apply to systems with identical behavior, like iOS, which also replaces the picture when you screenshot or capture the screen on Netflix.
How does it get to display two different images on two different outputs (the capture APIs with no DRM content, and the attached physical monitor with the DRM content)?
I'll make a guess, and I'll start by excluding HW overlays. The reason is that the play/pause buttons are still visible in the captured output. Since they are overlaid (alpha blended) on the media on the screen, and alpha blending on HW overlays is possible neither in DirectX 9 or later nor in legacy DirectDraw, hardware overlays have to be ruled out. And by the way, neither d3d9.dll nor ddraw.dll is loaded by mfpmp.exe or iexplore.exe (version 11). Plus, I think hardware overlays are now considered a legacy feature, while Media Foundation (which Protected Media Path is a part of) is very much alive and maintained.
So my guess is that DWM, which is in charge of screen composition, is actually doing two compositions: either by forking the composition at the point where it encounters a DRM area, feeding one output to the screen (with the DRM-protected content) and the other to the various screen-capturing methods and APIs, or by doing two entirely separate compositions in the first place.
Is my guess correct? And could you please provide evidence to support your answer?
My interest is understanding how composition software and DRM are implemented, primarily in Windows. But how many other ways could there be to do it in different OSes?
Thanks in advance.
According to this document, both options are available.
The modern PlayReady DRM that Netflix uses for its playback in IE, Edge, and the UWP app uses the DWM method, which can be noticed by the video area showing only a black screen when DWM is forcibly killed. This seems to be because modern PlayReady is only supported since Windows 8.1, which does not let users disable DWM easily.
I think both methods were used in Windows Vista-7, but I have no samples to test this. As HW overlays don't look that good with window previews, animations, and transparencies, they would have switched between the methods depending on the DWM status.
For iOS, it seems that a mechanism similar to the DWM method is implemented at the display-server (SpringBoard?) level to present protected content, which is processed in the Secure Enclave Processor.
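As a small aside, there is no public hook into that iOS protection path, but since iOS 11 an app can at least observe when its own screen is being captured or mirrored and react on its side. A minimal, purely illustrative sketch (it does not touch FairPlay/PMP-style protection, which the system applies on its own):

import UIKit

// Observe screen-capture state (recording, AirPlay mirroring, etc.).
// The system blanks protected video by itself; this only lets an app
// react, e.g. by hiding its own sensitive views.
final class CaptureObserver {
    private var observer: NSObjectProtocol?

    func start() {
        observer = NotificationCenter.default.addObserver(
            forName: UIScreen.capturedDidChangeNotification,
            object: nil, queue: .main) { _ in
                if UIScreen.main.isCaptured {
                    print("Screen is being captured or mirrored")
                } else {
                    print("Capture stopped")
                }
        }
    }
}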

ios ARKit 3 with iPad Pro 2020, how to use front camera data with back camera tracking?

The ARKit API supports simultaneous world and face tracking via the back and front cameras, but unfortunately, due to hardware limitations, the new iPad Pro 2020 is unable to use this feature (probably because the LiDAR camera draws a lot more power). This is a bit of a step back.
Here is an updated reference in the example project:
guard ARWorldTrackingConfiguration.supportsUserFaceTracking else {
    fatalError("This sample code requires iOS 13 / iPadOS 13, and an iOS device with a front TrueDepth camera. Note: 2020 iPads do not support user face-tracking while world tracking.")
}
There is also a forum conversation proving that this is an unintentional hardware flaw.
It looks like the mobile technology is not "there yet" for both. However, for my use case I just wanted to be able to switch between front and back tracking modes seamlessly, without needing to reconfigure the tracking space. For example, I would like a button to toggle between "now you track and see my face" mode and "world tracking" mode.
There are two cases, possible and impossible, and the alternative approaches depend on which one it is.
Is it possible, or would switching AR tracking modes require setting up the tracking space again? If it is possible, how would it be achieved?
If it's impossible:
Even if I don't get face-tracking during world-tracking, is there a way to get a front-facing camera feed that I can use with the Vision framework, for example?
Specifically: how do I enable back-facing tracking, get front- and back-facing camera feeds simultaneously, and disable one or the other selectively? If this is possible even without front-facing tracking, with only the basic feed, that will work for me.
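If simultaneous use turns out to be impossible on this hardware, the fallback I would try is swapping the session's configuration on demand. A rough sketch only (not verified on an iPad Pro 2020); re-running with ARWorldTrackingConfiguration generally means world tracking has to re-establish itself, unless ARKit manages to relocalize against the existing anchors:

import ARKit

// Toggle one ARSession between face tracking (front camera) and
// world tracking (back camera). Passing an empty options set keeps
// existing anchors, but tracking is re-established on each switch.
func setMode(faceTracking: Bool, on session: ARSession) {
    if faceTracking {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.run(ARFaceTrackingConfiguration(), options: [])
    } else {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        // On devices that do support the combined mode, enable it:
        if ARWorldTrackingConfiguration.supportsUserFaceTracking {
            config.userFaceTrackingEnabled = true
        }
        session.run(config, options: [])
    }
}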

Will ARCore work on the Huawei Mate 10? It works on the P20 line, which has the exact same hardware except one less camera lens

It is weird for me to come here and see that the P20 line is ARCore capable but the Mate 10 line is not, and I would like to know why, given that the hardware in the P20 Pro is almost the same except for the RAM and one more lens; it just doesn't make any sense to me.
As far as I know, it has something to do with calibrating each phone based on the camera and motion sensors and their locations on the phone. So even though the specifications might seem similar, there can still be differences in the locations of the sensors and the cameras.
They might add support in the future.
Keep checking this page for supported devices; as I remember, when ARCore first became available it was not supported on devices like the Galaxy S8 and S8+, and support was added later, so keep an eye out for it.

Why is Pokemon Go running on unsupported devices?

If most devices are not supported by ARCore, then why is Pokemon Go running on every device?
My device is not supported by ARCore, but Pokemon Go runs on it with full performance.
Why?
Until October 2017, Pokemon Go appears to have used a Niantic-made AR engine. At a high level, the game placed the Pokemon globally in space at a server-defined location (the spawn point). The AR engine used the phone's GPS and compass to determine whether the phone should be turned to the left or to the right. Once the phone pointed at the right heading, the AR engine drew the 3D model over the video coming from the camera. At that time there was no attempt to map the environment, recognize surfaces, etc. It was a simple, yet very effective technique which created the stunning effects we've all seen.
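That technique needs little more than the bearing from the player to the spawn point, compared against the compass heading. A rough sketch of the idea (this is not Niantic's code, just the standard great-circle bearing formula with CoreLocation types):

import Foundation
import CoreLocation

// Bearing in degrees (0 = north) from the player's location to a
// server-defined spawn point, using the standard great-circle formula.
func bearing(from player: CLLocationCoordinate2D,
             to spawn: CLLocationCoordinate2D) -> CLLocationDirection {
    let lat1 = player.latitude * .pi / 180
    let lat2 = spawn.latitude * .pi / 180
    let dLon = (spawn.longitude - player.longitude) * .pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let degrees = atan2(y, x) * 180 / .pi
    return (degrees + 360).truncatingRemainder(dividingBy: 360)
}

// Positive result: turn right; negative: turn left; near zero: the spawn
// point is roughly in front of the camera and the model can be drawn
// over the live video feed.
func turnOffset(compassHeading: CLLocationDirection,
                bearingToSpawn: CLLocationDirection) -> CLLocationDirection {
    var delta = bearingToSpawn - compassHeading
    if delta > 180 { delta -= 360 }
    if delta < -180 { delta += 360 }
    return delta
}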
After that, Niantic showed prototypes of Pokemon GO using ARKit for iOS. The enhancements are easy to notice: missed pokeballs appear to bounce very naturally on the sidewalk and respect physics, and Pikachu seems to walk naturally on the sidewalk rather than floating in the air as in the then-current release. Most observers expected Niantic to replace the existing engine with ARKit (iOS) and ARCore (Android), possibly via Unity 3D AR APIs.
In early 2018, Niantic improved the game on Android by adding support for ARCore, Google's augmented reality SDK, a similar update to the one we had already seen on iOS 11, which was updated to support ARKit. The iOS update gave the virtual monsters a much greater sense of presence in the world, thanks to camera tracking, allowing them to stand more accurately on real-world surfaces rather than floating in the center of the frame. Android users need a phone compatible with ARCore in order to use the new "AR+" mode.
Prior to AR+, Pokémon Go used rough approximations of where objects were to try to place the Pokémon in your environment, but it was a clunky workaround that functioned mostly as a novelty feature. The new AR+ mode also lets iOS users take advantage of a new capture bonus, called expert handler, that involves sneaking up close to a Pokémon, so as not to scare it away, in order to capture it more easily. Since ARKit is designed to use the camera together with the gyroscope and all the other sensors, it actually feeds in at 60 fps with full resolution. It's a lot more performant, and it actually uses less battery than the original AR mode.
For iOS users there's a standard list of supported devices:
iPhone 6s and higher
iPad 2017 and higher
For Android users, not everything is clear. Let's see why. Even if you have an officially unsupported device with poorly calibrated sensors, you can still use ARCore on your phone; for example, ARCore for All allows you to do it. So for Niantic as well, there would be no difficulty in making every Android phone suitable for Pokemon Go.
Hope this helps.

Take photos with "portrait" mode (bokeh) in iOS 11 / iPhone 7/8plus in 3rd-party app?

The iPhone 7 plus and 8 plus (and X) have an effect in the native camera app called "Portrait mode", which simulates a bokeh-like effect by using depth data to blur the background.
I want to add the capability to take photos with this effect in my own app.
I can see that in iOS 11, depth data is available. But I have no idea how to use this to achieve the effect.
Am I missing something -- is it possible to turn on this effect somewhere and just get the image with it applied, rather than having to try and make this complicated algorithm myself?
cheers
Unfortunately, Portrait mode and Portrait Lighting aren't open to developers as of iOS 11, so you would have to implement a similar effect on your own. Capturing Depth in iPhone Photography and Image Editing with Depth from this year's WWDC go into detail on how to capture and edit images with depth data.
There are two sample projects on the developer site that show you how to capture and visualize depth data using a Metal shader, and how to detect faces using AVFoundation. You could definitely use these to get started! If you search for AVCam in the Guides and Sample Code, they should be the first two that come up (I would post the links, but Stack Overflow is only letting me add two).
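For reference, the capture side of this looks roughly like the sketch below, assuming a dual-camera device running iOS 11. It only requests depth data alongside the photo; the bokeh blur itself is still up to you (for example a Core Image blur masked by the depth map):

import AVFoundation

// Minimal depth-enabled capture setup (error handling and camera
// permission checks omitted).
final class DepthCapture: NSObject, AVCapturePhotoCaptureDelegate {
    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()

    func configure() throws {
        // Depth requires the dual camera (iPhone 7 Plus / 8 Plus / X).
        guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                                   for: .video,
                                                   position: .back) else { return }
        session.beginConfiguration()
        session.sessionPreset = .photo
        session.addInput(try AVCaptureDeviceInput(device: device))
        session.addOutput(photoOutput)
        // Must be enabled on the output before it can be requested per photo.
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        session.commitConfiguration()
        session.startRunning()
    }

    func capture() {
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // photo.depthData is the AVDepthData to feed into your own blur.
        print("Got depth data:", photo.depthData as Any)
    }
}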

Resources