I have a game that is fully ready for Android, but while getting it to work on iOS I noticed that the instantiated objects aren't visible on iOS. I can confirm they are there, because they have colliders that send a game-over signal when they collide, and after a couple of seconds I do get a game over (when they collide). They're just not visible. Also, this works fine in Unity; I only get the problem after building to iOS, and I don't get it when I build to Android.
EDIT
This is a 2D game, so sprites are being rendered. Again, the instantiated objects are there and functioning as they should; it's just that the sprites are not being shown on the screen.
With the limited information provided in the question, the only answer I can give to your problem is this: there are some limitations for iOS builds, including these graphics limits:
DXT texture compression is not supported; use PVRTC formats instead.
Please see the Texture2D Component page for more information.
Rectangular textures cannot be compressed to PVRTC formats.
Movie Textures are not supported; use full-screen streaming playback instead. Please see the Movie playback page for more information. (More)
You should also check the texture compression override for iOS in Unity.
Remember: no one can answer definitively with this limited information. You can also do iOS debugging yourself by following this guide.
Windows provides DRM functionality to applications that require it.
Some of them, however, have more protection than others.
As an example, take Edge (both Legacy and Chromium) or IE that use Protected Media Path. They get to display >720p Netflix content. Other browsers don't use PMP and are capped at 720p.
The difference in the protections is noticeable when you try to capture the screen: while you have no problems on Firefox/Chrome, in Edge/IE a fixed black image takes the place of the media you are playing, but you still see media control buttons (play/pause/etc) that are normally overlaid (alpha blended) on the media content.
Example (not enough rep yet to post directly)
The question here is mainly conceptual, and in fact it could also apply to systems with identical behavior, like iOS, which also replaces the picture when you screenshot or capture the screen in Netflix.
How does it get to display two different images on two different outputs (capture APIs with no DRM content, and the attached physical monitor screen with DRM content)?
I'll make a guess, and I'll start by excluding HW overlays. The reason is that the play/pause buttons are still visible in the captured output. Since they are overlaid (alpha blended) on the media on screen, and alpha blending on HW overlays is not possible in DirectX 9 or later, nor is it with legacy DirectDraw, hardware overlays have to be discarded. And by the way, neither d3d9.dll nor ddraw.dll is loaded by mfpmp.exe or iexplore.exe (version 11). Plus, I think hardware overlays are now considered a legacy feature, while Media Foundation (of which Protected Media Path is a part) is very much alive and maintained.
So my guess is that DWM, which is in charge of screen composition, is actually doing two compositions: either by forking the composition process at the point where it encounters a DRM area, feeding one output to the screen (with the DRM-protected content) and the other to the various screen-capturing methods and APIs, or by doing two entirely different compositions in the first place.
Is my guess correct? And could you please provide evidence to support your answer?
My interest is in understanding how composition software and DRM are implemented, primarily on Windows. But how many other ways could there be to do it on different OSes?
Thanks in advance.
According to this document, both options are available.
The modern PlayReady DRM that Netflix uses for its playback in IE, Edge, and the UWP app uses the DWM method, which can be noticed by the video area showing only a black screen when DWM is forcibly killed. This seems to be because modern PlayReady has been supported since Windows 8.1, which does not let users disable DWM easily.
I think both methods were used in Windows Vista through 7, but I have no samples to test with. As HW overlays don't look that good with window previews, animations, and transparency, they would have switched between methods depending on the DWM status.
For iOS, it seems that a mechanism similar to the DWM method is implemented at the display-server (SpringBoard?) level to present protected content, which is processed in the Secure Enclave Processor.
The iPhone 7 Plus and 8 Plus (and X) have an effect in the native camera app called "Portrait mode", which simulates a bokeh-like effect by using depth data to blur the background.
I want to add the capability to take photos with this effect in my own app.
I can see that in iOS 11, depth data is available. But I have no idea how to use this to achieve the effect.
Am I missing something -- is it possible to turn on this effect somewhere and just get the image with it applied, rather than having to implement this complicated algorithm myself?
cheers
Unfortunately, Portrait mode and Portrait Lighting aren't open to developers as of iOS 11, so you would have to implement a similar effect on your own. Capturing Depth in iPhone Photography and Image Editing with Depth from this year's WWDC go into detail on how to capture and edit images with depth data.
There are two sample projects on the developer site that show you how to capture and visualize depth data using a Metal shader, and how to detect faces using AVFoundation. You could definitely use these to get started! If you search for AVCam in the Guides and Sample Code, they should be the first two that come up (I would post the links, but Stack Overflow is only letting me add two).
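To give a sense of the capture side, here is a minimal, hedged Swift sketch of enabling depth data delivery with AVCapturePhotoOutput on iOS 11. It assumes a dual-camera device and that you already have your own AVCapturePhotoCaptureDelegate; the function names are just placeholders, so treat it as a starting point rather than a complete solution:

```swift
import AVFoundation

// Sketch only: session/output setup is simplified, error handling omitted.
let captureSession = AVCaptureSession()
let photoOutput = AVCapturePhotoOutput()

func configureDepthCapture() {
    captureSession.beginConfiguration()
    captureSession.sessionPreset = .photo

    // Depth data requires the dual camera (or the TrueDepth camera on iPhone X).
    guard let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: device),
          captureSession.canAddInput(input) else { return }
    captureSession.addInput(input)

    guard captureSession.canAddOutput(photoOutput) else { return }
    captureSession.addOutput(photoOutput)

    // Depth delivery must be enabled on the output before requesting it per photo.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    captureSession.commitConfiguration()
}

func capturePhotoWithDepth(delegate: AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```

The depth map then arrives in the delegate callback as AVDepthData attached to the AVCapturePhoto; applying the actual background blur is still up to you, for example with a Core Image or Metal filter driven by the depth values.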
I'm using SceneKit with Metal (not OpenGL) and would like to allow a user to record a video of himself playing the game. Any ideas on how I can render the scene to a video? (There's no need to record the scene's audio, which might make it simpler.)
I thought I'd add it as an answer:
ReplayKit should do the job fine, though it does require iOS 9 and a device that supports Metal (A7 or later). I've never used it, but from what I remember of WWDC 2015 it only takes a few lines of code to set up. There are tons of tutorials on it available on the net.
This one seems to cover most of the pieces, such as starting and stopping recording, as well as excluding interface objects from the video if required.
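For reference, a bare-bones Swift sketch of the ReplayKit calls (hedged and untested here; error handling and UI are up to you, and `presentingViewController` is just an assumed reference to one of your view controllers):

```swift
import ReplayKit
import UIKit

// Start recording the screen; ReplayKit captures whatever is rendered, Metal included.
func startRecording() {
    let recorder = RPScreenRecorder.shared()
    guard recorder.isAvailable else { return }
    recorder.startRecording { error in
        if let error = error {
            print("Could not start recording: \(error)")
        }
    }
}

// Stop recording and show ReplayKit's built-in preview,
// where the user can trim, save, or share the clip.
func stopRecording(from presentingViewController: UIViewController) {
    RPScreenRecorder.shared().stopRecording { previewController, error in
        if let error = error {
            print("Could not stop recording: \(error)")
            return
        }
        if let previewController = previewController {
            presentingViewController.present(previewController, animated: true)
        }
    }
}
```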
I am a complete beginner in Objective-C and iOS programming. I spent a month figuring out how to show a 3D model using OpenGL ES (version 1.1) on top of the live camera preview by using AVFoundation. I am building a kind of augmented reality application on iPad: I process the input frames and show a 3D object overlaid on the camera preview in real time. That part was fine, because there are so many sites and tutorials about these things (thanks to this website as well).
Now I want to capture the whole screen (the model with the camera preview as the background) as an image and show it on the next screen. I found a really good demonstration here, http://cocoacoderblog.com/2011/03/30/screenshots-a-legal-way-to-get-screenshots/. He did everything I want to do. But, as I said, I am a beginner and don't understand the whole project without a detailed explanation, so I'm stuck because I don't know how to implement this.
Does anybody know a good tutorial or other resource on this topic, or have any suggestions about what I should learn in order to do this screen capture? It would help me a lot to move on.
Thank you in advance.
I'm currently attempting to solve this same problem to allow a user to take a screenshot of an Augmented Reality app. (We use Qualcomm's AR SDK plugged into Unity 3D to make our AR apps, which saved me from ever having to learn how to programmatically render OpenGL models)
For my solution I am first looking at implementing the second answer found here: How to take a screenshot programmatically
Barring that I will have to re-engineer the "Combined Screenshots" method found in CocoaCoder's Screenshots app.
I'll check back in when I figure out which one works better.
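In case it helps anyone landing here, this is roughly what the UIKit-level approach from that linked answer boils down to in Swift (a sketch only; `window` is an assumed reference to your app's key window, and OpenGL or camera-preview layers often come out blank this way, which is exactly why the "Combined Screenshots" method composites them separately):

```swift
import UIKit

// Render the window's layer tree into an offscreen context and return it as an image.
// Pure UIKit content captures fine; GL/camera layers may need separate handling.
func captureScreenshot(of window: UIWindow) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(window.bounds.size, false, UIScreen.main.scale)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    window.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}
```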
Here are three very helpful links for capturing a screenshot (a rough Swift sketch of the first one is included below):
OpenGL ES View Snapshot
How to capture video frames from the camera as images using AV Foundation
How do I take a screenshot of my app that contains both UIKit and Camera elements
Enjoy
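As a rough idea of what the first link (OpenGL ES View Snapshot) involves, here is a hedged Swift sketch of reading the currently bound GL framebuffer back into a UIImage; `width` and `height` are assumed to be your GL layer's backing size, and you would call this right after drawing a frame, before presenting the renderbuffer:

```swift
import OpenGLES
import UIKit

// Read RGBA pixels out of the currently bound framebuffer and wrap them in a UIImage.
func snapshotGLFramebuffer(width: Int, height: Int) -> UIImage? {
    var pixels = [GLubyte](repeating: 0, count: width * height * 4)
    glReadPixels(0, 0, GLsizei(width), GLsizei(height),
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &pixels)

    guard let provider = CGDataProvider(data: Data(pixels) as CFData),
          let cgImage = CGImage(width: width, height: height,
                                bitsPerComponent: 8, bitsPerPixel: 32,
                                bytesPerRow: width * 4,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                                provider: provider, decode: nil,
                                shouldInterpolate: false, intent: .defaultIntent)
    else { return nil }

    // GL's origin is bottom-left while UIKit's is top-left, so flip the image vertically.
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: .downMirrored)
}
```

You would then draw this GL image and a camera frame (captured as in the second link) into one graphics context to produce the combined shot the third link talks about.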
I'm currently working on a project, part of which needs some level of AI. The end user points the iOS camera at a pre-recorded video; the screen is very large, so they can only frame part of the entire video. They can move their iPhone and shoot any part of the video, and the app will automatically recognize what they are aiming at and fire a match event.
To sum up:
Is there a library that can recognize a predefined object in a video source?
I've heard of people getting OpenCV to compile on the iPhone. I am not quite clear on what you will be able to do with it, but I would certainly check it out.
Yes, OpenCV:
http://www.eosgarden.com/en/opensource/opencv-ios/overview/
http://www.youtube.com/watch?v=xzVXyrIRm30&feature=player_detailpage