iOS AIR Native Extension With OpenGLES Causes App To Freeze

I created an AIR Native Extension for iOS that opens a GLKViewController with a GLKView inside to render some 3D content; this all works fine. When the view controller is dismissed, however, the AIR app stops rendering. Interaction still works (so the app is not frozen), but rendering is stuck on the last frame before the native extension's view controller opened.
This is code from my view controller's viewDidLoad method (this is all the subclass does):
GLKView* view = (GLKView*)self.view;
if( view.context == nil )
{
    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!context)
    {
        NSLog(@"Failed to create ES context");
        return;
    }
    view.context = context; // Removing this fixes Flash!
}
When I comment out the view.context = context line, Flash continues rendering fine (but obviously I no longer have a context and can't render).
I assume Flash is losing its EAGLContext when the GLKView sets the current context and is not resetting it. Is there a way I can fix or avoid this?
I have tried saving the current EAGLContext before opening the view controller and resetting it when the view controller is closed, but that did not work.

To fix this I had to save Flash's EAGLContext before setting my own, then restore Flash's context after I had finished drawing/setting up. This let Flash continue to draw without knowing that my view controller was also drawing. I ended up doing this using the old EAGLView and a custom view controller, as it wasn't clear where the GLKView was setting the context.
Of course, it would have been better for Flash to set the context itself before it tried to draw a frame, as Apple suggests!
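A minimal sketch of that save/restore pattern, assuming a custom draw method on the extension's view (the extensionContext property name is illustrative, not from the original post):

- (void)drawFrame
{
    // Remember whichever context AIR/Flash currently has bound.
    EAGLContext *previousContext = [EAGLContext currentContext];

    [EAGLContext setCurrentContext:self.extensionContext];
    // ... issue the extension's GL calls for this frame ...

    // Hand the context back so Flash keeps rendering its own frames.
    [EAGLContext setCurrentContext:previousContext];
}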

Related

Is Apple using black magic to accomplish camera's preview orientation?

I have this AVFoundation camera app of mine. The camera preview is the result of a filter, applied in the didOutputSampleBuffer method.
When I set up the camera I follow what Apple did in one of their sample projects (CIFunHouse):
// setting up the camera
CGRect bounds = [self.containerOpenGL bounds];
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
_videoPreviewView = [[GLKView alloc] initWithFrame:bounds
                                           context:_eaglContext];
[self.containerOpenGL addSubview:_videoPreviewView];
[self.containerOpenGL sendSubviewToBack:_videoPreviewView];

id<MTLDevice> device = MTLCreateSystemDefaultDevice();
NSDictionary *options = @{kCIContextUseSoftwareRenderer : @(NO),
                          kCIContextPriorityRequestLow : @(YES),
                          kCIContextWorkingColorSpace : [NSNull null]};
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];

[_videoPreviewView bindDrawable];
_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = _videoPreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = _videoPreviewView.drawableHeight;

dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI);
    _videoPreviewView.transform = transform;
    _videoPreviewView.frame = bounds;
});
self.containerOpenGL is a full screen view and is constrained to the four corners of the screen. Autorotation is on.
But this is the problem...
When I set up the GLKView and self.ciContext, they are created assuming the device is in a particular orientation. If the device is in that orientation when I run the application, the previewView fits the entire self.containerOpenGL area, but when I rotate the device the previewView ends up off-center.
I see that Apple's code works perfectly and they don't use any constraints. They don't use any autorotation method, no didLayoutSubviews, and when you rotate the device running their code, everything rotates except the preview view. Worse than that, my previewView appears to rotate but theirs does not.
Is this black magic? How do they do that?
They add their preview view to a UIWindow, and that is why it does not rotate. I hope this answers the question; if not, I will continue to look through their source code.
Quote from the source code:
we make our video preview view a subview of the window, and send it to the back; this makes FHViewController's view (and its UI elements) on top of the video preview, and also makes video preview unaffected by device rotation
They also add this
_videoPreviewView.enableSetNeedsDisplay = NO;
This may keep it from responding as well.
Edit: It appears that now the preview rotates and the UI does as well. To combat this, you can add a second window, send it to the back, make the main window clear, and add the previewView to the second window with a dummy view controller that disables autorotation by overriding the appropriate method. This lets the preview stay fixed while the UI rotates.
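A hedged sketch of that second-window setup; DummyViewController and previewWindow are illustrative names, and in a real app the window would be retained in a property:

@interface DummyViewController : UIViewController
@end

@implementation DummyViewController
- (BOOL)shouldAutorotate
{
    // Keep the preview window's contents fixed while the main UI rotates.
    return NO;
}
@end

// During setup (self.previewWindow is an assumed strong property):
self.previewWindow = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
self.previewWindow.rootViewController = [[DummyViewController alloc] init];
[self.previewWindow.rootViewController.view addSubview:_videoPreviewView];
self.previewWindow.windowLevel = UIWindowLevelNormal - 1; // behind the main window
self.previewWindow.hidden = NO;

// Make the main window clear so the preview shows through from behind.
self.view.window.backgroundColor = [UIColor clearColor];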

Is it safe to call [EAGLContext presentRenderBuffer:] on a secondary thread?

I have (multiple) UIViews with layers of type CAEAGLLayer, and am able to call [EAGLContext presentRenderbuffer:] on renderbuffers attached to these layers, on a secondary thread, without any kind of graphical glitches.
I would have expected to see at least some tearing, since other UI with which these UIViews are composited is updated on the main thread.
Does CAEAGLLayer (I have kEAGLDrawablePropertyRetainedBacking set to NO) do some double-buffering behind the scenes?
I just want to understand why it is that this works...
Example:
BView is a UIView subclass that owns a framebuffer with renderbuffer storage assigned to its OpenGLES layer, in a shared EAGLContext:
@implementation BView

-(id) initWithFrame:(CGRect)frame context:(EAGLContext*)context
{
    self = [super initWithFrame:frame];

    // Configure layer
    CAEAGLLayer* eaglLayer = (CAEAGLLayer*)self.layer;
    eaglLayer.opaque = YES;
    eaglLayer.drawableProperties = @{ kEAGLDrawablePropertyRetainedBacking : [NSNumber numberWithBool:NO],
                                      kEAGLDrawablePropertyColorFormat : kEAGLColorFormatSRGBA8 };

    // Create framebuffer with renderbuffer attached to layer
    [EAGLContext setCurrentContext:context];
    glGenFramebuffers( 1, &FrameBuffer );
    glBindFramebuffer( GL_FRAMEBUFFER, FrameBuffer );
    glGenRenderbuffers( 1, &RenderBuffer );
    glBindRenderbuffer( GL_RENDERBUFFER, RenderBuffer );
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(id<EAGLDrawable>)self.layer];
    glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, RenderBuffer );

    return self;
}

+(Class) layerClass
{
    return [CAEAGLLayer class];
}
A UIViewController adds a BView instance on the main thread at init time:
BView* view = [[BView alloc] initWithFrame:(CGRect){ 0.0, 0.0, 75.0, 75.0 } context:Context];
[self.view addSubview:view];
On a secondary thread, render to the framebuffer in the BView and present it; in this case it's in a callback from a video AVCaptureDevice, called regularly:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
    [EAGLContext setCurrentContext:bPipe->Context.GlContext];

    // Render into framebuffer ...

    // Present renderbuffer
    glBindRenderbuffer( GL_RENDERBUFFER, BViewsRenderBuffer );
    [Context presentRenderbuffer:GL_RENDERBUFFER];
}
It used to not work: there were several issues with updating the view if the buffer was presented on any thread but the main thread. It seems this has been working for some time now, but you implement it at your own risk; later versions may develop issues with it, as some older ones probably still have (not that you need to support very old OS versions anyway).
Apple has always been a bit closed about how things work internally, but we can guess quite a few things. Since iOS seems to be the only platform that exposes your main buffer as an FBO (framebuffer object), I would expect the actual main framebuffer to be inaccessible for development, with your main FBO redrawn to it when you present the renderbuffer. The last time I checked, the method that presents the renderbuffer blocks the calling thread and seems to be limited to the screen refresh rate (60 FPS in most cases), which implies there is still some locking mechanism. Additional testing would be needed, but I would expect a pool of buffers waiting to be redrawn to the main buffer, where only one unique buffer ID can be in the pool at a time or the calling thread is blocked. The first call to present the renderbuffer would then never block, but each subsequent call would block if the previous buffer has not yet been redrawn.
If this is true then yes, some double buffering is mandatory at some point, since you may immediately continue drawing to your buffer. Since the renderbuffer keeps the same ID across frames it is probably not swapped (as far as I know), but it could be redrawn/copied to another buffer (most likely a texture), which can be done on the fly at any given time. In that procedure, when you first present the buffer it is copied to the texture, which is then locked; when the screen refreshes, the texture is collected and unlocked. So if the texture is locked, your presentation call blocks the thread; otherwise it continues smoothly. It is hard to call that double buffering: there are two buffers, but they still work through a locking mechanism.
I hope this helps you understand why it works. It is much the same procedure you would use when loading large data structures on a separate shared context running on a separate thread.
Still, most of this is unfortunately just guessing.
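For reference, a minimal sketch of the shared-context pattern mentioned above (mainContext and uploadQueue are assumed to exist; the names are illustrative). A second EAGLContext created from the same sharegroup can do GL work on a background thread while the main context keeps rendering:

EAGLContext *workerContext = [[EAGLContext alloc] initWithAPI:mainContext.API
                                                   sharegroup:mainContext.sharegroup];
dispatch_async(uploadQueue, ^{
    [EAGLContext setCurrentContext:workerContext];
    // Upload textures/buffers here; the resources are visible to mainContext
    // because both contexts share one EAGLSharegroup.
    glFlush(); // ensure the uploads are complete before the main context uses them
});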

Cropping/zooming not working while setting iOS Wallpaper using PhotoLibrary private framework

I have managed (with the help of this post) to open a PLStaticWallpaperImageViewController from the PhotoLibrary private framework, which allows setting the wallpaper and lock screen directly (using the same UI as the Photos app). Unfortunately, the image cropping/zooming features don't seem to work, as touches to the image view itself don't seem to be coming through. (The main view is also not dismissed properly after the cancel/set buttons are tapped, but this isn't so important.)
I have an Xcode project demonstrating the wallpaper setting (can be run in simulator as well as a non-jailbroken device):
https://github.com/newenglander/WallpaperTest/
The code is quite basic, and involves a ViewController inheriting from PLStaticWallpaperImageViewController and implementing an init method similar to the following:
- (id)initWithCoder:(NSCoder *)aDecoder {
    self = [self initWithUIImage:[UIImage imageWithContentsOfFile:@"/System/Library/WidgetResources/ibutton/white_i@2x.png"]];
    self.allowsEditing = YES;
    self.saveWallpaperData = YES;
    return self;
}
(It will be necessary to allow access to the photo library after the first launch, and for some reason the popup for this comes up behind the app, rather than on top.)
Perhaps someone has insight as to why the cropping/zooming isn't working, or can give me an alternative way to set the wallpaper in an app (destined for Cydia rather than the App Store of course)?
Use this sample project; it works very well.
It has camera control and a custom layout, and it crops the image when taken or after choosing from your library. I used it for my project and it is very simple to customize.
https://github.com/yuvirajsinh/YCameraView
// ---------- Improved answer ----------//
I took a look at your project and I see two problems.
Here you have three semantic-issue warnings:
- (id)initWithUIImage:(id)arg1 cropRect:(struct CGRect { struct CGPoint { float x_1_1_1; float x_1_1_2; } x1; struct CGSize { float x_2_1_1; float x_2_1_2; } x2; })arg2;
And in your ViewController.m, where is this image supposed to come from?
- (id)initWithCoder:(NSCoder *)aDecoder
{
    // black_i
    // what directory is this?
    self = [self initWithUIImage:[UIImage imageWithContentsOfFile:@"/System/Library/WidgetResources/ibutton/white_i@2x.png"]];
    //--------------------
    self.allowsEditing = YES;
    self.saveWallpaperData = YES;
    return self;
}
I tried removing your
- (id)initWithUIImage:(id)arg1 cropRect:(struct CGRect { struct CGPoint { float x_1_1_1; float x_1_1_2; } x1; struct CGSize { float x_2_1_1; float x_2_1_2; } x2; })arg2;
and changing the image path to:
self = [self initWithUIImage:[UIImage imageNamed:@"myImage.png"]];
and everything works well, but the image can't be cropped. With my GitHub project YCameraView you first have to understand how its CROPPING function works if you want to use crop. More simply, you can create a full-screen UIImagePickerController that lets the user pick from the camera or the library, allow editing in the picker, and then load the new picture into your view like this:
self = [self initWithUIImage:imageSelected.image];
As for dismissing the view: you can't, because this is a full app that lets the user set up the background wallpaper, and you can't terminate the app to show the SpringBoard. You have to create a first view > picker > detail view with settings for Home and Lock Screen > then dismiss and come back to the first view.
PS: I think that to enable editing directly in a view in your project, you have to improve your code with pinch and pan gestures on the UIView; see the sketch below.
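A rough sketch of that pinch/pan idea, assuming a view controller with an imageView property (all names here are illustrative, not from the project):

- (void)viewDidLoad
{
    [super viewDidLoad];

    self.imageView.userInteractionEnabled = YES; // UIImageView defaults to NO

    UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self
                                                                                action:@selector(handlePinch:)];
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                                          action:@selector(handlePan:)];
    [self.imageView addGestureRecognizer:pinch];
    [self.imageView addGestureRecognizer:pan];
}

- (void)handlePinch:(UIPinchGestureRecognizer *)gesture
{
    // Accumulate the pinch scale into the view's transform, then reset it.
    gesture.view.transform = CGAffineTransformScale(gesture.view.transform,
                                                    gesture.scale, gesture.scale);
    gesture.scale = 1.0;
}

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    // Move the view by the pan translation, then reset the translation.
    CGPoint t = [gesture translationInView:gesture.view.superview];
    gesture.view.center = CGPointMake(gesture.view.center.x + t.x,
                                      gesture.view.center.y + t.y);
    [gesture setTranslation:CGPointZero inView:gesture.view.superview];
}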
Hope this helps!

iOS AirPlay Second Screen Tutorial

I am looking at adding AirPlay capabilities to one of my view controllers. The view controller just shows a UIWebView. What I want to do is add a button that will mirror this content to an Apple TV. I know system-wide mirroring can be done, but it doesn't fill the entire screen; there are black bars all around. I have been searching online, but almost everything I have found is from back in the iOS 5 days and out of date. Could someone point me in the direction of a tutorial or drop-in library that would help? I just need to mirror the content of one view, full-screen, on the Apple TV.
So far, here is what I have done, but I believe it only creates the second Window, without putting anything on it.
In the AppDelegate, I create a property for it:
@property (nonatomic, retain) UIWindow *secondWindow;
In the AppDelegate's didFinishLaunching method I run:
NSNotificationCenter *center = [NSNotificationCenter defaultCenter];
[center addObserver:self selector:@selector(handleScreenDidConnectNotification:)
               name:UIScreenDidConnectNotification object:nil];
[center addObserver:self selector:@selector(handleScreenDidDisconnectNotification:)
               name:UIScreenDidDisconnectNotification object:nil];
Then in AppDelegate I have:
- (void)handleScreenDidConnectNotification:(NSNotification*)aNotification
{
    UIScreen *newScreen = [aNotification object];
    CGRect screenBounds = newScreen.bounds;

    if (!self.secondWindow)
    {
        self.secondWindow = [[UIWindow alloc] initWithFrame:screenBounds];
        self.secondWindow.screen = newScreen;

        // Set the initial UI for the window.
    }
}

- (void)handleScreenDidDisconnectNotification:(NSNotification*)aNotification
{
    if (self.secondWindow)
    {
        // Hide and then delete the window.
        self.secondWindow.hidden = YES;
        self.secondWindow = nil;
    }
}
In the viewController in which I would like to allow to mirror the WebView on Apple TV, I have:
- (void)checkForExistingScreenAndInitializeIfPresent
{
    if ([[UIScreen screens] count] > 1)
    {
        // Get the screen object that represents the external display.
        UIScreen *secondScreen = [[UIScreen screens] objectAtIndex:1];
        // Get the screen's bounds so that you can create a window of the correct size.
        CGRect screenBounds = secondScreen.bounds;

        appDelegate.secondWindow = [[UIWindow alloc] initWithFrame:screenBounds];
        appDelegate.secondWindow.screen = secondScreen;

        // Set up initial content to display...
        // Show the window.
        appDelegate.secondWindow.hidden = NO;
        NSLog(@"YES");
    }
}
I got all this from here. However, that's all that it shows, so I'm not sure how to get the content onto that screen.
Depending on what's going on in your web view, you'll either have to make a second one pointed at the same page or move your existing one to the new window. Either way, you treat the second window pretty much the same as you do your app's main window: add views to it and they should show up on the second display.
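A minimal sketch of that step, building on the setup code from the question (the view controller and URL here are placeholders, not from the original):

UIWebView *externalWebView = [[UIWebView alloc] initWithFrame:appDelegate.secondWindow.bounds];
[externalWebView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com"]]];

// Giving the window a root view controller keeps the view hierarchy managed.
UIViewController *externalVC = [[UIViewController alloc] init];
externalVC.view = externalWebView;
appDelegate.secondWindow.rootViewController = externalVC;
appDelegate.secondWindow.hidden = NO;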
I assume you've seen it, but this is the only sample project I could find: https://github.com/quellish/AirplayDemo/
Here are some related questions that might be worth reading:
does anyone know how to get started with airplay?
Airplay Mirroring + External UIScreen = fullscreen UIWebView video playback?
iOS AirPlay: my app is only notified of an external display when mirroring is ON?
Good luck!
There are only two options for AirPlay 'mirroring' at the moment: system-wide mirroring and completely custom mirroring. Since system-wide mirroring is not a solution for you, you'll have to go down the road you already identified in your code fragments.
As Noah pointed out, this means providing the content for the second screen the same way as for the internal display. As I understand you, you want to show the same data/website as on the internal display, but display it differently in the remote view/webview (e.g. a different aspect ratio). One way is to have one webview follow the other in a master/slave setup: you'd have to monitor changes (like user scrolling) in the master and propagate them to the slave. A second way could be rendering the original webview's contents to a buffer and drawing that buffer, in part, in a 'dumb' UIView. This would be a bit faster, as the website would not have to be loaded and rendered twice.
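A hedged sketch of the master/slave idea, assuming both webviews show the same page (masterWebView/slaveWebView are illustrative names; note that replacing a UIWebView's scroll view delegate is technically unsupported by the documentation, though it works in practice):

// Somewhere during setup:
self.masterWebView.scrollView.delegate = self;

// UIScrollViewDelegate: mirror the master's scroll offset to the slave.
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    if (scrollView == self.masterWebView.scrollView) {
        self.slaveWebView.scrollView.contentOffset = scrollView.contentOffset;
    }
}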

Table View in Popover Won't Scroll When Over AGSMapView

I'm using the ArcGIS Runtime SDK for iOS, and I have a subclass of UITableViewController that I'm presenting in a popover; the base view of the main view controller being displayed is an AGSMapView with an AGSGraphicsLayer (AGSSimpleRenderer attached). When the popover is presented, cells can't be selected and scrolling is extremely limited. When I change the base view controller's view from an AGSMapView to a blank UIView, the table view in the popover works perfectly.
My viewDidLoad is as follows:
[super viewDidLoad];

self.mapView.layerDelegate = self;
self.mapView.calloutDelegate = self;
self.mapView.callout.delegate = self;

// Create an instance of a tiled map service layer.
AGSTiledMapServiceLayer *tiledLayer = [[AGSTiledMapServiceLayer alloc] initWithURL:[NSURL URLWithString:@"http://server.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer"]];
// Add it to the map view.
[self.mapView addMapLayer:tiledLayer withName:@"Tiled Layer"];

AGSSimpleMarkerSymbol *simpleMarkerSymbol = [AGSSimpleMarkerSymbol simpleMarkerSymbol];
self.graphicsLayer.renderer = [AGSSimpleRenderer simpleRendererWithSymbol:simpleMarkerSymbol];
// Add graphics layer to the map view.
[self.mapView addMapLayer:self.graphicsLayer withName:@"Graphics Layer"];
Could this be a bug in the AGSMapView implementation, and if so, how do I go about reporting it?
Thanks in advance.
Related: Sister Question in the Esri Forums
EDIT
I determined that the issue is fairly difficult to reproduce, so I added mapViewDidLoad: to show the current location. This makes the issue occur without fail. My code is as follows:
- (void)mapViewDidLoad:(AGSMapView *)mapView {
    [self.mapView.locationDisplay startDataSource];
    self.mapView.locationDisplay.autoPanMode = AGSLocationDisplayAutoPanModeDefault;
}
A simple workaround is to make the main view of your view controller a UIView and then add both the AGSMapView and your popover as subviews of the base UIView.
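A quick sketch of that workaround, assuming the map view is created in code (the frame and property names are illustrative):

- (void)loadView
{
    // Plain UIView as the controller's root view, with the map view as a subview.
    self.view = [[UIView alloc] initWithFrame:[UIScreen mainScreen].bounds];

    self.mapView = [[AGSMapView alloc] initWithFrame:self.view.bounds];
    self.mapView.autoresizingMask = UIViewAutoresizingFlexibleWidth |
                                    UIViewAutoresizingFlexibleHeight;
    [self.view addSubview:self.mapView];
}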
This was revealed to be a bug in the ArcGIS Runtime SDK for iOS. Esri Support contacted me and told me that it will be fixed in the next release.
I had this problem, but I solved it by adding the graphics layer after the tiled layer loaded, not in viewDidLoad:
-(void)layerDidLoad:(AGSLayer *)layer
{
    NSLog(@"layer %@ loaded", layer.name);

    if(!self.graphicLayer)
    {
        self.graphicLayer = [AGSGraphicsLayer graphicsLayer];
        [self.mapView addMapLayer:self.graphicLayer withName:@"Graphic Layer"];
    }
}
