Here's my scenario:
In an MTKView (and a single command buffer therein), I have a MTLTexture that is the result of a MTLComputeEncoder operation.
My draw method on the MTKView is called from a high-priority global dispatch queue. At the tail end of drawRect in that MTKView, I "scoop" the texture and set it as a property on a different MTKView. I then dispatch a low-priority block that calls the second view's draw method, which copies a region of the original texture and eventually calls presentDrawable -- all on a different command buffer. Meanwhile, the original MTKView ends its draw method with its own call to presentDrawable.
I should mention that both these MTKViews are configured with isPaused=YES and enableSetNeedsDisplay=NO.
This all works reasonably well, but I occasionally get either a "[CAMetalLayerDrawable texture] should not be called after presenting the drawable" warning or a crash in some unidentifiable thread:
QuartzCore`-[CAMetalDrawable present]:
0x7fff881cf471 <+0>: pushq %rbp
0x7fff881cf472 <+1>: movq %rsp, %rbp
0x7fff881cf475 <+4>: movq %rdi, %rax
0x7fff881cf478 <+7>: movq 0x15f25ce1(%rip), %rcx ; CAMetalDrawable._priv
0x7fff881cf47f <+14>: movq (%rax,%rcx), %rcx
0x7fff881cf483 <+18>: movq 0x20(%rcx), %rdi <---- crashes here
0x7fff881cf487 <+22>: xorps %xmm0, %xmm0
0x7fff881cf48a <+25>: movq %rax, %rsi
0x7fff881cf48d <+28>: popq %rbp
0x7fff881cf48e <+29>: jmp 0x7fff881cf493 ; layer_private_present(_CAMetalLayerPrivate*, CAMetalDrawable*, double, unsigned int)
If I remove/comment out the ancillary "scoop" and redraw of the original texture, there is no crash.
Is there a more acceptable way to accomplish this? Possibly using an MTLBlitCommandEncoder to explicitly copy the contents of the supplied texture?
UPDATE #1
A little more code should make this clearer.
This is the drawRect method of the primary MTKView, which is driven by a dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) :
- (void)drawRect:(NSRect)dirtyRect {
    dispatch_semaphore_wait(inflightSemaphore, DISPATCH_TIME_FOREVER);
    [super drawRect:dirtyRect];
    …
    id<MTLTexture> theTexture = …
    [renderedQueue addObject:theTexture];
    // populate secondary views
    for (BoardMonitorMTKView *v in self.downstreamOutputs)
        [v enqueueTexture:(hasValidContent) ? [renderedQueue firstObject] : nil];
    __block dispatch_semaphore_t fSemaphore = inflightSemaphore;
    __weak __block NSMutableArray *rQueue = renderedQueue;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        if ([rQueue count] == 2)
            [rQueue removeObjectAtIndex:0];
        dispatch_semaphore_signal(fSemaphore);
    }];
    [commandBuffer presentDrawable:self.currentDrawable];
    [commandBuffer commit];
}
In the secondary view:
- (void)enqueueTexture:(id<MTLTexture>)inputTexture {
    dispatch_semaphore_wait(inflightSemaphore2, DISPATCH_TIME_FOREVER);
    if (inputTexture) {
        [textureQueue addObject:inputTexture];
        if ([textureQueue count] > kTextureQueueCapacity)
            [textureQueue removeObject:[textureQueue firstObject]];
    }
    dispatch_semaphore_signal(inflightSemaphore2);
    dispatch_queue_t aQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
    dispatch_async(aQueue, ^{
        [self draw];
    });
}
- (void)drawRect:(NSRect)dirtyRect {
    dispatch_semaphore_wait(inflightSemaphore2, DISPATCH_TIME_FOREVER);
    [super drawRect:dirtyRect];
    …
    id<MTLTexture> inputTexture = [textureQueue firstObject];
    MTLRenderPassDescriptor *renderPassDescriptor = self.currentRenderPassDescriptor;
    id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
    MTLViewport viewPort = {0.0, 0.0, (double)self.drawableSize.width, (double)self.drawableSize.height, -1.0, 1.0};
    [encoder setViewport:viewPort];
    [encoder setRenderPipelineState:metalVertexPipelineState];
    // draw main content
    NSUInteger vSize = _vertexInfo.metalVertexCount * sizeof(AAPLVertex);
    id<MTLBuffer> mBuff = [self.device newBufferWithBytes:_vertexInfo.metalVertices
                                                   length:vSize
                                                  options:MTLResourceStorageModeShared];
    [encoder setVertexBuffer:mBuff offset:0 atIndex:0];
    [encoder setVertexBytes:&_viewportSize length:sizeof(_viewportSize) atIndex:1];
    [encoder setFragmentTexture:inputTexture atIndex:0];
    [encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:_vertexInfo.metalVertexCount];
    [encoder endEncoding];
    __block dispatch_semaphore_t fSemaphore = inflightSemaphore2;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        [textureQueue removeObject:[textureQueue firstObject]];
        dispatch_semaphore_signal(fSemaphore);
    }];
    [commandBuffer presentDrawable:self.currentDrawable];
    [commandBuffer commit];
}
I'm not sure you're aware that it's both acceptable and common to create your own texture and render to that; you don't have to render into the texture of a view's or layer's drawable.
Create your own texture(s) to render to and then, for just the present step, render from your texture to a drawable's texture.
Mind you, depending on exactly what you're doing, you may want a pool of three or so textures that you rotate through. The issue you need to be concerned with is whether Metal is still reading from a texture, so you don't write to it before it's done being read.
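For the present step, the question's own hunch about a blit is sound: an MTLBlitCommandEncoder can copy your offscreen texture into the drawable's texture. The sketch below illustrates the idea; the method names, `_texturePool`, and `kPoolSize` are illustrative, not from the question. Note that a blit requires matching pixel formats and region sizes, and the view's framebufferOnly must be NO for the drawable's texture to be a blit destination.

    // Sketch: a small pool of offscreen textures to render into,
    // plus a blit of one of them into the drawable at present time.
    static const NSUInteger kPoolSize = 3; // rotate through ~3 textures

    - (void)buildTexturePoolWithSize:(CGSize)size
    {
        MTLTextureDescriptor *desc =
            [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                               width:(NSUInteger)size.width
                                                              height:(NSUInteger)size.height
                                                           mipmapped:NO];
        desc.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite | MTLTextureUsageRenderTarget;
        _texturePool = [NSMutableArray array];
        for (NSUInteger i = 0; i < kPoolSize; i++)
            [_texturePool addObject:[self.device newTextureWithDescriptor:desc]];
    }

    - (void)presentTexture:(id<MTLTexture>)source
                    inView:(MTKView *)view          // view.framebufferOnly must be NO
                   onQueue:(id<MTLCommandQueue>)queue
    {
        id<CAMetalDrawable> drawable = view.currentDrawable;
        if (!drawable) return;
        id<MTLCommandBuffer> commandBuffer = [queue commandBuffer];
        id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
        [blit copyFromTexture:source
                  sourceSlice:0 sourceLevel:0
                 sourceOrigin:MTLOriginMake(0, 0, 0)
                   sourceSize:MTLSizeMake(source.width, source.height, 1)
                    toTexture:drawable.texture
             destinationSlice:0 destinationLevel:0
            destinationOrigin:MTLOriginMake(0, 0, 0)];
        [blit endEncoding];
        // Present on the same command buffer that wrote to the drawable,
        // and never touch the drawable after this point.
        [commandBuffer presentDrawable:drawable];
        [commandBuffer commit];
    }

The key discipline this enforces: each drawable is acquired as late as possible, written to and presented on one command buffer, and never shared across threads, which is exactly what the original "scoop" pattern violated.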
Here is the problem: I want to show a sequence of diagrams on an iOS device, like a GIF, so I used the UIImageView animation feature and implemented the method below:
// Animation.m
// @files: diagrams' array
// @fileDir: file directory
// @imageview: Just UIImageView
//
- (void)showSeqDiagram:(NSArray *)files
            fileDirect:(NSString *)fileDir
             imageView:(UIImageView *)imageview
{
    int cnt = [files count];
    NSMutableArray *images = [NSMutableArray arrayWithCapacity:cnt];
    for (id file in files)
    {
        NSString *imageName = [NSString stringWithFormat:@"%@/%@", fileDir, file];
        UIImage *image = [UIImage imageNamed:imageName];
        if (!image)
            continue;
        [images addObject:image];
    }
    imageview.animationImages = images;
    imageview.animationDuration = 5;
    imageview.animationRepeatCount = 1;
    [imageview startAnimating];
}
Up to this point, things went well. But I made a mistake in the declaration of the return type in the header file:
//Animation.h
- (UIImageView*)showSeqDiagram:(NSArray*)files
fileDirect:(NSString*)fileDir
imageView:(UIImageView*)imageview;
and I call this method in - (void)loadView
- (void)loadView
{
    ......
    [self showSeqDiagram:files fileDirect:fileDir imageView:imageview];
    ......
}
Then it crashed at the call site, throwing EXC_BAD_ACCESS (code=1, ) on the iPhone 5 simulator and device (iOS 9.3);
it crashed occasionally on the iPhone 6 simulator (iOS 9.3);
and it never crashed on iPhone 6/6 Plus devices or the iPhone 6 Plus simulator, tested on iOS 7-9.
I know the declaration is wrong, but I want to know why this happens. Could anybody give an explanation? Thanks in advance~
A co-worker of mine had the answer. Two pieces of background first:
the calling convention returns an object pointer in a register (%rax on the x86-64 simulator);
and under ARC, objects are not released immediately -- autorelease pool drains happen late in the run loop.
Because showSeqDiagram never actually returns a value, the return register holds whatever garbage happens to be in it. The caller's ARC code, trusting the header's UIImageView * return type, treats the register's contents as a returned object and retains/autoreleases it, so the pool later tries to release that garbage pointer. If other code happens to overwrite the register with a valid pointer (or nil) first, nothing crashes; otherwise, you get the crash.
That partly explains why it crashed randomly. Are there any better answers?
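If that explanation is right, the fix is simply making the header and the implementation agree on the return type. Either of the following (shown as a sketch) removes the garbage-register read:

    // Option 1: declare the method void in Animation.h, matching the implementation:
    - (void)showSeqDiagram:(NSArray *)files
                fileDirect:(NSString *)fileDir
                 imageView:(UIImageView *)imageview;

    // Option 2: keep the UIImageView * return type in the header, and actually
    // return a value at the end of the implementation in Animation.m:
    //     return imageview;

The compiler warns about a declared non-void return with no return statement in plain C functions, but an Objective-C method whose implementation signature differs from its header declaration can slip through, which is why this only surfaced at run time.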
I’ve read through the Concurrency Programming Guide.
The guide states that GCD dispatch queues manage their own autorelease pools, yet still recommends defining an @autoreleasepool per dispatched block. For NSOperation, however, nothing is said, and the example code provided by Apple does not use @autoreleasepool either. The only place @autoreleasepool is even vaguely mentioned in the context of NSOperation is in the Revision History:
2012-07-17 - Removed obsolete information about autorelease pool usage with operations.
Sample code available online, for example http://www.raywenderlich.com/19788/how-to-use-nsoperations-and-nsoperationqueues , does use @autoreleasepool in NSOperation-based implementations, for example:
@implementation ImageDownloader
- (void)main {
    @autoreleasepool {
        ...
    }
}
@end
How should I implement modern NSOperation objects?
What is the update Apple is referring to from 2012-07-17?
If you are deriving from NSOperation and implementing the main method, you do not need to set up an autorelease pool. The default implementation of the start method pushes an NSAutoreleasePool, calls main, and then pops the NSAutoreleasePool. The same goes for NSInvocationOperation and NSBlockOperation, which share the same implementation of the start method.
The following is an abridged disassembly of the start method for NSOperation. Note the calls to NSPushAutoreleasePool, then a call to main followed by a call to NSPopAutoreleasePool:
Foundation`-[newMyObj__NSOperationInternal _start:]:
0x7fff8e5df30f: pushq %rbp
...
0x7fff8e5df49c: callq *-0x16b95bb2(%rip) ; (void *)0x00007fff8d9d30c0: objc_msgSend
0x7fff8e5df4a2: movl $0x1, %edi
; new NSAutoreleasePool is pushed here
0x7fff8e5df4a7: callq 0x7fff8e5df6d6 ; NSPushAutoreleasePool
... NSOperation main is called
0x7fff8e5df6a4: callq *-0x16b95dba(%rip) ; (void *)0x00007fff8d9d30c0: objc_msgSend
0x7fff8e5df6aa: movq %r15, %rdi
; new NSAutoreleasePool is popped here, which releases any objects added in the main method
0x7fff8e5df6ad: callq 0x7fff8e5e1408 ; NSPopAutoreleasePool
I verified this behavior with some example code:
MyObj is allocated in the main method in a way that ensures the object is autoreleased.
When main returns to _start, the stack trace shows MyObj's dealloc being called by the current autorelease pool as it is popped inside _start.
For reference, this is the example code I used to verify the behavior:
#import <Foundation/Foundation.h>

@interface MyObj : NSObject
@end

@implementation MyObj
- (void)dealloc {
    NSLog(@"dealloc");
}
@end

@interface TestOp : NSOperation {
    MyObj *obj;
}
@end

@implementation TestOp
- (MyObj *)setMyObj:(MyObj *)o {
    MyObj *old = obj;
    obj = o;
    return old;
}

- (void)main {
    MyObj *old = [self setMyObj:[MyObj new]];
    [self setMyObj:old];
}
@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        NSLog(@"Hello, World!");
        NSOperationQueue *q = [NSOperationQueue new];
        TestOp *op = [TestOp new];
        [q addOperation:op];
        [op waitUntilFinished];
    }
    return 0;
}
Grand Central Dispatch similarly manages autorelease pools for dispatch queues, per the Concurrency Programming Guide:
If your block creates more than a few Objective-C objects, you might want to enclose parts of your block’s code in an @autoreleasepool block to handle the memory management for those objects. Although GCD dispatch queues have their own autorelease pools, they make no guarantees as to when those pools are drained. If your application is memory constrained, creating your own autorelease pool allows you to free up the memory for autoreleased objects at more regular intervals.
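Following the guide's advice, here is a minimal sketch of an explicit pool inside a dispatched block (the loop body is a stand-in for real per-item work):

    dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
        @autoreleasepool {
            // Temporaries created here are released when this pool drains at the
            // end of the block, rather than at some unspecified later point
            // chosen by GCD's queue-level pool.
            for (NSUInteger i = 0; i < 10000; i++) {
                NSString *row = [NSString stringWithFormat:@"row %lu", (unsigned long)i];
                (void)row; // stand-in for real work on each item
            }
        }
    });

This matters mostly for long-running or allocation-heavy blocks; short blocks are generally fine relying on the queue's own pool.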
I'm trying to set up a CAEAGLLayer subclass with a gl context. That is, instead of creating a UIView subclass which returns a CAEAGLLayer and binding a gl context to this layer from within the UIView subclass, I'm directly subclassing the layer and trying to setup the context in the layer's init, like so:
- (id)init
{
    self = [super init];
    if (self) {
        self.opaque = YES;
        _glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        NSAssert([EAGLContext setCurrentContext:_glContext], @"");
        glGenRenderbuffers(1, &_colorRenderBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
        [_glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:self];
        glGenFramebuffers(1, &_framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorRenderBuffer);
        /// . . .
Up to that point everything seems fine. However, I then try to create a shader program with a "pass-thru" vertex/fragment shader pair; linking the program returns no errors, but validation fails with "Current draw framebuffer is invalid."
The code that links and validates the shader program (after attaching the shaders) looks like so, just in case:
- (BOOL)linkAndValidateProgram
{
    GLint status;
    glLinkProgram(_shaderProgram);
#ifdef DEBUG
    GLint infoLogLength;
    GLchar *infoLog = NULL;
    glGetProgramiv(_shaderProgram, GL_INFO_LOG_LENGTH, &infoLogLength);
    if (infoLogLength > 0) {
        infoLog = (GLchar *)malloc(infoLogLength);
        glGetProgramInfoLog(_shaderProgram, infoLogLength, &infoLogLength, infoLog);
        NSLog(@"Program link log:\n%s", infoLog);
        free(infoLog);
    }
#endif
    glGetProgramiv(_shaderProgram, GL_LINK_STATUS, &status);
    if (!status) {
        return NO;
    }
    glValidateProgram(_shaderProgram);
#ifdef DEBUG
    glGetProgramiv(_shaderProgram, GL_INFO_LOG_LENGTH, &infoLogLength);
    if (infoLogLength > 0) {
        infoLog = (GLchar *)malloc(infoLogLength);
        glGetProgramInfoLog(_shaderProgram, infoLogLength, &infoLogLength, infoLog);
        NSLog(@"Program validation log:\n%s", infoLog);
        free(infoLog);
    }
#endif
    glGetProgramiv(_shaderProgram, GL_VALIDATE_STATUS, &status);
    if (!status) {
        return NO;
    }
    glUseProgram(_shaderProgram);
    return YES;
}
I'm wondering if there is some extra setup at some point in the lifecycle of CAEAGLLayer that I'm unaware of and am skipping by trying to set up GL in init?
The problem was that the layer has no dimensions at that point in init, which in turn means that binding the render buffer storage to the layer allocates a zero-sized buffer.
UPDATE: My current best thinking is that, instead of imposing a size in init (which worked fine for testing but is kind of hacky), I should just reset the buffer storage whenever the layer changes size. So I'm overriding -setBounds: like so:
- (void)setBounds:(CGRect)bounds
{
    [super setBounds:bounds];
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self];
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &someVariableToHoldWidthIfYouNeedIt);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &someVariableToHoldHeightIfYouNeedIt);
}
As far as I know you have to override the layerClass method in the view, like this:
+ (Class)layerClass
{
    return [MYCEAGLLayer class];
}
Also you have to set the drawableProperties on the MYCEAGLLayer.
I have a custom UICollectionViewCell subclass where I draw with clipping, stroking, and transparency. It works pretty well on the Simulator and an iPhone 5, but on older devices there are noticeable performance problems.
So I want to move the time-consuming drawing to a background thread. Since -drawRect is always called on the main thread, I ended up saving the drawn content to a CGImage (the original question contained code using CGLayer, but that is more or less obsolete, as Matt Long pointed out).
Here is my implementation of drawRect method inside this class:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    if (self.renderedSymbol != nil) {
        CGContextDrawImage(ctx, self.bounds, self.renderedSymbol);
    }
}
The rendering method that populates this renderedSymbol property:
- (void)renderCurrentSymbol {
    [self.queue addOperationWithBlock:^{
        // creating a custom context to draw into (contexts are not thread safe)
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(nil, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width * (CGColorSpaceGetNumberOfComponents(space) + 1), space, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);
        // custom drawing goes here using the 'ctx' context
        // then saving the context as a CGImageRef to the property used in drawRect
        self.renderedSymbol = CGBitmapContextCreateImage(ctx);
        // asking the main thread to update the UI
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            [self setNeedsDisplayInRect:self.bounds];
        }];
        CGContextRelease(ctx);
    }];
}
This setup works perfectly on the main thread, but when I wrap it in an NSOperationQueue or GCD, I get lots of different "invalid context 0x0" errors. The app doesn't crash, but the drawing doesn't happen. I suspect a problem with releasing the custom-created CGContextRef, but I don't know what to do about it.
Here are my property declarations. (I tried using atomic versions, but that didn't help.)
@property (nonatomic) CGImageRef renderedSymbol;
@property (nonatomic, strong) NSOperationQueue *queue;
@property (nonatomic, strong) NSString *symbol; // used in custom drawing
Custom setters / getters for properties:
- (NSOperationQueue *)queue {
    if (!_queue) {
        _queue = [[NSOperationQueue alloc] init];
        _queue.name = @"Background Rendering";
    }
    return _queue;
}

- (void)setSymbol:(NSString *)symbol {
    _symbol = symbol;
    self.renderedSymbol = nil;
    [self setNeedsDisplayInRect:self.bounds];
}

- (CGImageRef)renderedSymbol {
    if (_renderedSymbol == nil) {
        [self renderCurrentSymbol];
    }
    return _renderedSymbol;
}
What can I do?
Did you notice that the document on CGLayer you're referencing hasn't been updated since 2006? The assumption that CGLayer is the right solution is incorrect. Apple has all but abandoned this technology, and you probably should too: http://iosptl.com/posts/cglayer-no-longer-recommended/ Use Core Animation.
Issue solved by using an excellent third-party library by Mind Snacks: MSCachedAsyncViewDrawing.
I just "solved" what appears to be a deadlock or synchronization issue with:
[NSThread sleepForTimeInterval:0.1];
in an app that attaches MPMediaItem (music/image) property references from the iPod library to object instances that are back-stored via CoreData. My interest here is to understand exactly what's going on and what the best practice is in this situation. Here goes:
The recipe to replicate this every time is as follows:
User creates a new project.
doc = [[UIManagedDocument alloc] initWithFileURL:docURL];
if (![[NSFileManager defaultManager] fileExistsAtPath:[docURL path]]) {
    [doc saveToURL:docURL forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success) {
        if (success) {
            completionBlock(doc);
        }
        else {
            DLog(@"Failed document creation: %@", doc.localizedName);
        }
    }];
}
Later the managedObjectContext is used to associate the object instances and hydrate the CoreData model
TheProject *theProject = [TheProject projectWithInfo:theProjectInfo
inManagedObjectContext:doc.managedObjectContext];
The user later creates a "CustomAction" object, adds a "ChElement" to it, and associates a "MusicElement" with the ChElement. (These are pseudonyms for the CoreData model objects.) The MusicElement is added via the iPod library.
#define PLAYER [MPMusicPlayerController iPodMusicPlayer]
The user saves this project, then switches to an existing project that already has one CustomAction object created, with a ChElement and a MusicElement.
The user selects that ChElement from a tableView and navigates to a detailView.
When navigating away from the ChElementTVC (a subclass of a CoreData TableViewController class similar to that found in Apple docs), this is required:
- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    self.fetchedResultsController.delegate = nil;
}
In the detail View, the user changes an attribute of the ChElement object and saves the project. The detailView calls its delegate (ChElementTVC) to do the saving. The save is to the UIManagedDocument instance that holds the NSManagedObject.
#define SAVEDOC(__DOC__) [ProjectDocumentHelper saveProjectDocument:__DOC__]
// Delegate
- (void)chAddElementDetailViewController:(ChDetailViewController *)sender didPressSaveButton:(NSString *)message
{
    SAVEDOC(THE_CURRENT_PROJECT_DOCUMENT);
    [self.navigationController popViewControllerAnimated:YES];
}

// Helper Class
+ (void)saveProjectDocument:(UIManagedDocument *)targetDocument
{
    NSManagedObjectContext *moc = targetDocument.managedObjectContext;
    [moc performBlockAndWait:^{
        DLog(@" Process Pending Changes before saving : %@, Context = %@", targetDocument.description, moc);
        [moc processPendingChanges];
        [targetDocument saveToURL:targetDocument.fileURL forSaveOperation:UIDocumentSaveForOverwriting completionHandler:NULL];
    }];
}
Since the delegate (ChElementTVC) popped the detailView off the navigation stack, its viewWillAppear is called and the fetchedResultsController.delegate is restored:
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    if (!self.fetchedResultsController.delegate) {
        DLog(@"Sleep Now %@", self);
        //http://mobiledevelopertips.com/core-services/sleep-pause-or-block-a-thread.html
        [NSThread sleepForTimeInterval:0.1];
        DLog(@"Wake up %@", self);
        [self fetchedResultsControllerWithPredicate:_savedPredicate]; // App hangs here ... this sends messages to CoreData objects.
        [self.tableView reloadData];
    }
}
Without the [NSThread sleepForTimeInterval:0.1]; call, the app hangs. When I send a SIGINT via Xcode, the debugger reveals the following:
(lldb) bt
* thread #1: tid = 0x1c03, 0x30e06054 libsystem_kernel.dylib semaphore_wait_trap + 8, stop reason = signal SIGINT
frame #0: 0x30e06054 libsystem_kernel.dylib semaphore_wait_trap + 8
frame #1: 0x32c614f4 libdispatch.dylib _dispatch_thread_semaphore_wait$VARIANT$mp + 12
frame #2: 0x32c5f6a4 libdispatch.dylib _dispatch_barrier_sync_f_slow + 92
frame #3: 0x32c5f61e libdispatch.dylib dispatch_barrier_sync_f$VARIANT$mp + 22
frame #4: 0x32c5f266 libdispatch.dylib dispatch_sync_f$VARIANT$mp + 18
frame #5: 0x35860564 CoreData _perform + 160
(lldb) frame select 5
frame #5: 0x35860564 CoreData _perform + 160
CoreData _perform + 160:
-> 0x35860564: add sp, #12
0x35860566: pop {r4, r5, r7, pc}
CoreData -[NSManagedObjectContext(_NestedContextSupport) executeRequest:withContext:error:]:
0x35860568: push {r4, r5, r6, r7, lr}
0x3586056a: add r7, sp, #12
(lldb) disassemble -f
CoreData _perform:
0x358604c4: push {r4, r5, r7, lr}
... snipped ...
0x35860560: blx 0x35938bf4 ; symbol stub for: dispatch_sync_f
-> 0x35860564: add sp, #12
0x35860566: pop {r4, r5, r7, pc}
Another work-around is possible: restoring the fetchedResultsController.delegate in -[ChElementTVC viewDidAppear:] instead also effectively delays the assignment on the main queue.
An additional work-around is to execute the nav pop in a completion block after the project save has finished:
#define SAVEDOCWITHCOMPLETION(__DOC__,__COMPLETION_BLOCK__) [ProjectDocumentHelper saveProjectDocument:__DOC__ completionHandler:__COMPLETION_BLOCK__]

void (^completionBlock)(BOOL) = ^(BOOL success) {
    [self.navigationController popViewControllerAnimated:YES];
};
SAVEDOCWITHCOMPLETION(THE_CURRENT_PROJECT_DOCUMENT, completionBlock);
I think the save operation runs in the background concurrently with the delegate restoration on the main queue, but I do not know how to examine, prove, or disprove that theory.
So, with that, can someone explain what's going on and what is the best practice in this situation? Also, references for study are appreciated.
I ended up implementing the third method: saving the document with a completion block, which serializes the transactions interacting with the CoreData store.