iOS Metal: limit on the number of variables used in a shader

I started receiving the following error today after adding some complexity to my shader:
Execution of the command buffer was aborted due to an error during execution. Discarded (victim of GPU error/recovery) (IOAF code 5)
What I discovered is that it has nothing to do with the actual added code, but with the fact that I added more variables and function calls. When I removed other complexity from the shader, the error went away.
Another thing I discovered is that the problem also goes away when I set fast math to false.
My first guess is that there is some kind of limit on the number of variables when fast math is on. Is there such a limit? Any other ideas why such an error might occur?
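For context, fast math is a compile-time option. If the library is built from source at runtime, it can be toggled roughly like this (a sketch; device and shaderSource are assumed to exist, and the names are illustrative):

// Sketch: compiling a Metal library with fast math disabled,
// the setting that made the error disappear.
MTLCompileOptions *options = [MTLCompileOptions new];
options.fastMathEnabled = NO;

NSError *error = nil;
id<MTLLibrary> library = [device newLibraryWithSource:shaderSource
                                              options:options
                                                error:&error];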

The most probable cause is that the Metal buffers or shaders are overloaded.
There are limitations when using Metal, and they are described here:
https://developer.apple.com/library/content/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/BestPracticesforShaders/BestPracticesforShaders.html
I was having this problem before, and I was about to switch from Metal to OpenGL, but the advantages of Metal made me try again. I then discovered that my issue was that I was calculating and sorting an array of heavy data (mostly floats and doubles) within the renderer function.
My mistake was here; see the commented line:
- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time
{
    [self calculateMyData]; // This is wrong
}

- (void)calculateMyData
{
    // the heavy calculations
}
To avoid most IOAF errors, try not to do heavy or complex calculations, such as sorting data, within the renderer; use an external loop to trigger those calculations instead. This is what I have done:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Use a timer that fires every 0.3 s to run the heavy calculations and sort the data
    NSTimer *timerCounter2 = [NSTimer scheduledTimerWithTimeInterval:0.3
                                                              target:self
                                                            selector:@selector(calculateMyData)
                                                            userInfo:nil
                                                             repeats:YES];
}

- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time
{
    //[self calculateMyData]; // Now I avoid the mistake
}

- (void)calculateMyData
{
    // the heavy calculations
}
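A variation on the same idea, in case a timer is too coarse: push the heavy work onto a background queue so the render thread is never blocked. This is a sketch, assuming calculateMyData is safe to run off the main thread (the queue label is arbitrary):

// Sketch: run the heavy calculations on a background serial queue
// instead of (or in addition to) a timer.
dispatch_queue_t calcQueue = dispatch_queue_create("calc.queue", DISPATCH_QUEUE_SERIAL);
dispatch_async(calcQueue, ^{
    [self calculateMyData]; // heavy work happens off the render thread
});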

Related

Reduce memory usage of AVAssetWriter

As the title says, I am having some trouble with AVAssetWriter and memory.
Some notes about my environment/requirements:
I am NOT using ARC, but if there is a way to simply use it and get it all working, I'm all for it. My attempts have not made any difference, though. And the environment I will be using this in requires memory to be minimised/released ASAP.
Objective-C is a requirement.
Memory usage must be as low as possible; the 300 MB it takes up now is unstable when testing on my device (iPhone X).
The code
This is the code used when taking the screenshots below: https://gist.github.com/jontelang/8f01b895321d761cbb8cda9d7a5be3bd
The problem / items kept around in memory
Most of the things that take up a lot of memory throughout the processing seem to be allocated at the beginning.
So at this point, it doesn't seem to me that the issue is with my code. The code I personally control, namely loading the images, creating the buffer, and releasing it, does not appear to be where the memory problem lies. For example, if I mark the majority of the time after that point in Instruments, the memory is stable and nothing is kept around.
The only reason for the persistent 5 MB is that it is deallocated just after the marking period ends.
Now what?
I actually started writing this question with the focus on whether my code was releasing things correctly or not, but now it seems like that is fine. So what are my options now?
Is there something I can configure within the current code to make the memory requirements smaller?
Is there simply something wrong with my setup of the writer/input?
Do I need to use a totally different way of making a video to be able to make this work?
A note on using CVPixelBufferPool
In the documentation of CVPixelBufferCreate Apple states:
If you need to create and release a number of pixel buffers, you should instead use a pixel buffer pool (see CVPixelBufferPool) for efficient reuse of pixel buffer memory.
I have tried this as well, but I saw no change in memory usage. Changing the attributes of the pool didn't seem to have any effect either, so there is a small possibility that I am not actually using it 100% properly, although from comparing with code online it seems like I am, at least. And the output file works.
The code for that is here: https://gist.github.com/jontelang/41a702d831afd9f9ceeb0f9f5365de03
And here is a slightly different version where I set up the pool in a slightly different way: https://gist.github.com/jontelang/c0351337bd496a6c7e0c94293adf881f
Update 1
So I looked a bit deeper into a trace to figure out when and where the majority of the allocations come from.
The takeaways are:
The space is not allocated "with" the AVAssetWriter.
The 500 MB that is held until the end is allocated within 500 ms after the processing starts.
It seems to be done internally in AVAssetWriter.
I have the .trace file uploaded here: https://www.dropbox.com/sh/f3tf0gw8gamu924/AAACrAbleYzbyeoCbC9FQLR6a?dl=0
When creating your dispatch queue, ensure you create a queue with an autorelease pool: replace DISPATCH_QUEUE_SERIAL with DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL.
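For example, a sketch of the queue creation (the label string is arbitrary; recordingQueue matches the code below):

// Serial queue that drains an autorelease pool after each block it runs.
dispatch_queue_t recordingQueue =
    dispatch_queue_create("recordingQueue", DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL);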
Wrap each iteration of the for loop in an autorelease pool as well, like this:
[assetWriterInput requestMediaDataWhenReadyOnQueue:recordingQueue usingBlock:^{
    for (int i = 1; i < 200; ++i) {
        @autoreleasepool {
            while (![assetWriterInput isReadyForMoreMediaData]) {
                [NSThread sleepForTimeInterval:0.01];
            }
            NSString *path = [NSString stringWithFormat:@"/Users/jontelang/Desktop/SnapperVideoDump/frames/frame_%i.jpg", i];
            UIImage *image = [UIImage imageWithContentsOfFile:path];
            CGImageRef ref = [image CGImage];
            CVPixelBufferRef buffer = [self pixelBufferFromCGImage:ref pool:writerAdaptor.pixelBufferPool];
            CMTime presentTime = CMTimeAdd(CMTimeMake(i, 60), CMTimeMake(1, 60));
            [writerAdaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
            CVPixelBufferRelease(buffer);
        }
    }
    [assetWriterInput markAsFinished];
    [assetWriter finishWritingWithCompletionHandler:^{}];
}];
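The pixelBufferFromCGImage:pool: helper used above is not shown in the answer; a minimal sketch of it might look like this (assuming the adaptor's pool was configured for 32-bit BGRA and for dimensions matching the source images):

// Sketch: draw a CGImage into a pixel buffer obtained from the pool.
// The caller is responsible for CVPixelBufferRelease on the result.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image pool:(CVPixelBufferPoolRef)pool
{
    CVPixelBufferRef buffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer);
    if (buffer == NULL) return NULL;

    CVPixelBufferLockBaseAddress(buffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                                 CVPixelBufferGetWidth(buffer),
                                                 CVPixelBufferGetHeight(buffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(buffer),
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
    CGContextDrawImage(context,
                       CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)),
                       image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return buffer;
}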
No, I see it peaking at around 240 MB in the app. It's my first time using this allocation tool; interesting.
I'm using AVAssetWriter to write a video file by streaming CMSampleBuffer objects, received in real time from the camera's capture output via AVCaptureVideoDataOutputSampleBufferDelegate.
While I have not yet found the actual issue, the memory problem I described in this question was solved by simply doing it on the actual device instead of the simulator.
@Eugene_Dudnyk's answer is spot on; the autorelease pool INSIDE the for or while loop is the key. Here is how I got it working in Swift. Also, please use AVAssetWriterInputPixelBufferAdaptor for the pixel buffer pool:
videoInput.requestMediaDataWhenReady(on: videoInputQueue) { [weak self] in
    while videoInput.isReadyForMoreMediaData {
        autoreleasepool {
            guard let sample = assetReaderVideoOutput.copyNextSampleBuffer(),
                  let buffer = CMSampleBufferGetImageBuffer(sample) else {
                print("Error while processing video frames")
                videoInput.markAsFinished()
                DispatchQueue.main.async {
                    videoFinished = true
                    closeWriter()
                }
                return
            }
            // Process the image and render it back into the buffer (an in-place
            // operation, where ciProcessedImage is your processed new image)
            self?.getCIContext().render(ciProcessedImage, to: buffer)
            let timeStamp = CMSampleBufferGetPresentationTimeStamp(sample)
            self?.adapter?.append(buffer, withPresentationTime: timeStamp)
        }
    }
}
My memory usage stopped rising.

Xcode performance test always results in huge time in first attempt

I'm implementing a performance test for an iOS app, and it produces a strange result that I don't understand.
An Xcode performance test runs the given code 10 times, and a test run on my machine always results in the first attempt taking a huge amount of time compared to the other 9 attempts.
The job whose performance I want to test is very simple: it iterates through a given array and checks certain conditions. There is no network connection, array manipulation, or reuse of previous variables whatsoever.
The test code I wrote is like the following (all variants result in the same behavior).
a. Initialize variables outside the measure block
- (void)testSomething {
    // do some initialization
    [self measureBlock:^{
        // run code to test performance
    }];
}
b. Initialize variables inside the measure block
- (void)testSomething {
    [self measureBlock:^{
        // do some initialization
        // run code to test performance
    }];
}
c. Initialize variables in -setUp, and use them in test method
- (void)setUp {
    // do some initialization here, and capture the pointers as properties
}

- (void)testSomething {
    [self measureBlock:^{
        // run code to test performance, using properties initialized in setUp
    }];
}

- (void)tearDown {
    // clean up properties
}
The results are very strange, and the other 9 attempts do not seem reliable because of the first attempt's figure. Why does the first iteration take so much longer?

Does metal have a back buffer?

I'm currently tracking down some visual popping in my Metal app, and I believe it is because I'm drawing directly to the framebuffer rather than to a back buffer.
// This is where I've finished passing commands to the render encoder and issue the draw.
// I believe this sends the images directly to the framebuffer instead of using a back buffer.
[renderEncoder endEncoding];
[mtlCommandBuffer presentDrawable:frameDrawable];
[mtlCommandBuffer commit];
[mtlCommandBuffer release];
//[frameDrawable present]; // This line isn't needed (and I believe is performed by presentDrawable:)
Several Google searches later, I haven't found any documentation of back buffers in Metal. I know I could roll my own, but I can't believe Metal doesn't support a back buffer.
Here is a code snippet showing how I've set up my CAMetalLayer object.
+ (Class)layerClass
{
    return [CAMetalLayer class];
}

- (void)initCommon
{
    self.opaque = YES;
    self.backgroundColor = nil;
    ...
}

- (id<CAMetalDrawable>)getMetalLayer
{
    id<CAMetalDrawable> frameDrawable = nil;
    while (!frameDrawable || !frameDrawable.texture)
    {
        frameDrawable = [self->_metalLayer nextDrawable];
    }
    return frameDrawable;
}
Can I enable a back buffer on my CAMetalLayer object, or will I need to roll my own?
I assume that by back buffer you mean a renderbuffer that is being rendered to while the corresponding front buffer is being displayed?
In Metal, this concept is provided by the drawables that you retrieve from CAMetalLayer. The CAMetalLayer instance maintains a small pool of drawables (generally 3), hands one of them out each time you invoke nextDrawable, and returns it to the pool after you've invoked presentDrawable: and rendering is complete (which may be some time later, since the GPU runs asynchronously from the CPU).
Effectively, on each frame loop, you grab a back-buffer by invoking nextDrawable, and make it eligible to become the front-buffer by invoking presentDrawable: and committing the MTLCommandBuffer.
Since there are only 3 drawables in the pool, the catch is that you have to manage this lifecycle yourself, by adding appropriate CPU resource synchronization at the time you invoke nextDrawable and in the callback you get once rendering is complete (as per the MTLCommandBuffer addCompletedHandler: callback set-up).
Typically you use a dispatch_semaphore_t for this:
_resource_semaphore = dispatch_semaphore_create(3);
then put the following just before you invoke nextDrawable:
dispatch_semaphore_wait(_resource_semaphore, DISPATCH_TIME_FOREVER);
and this in your addCompletedHandler: callback handler:
dispatch_semaphore_signal(_resource_semaphore);
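Putting those pieces together, a minimal per-frame sketch might look like this (names such as _metalLayer, _commandQueue, and drawFrame are illustrative, not from the question):

// Sketch: one frame of a render loop throttled by the drawable pool.
- (void)drawFrame
{
    // Block until one of the pool's drawables is free again.
    dispatch_semaphore_wait(_resource_semaphore, DISPATCH_TIME_FOREVER);

    id<CAMetalDrawable> drawable = [_metalLayer nextDrawable];
    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];

    // ... encode render passes targeting drawable.texture here ...

    dispatch_semaphore_t semaphore = _resource_semaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        // The GPU is done with this drawable; release our slot in the pool.
        dispatch_semaphore_signal(semaphore);
    }];

    [commandBuffer presentDrawable:drawable];
    [commandBuffer commit];
}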
Have a look at some of the simple Metal sample apps from Apple to see this in action. There is not a lot in terms of Apple documentation on this.

Changing textures using the NinevehGL framework

If I try to change to another texture while the previous one is still in progress, the application crashes.
Here is my code.
- (IBAction)changeTexture:(id)sender
{
    self.text = [arrayEyes objectAtIndex:[sender tag]];
    NGLTexture *texture = [NGLTexture texture2DWithFile:self.text];
    NGLMaterialMulti *material = (NGLMaterialMulti *)mesh.material;
    [[material materialWithName:@"lambert16SG"] setDiffuseMap:texture];
    mesh.material = material;
    [mesh compileCoreMesh];
}
I'm going to assume that this code is hit right at the beginning of program execution, so for a while the model is still being loaded on a background thread.
So it's likely the NGLTexture is being assigned to the mesh's material while the mesh is still being processed on another thread. You may run into assignment issues that either throw an exception or outright crash. Try waiting for the model loader to finish processing before making assignments to it. Look up the NGLMeshDelegate protocol and try making the assignment in your -meshLoadingDidFinish: handler.
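A minimal sketch of that approach (the NGLParsing parameter type is taken from the NinevehGL headers, and meshReady is a hypothetical flag that changeTexture: would check before swapping textures):

// During setup, before the mesh starts loading:
mesh.delegate = self; // adopt NGLMeshDelegate

// Called by NinevehGL once the mesh has finished loading.
- (void)meshLoadingDidFinish:(NGLParsing)parsing
{
    self.meshReady = YES; // hypothetical flag; only swap textures once set
}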

NSDecimalNumberPlaceHolder Leak

I have an iPad app that I am testing in Instruments before beta testing. I have gotten rid of all memory leaks except one, and I can't find any information on it. I am baffled as to what to do, since my code never mentions the leaking object, which is an instance of NSDecimalNumberPlaceHolder.
I am certainly using NSDecimalNumber. I create 2 decimals per user operation, and each time I run a cycle of the app (which performs some math operation on the two NSDecimalNumbers) I generate four instances of this NSDecimalNumberPlaceHolder thing. Since I do not know how it gets created, I do not know how to release or dealloc it so as not to generate these 16-byte leaks over and over again.
Is it possible that these are not really leaks?
I have run the Xcode Analyzer and it reports no issues.
What I'm doing is this:
I send a decimal number from my controller over to my model (analyzer_), which performs the operations and sends back the result:
[[self analyzer_] setOperand:[NSDecimalNumber decimalNumberWithString:anotherStringValue]];
The setOperand method looks like this:
- (void)setOperand:(NSDecimalNumber *)theOperand
{
    NSLog(@"setOperand called");
    operand_ = theOperand;
    //[operand_ retain];
}
Note that if I don't retain operand_ "somewhere", I get an EXC_BAD_ACCESS crash. I currently retain it and release it later, where the operand and the previously provided operand (queuedOperand_) are operated upon. For example:
{
    [self performQueuedOperation];
    queuedOperation_ = operation;
    queuedOperand_ = operand_;
}
return operand_;
[operand_ release];
where performQueuedOperation is:
- (void)performQueuedOperation
{
    [operand_ retain];
    if ([@"+" isEqualToString:queuedOperation_])
    {
        @try
        {
            operand_ = [queuedOperand_ decimalNumberByAdding:operand_];
        }
        @catch (NSException *NSDecimalNumberOverFlowException)
        {
            // viewController will send decimal point error message
        }
    }
    // <etc. for the other operations>
}
Let me know if this is not clear. Thanks.
Try Heapshot in Instruments; see: When is a Leak not a Leak?
If there is still a pointer to memory that is no longer used, it is not a leak, but it is lost memory. I use Heapshot often; it really works great. Also turn on reference-count recording in the Allocations instrument and drill down.
