How to make iOS slow

I'm a mobile QA engineer. We're investigating an issue involving a race condition between a network response and UI rendering: our guess is that if the UI rendering finishes more slowly than the network response arrives, the app will crash.
We already tried to speed up the network response using Charles' "Map Local" feature, but the response time is still around 20 ms, and that is the fastest we can make the network side.
So I'm asking whether there is any way to slow down UI rendering on iOS, on a real device or the simulator. Is there a way to limit the CPU or memory available to iOS, or a way to keep the system under high CPU/memory load?

You can do this by adding many background tasks to the application that keep the CPU and GPU busy.
These tasks should run on background concurrent threads and must not interact with the main code of the application.
For example, you can create an NSOperation that repeatedly computes some value (pseudocode):
// In the operation's -main: burn CPU with pointless math until cancelled.
- (void)main
{
    double value = 100000009900.0;
    for (int i = 0; i < INT_MAX && !self.isCancelled; i++)
    {
        value = sqrt(value) + arc4random_uniform(1000);
    }
}
and add operations that do some work on the GPU (pseudocode):
// In the GPU operation's -main: run a Core Image filter and force it to render.
- (void)main
{
    CIImage *image = <load a CIImage>;
    CIFilter *filter = <some complicated filter>;
    [filter setValue:image forKey:kCIInputImageKey];
    CIImage *result = filter.outputImage;
    CIContext *context = <create a context, or share the same one for all operations>;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSData *imageData = [context JPEGRepresentationOfImage:result
                                                colorSpace:colorSpace
                                                   options:@{}];
    CGImageRef cgImage = [context createCGImage:result fromRect:result.extent];
    if (cgImage)
    {
        CGImageRelease(cgImage);
    }
    CGColorSpaceRelease(colorSpace);
}
After that, add many of these operations to an NSOperationQueue; a simple way to trigger this is to add a button to the interface that enqueues them.
I understand that this makes it a special test build of the application, but if you set it up correctly you will get the result you want. (I sometimes do this to check memory-pressure and performance issues.)
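A minimal sketch of what that button action could look like, assuming CPUBurnOperation and GPUBurnOperation are the operations described above (the class names, operation count, and loadQueue property are illustrative):
- (IBAction)startLoadTapped:(id)sender
{
    // Keep the queue alive in a property so the operations are not released early.
    NSOperationQueue *loadQueue = [[NSOperationQueue alloc] init];
    loadQueue.maxConcurrentOperationCount = NSOperationQueueDefaultMaxConcurrentOperationCount;
    for (NSInteger i = 0; i < 50; i++)
    {
        [loadQueue addOperation:[[CPUBurnOperation alloc] init]];
        [loadQueue addOperation:[[GPUBurnOperation alloc] init]];
    }
    self.loadQueue = loadQueue;
}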
You can also use an old Mac and run the application in the Simulator.

Related

Reduce memory usage of AVAssetWriter

As the title says, I am having some trouble with AVAssetWriter and memory.
Some notes about my environment/requirements:
I am NOT using ARC, but if there is a way to simply use it and get it all working I'm all for it. My attempts have not made any difference though. And the environment I will be using this in requires memory to be minimised / released ASAP.
Objective-C is a requirement
Memory usage must be as low as possible; the ~300 MB it takes up now makes the app unstable when testing on my device (iPhone X).
The code
This is the code used for the measurements discussed below: https://gist.github.com/jontelang/8f01b895321d761cbb8cda9d7a5be3bd
The problem / items kept around in memory
Most of the memory that is held throughout the processing seems to be allocated right at the beginning.
So at this point it doesn't seem to me that the issue is with my code. The code that I personally control, namely loading the images, creating the buffer, and releasing it, does not appear to be where the memory problem is. For example, if I select the majority of the time after that initial allocation in Instruments, the memory is stable and none of it is kept around.
The only persistent 5 MB shows up because it is deallocated just after the selected period ends.
Now what?
I actually started writing this question with the focus on whether my code was releasing things correctly or not, but now that seems to be fine. So what are my options now?
Is there something I can configure within the current code to make the memory requirements smaller?
Is there simply something wrong with my setup of the writer/input?
Do I need to use a totally different way of making a video to be able to make this work?
A note on using CVPixelBufferPool
In the documentation of CVPixelBufferCreate Apple states:
If you need to create and release a number of pixel buffers, you should instead use a pixel buffer pool (see CVPixelBufferPool) for efficient reuse of pixel buffer memory.
I have tried this as well, but I saw no changes in memory usage. Changing the attributes for the pool didn't seem to have any effect either, so there is a small possibility that I am not actually using it 100% properly, although from comparing with code online it seems like I am, at least. And the output file works.
The code for that, is here https://gist.github.com/jontelang/41a702d831afd9f9ceeb0f9f5365de03
And here is a slightly different version where I set up the pool in a slightly different way https://gist.github.com/jontelang/c0351337bd496a6c7e0c94293adf881f.
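For reference, the basic pool usage I tried looks roughly like this (a sketch only; the pixel format and dimensions are placeholders, and non-ARC casts are shown per the constraints above):
// Create the pool once, up front.
NSDictionary *pixelAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB),
    (id)kCVPixelBufferWidthKey           : @(1920),
    (id)kCVPixelBufferHeightKey          : @(1080),
};
CVPixelBufferPoolRef pool = NULL;
CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                        (CFDictionaryRef)pixelAttributes, &pool);

// Per frame: take a buffer from the pool, render into it, append it, then release it.
CVPixelBufferRef buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer);
// ...fill `buffer` and append it via the writer adaptor...
CVPixelBufferRelease(buffer);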
Update 1
So I looked a bit deeper into a trace to figure out when and where the majority of the allocations come from.
The takeaway is:
The space is not allocated "with" the AVAssetWriter.
The ~500 MB that is held until the end is allocated within 500 ms of processing starting.
It appears to be allocated internally by AVAssetWriter.
I have the .trace file uploaded here: https://www.dropbox.com/sh/f3tf0gw8gamu924/AAACrAbleYzbyeoCbC9FQLR6a?dl=0
When creating the dispatch queue, make sure you create it with an autorelease pool: replace DISPATCH_QUEUE_SERIAL with DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL.
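For example, the recordingQueue used below could be created like this (the label string is illustrative):
dispatch_queue_t recordingQueue =
    dispatch_queue_create("com.example.recording", DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL);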
Also wrap each iteration of the for loop in an autorelease pool, like this:
[assetWriterInput requestMediaDataWhenReadyOnQueue:recordingQueue usingBlock:^{
    for (int i = 1; i < 200; ++i) {
        @autoreleasepool {
            while (![assetWriterInput isReadyForMoreMediaData]) {
                [NSThread sleepForTimeInterval:0.01];
            }
            NSString *path = [NSString stringWithFormat:@"/Users/jontelang/Desktop/SnapperVideoDump/frames/frame_%i.jpg", i];
            UIImage *image = [UIImage imageWithContentsOfFile:path];
            CGImageRef ref = [image CGImage];
            CVPixelBufferRef buffer = [self pixelBufferFromCGImage:ref pool:writerAdaptor.pixelBufferPool];
            CMTime presentTime = CMTimeAdd(CMTimeMake(i, 60), CMTimeMake(1, 60));
            [writerAdaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
            CVPixelBufferRelease(buffer);
        }
    }
    [assetWriterInput markAsFinished];
    [assetWriter finishWritingWithCompletionHandler:^{}];
}];
No, I see it peaking at around 240 MB in the app. It's my first time using this allocation tooling, interesting.
I'm using AVAssetWriter to write a video file by streaming CMSampleBuffer objects delivered in real time from AVCaptureVideoDataOutputSampleBufferDelegate by the camera's capture output.
While I have not yet found the actual issue, the memory problem I described in this question was solved by simply doing it on the actual device instead of the simulator.
@Eugene_Dudnyk's answer is spot on; the autorelease pool INSIDE the for or while loop is the key. Here is how I got it working in Swift. Also, please use AVAssetWriterInputPixelBufferAdaptor for the pixel buffer pool:
videoInput.requestMediaDataWhenReady(on: videoInputQueue) { [weak self] in
    while videoInput.isReadyForMoreMediaData {
        autoreleasepool {
            guard let sample = assetReaderVideoOutput.copyNextSampleBuffer(),
                  let buffer = CMSampleBufferGetImageBuffer(sample) else {
                print("Error while processing video frames")
                videoInput.markAsFinished()
                DispatchQueue.main.async {
                    videoFinished = true
                    closeWriter()
                }
                return
            }
            // Process the image and render it back into the buffer in place,
            // where ciProcessedImage is your processed new image.
            self?.getCIContext().render(ciProcessedImage, to: buffer)
            let timeStamp = CMSampleBufferGetPresentationTimeStamp(sample)
            self?.adapter?.append(buffer, withPresentationTime: timeStamp)
        }
    }
}
My memory usage stopped rising.

iPad Pro 3rd Gen Killing Foreground App Without Cause

I have an app that has been out in the wild for many years.
This app, in order to be 100% functional while offline, needs to download hundreds of thousands of images (1 for each object) one time only (delta updates are processed as needed).
The object data itself comes down without issue.
However, recently, our app has started crashing while downloading just the images, but only on newer iPads (3rd gen iPad Pros with plenty of storage).
The image download process uses NSURLSession download tasks inside an NSOperationQueue.
We were starting to see Energy Logs stating that CPU usage was too high, so we modified our parameters to add a break between each image, as well as between each batch of images, using
[NSThread sleepForTimeInterval:someTime];
This reduced our CPU usage from well above 95% (which, fair enough) to down below 18%!
Unfortunately, the app would still crash on newer iPads after only a couple of hours. However, on our 2016 iPad Pro 1st Gen, the app does not crash at all, even after 24 hours of downloading.
When pulling crash logs from the devices, all we see is that CPU usage was over 50% for more than 3 minutes. No other crash logs come up.
These devices are all plugged in to power, and have their lock time set to never in order to allow the iPad to remain awake and with our app in the foreground.
In an effort to solve this issue, we turned our performance way down, basically waiting 30 seconds in between each image, and 2 full minutes between each batch of images. This worked and the crashing stopped, however, this would take days to download all of our images.
We are trying to find a happy medium where the performance is reasonable, and the app does not crash.
However, what is haunting me, is that no matter the setting, and even at full-bore performance, the app never crashes on the older devices, it only crashes on the newer devices.
Conventional wisdom would suggest that should not be possible.
What am I missing here?
When I profile using Instruments, I see the app sitting at a comfortable 13% average while downloading, and there is a 20 second gap in between batches, so the iPad should have plenty of time to do any cleanup.
Anyone have any ideas? Feel free to request additional information, I'm not sure what else would be helpful.
EDIT 1: Downloader Code Below:
//Assume the following instance variables are set up:
self.operationQueue = NSOperationQueue to download the images.
self.urlSession = NSURLSession with ephemeralSessionConfiguration, 60 second timeoutIntervalForRequest
self.conditions = NSMutableArray to house the NSConditions used below.
self.countRemaining = NSUInteger which keeps track of how many images are left to be downloaded.
//Starts the downloading process by setting up the variables needed for downloading.
- (void)startDownloading
{
    //If the operation queue doesn't exist, re-create it here.
    if (!self.operationQueue)
    {
        self.operationQueue = [[NSOperationQueue alloc] init];
        [self.operationQueue addObserver:self forKeyPath:KEY_PATH options:0 context:nil];
        [self.operationQueue setName:QUEUE_NAME];
        [self.operationQueue setMaxConcurrentOperationCount:2];
    }
    //If the session is nil, re-create it here.
    if (!self.urlSession)
    {
        self.urlSession = [NSURLSession sessionWithConfiguration:[NSURLSessionConfiguration ephemeralSessionConfiguration]
                                                        delegate:self
                                                   delegateQueue:nil];
    }
    if (self.countRemaining == 0)
    {
        [self performSelectorInBackground:@selector(startDownloadForNextBatch:) withObject:nil];
        self.countRemaining = 1;
    }
}
//Starts each batch. Called again when the operation queue's task count is observed to hit 0.
- (void)startDownloadForNextBatch:(id)sender
{
    [NSThread sleepForTimeInterval:20.0]; // 20-second gap between batches.
    self.countRemaining = //Go get the count remaining from the database.
    if (self.countRemaining > 0)
    {
        NSArray *imageRecordsToDownload = //Go get the next batch of URLs for the images to download from the database.
        [imageRecordsToDownload enumerateObjectsUsingBlock:^(NSDictionary *imageRecord,
                                                             NSUInteger index,
                                                             BOOL *stop)
        {
            NSInvocationOperation *invokeOp = [[NSInvocationOperation alloc] initWithTarget:self
                                                                                   selector:@selector(downloadImageForRecord:)
                                                                                     object:imageRecord];
            [self.operationQueue addOperation:invokeOp];
        }];
    }
}
//Performs one image download.
- (void)downloadImageForRecord:(NSDictionary *)imageRecord
{
    NSCondition *downloadCondition = [[NSCondition alloc] init];
    [self.conditions addObject:downloadCondition];
    [[self.urlSession downloadTaskWithURL:imageURL
                        completionHandler:^(NSURL *location,
                                            NSURLResponse *response,
                                            NSError *error)
    {
        if (error)
        {
            //Record error below.
        }
        else
        {
            //Move the downloaded image to the correct directory.
            NSError *moveError;
            [[NSFileManager defaultManager] moveItemAtURL:location toURL:finalURL error:&moveError];
            //Create a thumbnail version of the image for use in a search grid.
        }
        //Record the final outcome for this record by updating the database with either an error code, or the file path to where the image was saved.
        //Sleep for some time to allow the CPU to rest.
        [NSThread sleepForTimeInterval:0.05]; // 0.05-second gap between images.
        //Finally, signal our condition.
        [downloadCondition signal];
    }] resume];
    [downloadCondition lock];
    [downloadCondition wait];
    [downloadCondition unlock];
}
//If the downloads need to be stopped for whatever reason (i.e. the user logs out), this method is called to stop the process entirely:
- (void)stopDownloading
{
    //Immediately suspend the queue.
    [self.operationQueue setSuspended:YES];
    //If any conditions remain, signal them, then remove them. This was added to avoid deadlock issues with the user logging out and then logging back in in rapid succession.
    [self.conditions enumerateObjectsUsingBlock:^(NSCondition *condition,
                                                  NSUInteger idx,
                                                  BOOL * _Nonnull stop)
    {
        [condition signal];
    }];
    [self setConditions:nil];
    [self setConditions:[NSMutableArray array]];
    [self.urlSession invalidateAndCancel];
    [self setCountRemaining:0];
    [self.operationQueue cancelAllOperations];
    [self setOperationQueue:nil];
}
EDIT 2: CPU usage screenshot from Instruments. Peaks are ~50%, valleys are ~13% CPU usage.
EDIT 3: Running the app until failure in Console, observed memory issue
Alright! Finally observed the crash on my iPhone 11 Pro after over an hour downloading images, which matches the scenario reported by my other testers.
The Console reports that my app was killed specifically for using too much memory. If I am reading the report correctly, my app used over 2 GB of RAM. I'm assuming this has more to do with the internal management of nsurlsessiond, since neither Xcode nor Instruments shows this growth while debugging.
Console reports: "kernel 232912.788 memorystatus: killing_specific_process pid 7075 [PharosSales] (per-process-limit 10) 2148353KB - memorystatus_available_pages: 38718"
Thankfully, I start receiving memory warnings around the 1-hour mark. I should be able to pause (suspend) my operation queue for some time (say, 30 seconds) to let the system clear its memory.
Alternatively, I could call stop, and then use a GCD dispatch_after call to start again.
What do you guys think about this solution? Is there a more elegant way to respond to memory warnings?
Where do you think this memory usage is coming from?
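For context, the pause-on-memory-warning idea I'm considering would look roughly like this (a sketch; the use of the memory-warning notification and the 30-second delay are assumptions, not tested code):
// Suspend the download queue on a memory warning, resume after a delay.
[[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationDidReceiveMemoryWarningNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    [self.operationQueue setSuspended:YES];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(30 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [self.operationQueue setSuspended:NO];
    });
}];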
EDIT 4: Eureka!! Found an internal Apple API memory leak
After digging into the 'killing_specific_process' memory-related console message, I found the following post:
Stack Overflow NSData leak discussion
Based on this discussion surrounding using NSData writeToFile:error:, I looked around to see if I was somehow using this function.
Turns out, the logic that I was using to generate a thumbnail from the original image used this statement to write the generated thumbnail image to disk.
If I commented out this logic, the app no longer crashed at all (was able to pull down all of the images without failure!).
I had already planned on swapping this legacy Core Graphics code out for the WWDC 2018-demonstrated usage of ImageIO.
After recoding this function to use ImageIO, I am pleased to report that the app no longer crashes, and the thumbnail logic is super-optimized as well!
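For anyone curious, the ImageIO-based downsampling I switched to looks roughly like this (a sketch rather than my exact production code; the helper name, JPEG output, and size handling are assumptions):
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypeJPEG

// Generate a thumbnail with ImageIO instead of decoding the full image via Core Graphics.
static BOOL WriteThumbnail(NSURL *sourceURL, NSURL *thumbnailURL, CGFloat maxPixelSize)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)sourceURL, NULL);
    if (!source) return NO;

    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform   : @YES,
        (id)kCGImageSourceThumbnailMaxPixelSize          : @(maxPixelSize),
    };
    CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!thumbnail) return NO;

    CGImageDestinationRef destination =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)thumbnailURL, kUTTypeJPEG, 1, NULL);
    BOOL succeeded = NO;
    if (destination)
    {
        CGImageDestinationAddImage(destination, thumbnail, NULL);
        succeeded = CGImageDestinationFinalize(destination);
        CFRelease(destination);
    }
    CGImageRelease(thumbnail);
    return succeeded;
}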
Thanks for all your help!

Does metal have a back buffer?

I'm currently tracking down some visual popping in my Metal app, and I believe it is because I'm drawing directly to the framebuffer rather than to a back buffer.
// this is when I've finished passing commands to the render buffer and issue the draw command. I believe this sends all the images directly to the framebuffer instead of using a backbuffer
[renderEncoder endEncoding];
[mtlCommandBuffer presentDrawable:frameDrawable];
[mtlCommandBuffer commit];
[mtlCommandBuffer release];
//[frameDrawable present]; // This line isn't needed (and I believe is performed by presentDrawable)
Several googles later, I haven't found any documentation of back-buffers in metal. I know I could roll my own, but I can't believe metal doesn't support a back buffer.
Here is a code snippet showing how I've set up my CAMetalLayer object.
+ (Class)layerClass
{
    return [CAMetalLayer class];
}

- (void)initCommon
{
    self.opaque = YES;
    self.backgroundColor = nil;
    ...
}

- (id<CAMetalDrawable>)getMetalLayer
{
    id<CAMetalDrawable> frameDrawable = nil;
    while (!frameDrawable && !frameDrawable.texture)
    {
        frameDrawable = [self->_metalLayer nextDrawable];
    }
    return frameDrawable;
}
Can I enable a backbuffer on my CAMetalLayer object, or will I need to roll my own?
I assume by back-buffer, you mean a renderbuffer that is being rendered to, while the corresponding front-buffer is being displayed?
In Metal, the concept is provided by the drawables that you extract from CAMetalLayer. The CAMetalLayer instance maintains a small pool of drawables (generally 3), retrieves one of them from the pool each time you invoke nextDrawable, and returns it back to the pool after you've invoked presentDrawable and once rendering is complete (which may be some time later, since the GPU runs asynchronously from the CPU).
Effectively, on each frame loop, you grab a back-buffer by invoking nextDrawable, and make it eligible to become the front-buffer by invoking presentDrawable: and committing the MTLCommandBuffer.
Since there are only 3 drawables in the pool, the catch is that you have to manage this lifecycle yourself, by adding appropriate CPU resource synchronization at the time you invoke nextDrawable and in the callback you get once rendering is complete (as per the MTLCommandBuffer addCompletedHandler: callback set-up).
Typically you use a dispatch_semaphore_t for this:
_resource_semaphore = dispatch_semaphore_create(3);
then put the following just before you invoke nextDrawable:
dispatch_semaphore_wait(_resource_semaphore, DISPATCH_TIME_FOREVER);
and this in your addCompletedHandler: callback handler:
dispatch_semaphore_signal(_resource_semaphore);
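Put together, a per-frame render method might look roughly like this (a sketch; the command queue, layer, and encoding details are assumptions and are omitted):
- (void)drawFrame
{
    // Block until one of the pool's drawables is free for the CPU to use.
    dispatch_semaphore_wait(_resource_semaphore, DISPATCH_TIME_FOREVER);

    id<CAMetalDrawable> drawable = [_metalLayer nextDrawable];
    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];

    // ...encode render passes that target drawable.texture here...

    dispatch_semaphore_t semaphore = _resource_semaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> completed) {
        // The GPU is done with this frame's resources; release our slot.
        dispatch_semaphore_signal(semaphore);
    }];
    [commandBuffer presentDrawable:drawable];
    [commandBuffer commit];
}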
Have a look at some of the simple Metal sample apps from Apple to see this in action. There is not a lot in terms of Apple documentation on this.

Is it ok to create EAGLContext for each thread?

I want to do some work in my OpenGL ES project on concurrent GCD queues. Is it OK to create an EAGLContext for each thread? I'm planning to do it this way:
queue_ = dispatch_queue_create("test.queue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue_, ^{
    NSMutableDictionary *threadDictionary = [[NSThread currentThread] threadDictionary];
    EAGLContext *context = threadDictionary[@"context"];
    if (!context) {
        context = /* creating EAGLContext with sharegroup */;
        threadDictionary[@"context"] = context;
    }
    if ([EAGLContext setCurrentContext:context]) {
        // rendering
        [EAGLContext setCurrentContext:nil];
    }
});
If it is not correct what is the best practice to parallelize OpenGL rendering?
Not only is it okay, this is the only way you can share OpenGL resources between multiple threads. Note that shareable resources are typically limited to resources that allocate memory (e.g. buffer objects, textures, shaders). They do not include objects that merely store state (e.g. the global state machine, Framebuffer Objects or Vertex Array Objects). But if you are considering modifying data that you are using for rendering, I would strongly advise against this.
Whenever GL has a command in the pipeline that has not finished, any attempt to modify a resource used by that command will block until the command finishes. A better solution would be to double-buffer your resources, have a copy you use for rendering and a separate copy you use for updating. When you finish updating, the next time your drawing thread uses that resource, have it swap the buffers used for updating and drawing. This will reduce the amount of time the driver has to synchronize your worker threads with the drawing thread.
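As a rough illustration of that double-buffering idea (a sketch only; the struct, function names, and the mechanism for telling the renderer an update finished are assumptions):
#import <Foundation/Foundation.h>
#import <OpenGLES/ES2/gl.h>

// Two GL buffer objects holding the same kind of data: the worker writes one
// while the rendering thread draws from the other, then they swap roles.
typedef struct {
    GLuint buffers[2];     // created up front with glGenBuffers(2, buffers)
    NSUInteger drawIndex;  // which buffer the renderer currently reads
} DoubleBufferedVBO;

static void UpdateBuffer(DoubleBufferedVBO *vbo, const void *data, GLsizeiptr size)
{
    // Worker context: write into the buffer the renderer is NOT using.
    NSUInteger updateIndex = 1 - vbo->drawIndex;
    glBindBuffer(GL_ARRAY_BUFFER, vbo->buffers[updateIndex]);
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_DYNAMIC_DRAW);
}

static void SwapAndDraw(DoubleBufferedVBO *vbo)
{
    // Rendering context, after being told the update finished: swap and draw.
    vbo->drawIndex = 1 - vbo->drawIndex;
    glBindBuffer(GL_ARRAY_BUFFER, vbo->buffers[vbo->drawIndex]);
    // ...set vertex attribute pointers and issue glDrawArrays / glDrawElements...
}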
Now, if you are suggesting here that you want to draw from multiple threads, then you should re-think your strategy. OpenGL generally does not benefit from issuing draw commands from multiple threads, it just creates a synchronization nightmare. Multi-threading is useful mostly for controlling VSYNC on multiple windows (probably not something you will ever encounter in ES) or streaming resource data in the background.

Changing textures using the NinevehGL framework

If I try to change to another texture while the previous one is still being processed, the application crashes.
Here is my code:
- (IBAction)changeTexture:(id)sender
{
    self.text = [arrayEyes objectAtIndex:[sender tag]];
    NGLTexture *texture = [NGLTexture texture2DWithFile:self.text];
    NGLMaterialMulti *material = (NGLMaterialMulti *)mesh.material;
    [[material materialWithName:@"lambert16SG"] setDiffuseMap:texture];
    mesh.material = material;
    [mesh compileCoreMesh];
}
I'm going to assume that this code is hit right at the beginning of program execution. So for a bit there the model is still being loaded in a background thread.
So it's likely NGLTexture is being assigned to the mesh's material while it's being processed in another thread. You may run into assignment issues that will either throw an exception or outright crash. Try waiting for the model loader to finish processing before making assignments to it. Look up the NGLMeshDelegate protocol and try making the assignment in your -meshLoadingDidFinish: handler.
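A minimal sketch of that approach (the delegate method's parameter type and the controller/property names are assumptions, so check them against the NinevehGL headers):
@interface EyesViewController () <NGLMeshDelegate>
@property (nonatomic, assign) BOOL meshReady;
@end

@implementation EyesViewController

// Make sure the mesh's delegate is set to this controller when the mesh is created.
- (void)meshLoadingDidFinish:(NGLParsing)parsing
{
    self.meshReady = YES;
}

- (IBAction)changeTexture:(id)sender
{
    if (!self.meshReady)
    {
        return; // ignore taps until the background load completes
    }
    // ...swap the diffuse map as in the code above...
}

@end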
