I am working with SceneKit in an iMessage extension and have run into a peculiar little beast of an issue. I am trying to render a custom .scn model and rig its nodes to a user's facial expressions using blend shape anchors. I can do this successfully in the iOS app that this iMessage extension was born from. However, once the same code is placed into a MessageViewController, the program exits with code 0 every time I try to run it.
I did a bit of digging, and it seems "exited with code 0" is indicative of memory overload, so I started playing around with my model's nodes. I discovered that if I delete all nodes but one, I can animate that node with its corresponding blend shape. Any more than one node and it crashes.
Does anyone have any ideas as to why this is happening? Or any proof that iMessage extensions are only granted a certain amount of processing power before they are killed off (another theory of mine)?
Appreciate any help!
From the App Extension Programming Guide we learn that:
Memory limits for running app extensions are significantly lower than the memory limits imposed on a foreground app. On both platforms, the system may aggressively terminate extensions because users want to return to their main goal in the host app. Some extensions may have lower memory limits than others: For example, widgets must be especially efficient because users are likely to have several widgets open at the same time.
Your app extension doesn’t own the main run loop, so it’s crucial that you follow the established rules for good behavior in main run loops. For example, if your extension blocks the main run loop, it can create a bad user experience in another extension or app.
Keep in mind that the GPU is a shared resource in the system. App extensions do not get top priority for shared resources; for example, a Today widget that runs a graphics-intensive game might give users a bad experience. The system is likely to terminate such an extension because of memory pressure. Functionality that makes heavy use of system resources is appropriate for an app, not an app extension.
One option may be to optimize your geometry in your DCC (digital content creation) tool so you don't run into system resource constraints.
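If reauthoring the asset isn't practical, SceneKit itself can collapse parts of the hierarchy to cut per-node overhead. A minimal, hypothetical sketch (the node name "body" is made up; note that flattenedClone bakes a subtree into a single node, so it only suits parts you don't need to rig with blend shapes):

    #import <SceneKit/SceneKit.h>

    // Collapse the static portion of the model into one baked node.
    static void TrimModel(SCNScene *scene) {
        SCNNode *staticBody = [scene.rootNode childNodeWithName:@"body" recursively:YES];
        if (staticBody != nil) {
            SCNNode *flattened = [staticBody flattenedClone];
            [staticBody removeFromParentNode];
            [scene.rootNode addChildNode:flattened];
        }
        // Keep only the nodes you actually animate (e.g. the head).
    }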
Related
When using a texture with a purgeability state of volatile, my app crashes with this error:
"-[MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion `MTLResource is in volatile or empty purgeable state at commit'"
It works perfectly fine when I run the app by itself (not launching via the play button in Xcode but just clicking the built app's icon) and it also works when testing on iOS. The problem appeared after updating to a newer version of Xcode. Is this something I can turn off so that the command buffers don't lock purgeable objects?
It's working as intended. Let me explain.
First, the reason you are not seeing this problem when you run the app by itself is that, by default, apps launched from Xcode run with the Metal Validation Layer. This is an API layer that sits between the actual API and your app and verifies that all objects are in a consistent state, meet all the required preconditions, and so on. Apps run outside of Xcode don't have this layer enabled by default, because all that validation has a cost you don't want to pass on to your users; the Metal Validation Layer exists to be used during development. You can learn more about it by typing man MetalValidation in your terminal. You can also run your app with validation enabled outside of Xcode, by prepending MTL_DEBUG_LAYER=1 to the invocation in the terminal.
The fact that the app is not actually crashing and seems to work fine without the validation layer does not necessarily mean it will work in every case and on every platform. Some drivers are more strict, some less. That's why the Validation Layer exists.
Second, let's address what the actual problem is. Purgeable state exists so that Metal has the option of discarding some resources when memory pressure on the system gets too high, instead of jetsamming your app. Only resources that are marked volatile can be discarded in this way. But you can't just "set it and forget it": it's intended for infrequently used resources that are fairly big and can be discarded safely. The general pattern is described in this WWDC video starting at around minute 39. Basically, if you are going to use a volatile resource, you need to make sure it wasn't already discarded, and also make it non-volatile. You need to explicitly call setPurgeableState with a nonVolatile state and check whether it returns empty (setPurgeableState returns the state the resource was in before the call). If it returns empty, the resource was discarded and you need to re-generate or reload it. Otherwise, the resource is still there; you can safely use it in a command buffer, for example, and then set it back to volatile in a completion handler.
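In Objective-C the pattern looks roughly like this (`texture`, `commandQueue`, and `regenerate` are placeholders for your own resource, queue, and reload routine):

    // Reclaim the resource before using it; setPurgeableState: returns the
    // PREVIOUS state, so MTLPurgeableStateEmpty means the contents are gone.
    MTLPurgeableState prior = [texture setPurgeableState:MTLPurgeableStateNonVolatile];
    if (prior == MTLPurgeableStateEmpty) {
        regenerate(texture); // discarded: reload or re-render the contents
    }

    id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
    // ... encode passes that read `texture`; it is guaranteed resident here ...

    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> finished) {
        // Only after the GPU is done may the texture become discardable again.
        [texture setPurgeableState:MTLPurgeableStateVolatile];
    }];
    [commandBuffer commit];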
I would suggest watching that part of the video, because it goes more in depth.
Also refer to the article Reducing the Memory Footprint of Metal Apps, the WWDC video Debug GPU-side errors in Metal, and the documentation page for setPurgeableState.
I have an app with a CallKit call-blocking extension. Generally it works fine, but on some devices it occasionally becomes disabled by itself (well, in fact by iOS, I guess) after a while, so the user has to go to Settings and re-enable it manually. However, there is no visible reason for this behavior: the volume of blocked phone number data is small enough (the app works even with larger datasets most of the time, so it shouldn't be a memory issue), the numbers are sorted in the right order (ascending), there are no duplicate numbers, etc. It is also quite hard to reproduce the problem. It feels like something from the "outer world" interferes with the extension (another app with an extension?), but there is no proof.
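For reference, those constraints (ascending order, no duplicates) come from how a call directory extension hands numbers to the system. A minimal sketch of the standard CXCallDirectoryProvider pattern, not the asker's actual code, with made-up numbers:

    #import <CallKit/CallKit.h>

    @interface CallDirectoryHandler : CXCallDirectoryProvider
    @end

    @implementation CallDirectoryHandler

    - (void)beginRequestWithExtensionContext:(CXCallDirectoryExtensionContext *)context {
        // Entries must be added in ascending order with no duplicates,
        // or the system rejects the extension's data.
        CXCallDirectoryPhoneNumber numbers[] = { 14085550100, 14085550101 };
        for (size_t i = 0; i < sizeof(numbers) / sizeof(numbers[0]); i++) {
            [context addBlockingEntryWithNextSequentialPhoneNumber:numbers[i]];
        }
        [context completeRequestWithCompletionHandler:nil];
    }

    @end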
What might be the reason for such auto-disabling, and how can it be avoided?
Any help would be appreciated.
Thanks!
I have an app on the App Store that uses AFNetworking 2.x to download large files via background NSURLSession-based downloads, because the user will often put the app in the background (it gets terminated after a while, of course, but the downloads finish all the same. Wonderful). This app is working well. Usually users only download a few files at a time.
Now I need to make another similar app, but this time, instead of a few large files, it is very likely that the user will want to download a large number of smallish files: for example, 500 files of 1-5 MB each. Again, the app will often be put in the background, so I want to stay with NSURLSessionDownloadTask unless there's a really good reason not to.
My question is: can I simply create 500 NSURLSessionDownloadTasks all at once? Does AFNetworking do some clever throttling so as not to overload the system? Or does iOS do it? Or does nothing do it, and I have to painfully track and organize the state of transfers across restarts of my app (i.e. because it gets put in the background and eventually terminated)?
If anyone knows the limits of how many NSURLSessionDownloadTasks you can create reliably simultaneously, that would be awesome...
thanks!
p.s. I greatly prefer obj-c to swift, thx :)
Last I checked (haven't looked at the iOS 9 betas), task creation was unexpectedly expensive and also superlinear. On my test runs:
50 tasks -> ~1.5s
200 tasks -> ~11.5s
500 tasks -> ~55s
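For reference, numbers like these can be reproduced by timing plain task creation, along these lines (a hedged sketch; `session` is assumed to be a background NSURLSession and `urls` an array of NSURL objects built elsewhere):

    NSDate *start = [NSDate date];
    for (NSURL *url in urls) {
        // Creating (not resuming) the task is the expensive part being timed.
        NSURLSessionDownloadTask *task = [session downloadTaskWithURL:url];
        (void)task;
    }
    NSLog(@"created %lu tasks in %.1fs",
          (unsigned long)urls.count, -[start timeIntervalSinceNow]);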
Since my file count was often a 5-digit number, scheduling everything at once wasn't a solution for me. My approach (which isn't in production yet; I stopped working on the feature in favour of other things) combines persistence with NSURLSessionDownloadTask and uses the session identifiers to sort out which logical download a particular file belongs to. Further downloads are scheduled from one of the delegate callbacks, depending on whether I'm in the normal lifecycle or coming from -application:handleEventsForBackgroundURLSession:completionHandler: (debugging this situation can get painful; NSUserDefaults is your friend). The theory seems sound: I can see that tasks do get scheduled, but I'm currently stuck getting the iOS downloader daemon to conform to my will.
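To make the batching idea concrete, here is a hedged Objective-C sketch. The window size of 20, the class name, and the session identifier are all made up, and a real version would persist pendingURLs (and serialize access to it) so the window can be refilled after the app is relaunched for background events:

    @interface BatchDownloader : NSObject <NSURLSessionDownloadDelegate>
    @property (nonatomic, strong) NSURLSession *session;
    @property (nonatomic, strong) NSMutableArray<NSURL *> *pendingURLs;
    @end

    @implementation BatchDownloader

    - (instancetype)init {
        if ((self = [super init])) {
            NSURLSessionConfiguration *config = [NSURLSessionConfiguration
                backgroundSessionConfigurationWithIdentifier:@"com.example.batch"];
            _session = [NSURLSession sessionWithConfiguration:config
                                                     delegate:self
                                                delegateQueue:nil];
            _pendingURLs = [NSMutableArray array];
        }
        return self;
    }

    - (void)startWithURLs:(NSArray<NSURL *> *)urls {
        [self.pendingURLs addObjectsFromArray:urls];
        for (NSUInteger i = 0; i < 20 && self.pendingURLs.count > 0; i++) {
            [self scheduleNext]; // fill the initial window
        }
    }

    - (void)scheduleNext {
        if (self.pendingURLs.count == 0) { return; }
        NSURL *url = self.pendingURLs.firstObject;
        [self.pendingURLs removeObjectAtIndex:0];
        [[self.session downloadTaskWithURL:url] resume];
    }

    - (void)URLSession:(NSURLSession *)session
          downloadTask:(NSURLSessionDownloadTask *)downloadTask
    didFinishDownloadingToURL:(NSURL *)location {
        // Move `location` somewhere permanent before returning,
        // then top the window back up from the delegate callback.
        [self scheduleNext];
    }

    @end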
If the server-side zip suggested by Benjamin Jimenez is an option for you, do yourself a favour and use that instead.
The Apple staff member "eskimo" on the Apple Developer Forums helped me find the answer, which you can see in this forum post:
https://forums.developer.apple.com/thread/11621
Pasting here the relevant parts:
(me) I've read through this thread and the one you linked to here (https://devforums.apple.com/message/938057#938057) and I have a question about best practices to download 10,000-20,000 files via NSURLSessionDownloadTasks. (Disclaimer: I'm using AFNetworking 2.x.) I'm targeting iOS 8 and newer, so answers do not have to work on iOS 7. How can we compute a reasonable batch (group) size? I understand the resume-rate limiter means one wants the batch size to be higher, but there's an unknown max limit of simultaneous task requests that will crash the daemon.
(me) My assumption here is that when the user opens my app and it runs for some time in the foreground, then the rate limiter is "reset" or similar -- so now things will flow nicely again. Is this assumption correct?
(eskimo) Yes. Also, starting with iOS 8, if the user brings your app to the front then iOS will automatically give tasks a 'kick'. I've forgotten the exact mechanics of this but I'm pretty sure it's covered in WWDC 2014 Session 707, What's New in Foundation Networking.
I have a question that I can't really answer myself, so I wonder if someone could shed some light here.
Basically I am interested in knowing what is going on in iOS before and while I run an app...but from the OS perspective.
I've seen a lot of posts about what happens when the user taps on an app on the home screen, but I am interested in what happens behind the scenes, before the app takes control and main() sets up the UIApplication singleton. And once the app is running, is the whole OS blocked in the app's main run loop, or is something else going on?
In particular, I would like to understand who creates the process where UIApplication will run (so the whole app will run inside that process, I assume).
I would also like to know what the OS is doing when, for example, I open a connection in an app, since I see that a new thread is created (looking at a crash report I see a bunch of threads running, not just the main one), but I don't get where and by whom they are created (UIApplication itself? Were they already running before the app launched?).
I hope the question is clear; I've searched all over to find info, but all I get is that when you tap an app, main() runs and calls UIApplicationMain(), which takes control, deals with the delegate and views and so on... but what goes on in the OS is a mystery.
Is there any resource related to the iOS part? Thanks!
The operating system of the iPhone works much like any other modern operating system. There is a kernel which provides low-level functions, an API that provides high-level functions (to applications as well as to the OS itself), and so on.
There are a lot of processes always alive in the OS itself; just think about the fact that the device is able to receive notifications, receive calls, manage connections, and whatever else it needs to run.
When you launch an application, the only thing that changes is that a new process is launched and control of it is given to the application.
And also once that the app is running, is the whole OS blocked in the main run loop of the app or something else is going on?
The whole OS is not blocked; the newly launched process is simply scheduled together with the many other processes that constantly run. This is achieved through multitasking.
In particular, I would like to understand who creates the process where UIApplication will run (so the whole app will run inside that process, I assume).
The process is created by the OS itself, which instantiates a new process structure to manage the just-launched application and schedules it (with a high priority, since it will run in the foreground).
(UIApplication itself? Were they already running before the app launched?)
Threads are similar to processes in the sense that they have their own code path and actually do something, but a thread is lightweight because many threads can be managed by just one process. This means that your application (or an API call) can create a thread which runs alongside the main thread of your application and handles its own operations, but all these threads share the same CPU allocation time and the same memory space. Cocoa actually hides many of these details from the developer's point of view, so you don't have to care exactly which threads are automatically started by the application, because you don't need to: they are used to dispatch messages between objects, to manage asynchronous events, and so on.
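To illustrate that last point with a tiny hedged sketch (the helper functions are hypothetical): in Cocoa you rarely spawn threads directly; you hand work to Grand Central Dispatch and the system manages the underlying thread pool, and it is these pool threads that show up next to the main thread in a crash report.

    // Work is queued; GCD decides which of its pool threads runs it.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
        NSData *result = fetchSomething(); // hypothetical blocking helper
        dispatch_async(dispatch_get_main_queue(), ^{
            showResult(result); // UI work belongs on the main thread
        });
    });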
But this is just the tip of the iceberg; before understanding how iOS works you should learn how a lower-level infrastructure works, e.g. BSD Unix, which is one of the ancestors of Darwin, the core on which iOS is built. After understanding how that works, you will also understand how the infrastructure above it works (which is iOS + its API).
We need to drive 8 to 12 monitors from one PC, all rendering different views of a single 3D scenegraph, so we have to use several graphics cards. We're currently running on DX9, so we are looking to move to DX11 to hopefully make this easier.
Initial investigations suggest that the obvious approach doesn't work: performance is lousy unless we drive each card from a separate process. Web searches are turning up nothing. Can anybody suggest the best way to go about utilising several cards simultaneously from a single process with DX11?
I see that you've already come to a solution, but I thought it'd be good to throw in my own recent experiences for anyone else who comes across this question...
Yes, you can drive any number of adapters and outputs from a single process. Here's some information that might be helpful:
In DXGI and DX11:
Each graphics card is an "Adapter". Each monitor is an "Output". See here for more information about enumerating through these.
Once you have pointers to the adapters that you want to use, create a device (ID3D11Device) using D3D11CreateDevice for each of the adapters. Maybe you want a different thread for interacting with each of your devices. This thread may have a specific processor affinity if that helps speed things up for you.
Once each adapter has its own device, create a swap chain and render target for each output. You can also create your depth stencil view for each output as well while you're at it.
The process of creating a swap chain will require your windows to be set up: one window per output. I don't think there is much benefit in driving your rendering from the window that contains the swap chain. You can just create the windows as hosts for your swap chain and then forget about them entirely afterwards.
For rendering, you will need to iterate through each Output of each Device. For each output, bind the render target view you created for that output by calling OMSetRenderTargets on that device's immediate context. Again, you can run each device on a different thread if you'd like, so each thread/device pair will have its own iteration through outputs for rendering.
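Putting those steps together, a rough C++ sketch of the enumeration and per-adapter device creation (error handling and the window/swap-chain plumbing are omitted; this shows the general shape, not production code):

    #include <d3d11.h>
    #include <dxgi.h>

    void CreateDevicesForAllAdapters() {
        IDXGIFactory1* factory = nullptr;
        CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory);

        IDXGIAdapter1* adapter = nullptr;
        for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a) {
            ID3D11Device* device = nullptr;
            ID3D11DeviceContext* context = nullptr;
            // With an explicit adapter, the driver type must be UNKNOWN.
            D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                              nullptr, 0, D3D11_SDK_VERSION,
                              &device, nullptr, &context);

            IDXGIOutput* output = nullptr;
            for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o) {
                // Per output: create a host window, a swap chain
                // (IDXGIFactory::CreateSwapChain), and a render target view.
                output->Release();
            }
            // Hand (device, context) to a dedicated render thread here.
            adapter->Release();
        }
        factory->Release();
    }

Each render thread then loops over its outputs, binds that output's render target view with OMSetRenderTargets, draws its view of the scene, and presents the corresponding swap chain.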
Here are a bunch of links that might be of help when going through this process:
Display Different images per monitor directX 10
DXGI and 2+ full screen displays on Windows 7
http://msdn.microsoft.com/en-us/library/windows/desktop/ee417025%28v=vs.85%29.aspx#multiple_monitors
Good luck!
Maybe you don't need to upgrade DirectX.
See this article.
Enumerate the available adapters with IDXGIFactory, create an ID3D11Device for each, and then feed them from different threads. Should work fine.