I'm scared to ask this question because it doesn't include specifics and doesn't have any code samples, but that's because I've encountered it on three entirely different apps that I've worked on in the past few weeks, and I'm thinking specific code might just cloud the issue.
Scoured the web and found no reference to the phenomenon I'm encountering, so I'm just going to throw this out there and hope someone else has seen the same thing:
The 'problem' is that all the iOS OpenGL apps I've built, to a man, run MUCH FASTER when I'm profiling them in Instruments than when they're running standalone. As in, a frame rate roughly twice as fast (jumping from, e.g., 30fps to 60fps). This is both measured with a code-timing loop and from watching the apps run. Instruments appears to be doing something magical.
This is on a device, not the iOS simulator.
If I profile my OpenGL apps via Instruments on a device (specifically, an iPad 3 running iOS 5.1), the frame rate is just flat-out much, much faster than when running standalone. There appears to be no frame skipping or shenanigans like that. It simply does the same computation at around twice the speed.
Although I'm not including any code samples, just assume I'm doing the normal stuff. OpenGL ES 2.0, with VBOs and VAOs. Multithreading some computationally intensive code areas with dispatch queues/blocks. Nothing exotic or crazy.
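To make "the normal stuff" concrete, the threading side is roughly this shape. This is a minimal sketch in Swift syntax; the queue label and function names are illustrative, not my actual code:

import Foundation

// Illustrative only: heavy per-frame computation runs on a background
// queue, and results are handed back to the main thread for GL upload.
let simulationQueue = DispatchQueue(label: "com.example.simulation", qos: .userInitiated)

func computeExpensiveGeometry() -> [Float] {
    // Stand-in for the real computationally intensive work.
    return (0..<1_000).map { Float($0) * 0.001 }
}

func uploadVertices(_ vertices: [Float]) {
    // Stand-in for the real glBufferData(...) upload.
    print("uploading \(vertices.count) floats")
}

func updateScene() {
    simulationQueue.async {
        let vertices = computeExpensiveGeometry()  // off the main thread
        DispatchQueue.main.async {
            uploadVertices(vertices)               // GL/UI work back on main
        }
    }
}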
I'd just like to know if anyone has experienced anything vaguely similar. If not, I'll just head back to my burrow and continue stabbing myself in the leg with a fork.
Could be that when you profile, a Release build is used by default, whereas just hitting Run uses a Debug build.
This is a follow-on to this question about using the DX11VideoRenderer sample (a replacement for EVR that uses DirectX 11 instead of EVR's DirectX 9).
I've been trying to track down why it uses so much more CPU than the EVR. Task Manager shows me that most of that time is kernel mode.
Using profiling tools, I see that a LOT of time is being spent in numerous calls to NtDelayExecution (aka Sleep). How many calls? ~100,000 over the course of ~12 seconds. Ok, yeah, I'm sending a lot of frames in those 12 seconds, but that's still a lot of calls, every one of which requires a kernel mode transition.
The callstack shows the last call in "my" code is to IDXGISwapChain1::Present(0, 0). The actual call seems to be Sleep(0) and comes from nvwgf2umx.dll (which is why this question is tagged NVidia: hopefully someone there can call up the code and see what the logic is behind such frequent calls).
I couldn't quite figure out why it would need to do any sleeping during Present. It's not like we wait for vertical retrace anymore, is it? But the other reason to use Sleep is to yield to other threads, which led me to a serious clue:
If I use D3D11_CREATE_DEVICE_PREVENT_INTERNAL_THREADING_OPTIMIZATIONS, the CPU utilization drops. Along with some other fixes, the DX11 version is now faster and uses less CPU time than the DX9 version (which is what I would hope/expect). Profiling shows that Sleep has dropped from >30% to <1%.
Unfortunately, this page tells me:
This flag is not recommended for general use.
Oh.
So, any ideas on how to get decent performance without using debug flags?
I am writing a game in Swift using SpriteKit with Xcode 7.3 on a Mac mini running El Capitan (both updated in last couple of days).
Shortly after I started, my Mac mini started crashing. The error log indicated that a Kernel Panic had occurred, likely due to a memory leak, & the process was identified as Xcode. Looking at the Activity Monitor, I can see that when this specific app is loaded in Xcode, the memory used by Xcode increases fairly rapidly even though the app is just sitting there doing nothing.
When other apps are loaded & idle, the memory usage stays more or less constant as you would expect. I have Googled for clues for several hours but can find only info about memory leaks when an app is running & how to detect them with Instruments.
This is a weird one as far as I'm concerned. I do not have any idea how to start to figure out what's causing this other than starting with a fresh project & gradually adding code to see if/where it starts happening again. I would appreciate any ideas other Xcode users may have.
You don't happen to have
skView.showsPhysics = true
turned on?
That is known to cause memory issues exactly like you described.
There are a number of things you can do to start diagnosing this issue. Firstly, you say that only one app is doing this, which would indicate that the problem is an app problem rather than Xcode. This is a good thing :-)
Next I would start using the profiler to monitor the memory and allocated objects in the app. Try taking snapshots of the memory at regular intervals and look at what has been allocated since the prior snapshot. The profiler can then help you dig down into the leaking objects and see where they are being retained and released. This might give you the clue you need.
Another thing to try is the profiler's Leaks instrument. That also might tell you what's going on inside your app.
Lastly, is there anything in your code that executes in some sort of loop? Something that animates on the home screen, for example. Perhaps that is leaking.
Thanks very much to Drekka & Adrian B for your speedy answers but, as always happens, as soon as I post a question, I stumble across some information that leads to an understanding & (in this case partial) solution. Looking for an answer to a different question, I came across a thread on the Apple Developers Forum where several others are reporting the same issue.
It is related to the use of the SpriteKit Scene Editor. Hence it is app-related in that, for me, it occurred with the only app for which I have ever used the Scene Editor, but in reality it is an Xcode bug. What happens is that if the Scene Editor window is open (i.e. the .sks file is selected), even if the scene is blank, Xcode starts to leak memory at a rate of about 1 MB/sec. If you close that window, the leakage stops. It happens even if Xcode is minimized. Apparently, it has been reported as a bug. I guess the workaround for now is to accept the leakage while you're modifying your scene & then close the editor when you've finished (or do everything in code).
Cheers,
RB
First of all, there's not a lot of detail I can offer, so I realize this question may seem incomplete. At this stage, I'm really looking for any ideas. Frankly, I'm just baffled by this one.
I'm building a graphics-heavy app that really maxes out the CPU. CPU utilization on the devices tends to be around 150% according to Xcode (I know that sounds weird; it seems to be out of a possible 200%, because the device has two cores). I've instrumented the tasks that do the most processing so I can see how long they take in the debug output. Also note that I am compiling with -Ofast (aggressive optimizations), even for debug builds.
Here's the weird thing. About 5-10 seconds into running the CPU intensive mode of the app, everything slows down. It's very visible. Because of my instrumentation, I can see that suddenly everything takes about 3 times as long as it did before. It's pretty uniform across all tasks, and it doesn't speed back up. Here's the really weird thing. If I break execution in the debugger and resume, I get another 5-10 seconds of fast execution before it slows down again.
Looking at the CPU and memory usage reported by Xcode, everything stays about the same. The app uses no more than 90 MB of memory at any point.
Is there a feature of iOS that slows down CPU intensive apps or underclocks the device to conserve battery life? I realize I'm sharing resources with the OS, but this is behavior I can reliably reproduce every time.
Again, I realize my question is vague, and there's no relevant code I can post. Any ideas about causes or even debugging methods are welcome.
First of all, thank you #thst for trying really hard to help me out. My question was kind of dumb since it really could have been anything.
I eventually solved the problem by rendering (via OpenGL, a detail I forgot to include, again showing how bad my question was) only when there is actually a change to the state of the objects and textures being rendered. It used to render at 60FPS all the time.
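For anyone hitting the same thing, the fix was essentially a dirty-flag render loop. Here's a rough Swift sketch of the idea (class and method names are mine for illustration, not the app's actual code):

import UIKit

final class RenderLoop: NSObject {
    private var displayLink: CADisplayLink?
    private var sceneIsDirty = true  // draw the first frame

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(step))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    // Call this whenever the state of the rendered objects/textures changes.
    func setNeedsRender() {
        sceneIsDirty = true
    }

    @objc private func step(_ link: CADisplayLink) {
        // Skip the OpenGL draw entirely when nothing changed,
        // instead of redrawing identical frames at 60FPS.
        guard sceneIsDirty else { return }
        sceneIsDirty = false
        render()
    }

    private func render() {
        // glClear/glDraw*/presentRenderbuffer would go here.
    }
}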
The app also uses CIDetector to detect faces. I think, but I am not sure, that CIDetector uses the GPU to perform its detection. If so, there might have been some contention for GPU resources. CIDetector blocking on a wait may have caused slowdown throughout the app.
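If that theory is right, one possible mitigation (untested, and resting on my assumption that CIDetector uses the GPU here) would be pinning Core Image to a software-rendered context so detection stops competing with OpenGL:

import CoreImage

// Assumption: a software-rendered context keeps CIDetector off the GPU.
// Accuracy is lowered to reduce the extra CPU cost of doing so.
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])
let detector = CIDetector(ofType: CIDetectorTypeFace,
                          context: cpuContext,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyLow])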
Thanks everyone.
All over this document Apple mentions that iOS terminates apps under certain conditions, and the most popular reason seems to be freeing up some RAM. That causes issues for apps that do not implement state restoration: some of the content the user was working on and stepped away from for a moment can easily be erased. There's even a 16-page thread on the Apple forums where users complain about that.
Is anyone aware why iOS actually terminates apps instead of moving the memory they occupy onto disk (swap)?
Does termination actually provide considerable performance improvement compared to other means?
What you are describing is paging, or more accurately, page swapping. The iOS version of BSD Unix does not page out to disk, for lots of reasons. Here are a few educated guesses:
It's too power-hungry for a mobile device.
Flash memory can't handle the churn involved in paging. Flash memory has a limited number of lifetime write cycles per storage location, and paging would chew through the life of the flash chip.
As the other poster pointed out, swapping to disk would use up available disk space, which is also limited. Not a problem when you have a 500 GB drive, but it is a big problem on a device with only 16 GB of flash storage and 1 GB of RAM.
You're not going to get an answer for this question here. Apple don't explain the inner workings of iOS and anything else is going to be guesswork.
Here's my guesswork:
iOS is a heavily resource constrained environment. Memory is limited but so is disk space - a 16GB iPhone has 1GB of RAM, so "swapping to disk" isn't really something that can be freely applied. When do you stop? How do you know this isn't already being done, but there is only a limited swap in place?
The primary goal of iOS has always been to prioritise responsiveness of the foreground app. Anything other than warning, then closing, background apps would probably impact this too much. If there are 15 apps in the background, imagine the processor load of nicely swapping out the memory of each process.
Because RAM that was swapped onto the disk would be much slower. It's better to cut the program than to have it run poorly. I think that answers both questions.
Thanks everybody for the responses. I had to do some research to answer this question myself, though, as I was looking for more of the understanding that feeds into the "app termination" decision. I know there are some smart people working at Apple, but it always helps me to understand the reason something is built "this way" rather than just accepting it.
It came down to these two questions:
Why does iOS terminate apps instead of freeing memory by paging out (swapping)?
Does termination provide a considerable performance win?
To understand that, I dug a bit into the history of the iPhone. There was a video accessible on iTunes; unfortunately, the link does not work anymore. Anyway, the video introduced the very first version of multitasking on the iPhone 3G (or was it the 3GS? I'm not sure which device first supported multitasking).
Nowadays iPhone devices are quite advanced in terms of hardware; they are actually more advanced than some desktops we had 7-10 years ago, which had incorporated swapping long before. But if we look at the first iPhone releases, those were not that advanced in terms of hardware. The iPhone 3G had a 620 MHz ARM and 128 MB of RAM; the first-generation iPod touch had a 400 MHz ARM. And multitasking was supposed to run on all the devices of that time.
If we take a look at iOS, it has always prioritised smoothness of animations; looking at that hardware, it would have been challenging to keep the device snappy and responsive while also swapping out the memory of background applications, so it seems very logical and very fair to terminate apps. A year or two later, Apple provided APIs to facilitate implementing state restoration.
But if we look at the current iPhones and iPads, they do have enough power to avoid terminating apps and instead drop their memory onto disk without any slowdown in animations or foreground-app performance. Why not add that on the latest devices? I assume this is common in the software industry: new features are often prioritised higher than improvements to existing workflows. Apple has been releasing MobileMe, support for Retina displays, Auto Layout, iCloud, so I can understand that improvements to already existing features have been sacrificed.
The issue with apps that don't provide state restoration is easily solved by providing state restoration.
Just killing apps when the system runs out of memory is a huge performance gain. Consider that the system usually runs out of memory when you launch another app, and any action taken instead of killing old apps would have to happen before launching the new app; that's about the most performance-critical point in time.
And for at least five years you have been told that when your app goes to the background, you should store just enough state to come back to that state if your app is restarted.
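For reference, that boils down to something like this minimal sketch (the controller and key names here are hypothetical; a real app also returns true from the app delegate's shouldSaveApplicationState/shouldRestoreApplicationState):

import UIKit

class DocumentViewController: UIViewController {
    var draftText = ""

    override func viewDidLoad() {
        super.viewDidLoad()
        // UIKit only restores controllers that have an identifier.
        restorationIdentifier = "DocumentViewController"
    }

    // Called when the app goes to the background: save just enough
    // state to get back to what the user was doing.
    override func encodeRestorableState(with coder: NSCoder) {
        super.encodeRestorableState(with: coder)
        coder.encode(draftText, forKey: "draftText")
    }

    // Called on relaunch if iOS terminated the app while backgrounded.
    override func decodeRestorableState(with coder: NSCoder) {
        super.decodeRestorableState(with: coder)
        draftText = coder.decodeObject(forKey: "draftText") as? String ?? ""
    }
}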
I am creating apps for the iPad and it's driving me crazy.
The memory that is usable by apps changes depending on what other apps were run before it.
There is no reliable set amount of memory that can be used by your app.
i.e. if Safari is run, then even after it closes it takes up some amount of memory, which affects other apps.
Does anyone know if there is a way to clear the memory before my app runs so I can get the same running environment every time?
I have created several prototype apps to show to other people and it seems like after a few days they always come back to me and tell me that it crashes and to fix it.
When I test it, the reason is always that there is not enough memory (when there was enough before, when I was testing). So I need to squeeze every bit of memory out of the app (which usually affects performance due to heavy loading and releasing) and tell them to restart their iPad if it continues to happen.
I read in a book that apps can generally use at most 40 MB or so; most of the apps that crash are crashing at around 27 MB. I want my remaining 13 MB!!
While you will get a pretty good state after a reboot, what you really should look for is clean memory management and avoiding leaks.
Using the available memory wisely is solely up to the programmer. Don't tell your users to reboot the device, ever. And with every update of the OS memory things might change anyway.
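Concretely, using memory wisely mostly means dropping anything you can cheaply rebuild the moment iOS warns you, rather than telling users to reboot. A minimal sketch (the cache and controller here are hypothetical):

import UIKit

class GalleryViewController: UIViewController {
    // NSCache already evicts under pressure, but clearing it
    // explicitly on a warning buys headroom immediately.
    private let thumbnailCache = NSCache<NSString, UIImage>()

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        thumbnailCache.removeAllObjects()  // recreatable data only
    }
}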