I'm nearing the end of my first (very simple) game using SpriteKit.
I have noticed a long delay (about 10-13 seconds) at startup when running the app on a real device.
I am assuming this is the time taken to load resources and execute the code in initWithSize().
Is there a workaround, or what is the recommended approach, i.e. using a splash screen while the game loads?
I am using a texture atlas, but my understanding is that it is for optimising resource calls at runtime.
Cheers
Using the Instruments tool can help you locate where the bottlenecks in your code are. I doubt it is 10-13 seconds from app launch, as any app that takes 10 seconds or longer to launch is killed by the system.
Try to load resources intelligently. At launch, load only what is absolutely necessary for what I assume would be your menu. If there are variations for different gameplay modes, or similar assets used across multiple levels, load those up front; then, once a selection has been made, load any remaining initial resources.
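In SpriteKit terms, that could look roughly like this (just a sketch; the atlas names are placeholders, not from your project):

    import Foundation
    import SpriteKit

    // Preload only what the menu needs up front; pull gameplay atlases in later.
    let menuAtlas = SKTextureAtlas(named: "Menu")         // placeholder name
    let gameplayAtlas = SKTextureAtlas(named: "Gameplay") // placeholder name

    menuAtlas.preload {
        // Called once the menu textures are in memory.
        DispatchQueue.main.async {
            // present the menu scene here
        }
    }

    // Later, e.g. after the player has picked a mode or level:
    SKTextureAtlas.preloadTextureAtlases([gameplayAtlas]) {
        // start the gameplay scene
    }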
Also try to recycle resources where possible. For instance, if an enemy is destroyed by the player, don't destroy the object; reuse it. The memory it took up is likely to be needed again, and creating an object is more expensive than reusing one.
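A very rough pooling sketch to show the reuse idea (the class and texture names are made up):

    import SpriteKit

    // A tiny pool that hands back recycled enemy sprites instead of allocating new ones.
    final class EnemyPool {
        private var inactive: [SKSpriteNode] = []

        func spawn(in scene: SKScene, at position: CGPoint) -> SKSpriteNode {
            // Reuse an existing node if one is available; only allocate as a last resort.
            let enemy = inactive.popLast() ?? SKSpriteNode(imageNamed: "enemy")
            enemy.position = position
            scene.addChild(enemy)
            return enemy
        }

        func recycle(_ enemy: SKSpriteNode) {
            // "Destroying" the enemy just parks the node for later reuse.
            enemy.removeAllActions()
            enemy.removeFromParent()
            inactive.append(enemy)
        }
    }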
I found a lot of brilliant pointers on using the profiling tools in the WWDC talks; they are always a great resource.
I have an application that does some drawing via Direct2D and captures screen contents via the Desktop Duplication API. It is a C# application, but I use custom layered windows for my display, and I render via Direct2D to get around the "smart" drawing in WPF that was previously causing my application to not compete very aggressively for graphics resources.
If I run this application at the same time as a DirectX game that takes lots of graphics resources, then the behavior I get is as follows:
if my application is the foreground application, it works ok
if my application is visible but not the foreground application, then the calls to Map Subresource and Copy Resource take highly variable amounts of time, sometimes blocking for hundreds of milliseconds (according to the profiler). I am not sure I believe the outliers in the hundreds, but Map Subresource certainly averages 50 ms, so the capture rate drops below 20 FPS. When running in the foreground, the same call takes 4 ms on average.
I have already checked whether manipulating process and thread priorities does anything, but it does not. It appears that DirectX is throttling my work when I am not in the foreground, independent of priorities. This is a problem because my application needs to run at full frame rate without being in the foreground.
If it is strictly prioritizing GPU commands from the foreground application, then I could see my app being starved on the channels to/from the GPU, which would explain stalling like that. If I could somehow exempt this application from such policing, that would be optimal, for example some sort of manifest entry or DirectX call that tells the driver (or stack) not to treat my application as background.
I compared my code against OBS's (as far as I can work it out) and I don't see what they do differently. How does OBS manage to keep duplicating/capturing even though it is clearly not the foreground application when streaming even demanding games?
[edit]
The game in question is DCS, which always uses all available GPU resources if you let it, and uses about 1.5 cores. When I say it starts to slow down when not in the foreground, I am literally talking about clicking on a window from my app versus clicking in the game window (not even minimizing or anything). When I am in the foreground, DCS slows down as I consume resources. If DCS is in the foreground, it continues to render at its full rate and uses all resources, and my capture slows way down.
I have found that I can make things work well if I just limit DCS to 60 FPS via vsync, so that it can't use all the resources. Then everything works perfectly fine. Obviously that does not solve the problem, but it further suggests this is most likely resource contention. I am fine with competing for the resources, but DirectX (or Nvidia's driver?) does seem to prioritize the foreground app.
[/edit]
I encountered the exact same issue and found that it's because of Windows' Game Mode feature. Turning it off and rebooting fixed it for me.
Also, I was using the DirectX video processor for scaling my images, and its performance is highly inconsistent across driver updates, so for now I've decided to use the images as-is, without scaling.
I’m currently working on a tvOS app. This is my first native (Swift) app. The app will be a digital signage app, used during events or in offices of companies.
One big difference compared to a typical app on iOS/tvOS is that it needs to run pretty much 24/7, so memory is a big topic for this app. The smallest leak will eventually cause the app to crash.
The app is constantly looping through a set of fullscreen slides. At the bottom of the screen there is a ticker with 10 articles (refreshed every 10 seconds for now, during development). Below is a screenshot of the weather slide, to give an idea.

Currently the app is crashing after a period of time, and I’m pretty sure I’ve narrowed it down to the ticker component (when I disable it, the app lives for days). If I use the ‘Leaks’ preset in Instruments I get the following result:

It looks like it’s leaking Article instances. I’m recreating Article instances every 10 seconds and providing them to the ticker component. I think that is why new instances leak every ~10 seconds.
Before I started using the ‘Leaks’ preset in Instruments, I used the ‘Allocations’ preset; while using that, everything seemed fine to me. But I’m probably misreading the results…
Using allocations:

The way I read this is that currently 10 Article instances exist in memory, and 31 have existed but are cleaned up now - so I’m safe.
But the app still crashes.
I’ve read a lot about retain cycles and implemented weak/unowned where I believe I should.
So my question is not so much about code, but more about how to read this data: what does a leak mean in this context, and why don’t these ‘leaks’ show up as persistent objects in the Allocations window?
(tests are done on multiple devices + simulator)
If you see a steady (i.e., approximately n GB per minute or hour) increase in memory usage in Instruments, that is a good sign that objects are being created but not deallocated. Your allusion to weak and unowned vars makes me think that you know this, but you may not have found all the sources of your leak. I would suggest taking a few generation summaries in Instruments and looking at specific classes/objects in the heap allocations. Your problem classes will steadily increase in count and likely never decrease. Try to debug the problem from there.
As for what 'leak' means in this context, it's what it always means: your computer is not releasing memory resources. It may seem different, because we are used to thinking of a leak as something that eats through memory at a much faster rate (like an infinite loop running on four cores, or something), but that kind of leak and this one are actually the same thing; yours is just slower.
I’m back after weeks of trying to figure out what was wrong. The good news: I found my leak and solved it!
The issue was solved by removing a closure nested inside another closure that kept a reference to a variable from the outer closure, which caused a retain cycle.
I really don’t understand why I didn’t find it earlier; I asked a new question about this here: getting-different-data-in-instruments-based-on-method-of-profiling.
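For anyone hitting something similar, here is a minimal sketch of that kind of nested-closure cycle and the fix. The types and names are made up for illustration; this is not my actual code:

    import Foundation

    struct Article { let title: String }

    // Hypothetical loader standing in for whatever produces the ticker data.
    func loadArticles() -> [Article] { [] }

    final class Ticker {
        var articles: [Article] = []
        var onRefresh: (() -> Void)?

        func startRefreshing() {
            // The stored closure contains an inner closure that captures `self`
            // strongly, so self -> onRefresh -> inner closure -> self is a cycle.
            onRefresh = {
                DispatchQueue.main.async {
                    self.articles = loadArticles()
                }
            }
        }

        func startRefreshingFixed() {
            // A capture list on the outer closure breaks the cycle.
            onRefresh = { [weak self] in
                DispatchQueue.main.async {
                    self?.articles = loadArticles()
                }
            }
        }
    }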
First of all, there's not a lot of detail I can offer, so I realize this question may seem incomplete. At this stage, I'm really looking for any ideas. Frankly, I'm just baffled by this one.
I'm building a graphics-heavy app that really maxes out the CPU. CPU utilization on the devices tends to be around 150% according to Xcode (I know that sounds weird; it seems to be out of a possible 200% because the device has two cores). I've instrumented the tasks that do the most processing so I can see how long they take in the debug output. Also note that I am compiling with -Ofast (aggressive optimizations), even for debug builds.
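(The instrumentation is nothing fancy; think of a small timing helper roughly along these lines, written here as a sketch, with a made-up task name in the usage comment:)

    import Foundation
    import QuartzCore

    // Sketch of a per-task timing helper; prints how long a block of work takes.
    func timed<T>(_ label: String, _ work: () -> T) -> T {
        let start = CACurrentMediaTime()
        let result = work()
        let elapsedMs = (CACurrentMediaTime() - start) * 1000
        print("\(label): \(String(format: "%.2f", elapsedMs)) ms")
        return result
    }

    // Usage (processFrame is a hypothetical CPU-heavy task):
    // let output = timed("processFrame") { processFrame(input) }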
Here's the weird thing. About 5-10 seconds into running the CPU intensive mode of the app, everything slows down. It's very visible. Because of my instrumentation, I can see that suddenly everything takes about 3 times as long as it did before. It's pretty uniform across all tasks, and it doesn't speed back up. Here's the really weird thing. If I break execution in the debugger and resume, I get another 5-10 seconds of fast execution before it slows down again.
Looking at the CPU and memory usage reported by Xcode, everything stays about the same. The app uses no more than 90 MB of memory at any point.
Is there a feature of iOS that slows down CPU-intensive apps or underclocks the device to conserve battery life? I realize I'm sharing resources with the OS, but this is behavior I can reliably reproduce every time.
Again, I realize my question is vague, and there's no relevant code I can post. Any ideas about causes or even debugging methods are welcome.
First of all, thank you #thst for trying really hard to help me out. My question was kind of dumb since it really could have been anything.
I eventually solved the problem by rendering (via OpenGL, a detail I forgot to include, again showing how bad my question was) only when there is actually a change to the state of the objects and textures being rendered. It used to render at 60 FPS all the time.
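The idea, in GLKit terms, looks roughly like this (a sketch rather than my actual code; the class name and dirty flag are made up):

    import GLKit

    // Only drive the display link while something actually changed.
    final class RenderViewController: GLKViewController {
        private var sceneIsDirty = true

        // Call this whenever object or texture state changes.
        func markSceneDirty() {
            sceneIsDirty = true
            isPaused = false          // resume rendering
        }

        override func glkView(_ view: GLKView, drawIn rect: CGRect) {
            guard sceneIsDirty else {
                isPaused = true       // nothing changed; stop the 60 FPS loop
                return
            }
            glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
            // ... draw the objects and textures here ...
            sceneIsDirty = false
        }
    }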
The app also uses CIDetector to detect faces. I think, but I am not sure, that CIDetector uses the GPU to perform its detection. If so, there might have been some contention for GPU resources. CIDetector blocking on a wait may have caused slowdown throughout the app.
Thanks everyone.
I have released all the display objects, and every scene of mine gets destroyed after it runs. I have multiple screens in the app and I am using storyboard for transitions. When I transition from one scene to another, memory usage increases very slightly, but if I run my app for a long time, it starts hanging and sometimes responds very slowly.
Does this happen to you when running the storyboard sample? If so, you should file a bug report and consider using Director as an alternative in the meantime. If not, you should carefully review your code to ensure you are properly disposing of sprite sheets, audio, etc., and that you aren't using globals.
I am creating apps for the iPad and it's driving me crazy.
The memory available to an app changes depending on what other apps were run before it.
There is no reliable set amount of memory that can be used by your app.
For example, if Safari is run, then even after it closes it still takes up some memory, which affects other apps.
Does anyone know if there is a way to clear the memory before my app runs so I can get the same running environment every time?
I have created several prototype apps to show to other people, and it seems like after a few days they always come back to me, tell me that it crashes, and ask me to fix it.
When I test it, the reason is always that there is not enough memory (when there was enough before, while I was testing). So I need to squeeze every bit of memory out of the app (which usually hurts performance due to heavy loading and releasing) and tell them to restart their iPad if it keeps happening.
I read in a book that apps can generally use at most 40 MB or so; most of the apps that crash are crashing at around 27 MB. I want my remaining 13 MB!!
While you will get a pretty good state after a reboot, what you should really aim for is clean memory management and avoiding leaks.
Using the available memory wisely is solely up to the programmer. Don't tell your users to reboot the device, ever. And with every OS update, memory behavior might change anyway.
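One concrete habit that helps: respond to memory warnings by releasing anything you can rebuild later. A minimal sketch, where the view controller and cache property are just placeholders:

    import UIKit

    final class GalleryViewController: UIViewController {
        // Placeholder for whatever regenerable data this screen holds.
        private var imageCache: [String: UIImage] = [:]

        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Drop anything that can be reloaded or recomputed on demand.
            imageCache.removeAll()
        }
    }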