I'm using Instruments to profile the CPU activity of an iOS game.
The problem I'm having is that I'm not entirely sure what the data I'm looking at represents.
Here is the screen I see after running my game for a couple of minutes:
I can expand the call tree to see exactly which methods are using the most CPU time. I'm unsure whether this data represents CPU usage for the entire duration the profiler was running or just at that point in time.
I've tried running the slider along the timeline to see what effect that has on the numbers, and it doesn't seem to have any. That leads me to believe the data represents CPU usage for the whole duration the game was running.
If this is the case, is it possible to see CPU usage at a particular point in time? There are a few spikes along the timeline, and I would like to see exactly what was happening at those moments to see whether there are any improvements I can make.
Thanks in advance for any responses.
To select a time range, use the "inspection range" buttons at the top of the window (to the left of the stopwatch).
First click on the graph ruler at the start of the range, then press the leftmost button to set the left edge. Next, click on the graph ruler at the end of the range and press the rightmost button to set the right edge.
Is it possible to do an effect like the one in the provided picture, where the screen glitches at certain time intervals? Also, would it be possible if there are many things going on on the screen (many separate moving parts, such as those shown below)?
local ball
local background
local goal
local scoreTxt
You could take a screen capture of the group, slice it up horizontally, and then adjust the x positions slightly for each slice (see the sketch after the links below), but this would be a little CPU intensive.
To capture/save the screen:
https://docs.coronalabs.com/api/library/display/capture.html
To import the saved screenshot and slice it up:
https://docs.coronalabs.com/guide/media/imageSheets/index.html
As for the greenish/purple linear effects, you might have to manually pre-create them for each object, and show them before you take a screen capture, then hide them right away.
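Purely as illustration, here is a rough sketch of that slicing idea, written in UIKit/Swift rather than Corona/Lua (in Corona you would use the display.capture and image-sheet APIs linked above instead); all names are invented:

    import UIKit

    // Sketch only: capture a view, cut the bitmap into horizontal strips and
    // jitter each strip's x position. `sourceView` stands in for the group
    // containing the ball, background, goal and score text.
    func makeGlitchedSnapshot(of sourceView: UIView,
                              sliceHeight: CGFloat = 8,
                              maxOffset: CGFloat = 6) -> UIView {
        // 1. Capture the view into a bitmap.
        let renderer = UIGraphicsImageRenderer(bounds: sourceView.bounds)
        let snapshot = renderer.image { _ in
            _ = sourceView.drawHierarchy(in: sourceView.bounds, afterScreenUpdates: true)
        }
        guard let cgImage = snapshot.cgImage else { return UIView() }

        // 2. Slice the bitmap and offset each slice horizontally.
        let container = UIView(frame: sourceView.bounds)
        let scale = snapshot.scale                    // points -> pixels
        var y: CGFloat = 0
        while y < sourceView.bounds.height {
            let sliceH = min(sliceHeight, sourceView.bounds.height - y)
            let cropRect = CGRect(x: 0, y: y * scale,
                                  width: snapshot.size.width * scale,
                                  height: sliceH * scale)
            if let sliceImage = cgImage.cropping(to: cropRect) {
                let slice = UIImageView(image: UIImage(cgImage: sliceImage,
                                                       scale: scale,
                                                       orientation: .up))
                slice.frame = CGRect(x: CGFloat.random(in: -maxOffset...maxOffset),
                                     y: y,
                                     width: sourceView.bounds.width,
                                     height: sliceH)
                container.addSubview(slice)
            }
            y += sliceHeight
        }
        return container
    }

Adding the returned container over the original group for a frame or two, then removing it, gives the intermittent glitch; regenerating it every frame is what makes the effect CPU intensive.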
In my app, a user can "speed-read" text by having words flashed on the screen at a speed that they set. I have coded up this functionality in my UIViewController using a repeating NSTimer that updates a UILabel with the word at the next index, but it's not going as fast as it should.
For example, I tested it with 100 words at 1000 words per minute. Instead of taking 6 seconds like it should, it takes 6.542045 seconds to finish flashing all of the words. This is a big problem, since I'm supposed to report back to the user how long it took them to read the text.
How do I find out what part of the code is taking so long? Is it the updating of the UILabel that's eating up the extra 0.54-odd seconds?
EDIT
My sample project can be viewed here: https://github.com/cnowak7/RSVPTesting
The flashText method that I have should be firing only 100 times (well, 101 if we count the call where the method realizes there are no more words and invalidates the NSTimer). In the console, at the end of a reading, I can see that the method is actually fired 111 times. I think I might be doing this the wrong way.
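A minimal sketch of one way to make the timing drift-proof, with invented names (this is not code from the linked project): derive the current word index and the reported duration from a monotonic clock rather than from the number of timer fires.

    import QuartzCore

    // Sketch only: a repeating timer can fire late (or, as above, more often
    // than expected), but the clock cannot drift.
    final class FlashClock {
        private let interval: CFTimeInterval      // seconds per word
        private var start: CFTimeInterval = 0

        init(wordsPerMinute: Double) {
            interval = 60.0 / wordsPerMinute      // 1000 wpm -> 0.06 s per word
        }

        func begin() {
            start = CACurrentMediaTime()
        }

        // Index of the word that should be on screen right now.
        var currentIndex: Int {
            Int((CACurrentMediaTime() - start) / interval)
        }

        // True elapsed time to report to the user, independent of fire count.
        var elapsed: CFTimeInterval {
            CACurrentMediaTime() - start
        }
    }

On each timer fire you would display words[clock.currentIndex] (clamped to the last index) and stop once currentIndex reaches words.count, reporting clock.elapsed as the reading time instead of fireCount * interval.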
Your specific question seems to be: How do I find out what part of the code is taking so long? Is it the updating of the UILabel that's eating up the extra 0.54-odd seconds?
Inside Instruments, provided with Xcode, is a Time Profiler tool.
https://developer.apple.com/library/ios/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/Instrument-TimeProfiler.html
You can run your code and watch this tool to see exactly how much time is spent executing every part of your routines. It breaks down which methods are taking the most time, both as a percentage of overall time and as concrete time spans, giving you a precise understanding of where to focus your refactoring and optimization efforts to shave off those precious fractions of a second.
I'm an Objective-C guy, so rather than try to muddle my way through a Swift example, I'll let this guy do the talking.
https://www.raywenderlich.com/97886/instruments-tutorial-with-swift-getting-started
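As a supplement, and purely as a hedged sketch with made-up subsystem and function names (this is not code from the question), you can also wrap the suspect label update in signposts so it shows up as a labelled interval in Instruments alongside the Time Profiler data:

    import os.signpost
    import UIKit

    // The subsystem string is invented; .pointsOfInterest makes the intervals
    // visible in Instruments' Points of Interest track (iOS 12+).
    let signpostLog = OSLog(subsystem: "com.example.rsvp", category: .pointsOfInterest)

    func flashNextWord(_ word: String, on label: UILabel) {
        let id = OSSignpostID(log: signpostLog)
        os_signpost(.begin, log: signpostLog, name: "updateLabel", signpostID: id)
        label.text = word   // the operation we suspect is eating the time
        os_signpost(.end, log: signpostLog, name: "updateLabel", signpostID: id)
    }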
Whenever you want to know about time consumption on iOS, you should go to Instruments and select the Time Profiler template, as shown in the image.
Time Profiler will help you get to the code that is taking too much time.
I'd appreciate some help on how I should interpret some results I get from Time Profiler and Activity Monitor. I couldn't find anything on this on the site, probably because the question is rather specific. However, I imagine I'm not the only one unsure what to read into the spikes they get in Time Profiler.
I'm trying to figure out why my game is having regular hiccups on the iPhone 4. I'm trying to run it at 60 FPS, so I know it's tricky on such an old device, but I know some other games manage that fine. I'm using Unity, but this is a more general question about interpreting Instruments results. I don't have enough reputation to post images, and I can only post two links, so I can't post everything I'd like.
Here is what I get running my game on Time Profiler:
Screenshot of Time Profiler running my game
As far as I understand (but please correct me if I'm wrong), this graph is showing how much CPU my game uses during each sample the Time Profiler takes (I've set the samples to be taken once per millisecond). As you can see, there are frequent downward spikes in that graph, which (based on looking at the game itself as it plays) coincide with the hiccups in the game.
Additionally, the spikes are more common while I touch the device, especially if I move my finger on it continuously (which is what I did while playing our game above). (I couldn't make a comparable non-touching version because my game requires touching, but see below for a comparison.)
What confuses me here is that the spikes are downward: if my code were inefficient, doing too many calculations on some frames, I'd expect to see upward spikes, not downward. So here are the theories I've managed to come up with:
1) The downward spikes represent something else stealing CPU time (a background task, the CPU's clock speed varying, or something similar). Because less time is available for my processing, I get hiccups, and it also shows up as my app using less CPU.
2) My code is in fact inefficient, causing spikes every now and then. Because the processing isn't finished within one frame, it continues into the next, but only needs a little extra time there. That means that on that second frame it uses less CPU, resulting in a downward spike. (It is my understanding that iOS frames are always of equal length, say 1/60 s, so the third frame cannot start early even if we spent just a little extra time on the second.)
3) This is just a sampling problem, caused by the fact that the sampling frequency is 1ms while the frame length is about 16ms.
The first two theories would make sense to me, and would also explain why our game has hiccups while some lighter games don't: 1) lighter games would not suffer as badly from stolen CPU time, because they don't need that much CPU to begin with; 2) lighter games don't have as many spikes of their own.
However, some other tests seem to go against each of these theories:
1) If frames always get stolen like this, I'd expect similar spikes to appear on other games too. However, testing with another game (from the App Store, also using Unity), I don't get them (I had an image to show that but unfortunately I cannot post it).
Note: This game has lots of hiccups while running in the Time Profiler as well, so hiccups don't seem to always mean downward spikes.
2) To test the hypothesis that my app is simply spiking, I wrote a program (again in Unity) that wastes a fixed number of milliseconds per frame (by running a loop until the specified time has passed according to the system clock; a sketch of this follows after the screenshot below). Here's what I get in Time Profiler when I make it waste 8 ms per frame:
Screenshot of Time Profiler running my time waster app
As you can see, the downward spikes are still there, even though the app really shouldn't be able to cause spikes. (You can also see the effect of touching here, as I didn't touch it for the first half of the visible graph, and touched it continuously for the second.)
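The original time waster was written in Unity/C#; purely for reference, here is a hedged Swift sketch of the same busy-wait idea, with all names invented and a CADisplayLink standing in for Unity's per-frame Update:

    import UIKit
    import QuartzCore

    // Sketch only: burn a fixed CPU budget every frame by spinning until the
    // monotonic clock says the budget is spent, mimicking the 8 ms test above.
    final class TimeWaster {
        private var displayLink: CADisplayLink?
        var budgetMilliseconds: Double = 8

        func start() {
            displayLink = CADisplayLink(target: self, selector: #selector(step))
            displayLink?.add(to: .main, forMode: .common)
        }

        @objc private func step(_ link: CADisplayLink) {
            let deadline = CACurrentMediaTime() + budgetMilliseconds / 1000.0
            while CACurrentMediaTime() < deadline {
                // Busy-wait: keeps the CPU fully occupied for the budget.
            }
        }
    }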
3) If this were due to a mismatch between the frame rate and the sampling rate, I'd expect a lot more oscillation. Surely my app would use 100% of the milliseconds until it's done with a frame, then drop to zero?
So I'm pretty confused about what to make of this. I'd appreciate any insight you can provide into this, and if you can tell me how to fix it, all the better!
Best regards,
Tommi Horttana
Have you tried Unity's profiler? Does it show similar results? Note that Unity3D has two profilers on iOS:
editor profiler - Pro only (but there is a 30-day trial)
internal profiler - you have to enable it in the Xcode project's source
Look at http://docs.unity3d.com/Manual/MobileProfiling.html; maybe something there will give you a hint.
If I had to guess, I'd check one of the most common sources of timing hiccups: the Mono garbage collector.
Try running it yourself at a set frequency (say, every 250 ms) and see if there is a difference in the pattern:
System.GC.Collect(); // force a collection now, so pauses happen at times you control
I'm trying to learn how to use Instruments. I wonder if I can get some opinions or insight as to what is going on here.
Firstly, shortly after 02:00 my app crashes due to creating many high-resolution views. You can see the graph peak when the views are created: I think around 20-30 rendered views at approximately 1000 points each.
My question is this: note how the graph peaks and flattens at the end of the track (see the red arrow), starting just before 02:00 when the views are created. Does this mean that the device (an iPhone 5) has "run out of memory"? I see that the total listed under "All Allocations" is 17.76 MB. Could this be the reason for the crash? Or is it the graphics crashing?
does this mean that the device (an iPhone 5) has "run out of memory"?
No. The graph is scaled relative to the peak amount of active memory used in that run, not to the device's total memory. Illustration: you would likely see the allocation level shown at 1 minute "shrink" once you start creating all those views around the 2-minute mark, because the graph rescales to the new peak.
As the Instruments trace below shows, I have some regular plunges in my Physical Free Memory. At the same time, the percent user load rises. Allocations remain steady, but the tabular listings show that some operations have a high peak-to-average ratio.
I don't understand enough about Instruments to identify what activity is happening at the times the free memory plunges. I can look at Time Profiler in the detail pane, but that appears to be cumulative from the beginning. I would like to see what's happening over the narrow window of time where memory use goes up.
Also, I don't understand why the track for the Time Profiler shows no activity during these times when memory use goes up, while the Activity Monitor shows high activity.
Would someone provide some guidance on how to interpret this, and how to get more out of it, so I can understand the memory-usage problem? Thanks.
Click the button below the tracks ("Statistics"?) and choose "Objects List".
Order the table by Timestamp.
Zoom in on the track for the region you want to analyze.
Click in the track/graph and Instruments will jump to that time.
You may also benefit by using Heapshot Analysis.