My app has a fixed size of around 100 MB, which is normal.
However, I noticed the "Files and data" category increasing EXTREMELY fast after the app had been opened only two or three times (up to 300 MB, and it even went to 4 GB!).
Since I have a lot of code and I don't know how to pinpoint what is increasing the app's size so much, I won't post any code (unless requested). Could you please tell me what kinds of actions usually create this sort of problem, and which Instruments tool I can use to track it down?
I would be glad if someone could help me out.
I can't share much code because it's proprietary, but this is a bug that's been haunting me for a while. We have SceneKit geometry added to the ARKit face node and displayed inside an ARSCNView. It works perfectly almost all of the time, but about 1 in 100 times, nothing shows up at all. The ARSession is running, and none of the parent nodes are set to hidden. Furthermore, when I look at the Debug Memory Graph in Xcode, the geometry appears to be fully present there (and doesn't seem to be hidden). I can see all the nodes attached to the face node perfectly well within the ARSCNView in the memory graph, but on the screen, nothing shows up. This has been an issue across multiple iOS versions, so it didn't just appear with a recent update.
Has anybody run into a similar problem, or does anybody have any ideas about what to look into? Is it an Apple bug, or is there a timing issue I might not be aware of? It's been really hard to debug because of how infrequently it occurs, and I haven't found it discussed on any other forums (but point me in the right direction if there is a previous discussion). Thanks!
This is pretty common if AR tracking is poor for some reason.
I ran into a similar problem too. I think it's definitely a tracking error that arises through the fault of the app's user. Sometimes, if you're using a world-tracking configuration in ARKit and scan the surrounding environment carelessly, or if you're tracking under inappropriate conditions, you get sloppy tracking data, which can leave your world grid/axes unpredictably shifted to the side and your model flying off somewhere. If that situation arises, look for your model somewhere nearby; it may even be behind you.
If you're using a device with a LiDAR scanner, the aforementioned situation is almost impossible, but on a device without LiDAR you need to scan your room thoroughly. You also need good lighting conditions and high-contrast real-world objects with distinguishable, non-repetitive textures.
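As a quick diagnostic, you can also listen for the tracking quality ARKit itself reports and warn the user before placing content. A minimal Swift sketch (the class name and messages are mine):

    import ARKit

    // Watch ARKit's reported tracking quality; content placed while
    // tracking is limited is exactly what tends to drift or fly away.
    class TrackingMonitor: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
            switch camera.trackingState {
            case .normal:
                print("Tracking is good; placed content should stay put.")
            case .limited(let reason):
                print("Tracking limited (\(reason)); expect drift, rescan the scene.")
            case .notAvailable:
                print("Tracking not available yet.")
            }
        }
    }

Assign an instance as the session's delegate (e.g. sceneView.session.delegate) and keep a strong reference to it.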
Let me start off by showing the UIImageView I have set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is, when the user taps (but doesn't release) the button, have the appropriate body part show like this:
I can achieve this using 2 options:
Using the UIBezierPath class to draw the highlights. This would take a lot of trial and error, and many overlapping shapes per body part, to get everything fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
Cropping the highlighted body parts out of the original image and positioning them over the UIImageView depending on which UIButton is selected. There would be one image per body part, but this is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be the BETTER option in terms of CPU processing and memory allocation?
In other words, I'm concerned about my app lagging, as well as its storage footprint. I'm not concerned with how long it takes to implement; I just want to make sure my app doesn't stutter when it draws all the shapes.
Thanks.
It is very very very unlikely that either of those approaches would have any significant impact on CPU or memory. Particularly if in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither of the approaches would drop you below the max screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM should be a drop in the bucket compared to what you have available, particularly on any iOS device released in the last 5 years unless it's the Apple Watch.
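To illustrate how light option 2 can be, here's a rough Swift sketch (hypothetical names; "arm_cutout" would be a cutout asset whose alpha channel defines the body part's shape):

    import UIKit

    // Lay a tinted, semitransparent cutout over the base diagram.
    // Rendering as a template image means only the alpha channel is
    // used; tintColor supplies the highlight color.
    func showHighlight(on bodyImageView: UIImageView, cutoutNamed name: String) {
        let highlight = UIImageView(frame: bodyImageView.bounds)
        highlight.image = UIImage(named: name)?.withRenderingMode(.alwaysTemplate)
        highlight.tintColor = UIColor.red.withAlphaComponent(0.4)
        bodyImageView.addSubview(highlight)
    }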
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has testing functionality built in (including performance tests), so the best approach is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time it's a lot easier to implement.
For a quick start on tests, see here; for performance tests, see here.
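A minimal sketch of such a test, using XCTest's measure block (the bodies are stand-ins for your real option 1 and option 2 drawing code, and "arm_cutout" is a placeholder asset):

    import XCTest
    import UIKit

    class HighlightPerformanceTests: XCTestCase {
        // Option 1: build and fill a bezier-path highlight offscreen.
        func testBezierDrawingPerformance() {
            measure {
                let renderer = UIGraphicsImageRenderer(size: CGSize(width: 300, height: 600))
                _ = renderer.image { _ in
                    let path = UIBezierPath(ovalIn: CGRect(x: 50, y: 50, width: 200, height: 400))
                    UIColor.red.withAlphaComponent(0.4).setFill()
                    path.fill()
                }
            }
        }

        // Option 2: composite a cutout image over the same canvas.
        func testCutoutOverlayPerformance() {
            measure {
                let renderer = UIGraphicsImageRenderer(size: CGSize(width: 300, height: 600))
                _ = renderer.image { _ in
                    UIImage(named: "arm_cutout")?.draw(at: .zero, blendMode: .normal, alpha: 0.4)
                }
            }
        }
    }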
I'd appreciate some help on how I should interpret some results I get from Time Profiler and Activity Monitor. I couldn't find anything on this on the site, probably because the question is rather specific. However, I imagine I'm not the only one not sure what to read into the spikes they get on the Time Profiler.
I'm trying to figure out why my game is having regular hiccups on the iPhone 4. I'm trying to run it at 60 FPS, so I know it's tricky on such an old device, but I know some other games manage that fine. I'm using Unity, but this is a more general question about interpreting Instruments results. I don't have enough reputation to post images, and I can only post two links, so I can't post everything I'd like.
Here is what I get running my game on Time Profiler:
Screenshot of Time Profiler running my game
As far as I understand (but please correct me if I'm wrong), this graph is showing how much CPU my game uses during each sample the Time Profiler takes (I've set the samples to be taken once per millisecond). As you can see, there are frequent downward spikes in that graph, which (based on looking at the game itself as it plays) coincide with the hiccups in the game.
Additionally, the spikes are more common while I touch the device, especially if I move my finger on it continuously (which is what I did while playing our game above). (I couldn't make a comparable non-touching version because my game requires touching, but see below for a comparison.)
What confuses me here is that the spikes are downward: if my code were inefficient, doing too many calculations on some frames, I'd expect to see upward spikes, not downward ones. So here are the theories I've managed to come up with:
1) The downward spikes represent something else stealing CPU time (like, a background task, or the CPU's speed itself varying, or something). Because less time is available for my processing, I get hiccups, and it also shows as my app using less CPU.
2) My code is in fact inefficient, causing spikes every now and then. Because the processing isn't finished within one frame, it spills over into the next, but only needs a little extra time there. That means that on the second frame, the app uses less CPU, which shows as a downward spike. (It is my understanding that iOS frames are always of equal length, say 1/60 s, so the third frame cannot start early even if we spent just a little extra time on the second.)
3) This is just a sampling problem, caused by the fact that the sampling frequency is 1ms while the frame length is about 16ms.
The first two theories would make sense to me, and would also explain why our game has hiccups while some lighter games don't: 1) lighter games wouldn't suffer as badly from stolen CPU time, because they don't need that much CPU to begin with; 2) lighter games don't have as many spikes of their own.
However, some other tests seem to go against each of these theories:
1) If frames always get stolen like this, I'd expect similar spikes to appear on other games too. However, testing with another game (from the App Store, also using Unity), I don't get them (I had an image to show that but unfortunately I cannot post it).
Note: This game has lots of hiccups while running in the Time Profiler as well, so hiccups don't seem to always mean downward spikes.
2) To test the hypothesis that my app is simply spiking, I wrote a program (again in Unity) that wastes a consistent number of milliseconds per frame (by running a loop until the specified time has passed according to the system clock). Here's what I get in Time Profiler when I make it waste 8 ms per frame:
Screenshot of Time Profiler running my time waster app
As you can see, the downward spikes are still there, even though the app really shouldn't be able to cause spikes. (You can also see the effect of touching here, as I didn't touch it for the first half of the visible graph, and touched it continuously for the second.)
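(For reference, the time-waster is essentially just a loop that spins until a deadline. My actual version is Unity/C# and isn't shown here, but the idea, sketched in Swift, is roughly:)

    import Foundation

    // Spin until the given number of milliseconds has elapsed,
    // simulating a frame that consistently costs that much CPU.
    func wasteMilliseconds(_ ms: Double) {
        let deadline = Date().addingTimeInterval(ms / 1000.0)
        while Date() < deadline {
            // busy-wait: burn CPU without yielding
        }
    }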
3) If this were due to a mismatch between the frame rate and the sampling rate, I'd expect a lot more oscillation. Surely my app would use 100% of the milliseconds until it finished a frame, then drop to zero?
So I'm pretty confused about what to make of this. I'd appreciate any insight you can provide into this, and if you can tell me how to fix it, all the better!
Best regards,
Tommi Horttana
Have you tried Unity's profiler? Does it show similar results? Note that Unity3D has two profilers on iOS:
editor profiler - Pro only (but there is a 30-day trial)
internal profiler - you have to enable it in the Xcode project's source
Take a look at http://docs.unity3d.com/Manual/MobileProfiling.html; maybe something there will give you a hint.
If I had to guess, I'd check one of the most common sources of timing hiccups: the Mono garbage collector.
Try running it yourself at a set frequency (say, every 250 ms) and see if there is a difference in the pattern:
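// e.g. call this every 250 ms via InvokeRepeating from a MonoBehaviour, and watch whether the hiccup pattern shifts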
System.GC.Collect();
EDIT: My code for this is actually open source, if anyone wants to take a look and comment.
Things I can think of that might be an issue: using a custom font, using bright green, updating the label too fast?
The repo is: https://github.com/andrewljohnson/StopWatch-of-Gaia
The class for the time label: https://github.com/andrewljohnson/StopWatch-of-Gaia/blob/master/src/SWPTimeLabel.m
The class that runs the timer to update the label: https://github.com/andrewljohnson/StopWatch-of-Gaia/blob/master/src/SWPViewController.m
=============
My StopWatch app reportedly causes temporary screen burn-in on a number of iPads. Does anyone have a suggestion for how I might prevent this screen persistence? Some known workaround to blank the pixels occasionally?
I get emails all the time about it, and you can see numerous reviews here: http://itunes.apple.com/us/app/stopwatch+-timer-for-gym-kitchen/id518178439?mt=8
Apple cannot advise me. I sent an email to App Review and was told to file a technical support request (DTS). When I filed the DTS, they told me it was not a code issue, and when I asked DTS for further help, a "senior manager" told me this was not an issue Apple knew about. He further advised me to file a bug in Apple's Radar bug tracker if I considered it a real issue.
I filed the Radar bug a few weeks ago, but it has not been acknowledged. Updated Radar link for Apple employees, per the commenters' notes: rdar://12173447
It's not really "burn-in" on a non-CRT display, but there can be an image persistence/retention issue with some LCD panel types.
One way to avoid both is to very slowly drift your image around, far more slowly than a screen saver would. If you move your clock face a small amount, very slowly (say, taking several minutes to make a full circuit of only a few dozen pixels), the user may not even notice it happening. But this motion smears any fine lines and sharp edges over time, so even if there is some persistence, the lack of sharp edges will make it much harder to see.
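Here's a sketch of that drift in Swift, assuming the clock face lives in a view called clockView (a made-up name), using Core Animation to trace a small circle over several minutes:

    import UIKit

    // Drift clockView around a small circle, one full circuit every
    // five minutes, so no pixel holds a sharp static edge for long.
    func startSlowDrift(for clockView: UIView) {
        let path = UIBezierPath(arcCenter: clockView.center,
                                radius: 20,          // a few dozen pixels of travel
                                startAngle: 0,
                                endAngle: 2 * .pi,
                                clockwise: true)
        let drift = CAKeyframeAnimation(keyPath: "position")
        drift.path = path.cgPath
        drift.duration = 300                 // seconds per full circuit
        drift.repeatCount = .infinity
        drift.calculationMode = .paced       // constant, very slow speed
        clockView.layer.add(drift, forKey: "slowDrift")
    }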
Added:
There is also one (unconfirmed) report that flashing pixels at the full frame rate may increase the likelihood of this problem. So any in-place text/numeric updates should happen at a more humanly readable pace (say, 5 to 10 fps instead of 30 to 60 fps) if they repeat for very long periods of time. The app can always update the final number to a more accurate count if necessary.
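For instance (a sketch only; timeLabel and start are assumed names), the running display could be driven by a 10 Hz timer and snapped to the exact value when the stopwatch stops:

    import UIKit

    // Refresh the running display at 10 fps instead of every frame;
    // write the precise final value once when the stopwatch stops.
    func startDisplayTimer(updating timeLabel: UILabel, startedAt start: Date) -> Timer {
        return Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
            let elapsed = Date().timeIntervalSince(start)
            timeLabel.text = String(format: "%.1f", elapsed)
        }
    }
    // On stop: call invalidate() on the returned timer, then set the exact time once.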
"Burn in" is due to phosphor wearing in CRTs. LCDs cant have burn in since they dont use phosphor.
More likely it is image retention/image persistence. An image can remain 'stuck' on the screen for up to 48 hours. Usually it shouldn't last that long, so it may be a defect in their hardware too. MacRumors has a thread about iPad image retention that discusses this very issue. As for a solution, there is nothing you can do about the actual screen, because that's just how LCDs work. What I would try, if you are still concerned, is using more subtle colors. Unless something is actively changing the pixels (think of a screen saver), you aren't going to be able to completely eliminate the problem.
I asked a similar question here a while ago about boosting the speed at which MKOverlays are added to an MKMapView by using threading during their creation, but I soon realized that the part of the process that was really dragging me down was not the creation of the overlays, but their addition to the map. Creating many overlays (even 3000+) takes an acceptable amount of time, but adding them all to the map takes far too long (15 seconds).
I know 'what are your favorite' questions usually aren't considered 'right' for Stack Overflow, but I think this question is okay because, although it is subjective in a way, there is still a 'right' answer: the one that provides a significant improvement in the performance of an MKMapView with lots of MKOverlayViews.
Basically, I'd love to know if anyone has any tips or tricks (any at all) for speeding up the addition of many MKOverlays to a map view. Right now my alternative is combining them all into one big line, which is much faster, but then I lose the ability to treat each segment as an individual line (i.e. being able to show a callout for each segment), which is one of the cooler features of the app, so I'd really like to find a way to make this work. Right now, all of the lines load, given enough time, but even after they do, scrolling is a nightmare.
I'd really like to hear your thoughts! Thanks!
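One cheap thing to try first, sketched in Swift under the assumption that the overlays are currently added one at a time in a loop: hand the whole array to the map view in a single call, so it can batch the work.

    import MapKit

    // Instead of: for overlay in overlays { mapView.addOverlay(overlay) }
    // add them all at once and let MKMapView coalesce the updates.
    func addAllOverlays(_ overlays: [MKOverlay], to mapView: MKMapView) {
        mapView.addOverlays(overlays)
    }

On newer iOS (13+), MKMultiPolyline similarly lets many segments share a single renderer, though you would then need your own hit-testing to keep the per-segment callouts.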