app burns numbers into iPad screens, how can I prevent this? - ios

EDIT: My code for this is actually open source, if anyone would be able to look and comment.
Things I can think of that might be an issue: using a custom font, using bright green, updating the label too fast?
The repo is: https://github.com/andrewljohnson/StopWatch-of-Gaia
The class for the time label: https://github.com/andrewljohnson/StopWatch-of-Gaia/blob/master/src/SWPTimeLabel.m
The class that runs the timer to update the label: https://github.com/andrewljohnson/StopWatch-of-Gaia/blob/master/src/SWPViewController.m
=============
My StopWatch app reportedly causes temporary screen burn-in on a number of iPads. Does anyone have a suggestion about how I might prevent this screen persistence? Some known workaround to blank the pixels occasionally?
I get emails all the time about it, and you can see numerous reviews here: http://itunes.apple.com/us/app/stopwatch+-timer-for-gym-kitchen/id518178439?mt=8
Apple cannot advise me. I sent an email to App Review and was told to file a technical support request (DTS). When I filed the DTS, they told me it was not a code issue, and when I asked DTS for further help, a "senior manager" told me that this was not an issue Apple knew about. He further advised me to file a bug in Apple's Radar bug tracker if I considered it to be a real issue.
I filed the Radar bug a few weeks ago, but it has not been acknowledged. Updated Radar link for Apple employees, per a commenter's note: rdar://12173447

It's not really "burn-in" on a non-CRT display, but there can be an image persistence/retention issue with some LCD panel types.
One way to avoid both is to very slowly drift your image around, much more slowly than a screen saver would. If you move your clock face a small distance very slowly (say, several minutes to make a full circuit of only a few dozen pixels), the user may not even notice it happening. This motion will blur fine lines and sharp edges over time, so even if there is persistence, the lack of sharp edges will make it harder to see.
Added:
There is also one (unconfirmed) report that flashing pixels at the full frame rate may increase the likelihood of this problem. So any in-place text/numeric updates should happen at a more humanly readable pace (say, 5 to 10 fps instead of 30 to 60 fps) if they repeat for very long periods of time. The app can always update the final number to a more accurate count if necessary.
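For what it's worth, here is a minimal sketch in Swift of both ideas together: drifting the label very slowly and updating the digits at roughly 10 Hz instead of every frame. The linked app is Objective-C, and every name here (timeLabel, the timer intervals, the drift radius) is an illustrative assumption, not the questioner's code.

import UIKit

class StopwatchViewController: UIViewController {
    // Hypothetical stand-in for the app's SWPTimeLabel.
    let timeLabel = UILabel()
    let startDate = Date()
    var updateTimer: Timer?
    var driftTimer: Timer?
    private var driftAngle = 0.0

    override func viewDidLoad() {
        super.viewDidLoad()
        timeLabel.frame = CGRect(x: 40, y: 120, width: 240, height: 60)
        view.addSubview(timeLabel)

        // Update the visible digits about 10 times per second instead of every frame.
        updateTimer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.timeLabel.text = self.formattedElapsed(since: self.startDate)
        }

        // Nudge the label by a pixel or two every few seconds; a full circuit of
        // a couple dozen pixels takes on the order of ten minutes.
        driftTimer = Timer.scheduledTimer(withTimeInterval: 5.0, repeats: true) { [weak self] _ in
            self?.driftLabel()
        }
    }

    private func driftLabel() {
        driftAngle += 0.05                      // about 125 steps at 5 s each per full circle
        let radius = 20.0                       // stay within a few dozen pixels
        timeLabel.transform = CGAffineTransform(translationX: CGFloat(radius * cos(driftAngle)),
                                                y: CGFloat(radius * sin(driftAngle)))
    }

    private func formattedElapsed(since start: Date) -> String {
        let elapsed = Date().timeIntervalSince(start)
        let minutes = Int(elapsed) / 60
        let seconds = Int(elapsed) % 60
        let tenths = Int(elapsed * 10) % 10
        return String(format: "%02d:%02d.%d", minutes, seconds, tenths)
    }
}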

"Burn in" is due to phosphor wearing in CRTs. LCDs cant have burn in since they dont use phosphor.
More likely it is image retention/image persistence. An image can remain "stuck" on the screen for up to 48 hours. Usually it shouldn't last that long, so it may be a defect in their hardware too. MacRumors has a thread about iPad image retention that discusses this very issue. As for a solution, there is nothing you can do about the actual screen, because that's just how LCDs work. What I would try, if you are still concerned, is using more subtle colors. Unless something is actively changing the pixels (think screen saver), you aren't going to be able to completely eliminate the problem.

Related

Why does my app's "data" space increase so fast?

My app has a fixed size of around 100 MB, which is normal.
However, I noticed that the "Files and data" category increases EXTREMELY fast after the app has been opened only two or three times (up to 300 MB; it even went to 4 GB!).
Since I have a lot of code and I don't know how to spot what increases the app's size that much, I won't post any code (except if requested). Could you please tell me what kinds of actions usually create this kind of problem, and which instrument I can use to spot it?
I would be glad if someone could help me out.

SceneKit scenes lag when resuming app

In my app, I have several simple scenes (each a single 80-segment sphere with a 500 px by 1000 px texture, rotating once a minute) displaying at once. When I open the app, everything runs smoothly: I get a constant 120 fps with less than 50 MB of memory usage and around 30% CPU usage.
However, if I minimize the app and come back to it a minute later, or just stop interacting with the app for a while, the scenes all lag terribly at around 4 fps, despite Xcode reporting 30 fps, normal memory usage, and very low (~3%) CPU usage.
I see this behavior when testing on a real iPhone 7 running iOS 10.3.1, and I'm not sure whether it also exists on other devices or in the simulator.
Here is a sample project I pulled together to demonstrate this issue. (link here) Am I doing something wrong here? How can I make the scenes wake up and resume using as much CPU as they need to maintain good fps?
I probably won't answer the question you've asked directly, but I can give you some points to think about.
I launched your demo app on my 6th-generation iPod touch (64-bit, iOS 10.3.1), and it lags from the very beginning, at 2-3 FPS, for up to about a minute. Then after some time it starts to spin smoothly. The same happens after going to the background and coming back to the foreground. This could be explained by some caching of textures.
I resized one of the SCNViews so that it fills the screen, with the other views left behind it, and set v4.showsStatistics = true.
And here is what I got:
As you can see, Metal flush takes about 18.3 ms for one frame, and that's for only one SCNView.
According to this answer on Stack Overflow:
So, if my interpretation is correct, that would mean that "Metal flush" measures the time the CPU spends waiting on video memory to free up so it can push more data and request operations to the GPU.
So we might suspect that the problem is four different SCNViews working with the GPU simultaneously.
Let's check that. Compared to the second point, I deleted the 3 SCNViews behind and put the 3 planets from those views into the front one, so that one SCNView has all 4 planets at once. Here is the screenshot:
As you can see, Metal flush now takes up to 5 ms, right from the beginning, and everything runs smoothly. You may also notice that the number of triangles (top-right counter) is four times what we see in the first screenshot.
To sum up, try combining all the SCNNodes into one SCNView; you may well get a speed-up.
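A rough sketch of that suggestion, assuming sphere "planet" nodes like the ones in the question; the function name, node layout, and texture handling are made up for illustration:

import SceneKit
import UIKit

// Put all four spheres into a single scene rendered by one SCNView,
// instead of one SCNView per sphere.
func makeCombinedPlanetView(frame: CGRect, textures: [UIImage]) -> SCNView {
    let scnView = SCNView(frame: frame)
    scnView.scene = SCNScene()
    scnView.showsStatistics = true          // the statistics overlay used above

    for (index, texture) in textures.enumerated() {
        let sphere = SCNSphere(radius: 1.0)
        sphere.segmentCount = 80            // as in the question
        sphere.firstMaterial?.diffuse.contents = texture

        let node = SCNNode(geometry: sphere)
        node.position = SCNVector3(x: Float(index) * 2.5 - 3.75, y: 0, z: -10)

        // One rotation per minute, as in the question.
        let spin = SCNAction.rotateBy(x: 0, y: CGFloat.pi * 2, z: 0, duration: 60)
        node.runAction(SCNAction.repeatForever(spin))

        scnView.scene?.rootNode.addChildNode(node)
    }
    return scnView
}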
So, I finally figured out a partially functional solution, even though it's not what I thought it would be.
The first thing I tried was to keep all the nodes in a single global scene, as suggested by Sander's answer, and to set the delegate on one of the SCNViews, as suggested in the second answer to this question. Maybe this used to work, or it worked in a different context, but it didn't work for me.
What did end up helping me from Sander's answer was the performance statistics, which I didn't know existed. I enabled them for one of my scenes, and something stood out to me:
In the first few seconds of running, before the app hits the dramatic frame drops, the performance display read 240 fps. "Why is that?", I thought. Who would need 240 fps on a phone with a 60 Hz display, especially when the SceneKit default is 60? Then it hit me: 60 * 4 = 240.
What I guess was happening is that each update in a single scene triggered a "Metal flush", meaning the scenes were being flushed 240 times per second in total. I would guess that this slowly fills the GPU buffer (or memory? I have no idea), eventually SceneKit needs to start clearing it out, and 240 fps across 4 views is simply too much for it to keep up with (which explains why it initially gets good performance before dropping completely).
My solution (and this is why I said "partial solution") was to set preferredFramesPerSecond for each SCNView to 15, for a total of 60 (I can also get away with 30 on my phone, but I'm not sure if that holds up on weaker devices). Unfortunately 15 fps is noticeably choppy, but it is far better than the terrible performance I was getting originally.
Maybe in the future Apple will enable unique refreshes per SceneView.
TL;DR: set preferredFramesPerSecond so that it sums to 60 across all of your SCNViews.
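A tiny sketch of that TL;DR, assuming the four views from the question are collected in an array (the helper name is made up):

import SceneKit

// Cap each SCNView so that the views' combined update rate is about 60 fps.
func capFrameRates(of sceneViews: [SCNView], totalFPS: Int = 60) {
    guard !sceneViews.isEmpty else { return }
    let perView = max(1, totalFPS / sceneViews.count)   // e.g. 60 / 4 = 15
    for view in sceneViews {
        view.preferredFramesPerSecond = perView
    }
}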

Which is a better option for displaying irregular shapes in Swift?

let me start off by showing that I have this UIImageView set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is: when the user taps (but doesn't release) a button, the appropriate body part should highlight like this:
I can achieve this using 2 options:
Use the UIBezierPath class to draw the shapes. This would take a lot of trial and error and many overlapping shapes per body part to get them fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
Crop out the highlighted body parts from the original image and position them over the UIImageView depending on which UIButton is selected. There would be only one image per body part, so this is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be a BETTER option for achieving this in terms of CPU processing and memory allocation?
In other words, I'm just concerned about my app lagging as well as taking up storage space. I'm not concerned about how long it takes to implement; I just want to make sure my app doesn't stutter when it tries to draw all the shapes.
Thanks.
It is very very very unlikely that either of those approaches would have any significant impact on CPU or memory. Particularly if in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither of the approaches would drop you below the max screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM should be a drop in the bucket compared to what you have available, particularly on any iOS device released in the last 5 years unless it's the Apple Watch.
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
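For what it's worth, a minimal sketch of the tinted-overlay version of option 2, with a made-up asset name ("arm_cutout") and hypothetical outlets; the overlay image is transparent everywhere except the body part, so only those pixels get tinted:

import UIKit

class BodyDiagramViewController: UIViewController {
    @IBOutlet weak var diagramImageView: UIImageView!   // the full-body diagram
    @IBOutlet weak var armButton: UIButton!             // one of the body-part buttons

    // Cutout image with transparency everywhere except the arm.
    private let armOverlay = UIImageView(image: UIImage(named: "arm_cutout")?
        .withRenderingMode(.alwaysTemplate))

    override func viewDidLoad() {
        super.viewDidLoad()
        armOverlay.frame = diagramImageView.frame        // sit directly over the diagram
        armOverlay.tintColor = .red                      // tints only the opaque pixels
        armOverlay.alpha = 0.5
        armOverlay.isHidden = true
        view.addSubview(armOverlay)

        // Show the highlight while the button is held down, hide it on release.
        armButton.addTarget(self, action: #selector(showArm), for: .touchDown)
        armButton.addTarget(self, action: #selector(hideArm),
                            for: [.touchUpInside, .touchUpOutside, .touchCancel])
    }

    @objc private func showArm() { armOverlay.isHidden = false }
    @objc private func hideArm() { armOverlay.isHidden = true }
}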
Xcode has testing functionality built in (including performance tests), so the best way is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time it is a lot easier to implement.
For a quick start on tests, see here.
Performance tests are covered here.
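A rough sketch of such a comparison using XCTest's measure block; the shapes and the asset name are placeholders, not the asker's real artwork:

import XCTest
import UIKit

class BodyPartRenderingTests: XCTestCase {

    // Option 1: draw a stand-in bezier shape for one body part.
    func testBezierPathDrawingPerformance() {
        let renderer = UIGraphicsImageRenderer(size: CGSize(width: 300, height: 600))
        measure {
            _ = renderer.image { _ in
                let path = UIBezierPath(ovalIn: CGRect(x: 50, y: 50, width: 200, height: 400))
                UIColor.red.withAlphaComponent(0.5).setFill()
                path.fill()
            }
        }
    }

    // Option 2: composite a pre-cut overlay image (asset name is made up).
    func testCutoutImageCompositingPerformance() {
        let renderer = UIGraphicsImageRenderer(size: CGSize(width: 300, height: 600))
        measure {
            _ = renderer.image { _ in
                UIImage(named: "arm_cutout")?.draw(in: CGRect(x: 0, y: 0, width: 300, height: 600),
                                                   blendMode: .normal, alpha: 0.5)
            }
        }
    }
}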

Interpreting downward spikes in Time Profiler

I'd appreciate some help on how I should interpret some results I get from Time Profiler and Activity Monitor. I couldn't find anything on this on the site, probably because the question is rather specific. However, I imagine I'm not the only one not sure what to read into the spikes they get on the Time Profiler.
I'm trying to figure out why my game is having regular hiccups on the iPhone 4. I'm trying to run it at 60 FPS, so I know it's tricky on such an old device, but I know some other games manage that fine. I'm using Unity, but this is a more general question about interpreting Instruments results. I don't have enough reputation to post images, and I can only post two links, so I can't post everything I'd like.
Here is what I get running my game on Time Profiler:
Screenshot of Time Profiler running my game
As far as I understand (but please correct me if I'm wrong), this graph is showing how much CPU my game uses during each sample the Time Profiler takes (I've set the samples to be taken once per millisecond). As you can see, there are frequent downward spikes in that graph, which (based on looking at the game itself as it plays) coincide with the hiccups in the game.
Additionally, the spikes are more common while I touch the device, especially if I move my finger on it continuously (which is what I did while playing our game above). (I couldn't make a comparable non-touching version because my game requires touching, but see below for a comparison.)
What confuses me here is that the spikes are downward: if my code were inefficient, doing too many calculations on some frames, I'd expect to see upward spikes, not downward. So here are the theories I've managed to come up with:
1) The downward spikes represent something else stealing CPU time (like, a background task, or the CPU's speed itself varying, or something). Because less time is available for my processing, I get hiccups, and it also shows as my app using less CPU.
2) My code is in fact inefficient, causing spikes every now and then. Because the processing isn't finished within one frame, it continues into the next, but only needs a little extra time. That means that on that second frame it uses less CPU, resulting in a downward spike. (It is my understanding that iOS frames are always of equal length, say 1/60 s, so the third frame cannot start early even if we spent just a little extra time on the second.)
3) This is just a sampling problem, caused by the fact that the sampling frequency is 1ms while the frame length is about 16ms.
The first two theories would make sense to me, and would also explain why our game has hiccups while some lighter games don't: 1) lighter games would not suffer so badly from stolen CPU time, because they don't need that much CPU to begin with; 2) lighter games don't have as many spikes of their own.
However, some other tests seem to go against each of these theories:
1) If frames always get stolen like this, I'd expect similar spikes to appear on other games too. However, testing with another game (from the App Store, also using Unity), I don't get them (I had an image to show that but unfortunately I cannot post it).
Note: This game has lots of hiccups while running in the Time Profiler as well, so hiccups don't seem to always mean downward spikes.
2) To test the hypothesis that my app is simply spiking, I wrote a program (again in Unity) that wastes a consistent number of milliseconds per frame, by running a loop until the specified time has passed according to the system clock (a Swift sketch of an equivalent busy-wait appears after this list). Here's what I get in Time Profiler when I make it waste 8 ms per frame:
Screenshot of Time Profiler running my time waster app
As you can see, the downward spikes are still there, even though the app really shouldn't be able to cause spikes. (You can also see the effect of touching here, as I didn't touch it for the first half of the visible graph, and touched it continuously for the second.)
3) If this were due to a lack of sync between the frame rate and the sampling, I'd expect to see much more oscillation. Surely my app would use 100% of the milliseconds until it's done with a frame, then drop to zero?
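For reference, the busy-wait described in point 2 was written in Unity; an equivalent sketch in Swift, driven by CADisplayLink and using made-up names, might look like this:

import UIKit
import QuartzCore

// Burn a fixed number of milliseconds of CPU time on every frame,
// measured against the system clock.
final class FrameTimeWaster: NSObject {
    private var displayLink: CADisplayLink?
    private let wasteSeconds: CFTimeInterval

    init(wasteMilliseconds: Double) {
        wasteSeconds = wasteMilliseconds / 1000.0
        super.init()
    }

    func start() {
        displayLink = CADisplayLink(target: self, selector: #selector(tick))
        displayLink?.add(to: .main, forMode: .common)
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick() {
        // Spin until the requested amount of wall-clock time has passed.
        let deadline = CACurrentMediaTime() + wasteSeconds
        while CACurrentMediaTime() < deadline { /* busy-wait */ }
    }
}

// Usage: waste 8 ms of every ~16.7 ms frame, as in the screenshot.
// let waster = FrameTimeWaster(wasteMilliseconds: 8)
// waster.start()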
So I'm pretty confused about what to make of this. I'd appreciate any insight you can provide into this, and if you can tell me how to fix it, all the better!
Best regards,
Tommi Horttana
Have you tried Unity's profiler? Does it show similar results? Note that Unity has two profilers on iOS:
the editor profiler - Pro only (but there is a 30-day trial)
the internal profiler - you have to enable it in the Xcode project's source
Take a look at http://docs.unity3d.com/Manual/MobileProfiling.html; maybe something there will give you a hint.
If I had to guess, I'd check one of the most common sources of timing hiccups: the Mono garbage collector.
Try running it yourself at a set frequency (like every 250 ms) and see if there is a difference in the pattern:
System.GC.Collect(); // force a Mono collection on your own schedule, so GC pauses happen at predictable times

Size, Type, and Brightness of Display for Healthy Development [closed]

If you stare at a monitor for 8-12 hours a day looking at code, I have a couple of questions for those who may have researched the health factors of this or tried a few options.
To be easy on the eyes, can a monitor be "too big"?
Is there a particular type of display technology over another that reduces eye fatigue?
How bright should your display be in relation to your environment? Is it less fatiguing to have a bright environment and a bright monitor rather than a darker environment?
If you're worried about eye-strain, don't forget the low-tech solution: every 30 minutes, lean back, close your eyes, and rest them for 10 seconds. Or, if you don't want to look like you're napping, gaze out a window or across the room. You should do this regardless of whether you're staring at a monitor, a book, or a sheet of music. Staring at anything for hours at a time is going to strain your eyes.
I use a free timer program to tell me when 30 minutes is up. Whenever I forget to do this, my eyes always feel itchy and tired by the end of the day.
I know this doesn't answer the precise question you asked, but I think you're looking in the wrong place for a solution. Rather than investing in a new monitor, just rest your eyes on a regular basis. There. I just saved you a few hundred bucks.
EDIT: References have been requested, so here they are. There's a decent scientific article on the value of microbreaks here and a review of the literature here.
I've always used the analogy between monitor size (resolution) and desktop size - larger screen, more space to spread out and work.
More important than the physical size is how you set it up - most people have their monitors set way way too bright.
I typically start with maximum contrast and minimum brightness, and work from there. The black on your screen should be real black, not dark gray; the white on your screen should be no brighter than a piece of paper held up next to it.
That said, I do have good screens. At work, dual 22" 1680x1050 LCDs; at home, dual 19" 1200x1024 CRTs; and my laptop is a 17" at 1920x1200. I've trialled a single 24" LCD; it was really nice, but not as wide as either dual-monitor setup.
Updated 1 Mar: The suggestion from rtpearson to look away from the monitor regularly is good advice.
I was told (years ago) that it is important for your eyes to change focal length regularly.
If you have a seat next to a window, glancing outside while you think is a good way to achieve this. "Walking an email" to a colleague on the same floor can help as well. Using a timer (such as this one I wrote) to remind you to take breaks and rest your eyes is also useful.
I'm not sure it matters. I've worked in investment banks where multiple high-res screens were the norm, and I'm currently doing development work at home on a 9-year-old Sony laptop with a 1024 x 768 screen. I haven't noticed any difference in my productivity or my eyestrain in those very different environments.
In terms of brightness, what works for me is to adjust the brightness of the display to match the ambient light in the room. At the moment I am running a 24" Samsung Syncmaster and I have to say that I consider leaving it on the brightest setting to be a health hazard.
There are lots of websites to help you calibrate your monitor brightness/contrast. This is just one: http://www.displaycalibration.com/brightness_contrast.html
I have a 24" Dell at home, but I doubt many companies would consider that for a development machine.
22" Wide with a resolution of 1680 x 1050 is good, and the price of those monitors are relatively cheap now.
Currently I am working on a 17" 1280 x 1024 display, as the laptop I was given to develop on has only a meager 1280 x 800 screen, which is pretty much useless for coding.
IMO 2 x 17" or 19", or 1 x 22" or larger.
Note: most cheap LCDs have terrible color; for example, the orange on SO looks like a pale yellow, and that's the best I can get it. The Dell, at five times the price of a cheap 24", does not have these issues, but you pay for it :( (I still think it was a damn good investment!)
24 inches is the minimum for me, and 1680x1050 is too few pixels for effective coding. I prefer dual monitors at 1920 x 1200 or better; I'd really like a pair of 30-inch Samsungs, but I need to get richer. Brightness and all that other stuff has never much affected me, since I'm always coding at night anyway, so it's not much of an issue.
If you use a CRT screen, make sure you set the refresh rate nice and high. 85Hz is a good value. The default rate on Windows of 60Hz is too low. The flickering makes me feel nauseous. The refresh rate on LCD screens doesn't matter due to high "persistence".
Most people don't know this and leave their screens at 60Hz. Strangely, from personal experience, if you tell them directly, "Your refresh rate is wrong", many of them will get defensive about their refresh rate, even though they probably don't even know what it is. People are strange. I'm glad LCDs are replacing CRTs.
Firstly, yes, there is a limit to useful screen size. I think a monitor is best at no more than 30 inches. That's also the reason most brands only release monitors with screen sizes from 19 to 27 inches. Although you can find a monitor with a 100-inch screen, it's not common. I guess the manufacturers did their research and found the most acceptable range of screen sizes.
Secondly, some technologies already exist. For instance, BenQ has a technology called Flicker-free: "The Flicker-free technology eliminates flickering at all brightness levels and effectively reduces eye fatigue." There are other such features, too. I've also heard that some labs are working on e-ink monitors.
Thirdly, it's difficult to give an exact brightness number. It depends on the ambient light. Also, some people are sensitive to brightness and others are not. It's better to try different values and find what works best for you.
The app f.lux can really help at night if you're coding on a bright background. It reduces the blue in the screen and thus eye strain. Change the setting to 1 hour instead of the default 20 seconds and you won't even notice it.
https://justgetflux.com/
