I've been programming iOS apps for a while now, but my apps still crash regularly and it takes a long time to make them stable. I find this very annoying.
So, are there any programming patterns for writing crash-proof iOS apps?
Turn up the compiler warnings. Remove all warnings.
Run the static analyzer. Remove all warnings.
Run with GuardMalloc and/or Scribbling.
Remove all leaks.
Remove all zombies.
Write unit tests.
Build in a high level of live error detection (e.g. assertions).
Fix bugs before adding features.
Use source control and continuous integration.
Read. Improve.
1) Use ARC.
ARC has a small learning curve, and the big problem I read about on SO is that people instantiate objects but forget to assign them to a strong ivar (or property), so the object mysteriously goes away. In any case, the benefit is so compelling that you should master it. When you do, most of the memory-management bookkeeping you have to keep straight now goes away.
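For illustration, here is a minimal sketch of that mistake, using CLLocationManager as a hypothetical example (the class and property names are made up for this sketch); the point is simply that something must hold a strong reference to the object you create:
#import <UIKit/UIKit.h>
#import <CoreLocation/CoreLocation.h>

@interface MyViewController : UIViewController <CLLocationManagerDelegate>
// A strong property keeps the manager alive as long as the controller lives.
@property (nonatomic, strong) CLLocationManager *locationManager;
@end

@implementation MyViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Wrong: only a local variable references the manager, so under ARC it is
    // released as soon as viewDidLoad returns, and no delegate callbacks arrive.
    // CLLocationManager *manager = [[CLLocationManager alloc] init];
    // manager.delegate = self;
    // [manager startUpdatingLocation];

    // Right: assign it to a strong property (or ivar) so it stays alive.
    self.locationManager = [[CLLocationManager alloc] init];
    self.locationManager.delegate = self;
    [self.locationManager startUpdatingLocation];
}

@end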
2) Build clean
You should never, ever have warnings when you build. If you have a bunch, then when a real problem comes up it gets buried in the lines of cruft you got used to ignoring. Professional programmers build clean.
3) Use Analyze
Xcode/llvm has a phenomenal static analyzer; use it. It's under the Product menu. Then clean up every single warning it gives you.
4) Use the right Configuration
I always have a Release (or Distribution) configuration, a ReleaseWithAsserts, and a Debug configuration. I only use Debug when running under lldb, since the code is much bigger and executes differently from an optimized build. ReleaseWithAsserts is similar to Debug, but the Debug=1 preprocessor flag is removed and the optimizer is set to -Os (the default for Release).
Finally, the Release/Distribution configuration has the following in the Preprocessing Macros:
NS_BLOCK_ASSERTIONS=1 NDEBUG
The first turns off all NSAssert() lines, and the second all assert() statements. This way no asserts are active in your shipped code.
5) Use copious amounts of asserts.
Asserts are one of the best friends you can have as a programmer. I see so many questions here that would never have been asked if the programmer had only used them. The only overhead is typing them, since (in my usage) they are compiled away in the Release configuration.
Whenever you get an object from another source, assert its existence, i.e.:
MyObject *foo = [self someMethod];
assert(foo);
This is so easy to type, but will save you hours of debugging when a nil object causes problems thousands of instructions later. You can assert on class, too:
MyObject *foo = [self someMethod];
assert([foo isMemberOfClass:[MyObject class]]);
This is particularly useful when you are pulling objects out of dictionaries.
You can also put asserts at the start of methods to verify that the objects you received are not nil.
You can assert that some variable has a value:
assert(i>1 && i<5);
Again, in all but the Release/Distribution configuration, the assert is "live", but in that configuration they are compiled out.
NSAssert is similar, but you have to use a different macro depending on the number of format arguments:
NSAssert(foo, @"Foo was nil");
NSAssert1(foo, @"Foo was nil, and i=%d", i);
...
6) Use selective logs to ensure that things that should happen actually do
You can define your own Log macro, which gets compiled out for Release/Distribution. Add this to your pch file:
#ifndef NDEBUG
#define MYLog NSLog
#else
#define MYLog(format, ...)
#endif
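A usage sketch (the records variable is made up): in development builds the following logs normally, and in Release/Distribution it compiles to nothing:
MYLog(@"Fetched %lu records", (unsigned long)records.count);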
This brings up another point: you can use:
#ifndef NDEBUG
...
#endif
to block out a bit of code that does more complex checks; it's only active for your development builds, not for Release/Distribution.
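As a rough sketch of such a block (the sections array and expectedRowCount are hypothetical names, not anything from the original post), an expensive consistency check might look like this:
#ifndef NDEBUG
    // Development-only sanity check; compiled out when NDEBUG is defined
    // in the Release/Distribution configuration.
    NSUInteger total = 0;
    for (NSArray *section in sections) {   // 'sections' is a hypothetical array of arrays
        total += section.count;
    }
    assert(total == expectedRowCount);      // 'expectedRowCount' is hypothetical too
#endif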
You may already do these, but here's what I do:
Go to every screen in the app using the debugger/simulator and select "Simulate Memory Warning". Go back to the previous screen. This forces existing UIView objects to be released and the assignments in viewDidLoad to be redone when you return, which can expose bad pointer references.
Make sure to invalidate any running timers in dealloc.
If an object A has made itself a delegate of object B, make sure to clear that delegate in object A's dealloc.
Make sure to clear any notification observers in dealloc; a minimal sketch covering all three is below.
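This is just an illustrative sketch under ARC (the pollTimer and scrollView names are made up), not a prescription for any particular class:
- (void)dealloc
{
    // Stop any repeating timers so they no longer fire into a dead object.
    [self.pollTimer invalidate];
    self.pollTimer = nil;

    // Clear ourselves as the delegate of objects that may outlive us.
    self.scrollView.delegate = nil;

    // Remove any notification observers this object registered.
    [[NSNotificationCenter defaultCenter] removeObserver:self];

    // Under ARC, do not call [super dealloc].
}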
In vulkan.h, every instance of VkAccessFlagBits appears in a pair that contains a srcAccessMask and a dstAccessMask:
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
In every case, according to my understanding, the purpose of these masks is to help designate two sets of operations, such that results of operations in the first set will be visible to operations in the second set. For instance, write operations occurring prior to a barrier should not get hung up in caches but should instead propagate all the way to locations from which they can be read after the barrier. Or something like that.
The access flags come in both READ and WRITE forms:
/* ... */
VK_ACCESS_SHADER_READ_BIT = 0x00000020,
VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
/* ... */
But it seems to me that srcAccessMask should probably always be some sort of VK_ACCESS_*_WRITE_BIT combination, while dstAccessMask should always be a combination of VK_ACCESS_*_READ_BIT values. If that is true, then the READ/WRITE distinction is identical to and implicit in the src/dst distinction, and so it should be good enough to just have VK_ACCESS_SHADER_BIT etc., without READ_ or WRITE_ variants.
Why are there READ_ and WRITE_ variants, then? Is it ever useful to specify that some read operations must fully complete before some other operations have begun? Note that all operations using VkAccessFlagBits produce (I think) execution dependencies as well as memory dependencies. It seems to me that the execution dependencies should be good enough to prevent earlier reads from receiving values written by later writes.
While writing this question I encountered a statement in the Vulkan specification that provides at least part of an answer:
Memory dependencies are used to solve data hazards, e.g. to ensure that write operations are visible to subsequent read operations (read-after-write hazard), as well as write-after-write hazards. Write-after-read and read-after-read hazards only require execution dependencies to synchronize.
This is from section 6.4, Execution and Memory Dependencies. Also, from earlier in that section:
The application must use memory dependencies to make writes visible before subsequent reads can rely on them, and before subsequent writes can overwrite them. Failure to do so causes the result of the reads to be undefined, and the order of writes to be undefined.
From this I surmise that, yes, the execution dependencies produced by the Vulkan commands that involve these access flags probably do free you from ever having to put a VK_ACCESS_*_READ_BIT into a srcAccessMask field--but that you might in fact want to have READ_ flags, WRITE_ flags, or both in some of your dstAccessMask fields, because apparently it's possible to use an explicit dependency to prevent read-after-write hazards in such a way that write-after-write hazards are NOT prevented. (And maybe vice-versa?)
Like, maybe your Vulkan implementation will sometimes decide that a write does not actually need to be propagated all the way through a particular cache to its final specified destination for the sake of a subsequent read operation, IF Vulkan happens to know that that read operation will simply read from that same cache, saving some time? But then a second write might happen and go to a different cache, and there would be two caches left in a race (with the winner undefined) to send their two values to the same spot. Or something? Maybe my mental model of these caches is entirely wrong.
It is fairly solidly established, at least, that memory barriers are confusing.
Let's go over all the possibilities:
read–read — well yeah, that one is pretty useless. Khronos seems to agree (issue #131) that it is a pointless value in src (basically equivalent to 0).
read–write — an execution dependency should be sufficient to synchronize without this. Khronos seems to agree (issue #131) that it is a pointless value in src (basically equivalent to 0).
write–read — that's the obvious and most common one.
write–write — similar reasoning to write–read above. Without it the order of the writes would be undefined. It is a bit pointless for most situations to write something you haven't even read in between, but hey, now you have a way to synchronize it.
You can provide a bitmask combining several of these values in both src and dst, in which case it makes sense to have both masks so the driver can sort the dependencies out for you. (I don't expect any performance overhead from this at the API level, so it is allowed as a convenience.)
From an API design perspective, separating them could mean adding a different enum for srcAccess. But perhaps the _READ variants could just be forbidden in srcAccess through a "Valid Usage" rule, which makes this argument weak. The READ-in-src variants might have been kept simply because they are benign.
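To make the common write-then-read case concrete, here is a minimal C sketch (not from the original answer; the chosen stages, layouts, and the helper name are illustrative assumptions) of a barrier whose srcAccessMask names the writes to make available and whose dstAccessMask names the reads that must see them:
#include <vulkan/vulkan.h>

/* Make compute-shader writes to an image available to a subsequent transfer read. */
static void shaderWriteToTransferReadBarrier(VkCommandBuffer cmdBuf, VkImage image)
{
    VkImageMemoryBarrier barrier = {
        .sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask       = VK_ACCESS_SHADER_WRITE_BIT,   /* writes to make available */
        .dstAccessMask       = VK_ACCESS_TRANSFER_READ_BIT,  /* reads that must see them */
        .oldLayout           = VK_IMAGE_LAYOUT_GENERAL,
        .newLayout           = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image               = image,
        .subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };

    vkCmdPipelineBarrier(cmdBuf,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* src stage */
                         VK_PIPELINE_STAGE_TRANSFER_BIT,        /* dst stage */
                         0,                                     /* dependency flags */
                         0, NULL,                               /* global memory barriers */
                         0, NULL,                               /* buffer barriers */
                         1, &barrier);                          /* image barriers */
}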
I've seen a number of questions about this, but I'm still confused as hell about what is happening.
I HATE blaming the language or compiler, but, with Swift, it's a definite possibility.
When I build and run in debug mode, all is right with the world.
However, when I compile in release mode, the compiler pukes with this error:
"Bitcast requires both operands to be pointer or neither"...Yadda, yadda
The offending line appears to be this one:
let lastURI:AnyObject? = NSUserDefaults.standardUserDefaults().valueForKey(s_currentURIKey)
It's right here, if you want to follow along in your textbook (It's an open-source project).
If I comment out this line (and the 7 lines below it), then the sun shines, and birds tweet happily. If I leave it in, then dark clouds and swarms of orcs spew forth from Mordor.
In order to make it fail, I simply run the profiler, which compiles in release mode.
Anyone have any experience with this kind of agita?
All the other versions of this issue tend to resolve to issues with specific Cocoa API calls. It's not clear to me whether or not the user is at fault.
I really want it to be my fault, which means there's something I can do about it.
If it isn't my fault, then I need to file a RADAR issue, and grow a long gray beard before I can test this app with a profiler.
It will be a while before it will be ready for release, but I like to test my apps in release mode very early on.
OK. Here's what I found out.
I had an array of tuples as a variable:
var arrayOfTuples:[(greeting: String, query: String)]! = []
I was appending to it using +=, and a single element array, like so:
arrayOfTuples += [(greeting: "HI", query: "HOWAYA")]
This worked and compiled fine in debug mode, but caused that strange error in release mode.
I fixed it by replacing "+=" with ".append()", like so:
arrayOfTuples.append((greeting: "HI", query: "HOWAYA"))
Go figure. In any case, append was more appropriate, so good coding practice prevails.
I am running into a bizarre situation where a unit test's execution is behaving differently than the normal execution of a piece of code.
In particular, I am using a library called JSONModel, and when I am attempting to deserialize an object from a JSON string, one line in particular is causing problems when I step through the executing test case:
if ( [[property.type class] isSubclassOfClass:[JSONModel class]] ) ...
If I put a breakpoint before (or after) this line and execute:
expr [[property.type class] isSubclassOfClass:[JSONModel class]]
...in the debugger, it prints \x01 as the value (i.e. true); however, when I actually step the instruction pointer, it behaves as though it were false, going instead into the else block. Typing the expression into the debugger again still shows it as true.
I'm curious if anyone has seen similar behavior before and has any suggestions as to what could possibly be going wrong. I'm quite certain I'm not including different definitions for anything, unless Xcode has different internal Cocoa class implementations for testing.
Update: Here's some even weirder evidence: I've added some NSLog statements to get an idea for how the execution is seeing things. If I log property.type.superclass I get JSONModel back (as expected); however if I log property.type.superclass == [JSONModel class] I get false!
To me this indicates that the JSONModel class the unit test execution is seeing is somehow different from the JSONModel class I'm seeing at runtime (and what it should be seeing). However, how that is happening I can't figure out.
Could this be caused by a forward class declaration or something like that?
Well, I seem to have discovered a "solution" by experimentation. It turns out that if I replace [JSONModel class] with NSClassFromString(@"JSONModel"), it works swimmingly!
Now why this is I cannot say, and will give the answer to whoever can explain it!
I had the exact same problem, here's what was happening.
As expected with this kind of behaviour, it was an issue with the class being duplicated. As with you, [instance class] and NSClassFromString would return different values. However, the two classes were identical in every respect: same ivars, same methods (checked with the Objective-C runtime). No warning was displayed at compile, link, or run time.
In my case, the tests were about a static library used in my main application (Bar.app).
Basically, I had 3 targets:
libFoo
libFooTests
Bar.app
The tests were run on the device, not on the simulator. As such, they needed to be logic tests rather than unit tests, and had to be loaded into an application. The bundle loader was my main app, Bar.app.
The only additional linker flag was -ObjC.
Now, Bar.app was linking libFoo.
It turns out, libFooTests was also linking libFoo.
When injecting libFooTests into the test host (Bar.app), symbols were duplicated, but no warnings were presented to me. This is how this particular class got duplicated.
Simply removing libFoo from the list of libraries to link against in libFooTests did the trick.
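For anyone who wants to confirm this kind of duplication, here is a small diagnostic sketch (not from the original post; the helper name is made up): compare the Class pointer baked into your code with the one the runtime resolves by name.
#import <Foundation/Foundation.h>
#import "JSONModel.h"

static void checkForDuplicateJSONModel(void)
{
    Class linkedClass  = [JSONModel class];               // resolved against the copy your code linked
    Class runtimeClass = NSClassFromString(@"JSONModel"); // resolved by name in the runtime's class table

    if (linkedClass != runtimeClass) {
        NSLog(@"JSONModel is duplicated: linked %p vs runtime %p",
              linkedClass, runtimeClass);
    }
}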
I'm encountering this error when I'm running my DirectX10 program in debug mode:
D3D10: WARNING: ID3D10Buffer::SetPrivateData: Existing private data of same name with different size found! [ STATE_SETTING WARNING #55: SETPRIVATEDATA_CHANGINGPARAMS ]
I'm trying to make the project highly OOP as a learning exercise, so there's a chance that this may be occurring, but is there a way to get some more details?
It appears this warning is raised by D3DX10CreateSprite, which is internally called by font->DrawText.
You can ignore this warning; it seems to be a bug in the MS code :)
Direct3D11 doesn't have built-in text rendering anymore, so you won't encounter it in the future.
Since this is a D3D11 warning, you could always turn it off using ID3D11InfoQueue:
D3D11_MESSAGE_ID hide[] = {
    D3D11_MESSAGE_ID_SETPRIVATEDATA_CHANGINGPARAMS,
    // Add more message IDs here as needed
};
D3D11_INFO_QUEUE_FILTER filter;
memset(&filter, 0, sizeof(filter));
filter.DenyList.NumIDs = _countof(hide);
filter.DenyList.pIDList = hide;
d3dInfoQueue->AddStorageFilterEntries(&filter);
See this page for more. I found your question while googling for the answer and had to search a bit more to find the above snippet, hopefully this will help someone :)
What other data are you looking for or interested in?
The warning is pretty clear about what is going on, but if you want to hunt down a bit more data, there may be a few things to try.
Try calling ID3D10Buffer::GetPrivateData with the same name or do some other check to see if there is data with that name already, and if so, what the contents are. Print your results to a file, output window, or console. This may be combined with breakpoints to see where the duplicate is occurring (break when there's already data).
You may (not positive) be able to set the D3D runtimes to debug mode and to break on warnings (not sure if it can do warnings or just errors). Debug your app in VS or your preferred debugger, and when the warning is shown, it will break and you can look at the parameters.
Go through your code and track down all calls to ID3D10Buffer::SetPrivateData and look to see if there are any obvious duplicates. If there are, work up the program flow and see why and what you can do about them (this may work best after you use one of the former methods to know where to start).
How are your data names set up, and what is the buffer used for? Examining one or both may lead you to a conflict somewhere.
You may also try unicorns, they've been known to help with this kind of problem.
When I'm debugging something that goes wrong inside a loop, say on the 600th iteration, it can be a pain to have to break for every one. So I tried setting a conditional breakpoint, to only break if I = 600. That works, but now it takes almost a full minute to reach that point, where before it was almost instantaneous. What's going on, and is there any way to fix it?
When you hit a breakpoint, Windows stops the process and notifies the debugger. It has to switch contexts, evaluate the condition, decide that no, you don't want to be notified about it, restart the process and switch back. That can take a lot of processor cycles. If you're doing it in a tight loop, it'll take a couple orders of magnitude more processor cycles than one iteration of the loop takes.
If you're willing to mess with your code a little, there's a way to do conditional breakpoints without incurring all this overhead.
if <condition here> then
asm int 3 end;
This is a simple assembly instruction that manually sends a breakpoint notification to the OS. Now you can evaluate your condition inside the program, without switching contexts. Just make sure to take it out again when you're done with it. If an int 3 goes off inside a program that's not connected to a debugger, it'll raise an exception.
It slows it down because every time you reach that point, it has to check your condition.
What I tend to do is to temporarily create another variable like this (in C but should be doable in Delphi).
int xyzzynum = 600;
while (true) {
    doSomething();
    if (--xyzzynum == 0)
        xyzzynum = xyzzynum;
}
then I put a non-conditional breakpoint on the "xyzzynum = xyzzynum;" line.
The program runs at full speed until it's been through the loop 600 times, because the debugger is just doing a normal breakpoint interrupt rather than checking conditions every time.
You can make the condition as complicated as you want.
Further to Mason's answer, you could have the int 3 assembler compiled in only when the program is built with the debug conditional defined:
{$ifdef debug}
{$message warn 'debug breakpoint present in code'}
if <condition here> then
asm int 3 end;
{$endif}
So, when you are debugging in the IDE, you have the debug conditional defined in the project options. When you build the final product for your customers (with your build script?), you don't include that symbol, so it won't get compiled in.
I also included the $message compiler directive, so you will see a warning when you compile, letting you know that the code is still there. If you do that everywhere you use int 3, you will have a nice list of places which you can double-click to take you straight to the offending code.
Mason's explanations are quite good.
His code could be made a bit more robust by testing that you are running under the debugger:
if (DebugHook <> 0) and <your specific condition here> then
asm int 3 end;
This will not do anything when the application is running normally, and will stop if it's running under the debugger (whether launched from the IDE or with the debugger attached later).
And thanks to boolean short-circuit evaluation, <your specific condition here> won't even be evaluated if you're not running under the debugger.
Conditional breakpoints in any debugger (I'm just surmising here) require execution to flip back and forth between your program and the debugger every time the breakpoint is hit. This process is time-consuming, but I don't think there is anything you can do about it.
Normally, conditional breakpoints work by inserting the appropriate break instruction into the code and then checking for the conditions you have specified. The check happens on every iteration, and it may well be that the way the check is implemented is responsible for the delay, since it's unlikely that the debugger compiles and inserts the complete check-and-breakpoint code into the existing code.
A way you might be able to accelerate this is to put the condition, followed by an op with no side effects, into the code directly and break on that op. Just remember to remove both the condition and the op when you're done.