Are PTHREAD_MUTEX_* and PTHREAD_MUTEX_ERRORCHECK mutually exclusive? - pthreads

The Open Group has a specification for pthread_mutex_lock, pthread_mutex_trylock, pthread_mutex_unlock and friends located here.
The page lists four mutex attribute values: PTHREAD_MUTEX_NORMAL, PTHREAD_MUTEX_ERRORCHECK, PTHREAD_MUTEX_RECURSIVE, and PTHREAD_MUTEX_DEFAULT.
Are all the values mutually exclusive? In a Debug configuration, are we allowed to OR those values together? For example, I'd like full error checking in Debug, so is PTHREAD_MUTEX_ERRORCHECK | PTHREAD_MUTEX_RECURSIVE a valid configuration?
The reason I ask is that I'm catching an error from pthread_mutexattr_settype. I'm not sure whether it's a valid configuration and an OS X implementation bug, or an invalid configuration and expected standard behavior. If it's an OS X bug, I can still enjoy the enhanced error checking in debug configurations on other platforms.

A mutex can be of only one "type". You cannot combine them.
It doesn't really make sense to do so, anyway - PTHREAD_MUTEX_ERRORCHECK mutexes always return an error if you try to relock a mutex already locked by the same thread, whereas PTHREAD_MUTEX_RECURSIVE mutexes always succeed in that case. In the other error-checking cases (unlocking a mutex which another thread has locked, and unlocking an unlocked mutex) both PTHREAD_MUTEX_ERRORCHECK and PTHREAD_MUTEX_RECURSIVE have the same behaviour (always returning an error).
This means that your PTHREAD_MUTEX_RECURSIVE mutexes should remain the same type in "debug" builds, but it might make sense to substitute PTHREAD_MUTEX_ERRORCHECK for PTHREAD_MUTEX_DEFAULT and PTHREAD_MUTEX_NORMAL mutexes.
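For illustration, that build-dependent substitution might be sketched in C like this; the NDEBUG-based switch and the init_plain_mutex helper are just illustrative choices, not anything mandated by the standard:

#include <pthread.h>

/* Pick exactly one mutex type: error-checking in debug builds, the default
 * type otherwise.  Recursive mutexes would keep PTHREAD_MUTEX_RECURSIVE. */
static int init_plain_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;

#ifndef NDEBUG
    rc = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
#else
    rc = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_DEFAULT);
#endif
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);

    pthread_mutexattr_destroy(&attr);
    return rc;
}

The key point is that pthread_mutexattr_settype takes a single type, not a bitwise OR of types.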

Related

Why is the assert function disabled by default in Dart?

import 'dart:io';

main() {
  print("Enter an even number : ");
  int evenNo = int.parse(stdin.readLineSync());
  assert(evenNo % 2 == 0, 'wrong input');
  print("You have entered : $evenNo");
}
To get this code to work properly, I had to run the Dart file with the '--enable-asserts' flag; before adding it, the assert was simply skipped. Why is this function disabled by default?
What is an assertion?
In many languages, including Dart, "assertions" specifically are meant to catch logical errors. (Dart calls these Errors.) These are errors that are due to a programming mistake. These types of errors should never happen. Conceptually, a sufficiently advanced static analyzer could prove that assertions will never fail. In practice, such analysis is difficult, so assertions are verified at runtime as a matter of practicality.
This is in contrast to runtime errors, which are unpredictable errors that occur when the program is actually running. (Dart calls these Exceptions.) Often these types of errors are due to invalid user input, but they also include file system errors and hardware failures, among other things.
Assertions are intended to be used to verify assumptions (or catch faulty ones) when debugging, and programming languages that have assertions typically allow them to be disabled for production (non-debug) code. Since assertions logically should never occur, there is no point in incurring the extra runtime cost of checking them. Since assertions can be disabled, that also should provide additional discouragement against using them improperly.
Dart chose to leave assertions disabled by default, so you must opt-in to using them with --enable-asserts. Some other languages (e.g. C) chose an opt-out system instead. I don't know the rationale for this choice for Dart, but since asserts should be used only for debugging, it makes sense to me that a language like Dart (which often might be interpreted) makes it easier for users to execute code in production mode. In contrast, for compiled languages like C, the onus of enabling or disabling assertions is placed on the developer rather than on the user.
What does this mean for your code?
Your code does not use assert properly: You use it to check runtime input. That instead should be a check that you always perform and that produces a runtime error if it fails:
if (evenNo % 2 != 0) {
  throw FormatException('wrong input');
}

Why does VkAccessFlagBits include both read bits and write bits?

In vulkan.h, every instance of VkAccessFlagBits appears in a pair that contains a srcAccessMask and a dstAccessMask:
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
In every case, according to my understanding, the purpose of these masks is to help designate two sets of operations, such that results of operations in the first set will be visible to operations in the second set. For instance, write operations occurring prior to a barrier should not get hung up in caches but should instead propagate all the way to locations from which they can be read after the barrier. Or something like that.
The access flags come in both READ and WRITE forms:
/* ... */
VK_ACCESS_SHADER_READ_BIT = 0x00000020,
VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
/* ... */
But it seems to me that srcAccessMask should probably always be some sort of VK_ACCESS_*_WRITE_BIT combination, while dstAccessMask should always be a combination of VK_ACCESS_*_READ_BIT values. If that is true, then the READ/WRITE distinction is identical to and implicit in the src/dst distinction, and so it should be good enough to just have VK_ACCESS_SHADER_BIT etc., without READ_ or WRITE_ variants.
Why are there READ_ and WRITE_ variants, then? Is it ever useful to specify that some read operations must fully complete before some other operations have begun? Note that all operations using VkAccessFlagBits produce (I think) execution dependencies as well as memory dependencies. It seems to me that the execution dependencies should be good enough to prevent earlier reads from receiving values written by later writes.
While writing this question I encountered a statement in the Vulkan specification that provides at least part of an answer:
Memory dependencies are used to solve data hazards, e.g. to ensure that write operations are visible to subsequent read operations (read-after-write hazard), as well as write-after-write hazards. Write-after-read and read-after-read hazards only require execution dependencies to synchronize.
This is from the section 6.4. Execution And Memory Dependencies. Also, from earlier in that section:
The application must use memory dependencies to make writes visible before subsequent reads can rely on them, and before subsequent writes can overwrite them. Failure to do so causes the result of the reads to be undefined, and the order of writes to be undefined.
From this I surmise that, yes, the execution dependencies produced by the Vulkan commands that involve these access flags probably do free you from ever having to put a VK_ACCESS_*_READ_BIT into a srcAccessMask field--but that you might in fact want to have READ_ flags, WRITE_ flags, or both in some of your dstAccessMask fields, because apparently it's possible to use an explicit dependency to prevent read-after-write hazards in such a way that write-after-write hazards are NOT prevented. (And maybe vice-versa?)
Like, maybe your Vulkan will sometimes decide that a write does not actually need to be propagated all the way through a particular cache to its final specified destination for the sake of a subsequent read operation, IF Vulkan happens to know that that read operation will simply read from that same cache, saving some time? But then a second write might happen, and write to a different cache, and there'll be two caches left in a race (with the choice of winner undefined) to send their two values to the same spot. Or something? Maybe my mental model of these caches is entirely wrong.
It is fairly solidly established, at least, that memory barriers are confusing.
Let's go over all the possibilities:
read–read — well, yeah, that one is pretty useless. Khronos seems to agree (issue #131) that it is a pointless value in src (basically equivalent to 0).
read–write — an execution dependency should be sufficient to synchronize this without any memory dependency. Khronos seems to agree (issue #131) that it is a pointless value in src (basically equivalent to 0).
write–read — that's the obvious and most common one.
write–write — similar reasoning to write–read above. Without it the order of the writes would be undefined. For most situations it is a bit pointless to write something you haven't even read in between, but hey, now you have a way to synchronize it.
You can provide a bitmask combining several of these flags in both src and dst, in which case it makes sense to have both READ and WRITE variants so the driver can sort the dependencies out for you. (I don't expect performance overhead from this at the API level, so it is allowed as a convenience.)
From an API design perspective, separating them could mean adding a different enum just for srcAccess. But perhaps the _READ variants could simply have been forbidden in srcAccessMask through a "Valid Usage" rule, which makes this argument weak. The READ variants might have been kept valid in src because they are benign.
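To make the src/dst split concrete, here is a minimal C sketch of the common write–read case using a global VkMemoryBarrier. The stage choices and the cmd parameter are assumptions for the example, not the only valid combination:

#include <vulkan/vulkan.h>

/* Sketch: make compute-shader writes available to, and visible to,
 * fragment-shader reads.  `cmd` is an already-recording command buffer. */
static void insert_write_to_read_barrier(VkCommandBuffer cmd)
{
    VkMemoryBarrier barrier = {
        .sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,  /* writes to make available */
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,   /* reads that depend on them */
    };

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,   /* src stages */
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,  /* dst stages */
                         0,            /* dependency flags */
                         1, &barrier,  /* global memory barriers */
                         0, NULL,      /* buffer memory barriers */
                         0, NULL);     /* image memory barriers */
}

Swapping the masks to VK_ACCESS_SHADER_WRITE_BIT on both sides would express the write–write case instead.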

Using pthreads with MPICH

I am having trouble using pthreads in my MPI program. My program runs fine without involving pthreads, but then I decided to execute a time-consuming operation in parallel, so I create a pthread that does the following (MPI_Probe, MPI_Get_count, and MPI_Recv). My program fails at MPI_Probe and no error code is returned. This is how I initialize the MPI environment:
MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided_threading_support);
The provided threading support is '3' which I assume is MPI_THREAD_SERIALIZED. Any ideas on how I can solve this problem?
The provided threading support is '3' which I assume is MPI_THREAD_SERIALIZED.
The MPI standard defines thread support levels as named constants and only requires that their values are monotonic, i.e. MPI_THREAD_SINGLE < MPI_THREAD_FUNNELED < MPI_THREAD_SERIALIZED < MPI_THREAD_MULTIPLE. The actual numeric values are implementation-specific and should never be used or compared against.
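For example, a minimal sketch of checking the provided level against the named constant, assuming the program should simply refuse to run without full thread support:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* Compare against the named constant, never against a raw number. */
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "This MPI library does not provide MPI_THREAD_MULTIPLE\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    /* ... spawn the probing/receiving pthread here ... */

    MPI_Finalize();
    return 0;
}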
MPI communication calls by default never return error codes other than MPI_SUCCESS. The reason is that MPI invokes the communicator's error handler before an MPI call returns, and all communicators are initially created with MPI_ERRORS_ARE_FATAL installed as their error handler. That error handler terminates the program and usually prints some debugging information, e.g. the reason for the failure. Both MPICH (and its countless variants) and Open MPI produce quite elaborate reports on what led to the termination.
To enable user error handling on communicator comm, you should make the following call:
MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
Watch out for the error codes returned - their numerical values are also implementation-specific.
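One portable way to report such an error is MPI_Error_string, which converts the implementation-specific code into text. A hedged sketch, assuming MPI_ERRORS_RETURN has already been installed on comm (the probe_and_report helper is illustrative):

#include <mpi.h>
#include <stdio.h>

static void probe_and_report(MPI_Comm comm)
{
    MPI_Status status;
    int rc = MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &status);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);   /* turn the code into readable text */
        fprintf(stderr, "MPI_Probe failed: %s\n", msg);
    }
}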
If your MPI implementation isn't willing to give you MPI_THREAD_MULTIPLE, there are three things you can do:
Get a new MPI implementation.
Protect MPI calls with a critical section.
Cut it out with the threading thing.
I would suggest #3. The whole point of MPI is parallelism -- if you find yourself creating multiple threads for a single MPI subprocess, you should consider whether those threads should have been independent subprocesses to begin with.
Particularly with MPI_THREAD_MULTIPLE. I could maybe see a use for MPI_THREAD_SERIALIZED, if your threads are sub-subprocess workers for the main subprocess thread... but MULTIPLE implies that you're tossing data around all over the place. That loses you the primary convenience offered by MPI, namely synchronization. You'll find yourself essentially reimplementing MPI on top of MPI.
Okay, now that you've read all that, the punchline: 3 is MPI_THREAD_MULTIPLE. But seriously. Reconsider your architecture.

How to make Crash resistant ios apps

I've been programming iOS apps for a while now, but my apps still crash regularly and it takes time to make them stable. I find this very annoying.
So, are there any programming patterns for writing crash-proof iOS apps?
Turn up the compiler warnings. Remove all warnings.
Run the static analyzer. Remove all warnings.
Run with GuardMalloc and/or Scribbling.
Remove all leaks.
Remove all zombies.
Write unit tests.
Add a high level of live error detection (e.g. assertions).
Fix bugs before adding features.
Use source control and continuous integration.
Read. Improve.
1) Use ARC.
ARC has a small learning curve, and the big problem I read about on SO is that people instantiate objects but forget to assign the object to a strong ivar (or property), so the object mysteriously goes away. In any case the benefit is so compelling that you should master this. When you do, most of the memory management stuff you have to keep straight now goes away.
2) Build clean
You should never ever have warnings when you build. If you have a bunch, then when a real problem comes up it gets buried in the lines of cruft you got used to ignoring. Professional programmers build clean.
3) Use Analyze
Xcode/llvm has a phenomenal analyzer - use it - it's under the Product menu item. Then clean up every single warning it gives you.
4) Use the right Configuration
I always have a Release (or Distribution) configuration, a ReleaseWithAsserts, and Debug. I only use Debug when running under lldb, as the code is much bigger and executes differently than an optimized build will. ReleaseWithAsserts is similar to Debug, but the DEBUG=1 preprocessor flag is removed and the optimizer is set to -Os (the default for Release).
Finally, the Release/Distribution configuration has the following in the Preprocessing Macros:
NS_BLOCK_ASSERTIONS=1 NDEBUG
The first turns off all 'NSAssert()' lines, and the second all 'assert()' statements. This way no asserts are active in your shipped code.
5) Use copious amounts of asserts.
Asserts are one of the best friends you can have as a programmer. I see sooooo many questions here that would never get written if programmers only would use them. The overhead is in typing them, since (in my usage) they are compiled away in the Release configuration.
Whenever you get an object from another source, assert its existence, i.e.:
MyObject *foo = [self someMethod];
assert(foo);
This is sooo easy to type, but will save you hours of debugging when a nil object causes problems thousands of instructions later. You can assert on class too:
MyObject *foo = [self someMethod];
assert([foo isMemberOfClass:[MyObject class]]);
This is particularly useful when you are pulling objects out of dictionaries.
You can also put asserts at the start of methods, to verify that the objects you received are not nil.
You can assert that some variable has a value:
assert(i>1 && i<5);
Again, in all but the Release/Distribution configuration the asserts are "live"; in that configuration they are compiled out.
NSAssert is similar - but you have to use different macros depending on the number of arguments.
NSAssert(foo, @"Foo was nil");
NSAssert1(foo, @"Foo was nil, and i=%d", i);
...
6) Use selective logs to ensure that things that should happen actually do
You can define your own Log macro, which gets compiled out for Release/Distribution. Add this to your pch file:
#ifndef NDEBUG
#define MYLog NSLog
#else
#define MYLog(format, ...)
#endif
This brings up another point - you can use:
#ifndef NDEBUG
...
#endif
to block out a bit of code that does more complex checks - and it's only active for your development builds, not for Release/Distribution.
You may already do these, but here's what I do:
Go to every screen in the app using the debugger/simulator, and select "Simulate Memory Warning". Go back to the previous screen. This causes existing UIView objects and variable assignments in viewDidLoad to be reassigned (which could result in bad pointer references).
Make sure to invalidate any running timers in the dealloc.
If an object A has made itself a delegate of object B, make sure to clear it in the dealloc for object A.
Make sure to clear any notification observers in the dealloc.

What's the point of NSAssert, actually?

I have to ask this because the only thing I notice is that if the assertion fails, the app crashes. Is that the reason to use NSAssert? Or what else is the benefit of it? And is it right to put an NSAssert above every assumption I make in code, like a function that should never receive a -1 as a param but may receive a -0.9 or -1.1?
Assert is there to make sure a value is what it's supposed to be. If an assertion fails, something went wrong, and so the app quits. One reason to use assert is if you have a function that will misbehave or create very bad side effects unless one of its parameters has exactly some value (or falls in a range of values): you can put an assert there to make sure the value is what you expect it to be, and if it's not, something is really wrong, and so the app quits. Asserts can be very useful for debugging/unit testing, and also when you provide frameworks, to stop users from doing "evil" things.
I can't really speak to NSAssert, but I imagine that it works similarly to C's assert().
assert() is used to enforce a semantic contract in your code. What does that mean, you ask?
Well, it's like you said: if you have a function that should never receive a -1, you can have assert() enforce that:
void gimme_positive_ints(int i) {
    assert(i > 0);
}
And now you'll see something like this in the error log (or STDERR):
Assertion i > 0 failed: file example.c, line 2
So not only does it safeguard against potentially bad inputs, it also logs them in a useful, standard way.
Oh, and at least in C assert() was a macro, so you could redefine assert() as a no-op in your release code. I don't know if that's the case with NSAssert (or even assert() any more), but it was pretty useful to compile out those checks.
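For illustration, the general shape of such a macro in plain C looks roughly like this. This is a simplified sketch, not the real <assert.h> source; my_assert and my_assert_fail are made-up names:

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for the failure handler the real header calls. */
static void my_assert_fail(const char *expr, const char *file, int line)
{
    fprintf(stderr, "Assertion %s failed: file %s, line %d\n", expr, file, line);
    abort();
}

#ifdef NDEBUG
#  define my_assert(expr) ((void)0)   /* release builds: compiled out entirely */
#else
#  define my_assert(expr) \
      ((expr) ? (void)0 : my_assert_fail(#expr, __FILE__, __LINE__))
#endif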
NSAssert gives you more than just crashing the app. It tells you the class, method, and line where the assertion occurred. All assertions can also be easily deactivated using NS_BLOCK_ASSERTIONS, which makes them more suitable for debugging. On the other hand, throwing an NSException only crashes the app; it does not tell you the location of the exception, nor can it be disabled so simply.
The app crashes because an assertion also raises an exception, as the NSAssert documentation states:
When invoked, an assertion handler prints an error message that includes the method and class names (or the function name). It then raises an NSInternalInconsistencyException exception.
Apart from what everyone said above, the default behaviour of NSAssert() (unlike C’s assert()) is to throw an exception, which you can catch and handle. For instance, Xcode does this.
Just to clarify, as somebody mentioned but not fully explained, the reason for having and using asserts instead of just creating custom code (doing ifs and raising an exception for bad data, for instance) is that asserts SHOULD be disabled for production applications.
While developing and debugging, asserts are enabled for you to catch errors. The program will halt when an assert is evaluated as false.
But, when compiling for production, the compiler omits the assertion code and actually MAKES YOUR PROGRAM RUN FASTER. By then, hopefully, you have fixed all the bugs.
In case your program still has bugs while in production (when assertions are disabled and the program "skips over" the assertions), your program will probably end up crashing at some other point.
From NSAssert's help: "Assertions are disabled if the preprocessor macro NS_BLOCK_ASSERTIONS is defined."
So, just put the macro in your distribution target [only].
NSAssert (and its stdlib equivalent assert) are there to detect programming errors during development. You should never have an assertion that fails in a production (released) application. So you might assert that you never pass a negative number to a method that requires a positive argument. If the assertion ever fails during testing, you have a bug. If, however, the value that's passed is entered by the user, you need to do proper validation of the input rather than relying on the assertion in production (you can set a #define for release builds that disables NSAssert*).
Assertions are commonly used to enforce the intended use of a particular method or piece of logic. Let's say you were writing a method which calculates the sum of two greater than zero integers. In order to make sure the method was always used as intended you would probably put an assert which tests that condition.
Short answer: They enforce that your code is used only as intended.
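A plain C sketch of that kind of intended-use check (sum_of_positives is an invented name for illustration):

#include <assert.h>

/* Contract: both arguments must be greater than zero. */
int sum_of_positives(int a, int b)
{
    assert(a > 0 && b > 0);   /* violating the contract is a programming error */
    return a + b;
}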
It's worthwhile to point out that, aside from runtime checking, assert programming is an important facility when you design your code by contract.
More info on the subject of assertion and design by contract can be found below:
Assertion (software development)
Design by contract
Programming With Assertions
Design by Contract, by Example [Paperback]
To fully answer the question: the point of any type of assert is to aid debugging. It is more valuable to catch errors at their source than to catch them in the debugger when they cause crashes.
For example, you may pass a value to a function that expects values in a certain range. The function may store the value for later use, and on that later use the application crashes. The call stack seen in this scenario would not show the source of the bad value. It's better to catch the bad value as it comes in, to find out who's passing it and why.
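A plain C sketch of that scenario (the slot bookkeeping is invented purely for illustration):

#include <assert.h>

#define SLOT_COUNT 64

static int table[SLOT_COUNT];
static int saved_slot;

/* The caller promises 0 <= slot < SLOT_COUNT; assert at the entry point. */
void choose_slot(int slot)
{
    assert(slot >= 0 && slot < SLOT_COUNT);   /* bad value caught at its source */
    saved_slot = slot;
}

/* Without the assert above, a bad slot stored earlier would only show up
 * here, far from the caller that actually passed it in. */
int read_chosen_slot(void)
{
    return table[saved_slot];
}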
NSAssert makes the app crash when its condition is not met. If the condition is met, the following statements execute. Look at the example below:
I just created an app to test what NSAssert does:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    [self testingFunction:2];
}

- (void)testingFunction:(int)anNum {
    // If anNum < 2 the app will crash and the NSLog statement will not
    // execute, so you will not see the string
    // "This statement will execute when anNum >= 2"
    // in the log console window of Xcode.
    NSAssert(anNum >= 2, @"The number you entered is less than 2");

    // If anNum >= 2 the app will not crash and the statement below
    // will execute.
    NSLog(@"This statement will execute when anNum >= 2");
}
Since 2 is passed into my code, the app will not crash. The test cases are:
anNum >= 2 -> The app will not crash and you can see the log string "This statement will execute when anNum >= 2" in the output console window.
anNum < 2 -> The app will crash and you cannot see that log string.
