Stack and heap behavior of a recursive function or block? - ios

Imagine a situation like this:
- (void)doSomethingWithView:(UIView *)view {
    for (UIView *oneView in view.subviews) {
        [self doSomethingWithView:oneView];
    }
}
or a block like
void (^doSomething)(NSArray *numbers);
doSomething = ^void(NSArray *numbers){
    // ... bla bla
    if (condition) {
        doSomething(numbers);
    }
};
What happens in terms of stack and heap? My feeling is that the blocks/functions may generate a lot of stuff on the stack and heap that will never be released, up to the point where the app runs out of memory and crashes.
Do I run this risk?

It all depends on your code and the depth of the recursion.
But in any case, you can do something like this:
void (^doSomething)(NSArray *numbers);
doSomething = ^void(NSArray *numbers){
    // ... bla bla
    if (condition) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0ull), ^{
            doSomething(numbers);
        });
    }
};
In this case you launch the block again, but because you are using dispatch_async, the outer block returns and the new block continues separately on the concurrent queue, so the call stack does not keep growing.
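If the concern is purely call-stack depth for the view-hierarchy example, you can also avoid recursion entirely by keeping an explicit work list. A minimal sketch, assuming ARC (the method name is illustrative, not from the question):

- (void)doSomethingWithViewIteratively:(UIView *)rootView {
    // Breadth-first walk driven by an explicit queue instead of the call stack,
    // so stack usage stays constant no matter how deep the view hierarchy is.
    NSMutableArray<UIView *> *pending = [NSMutableArray arrayWithObject:rootView];
    while (pending.count > 0) {
        UIView *current = pending.firstObject;
        [pending removeObjectAtIndex:0];
        // ... do something with current ...
        [pending addObjectsFromArray:current.subviews];
    }
}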

Related

Elegant way to execute code on function exit in Dart

Suppose we need to execute some code when a function finishes, no matter how.
Example:
void myFunc() async {
  await myLock.acquire();
  if(...) {
    myLock.release();
    return;
  }
  ...
  myLock.release();
}
Many languages have features that allow you to achieve this more elegantly than manually calling myLock.release() before every return statement (for example, defer in Go). Is something like that also possible in Dart?
Dart does not have RAII. You instead would need to use try-finally.
(Dart did recently (in 2.17) add Finalizers, but those would fire when objects are garbage collected, which might happen at some non-deterministic time, if ever.)
And just for the record, an example of using try/finally:
void myFunc() async {
  await myLock.acquire();
  try {
    if(...) {
      return;
    }
    ...
  } finally {
    myLock.release();
  }
}
You'd want to start the try after allocating the resource, so that you don't try to release if allocation throws.

Does calling performBlockAndWait twice on a single thread cause a deadlock?

I have found something like this in performBlockAndWait documentation:
This method may safely be called reentrantly.
My question is whether it means that it will never cause a deadlock when, for example, I invoke it like this on a single context:
NSManagedObjectContext *context = ...
[context performBlockAndWait:^{
    // ... some stuff
    [context performBlockAndWait:^{
    }];
}];
You can try it yourself with a small code snippet ;)
But true, it won't deadlock.
I suspect the internal implementation uses a queue-specific token to identify the current queue on which the code executes (see dispatch_queue_set_specific and dispatch_queue_get_specific).
If it determines that the currently executing code is already running on its own private queue or on a child queue, it bypasses submitting the block synchronously - which would cause a deadlock - and instead executes it directly.
A possible implementation might look like this:
func executeSyncSafe(f: () -> ()) {
    if isSynchronized() {
        f()
    } else {
        dispatch_sync(syncQueue, f)
    }
}

func isSynchronized() -> Bool {
    let context = UnsafeMutablePointer<Void>(Unmanaged<dispatch_queue_t>.passUnretained(syncQueue).toOpaque())
    return dispatch_get_specific(&queueIDKey) == context
}
And the queue might be created like this:
private var queueIDKey = 0 // global

init() {
    // The queue label below is illustrative; the original snippet lost the
    // dispatch_queue_create call that the trailing attributes belonged to.
    syncQueue = dispatch_queue_create("sync-queue",
        dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL,
            QOS_CLASS_USER_INTERACTIVE, 0))
    let context = UnsafeMutablePointer<Void>(Unmanaged<dispatch_queue_t>.passUnretained(syncQueue).toOpaque())
    dispatch_queue_set_specific(syncQueue, &queueIDKey, context, nil)
}
dispatch_queue_set_specific associates a token (here context - which is simply the pointer value of the queue) with a certain key for that queue. Later, you can try to retrieve that token for any queue using the same key and check whether the current queue is the same queue or a child queue. If it is, bypass dispatching to the queue and instead call the function f directly.
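A rough Objective-C sketch of the same technique might look like this (the class name and queue label are invented for illustration; this is not Core Data's actual implementation):

#import <Foundation/Foundation.h>

static void *kQueueIdentityKey = &kQueueIdentityKey;

@interface ReentrantSerialExecutor : NSObject
- (void)performBlockAndWait:(dispatch_block_t)block;
@end

@implementation ReentrantSerialExecutor {
    dispatch_queue_t _queue;
}

- (instancetype)init
{
    if ((self = [super init])) {
        _queue = dispatch_queue_create("com.example.reentrant-serial", DISPATCH_QUEUE_SERIAL);
        // Tag the queue with a token (here: the queue pointer itself) under our key.
        dispatch_queue_set_specific(_queue, kQueueIdentityKey, (__bridge void *)_queue, NULL);
    }
    return self;
}

- (void)performBlockAndWait:(dispatch_block_t)block
{
    // If the current queue (or a queue targeting ours) carries our token,
    // dispatch_sync would deadlock, so run the block directly instead.
    if (dispatch_get_specific(kQueueIdentityKey) == (__bridge void *)_queue) {
        block();
    } else {
        dispatch_sync(_queue, block);
    }
}

@end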

Passing variable to void ^() block

I have a method with a callback that looks something like this:
- (void)doStuff:(void (^)())callback
{
    //Do a whole bunch of stuff

    //Perform callback
    callback();
}
I would then call this method later on like this:
[self doStuff:^{[self callbackMethod];}];
This works just fine when there is no data to pass, but now I have some data that I need to pass between the methods.
Take the following method:
- (void)showAViewWithOptions:(int)options
In this method, I show a view with certain options, but if there's something else already on the screen, I call the method to hide it with a callback back to this method.
So the implementation looks like this.
- (void)hideOldView:(void (^)())callback
{
    //Hide all objects in _oldViews and set _oldViews = nil
    callback();
}

- (void)showAViewWithOptions:(int)options
{
    if(_oldViews != nil)
    {
        [self hideOldView:^(int options){[self showAViewWithOptions:options];}];
        return;
    }

    //Show the new view
}
This compiles and runs without issue, but options loses its value after being passed.
Quite frankly, it surprised me that it compiled, since I thought it wouldn't accept a block with arguments.
For instance, if I call [self showAViewWithOptions:4];, when the callback is fired, options = -1730451212.
How do I bind the value options to the block? Or a better question, is this simply not possible because when I call the callback:
callback();
I'm not putting anything into the parentheses?
If so, then a good follow-up question would be: why does this even compile in the first place?
This should work:
- (void)showAViewWithOptions:(int)options
{
    if(_oldViews != nil)
    {
        [self hideOldView:^(){
            // Recursion doesn't feel right; be careful!
            // Why can't whatever is being done by this call be done
            // within this block?
            [self showAViewWithOptions:options];
        }];
        return;
    }

    //Show the new view
}
A block with a return value and parameters looks like this:
^ return_type (parameter1_type parameter1_name, parameter2_type parameter2_name, ...) {
    do_stuff;
};
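For a concrete instance of that syntax, here is a block that takes two ints and returns their sum (a throwaway example, not taken from the question's code):

int (^add)(int, int) = ^int(int first, int second) {
    return first + second;
};
int sum = add(2, 3); // sum == 5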
You can also pass a variable into the method and have the method hand it back when it invokes the callback:
- (void)hideOldViewWithId:(float)f callback:(void (^)(float f))callback {
    f = f + 2.0f;
    callback(f);
}
and then call
[self hideOldViewWithId:1.0f callback:^(float f) {
    NSLog(@"callback with float: %f", f);
}];
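Applied to the original options scenario, the same idea would look roughly like this (a sketch; the hideOldView:withOptions: selector is invented here and is not part of the question's code):

// Hypothetical variant: the callback is declared with a typed int parameter,
// and the hiding method forwards the value it was handed.
- (void)hideOldView:(void (^)(int options))callback withOptions:(int)options
{
    // Hide all objects in _oldViews and set _oldViews = nil ...
    callback(options);
}

// Caller:
[self hideOldView:^(int options) {
    [self showAViewWithOptions:options];
} withOptions:options];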

How do I randomize methods?

I have 4 void statements and I want to randomize them so that one of the 4 triggers at a time. For example, the first void triggers, then the next time maybe the third void triggers, and so forth. Could I use arc4random() or do I need another approach?
Sure, you can use arc4random(). (It's better to use arc4random_uniform(), as @JustSid pointed out in his comments. Off to fix my sample code...) There are a nearly infinite number of ways to do this.
First, a gripe of mine. Don't call methods "voids". That's inaccurate and misleading. (And it makes you sound ignorant about programming.) They're methods. The text inside the parentheses at the beginning of the method tells you what kind of value it returns. If it doesn't return anything, the word "void" is C language notation for "nothing."
So the method:
-(void) foo;
Takes no parameters and doesn't return anything, where the method:
-(BOOL) bar;
...also takes no parameters, but it returns a boolean result.
The first method is not a "void". It is a method that doesn't return a result.
Now, to your question:
You could do something like this:
- (void) foo;
{
    NSLog(@"foo");
}

- (void) bar;
{
    NSLog(@"bar");
}

- (void) foobar;
{
    NSLog(@"foobar");
}

- (void) randomMethod;
{
    int index = arc4random_uniform(3);
    switch (index)
    {
        case 0:
            [self foo];
            break;
        case 1:
            [self bar];
            break;
        case 2:
            [self foobar];
            break;
    }
}
You could also use blocks. You could set up an array of block pointers, use arc4random_uniform() to pick an array index, and execute the appropriate block from the array. (Blocks are objects so you can add them to an array.)
The syntax of blocks and block pointers is a little tricky to follow, so for simplicity I'm not going to write that out. If you're interested I can amend my answer to show how that's done.
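For example, a minimal sketch of that block-based approach (assuming ARC; the wrapper method name is illustrative, and the blocks simply reuse the methods above):

- (void) randomBlockMethod;
{
    // An array of parameterless blocks; arc4random_uniform() picks which one to run.
    NSArray<dispatch_block_t> *actions = @[
        ^{ [self foo]; },
        ^{ [self bar]; },
        ^{ [self foobar]; }
    ];
    uint32_t index = arc4random_uniform((uint32_t)actions.count);
    dispatch_block_t action = actions[index];
    action();
}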
arc4random() is perfect for that.
int a = arc4random() % 4;
switch (a) {
    case 0:
        [self void0];
        break;
    case 1:
        [self void1];
        break;
    // ...
}
For a scalable solution, you can use NSSelectorFromString.
Calling performSelector: with a selector built via NSSelectorFromString will generate this warning: "performSelector may cause a leak because its selector is unknown". If you can't live with the warning, there are solutions for that (one is shown after the method implementations below).
NSArray *methods = @[@"method1", @"method2", @"method3", @"method4"]; //add more if needed
int index = arc4random_uniform((int)methods.count);
NSString *selectedMethod = [methods objectAtIndex:index];
SEL s = NSSelectorFromString(selectedMethod);
[self performSelector:s];
-(void)method1
{
    NSLog(@"method1 is called");
}

-(void)method2
{
    NSLog(@"method2 is called");
}

-(void)method3
{
    NSLog(@"method3 is called");
}

-(void)method4
{
    NSLog(@"method4 is called");
}
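For reference, one common way to silence the performSelector warning mentioned above is to wrap the call in clang diagnostic pragmas:

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Warc-performSelector-leaks"
[self performSelector:s];
#pragma clang diagnostic pop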

Why am I getting deadlock with dispatch_once?

Why am I deadlocking?
- (void)foo
{
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        [self foo];
    });

    // whatever...
}
I expect foo to be executed twice on the first call.
Neither of the existing answers is quite accurate (one is dead wrong, the other is a bit misleading and misses some critical details). First, let's go right to the source:
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
    struct _dispatch_once_waiter_s * volatile *vval =
            (struct _dispatch_once_waiter_s**)val;
    struct _dispatch_once_waiter_s dow = { NULL, 0 };
    struct _dispatch_once_waiter_s *tail, *tmp;
    _dispatch_thread_semaphore_t sema;

    if (dispatch_atomic_cmpxchg(vval, NULL, &dow)) {
        dispatch_atomic_acquire_barrier();
        _dispatch_client_callout(ctxt, func);
        dispatch_atomic_maximally_synchronizing_barrier();
        //dispatch_atomic_release_barrier(); // assumed contained in above
        tmp = dispatch_atomic_xchg(vval, DISPATCH_ONCE_DONE);
        tail = &dow;
        while (tail != tmp) {
            while (!tmp->dow_next) {
                _dispatch_hardware_pause();
            }
            sema = tmp->dow_sema;
            tmp = (struct _dispatch_once_waiter_s*)tmp->dow_next;
            _dispatch_thread_semaphore_signal(sema);
        }
    } else {
        dow.dow_sema = _dispatch_get_thread_semaphore();
        for (;;) {
            tmp = *vval;
            if (tmp == DISPATCH_ONCE_DONE) {
                break;
            }
            dispatch_atomic_store_barrier();
            if (dispatch_atomic_cmpxchg(vval, tmp, &dow)) {
                dow.dow_next = tmp;
                _dispatch_thread_semaphore_wait(dow.dow_sema);
            }
        }
        _dispatch_put_thread_semaphore(dow.dow_sema);
    }
}
So what really happens is, contrary to the other answers, the onceToken is changed from its initial state of NULL to point to an address on the stack of the first caller &dow (call this caller 1). This happens before the block is called. If more callers arrive before the block is completed, they get added to a linked list of waiters, the head of which is contained in onceToken until the block completes (call them callers 2..N).

After being added to this list, callers 2..N wait on a semaphore for caller 1 to complete execution of the block, at which point caller 1 will walk the linked list signaling the semaphore once for each caller 2..N. At the beginning of that walk, onceToken is changed again to be DISPATCH_ONCE_DONE (which is conveniently defined to be a value that could never be a valid pointer, and therefore could never be the head of a linked list of blocked callers.) Changing it to DISPATCH_ONCE_DONE is what makes it cheap for subsequent callers (for the rest of the lifetime of the process) to check the completed state.
So in your case, what's happening is this:
The first time you call -foo, onceToken is nil (which is guaranteed by virtue of statics being guaranteed to be initialized to 0), and gets atomically changed to become the head of the linked list of waiters.
When you call -foo recursively from inside the block, your thread is considered to be "a second caller" and a waiter structure, which exists in this new, lower stack frame, is added to the list and then you go to wait on the semaphore.
The problem here is that this semaphore will never be signaled because in order for it to be signaled, your block would have to finish executing (in the higher stack frame), which now can't happen due to a deadlock.
So, in short, yes, you're deadlocked, and the practical takeaway here is, "don't try to call recursively into a dispatch_once block." But the problem is most definitely NOT "infinite recursion", and the flag is most definitely not only changed after the block completes execution -- changing it before the block executes is exactly how it knows to make callers 2..N wait for caller 1 to finish.
You could alter the code a little so that the recursive call happens outside the block and there's no deadlock, something like this:
- (void)foo
{
    static dispatch_once_t onceToken;
    __block BOOL shouldRunTwice = NO;
    dispatch_once(&onceToken, ^{
        shouldRunTwice = YES;
    });

    if (shouldRunTwice) {
        [self foo];
    }

    // whatever...
}
