Detox tests hang with pending items on dispatch queue - iOS

I am writing e2e tests with Detox to test a Firebase app in React Native. It looks like the call to firebase.auth().signInWithPhoneNumber(number) dispatches some work items onto the dispatch queue, but these items never seem to be dequeued, so the tests cannot proceed. My hunch is that the sign-in call makes a network request that never resolves.
Here is the log:
detox[41991] INFO: [APP_STATUS] The app is busy with the following tasks:
• There are 2 work items pending on the dispatch queue: "Main Queue (<OS_dispatch_queue_main: com.apple.main-thread>)".
• Run loop "Main Run Loop" is awake.
I have read through this troubleshooting guide, and it looks like the operation is on the main (native) thread and that this is a "waiting too much" issue.
Is there a way to inspect the items on the dispatch queue to further understand what they are? I have tried running the /usr/bin/xcrun simctl spawn <device> log stream --level debug --style compact --predicate 'process == "myapp"' but I don't understand the output. If it is useful I can upload the logs.
I'm hoping I can post some logs of some sort and someone can help me to find the reason for the items on the dispatch queue or point me in the right direction.
I have no experience with native development so device system logs and Objective C/Swift code mean nothing to me.
Thanks
Detox version: 19.4.2
React Native version: 0.67.4
Node version: v12.22.6
Device model: iPhone 11 Simulator
OS: iOS
Test runner: jest-circus

To answer the question: No, there is no easy way to inspect the dispatch queue. You must go through the internals of the app, add logging, and comment out portions of the code until you figure out what is causing the issue.
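As a starting point for the "add logging" step, here is a minimal sketch of my own (not a Detox feature, and the CFRunLoopObserver approach is an assumption on my part, not something from the original answer): attach an observer to the main run loop from native code and log its activity while Detox reports the app as busy, to see whether the main run loop keeps waking up without ever draining its pending work.

import Foundation

// Sketch only: log main run loop activity while reproducing the hang.
// Call this once early in app startup (e.g. from the AppDelegate).
func installMainRunLoopLogger() {
    let observer = CFRunLoopObserverCreateWithHandler(
        kCFAllocatorDefault,
        CFRunLoopActivity.allActivities.rawValue,
        true,   // repeats
        0       // order
    ) { _, activity in
        if activity.contains(.beforeSources) {
            NSLog("main run loop: about to process sources")
        }
        if activity.contains(.beforeWaiting) {
            NSLog("main run loop: about to sleep (work drained)")
        }
    }
    CFRunLoopAddObserver(CFRunLoopGetMain(), observer, .commonModes)
}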
EDIT: As of 2022-08-07, updating your @react-native-firebase/* packages to >=v15.3.0 should fix the issue.
About your specific synchronization problem...
This problem is caused by a bug in the @react-native-firebase/messaging implementation of application:didReceiveRemoteNotification:fetchCompletionHandler: [!].
Their implementation races with FIRAuth's didReceiveRemoteNotification handler, which causes @react-native-firebase/messaging to never call the completion handler when FIRAuth's handler runs first.
A PR is in progress for this to be fixed upstream, but in the meantime you can use the following patch if you have set up patch-package:
diff --git a/node_modules/@react-native-firebase/messaging/ios/RNFBMessaging/RNFBMessaging+AppDelegate.m b/node_modules/@react-native-firebase/messaging/ios/RNFBMessaging/RNFBMessaging+AppDelegate.m
index ec26b70..743fe41 100644
--- a/node_modules/@react-native-firebase/messaging/ios/RNFBMessaging/RNFBMessaging+AppDelegate.m
+++ b/node_modules/@react-native-firebase/messaging/ios/RNFBMessaging/RNFBMessaging+AppDelegate.m
@@ -123,6 +123,18 @@ - (void)application:(UIApplication *)application
     completionHandler(UIBackgroundFetchResultNoData);
     return;
   }
+
+  // If the notification is a probe notification, always call the completion
+  // handler with UIBackgroundFetchResultNoData.
+  //
+  // This fixes a race condition between `FIRAuth/didReceiveRemoteNotification` and this
+  // module causing detox to hang when `FIRAuth/didReceiveRemoteNotification` is called first.
+  // see https://stackoverflow.com/questions/72044950/detox-tests-hang-with-pending-items-on-dispatch-queue/72989494
+  NSDictionary *data = userInfo[@"com.google.firebase.auth"];
+  if (data && data[@"warning"]) {
+    completionHandler(UIBackgroundFetchResultNoData);
+    return;
+  }
 #endif
   [[NSNotificationCenter defaultCenter]
[!] in @react-native-firebase/messaging/ios/RNFBMessaging/RNFBMessaging+AppDelegate.m

Related

print - Background or main thread operation

This might sound quite basic and stupid, but it has been bothering me for a while: how can print be classified in terms of operation, main or background?
As a small test, I put print in a background task (a web service call):
Webservice().loadHeadlinesForSource(source: source) { headlines in
    print("background print")
    self.headlineViewModels = headlines.map(HeadlineViewModel.init)
    DispatchQueue.main.async {
        print("main thread print")
        completion()
    }
}
Both print statements get printed. From previous experience, if print were a main-thread task, Xcode would have given me a warning saying that I need to put it on the main thread. This is evidence that print is not a main-thread operation. Note that I am not saying print is a background task.
However, my understanding is that since print displays output on the console, it is not a background operation. As a matter of fact, no logging operation is.
How would one justify the classification?
It seems that what you consider a main-thread operation is a call that needs to be performed on the main thread. From that perspective you are correct, and you have found evidence that this call is not a main-thread operation.
But does this tell us anything more? Internally, the method may still execute its real work on the main thread, or on any other thread for all we care. In this sense, a "main-thread operation" is a restriction that the call must be made on the main thread; it says nothing about where its work actually executes or about multithreading.
Without looking at how print is implemented, we can see that it works across multiple "computers": you can run your app on your plugged-in device (iPhone) and Xcode on your computer will print out the logs. This suggests that print behaves much like a call to a remote server, where the server is responsible for serializing the events, so it makes no difference which thread the client is on. There are other possibilities, such as dropping logs into a file and then sending it, but that makes little practical difference.
So how can print be classified: main or background? The answer is probably neither. The call is not restricted to any thread, so it is not "main". It will probably block whatever thread it is on until the operation is complete, so it is not "background" either. Think of it like Data(contentsOf:), which blocks the current thread until the data at the given URL is retrieved (or an error is thrown).
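As a small, self-contained sketch of that point (my own example, not from the question): print can be called from any queue; the only constraint is that it blocks whichever thread it runs on until the line is written.

import Foundation

// Sketch: print is not restricted to the main thread; it simply blocks the
// thread it runs on until the output is written.
let group = DispatchGroup()

DispatchQueue.global(qos: .utility).async(group: group) {
    print("printed from a background queue")  // no main-thread checker warning here
}

group.wait()                                  // wait for the background work to finish
print("printed from the main thread")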

Completion block thread

I have this piece of code:
[[FBController sharedController] getUserDetailsWithCompletionBlock:^(NSDictionary *details) {
    // Updating UI elements
}];
There is one thing I don't understand: when the block is fired, the secondary thread is still running. Wouldn't it be more correct for the completion block to be executed on the main thread automatically?
I know that I am wrong about something, and I need a couple of explanations.
The Facebook SDK documentation should give you more details, but in general a well-behaved SDK will call completion blocks on the same thread that the SDK was called from. Any long-running or asynchronous operations that the SDK performs should run on a separate thread, usually visible only to the SDK. Whether that separate thread is still running is an implementation detail of the SDK, and you shouldn't care about it from the client code's perspective.
You can visualise it like this:
Client Code (Main Thread)  : [Request]--[Response]--[Continue Thread]-------[Completion Block]
                                 v           ^                                      ^
SDK Code (Main Thread)     :  [Immediate Operations]                                |
                                 v                                                  |
SDK Code (Private Thread)  :  [Long Running / Asynchronous Operations]--[Finished]--+
In the specific example you posted, there's no 'Response' from the getUserDetailsWithCompletionBlock method, so the thread carries on as usual.
The missing piece of the jigsaw puzzle might be: "How does my completion block get executed on the main thread?" Essentially, this comes down to the run loop system. Your main thread isn't actually owned and operated by your code; it is managed behind the scenes. There's a main run loop which periodically looks for things to do. When there's something to do, it performs those things on the main thread sequentially, and when they have finished, it goes back to looking for something else to do. The SDK basically adds your completion block to the main run loop, so the next time the loop fires, your block is there waiting to be executed.
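To make that concrete, here is a rough sketch of what such an SDK method might look like internally (FakeSDK and its details are made up for illustration): the long-running work happens on a private queue, and the completion block is handed back to the main queue, where the main run loop picks it up.

import Foundation

// Hypothetical sketch of a well-behaved SDK call: work on a private queue,
// completion delivered back on the main queue.
final class FakeSDK {
    private let workQueue = DispatchQueue(label: "com.example.fakesdk.work")

    func getUserDetails(completion: @escaping ([String: Any]) -> Void) {
        workQueue.async {
            // Long-running / asynchronous operation on the SDK's private thread.
            let details: [String: Any] = ["name": "Jane"]

            // Hand the result back to the main queue so the caller can
            // safely update UI elements from the completion block.
            DispatchQueue.main.async {
                completion(details)
            }
        }
    }
}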
Other things that the runloop might be doing are:
UI Updates
Delegate callbacks from UI code
Handling Timers
Touch Handling
etc... etc...

Debugging with semaphores in Xcode - Grand Central Dispatch - iOS

Say I have a few blocks of code in my project with this pattern:
dispatch_semaphore_wait(mySemaphore, DISPATCH_TIME_FOREVER);
// Arbitrary code here that I could get stuck on and not signal
dispatch_semaphore_signal(mySemaphore);
And let's say I pause in my debugger to find that I'm stuck on:
dispatch_semaphore_wait(mySemaphore, DISPATCH_TIME_FOREVER);
How can I easily see where the semaphore was last consumed? As in, where can I see that dispatch_semaphore_wait(mySemaphore, DISPATCH_TIME_FOREVER); was called and got through to the next line of code? The trivial way would be to use NSLogs, but is there a fancier/faster way to do this in the debugger with Xcode 4?
You can print debugDescription of the semaphore object in the debugger (e.g. via po), which will give you the current and original value (i.e. value at creation) of the semaphore.
As long as the current value is < 0, dispatch_semaphore_wait will wait for somebody else to call dispatch_semaphore_signal and increment the value.
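As a small sketch of those value semantics (my own example, using the Swift wrapper around the same GCD semaphore object): wait() decrements the value and blocks while it would drop below zero, signal() increments it again, and pausing in the debugger to run po on the semaphore prints its current and original values.

import Foundation

// Sketch: semaphore value semantics. The value starts at 0 here, so wait()
// blocks this thread until the delayed signal() brings it back up.
let semaphore = DispatchSemaphore(value: 0)

DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
    semaphore.signal()   // increments the value, waking the waiter
}

// Pausing the debugger while this is blocked and running `po semaphore`
// shows the current and original (creation-time) value.
semaphore.wait()
print("semaphore was signalled")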
There is currently no automatic built-in way to trace calls to dispatch_semaphore_signal/dispatch_semaphore_wait over time, but that is a useful feature request to file at bugreport.apple.com
One way to trace this yourself would be by creating symbolic breakpoints on those functions in Xcode, adding a 'Debugger Command' breakpoint action that executes bt and setting the flag to "Automatically continue after evaluating" the breakpoint.
Another option would be to use DTrace pid probes to trace those functions with an action that calls ustack().

Google Analytics for iOS not dispatching events

I'm using the latest SDK version, and the basic code to register and send a page view:
[[GANTracker sharedTracker] startTrackerWithAccountID:@"UA-MY_ACCOUNT_ID-1"
                                       dispatchPeriod:10
                                             delegate:self];
NSError *error;
if (![[GANTracker sharedTracker] trackPageview:@"/firstpage"
                                     withError:&error]) {
    NSLog(@"tracker failed: %@", error);
}
However, the events are not dispatched from the device or the simulator, and there are no errors either. When I turn on the debug flag, I can see the following:
dispatch called
dispatching 4 events
[after 10 seconds]
dispatch called
...dispatcher was busy
[after 10 seconds]
dispatch called
...dispatcher was busy
My delegate method never gets called:
- (void)trackerDispatchDidComplete:(GANTracker *)tracker
                  eventsDispatched:(NSUInteger)eventsDispatched
              eventsFailedDispatch:(NSUInteger)eventsFailedDispatch {
    NSLog(@"success: %d failures: %d", eventsDispatched, eventsFailedDispatch);
}
I tried to create a new publisher ID, but that did not help either.
I do have an internet connection from both the device and the simulator.
I deleted the app before trying.
I played with the dispatch period, setting it to -1 and calling dispatch manually.
Nothing helped... :(
I've been struggling with this for a day now... how can I make it work?
I had the same problem with the dispatcher ("...dispatcher was busy"). In my case, it was because I had run my app normally in the background, and it was using the dispatcher. When I tried to connect the device to Xcode to run and debug the app, the console showed me that message. So the solution was easy:
Stop the app in Xcode
Close the app running in the background
That's it.
You can trigger a manual dispatch after calling the GANTracker, like this:
[[GANTracker sharedTracker] dispatch];
and it works perfectly.

Debugging Erlang heart timeouts

I use the heart program to restart an Erlang node when it becomes unresponsive. However, I am finding it hard to understand why the node freezes. SASL logs don't show any errors, and my own logs don't seem to show anything remarkable happening at those times. Can anybody give advice on debugging this sort of thing?
By default the heart program issues a SIGKILL to kill off the unresponsive VM so it can quickly start a new one. This makes getting any useful information about the VM pretty much impossible. Something I've tried in the past is to patch the heart program to avoid the hard kill and instead get the VM to create a crash dump and a coredump. I used a patch like this (this one is for Erlang/OTP R14B02):
--- erts/etc/common/heart.c.orig   2011-04-17 12:11:24.000000000 -0400
+++ erts/etc/common/heart.c        2011-04-17 12:12:36.000000000 -0400
@@ -559,10 +559,11 @@
     int res;
     if(heart_beat_kill_pid != 0){
         pid = (pid_t) heart_beat_kill_pid;
-        res = kill(pid,SIGKILL);
+        res = kill(pid,SIGUSR1);
+        sleep(4);
         for(i=0; i < 5 && res == 0; ++i){
             sleep(1);
-            res = kill(pid,SIGKILL);
+            res = kill(pid,i < 2 ? SIGQUIT : SIGKILL);
         }
         if(errno != ESRCH){
             print_error("Unable to kill old process, "
As you can see, with this patch heart will first issue a SIGUSR1 to try to get the VM to create a crash dump. Since this can take a while, heart then sleeps for 4 seconds. You might have to increase this sleep time if you're not getting full crash dumps. After that, heart tries twice to issue a SIGQUIT in the hope of getting a coredump, and if that fails, issues a SIGKILL.
Note that this patch will slow down heart's VM restart due to the time required to wait for the crash dumps and coredumps. If you use it in production, be aware of this limitation.
You could try to call erlang:halt/1 from your HEART_COMMAND, thus creating a crash dump from the unresponsive node.
You can try using the erl_call tool, e.g. with -a erlang halt 123.
If the Erlang node can't respond to this, that is also interesting information.
Did you try increasing HEART_BEAT_TIMEOUT? Maybe the node is just bogged down a bit and misses the timeout but doesn't actually freeze.
If you have any idea of why it is freezing you could try to trace the module using dbg.
http://www.erlang.org/doc/man/dbg.html
In short, try
dbg:tracer(), dbg:p(all,c), dbg:tpl(Module, Function, x).
If you want to stop this tracing, issue
dbg:ctpl().
See documentation for more info.
Note: Change Module and Function to whatever you want to trace, leave x as it is. You can also skip Function and only give Module, x.
Warning: Running this on a live system can be dangerous as the amount of information that is going to be printed to the shell can be enormous.
