How to handle slow network connections with UIAutomation - iOS

I noticed that all my UI tests fail when the network is slow. For instance, a user would try to log in, and the next screen wouldn't load fast enough for another UIElement to be on screen.
How can I handle a slow network connection without using delay()?

You should definitely take a look at multi-threading. When handling network connections, you should do all of this processing on a secondary thread. Otherwise, the main thread will be blocked and the app will become unresponsive to the user.
Multi-threading is a big subject. I recommend starting with Apple's reference documentation on the topic. You can also refer to a great course on iTunes U (lecture 11).
If you just want to give it a shot, here's roughly the code you will need:
dispatch_queue_t newQueue = dispatch_queue_create("networkQueue", NULL);
dispatch_async(newQueue, ^{
    // here you need to call the networking processes
    dispatch_async(dispatch_get_main_queue(), ^{
        // if you need to update your UI, you need to get back to the main queue.
        // This block will be executed in your main queue.
    });
});

The only way I know of is using a delay. I usually show an activity indicator while loading data from the network, so I delay while the activity indicator is visible:
while (activityIndicator.isVisible()) {
    UIALogger.logMessage("Loading");
    UIATarget.localTarget().delay(1);
}

Check out the pushTimeout and popTimeout methods on UIATarget. You can find them in the UIATarget class reference.
Here is one code example from our iOS app's UIAutomation tests:
// Tap "Post" button, which starts a network request
mainWindow.buttons()["post.button.post"].tap();
// Wait a maximum of 30 seconds for the "OKAY" button to become valid
target.pushTimeout(30);
// Tap the button which is shown from the network request success callback
mainWindow.buttons()["dialog.button.okay"].tap();
// End the wait scope
target.popTimeout();

Related

Firebase A/B test not counting users when activation event is used on iOS

We're using the current version of the Firebase iOS framework (5.9.0) and we're seeing a strange problem when trying to run A/B test experiments that have an activation event.
Since we want to run experiments on first launch, we have a custom splash screen that we display on app start while the remote config is being fetched. After the fetch completes, we immediately activate the fetched config and then check whether we received info about experiment participation, so we can configure the next UI appropriately. There are additional checks done before we determine that the current instance should, in fact, be part of the test, hence the activation event. Basically, the code looks like:
<code that shows splash>
…
[[FIRRemoteConfig remoteConfig] fetchWithExpirationDuration:7 completionHandler:^(FIRRemoteConfigFetchStatus status, NSError * _Nullable error) {
    [[FIRRemoteConfig remoteConfig] activateFetched];
    if (<checks that see if we received info about being selected to participate in the experiment and if local conditions are met for experiment participation>) {
        [FIRAnalytics logEventWithName:@"RegistrationEntryExperimentActivation" parameters:nil];
        <dismiss splash screen and show next UI screen based on experiment variation received in remote config>
    } else {
        <dismiss splash screen and show next UI screen>
    }
}];
The approach above (which is completely straightforward, IMO) does not work correctly. After spending time with the debugger and Firebase logging enabled, I can see in the log that there is a race-condition problem. Basically, the Firebase activateFetched call does not set up the "conditional user property experiment ID" synchronously inside the call, but instead sets it up some short time afterwards. Because of this, firing the activation event immediately after activateFetched does not trigger this conditional user property, and subsequent experiment funnel/goal events are not properly marked as part of an experiment (the experiment is not even activated in the first place).
If we change the code to delay sending the activation event by some arbitrary amount:
<code that shows splash>
…
[[FIRRemoteConfig remoteConfig] fetchWithExpirationDuration:7 completionHandler:^(FIRRemoteConfigFetchStatus status, NSError * _Nullable error) {
    [[FIRRemoteConfig remoteConfig] activateFetched];
    if (<checks that see if we received info about being selected to participate in the experiment and if local conditions are met for experiment participation>) {
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            [FIRAnalytics logEventWithName:@"RegistrationEntryExperimentActivation" parameters:nil];
            <dismiss splash screen and show next UI screen based on experiment variation received in remote config>
        });
    } else {
        <dismiss splash screen and show next UI screen>
    }
}];
the conditional user property for the experiment gets set up correctly beforehand and is triggered by the event (so the experiment is activated and subsequent events are correctly marked as part of it).
Now, this code is obviously quite ugly and prone to race conditions. The delay of 0.5 seconds is set conservatively in the hope that it is enough on all iOS devices, but ¯\_(ツ)_/¯. I've read the available documentation multiple times and tried looking at all the available API methods, with no success in figuring out what the correct point to start sending events should be. If the activateFetched method reconfigures internal objects asynchronously, one would expect a callback that tells the caller when everything is done and ready for further use by the application. It seems the framework engineers didn't anticipate a use case where someone needs to send the activation event immediately after remote config profile activation…
Has anyone else experienced this problem? Are we missing something in the API? Is there a smarter way of letting activateFetched finish its thing?
Hope some Firebase engineers can chime in with their wisdom as well :)
Thanks

iOS: Handling OpenGL code running on background threads during App Transition

I am working on an iOS application that, say on a button click, launches several threads, each executing a piece of OpenGL code. These threads either have a different EAGLContext set on them, or, if they use the same EAGLContext, they are synchronised (i.e. two threads don't set the same EAGLContext in parallel).
Now suppose the app goes into the background. As per Apple's documentation, we should stop all OpenGL calls in the applicationWillResignActive: callback, so that by the time applicationDidEnterBackground: is called, no further GL calls are made.
I am using dispatch queues to create background threads. For example:
__block Byte *renderedData; // some memory already allocated
dispatch_sync(glProcessingQueue, ^{
    [EAGLContext setCurrentContext:_eaglContext];
    glViewport(...);
    glBindFramebuffer(...);
    glClear(...);
    glDrawArrays(...);
    glReadPixels(...); // read into renderedData
});
// use renderedData for something else
My question is: how do I handle applicationWillResignActive: so that any such background GL calls are not just stopped, but can also resume in applicationDidBecomeActive:? Should I wait for currently running blocks to finish before returning from applicationWillResignActive:? Or should I just suspend glProcessingQueue and return?
I have also read that the same applies when the app is interrupted in other ways, like an alert being displayed, a phone call, etc.
I can have multiple such threads at any point in time, invoked by possibly multiple view controllers, so I am looking for a scalable solution or design pattern.
The way I see it, you need to either pause a thread or kill it.
If you kill it, you need to ensure all resources are released, which most likely means calling OpenGL again. In that case it might actually be better to simply wait for the block to finish executing. That means the block must not take too long to finish, which is impossible to guarantee, and since you have multiple contexts and threads this may realistically be an issue.
So pausing seems better. I am not sure there is a direct API to pause a thread, but you can make it wait. Maybe a system similar to this one can help.
The linked example seems to handle exactly what you want; it already checks the current thread and locks that one. I guess you could pack that into some tool as a static method or a C function, and wherever you are confident you can pause the thread, you would simply do something like:
dispatch_sync(glProcessingQueue, ^{
    [EAGLContext setCurrentContext:_eaglContext];
    [ThreadManager pauseCurrentThreadIfNeeded];
    glViewport(...);
    glBindFramebuffer(...);
    [ThreadManager pauseCurrentThreadIfNeeded];
    glClear(...);
    glDrawArrays(...);
    glReadPixels(...); // read in renderedData
    [ThreadManager pauseCurrentThreadIfNeeded];
});
You might still have an issue with the main thread if it is used. You might want to skip the pause on that one, otherwise your system may simply never wake up again (not sure though, try it).
So now you are looking at an interface for your ThreadManager along the lines of:
// One possible way to fill in the locking is an NSCondition shared by all threads:
static NSCondition *__pauseCondition;   // assume this is created once, e.g. in +initialize
static BOOL __threadsPaused = NO;
+ (void)pause {
    [__pauseCondition lock];
    __threadsPaused = YES;
    [__pauseCondition unlock];
}
+ (void)resume {
    [__pauseCondition lock];
    __threadsPaused = NO;
    [__pauseCondition broadcast];       // wake every thread parked below
    [__pauseCondition unlock];
}
+ (void)pauseCurrentThreadIfNeeded {
    if ([NSThread isMainThread]) { return; }   // never park the main thread (see the note above)
    [__pauseCondition lock];
    while (__threadsPaused) {
        [__pauseCondition wait];        // blocks this thread until -resume broadcasts
    }
    [__pauseCondition unlock];
}
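As for the other half of the question (whether to just suspend glProcessingQueue and return): if pausing at the queue level is enough, GCD already gives you dispatch_suspend and dispatch_resume. The catch is that suspension only stops blocks that have not started yet, so you still need to drain the block that is already running before returning from applicationWillResignActive:. A minimal sketch, assuming glProcessingQueue is reachable from the app delegate and that new GL work is only ever enqueued from the main thread (so nothing can slip in between the drain and the suspend):
- (void)applicationWillResignActive:(UIApplication *)application {
    // Wait for everything already enqueued on the serial queue to finish.
    // An empty block is enough: by the time it runs, all earlier GL blocks are done.
    dispatch_sync(glProcessingQueue, ^{
        // Optionally call glFinish() here (with the right EAGLContext current)
        // so the GPU has consumed every submitted command before backgrounding.
    });
    // From this point on, blocks added to glProcessingQueue stay queued but do not start.
    dispatch_suspend(glProcessingQueue);
}

- (void)applicationDidBecomeActive:(UIApplication *)application {
    // Pending GL blocks continue exactly where they left off.
    dispatch_resume(glProcessingQueue);
}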
Let us know what you find out.

Background processing a difficult task in viewDidLoad in Swift

I am currently making an app which initiates a very long task in the viewDidLoad method. This stops the view from loading at all for extended periods of time and sometimes causes the app to crash. As such, I need to process this particular task in the background so that the view can load instantly, then complete the task in the background and update the view when it is done. Does anybody know how to do this?
Try using GCD, similar to this:
let backgroundQueue = dispatch_get_global_queue(CLong(DISPATCH_QUEUE_PRIORITY_HIGH), 0)
dispatch_async(backgroundQueue) {
    // Do your work on the global (background) queue.
    dispatch_async(dispatch_get_main_queue(), { () -> Void in
        // When finished, update the UI on the main queue.
    })
}
Or you can use NSOperationQueue. There was a very nice talk about it at WWDC 2015: https://developer.apple.com/videos/wwdc/2015/?id=226
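If you go the NSOperationQueue route, the shape is much the same. Here is a rough sketch (written in Objective-C; addOperationWithBlock: and mainQueue map one-to-one to the Swift calls above, and doSomethingExpensive / updateUIWithResult: are placeholders for your own long task and UI update):
NSOperationQueue *backgroundQueue = [[NSOperationQueue alloc] init];
[backgroundQueue addOperationWithBlock:^{
    // Runs off the main thread, so viewDidLoad can return immediately.
    NSArray *result = [self doSomethingExpensive];       // placeholder for the long task
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [self updateUIWithResult:result];                // placeholder: refresh the view on the main queue
    }];
}];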

I'm not sure whether I'm using NSOperationQueue's addOperationWithBlock correctly

I've been using NSOperationQueue's addOperationWithBlock: to run code in background threads, like so:
self.fetchDataQueue = NSOperationQueue()
for panel in self.panels {
    self.fetchDataQueue.addOperationWithBlock() { () -> Void in
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
            // Background code
        }
    }
}
I'm concerned that I may be doing this wrong. I can't see a way that the fetch queue would be able to know when an operation is done, since there's no completion to call, and I'm not confident it's tracking activity across threads to make sure it's still going.
The point of using this is so that I don't queue them up single file (which would take much longer to process), but also don't run them all at once and use too much memory.
EDIT: I'm aware that I don't need to be doing dispatch_async, but it's simply an example of some block-based code I may call which may do the same thing, or a web request which may get back after a delay.
Well, your code will run in a background block. If you are using the queue to make sure that one operation only starts when the previous one has finished, you may be in trouble: the block that you hand to the NSOperationQueue has finished as soon as it has dispatched the background code to GCD, not when the background code has actually finished, which may be much later.
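If the work really is asynchronous (a web request, say), one way to keep the operation alive until the work is done is to block it on a semaphore that the completion handler signals. A sketch in Objective-C (the question's code is Swift, but the APIs are identical); fetchDataForPanel:completion: is a placeholder for whatever asynchronous call you are actually making:
[self.fetchDataQueue addOperationWithBlock:^{
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    [self fetchDataForPanel:panel completion:^{
        dispatch_semaphore_signal(done);    // async work finished, let the operation end
    }];
    // The operation stays "executing" (and counts against the queue's
    // maxConcurrentOperationCount) until the completion handler fires.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}];
For anything more elaborate (cancellation, dependencies, progress), the cleaner route is a concurrent NSOperation subclass that manages its own isExecuting / isFinished state.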

Threading NSStream

I have a TCP connection that is open continuously for communication with an external device. There is a lot going on in the communication pipe which causes the UI to become unresponsive at times.
I would like to put the communication on a separate thread. I understand detachNewThread and how it calls a @selector. My issue is that I am not sure how this would be used in conjunction with something like NSStream.
Rather than manually creating a thread and managing thread safety issues, you might prefer to use Grand Central Dispatch ('GCD'). That allows you to post blocks — which are packets of code and some state — off to be executed away from the main thread and wherever the OS thinks is most appropriate. If you create a serial dispatch queue you can even be certain that if you post a new block while an old one has yet to finish, the system will wait until it finishes.
E.g.
// you'd want to make this an instance variable in a real program
dispatch_queue_t serialDispatchQueue =
    dispatch_queue_create("A handy label for debugging", DISPATCH_QUEUE_SERIAL);

...

dispatch_async(serialDispatchQueue, ^{
    NSLog(@"all code in here occurs on the dispatch queue ...");
});

/* lots of other things here */

dispatch_async(serialDispatchQueue, ^{
    NSLog(@"... and this won't happen until after everything already dispatched");
});

...

// cleanup, for when you're done
dispatch_release(serialDispatchQueue);
A very quick introduction to GCD is available here; Apple's more thorough introduction is also worth reading.
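To tie this back to NSStream: below is a minimal sketch of what a write might look like, assuming outputStream is the already-opened NSOutputStream for your TCP connection and sendPacket: is a hypothetical method of your own. Funnelling every read and write through the one serial queue keeps the socket work off the main thread and guarantees that two pieces of code never touch the stream at the same time:
- (void)sendPacket:(NSData *)packet {
    dispatch_async(serialDispatchQueue, ^{
        const uint8_t *bytes = packet.bytes;
        NSUInteger total = packet.length;
        NSUInteger sent = 0;
        while (sent < total) {
            // -write:maxLength: may block until the socket can accept more bytes;
            // that's fine here because we're on the serial queue, not the main thread.
            NSInteger written = [outputStream write:bytes + sent maxLength:total - sent];
            if (written <= 0) {
                break;   // stream error or stream closed; handle/report as appropriate
            }
            sent += (NSUInteger)written;
        }
    });
}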
