I have a scenario where, after I disable a button, I check that the data has persisted in the database. Persisting the data takes some time (roughly 3 minutes). My tests run through Sauce Labs, so after 90 seconds they time out and my session is closed.
I take screenshots of the tests in the tearDown method. When data persistence takes more than 90 seconds, the screenshot method fails. I want to take screenshots only when the driver is still alive; how can I check for that?
public void tearDown() {
    takeAllureScreenShot();
}
You can increase how long Sauce Labs waits before shutting down an idle session by configuring the idleTimeout desired capability (see the Sauce Labs timeouts documentation).
By default, this is set to 90 seconds; it sounds like you should increase it to something like 200 seconds.
Assuming you're using Java and a Selenium 3 session with vendor namespacing, you could do that like so:
// This is a new capabilities object to hold the nested vendor-specific options
MutableCapabilities sauceOptions = new MutableCapabilities();
sauceOptions.setCapability("idleTimeout", 200);
// assuming your desired capabilities are called 'capabilities'
capabilities.setCapability("sauce:options", sauceOptions);
(If you just want to check that the session is still alive, you can do something "trivial" like reading the page title inside a try/catch block. If an exception is thrown, the session is over; if you get a response, it isn't.)
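For example, a minimal sketch of that liveness check (assuming Java, a RemoteWebDriver field called driver, and the takeAllureScreenShot() helper from the question):
private boolean isSessionAlive() {
    try {
        driver.getTitle();   // any trivial round-trip to the browser will do
        return true;
    } catch (WebDriverException e) {
        return false;        // the session has already been closed
    }
}

// In tearDown:
if (isSessionAlive()) {
    takeAllureScreenShot();
}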
I'm new to the Twitter APIs (this is my first experience with them), and I'm playing with them to monitor an account for new tweets, opening a web page when that happens, but I'm having some trouble understanding how the rate limiting works.
Not knowing much, the Twitter API v2 filtered stream seems the one fitting my use case, and in the Twitter-API-v2-sample-code git repository there is also a very clear filtered stream Node.js example. In fairness, I had little hassle implementing everything, and my code is not much different from the filtered_stream.js source code. Given the provided example, implementation is straightforward: I use https://api.twitter.com/2/tweets/search/stream/rules to set up my rules (an array like [ { 'value': 'from:<myAccount>' } ]) and then I start listening at https://api.twitter.com/2/tweets/search/stream, easy peasy.
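For context, the flow is roughly this (a minimal sketch rather than my actual script; it assumes Node 18+ with the global fetch API and the bearer token in a BEARER_TOKEN environment variable):
const BASE = 'https://api.twitter.com/2/tweets/search/stream';
const headers = { Authorization: `Bearer ${process.env.BEARER_TOKEN}` };

// Add a single "from:<account>" rule to the ruleset.
async function setRules(account) {
  const res = await fetch(`${BASE}/rules`, {
    method: 'POST',
    headers: { ...headers, 'Content-Type': 'application/json' },
    body: JSON.stringify({ add: [{ value: `from:${account}` }] }),
  });
  console.log('rules status:', res.status);
}

// Connect to the stream and print tweets as they arrive.
async function listen() {
  const res = await fetch(BASE, { headers });
  console.log('stream status:', res.status); // 429 here is the problem described below
  const decoder = new TextDecoder();
  for await (const chunk of res.body) {
    process.stdout.write(decoder.decode(chunk, { stream: true }));
  }
}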
What I don't understand is how the request allowance is counted, because per the Twitter documentation I should be able to make 50 requests every 15 minutes, but I can barely make a couple, so every time I test my script I have to wait a couple of minutes before restarting.
These are the relevant headers I received after restarting a script that had been running for an hour (the status code at restart was 429):
x-rate-limit-limit: 50
x-rate-limit-remaining: 49
Reset time: +15 minutes from current time
I usually don't have to wait 15 minutes, just a couple is usually fine... My other note is that I managed to get down to 45 x-rate-limit-remaining once or twice, but never lower than that (usually I'm locked out at 47/48).
What I don't understand is: I opened one stream, I closed that one stream, and now I'm locked out for a couple of minutes. Shouldn't I be able to open up to 50 connections in 15 minutes (which is actually plenty if I'm just debugging a portion of code)? Even the headers say that I have 49 attempts remaining out of 50; the status code 429 seems in pure contradiction with the x-rate-limit headers... Sometimes I cannot even reset the rules and start the stream in the same run, because the stream returns a backoff (status 429) when the rules resetting finishes (get -> set -> delete)...
I could add my code, but it's literally the Node.js example I cited above, and my problem is not about querying the APIs, but rather about not being able to connect for no apparent reason (at least to me). The only thing I could think of is the fact that I use the same bearer token for all requests (as per their example), but I don't see it written anywhere that this is a problem (I generated it in the developer dashboard; I'm not sure there is an API for that as well).
Edit - adding details
Just to describe my issue, this is the output I get when I start the script the first time:
Headers for stream received (status 200):
- [x-rate-limit-limit]: 50
- [x-rate-limit-remaining]: 49
- [x-rate-limit-reset]: 20/03/2021, 11:05:35
Which makes sense: I made one request, so the remaining count went down by one.
Now, I stopped it and ran it again immediately (Ctrl + C, run again, let's say a two-second delay), and this is the new output:
Headers for stream received (status 429):
- [x-rate-limit-limit]: 50
- [x-rate-limit-remaining]: 48
- [x-rate-limit-reset]: 20/03/2021, 11:05:35
With the following exception being returned in the body:
{
  title: 'ConnectionException',
  detail: 'This stream is currently at the maximum allowed connection limit.',
  connection_issue: 'TooManyConnections',
  type: 'https://api.twitter.com/2/problems/streaming-connection'
}
I understand the server takes a bit to realise I disconnected, but don't I have 50 connections available in a 15-minute timeframe? I only opened one connection.
Actually, after the time it took to write all of the above (let's say ten minutes), I was able to connect again, receiving this output:
Headers for stream received (status 200):
- [x-rate-limit-limit]: 50
- [x-rate-limit-remaining]: 47
- [x-rate-limit-reset]: 20/03/2021, 11:05:35
Maybe I'm only realising it now and I wrote a useless question, but can I have only one active connection, being able to close it and reopen it 50 times in 15 minutes? I understood I could have 50 active connections, but maybe at this point I'm wrong (and the Twitter server indeed takes a few minutes to realise I disconnected).
We're using the current version of the Firebase iOS framework (5.9.0) and we're seeing a strange problem when trying to run A/B test experiments that have an activation event.
Since we want to run experiments on first launch, we have a custom splash screen on app start that we display while the remote config is being fetched. After the fetch completes, we immediately activate the fetched config and then check whether we received info about experiment participation, to configure the next UI appropriately. Additional checks are done before we determine that the current instance should, in fact, be part of the test; hence the activation event. Basically, the code looks like:
<code that shows splash>
…
[[FIRRemoteConfig remoteConfig] fetchWithExpirationDuration:7 completionHandler:^(FIRRemoteConfigFetchStatus status, NSError * _Nullable error) {
    [[FIRRemoteConfig remoteConfig] activateFetched];
    if (<checks that see if we received info about being selected to participate in the experiment and if local conditions are met for experiment participation>) {
        [FIRAnalytics logEventWithName:@"RegistrationEntryExperimentActivation" parameters:nil];
        <dismiss splash screen and show next UI screen based on experiment variation received in remote config>
    } else {
        <dismiss splash screen and show next UI screen>
    }
}];
The approach above (which is completely straightforward, IMO) does not work correctly. After spending time with the debugger and Firebase logging enabled, I can see in the log that there is a race condition occurring. Basically, the Firebase activateFetched call does not set up a "conditional user property experiment ID" synchronously inside the activateFetched call, but instead sets it up some short time afterward. Because of this, firing the activation event immediately after activateFetched does not trigger this conditional user property, and subsequent experiment funnel/goal events are not properly marked as part of an experiment (the experiment is not even activated in the first place).
If we change the code to delay the sending of the activation event by some arbitrary amount:
<code that shows splash>
…
[[FIRRemoteConfig remoteConfig] fetchWithExpirationDuration:7 completionHandler:^(FIRRemoteConfigFetchStatus status, NSError * _Nullable error) {
    [[FIRRemoteConfig remoteConfig] activateFetched];
    if (<checks that see if we received info about being selected to participate in the experiment and if local conditions are met for experiment participation>) {
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            [FIRAnalytics logEventWithName:@"RegistrationEntryExperimentActivation" parameters:nil];
            <dismiss splash screen and show next UI screen based on experiment variation received in remote config>
        });
    } else {
        <dismiss splash screen and show next UI screen>
    }
}];
the conditional user property for the experiment gets correctly set up beforehand and is triggered by the event (causing experiment activation, with subsequent events correctly marked as part of the experiment).
Now, this code is obviously quite ugly and prone to possible race conditions. The delay of 0.5 seconds is conservatively set to hopefully be enough on all iOS devices, but ¯\_(ツ)_/¯. I've read the available documentation multiple times and tried looking at all available API methods, with no success in figuring out what the correct point to start sending events should be. If the activateFetched method uses an asynchronous process to reconfigure internal objects, one would expect a callback that indicates to the caller when everything is done reconfiguring and ready for further use by the application. It seems the framework engineers didn't anticipate a use case where someone needs to send the activation event immediately after remote config activation…
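For now, the best we can do is hide the delay behind a completion-block helper (a sketch of the same workaround, not a real SDK callback; activateFetchedThenNotify: is our own hypothetical name, and the 0.5-second grace period is still a guess):
// Hypothetical helper: activates the fetched config, then waits an arbitrary
// grace period before calling back, so the conditional user property has
// (hopefully) been set up by then.
- (void)activateFetchedThenNotify:(void (^)(void))completion {
    [[FIRRemoteConfig remoteConfig] activateFetched];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), completion);
}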
Has anyone else experienced this problem? Are we missing something in the API? Is there a smarter way of letting activateFetched finish its thing?
Hope some Firebase engineers can chime in with their wisdom as well :)
Thanks
I'm trying to mirror views to Chromecast with the Remote Display API. On Android it is well documented and easy to implement; the iOS samples/docs are less complete. I understand it only supports 15 fps, but that is fine for my needs.
If anyone has gotten it to work, I'd love to see a small Swift sample that shows how to mirror a simple view. I'm trying to test it with the code below, which shows nothing on the TV and gives a "device has disconnected" error after a few seconds. From reading the docs, that happens when you don't send anything within the first 15 seconds of getting the session.
var testSession: GCKRemoteDisplaySession!
var frameInput: GCKViewVideoFrameInput!

func remoteDisplayChannel(channel: GCKRemoteDisplayChannel,
                          didBeginSession session: GCKRemoteDisplaySession) {
    // Use the session.
    testSession = session
    frameInput = GCKViewVideoFrameInput(session: testSession)
    // any view
    frameInput.view = testView
}
Make sure you are strongly referencing the session as well as the frame input. Inputs hold weak references to sessions (to avoid cycles between sessions and inputs). If the session is not strongly referenced and gets destroyed, you'll see black on the remote screen followed by a timeout disconnect.
Suppose I use QTP's Recovery Scenario Manager to set the playback synchronization timeout to 0. The handler would return with "continue with next statement".
I'd do that to make sure that any following playback statements don't waste their time waiting for the next non-existing/non-matching step before failing:
I have a lot of GUI tests that kind of get stuck: if, say, 10 controls are missing, their (consecutive) playback steps produce 10 timeout waits before failing. If the playback timeout is 30 seconds, I lose 10 x 30 seconds = 5 minutes of execution time, while it would really be sufficient to wait for 30 seconds ONCE (because the app does not change anymore -- we already waited a full timeout period).
Now if I have 100 test cases (= action iterations), this can happen 100 times, wasting 500 minutes of my test execution window.
That's why I came up with the idea of a recovery scenario function that sets the timeout to 0 after/upon the first failed playback step. This would speed up execution while skipping the rightly-FAILED step, yet would not compromise the precision/reliability of identifying the next matching GUI context (which creates a PASSED step).
Then of course upon the next passed playback step, I would want to restore the original timeout value. How could I do that? This is my question.
One cannot define a recovery scenario function that is called for PASSED steps.
I am currently thinking about installing a method function for Reporter.ReportEvent and "sniffing" for PASSED log entries there. I'd install that method function in the recovery scenario function that sets the timeout to 0. Then, when the "sniffer" function senses a ReportEvent call with PASSED status during one of the following playback steps, I'd reset everything (i.e. restore the original timeout and uninstall the method function). (I am 99% sure, however, that .Click and .Set methods do not call ReportEvent to write their result status... so this option would probably not work.)
Better ideas? This really bugs me.
It sounds to me like your tests aren't designed correctly; if you fail to find an object, why do you continue?
One possible (non-recovery-scenario) solution would be to use RegisterUserFunc to override the methods you are using, in order to do a short obj.Exist check before running the required method.
Function MyClick(obj)
    If obj.Exist(1) Then
        obj.Click ' inside a registered function, this calls the original Click
    Else
        Reporter.ReportEvent micFail, "Click failed, no object", "Object does not exist"
    End If
End Function

RegisterUserFunc "Link", "Click", "MyClick"
RegisterUserFunc "WebButton", "Click", "MyClick"
' etc.
If you have many controls, of which some may be missing, and you know that once the first timeout expires nothing more will show up, then you can use the Exist method with a timeout parameter.
Something like this:
timeout = 10
For Each control In controls
    If control.Exist(timeout) Then
        ' do something with the control
    Else
        timeout = 0
    End If
Next
Now at most one wait takes the full 10 seconds: as soon as the first control turns out to be missing, every subsequent Exist check in your collection of controls runs with a timeout of 0, which saves your time.
I'm using Polly to retry web service calls in case the call fails with a WebException, because I want to make sure the method executed correctly before proceeding. However, sometimes web methods still throw an exception even after retrying several times, and I don't want to retry forever. Can I use Polly to show a confirmation dialog, e.g. "Max retry count reached! Make sure the connection is enabled and press retry.", after which the retry counter resets to its initial value and starts again? Can I achieve this using only Polly, or should I write my own logic? Ideas?
Polly has nothing built in to manage dialog boxes, as it is entirely agnostic to the context in which it is used. However, you can customise extra behaviour on retries with an onRetry delegate, so you can hook a dialog box in there. Overall:
Use an outer RetryForever policy, and display the dialog box in the onRetry action configured on that policy.
If you want a way for the user to exit the RetryForever, a cancel action in the dialog could throw some other exception (which you trap with a try-catch around all the policies) to cause an exit.
Within the outer policy, use an inner Retry policy for however many tries you want to make without intervention.
Because this is a different policy instance from the RetryForever and has a fixed retry count, the retry count will automatically start afresh each time it is executed.
Use PolicyWrap to wrap the two retry policies together.
In pseudo-code:
var retryUntilSucceedsOrUserCancels = Policy
    .Handle<WhateverException>()
    .RetryForever(onRetry: exception => { /* show my dialog box */ });

var retryNTimesWithoutUserIntervention = Policy
    .Handle<WhateverException>()
    .Retry(n); // or whatever more sophisticated retry style you want

var combined = retryUntilSucceedsOrUserCancels
    .Wrap(retryNTimesWithoutUserIntervention);

combined.Execute(() => { /* my work */ });
Of course the use of the outer RetryForever() policy is just an option: you could also build the equivalent manually.
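For instance, a manual equivalent of the outer policy might look like this (a sketch; WhateverException and the dialog are the placeholders used above):
// Keep retrying batches of n attempts until the work succeeds or the user cancels.
while (true)
{
    try
    {
        retryNTimesWithoutUserIntervention.Execute(() => { /* my work */ });
        break; // success
    }
    catch (WhateverException)
    {
        // show the dialog; on cancel, rethrow or break out of the loop
    }
}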