I am writing protractor e2e tests, and I find it annoying that I'm constantly using
ptor.sleep(4000)
all the time, and sometimes I even make it wait longer. I know that sometimes
ptor.wait(function(){return true/false; })
is a good solution, but how often does wait check the callback function?
Is there a more elegant approach to this?
ptor.wait polls every 100 ms. (This is the WebDriver default.)
A more elegant approach depends on your application. Protractor tries to wait automatically for the events it knows about ($http, $timeout, Angular digests). What is your application doing that makes this approach invalid? Is there some way your application could notify the tests when it's done? You could write a custom waitForMyApp helper that waits until that condition is true; for example, the way Protractor does this behind the scenes is with an executeAsyncScript call, which blocks until the browser returns.
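For illustration, here's a minimal sketch of such a helper in TypeScript (it uses the modern browser global rather than ptor; the window.myAppIsIdle flag is a made-up example of how your app might signal that it has finished its own async work):

import { browser } from 'protractor';

// Hypothetical helper: poll (via WebDriver's built-in wait) until the app reports it is idle.
// window.myAppIsIdle is an assumption - your app would need to set such a flag itself.
export function waitForMyApp(timeoutMs = 5000) {
  return browser.wait(
    () => browser.executeScript('return window.myAppIsIdle === true;'),
    timeoutMs,
    'App did not become idle in time'
  );
}

// Usage in a spec, instead of ptor.sleep(4000):
// await waitForMyApp();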
How to test a fixed delay implemented on a webpage with Playwright (specifically: a debounce)?
I have a simple scenario: after entering input, the user needs to wait a fixed time for the response, e.g. 1000 ms.
How do I test that exact wait with Playwright?
I looked at https://github.com/microsoft/playwright/issues/4405 but I wonder if there is a more elegant way to do this.
Well, usually there is a better way than hardcoding a waiting time (a code sketch follows this list):
Wait for certain API calls
Wait for network idle (= wait for started calls to finish)
Wait for an event
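As a rough sketch of those options in a Playwright test (the /api/search endpoint, #query field and #results selector are made up for illustration):

import { test, expect } from '@playwright/test';

test('wait on signals instead of a fixed delay', async ({ page }) => {
  await page.goto('https://example.com');

  // 1) Wait for a certain API call: register the wait before triggering the request
  const responsePromise = page.waitForResponse(
    (resp) => resp.url().includes('/api/search') && resp.ok()
  );
  await page.locator('#query').fill('playwright');
  await responsePromise;

  // 2) Wait for network idle (all started calls have finished)
  await page.waitForLoadState('networkidle');

  // 3) Wait for the UI itself - web-first assertions retry until their timeout
  await expect(page.locator('#results')).toBeVisible();
});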
However, there are times when hardcoding is inevitable. What I've used is page.waitForTimeout().
Wait for 2 seconds:
await page.waitForTimeout(2000);
Official docs: https://playwright.dev/docs/api/class-page#page-wait-for-timeout
"Waits for the given timeout in milliseconds.
Note that page.waitForTimeout() should only be used for debugging.
Tests using the timer in production are going to be flaky. Use signals
such as network events, selectors becoming visible and others
instead."
I am writing an app that makes plenty of network requests. As usual they are
async, i.e. the call of the request method returns immediately and the result
is delivered via a delegate method or in a closure after some delay.
Now, on my registration screen, I send a registration request to my backend and
want to verify that the success UI is shown when the request finishes.
Which options are out there to wait for the request to finish, verify the
success UI and only after that leave the test method?
Also are there any more clever options than waiting for the request to finish?
Thanks in advance!
Trivial Approach
Apple implemented major improvements in Xcode 9 / iOS 11 that enable you to wait for the appearance of a UI element. You can use the following one-liner:
XCTAssertTrue(<#yourElement#>.waitForExistence(timeout: 5))
Advanced Approach
In general, UI and unit tests (referred to simply as tests here) must run as fast as possible so the developer can run them often and does not get frustrated by the need to run a slow test suite multiple times a day. In some cases an (internal or security-related) app accesses an API that can only be accessed from certain networks / IP ranges / hosts. Also, most CI services offer pretty bad hardware and limited internet-connection speed.
For all of those reasons, it is recommended to implement tests in a way that they make no real network requests. Instead, they run against fake data, so-called fixtures. A clever developer sets the test suite up so that the source of the data can be switched with something as simple as a boolean property. Additionally, when the switch is set to fetch real backend data, the fixtures can be refreshed/recorded from the backend automatically. This way it is pretty easy to update the fake data and quickly detect changes to the API.
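The switch itself can be tiny. As a language-agnostic sketch (shown here in TypeScript on Node 18+, with a made-up fetchUser call and a local fixtures/ folder, not any specific iOS API):

import { promises as fs } from 'fs';

// Hypothetical names throughout; this only illustrates the record/replay switch.
const USE_REAL_BACKEND = false; // flip to true to refresh the fixtures

async function loadFixture(name: string): Promise<unknown> {
  return JSON.parse(await fs.readFile(`fixtures/${name}`, 'utf8'));
}

async function saveFixture(name: string, data: unknown): Promise<void> {
  await fs.writeFile(`fixtures/${name}`, JSON.stringify(data, null, 2));
}

async function fetchUser(id: string): Promise<unknown> {
  if (USE_REAL_BACKEND) {
    const response = await fetch(`https://api.example.com/users/${id}`);
    const data = await response.json();
    await saveFixture(`user-${id}.json`, data); // record fresh fixtures
    return data;
  }
  return loadFixture(`user-${id}.json`); // fast, offline, deterministic
}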
But the main advantage of this approach is speed. Your tests will not make real network requests but instead run against local data, which makes them independent of:
server issues
connection speed
network restrictions
This way you can run your tests very fast and thus much more often - which is a good way of writing code ("Test Driven Development").
On the other hand, you won't detect server changes immediately anymore, since the fake data won't change when the backend data changes. But this is easily solved by refreshing your fixtures using the switch you have implemented (because you are a smart developer), which turns this issue into a story you can tell your children!
But wait, I forgot something! Why is this a replacement for the trivial approach above, you ask? Simple! Since you use local data that is available immediately, you can also call the completion handler immediately. So there is no delay between making the request and verifying your success UI. This means you don't need to wait at all, which makes your tests even faster!
I hope this helps some of my fellows out there. If you need more guidance on this topic, don't hesitate to reply to this post.
Cya!
I have tried different wait/sleep commands, but they completely stop the code. In my code I have change events, so if something changes there is a wait/sleep command, but it waits for that event to finish completely even if another event is triggered. How can I keep the delay but still let events triggered during the wait period run, instead of waiting for the previous event to finish?
There are two approaches:
1) "Fake" parallelism based on event loop:
Project "luvit" realizes this approach, trying to do for Lua those things that node.js doing for JavaScript. In my humble opinion, for such approach just use node.js, luvit is weird and is not very reliable.
2) Multithreading:
It is better for application performance, but it is the more complex route and will take time to figure out how to work with.
For this approach use Lua Lanes.
Also, if you need it inside OpenResty, it has tools for this.
If you need this in a small script then, though I love Lua with all my heart, you should consider switching to node.js.
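If you do go the node.js route, the non-blocking delay you're after is straightforward. A minimal sketch in TypeScript (the onChange handler and its timings are made up):

// Promise-based delay: it schedules a timer instead of blocking,
// so other events keep being handled while this handler "waits".
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function onChange(name: string): Promise<void> {
  console.log(`${name}: change detected, reacting in 2 seconds`);
  await delay(2000); // does not stop the event loop
  console.log(`${name}: done`);
}

// Two overlapping events: the second one is handled during the first one's delay.
onChange('first');
setTimeout(() => onChange('second'), 500);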
What's Happening
In our RSpec + Capybara + Selenium (FF) test suite we're getting A LOT of inconsistent "Capybara::ElementNotFound" errors.
The problem is they only happen sometimes. Usually they won't happen locally; they'll happen on CircleCI, where I expect the machines are much beefier (and so faster)?
Also, the same errors usually won't happen when the spec is run in isolation, for example by running rspec with a particular line number (:42).
Bear in mind, however, that there is no consistency. The spec won't consistently fail.
Our current workaround - sleep
Currently the only thing we can do is litter the specs with sleeps. We add them whenever we get an error like this and it fixes it. Sometimes we have to increase the sleep times, which is making our tests very slow, as you can imagine.
What about capybara's default wait time?
It doesn't seem to be kicking in, I imagine, as the test usually fails in less than the allocated wait time (currently 5 seconds).
Some examples of failure.
Here's a common failure:
visit "/#/things/#{#thing.id}"
find(".expand-thing").click
This will frequently result in:
Unable to find css ".expand-thing"
Now, putting a sleep in between those two lines fixes it. But a sleep is too brute force. I might put a second, but the code might only need half a second.
Ideally I'd like Capybara's wait time to kick in because then it only waits as long as it needs to, and no longer.
Final Note
I know that Capybara can only do the wait thing if the selector doesn't exist on the page yet. But in the example above you'll notice I'm visiting the page and then selecting, so the element is not on the page yet, and Capybara should wait.
What's going on?
Figured this out. So, when looking for elements on a page you have a few methods available to you:
first('.some-selector')
all('.some-selector') #returns an array of course
find('.some-selector')
.first and .all are super useful as they let you pick from non-unique elements.
HOWEVER, .first and .all don't seem to auto-wait for the element to be on the page.
The Fix
The fix, then, is to always use .find(). .find WILL honour the Capybara wait time. Using .find has almost completely fixed my tests (with a few unrelated exceptions).
The gotcha of course is that you have to use more unique selectors as .find MUST only return a single element, otherwise you'll get the infamous Capybara::Ambiguous exception.
Ember works asynchronously, which is why Ember generally recommends using QUnit: they've tied in code that allows the tests to pause/resume while waiting for asynchronous functions to return. Your best bet would be either to attempt to duplicate the pause/resume logic that's been built up for QUnit, or to switch to QUnit.
There is a global promise used during testing you could hook up to: Ember.Test.lastPromise
Ember.Test.lastPromise.then(function(){
  // continue
});
Additionally, visit/click return promises; you'll need some way of telling Capybara to pause testing before the call, then resume once the promise resolves.
visit('foo').then(function(){
  click('.expand-thing').then(function(){
    assert('foobar');
  });
});
Now that I've finished ranting, I realize you're not technically running these tests from inside the browser; you're running them through Selenium, which means it's not technically in the browser (unless Selenium has made some change since I last used it, which is possible). Either way, you'll need to watch that last promise and wait on it before you can continue testing after an asynchronous action.
I use ASP.NET MVC 5 and I have a long-running action which has to poll web services, process data and store it in a database.
For that I want to use the TPL library to start the task asynchronously.
But I wonder how to do three things:
I want to report the progress of this task. For this I'm thinking of SignalR.
I want to be able to leave the page where I started this task and still see its progress reported across the website (from a panel on the left, but that part is fine).
And I want to be able to cancel this task globally (from my panel on the left).
I know quite a bit about all of the technologies involved, but I'm not sure about the best way to achieve this.
Can someone help me with the best solution?
The fact that you want to run long running work while the user can navigate away from the page that initiates the work means that you need to run this work "in the background". It cannot be performed as part of a regular HTTP request because the user might cancel his request at any time by navigating away or closing the browser. In fact this seems to be a key scenario for you.
Background work in ASP.NET is dangerous. You can certainly pull it off, but it is not easy to get right. Also, worker processes can exit for many reasons (app pool recycle, deployment, machine reboot, machine failure, stack overflow or OOM exception on an unrelated thread). So make sure your long-running work tolerates being aborted mid-way. You can reduce the likelihood that this happens but never exclude the possibility.
You can make your code safe in the face of arbitrary termination by wrapping all work in a transaction. This of course only works if you don't cause non-transacted side-effects like web-service calls that change state. It is not possible to give a general answer here because achieving safety in the presence of arbitrary termination depends highly on the concrete work to be done.
Here's a possible architecture that I have used in the past:
When a job comes in you write all necessary input data to a database table and report success to the client.
You need a way to start a worker to work on that job. You could start a task immediately for that. You also need a periodic check that looks for unstarted work in case the app exits after having added the work item but before starting a task for it. Have the Windows task scheduler call a secret URL in your app once per minute that does this.
When you start working on a job you mark that job as running so that it is not accidentally picked up a second time. Work on that job, write the results and mark it as done. All in a single transaction. When your process happens to exit mid-way the database will reset all data involved.
Write job progress to a separate table row on a separate connection and separate transaction. The browser can poll the server for progress information. You could also use SignalR but I don't have experience with that and I expect it would be hard to get it to resume progress reporting in the presence of arbitrary termination.
Cancellation would be done by setting a cancel flag in the progress information row. The app needs to poll that flag (a browser-side sketch follows below).
Maybe you can make use of message queueing for job processing, but I'm always wary of using it. To process a message in a transacted way you need MSDTC, which is unsupported by many high-availability solutions for SQL Server.
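For the browser side of the progress and cancellation pieces, here is a minimal polling sketch in TypeScript (the /jobs/{id}/progress and /jobs/{id}/cancel endpoints are assumptions; your controllers would expose something equivalent):

// Poll the server's progress row every 2 seconds until the job reports it is done.
async function pollProgress(jobId: string, onUpdate: (percent: number) => void): Promise<void> {
  for (;;) {
    const res = await fetch(`/jobs/${jobId}/progress`);
    const { percent, done } = (await res.json()) as { percent: number; done: boolean };
    onUpdate(percent);
    if (done) return;
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}

// Cancellation just flips the cancel flag stored with the progress row;
// the server-side worker polls that flag between steps of the job.
function cancelJob(jobId: string): Promise<Response> {
  return fetch(`/jobs/${jobId}/cancel`, { method: 'POST' });
}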
You might think that this architecture is not very sophisticated. It makes use of polling for lots of things. Polling is a primitive technique but it works quite well. It is reliable and well-understood. It has a simple concurrency model.
If you can assume that your application never exits at inopportune times the architecture would be much simpler. But this cannot be assumed. You cannot assume that there will be no deployments during work hours and that there will be no bugs leading to crashes.
Even though using an HTTP worker to run a long task is a bad thing, I have made a small example of how to manage it with SignalR.
Inside this example you can:
Start a task
See task progression
Cancel task
It's based on:
Twitter Bootstrap
Knockout.js
SignalR
C# 5.0 async/await with CancellationToken and IProgress
You can find the source of this example here :
https://github.com/dragouf/SignalR.Progress