Ember/Rails end-to-end testing error

I have an Ember CLI app with a Rails back-end API. I am trying to set up end-to-end testing by configuring the Ember app test suite to send requests to a copy of the Rails API. My tests are working, but I am getting the following strange error frequently:
{}
Expected: true
Result: false
at http://localhost:7357/assets/test-support.js:4519:13
at exports.default._emberTestingAdaptersAdapter.default.extend.exception (http://localhost:7357/assets/vendor.js:52144:7)
at onerrorDefault (http://localhost:7357/assets/vendor.js:42846:24)
at Object.exports.default.trigger (http://localhost:7357/assets/vendor.js:67064:11)
at Promise._onerror (http://localhost:7357/assets/vendor.js:68030:22)
at publishRejection (http://localhost:7357/assets/vendor.js:66337:15)
This seems to occur whenever a request is made to the server. An example test script which would recreate this is below. This is a simple test which checks that if a user clicks a 'login' button without entering any email/password information they are not logged in. The test passes, but additionally I get the above error before the test passes. I think this is something to do with connecting to the Rails server, but have no idea how to investigate or fix it - I'd be very grateful for any help.
Many thanks.
import Ember from 'ember';
import { module, test } from 'qunit';
import startApp from 'mercury-ember/tests/helpers/start-app';

module('Acceptance | login test', {
  beforeEach: function() {
    this.application = startApp();
  },

  afterEach: function() {
    Ember.run(this.application, 'destroy');
  }
});

test('Initial Login Test', function(assert) {
  visit('/');

  andThen(function() {
    // Leaving identification and password fields blank
    click(".btn.login-submit");

    andThen(function() {
      assert.equal(currentSession().get('user_email'), null,
        "User fails to login when identification and password fields left blank");
    });
  });
});

You can check in the Network panel of Chrome or Firefox developer tools that the request is being made. At least with ember-qunit you can do this by getting ember-cli to run the tests within the browser rather than with Phantom.js/command-line.
That would help you figure out if it's hitting the Rails server at all (the URL could be incorrect or using the wrong port number?)
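If it does turn out to be the wrong host or port, that is usually configured on the application adapter; a minimal sketch, assuming an Ember Data REST adapter (the host and namespace values are placeholders for your Rails setup):

// app/adapters/application.js (sketch; adjust to your setup)
import DS from 'ember-data';

export default DS.RESTAdapter.extend({
  host: 'http://localhost:3000', // the Rails API the test suite should hit
  namespace: 'api/v1'            // hypothetical namespace; match your Rails routes
});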
You may also want to see if there is code that needs to be torn down. Remember that in a test environment the same browser instance is used so all objects need to be torn down; timeouts/intervals need to be stopped; events need to be unbound, etc.
We hit that issue a few times with a utility that sent AJAX requests every 30 seconds: in production it caused no errors, but in testing it was a problem because it bound itself to the window (outside of the test iframe) and kept making requests even after the tests were torn down.
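As a sketch of what that teardown can look like, here is a hypothetical polling utility (none of these names come from the question) written so the timer is cleared when the application is destroyed between tests:

// Hypothetical polling service; the point is the cleanup in willDestroy.
import Ember from 'ember';

export default Ember.Service.extend({
  startPolling() {
    this._timer = setInterval(() => this.ping(), 30000); // e.g. an AJAX ping every 30s
  },

  ping() {
    // ...send the periodic request...
  },

  willDestroy() {
    this._super(...arguments);
    clearInterval(this._timer); // stop the timer so tests don't leak requests after teardown
  }
});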

Related

Page Object Model Instance in Playwright (TS) Doesn't Use the Global Config Session Storage

I am developing tests for a website that requires login for the whole page. This website uses Google Sign-In for authentication. All of the Google accounts used for authentication require two-factor authentication.
This means that to test one part of the website, the test suite needs to sign in three times for each test, which eventually leads to problems with authentication requiring human interaction, or even requiring extra verification steps.
To resolve this problem, I followed the Reuse Signed-In State part of the Playwright documentation so that sign in would happen once and then saved and shared among the tests.
This works fine for just normal tests. However, I also started using Page Object Models to describe my pages and allow for easier maintenance of the test suite. For some reason, in tests that create new instances of Page Object Model classes, the tests do not make use of the saved state and thus are not logged in.
How can this saved state be passed onto a POM instance in Playwright so that signed-in state can be reused? Perhaps I've missed something simple?
I used a global-setup.ts file to perform the login:
import { chromium, FullConfig } from "@playwright/test";

async function globalSetup(config: FullConfig) {
  const browser = await chromium.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto("http://localhost:8000/login");
  await page.getByRole("button", { name: "Connexion" }).click();
  await page.getByRole("button", { name: "Continue with Google" }).click();
  await page.getByRole("textbox", { name: "Adresse e-mail ou numéro de téléphone" }).fill(process.env.USERNAME);
  await page.getByRole("textbox", { name: "Adresse e-mail ou numéro de téléphone" }).press("Enter");
  await page.getByRole("textbox", { name: "Saisissez votre mot de passe" }).fill(process.env.PASSWORD);
  await page.locator("#passwordNext").click();
  await page.goto("http://localhost:8000");
  // Save signed-in state to 'storageState.json'.
  await page.context().storageState({ path: "storageState.json" });
  await browser.close();
}

export default globalSetup;
I also added this to the playwright.config.ts file:
import { PlaywrightTestConfig } from "@playwright/test";

const config: PlaywrightTestConfig = {
  globalSetup: require.resolve("./global-setup"),
  use: {
    storageState: "storageState.json"
  }
};

export default config;
Here is an example of a test making use of the POM, which then doesn't use the state created in global setup. The login still happens, but then the new browser opened doesn't have the state and thus gets stuck on the login page:
test("apply a filter", async ({ page }) => {
const dashboardHome = new DashboardHome(page);
await new LoginPage(page).login();
await dashboardHome.bookingLink.waitFor();
await dashboardHome.bookingLink.click();
const reservationsPage = new ReservationsPage(page);
await reservationsPage.orderNumberFilter.waitFor();
reservationsPage.filter({ orderID: "22222" });
await expect(page).toHaveURL("http://localhost:8000/booking/22222");
});
I had similar issues with global setup and rewrote my global setup as a (more or less global) fixture that I use everywhere. I'm not very experienced in test automation and can't explain why globalSetup didn't work; I also believe it's something simple that I can't catch. Hope it helps you.
That definitely seems confusing, especially since switching to a POM shouldn’t affect it (I have successfully used the POM and this one-time saved and reused auth).
I can't think how switching to page objects would affect it, and I'd be curious to see your setup. One potential reason for your problem: I've worked with sites where visiting the login page automatically "logs you out" in some way, which would negate the login you did during setup. Since you already logged in during setup and saved that auth state, you should be able to skip the login page altogether in the test and go directly to the page you need to work with. I'd actually be curious whether you run into the same problem when not using the POM.
Let me know if that solution works for you, or if you even still have the issue at this point, but if that’s not the issue I may need more clarification or info to help further.
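To make that concrete, here is a minimal sketch of the test above with the explicit login step removed; it assumes the storageState saved in global setup is what authenticates the browser context, and it reuses the page objects from the question:

test("apply a filter (reusing saved auth state)", async ({ page }) => {
  // No LoginPage.login() here: the context already carries storageState.json,
  // so go straight to the app instead of touching the login page.
  await page.goto("http://localhost:8000");

  const dashboardHome = new DashboardHome(page);
  await dashboardHome.bookingLink.waitFor();
  await dashboardHome.bookingLink.click();

  const reservationsPage = new ReservationsPage(page);
  await reservationsPage.orderNumberFilter.waitFor();
  await reservationsPage.filter({ orderID: "22222" });

  await expect(page).toHaveURL("http://localhost:8000/booking/22222");
});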

Getting sometimes double POST request to IIS 10 webserver

Since we migrated to an Amazon EC2 instance, we're sometimes getting double POST requests to our SaaS application. I have no idea where this is coming from or why it is happening. I've been searching and looking at different options, but can't find the root cause. We migrated from IIS 7 to IIS 10 on Windows Server 2022 Datacenter.
Here is an example of an SEQ logging session at 1 client:
SEQ Log
You can see the endpoint was requested multiple times (= OK), but there is also a double POST request at 18:19:41 and 18:18:27. This is logged from within the ASP.NET MVC controller. If I look at the IIS 10 logs, I see the same thing. So the request seems to be initiated from the browser and not doubled in the pipeline.
The MVC controller looks something like this (simplified):
if (ViewData.ModelState.IsValid)
{
    try
    {
        NHibernateSession.Current.Transaction.Begin();

        foreach (var item in deliveryDTosToSave)
        {
            var delivery = new Delivery
            {
                // copying over the DTO to the delivery
            };
            _deliveryRepository.SaveNew(delivery, userCode, item.Index);
        }

        NHibernateSession.Current.Transaction.Commit();
        return RedirectToAction("Add", "Delivery", new { tab, pharmacyId });
    }
    catch (RuleException ex)
    {
        NHibernateSession.Current.Transaction.Rollback();
    }
}
On the client-side, the javascript to submit is something like this:
$('#formDeliveries').submit(function () {
  if (!canSubmit) { return false; }
  $("#loading").show();
  canSubmit = false;
});
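A common hardening of this kind of guard also disables the submit button itself, so a second click cannot queue another POST; a sketch (the button id here is an assumption, not taken from the snippet above):

$('#formDeliveries').submit(function () {
  if (!canSubmit) { return false; }
  canSubmit = false;
  $("#loading").show();
  // Hypothetical button id: disabling it means further clicks are ignored by the browser too.
  $("#btnSubmitDeliveries").prop("disabled", true);
});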
Things I looked for:
user double-clicking on the submit button: we've blocked this through JS after the initial submit. I interviewed several users, and they are not double-clicking.
a browser bug: the last user I checked was using Edge 107, which is recent. I can't find anything on the matter.
HTTPS redirects: the website is HTTPS-only and users have to be authenticated to use it.
HTTP/3 fall-back scenarios: could be possible, as we enabled HTTP/3 when migrating to Amazon. However, users see this behaviour only sometimes and not always from the same browser.
We've tried to reproduce the behaviour, but cannot. We've logged into a user's computer and tried to reproduce it while watching the network requests in the developer tools, but couldn't. I was hoping to see it happen live so we could rule out any JavaScript issues. If the network log in developer tools shows only one POST, it probably has something to do with the IIS pipeline.
Any advice would be greatly appreciated.

Retrying a failed dynamic import? (SPA/PWA)

Something I'm running into with a project I'm working on: I have a single-page application. I'm handling browser navigation routing on the client side, which lets me dynamically import some modules whenever a route is matched. My routing setup looks a bit like this:
router.setRoutes([
  {
    path: '/',
    component: 'app-main', // statically loaded
  },
  {
    path: '/posts',
    component: 'app-posts',
    action: () => { import('./app-posts.js').catch(() => Router.go('/offline')); }, // dynamically loaded
  },
  {
    path: '/offline',
    component: 'app-offline', // also statically loaded
  }
]);
And here's a simple image of the 'app' for clarity:
I'm caching the app shell in my service worker, which means that the main page and the offline page get precached, and the posts page should get cached at runtime (once requested, i.e. if the user clicks on the posts link).
So my precache manifest caches: main.js, offline.js, and my index.html.
Where I'm hitting a bump is:
The user loses network connection
The user tries to go to the posts page
The dynamic import for this may fail if it hasn't been requested and cached before (and the user will be redirected to the offline page)
But when my user regains network connectivity and clicks the posts link, the dynamic import will still fail; I'm guessing because the browser dedupes dynamic imports.
Which is a huge shame, because my user has a network connection and this request should succeed! The only way I can figure out to deal with this is to have the user reload the page and request the posts page again.
So my question is, how should I go about this?
I solved it by checking whether the user has a network connection before trying to do the import, like this:
function handleRoute(url) {
  if ('onLine' in navigator) {
    if (navigator.onLine) {
      // user is online, safe to import
      import(url);
    } else {
      // user is offline, don't even try to import -> straight to `/offline`
      Router.go('/offline');
    }
  } else {
    // in case the browser doesn't support `navigator.onLine`, try to import anyway
    import(url);
  }
}
It feels like a slightly hacky way to do it, but then again browser support for navigator.onLine is pretty good.
For more information on retrying failed dynamic imports, I found this issue in the tc39/proposal-dynamic-import github repo.
The browser is not doing that sort of caching/deduping/whatever. You can verify this easily by calling import from the console and toggling your network online/offline.
So the problem is most likely with the framework you're using. The caching of the first call to the route action happens somewhere in the framework. Maybe consult the documentation?
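If the framework really is holding on to the first (rejected) import, one workaround is to do the caching yourself in the route action and only remember a successful import, so a later navigation retries it; a sketch under that assumption, reusing the router API from the question:

// Sketch: cache only a successful import so a failed (offline) attempt is retried next time.
let postsModule;

router.setRoutes([
  {
    path: '/posts',
    component: 'app-posts',
    action: async () => {
      try {
        postsModule = postsModule || await import('./app-posts.js');
      } catch (e) {
        // Import failed (e.g. offline): fall back now, but leave postsModule unset
        // so the next visit to /posts tries the import again.
        Router.go('/offline');
      }
    },
  },
  // ...other routes as in the question
]);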

Front-end testing using React and Selenium-Webdriver with Rails as Backend

I just want to test the Front-End part. So, here is my problem:
Background
I have a robust Ruby on Rails (v3.2) backend app and an entirely new, separate front-end app in ReactJS (v16.4).
Problem
We began testing the React app with the help of Selenium-Webdriver and JestJS. We managed to test several views, but the problem arose when we made POST requests to the Rails API.
I don't want to fill my database (development) with garbage because of the tests.
Ex: what happens when I want to test the creation of a new user?
Possible solutions considered
I was thinking of 3 solutions:
Intercept the API calls and mock them by imitating their responses (ex: on the submit click, using selenium-webdriver).
Make use of the Rails test environment through React.
Just revert each API call by doing the opposite afterwards; this would mean adding often-undesirable actions to the controller (ex: doing a delete for each post).
It depends if you want to test the whole stack (frontend/backend) or only the frontend part.
Frontend tests
If you only want to test the frontend part go with your first solution : mock API calls.
You will be limited if you just use selenium-webdriver directly. I would recommend using Nightwatch or TestCafe. TestCafe does not depend on Selenium, and Selenium is also optional in the latest versions of Nightwatch.
Testcafe includes a Request mocking API : http://devexpress.github.io/testcafe/documentation/test-api/intercepting-http-requests/mocking-http-responses.html
With Nightwatch you could use nock. See Nightwatch Mock HTTP Requests
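As a rough illustration of the nock approach (the host, endpoint, and payload below are placeholders, not taken from the question):

// Sketch: stub a Rails endpoint from Node-side test code with nock,
// so the POST never reaches the real development database.
const nock = require('nock');

nock('http://localhost:3000')                 // assumed Rails API host
  .post('/api/v1/users')                      // hypothetical create-user endpoint
  .reply(201, { id: 1, email: 'test@example.com' }); // canned response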
Full stack tests
If you want to test the whole stack, you may use this approach: implement a custom API endpoint that resets your database to a clean state before or after test execution (like "/myapi/clean").
You should disable access to this endpoint in production environments.
You can then implement test hooks (before/after) to call your custom API endpoint:
http://nightwatchjs.org/guide#using-before-each-and-after-each-hooks
http://devexpress.github.io/testcafe/documentation/test-api/test-code-structure.html#test-hooks
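For example, a minimal Jest-style hook that calls such a reset endpoint might look like this (the host and the "/myapi/clean" path follow the example above and are assumptions):

// Sketch: put the backend into a clean state before each full-stack test.
const fetch = require('node-fetch');

beforeEach(async () => {
  // Remember to disable or protect this endpoint outside of test environments.
  await fetch('http://localhost:3000/myapi/clean', { method: 'POST' });
});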
You could have a test environment. From my experience, garbage data generated by tests is not such a big deal. You can periodically clean it up. Or you can spin up a new environment for every test run.
Finally I decided to use enzyme with jest and sinon.
example code:
import React from "react";
import { mount } from "enzyme";
import sinon from "sinon";

let server;
let wrapper;

beforeAll(() => {
  server = sinon.fakeServer.create();
  const initialState = {
    example: ExampleData,
    auth: AuthData
  };
  wrapper = mount(
    <Root initialState={initialState}>
      <ExampleContainer />
    </Root>
  );
});

it("example description", () => {
  server.respondWith("POST", "/api/v1/example", [
    200,
    { "Content-Type": "application/json" },
    '{ "message": "Example message OK" }'
  ]);
  server.respond();

  expect(wrapper.find(".response").text()).toEqual("Example message OK");
});
In the code above you can see how to intercept API calls using the test DOM created by enzyme, and then mock the API responses using sinon.

How to purposely delay an AJAX response while testing with Capybara?

I have a React component that mimics the "link preview" feature that most modern social media sites have. You type in a link and it fetches the image, title, etc...
I do this by having the React component make an AJAX call back to my server to fetch the URL preview data.
While it's fetching I show an intermediate "loading" state (i.e. some loading icon or spinning wheel)
The relevant React snippet looks like
this.setState({ isLoadingAttachment: true })
return $.ajax({
type: "GET",
url: some_url,
dataType: "json",
contentType: "application/json",
}).success(function(response){
// Succesful! Do Success stuff
component.setState({ isLoadingAttachment: false })
}).error(function(response) {
// Uh oh! Handle failure stuff
component.setState({ isLoadingAttachment: false })
});
Note how the isLoadingAttachment state variable is only valid for a brief second while the server is doing the fetching. Both the success and error scenarios immediately disable it.
I'd like to test some functionality during my "loading" state with my Capybara feature specs. I've mocked all the web calls and the data to be returned by the server, but it all happens so quickly that it passes through the "loading" state before I can even run any expect().. statement on it. I also purposely don't call wait_for_ajax so the page will go ahead without waiting for the ajax, but it's still too fast.
Lastly I also tried purposefully delaying the server call by 1.0 second, but that didn't work either. I assume because the whole thing is single threaded somehow?
# `foo` is an arbitrary method called during the server-side execution
allow_any_instance_of(MyController).
to receive(:foo) { sleep(1.0) }.and_call_original
Any thoughts on how I could do this?
Thanks!
Capybara starts up the app server in a different thread than the tests. However, if you're using the default Capybara.server setting, you may have issues with your app calling back to itself, since it uses WEBrick by default; instead you should specify Capybara.server = :puma.
Beyond that, mocking responses is generally a bad idea in feature specs (which are meant to be end-to-end tests), since it means you're no longer testing your app's code the way it would run in production. A better solution is to use something like puffing-billy - https://github.com/oesmith/puffing-billy - to mock web responses outside of your app's code, which would allow you to do something like
proxy.stub('https://example.com/proc/').and_return(Proc.new { |params, headers, body|
  sleep 2
  { :text => "Your results" }
})
