I have a REST backend for executing Selenium UI tests. Sending a GET request with test names executes them and returns the results. The problem is that running some of these tests takes a long time, often 5 minutes or more.
When I try to run a lengthy set of tests, after about 2 minutes I get the error Failed to load resource: net::ERR_EMPTY_RESPONSE.
The server itself seems to be working correctly: when I send the same request from Insomnia, I get a correct response even after more than 5 minutes of running.
My code for sending the GET request:
handleRunTests = (tests) => {
  const httpClient = axios.create();
  httpClient.defaults.timeout = 60 * 60 * 1000; // one hour timeout for running tests
  this.setState({
    testsResults: {},
    testsRunning: true
  });
  let url = `/api/runTests?tests=${tests}`;
  httpClient.get(url)
    .then(result => { /* ... handle test results ... */ })
    .catch(error => {
      this.setState({
        testsRunning: false,
        error
      });
    });
};
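As far as I can tell, the axios timeout is purely a client-side timer; for reference, the same value can also be passed per request instead of on defaults, which behaves the same for me:

// equivalent per-request form of the one-hour timeout above
httpClient.get(url, { timeout: 60 * 60 * 1000 })
  .then(result => { /* ... handle test results ... */ })
  .catch(error => this.setState({ testsRunning: false, error }));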
I'm using Formik to create a form for a React web app. The submission handler is the following code.
const submitForm = (values) => {
  console.log(JSON.stringify(values, null, 2));
  setFormStatus(status.loading);
  // handle request.
  axios
    .put("#", values)
    .then(() => {
      console.log("Submission Success");
      setFormStatus(status.success);
    })
    .catch(() => {
      console.log(`Submission Failure`);
      setFormStatus(status.failure);
    })
    .then(() => {
      console.log("Submission CleanUp");
      setTimeout(() => {
        console.log("Neutralizing Form");
        setFormStatus(status.neutral);
        // tell to animate.
        setSwitchSubmitBtn(!switchSubmitBtn);
      }, 2000);
    });
};
After the axios request, there is a 2 s delay before I set the status state back to neutral.
However, when testing with Jest, waitFor doesn't work as expected. The setTimeout in the submission seems blocked: no matter how long I wait for the submission, it just never fires. I found a workaround using jest.advanceTimersByTime, but adding the same amount of delay to waitFor doesn't work.
it("INPUT_FORM_TC_008", async () => {
jest.useFakeTimers();
render(<InputForm />);
axios.put.mockImplementation(async () => {
console.log("MOCKED PUT");
return Promise.reject();
});
// click submit Button
user.click(screen.getByTestId("submitBtn"));
await waitFor(() => {
expect(screen.queryByTestId("crossIcon")).not.toBeNull();
});
act(() => {
jest.advanceTimersByTime(2000);
});
await waitFor(() => {
// crossIcon will be unmounted once `status` changes to neutral.
expect(screen.queryByTestId("crossIcon")).toBeNull();
});
});
The following code won't work if I don't use jest.advanceTimersByTime, even if I set the waitFor timeout much longer than the timeout in the submission.
await waitFor(() => {
  expect(screen.queryByTestId("crossIcon")).toBeNull();
}, { timeout: 5000 });
I suspect this is related to the event loop, but when I set the timeout in the submission to 20 ms it works, so I'm looking for the reason why this happens.
In short: I'm trying to test a state change that happens inside a setTimeout using Jest. I expect the state to change after the amount of time I specified in setTimeout, but waitFor doesn't work even with a longer timeout.
The issue is that you are setting a setTimeout in a scope that is immediately exited. This code should still work when it's actually run, thanks to the closure over your function, but in Jest the test will say, "welp, looks like there's no more synchronous code" and finish before waiting for the execution.
If you want more information on why setTimeout behaves this way, you can check the MDN article here: https://developer.mozilla.org/en-US/docs/Web/API/setTimeout
But putting it very simply, it uses something called the "event loop", which is how JS handles asynchronous code while still being single-threaded. When you use setTimeout, you're simply adding the function execution to the event loop and saying, "Don't execute this code now; run other code until the 2000 ms have gone by, then run this code." That works fine in your implementation because you're allowing 2000 ms to go by. But Jest doesn't wait; it just says "There's no more code to execute and nothing is ready in the event loop, the test must be complete" and cleans everything up immediately.
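A tiny illustration of that ordering:

console.log("first");
setTimeout(() => console.log("third"), 0);
console.log("second");
// logs: first, second, third – the callback only runs once the
// current synchronous code has finished, even with a 0 ms delay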
You could resolve this by wrapping your setTimeout in a promise, like:
//........
.then(() => {
  console.log("Submission CleanUp");
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log("Neutralizing Form");
      resolve(status.neutral);
    }, 2000);
  });
});
Then have your actual implementation code wait for that promise to resolve, and set your formStatus and submit-button states after it resolves. This gives you a simpler API, allows for greater abstraction, and makes testing easier, since you'd now be able to wait for this step to finish with a .then() block (or await) in your tests as well.
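If it helps to see the idea factored out, here's a minimal sketch of the same technique as a reusable helper (the delay name and the exact handler wiring are illustrative, not taken from your code):

// a promise-wrapped setTimeout you can await or .then() anywhere
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// the cleanup step of the submit handler could then become:
// .then(async () => {
//   console.log("Submission CleanUp");
//   await delay(2000);
//   setFormStatus(status.neutral);
//   setSwitchSubmitBtn(!switchSubmitBtn);
// });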
I have an instance that, for some requests, needs to make multiple calls to an external API, sometimes 2,000+ calls.
When running my application locally, each call to the external API returns in under 200 ms every time without fail, and the entire process of all 2,000 calls takes approx. 15 seconds.
I noticed that running on Cloud Run, my API calls are split into three categories:
- approx. 1/4 take ~200 ms, same as local
- about 1/2 take exactly 12007 ms
- some take exactly 63000 ms
The whole process takes 20+ minutes for these same API calls.
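(For reference, a rough sketch of how those buckets could be counted from the per-call durations rather than eyeballing the console.time output; the thresholds are just picked from the numbers above, not anything exact:)

// durations: one elapsed-ms number per call, collected around productClient.execute
const buckets = durations.reduce((acc, ms) => {
  const key = ms < 1000 ? '~200ms' : ms < 30000 ? '~12s' : '~63s';
  acc[key] = (acc[key] || 0) + 1;
  return acc;
}, {});
console.log(buckets); // e.g. { '~200ms': ..., '~12s': ..., '~63s': ... }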
I have tried batching, using async's eachLimit set to 10, 20, 50... but the same thing occurs.
The endpoint, data and calls are identical locally and on Cloud Run. Cloud Run is also routed through a NAT with a static IP (which may be intercepting/throttling?).
Adding my Docker image to a VM in the same VPC/NAT gateway behaves the same as my local machine (all calls < 200 ms and everything completes in < 15 seconds). Has anyone encountered this on Cloud Run, and how did you get around it?
The snippet that runs the calls (note I have played around with the limit; it's 50 in the snippet below):
const productsWithAvailabilities = await mapLimit(products, 50, async product => {
  console.time(product.ProductID)
  const availabilityOther = await productClient.execute(
    'GetAvailabilityA',
    {
      productid: product.ProductID,
      viewid: 'WEB',
      connectid: this.pricingToken,
    },
    false,
  )
  let AvailabilityOther
  try {
    AvailabilityOther = availabilityOther?.GetAvailabilityAResult?.diffgram?.Warehouses?.ProductAvailability?.map(
      a => ({
        ...a,
        QtyAvail: parseFloat(a.QtyAvail),
        QtyOnHand: parseFloat(a.QtyOnHand),
        QtyOnOrder: parseFloat(a.QtyOnOrder),
        QtyInTransit: parseFloat(a.QtyInTransit),
        Available: parseFloat(a.QtyAvail),
      }),
    )
  } catch (e) {
    console.log({ product, e })
  }
  console.timeEnd(product.ProductID)
  return {
    ...product,
    AvailabilityOther,
    Availability: AvailabilityOther?.find(a => a.LocationID === LOCATIONS.MEL),
    QtyAvailableOther: AvailabilityOther?.filter(a => a.LocationID !== LOCATIONS.MEL)
      .map(a => a.QtyAvail)
      .reduce((result, current) => result + current, 0),
  }
})
console.timeEnd('availabilities')
return productsWithAvailabilities as MoProProduct[]
}
ProductClient.execute is making a SOAP POST request using the node 'soap' library.
The VPC connector throughput is 200-1000 (the default setting), and I have a single external IP / router connected to the NAT.
I have a PWA that runs offline with background sync, and it works on all the browsers I've tested (Brave/Safari/Chrome/Firefox). I am able to add articles, and upon adding an article it is stored in IndexedDB. If I'm offline and the app can't reach the server to post the data, the request goes into the workbox-background-sync queue as expected, and the article makes its way to my MySQL database once the network becomes active again.
However, on iOS Safari the PWA works online, but when I go offline and try to post an article, the data makes its way into IndexedDB successfully but the background sync isn't added to the queue, and I'm presented with the error:
FetchEvent.respondWith received an error: UnknownError: Error preparing Blob/File data to be stored in object store
I'm assuming this is because the body of the request is a Blob. How would I go about storing the request and having iOS do the sync the next time the network is online?
Many thanks for any help provided.
Here are snippets of my add-article code (main.js) and my service worker code (sw.js).
function addAndPostArticle(e)
{
  e.preventDefault();
  const data = {
    id: Date.now(),
    title: document.getElementById('article-title').value,
    content: document.getElementById('article-content').value
  };
  updateUI([data]);
  saveArticleDataLocally([data]);
  const headers = new Headers({'Content-Type': 'application/json'});
  const body = JSON.stringify(data);
  return fetch('/pwa/api/add.php', {
    method: 'POST',
    headers: headers,
    body: body
  });
}
sw.js
const bgSyncPlugin = new workbox.backgroundSync.Plugin('myQueueName', {
  maxRetentionTime: 24 * 60 // Retry for max of 24 Hours
});
workbox.routing.registerRoute(
  '/pwa/api/add.php',
  workbox.strategies.networkOnly({
    plugins: [bgSyncPlugin]
  }),
  'POST'
);
I created a test where I set up a route, visit a page which makes an API request to that route, and then wait for the route's response:
cy
  .server()
  .route('GET', '/api/testing')
  .as('testing');
cy.visit('/index.html', { timeout: 60000 });
cy.wait('@testing', { timeout: 60000 });
This only waits for the Cypress global default responseTimeout of 30 seconds and then fails the API request.
Here's the error message logged by Cypress in the console:
Cypress errored attempting to make an http request to this url:
https://localhost:4200/api/testing
The error was:
ESOCKETTIMEDOUT
The stack trace was:
Error: ESOCKETTIMEDOUT
at ClientRequest. (…\node_modules\cypress\dist\Cypress\resources\app\packages\server\node_modules\request\request.js:778:19)
at Object.onceWrapper (events.js:314:30)
at emitNone (events.js:105:13)
at ClientRequest.emit (events.js:207:7)
at TLSSocket.emitTimeout (_http_client.js:722:34)
at Object.onceWrapper (events.js:314:30)
at emitNone (events.js:105:13)
at TLSSocket.emit (events.js:207:7)
at TLSSocket.Socket._onTimeout (net.js:402:8)
at ontimeout (timers.js:469:11)
at tryOnTimeout (timers.js:304:5)
at Timer.listOnTimeout (timers.js:264:5)
Adding a responseTimeout to the global config of Cypress will increase the timeout, but why isn't the timeout I passed to either the visit or the wait being applied?
See the code example on the docs page commands - wait - Alias:
// Wait for the route aliased as 'getAccount' to respond
// without changing or stubbing its response
cy.server()
cy.route('/accounts/*').as('getAccount')
cy.visit('/accounts/123')
cy.wait('@getAccount').then((xhr) => {
  // we can now access the low level xhr
  // that contains the request body,
  // response body, status, etc
})
I would add the .then((xhr) => to your code and see what response is coming through.
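Applied to your aliased route, that would look something like this (sketch only, reusing your alias and timeouts):

cy.server();
cy.route('GET', '/api/testing').as('testing');
cy.visit('/index.html', { timeout: 60000 });
cy.wait('@testing', { timeout: 60000 }).then((xhr) => {
  // log whatever actually came back before the socket timed out
  console.log(xhr.status, xhr.responseBody);
});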
Logic says that if a bogus route waits for the full timeout but a 'failed legitimate route' does not, then a response with a failure code is being sent back by the server within the timeout period.
The block of code in request.js where the error comes from has an interesting comment.
self.req.on('socket', function(socket) {
  var setReqTimeout = function() {
    // This timeout sets the amount of time to wait *between* bytes sent
    // from the server once connected.
    //
    // In particular, it's useful for erroring if the server fails to send
    // data halfway through streaming a response.
    self.req.setTimeout(timeout, function () {
      if (self.req) {
        self.abort()
        var e = new Error('ESOCKETTIMEDOUT') // <-- LINE 778 REFERENCED IN MESSAGE
        e.code = 'ESOCKETTIMEDOUT'
        e.connect = false
        self.emit('error', e)
      }
    })
  }
This may be a condition you want to test for (i.e. connection broken mid-response).
Unfortunately, there seems to be no cy.wait().catch() syntax; see Commands-Are-Not-Promises:
You cannot add a .catch error handler to a failed command.
You may want to try stubbing the route instead of setting the breakpoint on the server, but I'm not sure what form the fake response should take. (Ref route with stubbing)
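With the old cy.route API, a stub could look roughly like this (the response body here is a made-up placeholder, since I don't know what /api/testing actually returns):

cy.server();
cy.route({
  method: 'GET',
  url: '/api/testing',
  response: { ok: true }, // placeholder body
  delay: 1000             // optional: simulate a slow response
}).as('testing');
cy.visit('/index.html', { timeout: 60000 });
cy.wait('@testing');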
.visit() and .wait() didn't work for me; the error logs from Cypress suggested using .request() instead, which works fine.
cy.server();
cy.request('/api/path').then((xhr) => {
  console.log(xhr.body)
})
I want to test a feature in my app that sends the user a custom message when a Rails UJS/Ajax call times out.
The code that sets the timeout for Rails UJS requests is in the app itself:
$.rails.ajax = function(options) {
  if (!options.timeout) {
    options.timeout = 10000;
  }
  return $.ajax(options);
};
When observing in Chrome dev tools what happens when it times out in my local dev environment, I notice the status code is strangely 200, but as it "times out", my message is indeed displayed to the user.
on('ajax:error', function(event, xhr, status, error){
  // display message in modal for users
  if(status == "timeout") {
    console.log(' ajax request timed out');
    var msg;
    msg = Messenger().post({
      hideAfter: 8,
      message: "Too long, the app timed out"
    });
  }
});
Below is my current test (using the puffing-billy gem). I managed to stub the HTTP response for my ajax request, but I don't know how to tell RSpec to "wait" and time out, i.e. take 11 seconds and still not send any response to the xhr call :) (the xhr max timeout is set to 10000 milliseconds above, and 10 s < 11 s, so it should time out inside the RSpec test).
it " displays correct modal message appears correctly when xhr call timeout" do
visit deal_page_path(deal)
proxy.stub("http://127.0.0.1:59533/deals/dealname/ajaxrequest").and_return(:code => 200)
first('a.button').click
wait_for_ajax
within('ul.messenger') do
expect(page).to have_content('Too long, the app timed out')
end
end
If you really want it to just wait for the timeout, I believe you can use the Proc version of and_return and sleep for as long as you want the request to take:
proxy.stub("http://127.0.0.1:59533/deals/dealname/ajaxrequest").and_return(
Proc.new { |params, headers, body|
sleep 11
{code: 200}
} )
Also, rather than wait_for_ajax, just pass the amount of time you expect the element to take to appear to the within call:
within('ul.messenger', wait: 11) do ...