Cypress visit and wait timeouts ignored

I created a test where I set up a route, visit a page that makes an API request to that route, and then wait for the route's response:
cy
  .server()
  .route('GET', '/api/testing')
  .as('testing');
cy.visit('/index.html', { timeout: 60000 });
cy.wait('@testing', { timeout: 60000 });
This only waits for the Cypress global default responseTimeout of 30 seconds and then fails the API request.
Here's the error message logged by Cypress in the console:
Cypress errored attempting to make an http request to this url:
https://localhost:4200/api/testing
The error was:
ESOCKETTIMEDOUT
The stack trace was:
Error: ESOCKETTIMEDOUT
at ClientRequest.<anonymous> (…\node_modules\cypress\dist\Cypress\resources\app\packages\server\node_modules\request\request.js:778:19)
at Object.onceWrapper (events.js:314:30)
at emitNone (events.js:105:13)
at ClientRequest.emit (events.js:207:7)
at TLSSocket.emitTimeout (_http_client.js:722:34)
at Object.onceWrapper (events.js:314:30)
at emitNone (events.js:105:13)
at TLSSocket.emit (events.js:207:7)
at TLSSocket.Socket._onTimeout (net.js:402:8)
at ontimeout (timers.js:469:11)
at tryOnTimeout (timers.js:304:5)
at Timer.listOnTimeout (timers.js:264:5)
Adding a responseTimeout to the global Cypress config will increase the timeout, but why doesn't the timeout passed to either the visit or the wait take effect?
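For reference, a minimal sketch of that global override, assuming a pre-v10 cypress.json config file (60000 is just an example value):

{
  "responseTimeout": 60000
}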

See the code example on this Cypress docs page: commands - wait - Alias.
// Wait for the route aliased as 'getAccount' to respond
// without changing or stubbing its response
cy.server()
cy.route('/accounts/*').as('getAccount')
cy.visit('/accounts/123')
cy.wait('@getAccount').then((xhr) => {
  // we can now access the low level xhr
  // that contains the request body,
  // response body, status, etc
})
I would add the .then((xhr) => ...) handler to your code and see what response is coming through.
Logic says that if a bogus route waits the full timeout, but a 'failed legitimate route' does not, then a response with a failure code is being sent back from the server within the timeout period.
The block of code in request.js where the error comes from has an interesting comment:
self.req.on('socket', function(socket) {
  var setReqTimeout = function() {
    // This timeout sets the amount of time to wait *between* bytes sent
    // from the server once connected.
    //
    // In particular, it's useful for erroring if the server fails to send
    // data halfway through streaming a response.
    self.req.setTimeout(timeout, function () {
      if (self.req) {
        self.abort()
        var e = new Error('ESOCKETTIMEDOUT') // <-- line 778 referenced in the message
        e.code = 'ESOCKETTIMEDOUT'
        e.connect = false
        self.emit('error', e)
      }
    })
  }
This may be a condition you want to test for (i.e. a connection broken mid-response).
Unfortunately, there seems to be no cy.wait().catch() syntax; see Commands-Are-Not-Promises:
You cannot add a .catch error handler to a failed command.
You may want to try stubbing the route instead of setting the breakpoint on the server, but I'm not sure what form the fake response should take. (Ref: route with stubbing)
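For what it's worth, a minimal sketch of that stubbing idea using the same cy.server()/cy.route() API as above; the response body is a made-up placeholder:

cy.server()
cy.route({
  method: 'GET',
  url: '/api/testing',
  status: 200,
  response: { ok: true } // hypothetical stub body, real shape unknown
}).as('testing')
cy.visit('/index.html')
cy.wait('@testing')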

cy.visit() and cy.wait() didn't work for me; the error logs in Cypress suggested using cy.request() instead, which works fine.
cy.server();
cy.request('/api/path').then((xhr) => {
  console.log(xhr.body)
})
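If all you need is to assert on the API result, a short sketch along the same lines (standard Cypress chaining; the endpoint is from the snippet above):

cy.request('/api/path')
  .its('status')
  .should('eq', 200); // fails fast instead of hanging on a wait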

Protocol error when calling puppeteer.connect()

I am using the basic approach as set out in this post to connect from a client docker container to any one of a number of chrome docker containers (in a docker swarm/service, potentially across several servers behind nginx, deployed using CapRover).
In each chrome container I maintain a pool (just a simple array) of browser objects, and direct incoming requests to an appropriate browser as follows (very similar to the linked post):
import http from 'node:http'; // https://nodejs.org/api/http.html
import httpProxy from 'http-proxy'; // https://www.npmjs.com/package/http-proxy

const proxy = httpProxy.createProxyServer({ ws: true });

// an array (pool) of pre-launched and managed browser objects...
const browsers = [ ... ];

http
  .createServer()
  .on('upgrade', (req, socket, head) => {
    const browser = browsers[Math.floor(Math.random() * browsers.length)]; // in reality I don't just pick a browser at random
    const target = browser.wsEndpoint();
    proxy.ws(req, socket, head, { target });
  })
  .listen(3222);
The above is listening at ws://srv-captain--chrome:3222 (communication is "internal" over the docker network between containers).
Then, in my client container, I connect to the common endpoint ws://srv-captain--chrome:3222 as follows:
import puppeteer from 'puppeteer'; // https://www.npmjs.com/package/puppeteer (using version 17.1.3 at time of posting this)

try {
  const browser = await puppeteer.connect({ browserWSEndpoint: 'ws://srv-captain--chrome:3222' });
} catch (err) {
  console.error('error connecting to browser', err);
}
This works really well, except that I am getting occasional/inconsistent errors like these when calling puppeteer.connect() in the client container above:
Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
Protocol error (Performance.enable): Target closed.
Almost always, if I simply try to connect again, the connection is made without further error, and at the first attempt.
I have no idea why the error is complaining that the page has been closed or Target closed since, at this point in the process, I'm not attempting to interact with any page, and I know from listening for browser.on('disconnected'...), and also monitoring the chromium processes themselves, that each browser in the array is still working fine... none has crashed.
Any idea what's going on here?
UPDATE after further testing
Of course, in the client container we don't connect to a browser just for the sake of it, as in the snippet above, but to open a page and do some stuff with it. In practice, the client container runs something more like the following test snippet:
const doIteration = function (i) {
  return new Promise(async (resolve, reject) => {
    // mimic incoming requests coming in at random times over a short period by introducing a random initial delay...
    await new Promise(resolve => setTimeout(resolve, Math.random() * 5000));
    // now actually connect...
    let browser;
    try {
      browser = await puppeteer.connect({ browserWSEndpoint: `ws://srv-captain--chrome:3222?queryParam=loop_${i}` });
    } catch (err) {
      reject(err);
      return;
    }
    // now that we have a browser, open a new page...
    const page = await browser.newPage();
    // do something useful with the page (not shown here) and then close it..
    await page.close();
    // now disconnect (but don't close) the browser...
    browser.disconnect();
    resolve();
  });
};
const promises = [];
for (let i = 0; i < 15; i++) {
  promises.push( doIteration(i) );
}
try {
  await Promise.all(promises);
} catch (err) {
  console.error(`error doing stuff`, err);
}
Multiple iterations of the above are performed concurrently... I am using Promise.all() on an array of iteration promises to mimic multiple concurrent incoming requests in my production code. This is enough to reproduce the problem: the error doesn't happen on every call to puppeteer.connect(), just on some.
So there seems to be some sort of interplay between opening/closing a page in one iteration and calling puppeteer.connect() in another, despite each iteration closing its page and disconnecting its browser properly. That would also explain the 'Most likely the page has been closed' part of the error message, if there is some hangover relating to a page closed in another iteration... though it is odd that the error surfaces on the puppeteer.connect() call itself.
With the use of a pool of browser objects in the browsers array, and a docker swarm having multiple containers on multiple servers, each upgrade message could be received at a different container (which could even be on a different server) and could be routed to a different browser in the browsers array. But I now think that this is a red herring, because in the further testing I narrowed the problem down by routing all requests to browsers[0] and also scaling the service down to just one container... so that the upgrade messages are always handled by the same container on the same server and routed to the same browser... and the problem still occurs.
Full stacktrace for the above-mentioned error:
Error: Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
at CDPSession.send (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Connection.js:281:35)
at EmulationManager.emulateViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/EmulationManager.js:33:73)
at Page.setViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:1776:93)
at Function._create (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:242:24)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Target.page (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Target.js:123:23)
at async Promise.all (index 0)
at async BrowserContext.pages (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Browser.js:577:23)
at async Promise.all (index 0)
As I dug deeper and deeper into this problem, it became more and more apparent that I might not actually be doing anything fundamentally wrong, and that this might just be a bug in puppeteer itself. So I reported it as an issue over on puppeteer... and indeed, it is acknowledged as a bug for any version later than 15.5.0, and is being fixed. In the meantime, the workaround is to revert to puppeteer version 15.5.0 and to be careful when calling browser.pages() while concurrent connections are in use, because that call might itself throw an error... but I understand that this too might be something they can/will fix, so that browser.pages() is more resilient to concurrent connections.
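For anyone applying that workaround, a minimal sketch of the "be careful when calling browser.pages()" advice; the retry helper and its attempt count are my own illustration, not part of puppeteer's API:

// Retry browser.pages() a few times, since it can throw transiently
// while other connections are opening/closing pages.
async function pagesWithRetry(browser, attempts = 3) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await browser.pages();
    } catch (err) {
      if (i === attempts) throw err; // give up after the last attempt
    }
  }
}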

1-3 'SyntaxError: Unexpected end of JSON input' errors occurring per minute when streaming tweets in Node

Using Express and Node.js, I'm using the Twitter streaming API and the needle npm package to pull tweets related to keywords. The streaming is functional and I am successfully pulling tweets using the following (simplified) code:
const needle = require('needle');
const TOKEN = // My Token
const streamURL = 'https://api.twitter.com/2/tweets/search/stream';

function streamTweets() {
  const stream = needle.get(streamURL, {
    headers: {
      Authorization: `Bearer ${TOKEN}`
    }
  });
  stream.on('data', (data) => {
    try {
      const json = JSON.parse(data); // This line appears to be causing my error
      const text = json.data.text;
    } catch (error) {
      console.log("error");
    }
  });
}
However, no matter which search term I use (and the subsequent large or small volume of tweets coming through), the catch block will consistently log 1-3 errors per minute, which look like this:
SyntaxError: Unexpected end of JSON input
at JSON.parse (<anonymous>)
at PassThrough.<anonymous> (C:\Users\danie\OneDrive\Documents\Personal-Projects\twitter-program\server.js:56:31)
at PassThrough.emit (events.js:315:20)
at addChunk (internal/streams/readable.js:309:12)
at readableAddChunk (internal/streams/readable.js:284:9)
at PassThrough.Readable.push (internal/streams/readable.js:223:10)
at PassThrough.Transform.push (internal/streams/transform.js:166:32)
at PassThrough.afterTransform (internal/streams/transform.js:101:10)
at PassThrough._transform (internal/streams/passthrough.js:46:3)
at PassThrough.Transform._read (internal/streams/transform.js:205:10)
I've seen previous advice which says that data can be fired in multiple chunks, and to push the chunks to an array, i.e. something like the following:
let chunks = [];
stream.on('data', (dataChunk) => {
  chunks.push(dataChunk);
}).on('end', () => {
  // combine chunks to create JSON object
})
But this didn't work either (it may have been my implementation, but I don't think so), and now I'm wondering if it's perhaps an error with the Twitter API, because most of the tweet objects do come through correctly. I should note that the streamTweets() function above is called from an async function, and I am also wondering whether that has something to do with it.
Has anyone else encountered this error? Or does anyone have any idea how I might fix it? Ideally I'd like 100% of the tweets to stream correctly.
Thanks in advance!
For future readers, this error is triggered by Twitter's heartbeat message that is sent every 20 seconds. Per the documentation:
The endpoint provides a 20-second keep alive heartbeat (it will look like a new line character).
Adding a guard against parsing the empty string will prevent the JSON parsing error.
if (data === "")
  return
An empty string is invalid JSON, hence the emitted error.
Now, acknowledging that the heartbeat exists, it may be beneficial to add read_timeout = 20 * 1000 to the needle request to avoid a stalled program that receives no data, be that due to a local network outage or DNS miss, etc.
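Putting both suggestions together, a sketch of the stream handler (one assumption on my part: the heartbeat chunk may arrive as a newline rather than a strictly empty string, so this trims before checking; read_timeout is needle's read-inactivity option, in milliseconds):

const stream = needle.get(streamURL, {
  headers: { Authorization: `Bearer ${TOKEN}` },
  read_timeout: 20 * 1000 // abort if nothing (not even a heartbeat) arrives for 20s
});

stream.on('data', (data) => {
  const text = data.toString();
  if (text.trim() === '') return; // skip the keep-alive heartbeat
  try {
    const json = JSON.parse(text);
    console.log(json.data.text);
  } catch (error) {
    console.error('parse error', error);
  }
});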

Fetch of the service worker doesn't seem to get triggered

When a browser requests an image from the server, the call is picked up by an API controller in the back end. There, an authorization check must be done before returning the image, to determine whether the request is allowed or not.
So I need to add the authorization header. When searching for the best solution, I found this article: https://www.twelve21.io/how-to-access-images-securely-with-oauth-2-0/ and I was mostly interested in solution number 4, which uses a Service Worker.
I made my own implementation and registered a service worker:
if ('serviceWorker' in navigator) {
  console.log("serviceWorker active");
  window.addEventListener('load', onLoad);
}
else {
  console.log("serviceWorker not active");
}

function onLoad() {
  console.log("onLoad is called");
  var scope = {
    scope: '/api/imagesgateway/'
  };
  navigator.serviceWorker.register('/Scripts/ServiceWorker/imageInterceptor.js', scope)
    .then(registration => console.log("ServiceWorker registration successful with scope: ", registration.scope))
    .catch(error => console.error("ServiceWorker registration failed: ", error));
}
and this is in my imageInterceptor:
self.addEventListener('fetch', event => {
  console.log("fetch event triggered");
  event.respondWith(
    fetch(event.request, {
      mode: 'cors',
      credentials: 'include',
      headers: {
        'Authorization': 'Bearer ...'
      }
    })
  )
});
When I run my application, I see in my console that the registration seems to execute successfully, as the console.logs are printed (serviceWorker active, onLoad is called, and ServiceWorker registration successful with the correct scope: https://localhost:44332/api/imagesgateway/).
But when I load an image (https://localhost:44332/api/imagesgateway/...) via the gateway, I still get a 400, and when I put a breakpoint on the back end I see that the authorization header is still null. Also, I don't see the "fetch event triggered" message in my console. Another article states that registered service workers can be inspected via chrome://inspect/#service-workers, but I don't see my worker there either.
My question is: why isn't the authorization header added? Is it because, although the registration seems to succeed, it actually doesn't, which would also explain why I don't see the worker under chrome://inspect/#service-workers?
You're not seeing fetch event triggered in the browser console because your Service Worker script isn't allowed to intercept the image requests. This is because your Service Worker script is located in a directory outside the scope of the requests you're interested in.
In order to intercept requests for resources under /api/imagesgateway/, the SW script needs to be located in /, /api/, or /api/imagesgateway/. It cannot be located in /some/other/directory/service-worker.js.
This is the reason that your Service Worker registers successfully! There is no problem in registering the SW; the problem lies in what it can do.
More info: Understanding Service Worker scope
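A minimal sketch of the fix that follows from this, assuming the worker script can be served from the path it needs to control (the exact location is illustrative):

// Serving the worker from under /api/imagesgateway/ puts the image
// requests inside its default scope.
navigator.serviceWorker.register('/api/imagesgateway/imageInterceptor.js', {
  scope: '/api/imagesgateway/'
});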

Navigator credentials creation method returning an exception

Trying to run a demo of the WebAuthn spec (https://www.w3.org/TR/webauthn/), available at https://github.com/molekilla/webauthn-demo-fork, under Firefox Nightly.
getMakeCredentialsChallenge({
  username,
  name
})
  .then((response) => {
    console.log(response);
    let publicKey = preformatMakeCredReq(response);
    console.log(publicKey);
    console.log(publicKey.challenge)
    return navigator.credentials.create({publicKey})
  })
Whenever the execution reaches the return statement, the Promise stays pending for a few seconds and eventually rejects, logging either [Exception... "Abort" nsresult: "0x80004004 (NS_ERROR_ABORT)" location: "<unknown>" data: no] or UnknownError: The operation failed for an unknown transient reason. Both objects seem fine. Any idea why it is not resolving?

Rails - How to stub a certain response time for a stubbed / fake http or ajax response

I want to test a feature in my app that sends the user a custom message when a Rails UJS/ajax request times out.
The code that sets the timeout on the Rails UJS request is itself in the app:
$.rails.ajax = function(options) {
  if (!options.timeout) {
    options.timeout = 10000;
  }
  return $.ajax(options);
};
When observing in the Chrome dev tools what happened when it timed out in local dev mode, I noticed the status code is, strangely, 200, but as the request "times out", my message is indeed displayed to the user.
on('ajax:error', function(event, xhr, status, error) {
  // display message in modal for users
  if (status == "timeout") {
    console.log('ajax request timed out');
    var msg;
    msg = Messenger().post({
      hideAfter: 8,
      message: "Too long, the app timed out"
    });
  }
});
Below is my current test (using the puffing-billy gem). I managed to stub the http response for my ajax request, but I don't know how to tell RSpec to "wait", i.e. take 11 seconds and still not send any response to the xhr call. (The xhr timeout is set to 10000 milliseconds above, so 10 s < 11 s; it should time out inside the RSpec test.)
it " displays correct modal message appears correctly when xhr call timeout" do
visit deal_page_path(deal)
proxy.stub("http://127.0.0.1:59533/deals/dealname/ajaxrequest").and_return(:code => 200)
first('a.button').click
wait_for_ajax
within('ul.messenger') do
expect(page).to have_content('Too long, the app timed out')
end
end
If you really want it to just wait for the timeout, I believe you can use the Proc version of and_return and sleep for as long as you want the request to take:
proxy.stub("http://127.0.0.1:59533/deals/dealname/ajaxrequest").and_return(
  Proc.new { |params, headers, body|
    sleep 11
    {code: 200}
  } )
Also, rather than wait_for_ajax, just pass the amount of time you expect the element to take to appear to the within call:
within('ul.messenger', wait: 11) do ...
