SignalR not handling requests after some limit. Tabs remain loading even in other browsers on the same or a different machine - asp.net-mvc

When the number of open tabs increases in a browser for a domain with SignalR implemented, tabs start spinning (loading indefinitely) once a certain number of tabs is reached.
From some research, this appears to be a browser limitation on concurrent connections per host. The links are:
https://medium.com/yasser-shaikh/multiple-tab-issue-with-signal-r-9df76c1ffba0
https://github.com/SignalR/SignalR/issues/2744
https://github.com/SignalR/SignalR/issues/849
SignalR, Limited unique connections (opened tabs) IIS8, Windows8
But when I try to browse any URL from the same domain from a different machine or a different browser, the tabs spin there too. The tabs on the other machine start working as soon as I close some tabs in the first browser.
Please help.
Thanks

The links that you have shared already describe the problem and also provide the answers.
For example, "using localStorage as a message bus between the tabs" should fix the problem. For that purpose you can use the IWC-SignalR library from here: https://github.com/slimjack/IWC-SignalR if you want to reduce the workload.
Sample code using IWC-SignalR is as follows:
var echoHub = SJ.iwc.SignalR.getHubProxy('echo', {
    client: {
        displayMsg: function (msg) {
            console.log(msg);
        }
    }
});

SJ.iwc.SignalR.start().done(function () {
    console.log('started');
    echoHub.server.send('test').done(function () {
        console.log('sent');
    });
});
Here, a hub named Echo with a method Send is defined on the server; Send calls the displayMsg method of all clients. The displayMsg handler is registered on the proxy like this:
var echoHub = SJ.iwc.SignalR.getHubProxy('echo', {
    client: {
        displayMsg: function (msg) {
            console.log(msg);
        }
    }
});

echoHub.server.send('test');
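For comparison, the same hub used with the plain SignalR jQuery client (no IWC wrapper, so every tab would hold its own connection) would look roughly like this, assuming the default generated proxy:
var echoHub = $.connection.echo;              // generated proxy for the 'echo' hub
echoHub.client.displayMsg = function (msg) {  // client handlers must be wired up before start()
    console.log(msg);
};
$.connection.hub.start().done(function () {
    echoHub.server.send('test');
});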
The full implementation and description are available at the above-mentioned link.

Yes, I have explored the solutions you mentioned and applied them too. Now I am able to open multiple tabs on my machine. However, when I load multiple tabs on mine, it blocks all other users on other machines. They are only able to browse the URLs again once I close some tabs on my machine. In effect, when any one user's browser is blocked, it ends up blocking the server for everyone.

Related

Protocol error when calling puppeteer.connect()

I am using the basic approach as set out in this post to connect from a client docker container to any one of a number of chrome docker containers (in a docker swarm/service, potentially across several servers behind nginx, deployed using CapRover).
In each chrome container I maintain a pool (just a simple array) of browser objects, and direct incoming requests to an appropriate browser as follows (very similar to the linked post):
import http from 'node:http'; // https://nodejs.org/api/http.html
import httpProxy from 'http-proxy'; // https://www.npmjs.com/package/http-proxy

const proxy = new httpProxy.createProxyServer({ ws: true });

// an array (pool) of pre-launched and managed browser objects...
const browsers = [ ... ];

http
    .createServer()
    .on('upgrade', (req, socket, head) => {
        const browser = browsers[Math.floor(Math.random() * browsers.length)]; // in reality I don't just pick a browser at random
        const target = browser.wsEndpoint();
        proxy.ws(req, socket, head, { target });
    })
    .listen(3222);
The above is listening at ws://srv-captain--chrome:3222 (communication is "internal" over the docker network between containers).
Then, in my client container, I connect to the common endpoint ws://srv-captain--chrome:3222 as follows:
import puppeteer from 'puppeteer'; // https://www.npmjs.com/package/puppeteer (using version 17.1.3 at time of posting this)

try {
    const browser = await puppeteer.connect({ browserWSEndpoint: 'ws://srv-captain--chrome:3222' });
} catch (err) {
    console.error('error connecting to browser', err);
}
This works really well, except that I am getting occasional/inconsistent errors like these when calling puppeteer.connect() in the client container above:
Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
Protocol error (Performance.enable): Target closed.
Almost always, if I simply try to connect again, the connection is made without further error, and at the first attempt.
I have no idea why the error is complaining that the page has been closed or Target closed since, at this point in the process, I'm not attempting to interact with any page, and I know from listening for browser.on('disconnected'...), and also monitoring the chromium processes themselves, that each browser in the array is still working fine... none has crashed.
Any idea what's going on here?
UPDATE after further testing
Of course, in the client container we don't connect to a browser just for the sake of it, like in the above snippet, but to open a page and do some stuff with the page. In practice, in the client container it's more like the following test snippet:
const doIteration = function (i) {
    return new Promise(async (resolve, reject) => {
        // mimic incoming requests coming in at random times over a short period by introducing a random initial delay...
        await new Promise(resolve => setTimeout(resolve, Math.random() * 5000));
        // now actually connect...
        let browser;
        try {
            browser = await puppeteer.connect({ browserWSEndpoint: `ws://srv-captain--chrome:3222?queryParam=loop_${i}` });
        } catch (err) {
            reject(err);
            return;
        }
        // now that we have a browser, open a new page...
        const page = await browser.newPage();
        // do something useful with the page (not shown here) and then close it..
        await page.close();
        // now disconnect (but don't close) the browser...
        browser.disconnect();
        resolve();
    });
};

const promises = [];
for (let i = 0; i < 15; i++) {
    promises.push(doIteration(i));
}

try {
    await Promise.all(promises);
} catch (err) {
    console.error(`error doing stuff`, err);
}
Each iteration above is being performed multiple times concurrently... I am using Promise.all() on an array of iteration promises to mimic multiple concurrent incoming requests in my production code. The above is enough to reproduce the problem... the error doesn't happen on calling puppeteer.connect() with every iteration, just some.
So there seems to be some sort of interplay between opening/closing a page in one iteration and calling puppeteer.connect() in another, despite each iteration closing its page and disconnecting its browser properly. That would also explain the "Most likely the page has been closed" error message if there is some hangover relating to a page closed in another iteration... though it is odd that the error surfaces during puppeteer.connect().
With the use of a pool of browser objects in the browsers array, and a docker swarm having multiple containers on multiple servers, each upgrade message could be received at a different container (which could even be on a different server) and could be routed to a different browser in the browsers array. But I now think that this is a red herring, because in the further testing I narrowed the problem down by routing all requests to browsers[0] and also scaling the service down to just one container... so that the upgrade messages are always handled by the same container on the same server and routed to the same browser... and the problem still occurs.
Full stacktrace for the above-mentioned error:
Error: Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
    at CDPSession.send (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Connection.js:281:35)
    at EmulationManager.emulateViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/EmulationManager.js:33:73)
    at Page.setViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:1776:93)
    at Function._create (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:242:24)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Target.page (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Target.js:123:23)
    at async Promise.all (index 0)
    at async BrowserContext.pages (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Browser.js:577:23)
    at async Promise.all (index 0)
As I dug deeper and deeper into this problem, it became more and more apparent that I might not actually be doing anything fundamentally wrong, and that this might just be a bug in puppeteer itself. So I reported it as an issue over on puppeteer... and indeed, it is acknowledged as a bug affecting any version later than 15.5.0, and is being fixed. In the meantime, the workaround is to revert to puppeteer version 15.5.0 and to be careful when calling browser.pages() while concurrent connections are in use, because that call might itself throw an error... but I understand that this too might be something they can/will fix so that browser.pages() is more resilient to the presence of concurrent connections.
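Since, as noted above, a second connection attempt almost always succeeds, another stopgap is to wrap puppeteer.connect() in a small retry loop. This is not from the puppeteer issue; connectWithRetry is my own sketch, with arbitrary retry count and delay:
async function connectWithRetry(browserWSEndpoint, attempts = 3, delayMs = 250) {
    for (let i = 0; i < attempts; i++) {
        try {
            // attempt to connect to the shared proxy endpoint
            return await puppeteer.connect({ browserWSEndpoint });
        } catch (err) {
            // on the transient protocol errors described above, wait briefly and retry
            if (i === attempts - 1) throw err;
            await new Promise(resolve => setTimeout(resolve, delayMs));
        }
    }
}

const browser = await connectWithRetry('ws://srv-captain--chrome:3222');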

Service-Worker stays in waiting state in Chrome

I'm working on a SPA with Vue. I'd like to update to a new service-worker when the user navigates to a specific page. That is a safe moment to refresh, because the user's view is already changing (a pattern discussed in this video: https://youtu.be/cElAoxhQz6w).
The issue is that sometimes (infrequently) the service-worker won't activate when skipWaiting is called. The call is made correctly, and in Chrome I even get a response indicating that the current service-worker stops (see animated GIF), but then the same service-worker starts running again instead of the waiting one.
After a while (1-2 minutes) the waiting service-worker is suddenly activated. Not a situation you want, because it happens out of the blue when the user might be in the middle of an activity.
Also, once I am in this situation I can't activate the service-worker by calling skipWaiting again (by doing multiple navigations). The message is received by the service-worker but nothing happens; it stays in "waiting to activate". When I press skipWaiting in Chrome itself, it works.
I have no clue what goes wrong. Is this an issue with Chrome, Workbox or something else?
The closest match I found is this topic: self.skipWaiting() not working in Service Worker
I use Vue.js, but I don't depend on the PWA plugin for the service-worker; I use the Workbox webpack plugin.
I've edited the example code below; the minimal code probably didn't show the problem well.
In main.js:
let sw = await navigator.serviceWorker.register("/service-worker.js", {
    updateViaCache: "none",
});

let firstSw = false;

navigator.serviceWorker.addEventListener("controllerchange", () => {
    // no need to refresh when the first sw controls the page, we solve this with clientsClaim
    // this makes sure that when multiple tabs are open they all refresh
    if (!firstSw) {
        window.location.reload();
    }
});

sw.onupdatefound = () => {
    const installingWorker = sw.installing;
    installingWorker.onstatechange = async () => {
        console.log("installing worker state-change: " + installingWorker.state);
        if (installingWorker.state === "installed") {
            if (navigator.serviceWorker.controller) {
                firstSw = false;
                // set the waiting service-worker in the store
                // so we can update it and refresh the page on navigation
                await store.dispatch("setWaitingSW", sw.waiting);
            } else {
                console.log("First sw available");
                firstSw = true;
            }
        }
    };
};
In router.js:
// after navigation to specific routes we check for a waiting service-worker
router.afterEach(async (to) => {
    if (to.name == "specificpage") {
        let waitingSw = store.getters["getWaitingSW"];
        if (waitingSw) {
            waitingSw.postMessage("SKIP_WAITING");
            // clean the store, because we might have changed our data model
            await store.dispatch("cleanLocalForage");
        }
    }
});
In service-worker.js:
self.addEventListener("message", event => {
    if (event.data === "SKIP_WAITING") {
        console.log("sw received skip waiting");
        self.skipWaiting();
    }
});
skipWaiting() isn't instant. If there are active fetches going through the current service worker, it won't break those. If you're seeing skipWaiting() taking a long time, I'd guess you have some long-running HTTP connections holding the old service worker in place.
I'm not sure that
let sw = await navigator.serviceWorker.register("/service-worker.js", { updateViaCache: "none" });

if (sw.waiting) {
    sw.waiting.postMessage("SKIP_WAITING");
}
is the code that you want in this case. Your if (sw.waiting) check is only evaluated once, and the newly registered service worker might still be in the installing state at that point. If so, sw.waiting will be falsy at the time of initial evaluation, though it may be truthy a short time later.
Instead, I'd recommend following a pattern like what's demonstrated in this recipe, where you explicitly listen for a service worker to enter waiting on the registration. That example uses the workbox-window library to paper over some of the details.
If you don't want to use workbox-window, you should follow this guidance: check whether sw.installing is set after registration; if it is, listen for the statechange event on sw.installing to detect when it reaches 'installed'. Once that happens, sw.waiting should be set to the newly installed service worker, and at that point you can postMessage() to it.
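A minimal sketch of that guidance without workbox-window might look like the following (whenWaiting is a made-up helper name; the registration call mirrors the one in the question):
const reg = await navigator.serviceWorker.register("/service-worker.js", { updateViaCache: "none" });

function whenWaiting(registration, callback) {
    // a worker may already be waiting from a previous page load...
    if (registration.waiting) {
        callback(registration.waiting);
        return;
    }
    // ...otherwise watch any installing worker until it reaches 'installed'
    const track = (worker) => worker.addEventListener("statechange", () => {
        if (worker.state === "installed" && registration.waiting) {
            callback(registration.waiting);
        }
    });
    if (registration.installing) track(registration.installing);
    registration.addEventListener("updatefound", () => track(registration.installing));
}

// once a worker is actually waiting, it is safe to post the skip-waiting message to it
whenWaiting(reg, (waiting) => waiting.postMessage("SKIP_WAITING"));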
OK, I had a similar issue and it took me two days to find the cause.
There is a scenario where you can cause a race condition between the new service worker and the old one if you request a precached asset at the exact moment you call skipWaiting.
In my case, I was prompting the user to update to a new version, and upon their confirmation I showed a loading spinner. That spinner was a Vue SFC dynamic import, which kicked off a network request through the old service worker to fetch the precached JS file; this caused both workers to hang and get very confused.
You can check whether you're having a similar issue by looking at the service-worker-specific network requests (the Network requests button in the image below) and making sure none of them happen the instant you're trying to skip waiting on the newer service worker.
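In practice that means making sure nothing starts a fetch through the old service worker at the moment you promote the new one. A sketch under that assumption (the import path and showSpinner are hypothetical; the SKIP_WAITING listener is the one from the question):
// load the spinner (a precached asset) BEFORE promoting the new worker,
// so no fetch is routed through the old service worker while it is being replaced
const spinner = await import("./components/LoadingSpinner.vue"); // hypothetical dynamic import
showSpinner(spinner.default);                                    // hypothetical UI call
registration.waiting.postMessage("SKIP_WAITING");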

iOS Safari: Frequently sent ajax calls sometimes not reaching the server

After searching for hours, I unfortunately can't find a solution to my current iOS Safari issue.
I've got a JavaScript frontend which uses jQuery.ajax to communicate with an ASP.NET MVC web server.
That works absolutely perfectly on all platforms, e.g. Windows 10 with Chrome, Firefox, IE (yes, IE works) and Edge, or a Mac with Chrome, Firefox and Safari. The one place where it does not work is iOS Safari.
In my scenario, I'm sending multiple AJAX requests to the server almost at the same time, maybe 3 to at most 6 calls. In the Safari developer tools, the calls look like this.
They seem to take very long, but looking at the server, they apparently never reach the backend. Also, after exactly 10 minutes, the browser runs into a timeout, even though I have configured an AJAX timeout of 60 seconds.
My code looks pretty okay to me at the moment (written in TypeScript):
let defer: JQueryDeferred<any> = jQuery.Deferred();
this._RunningRequests++;

let settings: JQueryAjaxSettings = {
    method: method,
    url: this.BuildURL(controller, action, id),
    contentType: 'application/json',
    timeout: Timeout,
    cache: false,
    headers: {}
};

if (payload) {
    settings.data = JSON.stringify(payload);
}

jQuery.ajax(settings)
    .done((result: any) => {
        if (this.DetectLogoutRedirect(result)) {
            defer.reject();
            location.reload(true);
            return;
        }
        defer.resolve(result);
    })
    .fail((jqXHR: JQueryXHR, textStatus: string, errorThrown: string) => {
        defer.reject();
        this.HandleError(method, controller, action, jqXHR, textStatus, errorThrown);
    })
    .always(() => {
        this._RunningRequests--;
    });

return defer.promise();
Now here's the fun part. As soon as I add a delay to the call ...
let delay = this._RunningRequests * 500;
setTimeout(() => { jQuery.ajax(...) }, delay);
... which makes sure the calls are not sent quickly after each other, it works perfectly fine.
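A variation of this workaround that avoids hard-coded delays would be to serialize the calls through a promise chain, so each request only starts once the previous one has settled (a sketch; enqueueAjax is a made-up helper, not part of my original code):
let queue = Promise.resolve();

function enqueueAjax(settings) {
    // each request waits for the previous one, so no two are in flight at once
    const result = queue.then(() => jQuery.ajax(settings));
    // keep the chain alive even if a request fails
    queue = result.catch(() => {});
    return result;
}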
Things I've tried and found out so far:
I've set all cache-control headers, plus set every cache-related jQuery option to false
I've added a GUID-like parameter to every call (POST as well) to ensure the URL is always unique
As mentioned above, I've added the delay, which solves the problem but is not really practicable
I've tried to reproduce the issue within the iOS Simulator. Same result.
It seems to affect POST requests only, but I'm not sure about that.
How can I fix this?
We encountered the same issue with our Single Page App as soon as it got particularly "chatty" with our API.
It appears to be caused by a bug in the DNS resolution somewhere in the iOS networking stack.
We found that by changing the DNS setting on the device from automatic to manual, and setting the DNS servers to Google Public DNS, all XHR requests made by our app worked and we stopped getting the weird timeouts.
Here is a guide on how to change the DNS setting on an iOS device:
https://www.macinstruct.com/tutorials/how-to-change-your-iphones-dns-servers/

IE 11 + SignalR not working

Strange behavior happens when using SignalR with IE 11. Scenario:
We have some dispatcher-type functionality where the dispatcher performs some actions and another user can see the updates live (querying). The parameters that are sent come through fine and cause updates on the IE client side without having to open the developer console.
BUT the one method that does not work (performUpdate - which pulls the query results - a server > client call, not client > server > client) never gets called. IT ONLY GETS CALLED WHEN THE DEVELOPER CONSOLE IS OPEN.
Here's what I've tried:
Why JavaScript only works after opening developer tools in IE once?
SignalR : Under IE9, messages can't be received by client until I hit F12 !!!!
SignalR client doesn't work inside AngularJs controller
Some code snippets
Dispatcher side
On dropdown change, we get the currently selected values and send updates across the wire. (This works fine).
$('#Selector').on('change', function () {
    var variable = $('#SomeField').val();
    ...
    liveBatchHub.server.updateParameters(variable, ....);
});
Server Side
When the dispatcher searches, we have some server-side code that sends out notifications that a search has been run, telling the clients to pull results.
public void Update(string userId, Guid bId)
{
    var context = GlobalHost.ConnectionManager.GetHubContext<LiveBatchViewHub>();
    context.Clients.User(userId).performUpdate(bId);
}
Client side (viewer of live updates)
This never gets called unless developer tools is open
liveBatchHub.client.performUpdate = function (id) {
    // perform update here
    update(id);
};
Edit
A little more information which might be useful (I am not sure why it makes a difference): this ONLY seems to happen with server > client calls. When the dispatcher changes the search parameters, the update is client > server > client, or dispatcher-client > server > viewer-client, and that seems to work. After they click search, a service in the search pipeline calls performUpdate on the server side (server > viewer-client). Not sure if this matters?
Edit 2 & Final Solution
Eyes bloodshot, I realize I left out one key part of this question: we are using Angular as well on this page. Guess I've been staring at it too long and left this out - sorry. I awarded JDupont the answer because he was on the right track: caching. But it was not jQuery's ajax caching, it was Angular's $http.
Just so no one else has to spend days and nights banging their heads against the desk, the final solution was to disable caching on ajax calls made through Angular's $http.
Taken from here:
myModule.config(['$httpProvider', function ($httpProvider) {
    // initialize get if not there
    if (!$httpProvider.defaults.headers.get) {
        $httpProvider.defaults.headers.get = {};
    }
    // Answer edited to include suggestions from comments
    // because previous version of code introduced browser-related errors
    // disable IE ajax request caching
    $httpProvider.defaults.headers.get['If-Modified-Since'] = 'Mon, 26 Jul 1997 05:00:00 GMT';
    // extra
    $httpProvider.defaults.headers.get['Cache-Control'] = 'no-cache';
    $httpProvider.defaults.headers.get['Pragma'] = 'no-cache';
}]);
I have experienced similar behavior in IE in the past. I may know of a solution to your problem.
IE caches some ajax requests by default. You may want to try turning this off globally. Check this out: How to prevent IE from caching Ajax with jQuery
Basically you would globally switch this off like this:
$.ajaxSetup({ cache: false });
or for a specific ajax request like this:
$.ajax({
    cache: false,
    // other options...
});
I had a similar issue with my GET requests caching. My update function would only fire off once unless dev tools was open. When it was open, no caching would occur.
If your code works properly in other browsers, the problem may lie in the transport method SignalR is using. The transports are WebSockets, Server-Sent Events, Forever Frame and Long Polling, chosen based on browser support.
Forever Frame is for Internet Explorer only. See the Introduction to SignalR to learn which transport method is used in which case (note that not every transport is available in every browser; for example, IE doesn't support Server-Sent Events).
Inside a hub, you can tell which transport is being used just by looking at the request's query string, which can be useful for logging:
Context.QueryString["transport"];
I think the issue most likely comes from IE using Forever Frame, since it sometimes causes SignalR to crash on Ajax calls. You can try removing Forever Frame support, forcing SignalR to use the remaining transports supported by the browser, with the following client-side code:
$.connection.hub.start({ transport: ['webSockets', 'serverSentEvents', 'longPolling'] });
That covers some facts about SignalR and gives you some logging/tracing tools to troubleshoot the problem. For more help, please add further details :)
Update:
Since your problem seems very strange and I don't have enough visibility into your code, I propose some steps based on my experience that I hope will be useful:
Set up Browser Link in a suitable IDE
Check the Network tab's request/response data while the process runs
Make sure you haven't used reserved names on the server or client side (perhaps by renaming methods and variables)
Also, I think you need to use liveBatchHub.server.update(variable, ....); instead of liveBatchHub.server.updateParameters(variable, ....); on the dispatcher side when making the server call, since the name after server must match the hub method name on the server.

Modify URL before loading page in firefox

I want to prefix URLs which match my patterns. When I open a new tab in Firefox and enter a matching URL, the page should not load normally; the URL should first be modified and only then should the page start loading.
Is it possible to modify a URL through a Mozilla Firefox add-on before the page starts loading?
Browsing the HTTPS Everywhere add-on suggests the following steps:
1. Register an observer for the "http-on-modify-request" observer topic with nsIObserverService
2. Proceed if the subject of your observer notification is an instance of nsIHttpChannel and subject.URI.spec (the URL) matches your criteria
3. Create a new nsIStandardURL
4. Create a new nsIHttpChannel
5. Replace the old channel with the new one. The code for doing this in HTTPS Everywhere is quite dense and probably much more than you need; I'd suggest starting with chrome/content/IOUtils.js.
Note that you should register a single "http-on-modify-request" observer for your entire application, which means you should put it in an XPCOM component (see HTTPS Everywhere for an example).
The following articles do not solve your problem directly, but they do contain a lot of sample code that you might find helpful:
https://developer.mozilla.org/en/Setting_HTTP_request_headers
https://developer.mozilla.org/en/XUL_School/Intercepting_Page_Loads
Thanks to Iwburk, I have been able to do this.
We can do this by overriding the nsIHttpChannel with a new one. Doing this is slightly complicated, but luckily the HTTPS Everywhere add-on implements it to force an HTTPS connection.
HTTPS Everywhere's source code is available here.
Most of the code needed for this is in the files:
IOUtils.js
ChannelReplacement.js
We can work with the above files alone, provided we have the basic variables like Cc and Ci set up and the function xpcom_generateQI defined.
var httpRequestObserver = {
    observe: function (subject, topic, data) {
        if (topic == "http-on-modify-request") {
            var httpChannel = subject.QueryInterface(Components.interfaces.nsIHttpChannel);
            var requestURL = subject.URI.spec;
            if (isToBeReplaced(requestURL)) {
                var newURL = getURL(requestURL);
                ChannelReplacement.runWhenPending(subject, function () {
                    // replace the original channel with one pointing at the rewritten URL
                    var cr = new ChannelReplacement(subject, newURL);
                    cr.replace(true, null);
                    cr.open();
                });
            }
        }
    },
    get observerService() {
        return Components.classes["@mozilla.org/observer-service;1"]
            .getService(Components.interfaces.nsIObserverService);
    },
    register: function () {
        this.observerService.addObserver(this, "http-on-modify-request", false);
    },
    unregister: function () {
        this.observerService.removeObserver(this, "http-on-modify-request");
    }
};

httpRequestObserver.register();
The code replaces the request; it does not redirect.
While I have tested the above code well enough, I am not sure about its inner workings. As far as I can make out, it copies all the attributes of the requested channel and sets them on the replacement channel, after which the output requested by the original request is somehow supplied through the new channel.
P.S. I had seen an SO post in which this approach was suggested.
You could listen for the page load event or maybe the DOMContentLoaded event instead. Or you can make an nsIURIContentListener but that's probably more complicated.
Is it possible to modify a URL through a Mozilla Firefox add-on before the page starts loading?
YES, it is possible.
Use page-mod of the Add-on SDK, setting contentScriptWhen: "start".
Then, after completely preventing the document from being parsed, you can either:
1. fetch a different document from the same domain and inject it into the page, or
2. do a location.replace() call after some document.URL processing, as sketched below.
Here is an example of doing 1: https://stackoverflow.com/a/36097573/6085033
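And a minimal sketch of option 2 with the Add-on SDK's page-mod (the match pattern and rewritten URL are placeholders for your own logic):
// main.js of an Add-on SDK extension
var pageMod = require("sdk/page-mod");

pageMod.PageMod({
    include: "*.example.com",      // hypothetical match pattern
    contentScriptWhen: "start",    // run before the document is parsed
    contentScript:
        "window.stop();" +         // prevent the original page from loading
        "location.replace('https://proxy.example.com/?url=' + encodeURIComponent(document.URL));" // hypothetical prefixed URL
});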
