I'm using SignalR 2.2.2 to send users messages from my backend. When a user is logged in, and if other conditions are met, their connection is added to a group with the user's userId on my message hub.
It works great, as long as they have 10 or fewer tabs/windows open. Anything beyond that, and they're stuck at "Loading..." indefinitely.
It seems to just be getting stuck on $.connection.hub.start();
I don't necessarily need to allow each user an unlimited number of SignalR connections, but breaking the entire site for them at 10 open tabs is a problem.
I've tried catching or handling an error, but it still just hangs there.
$(function () {
    if (loggedInUser != null) {
        var user = loggedInUser.UserId;
        var messaging = $.connection.messageHub;
        if (conditions) {
            $.connection.hub.start().done(function () {
                messaging.server.joinGroup(user);
            });
        }
    }
});
I want to do at least one of the following:
-Just stop adding connections if a limit is reached
-Increase the limit of connections
-If limit is reached, start closing earlier connections
-Try to connect, and if it doesn't work after a few seconds, give up (see the sketch below)
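One idea for that last option, as a rough and untested sketch: wrap the start in a timer and abandon the attempt if it hasn't completed within a few seconds. This assumes the SignalR 2.x jQuery client, where calling $.connection.hub.stop() aborts a pending start(); the 5-second value and the gaveUp flag are arbitrary choices.
$(function () {
    if (loggedInUser != null && conditions) {
        var user = loggedInUser.UserId;
        var messaging = $.connection.messageHub;
        var gaveUp = false;
        // Give up on connecting if start() hasn't completed within 5 seconds (arbitrary value).
        var giveUpTimer = setTimeout(function () {
            gaveUp = true;
            $.connection.hub.stop();
        }, 5000);
        $.connection.hub.start()
            .done(function () {
                clearTimeout(giveUpTimer);
                if (!gaveUp) {
                    messaging.server.joinGroup(user);
                }
            })
            .fail(function () {
                clearTimeout(giveUpTimer);
            });
    }
});
At least this keeps the rest of the page usable when the extra tabs can't get a connection.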
I am building a web app to display on my iPad to control my Raspberry Pi acting as an audio recorder. Part of the need is to maintain an open EventSource so that the server can send Server-Sent Events. A specific instance of the app can grab control of the recording process, but will lose control if the server sees the SSE link close. This is just protection against a client disappearing and leaving the control held (control of the process does need to be renewed at least every 5 minutes, but I don't really want to wait that long in the normal case of someone just closing the browser tab).
Part of my need is to push the browser to the background so I can then open up the camera and record a video.
I built this app and had it almost working (see https://github.com/akc42/pi_record.git, master branch).
Until I pushed the browser to the background and found iOS shut down the page and broke the SSE link.
I tried restructuring to use a private web worker to manage the SSE link, passing messages between the web worker and the main JavaScript thread. Again it was almost working (see the workers branch of the above repository), but that got shut down too!
My last thought is to use a service worker, but how to structure the app?
Clearly the service worker must act as a client to the server for the server side events. It must keep the connection open, but it also needs to keep track of multiple tabs in the browser which may or may not try and grab control of the interface, and only allow one tab to do so.
I can think of three approaches, but it's difficult to see which is better. At least I have never even seen any mention of approaches 2 and 3 below, but it seems to me that one of these two might actually be the simplest.
Approach 1
Move the code I have now for separate web workers into the service worker. However, we will need to add some form of ID to the message passing between window and service worker, so I can record which tab actually grabbed control of the interface and therefore exclude other tabs from doing so (i.e. simulate a failed attempt to take control).
As far as I can work out, MessageEvent.ports[0] could be a unique object which I could store in a Map somewhere, but I am not entirely convinced that the MessageChannel wouldn't close if the browser moved to the background.
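To make Approach 1 a little more concrete, here is a rough sketch of the bookkeeping inside the service worker; the {type, id} message shape and all the names are invented for illustration only.
const ports = new Map();      // tab id -> MessagePort handed over as MessageEvent.ports[0]
let controllingId = null;     // which tab currently holds the recorder

self.addEventListener('message', (event) => {
  const {type, id} = event.data;
  if (type === 'hello') {
    // tab announces itself and transfers one end of a MessageChannel
    ports.set(id, event.ports[0]);
  } else if (type === 'take-control') {
    // only grant control if nobody else currently holds it
    const granted = controllingId === null || controllingId === id;
    if (granted) controllingId = id;
    ports.get(id).postMessage({type: 'control', granted});
  } else if (type === 'goodbye') {
    // tab is going away; release control if it held it
    if (controllingId === id) controllingId = null;
    ports.delete(id);
  }
});
One caveat: an in-memory Map like this only lives as long as the worker does, so it would be lost whenever the browser restarts the service worker.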
Approach 2
Have a set of phantom URLs in the service worker that simulate all the different message types (and parameters) that were previously sent by the tab to its private web worker.
The fetch event provides a clientId (which I can use to distinguish who actually grabbed control), which I can then use to do Clients.get(clientid).postMessage() (or Clients.matchAll when a broadcast response is needed).
Code would be something like
self.addEventListener('fetch', (event) => {
  const requestURL = new URL(event.request.url);
  if (/^\/api\//.test(requestURL.pathname)) {
    event.respondWith(fetch(event.request)); // all api requests are a direct pass through
  } else if (/^\/service\//.test(requestURL.pathname)) {
    /*
      process these like message passing, with one extra message to say the client is going away.
    */
    if (urlRecognised) {  // placeholder for checking the phantom url against the known set
      event.respondWith(new Response('OK', {status: 200}));
    } else {
      event.respondWith(new Response(`Unknown request ${requestURL.pathname}`, {status: 404}));
    }
  } else {
    event.respondWith((async () => {
      const cache = await caches.open('recorder');
      const cachedResponse = await cache.match(event.request);
      const networkResponsePromise = fetch(event.request);
      event.waitUntil((async () => {
        const networkResponse = await networkResponsePromise;
        await cache.put(event.request, networkResponse.clone());
      })());
      // Return the cached response if we have one, otherwise return the network response.
      return cachedResponse || networkResponsePromise;
    })());
  }
});
The top of the fetch event just passes the standard API requests made by the client straight through. I can't cache these (although I could be more sophisticated and perhaps pre-reject those that aren't supported).
The second section matches the phantom URLs /service/something.
The last section is taken from Jake Archibald's offline cookbook and tries to use the cache, but updates the cache in the background if any of the static files have changed.
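For the /service/ section, the clientId-based routing might look roughly like the sketch below; the URL names and message shapes are invented, and the /service/ branch in the fetch handler would then just call event.respondWith(handleServiceRequest(event, requestURL)).
let controllingClientId = null;   // which tab (by clientId) currently holds the recorder

async function handleServiceRequest(event, requestURL) {
  const action = requestURL.pathname.slice('/service/'.length);
  if (action === 'take-control') {
    // only grant control if nobody else currently holds it
    const granted = controllingClientId === null || controllingClientId === event.clientId;
    if (granted) controllingClientId = event.clientId;
    return new Response(granted ? 'OK' : 'DENIED', {status: 200});
  }
  if (action === 'release-control' && controllingClientId === event.clientId) {
    controllingClientId = null;
    // let every open tab know so it can update its UI
    const tabs = await self.clients.matchAll({type: 'window'});
    tabs.forEach((tab) => tab.postMessage({type: 'control-released'}));
    return new Response('OK', {status: 200});
  }
  return new Response(`Unknown request ${requestURL.pathname}`, {status: 404});
}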
Approach 3
Similar to the approach above, in that we would have phantom URLs and use the clientId as a unique marker, but actually try to simulate a server-sent event stream with one URL.
I'm thinking the code would be more like:
...
} else if (/^\/service\//.test(requestURL.pathname)) {
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();
  // respond immediately with the readable side; messages are written into it below
  event.respondWith(new Response(stream.readable, {status: 200})); // probably need event-stream headers too
  // keep the service worker alive until we eventually close the link
  event.waitUntil((async () => {
    try {
      // nextMessageFromServerSideEventStream() is a placeholder for reading the upstream feed
      while (true) writer.write(await nextMessageFromServerSideEventStream());
    } catch (e) {
      writer.close();
    }
  })());
}
I am thinking that Approach 2 could be the simplest, given where I am now, but I am concerned that I can find nothing when searching for how to use service workers that discusses this phantom URL approach.
Can anyone comment on any of these approaches and provide guidance on how best to program the tricky bits? For instance, does the Approach 1 message channel close when the browser is moved to the background on an iPad? How do you really keep a response channel open, and does that get closed when the browser moves to the background in Approach 3?
The simple truth is that none of these approaches will work. What I didn't realise when I asked the question is that a service worker is re-run by the browser whenever there is something to do, and that run only lasts for the length of the processing of an event. Although event.waitUntil can prolong that, the only reference to how long I can find is that the browser is still at liberty to cancel it if it appears it might never settle. I can't imagine that over a period of several hours it won't get cancelled. So an EventSource will be closed, effectively terminating its link to the server.
So my only option to achieve what I want is to have the server carry on when the EventSource closes and find some other mechanism to release resources held on behalf of the client.
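For what it's worth, "some other mechanism" could be as simple as an expiry timer on the server; a Node-style sketch with invented names, based on the 5-minute renewal already mentioned in the question:
let controllingClient = null;
let controlExpiry = null;

// Grant or renew control; release it automatically if the client stops renewing.
function takeOrRenewControl(clientId) {
  if (controllingClient !== null && controllingClient !== clientId) return false;
  controllingClient = clientId;
  clearTimeout(controlExpiry);
  controlExpiry = setTimeout(() => { controllingClient = null; }, 5 * 60 * 1000);
  return true;
}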
I am developing a PWA that requires push notifications. Sadly, iOS/Safari does not support https://w3c.github.io/push-api/#pushmanager-interface for now, so I think I might have to wrap a native app around it in some way.
In Android (before their "Trusted Web Activities" were a thing) you could use a WebView to basically display a headless Chrome view in your app. What's the equivalent in iOS, and how does the interaction between push notifications and the web app work (the browser needs to jump to a specific page)?
One more thing I need is integration with our company's Mobile Device Management, which is Microsoft Intune. Having integrated MDMs in Android in the past, I know that this might be a major pain in the a**, so I'm considering building the wrapper myself for maximum flexibility. Another option would be something like Ionic; not sure yet.
This may not necessarily work in your situation, but I had the exact same issue with a PWA for Safari and I solved it by just using long polling. It gets around all of the limitations in Safari, and I was able to redirect and load sections within our SPA.
async function subscribe() {
    let response = await fetch("/subscribe");
    if (response.status == 502) {
        // Status 502 is a connection timeout error,
        // may happen when the connection was pending for too long,
        // and the remote server or a proxy closed it
        // let's reconnect
        await subscribe();
    } else if (response.status != 200) {
        // An error - let's show it
        showMessage(response.statusText);
        // Reconnect in one second
        await new Promise(resolve => setTimeout(resolve, 1000));
        await subscribe();
    } else {
        // Get and show the message
        let message = await response.text();
        showMessage(message);
        // Call subscribe() again to get the next message
        await subscribe();
    }
}

subscribe();
https://javascript.info/long-polling
I'm making a plug-in for Chrome. How do I add a delay between web requests?
chrome.webRequest.onBeforeRequest.addListener(
    function (details) {
        // I want every request to be delayed. example: 10 milliseconds
        return {cancel: details.url.indexOf("://www.evil.com/") != -1};
    },
    {urls: ["<all_urls>"]},
    ["blocking"]);
Thanks ...
I'm dealing with the same problem; I need to screen the user's requests in onBeforeRequest, running the URL by a remote server to see if it's on a ban list.
I'm thinking I'll redirect the user to a sort of hourglass page, then push on when the request comes back.
So in your case, with an arbitrary ten seconds, you return redirectUrl: [placeholder extension page], and that page counts down for ten seconds, then loads the page that was to be delayed.
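A rough, untested sketch of that idea follows. delay.html, the query parameter name, and the guard Set are all invented for illustration, and the extension page would probably also need to be listed under web_accessible_resources.
// Redirect each main-frame request to a local "hourglass" page, then let it through
// on the second pass so the redirect doesn't loop forever.
const alreadyDelayed = new Set();

chrome.webRequest.onBeforeRequest.addListener(
    function (details) {
        if (alreadyDelayed.has(details.url)) {
            alreadyDelayed.delete(details.url);   // second pass: let the request proceed
            return {};
        }
        alreadyDelayed.add(details.url);
        return {
            redirectUrl: chrome.runtime.getURL('delay.html') +
                         '?target=' + encodeURIComponent(details.url)
        };
    },
    {urls: ["<all_urls>"], types: ["main_frame"]},
    ["blocking"]);

// delay.html would then do something like:
//   const target = new URLSearchParams(location.search).get('target');
//   setTimeout(function () { location.replace(target); }, 10000); // the "ten seconds"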
I set up a simple SignalR chat web application using the Microsoft tutorial code.
<!--SignalR script to update the chat page and send messages.-->
<script>
    $(function () {
        // Reference the auto-generated proxy for the hub.
        var chat = $.connection.chatHub;
        // Create a function that the hub can call back to display messages.
        chat.client.addNewMessageToPage = function (name, message) {
            // Add the message to the page.
            $('#discussion').append('<li><strong>' + htmlEncode(name)
                + '</strong>: ' + htmlEncode(message) + '</li>');
        };
        // Get the user name and store it to prepend to messages.
        $('#displayname').val(prompt('Enter your name:', ''));
        // Set initial focus to message input box.
        $('#message').focus();
        // Start the connection.
        $.connection.hub.start().done(function () {
            $('#sendmessage').click(function () {
                // Call the Send method on the hub.
                chat.server.send($('#displayname').val(), $('#message').val());
                // Clear text box and reset focus for next comment.
                $('#message').val('').focus();
            });
        });
    });
</script>
It runs OK when debugging locally: I can send a message immediately after putting in a username. But when deployed to Azure, after entering the username I have to wait about 5 seconds before I can submit a new message (no response when clicking the Send button). After this first message, all following messages send instantly.
For me, it looks like it is slow when setting up the initial connection ($.connection.hub.start()).
Is this normal? How can I improve the performance of this simple application?
By default, WebSockets are not enabled on Azure, and by default the client tries different transports starting with webSockets. If webSockets does not work, it falls back to serverSentEvents and finally to longPolling. This takes time. Make sure you turn on WebSockets on Azure, or specify that you want to use only the serverSentEvents and longPolling transports.
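If you go the transport route, specifying the transports on start() skips the webSockets attempt entirely (SignalR 2.x jQuery client; pick whichever transports suit you):
// Sketch: name the transports up front so the client never tries webSockets first.
$.connection.hub.start({ transport: ['serverSentEvents', 'longPolling'] })
    .done(function () {
        console.log('Connected using ' + $.connection.hub.transport.name);
    });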
I have code that works: an MVC app using the Google Calendar API and Gmail API with OAuth2 authentication from Google. When the page is loaded, the data from both services is displayed, and I have a JavaScript timer to refresh the data at a certain interval (20 min). So everything works as expected until at some point (after some time interval, I guess) it starts throwing an exception: Error:"invalid_grant", Description:"", Uri:"". The exception has no InnerException and has only that error message and the info in the stack trace shown on the screenshot.
I would really appreciate it if someone has an idea what could be the reason for that error. And what is that "c:\code\google.com...." line in the stack trace message? I have no "c:\code" folder on my disk. I have found a few posts related to the same error, but unfortunately they didn't help me understand the problem. Maybe with more details, like this screenshot, someone has more info on the subject. Thanks a lot.
What I found out is that recycling the AppPool temporarily solves the problem. But then, after some time, it comes back again. What does it have to do with AppPool recycling?
Well, after more reading I found the reason for this exception.
https://developers.google.com/accounts/docs/OAuth2#expiration
https://developers.google.com/analytics/devguides/config/mgmt/v3/mgmtAuthorization?#helpme
Invalid Grant: The refresh token limit has been exceeded (default is 25).
That's all.
According to that documentation: There is currently a 25-token limit per Google user account. If a user account has 25 valid tokens, the next authentication request succeeds, but quietly invalidates the oldest outstanding token without any user-visible warning.
If the application attempts to use an invalidated refresh token, an invalid_grant error response is returned. The limit for each unique pair of OAuth 2.0 client and Google Analytics account is 25 refresh tokens (note that this limit is subject to change).
Understood: they limit the number of refresh tokens to 25, but they don't say what to do when you need to go above that limit. Arghhh... I have been experimenting and found a way to bypass that limitation. It seems indeed that recycling the application pool solves the problem (until the next 25-token limit is reached, of course). We can recycle the AppPool manually from IIS or by running the command:
c:\Windows\System32\inetsrv\appcmd.exe recycle apppool /apppool.name:AppPoolName
You can schedule that command to execute every night or every hour, whatever...
But I also found a programmatic solution.
Override the OnException method of your controller (this is for an MVC app):
protected override void OnException(ExceptionContext filterContext)
{
    if (filterContext.ExceptionHandled) return;
    // Log exception details
    Global.LogException(filterContext.Exception, EventLogEntryType.Error);
    if (filterContext.Exception.Message.Contains("invalid_grant"))
    {
        // Invalid Grant: The refresh token limit has been exceeded (default is 25).
        // https://developers.google.com/accounts/docs/OAuth2#expiration
        // https://developers.google.com/analytics/devguides/config/mgmt/v3/mgmtAuthorization?#helpme
        Global.RecycleAppPool();
        Global.LogException(new Exception("AppPool has been recycled"), EventLogEntryType.Information);
        Response.Redirect("Index");
    }
    var actionName = filterContext.RouteData.Values["action"].ToString();
    // Return friendly error message
    var errorMessage = string.Format("Action {0} failed with error: {1}. Please try again.", actionName, filterContext.Exception.Message);
    filterContext.Result = Content(errorMessage);
    filterContext.ExceptionHandled = true;
    base.OnException(filterContext);
}
Where RecycleAppPool is defined like this (this operation is fast, not like restarting IIS :):
// Requires a reference to Microsoft.Web.Administration (the IIS management API).
public static void RecycleAppPool()
{
    ServerManager serverManager = new ServerManager();
    ApplicationPool appPool = serverManager.ApplicationPools["Homepage"];
    if (appPool != null)
    {
        if (appPool.State == ObjectState.Stopped) appPool.Start();
        else appPool.Recycle();
    }
}
So, in the case of an invalid_grant exception, the exception is "swallowed": it is logged, the AppPool is recycled, and the limit for refresh tokens is reset. Hope this helps.
Please let me know if you find some issues.
It's also possible that the server clock is out of sync. For some reason mine was not able to sync against an internet clock and was running 6 minutes fast. Resetting it to the correct time worked.