SignalR vs setTimeout - asp.net-mvc

Part of my MVC view page gets refreshed every 30 seconds after fetching some resource from the server. I've been using setTimeout to trigger a JavaScript method that fetches data from the server asynchronously, compares it with the old data, and updates a div tag if it has changed. Now, I'm thinking of creating a timer in the Global.asax class, starting it in the Application_Start event, and in the timer's Elapsed event, getting the data and sending it to all the clients via SignalR only if the data has changed.
Will there be any advantage in using SignalR over setTimeout here?
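A simplified sketch of what I'm doing now (the URL, element id and use of jQuery are just for illustration):

    // poll every 30 seconds and only touch the DOM when the data has changed
    // (URL and element id are illustrative)
    var lastData = null;
    function refresh() {
        $.get('/Dashboard/GetLatestData', function (data) {
            if (data !== lastData) {
                lastData = data;
                $('#liveSection').html(data);
            }
            setTimeout(refresh, 30000);   // schedule the next check in 30 seconds
        });
    }
    setTimeout(refresh, 30000);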

The advantage in this case is that you'll avoid unnecessary trips to the server when the data hasn't changed. Using SignalR, you can broadcast the data to all clients only when it has changed.
The other advantage is that SignalR will push from the server to the browser using the best transport available, without you having to worry about it. This could be WebSockets if your server runs on Windows Server 2012 with ASP.NET 4.5 (probably a future consideration), Server-Sent Events if the client is Chrome, Firefox or Opera, or Forever Frame if the client is IE. Either way, SignalR takes care of the transport management for you.
Depending on where your data is stored and how it's updated, you might even be able to do away with the timer completely and just broadcast the data to all clients immediately whenever it changes. If it's updated by another action method on a controller, just broadcast to clients from there. If it's updated by some other process writing directly to the DB, you could set up a SQL query notification in your application (in App_Start) to get alerted when it changes and broadcast at that point.
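On the browser side, the receiving end could look something like this sketch using the SignalR JavaScript client and its generated hub proxy (the hub, method and element names are only illustrative):

    // hub, method and element names are illustrative; runs only when the server broadcasts a change
    var hub = $.connection.liveDataHub;
    hub.client.updateData = function (html) {
        $('#liveSection').html(html);
    };
    $.connection.hub.start();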

Related

Chromium Edge - Javascript seems to be affected by automatic checks for Edge updates

We have a single page web application. One of the functions of the application is to supervise the connection path from the client back to the server. This is implemented with a periodic Ajax HTTP request in JavaScript to the server every 60 seconds. This request acts as a heartbeat.
After a session is started, the server looks for that heartbeat. If it fails to receive a heartbeat request after a reasonable amount of time, it takes specific action.
The client also looks for a response to that heartbeat request. If it fails to receive a response after a reasonable amount of time, it displays a message on the screen via javascript.
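A simplified sketch of the heartbeat logic (the URL, timings and helper function are made up for illustration):

    setInterval(function () {
        var responded = false;
        var xhr = new XMLHttpRequest();
        xhr.onload = function () { responded = true; };
        xhr.open('GET', '/heartbeat');                 // illustrative endpoint
        xhr.send();
        setTimeout(function () {
            if (!responded) {
                showConnectionLostMessage();           // hypothetical helper that displays the failure message
            }
        }, 10000);                                     // "reasonable amount of time" to wait for a response
    }, 60000);                                         // one heartbeat every 60 seconds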
We are getting reports from the field where a Chromium version of Edge is failing. Communication between the client and server is apparently failing. The server is seeing those heartbeat requests cease – and taking that specific action. However, the client is not taking the expected action on its side. It's not displaying the message indicating a failed heartbeat request. It almost appears as though the JavaScript stopped running altogether.
The thing is, though… The customer has reported that if they disable automatic updates to Microsoft Edge the application runs fine. If the checking of updates is allowed to occur, the application eventually fails as described above. Note that this is apparently happening when Edge is just checking for updates - it's already up to date.
Updates are being turned off using several GUID-named registry keys at [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\EdgeUpdate].
Any thoughts?

How to update a web page from requests made by another client (in rails)?

Here is my need:
I have to display some information on a web page.
The web browser is actually on the same machine (localhost).
I want the data to be updated dynamically at the server's initiative.
Since HTTP is a request/response protocol, I know that to get this functionality, the connection between the server and the client (which is local here) should be kept open in some way (WebSockets, Server-Sent Events, etc.).
Yes, "realtime" is really a fashion trend nowadays and there are many frameworks out there to do this (Meteor, etc.).
And indeed, it seems that Rails supports this functionality too (Server-Sent Events in Rails 4 and ActionCable, which uses WebSockets, in Rails 5).
So achieving this functionality would not be a big deal, I guess...
Nevertheless, what I really want is to trigger an update of the web page (displayed here locally) from a request made by another client.
Here is the scenario in more detail:
At the beginning, the browser connects to the (local) server (green arrows).
I guess that a thread is executed where all the session data (instance variables) are stored.
In order to use some "realtime" mechanisms, the connection remains open and therefore the thread Y is not terminated. (I guess this is how it works)
A second user connects (blue arrows) to the server (it may or may not be the same web page) and performs some action (e.g. posting a form).
Here the response to that external client does not matter. Just an HTTP OK response is fine. But a confirmation web page could also be returned.
But in any case, thread X (and/or its connection) has no particular reason to be kept alive.
Ok, here is my question (BTW thank you for reading me thus far).
How can I echo this new data on the local web browser ?
I see two different ways to do this:
Path A: Before terminating, thread X passes the data (its instance variables) to thread Y, whose connection is still open. Thus the server is able to update the web browser.
Path B: Before terminating, thread X sends a response directly to the web browser using a particular socket.
Which mechanisms should I use in either path to achieve this functionality?
For path A, how can I exchange data between threads?
For path B, how can I use an already open socket?
And which of these two paths (or another one) is actually the best way to do it?
Again, thank you for reading me this far, and sorry for my bad English.
I hope I've been clear enough to expose my need.
You are overthinking this. There is no need to think in terms of such low-level mechanisms as threads and sockets. Most (all?) pub-sub live-update tools (ActionCable, Faye, etc.) operate in terms of "channels" and "events".
So, your flow will look like this:
Client A (web browser) makes a request to your server and subscribes to events from channel "client-a-events" (or something).
Client B (the other browser) makes a request to your server with instructions to post an event to channel "client-a-events".
Pub-sub library does its magic.
Client A gets an update and updates the UI accordingly.
Check out this intro guide: Action Cable Overview.
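As a rough sketch of the Client A side with Action Cable's JavaScript client (assuming the default cable.js setup that defines App.cable; the channel and element names are just placeholders):

    // channel and element names are placeholders
    App.cable.subscriptions.create({ channel: "ClientAEventsChannel" }, {
      received: function (data) {
        // update the local page with whatever client B posted
        document.getElementById("events").insertAdjacentHTML("beforeend", "<li>" + data.message + "</li>");
      }
    });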

How to keep the application alive without a client

My goal is to send an email out every 5 min from my application even if there isn't a browser open to the application.
I'm using FluentScheduler to manage the tasks, which works up until the server decides to kill the application due to inactivity.
My big constraints are:
I can't touch the server. It is how it is and I have to work around it.
I can't rely on a client refreshing a browser or anything else along the lines of using client side scripts.
I can't use any scheduler that uses a database.
What I have been focusing on is trying to create an artificial postback.
Note: The server is load balanced, so a solution could make use of that.
Is there any way that I can keep my application from getting killed by the server?
You could use a monitoring service like https://www.pingdom.com/ to ping the server at regular intervals. Just make sure it hits an endpoint that invokes .NET code and not a static resource.

How to dynamically and efficiently pull information from database (notifications) in Rails

I am working in a Rails application and below is the scenario requiring a solution.
I'm running some time-consuming processes in the background using Sidekiq and saving the related information in the database. Now, when each of the processes completes, we would like to show a notification in a separate area saying that the process has finished.
So, the notifications area really needs to pull things from the back-end (this notification area will be available on every page) and show them dynamically. I thought Ajax must be an option, but I don't know how to trigger it for a particular area only. Or is there any other option by which the client can fetch dynamic content from the server efficiently without creating much traffic?
I know it would be a broad topic to say about. But any relevant info would be greatly appreciated. Thanks :)
You're looking at a perpetual connection (using either SSEs or WebSockets), something Rails has started to address with ActionController::Live
Live
You're looking for "live" connectivity:
"Live" functionality works by keeping a connection open
between your app and the server. Rails is an HTTP request-based
framework, meaning it only sends responses to requests. The way to
send live data is to keep the response open (using a perpetual connection), which allows you to send updated data to your page on its
own timescale
The way to do this is to use a front-end mechanism to keep the connection "live", and a back-end stack to serve the updates. The front-end will need either SSEs or a WebSocket, which you'll connect to with JS.
SSEs and WebSockets basically give you access to the server outside the scope of "normal" requests (SSEs, for example, use the text/event-stream MIME type).
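For example, the browser side of an SSE stream is just an EventSource; this is only a sketch, assuming the server streams JSON events and using a made-up path:

    // path and element id are illustrative
    var source = new EventSource('/notifications/stream');
    source.onmessage = function (event) {
        var data = JSON.parse(event.data);
        document.getElementById('notifications').insertAdjacentHTML(
            'beforeend', '<li>' + data.message + '</li>');
    };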
Recommendation
We use a service called Pusher
This is basically a third-party WebSocket service to which you can push updates. Once the service receives an update, it sends it to any clients subscribed to the relevant channel. You can split what it broadcasts using the pub/sub (channel) pattern
I'd recommend using this service directly (they have a Rails gem, and I'm not affiliated with them); it also provides a super simple API
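As a rough idea of the browser side with the pusher-js client (the key, channel and event names are placeholders):

    // key, channel and event names are placeholders
    var pusher = new Pusher('your-app-key');
    var channel = pusher.subscribe('notifications');
    channel.bind('job-finished', function (data) {
        document.getElementById('notifications').insertAdjacentHTML(
            'beforeend', '<li>' + data.message + '</li>');
    });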
Other than that, you should look at the ActionController::Live functionality of Rails
The answer suggested in the comment by @h0lyalg0rithm is one option to go with.
However, there are more primitive options:
Use setInterval in JavaScript to perform a task every x seconds, i.e. polling.
Use jQuery or native Ajax to poll a controller/action via a route and have the controller return the data as JSON.
Use document.getElementById or jQuery to update data on the page.
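A minimal polling sketch along those lines, assuming jQuery and made-up route/element names:

    // route, element id and interval are illustrative
    setInterval(function () {
        $.getJSON('/notifications/recent', function (data) {
            $('#notifications-area').text(data.count + ' background jobs completed');
        });
    }, 10000);   // poll every 10 seconds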

jquery .ajax request blocked by long running .ajax request

I am trying to use jQuery's .ajax functionality to make a progress bar.
A request is submitted via .ajax, which starts a long-running process. Once submitted, another .ajax request is called on an interval to check the progress of this process. A progress meter is then updated using this information.
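Simplified, the pattern looks roughly like this (the URLs, element id and interval are made up):

    // URLs, element id and interval are illustrative
    $.ajax({ url: '/Process/Start', type: 'POST' });     // kicks off the long-running process

    var poll = setInterval(function () {
        $.ajax({
            url: '/Process/Progress',
            success: function (data) {
                $('#progressBar').width(data.percent + '%');
                if (data.percent >= 100) { clearInterval(poll); }
            }
        });
    }, 1000);                                            // check progress every second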
However, the progress .ajax call only returns once the long-running process has completed. It's like it's being blocked by the initial request.
The weird thing is this process works fine on dev but is failing on the deployment server. I am running on IIS using ASP.NET MVC.
Update: Apparently, it is browser-related because it is working fine on IE 7 but is not working on IE 8. This is strange because IE 8 allows up to 6 connections on broadband, whereas IE 7 only allows 2 requests per domain.
Update 2: I think it's a local issue because it appears to be working fine on another IE 8 machine.
The server will only run one request at a time per user session. When you send the requests to get the progress status, they are queued behind the long-running request.
The solution is to make the page that returns the status sessionless, using EnableSessionState="false" in the @ Page directive. That way it's not associated with any user session, so the request isn't queued.
This of course means that you can't use session state to communicate the progress state from the thread running the process to the thread getting the status. You have to use a different way of keeping track of running processes and send some identifier along with the requests that get the status, so that you know which user they came from.
Some browsers (in particular, IE) only allow two requests to the same domain at the same time. If there are any other requests happening at the same time, then you might run into this limitation. One way around it is to have a few different aliases for the domain (some sites use "www1.example.com" and "www2.example.com", etc.)
You should be able to use Firebug or Fiddler to determine how many requests are in progress, etc.
Create an asynchronous handler (IHttpAsyncHandler) for your second Ajax request.
Use any parameters required via the .ashx query string in order to process what you want, because the HttpContext won't have what you need. You will barely have access to the Application object.
Behind the scenes, ASP.NET will create a thread for you from the CLR thread pool, not the application pool, so you'll get an extra performance gain with IHttpAsyncHandler.
