In a Rails 3.2 app I have a view that pulls in information from an external API. On slow connections, this severely increases the page load time and hurts the user experience.
How can I move this into an asynchronous process so that the rest of the page loads first, and the external information is rendered later once it has been fetched and is available?
The external data is large and complex, and I don't think it is suitable for caching in the database or in a variable.
I'm aware of delayed_job and similar gems, but these seem more suited to queuing database work than to deferring rendering in the view.
What other options are available to me?
It seems like a large data set is perfectly suitable for caching on your local server.
Keep in mind that a long request will lock your Rails process/thread, which can't serve any other requests while it waits for your API call to finish.
That said, you can always trigger an Ajax request once the rest of the page has loaded.
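For example (in Rails 3.2 terms; the controller, action, and partial names here are assumptions), you can render the page without the external data and fetch it afterwards:

```ruby
# app/controllers/widgets_controller.rb
class WidgetsController < ApplicationController
  def show
    # Render the page immediately, without touching the external API.
  end

  def external_data
    # Hit via Ajax after the page has loaded; the slow external call
    # happens here instead of blocking the initial render.
    @data = ExternalApi.fetch # hypothetical API client
    render partial: "widgets/external_data", layout: false
  end
end
```

The view then contains an empty placeholder element and a small jQuery snippet on document ready, e.g. $('#external-data').load('/widgets/external_data'), so the user sees the rest of the page while the external data loads.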
I have a logging query (a simple INSERT) that happens on every single request.
For this one query (the one that happens on every page load), I want to set a limit of 500ms so that if the database is locked, slow, or down, the site doesn't hang while it waits to connect and write.
Is there a way to specify a timeout on a per-query basis, so that I can abort the LoggedRequest.create! if it's taking too long?
I don't want to set it in my config because I have many other queries that shouldn't have timeouts that low.
I'm using Postgres 11.7.
I also don't know how I feel about setting a timeout for the entire session, because I don't want that connection to be returned to the pool and shared with other queries that can't have that timeout.
Rails 6 introduces event-based triggers for notifications, logging, etc. that come in very handy, provided you are using Rails 6 or can afford to migrate to it. Here's a useful post that demonstrates creating event-based triggers for notifications/logging: https://pramodbshinde.wordpress.com/2020/03/20/custom-events-tracking-with-activesupportnotifications-and-audited/
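As a rough illustration of the idea (the event name and payload here are assumptions):

```ruby
require "active_support/notifications"

# Subscribe once, e.g. in an initializer; the block runs for every
# matching event that is instrumented anywhere in the app.
ActiveSupport::Notifications.subscribe("logged_request.app") do |name, start, finish, _id, payload|
  Rails.logger.info("#{name} #{payload[:path]} took #{finish - start}s")
end

# Emit the event wherever a request should be recorded:
ActiveSupport::Notifications.instrument("logged_request.app", path: "/checkout")
```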
If, for some reason, you cannot use Rails 6, perhaps this article might help you find some answers: https://evilmartians.com/chronicles/the-silence-of-the-ruby-exceptions-a-rails-postgresql-database-transaction-thriller
If I were you, I would also consider using AJAX to make a fire-and-forget request to the server for logging or anything else that is not critical to the normal functioning of the application.
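If you do want a timeout scoped to just this one query, one Postgres-level approach (an assumption to verify, not something from the articles above) is SET LOCAL statement_timeout, which applies only to the current transaction and resets when it ends, so pooled connections are unaffected:

```ruby
# Sketch: only the logging INSERT runs under the 500ms limit.
begin
  LoggedRequest.transaction do
    # SET LOCAL is scoped to this transaction only.
    LoggedRequest.connection.execute("SET LOCAL statement_timeout = '500ms'")
    LoggedRequest.create!(path: request.path) # attributes are hypothetical
  end
rescue ActiveRecord::StatementInvalid => e
  # Raised (wrapping PG::QueryCanceled) when the timeout fires; log and
  # move on so a slow database never takes the page down with it.
  Rails.logger.warn("request logging skipped: #{e.message}")
end
```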
I have a web application that sometimes gets a bit heavy and takes a little while to load.
I would like to serve a loading page while the page the user accessed is rendered on the server. Now, because this is not Ajax or a response to an event, I'm not really sure how to proceed here.
I came up with a rather ugly alternative that works like this:
1: User accesses www.myapp.com/heavypage.
2: If the request comes from myapp.com/loading, serve myapp.com/heavypage; if it comes from anywhere else, redirect to myapp.com/loading. That page is basically a blue screen with a loading gif that fires a redirect once it has loaded: onload="redirectToHeavyPage()".
3: While the server processes the redirect (which takes time), the user is looking at a pretty loading page.
This way, I was able to show some information to the user while the heavypage action was being processed on the server.
It works, but I feel like it is totally the wrong way of doing this, even though it works exactly as I expected, especially on slow connections (like GPRS). Keep in mind that I can't put the loading gif anywhere on the heavy page itself, because it will only be served once the server is done processing everything.
What would be the proper way of doing this?
While your scheme works, a cleaner way to approach this is the following:
1. When the browser requests /heavypage, return only an HTML shell with a loading animation that requires no processing or database queries. Preferably this step is skipped entirely by caching the HTML in the browser, a CDN, or a reverse-proxy cache.
2. In that HTML, asynchronously load the expensive HTML via JavaScript. You do not need to wait for the onload event; you can trigger the request directly from an inline script tag.
3. In the response callback, render the received HTML into the target element, e.g. the body.
This scheme works irrespective of whether you are using a single-page-application framework like React or Angular or classic server-side rendering. The only difference is whether step 2 ships an HTML snippet or some JSON that is rendered client-side.
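A minimal sketch of steps 1-3 in Rails terms (the routes, controller, and partial names are assumptions; the client-side fetch lives in the shell template):

```ruby
# config/routes.rb
get "/heavypage", to: "pages#shell"           # step 1: cheap, cacheable shell
get "/heavypage/content", to: "pages#content" # step 2: the expensive part

# app/controllers/pages_controller.rb
class PagesController < ApplicationController
  def shell
    # Renders only the loading animation plus an inline script that
    # immediately requests /heavypage/content and injects the returned
    # HTML into the target element (steps 2 and 3).
  end

  def content
    # All database queries and heavy processing happen here, off the
    # critical rendering path.
    render partial: "pages/heavy_content", layout: false
  end
end
```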
If you are using HTTP/2, there is another, slightly more efficient solution using server push: when the request from step 1 is received on the server (or the CDN), the computationally expensive data can be shipped without waiting for the client request, using the new push functionality. This is only necessary, however, if the round-trip latency is slower than the time your /heavypage takes to render.
So I built a website that uses the Twitch.tv API, a live-streaming site for gaming. The requests are long and slow, and I would like to cache them somehow. The problem is that there are a lot of dynamic attributes, such as whether a stream is still online or how many viewers it has. Since traffic to my website is low at the moment, expiring the cache early isn't going to help much. I also have a page that lists all the live streams and queries whether each one is online, so even if no one is online it still takes a while to load. Is there any way to retrieve the API faster without caching?
Here is the Twitch.tv API doc.
Since you don't own the Twitch.tv API, unfortunately, I would say there is really nothing you can do to make its calls faster.
The good news is that you can cache the calls you make to them, which will make things appear faster to your users.
The way to cache the calls is to create a key and then cache the JSON returned from the API. For the key, I would just use the URL you are calling. Then give the cached value an expiration time of a few minutes; when it expires, you make another API call to repopulate the cache.
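In Ruby/Rails terms, for instance (the helper name and the five-minute expiry are assumptions):

```ruby
require "net/http"
require "json"

# Cache the parsed API response, keyed by the request URL; after
# expires_in elapses, the block runs again and repopulates the cache.
def fetch_twitch(url)
  Rails.cache.fetch(url, expires_in: 5.minutes) do
    JSON.parse(Net::HTTP.get(URI(url)))
  end
end
```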
Also, I'd look at Varnish (https://www.varnish-cache.org/), which does HTTP caching really well. It could work well for you, and it has the concept of a grace period that tries to hide the cost of the expensive calls made when the cache expires.
I'm aware of the model that involves a scheduled task running in the background which executes jobs registered by a web request, but how about this for an idea that keeps everything within ASP.NET...
User uploads a CSV file with, perhaps, several thousand rows. The rows are persisted to the database. I think this would take maybe a minute or so, which would be an acceptable wait.
The request returns to the browser, and then an automatic Ajax request goes back to the server to fetch, say, ten rows at a time and process them. (Each row requires a number of web service requests.)
The Ajax call returns, the display is updated, and then another automatic Ajax request goes back for more rows. This repeats until all rows are completed.
If the user leaves the web page, they could return later and restart the job.
Any thoughts?
Cheers, Ian.
If I understand you correctly, you don't actually need any "interaction" between background jobs and the long-running request; you just want to "launch" background jobs from incoming requests? Not such a good idea. Take a look at the Quartz.NET project: it is a scheduler embeddable in an ASP.NET application, and it will handle this for you without needing requests. Of course, if the app pool shuts down, your scheduler goes down with it, but you can't guarantee that won't happen even with your long-running-requests solution, which depends on a browser waiting on the other side.
Also take a look at this interesting article from Phil Haack on this topic, with his own little scheduler library specific to ASP.NET:
http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
A server-side program (or ideally a service) could still be quick and dirty and would be more reliable. You could still do step 1 as you proposed: upload the file and insert the data (don't forget to increase the maxRequestLength value in web.config). Then have a program running on the server that checks for new records and processes them.
If the user needs status, you could store an entry in the database for each file and update that record when the import is complete.
Maybe I'm reading the question and interpreting it in a weird way, but why couldn't you read the file into a database and store, in a table, the last line of the file you've processed? You could then track progress via the DB and just send small JSON objects telling the user how far along you are. That way, if their connection drops you can keep processing their request, and if they return later you can tell them how far along the job is. Also, if multiple clients are connecting, you can use the DB to queue and throttle (by serializing) the workload. And if the user connects mid-job with another file, their new request will be queued up after their current job.
Does Rails provide a way to execute code on the server after the view is rendered and after the response is sent to the browser?
I have an action in my application that performs a lot of database transactions, which results in a slow response time for the user. What I'd like is to (1) perform some computations, (2) send the results of those computations to the browser, and then (3) save the results to the database.
It sounds like you want to implement a background job processor. This allows you to put the job into a queue to be processed asynchronously and for your users to not notice a long page load.
There are many options available. I have used delayed_job and had no issues with it. Another one that has been popular lately, which I have not used, is Resque.
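As a rough sketch of the compute/respond/persist split with delayed_job (the model and method names are assumptions, and the arguments must be serializable):

```ruby
class ComputationsController < ApplicationController
  def create
    results = ExpensiveComputation.run(params) # (1) perform the computations
    render json: results                       # (2) send them to the browser
    # (3) delayed_job serializes this call into its jobs table; a separate
    # worker process runs it, so the slow writes never delay the response.
    ComputationResult.delay.persist(results)
  end
end
```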