How to improve prerender speed for Twitterbot requests? - twitter

For the project I am working on, I've set up a prerender service on the same server as the project and use Nginx to route social media crawler requests to it.
I have observed that if an authorized user shares a page to Twitter, it usually works, i.e. the meta tag image and text are rendered as a Twitter card. However, if the user shares other pages of the same project, the images usually fail to render when the user later views those tweets.
From the Nginx access log, it looks like the Twitterbots made their requests at the same time and the prerender service was too slow to render the pages: 499 statuses show up for the Twitterbot requests and 504s in the prerender log.
The server is hosted on UpCloud on a 1 CPU / 2 GB memory plan. The prerender service runs in a Docker container limited to 300 MB and caches rendered pages for 60 seconds. Because of the memory quota, I hesitate to increase the cache duration.
I have been studying the server logs and possible solutions, but haven't come up with anything other than refactoring the UI. Has anyone else struggled with this issue, and how did you overcome it?

That seems like a pretty underpowered server for running a Prerender server. You might want to at least give it more RAM and possibly another CPU to get better performance. 504s shouldn't be happening often at all.
Depending on how long your pages take to prerender, caching for much longer than 60 seconds is highly recommended. You probably won't see many cache hits within 60 seconds for a single URL (from users sharing it on Twitter) unless you have a very high-traffic site.
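Beyond more RAM, it may be worth looking at how the cache itself is configured. A cached entry is just the rendered HTML string, so even a few hundred cached pages usually cost far less memory than the headless Chrome instance doing the rendering. A minimal sketch, assuming the service is the open-source Node prerender package with the prerender-memory-cache plugin (the env var names below are that plugin's convention; double-check them against the versions you actually run):

// server.js - sketch of a prerender server with a longer in-memory cache.
// Assumes the `prerender` npm package and its `prerender-memory-cache` plugin.
process.env.CACHE_MAXSIZE = process.env.CACHE_MAXSIZE || '200';  // max pages kept in memory
process.env.CACHE_TTL = process.env.CACHE_TTL || '3600';         // seconds; e.g. 1 hour instead of 60s

const prerender = require('prerender');

const server = prerender();

// whatever plugins you already use would stay here
server.use(prerender.removeScriptTags());
server.use(prerender.httpHeaders());
server.use(require('prerender-memory-cache')); // repeat bot hits served from memory

server.start();

With a TTL measured in hours, the second and third Twitterbot requests for the same URL come out of memory instead of triggering another render, which is where the 499/504 pairs tend to come from.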

Related

NextJs - ServerSide Render CPU spike with certain API request

We noticed an interesting issue with the server-side rendering of our Next.js application.
The issue appeared when we added fetching of our translation key values (which are fetched via an API) to our server-side calls.
We made this decision to reduce CLS and give the user a better overall experience on first load. We didn't think much of it, since it's just another API call whose JSON is returned before the page renders.
Below you see a graph of our CPU usage on AWS. Between 10:00 (10 AM) and 14:00 (2 PM) the fetching of the translation keys was live in production; we had to restart manually every 2 hours for the servers to survive (that is the peak you are seeing). After 16:00 (4 PM) we removed this API call from that specific server-side call, and you can see the servers are stable. From 20:00 (8 PM) we enabled an automatic restart to be sure the servers would survive the night, but as you can see this was not necessary.
The API call in question just returns a JSON object. It can contain up to 700 lines of values, which should be fine, since our product listing page can have larger responses (up to 10k lines). Caching is enabled everywhere, both for next/static and for the API responses. We were also thinking it might have something to do with outgoing connections not closing in time (one way to test that theory is sketched after the setup list below), but I am making this post because no one really knows why this is an issue.
We are running the following setup:
Dockerized Application
Running on AWS Beanstalk
Cloudfront + Akamai
NextJs v12.2
Node v14.16
If anyone has the smallest idea in which direction to look, please let me know. We appreciate it.
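One direction that might be worth ruling out (a speculative sketch, not from the original post; the endpoint, file name and TTL are illustrative): if the translation JSON only changes on deploy, memoising it in module scope means concurrent server-side renders share a single outgoing request instead of each render opening its own connection to the translations API.

// lib/translations.js - hypothetical module-level cache for the translation API
import fetch from 'node-fetch'; // Node 14 has no global fetch

const TTL_MS = 5 * 60 * 1000;   // illustrative: refresh at most every 5 minutes
let cached = null;              // { data, fetchedAt }
let inFlight = null;            // promise shared by concurrent renders

export async function getTranslations() {
  if (cached && Date.now() - cached.fetchedAt < TTL_MS) return cached.data;
  if (!inFlight) {
    inFlight = fetch('https://api.example.com/translations') // placeholder URL
      .then((res) => res.json())
      .then((data) => {
        cached = { data, fetchedAt: Date.now() };
        inFlight = null;
        return data;
      })
      .catch((err) => {
        inFlight = null;
        if (cached) return cached.data; // fall back to stale data
        throw err;
      });
  }
  return inFlight;
}

getServerSideProps would then call getTranslations() instead of fetching directly, so each Node process holds one copy of the JSON rather than one per request, and the number of open outgoing connections stays flat under load.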

Why does my web site time out while running a JMeter load test?

I'm new to JMeter. I followed this tutorial to learn JMeter.
I tried to run a load test under the following conditions.
Number of Threads (Users) - 1000
Ramp-Up Period (in seconds) - 10
Loop Count - 5
While the test is running, I tried to load my website (after clearing the cache), but it takes much longer than usual to load the page. This issue doesn't occur when the browser has cached data.
Can someone please tell me why this is happening? Is it because when 1000 users load my site, it may crash or something?
Any kind of explanation will be appreciated.
If you try to load your website (after clearing the cache) while your JMeter test is running, it will always take longer than usual to load. Because you cleared the cache, the browser has to fetch and render the page resources again to load your desired page. Once loading is complete, loading the page again without clearing the cache takes less time: the browser does not fetch page resources every time, it saves them in its cache and reuses them to open that page in the shortest possible time. So the first time a browser loads a page always takes longer than loading that same page later (without clearing the cache).
Another point: as your JMeter test was running while you tried to load your website, loading takes longer anyway. Your application was already handling the requests sent by JMeter, so the extra load impacts your page response time.
A ramp-up time of 10 seconds for 1000 users!
That is not best practice. You have to give those 1000 users enough time to ramp up; 10 seconds is far too short. So during the JMeter test period it is expected that your browser will take an unusually long time to load the page, or will end up with a "Connection Timeout". That does not necessarily mean your application crashed; it is simply the result of an unrealistic test design in JMeter.
Could you elaborate on the type of web server software you are using, e.g.:
- Apache HTTPD 2.4 / Nginx / Apache Tomcat / IIS
And the underlying operating system?
- Windows (Server?) / Mac OS X / Linux
If your web server machine is not limited by the maximum performance of your CPU, disk, etc. (check the Task Manager), your performance might be limited by the configuration of Apache.
Could you please check the Apache HTTPD log files for relevant warnings?
Depending on your configuration (httpd.conf plus any files "Include"d from there) you may be using the mpm_winnt worker, which has a configurable number of worker threads, 64 by default, according to:
https://httpd.apache.org/docs/2.4/mod/mpm_common.html#threadsperchild
Once these are all busy new requests from any client (your browser, your loadtest, etc.) will have to wait for their turn.
Try and see what happens if you increase the number of threads!

For Ruby and Rails, how to print out the true page rendering time on the webpage?

If I use this in the controller:
def index
  @time_start_in_controller = Time.now
  ...
end
and at the end of the view, use
<%= "took #{Time.now - @time_start_in_controller} seconds" %>
But isn't the time measured in the view still not the true end of rendering, because the view output still has to be merged into the layout and so forth? What is a more accurate way (as accurate as possible) to print out the page generation time right on the webpage?
(Update: also, the console log shows the request taking 61 ms, but the page definitely takes 2 to 3 seconds to load, and the network I am using is fast, at home or at work, 18 Mbps or higher with a ping of maybe 30 ms.)
Update: it is a bit strange that if I use the HTTP performance test tool ab
ab -n 10 http://www.my-web-site.com:8080
it takes 3 seconds total for 10 requests. But if I use Firefox or Chrome to load the page, each page load takes about 3 seconds. This is tunneling from my work's computer back to my notebook running Rails 3, but that shouldn't make a difference because I run Bash locally for the above command and use Firefox locally too.
In a typical production environment, static content (images, CSS, JS) is handled by the web server (e.g. Apache, Nginx), not your Rails server, so you should check those logs as well. If you are serving static content from your Rails server, that could be your problem right there.
If your browser time is slow but the time taken in Rails (according to the logs) is fast, that can mean many things, including but not limited to:
network speed is slow
your DNS server is slow and the browser can't resolve your domain quickly (this happens, for instance, if you use GoDaddy as your DNS provider, since they throttle DNS lookups)
the request concurrency exceeds how many threads you have in Rails
One way to debug these types of performance issue is to put something in front of the Rails server (for example HAProxy) and turn its logging up to full. It will show how many requests are waiting and how long the actual request/response transfer took, along with how long your Rails thread took to process the request.
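A different angle, not from the answers above: if what you actually want to print is the time the user experiences (the 2-3 seconds, rather than the 61 ms Rails logs), you can measure it in the browser with the Navigation Timing API and write it into the page. A sketch (the element id is made up): put an empty element such as <span id="page-load-time"></span> in the layout and include:

// prints total page load time (navigation start -> load event) into the page
window.addEventListener('load', function () {
  setTimeout(function () {           // wait a tick so loadEventEnd is populated
    var nav = performance.getEntriesByType('navigation')[0];
    var el = document.getElementById('page-load-time');
    if (nav && el) {
      el.textContent = 'took ' + (nav.duration / 1000).toFixed(2) + ' seconds';
    }
  }, 0);
});

That number includes network, asset loading and browser rendering, so it accounts for the gap between what Rails logs and what the user sees.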

Heroku | Different performance parameters for different parts of your application

I have a Rails 3 application hosted on Heroku. It has a pretty common configuration: there is a client-facing part of the application, say www.myapplication.com, and an admin part, admin.myapplication.com.
I want the client-facing part of my application to be fast, and I don't really care how fast my admin module is. What I do care about is that usage of the admin site must not slow down the client-facing part of the application.
Ideally the client side of the app would have 3 dedicated dynos, and the admin side 1 dedicated dyno.
Does anyone have any idea on the best way to accomplish this?
Thanks!
If you split the applications you're going to have to share the database connections between the two apps. To be honest, I'd just keep it as one single app and give it 4 dynos :)
Also, dynos don't increase performance, they increase throughput, so you're capable of dealing with more requests a second.
For example,
Roughly: if a typical page response takes 100 ms, 1 dyno can process 10 requests a second. If you only have a single dyno and your app suddenly receives more than 10 requests per second, the excess requests will be queued until the dyno is freed up to process them. Also, requests taking longer than 30 s will be timed out.
If you add a second dyno, requests are shared between the 2 dynos, so you'd now be able to process 20 requests a second (in an ideal world), and so on as you add more dynos.
And remember a dyno is single-threaded, so if it's doing anything at all (rendering a page, building a PDF, even receiving an uploaded image), it's busy and unable to process further requests until it has finished; if you don't have more dynos, requests will be queued.
My advice is to split your application into its logical parts. Having a separate application for the admin interface is a good thing.
It does not have to be on the same domain as the main application. It could have a global client IP restriction or just a simple global Basic Auth.
Why complicate things and stuff two things into one application? Splitting also lets you experiment more with the admin part and redeploy it without affecting your users.

Web App Performance Problem

I have a website that is hanging every 5 or 10 requests. When it works, it works fast, but if you leave the browser sitting for a couple of minutes and then click a link, it just hangs without responding. The user has to hit refresh a few times in the browser and then it runs fast again.
I'm running .NET 3.5 and ASP.NET MVC 1.0 on IIS 7.0 (Windows Server 2008). The web app connects to a SQL Server 2005 DB running locally on the same instance. The DB has about 300 MB of RAM, and the rest is free for web requests, I presume.
It's hosted on GoGrid's cloud servers, and this instance has 1GB of RAM and 1 Core. I realize that's not much, but currently I'm the only one using the site, and I still receive these hangs.
I know it's a difficult thing to troubleshoot, but I was hoping that someone could point me in the right direction as to possible IIS configuration problems, or what the rough average hardware requirements would be for these technologies per 1000 users, etc. Maybe the minimum for a web server should be 2 cores, so that if one is busy you still get a response. Or maybe the Slashdot people are right and I'm an idiot for using Windows, period, lol. In my experience, though, it's usually MY algorithm/configuration error and not the underlying technology's fault.
Any insights are appreciated.
What diagnostics are available to you? Can you tell what happens when the user first hits the button? Does your application see that request and then take ages to process it, or is there a delay and then your app gets going and works as quickly as ever? Or does that first request just get lost completely?
My guess is that there's some kind of paging going on; I believe Windows tends to have a habit of putting non-recently-used apps out of the way and then paging them back in. Is that happening to your app, or the DB, or both?
As an experiment, what happens if you add a sneaky little "howAreYou" page to your app that does the tiniest possible amount of work, such as getting a usage count from the DB and displaying it? Have a little monitor client hit that page every minute or so (a minimal script for this is sketched after this answer) and measure performance over time. Spikes? Consistency? Does the very presence of activity maintain your application's presence in memory and prevent paging?
Another idea: do you rely on any caching? Do you have any kind of aging on that cache?
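A sketch of such a monitor client (assumes Node 18+ for the global fetch; the URL is a placeholder):

// hits the health page once a minute and logs status + latency, so
// paging or app-pool recycling shows up as periodic slow responses
const url = 'http://your-site.example/howAreYou'; // placeholder

setInterval(async () => {
  const start = Date.now();
  try {
    const res = await fetch(url);
    console.log(new Date().toISOString(), res.status, (Date.now() - start) + ' ms');
  } catch (err) {
    console.log(new Date().toISOString(), 'FAILED', (Date.now() - start) + ' ms', err.message);
  }
}, 60 * 1000);

If the first hit after a quiet period is consistently slow and the rest are fast, that points at paging or the app pool idling out rather than at your code.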
Your application pool may be shutting down because of inactivity. There is an Idle Time-out setting per pool, in minutes (it's under the pool's Advanced Settings - Process Model). It will take some time for the application to start again once it shuts down.
Of course, it might just be the virtualization like others suggested, but this is worth a shot.
Is the site getting significant traffic? If so, I'd look for poorly optimized queries or queries that are being run in loops.
Your configuration sounds fine assuming your overall traffic is relatively low.
Too many database connections without being released?
Connecting to some service/component that is causing a timeout?
Bad resource release?
Network traffic?
Looping queries or loops in code logic?
