I have a site where every page takes around 20 seconds to load (no matter what it does).
So I put in a /scripts/test.html that isn't part of a route and it still takes this long... no DB hit, nothing.
I renamed the web.config to _web.config and it loads instantly; rename it back and it's back to around 20-30 second load times.
Running the application locally, I put a breakpoint on RegisterRoutes in Global.asax.cs and ran it: /scripts/test.html does not hit the breakpoint, while the normal site does (the site loads instantly locally on the same database/code).
Server is Mosso IIS7/SQL Server 2008 Cluster
The site is being hit pretty hard... any help, or things to test/debug, would be appreciated.
A few things to try:
Try taking a look with FileMon/Process Monitor and see if there is a ton of disk activity.
If the above is not an issue, install an instance of dotTrace by JetBrains. Profile the app and see if there is some memory or performance issue that is not apparent on your local box.
I saw another related issue that was solved by disabling IPv6, maybe try that.
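If none of that turns anything up, it can also help to measure where the time actually goes inside the ASP.NET pipeline. Here's a rough sketch (just an illustration, assuming a standard Global.asax.cs and an App_Data folder the app pool can write to; adjust names and paths for your setup) that stamps each request on entry and logs the elapsed time on exit:

using System;
using System.Diagnostics;
using System.IO;
using System.Web;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Start a stopwatch for this request.
        Context.Items["RequestTimer"] = Stopwatch.StartNew();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var timer = Context.Items["RequestTimer"] as Stopwatch;
        if (timer == null) return;
        timer.Stop();

        // Append one line per request: URL and elapsed milliseconds.
        File.AppendAllText(Server.MapPath("~/App_Data/request-timing.log"),
            Context.Request.RawUrl + "\t" + timer.ElapsedMilliseconds + "ms" + Environment.NewLine);
    }
}

Comparing the logged times for /scripts/test.html against a normal page should at least tell you whether the delay is happening inside the pipeline or before the request ever reaches ASP.NET.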
Let me say first that I'm quite new and inexperienced with Rails. Today I tried to update an image in a Rails app hosted on Heroku. Anyway, this is the simple flow I followed, as I have done other times before:
Add updated image to the image folder
Precompile the assets with rake assets:precompile
Add and commit all changes
Push to heroku
Until this point all seems fine: I open Chrome to check my app from my domain and it's all there as expected.
The problem is that if I refresh the page all the images disappear (like they have never been loaded). This does not happen locally.
If I do a Ctrl+F5 it all comes back nicely, but I lose everything again on a simple refresh... and so on.
Has anyone experienced something similar? I understand this might be hard to answer as there is not much code to show. Let me know if I can give more details.
On a final note, it seems that it all works normally on a friend's machine (that is, refreshing doesn't give this problem). I'm thinking something might be wrong with my Chrome settings here? I don't remember having changed anything recently though.
This is very weird and quite annoying; some help/insights would be great.
UPDATE: This does indeed seem to happen only on my machine at work. I checked from a couple of other computers at home and the app is displayed fine (without any refreshing problem).
Did you check whether cookies are disabled in your browser for the Heroku site in particular?
I have just tested this with an image-based website (https://unsplash.com/). When cookies are disabled for that website, pressing F5 clears all the images, while pressing Ctrl+F5 brings those lost resources back, just as in your case.
Enabling cookies resolved the issue in my case.
Recently I've run into an issue where the public files of a Rails application only load if there is a cookie present. I originally noticed this because Google reported that it couldn't find our robots.txt file. Later I realized that it seems to apply to all of our public files for some reason.
For instance, upon visiting this site, the content is blank. http://80000hours.org/robots.txt
(If it's not blank, remove the cookies from the website).
However, when I load the main page at http://80000hours.org/, and then go back to /robots.txt, the page loads correctly.
I'm quite confused about what could cause this issue and how to go about debugging it. Looking back at my commits, it doesn't seem like I changed anything substantial during the period when it broke. The Memcache add-on for the website shut down around a week before this happened; I never set up a replacement, but I wouldn't think that would have caused the issue.
The issue also does not exist locally, only on the production and staging Heroku instances. The full codebase is here; the issue occurred around November 14th.
Any advice is much appreciated.
Sure enough, it was Memcache. I added the new Heroku Memcache add-on service, Memcachier, and it worked fine without the cookie. I'll check tomorrow whether Google successfully finds the /robots.txt file or not, but I'm assuming it will.
I had a perfectly well-running ASP.NET MVC application. I was debugging message sending via SignalR, and I decided to stop debugging and went to edit some code. However, there was an error saying something about IIS termination and whether I wanted to do that because something (I assume the application) could not be stopped. (I am really sorry, but I didn't read it at the time.) So now I try to relaunch my program and it just won't open: the website tries to open, but the loading circle in Chrome just keeps on spinning forever.
What I tried to do was:
1. restarted VS - didn't help
2. restarted PC - didn't help
3. created a new project, brought all files to it and launched it and it worked!
It then worked for ~10 minutes and then just stopped again (this time with no error message or anything). I tried changing the port in the settings of the project. Didn't work. Tried changing it back and it launched successfully. For a minute or so... :(
So finally, I tried putting a breakpoint right at the opening brace of Application_Start (marked with the arrow below):
public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
->  {
The breakpoint was hit, and execution successfully passed the next line
RouteTable.Routes.MapHubs();
and just disappeared at
AreaRegistration.RegisterAllAreas();
I tried many times and it always disappears at the same location. Stepping deeper is not an option since this is a framework method. I suspect something problematic there, but I am not very experienced with ASP.NET configuration.
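One debugging idea (just a sketch, not a fix): since you can't step into RegisterAllAreas, you can hook the AppDomain's FirstChanceException event at the very top of Application_Start and write every exception to the debug output, which may show what the call is choking on:

protected void Application_Start()
{
    // Debugging aid only: logs every exception raised in this AppDomain,
    // including ones that are caught and swallowed internally.
    AppDomain.CurrentDomain.FirstChanceException += (sender, args) =>
        System.Diagnostics.Debug.WriteLine("First-chance exception: " + args.Exception);

    RouteTable.Routes.MapHubs();
    AreaRegistration.RegisterAllAreas();
    // ... the rest of your registrations ...
}

Watch the Output window in Visual Studio while the site hangs; any exception thrown during area registration should show up there.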
P.S. Many times VS was acting strangely: even though I set the breakpoint at the place I marked above, it showed the breakpoint with a message that it wouldn't be hit because the source differs from the current code. (But I didn't change a thing since before the very first crash! The only place I modified a few symbols was MyHub.cs, which is a class extending SignalR's Hub.)
Lastly, I tried deleting everything from the bin folder so it got fully rebuilt, but without any success in reviving my application.
What could the problem be? Has anyone had anything even remotely similar to this? Or maybe someone would be kind enough to help me choose better keywords for searching Google, because "IIS termination" and "AreaRegistration.RegisterAllAreas(); not working" didn't bring me much :(
This is a known bug: https://github.com/SignalR/SignalR/issues/1335.
We have been unable to successfully reproduce this issue on our servers. We've seen that using a different web server will resolve the issue. If you're able to post a reproduction project to the linked Issue, chances are it will be resolved promptly.
Apparently, the solution I marked isn't the exact thing that helped me. This link was what miraculously helped me, and I was finally able to get back to work. However, I found the winning link because of N. Taylor Mullen, so he deserves full credit :) But I'm letting others know in case anyone comes to this question :)
Extremely often, in all kinds of MVC3/4 apps I debug in VS2012 on my home machine, after pressing F5 to start debugging and open the configured start page in Chrome, it can take several minutes, up to ten, before the page becomes active.
I have no long startup procedures that load caches, generate code, etc., and the same app will start instantly on my office machine. Quite often it will do so on my home machine as well, but this slow starting seems to come about after some hours of debugging, and possibly after certain operations. Restarting VS doesn't seem to help, nor does killing IIS Express.
We were faced with an identical scenario recently, where attaching the application to the debugger resulted in each page load taking about 10 minutes, but running without debugging, or in the QA environment, worked fine.
The problem turned out to be that log4net was configured to use a network path for storing log files, a path that was unavailable from our local setup. This resulted in multiple attempts at accessing a remote path that didn't exist (once for each class being set up with Spring.NET), and hence log4net threw an exception in each case.
But that would impact you right out of the box, and it shouldn't get worse with time...
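For anyone chasing something similar: a silent logging failure like this can be surfaced by turning on log4net's internal debugging early in startup. Here's a sketch (assuming log4net is in use; the file path below is purely an illustration), so failed appender writes show up in trace output instead of being swallowed:

// Somewhere early in startup, e.g. Application_Start -- debugging aid only.
log4net.Util.LogLog.InternalDebugging = true;

// log4net writes its internal messages to System.Diagnostics.Trace,
// so add a listener to capture them in a file.
System.Diagnostics.Trace.Listeners.Add(
    new System.Diagnostics.TextWriterTraceListener(@"C:\temp\log4net-internal.log"));
System.Diagnostics.Trace.AutoFlush = true;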
YayMyLife.com is my first Rails site. I am using Apache/2.2.8 (Ubuntu) with Phusion Passenger 2.2.2.
The site works fine on Linux/Mac/phones. However, it does not load in any browser on XP. This behavior also occurs on other XP machines. The browser seems to wait for more content and then times out. I have checked the headers with Live HTTP Headers (the headers look okay) and also flushed the DNS cache on the XP box.
Can you please help me fix the problem?
Are you sure it doesn't work? I just tried it using IE7 and Firefox 3 within one of my Windows XP virtual machines and the site loads fine. I get a JavaScript error in IE but not in Firefox.
I got browser shots for those who are interested in solving this case:
http://browsershots.org/http://www.yaymylife.com/
This gentleman was on #rubyonrails previously and asked the same question, with little feedback.
What is the error that you are getting? If you look at all the browsers, they haven't finished loading ... could it be excessive load on the server?
Have you tried getting a Windows machine and testing it yourself? If so, what is the error (with a screenshot and/or stack trace from your log)?
If it were a problem with Rails, it would not load in any browser; if it were a CSS problem, it would give you garbage on the screen.
This looks to be an excessive-load problem, and something you should try to address by looking at the web server end: how long it takes to load the page, and whether you need some sort of template caching or need to improve the performance of the DB queries that are running.
I started using Mongrel instead of Passenger and this problem is fixed. Thanks to everybody who took an interest, especially Omar Qureshi.