IE6 freezes due to *server* configuration - ruby-on-rails

Our web site (running Rails) freezes IE6 nearly every time. The same code, deployed on a different server, does not freeze IE6. Where and how should we start tracking this down?

You need to determine the difference between them, so I'd start out with the following:
curl -D first.headers -o first.body http://first.example.com
curl -D second.headers -o second.body http://second.example.com
diff -u first.headers second.headers
diff -u first.body second.body

Might be a communication problem. Try Wireshark against the server that freezes and the server that doesn't, then compare the captures to see if there is a difference.
Narrow down the problem. Start cutting out code until IE6 doesn't freeze. Then you might be able to figure out exactly what is causing the problem.

I've been having this problem today on an AJAX-heavy site. I think I've narrowed it down to the server having gzip compression turned on. When gzip was turned off on our server, IE6 loaded the page without freezing at all. When gzip is turned on, IE6 freezes/crashes completely.
I also noticed that images were being served gzipped from our server, so I disabled compression for images, and that solved the IE6 freezing/crashing. Now the server uses gzip only for .js, .html, and JSON.
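If you want to confirm which responses a server is actually compressing, a quick check from the command line (the URL is just a placeholder; point it at one of your own images or scripts):
# Ask for a compressed response and dump only the response headers;
# "Content-Encoding: gzip" in the output means the server compressed it.
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://example.com/logo.png | grep -i content-encoding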

Try both in IE6 on different machines, preferably with as few add-ons as possible, such as spyware blockers or the Google Toolbar...

Use Firefox with Firebug to compare the HTTP Headers in the Request and Response from both servers.

You can also try : http://projects.nikhilk.net/WebDevHelper/Default.aspx
That installs in IE and may help you in troubleshooting network issues and such. You may be able to see exactly when and where it freezes in the request/response by using its tracing features.

Is the freezing happening on your development server or your production server? Whether your development server locks up IE6 or not isn't that big of a deal, but if your production server fails to kill IE6 you might have a problem!
:-P

Perhaps some more info that will help you.
We had the same problem and also narrowed it down to gzip compression. The key was that we had gzip compression turned on for our ScriptResources, which also deliver the JavaScript used by the controls on our .NET page.
Apparently there is a bug in IE6 that causes it to freeze; we believe the browser receives the files and parses them before unpacking them, which causes the freeze.
For now we have turned off gzip compression, but as we have a large number of files served through the ScriptResource manager we need a different solution.

Related

Pow domains not loading in Chrome

So, I struggled with this for the last hour. For some reason, my Pow domains always hit a www.website-unavailable.com error in Chrome. Rails servers work great from the traditional rails s and pull up at localhost:3000. I'm using Anvil.app to manage the domains.
No matter what, I hit the www.website-unavailable.com page in Chrome immediately each time I try to visit a .dev domain.
The strangest thing is, the site loads great in other browsers. Just not Chrome. I even tried installing Chrome Canary and it hits the exact same error (fresh install!).
I tried, in this order, to no avail, to get the server running again:
Rebooting.
pow restart in the terminal for various sites.
Reinstalling Pow.
Clearing the DNS cache at chrome://net-internals#dns
Nothing seems to work. Any idea what I could do to get this working again? Not a huge deal to use localhost:3000, but I love Pow. The strange thing is, it was working wonderfully for weeks.
I ran into the same issue, but changing from OpenDNS to Google's DNS servers didn't help. Apparently, this is an issue with the asynchronous DNS built into Chrome.
There are a couple workarounds:
Use a .xip.io domain instead of .dev
Disable asynchronous DNS in chrome://flags
Disable NXDOMAIN helper
I ended up disabling asynchronous DNS and my .dev domains work again.
Here's where I found more information:
Issue reported on Pow
Issue reported on Chrome
How to disable NXDOMAIN helper for OpenDNS
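If you want to rule Pow itself out before blaming Chrome, a rough diagnostic from the terminal (assuming a default Pow install; myapp.dev is a placeholder app name, and 20560 is Pow's usual DNS port, so adjust if you changed the defaults):
# Pow installs a resolver file so macOS sends *.dev lookups to Pow's own DNS server
cat /etc/resolver/dev
# Confirm the system resolver configuration actually picked it up
scutil --dns | grep -B1 -A3 dev
# Query Pow's DNS server directly; it should answer with 127.0.0.1
dig myapp.dev @127.0.0.1 -p 20560 +short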
Thanks to user Dan Reedy (see above), I was able to fix this by moving from OpenDNS to Google's DNS servers (8.8.8.8 and 8.8.4.4). Now Pow servers are working again, and pages actually seem to load much faster. Awesome!
I look up my IP via Network Preferences on Mac OS X and navigate to a domain like so:
http://subdomain.name-of-app-directory.ip-address.xip.io
So an example would be:
http://subdomain.website.10.0.0.22.xip.io

WebGL just stopped working locally for no reason

I was playing with some WebGL tutorials and, for no reason, WebGL just stopped working. I even loaded an untouched WebGL HTML page that I downloaded from the web that worked fine before. When I FTP that same exact code to my web server and load it, it works fine. Two questions...
Why would WebGL all of a sudden just stop working locally across ALL browsers?
Why would WebGL HTML code run fine online, but not locally?
I should also mention I restarted my computer, uninstalled/reinstalled Chrome and Firefox, and cleared all my internet cache.
Thanks so much for all your wisdom!
Found the problem. To prevent a local page from accessing your whole hard disk, each local file:// URI is its own domain, which means that local textures are always treated as cross-domain. In Firefox I was able to get around this by going into about:config and setting security.fileuri.strict_origin_policy to false.
The easiest way to work around this problem: use an IDE like NetBeans or Visual Studio to run your application. They start a local server automatically, so you don't have to run your own server or mess around with your browsers. Letting browsers access local data freely is more or less a security issue.
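If you don't want a full IDE, a lighter workaround is to serve the tutorial files over a local HTTP server so the textures are same-origin. A sketch, assuming Ruby or Python is installed:
# Run one of these from the directory containing your WebGL page
ruby -run -e httpd . -p 8000        # Ruby's built-in one-line web server
python -m SimpleHTTPServer 8000     # Python 2; use "python3 -m http.server 8000" on Python 3
# then open http://localhost:8000/your-page.html in the browser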

Does Struts 2 have any memory issues?

I have a webapp developed with Struts2 deployed in Tomcat 5.5. The server has other applications deployed in it, but the app built with Struts2 is very slow. Any ideas? How does Struts 2 handle object creation? And is there anything I can do on the Tomcat server?
How slow is it? What are you doing? Are you sure it is Struts 2 that is slow and not your application code? Did you do any profiling? What were the results?
Check this out: http://struts.apache.org/2.2.1/docs/performance-tuning.html
I found serving the static content from a folder increased the speed.
A few more details are really needed for someone to answer your question properly:
Which Struts2 version are you using?
In which place/part do you think the application is slow?
In my experience there are certain areas where Struts2 has known problems. OGNL itself sometimes creates problems, since it is the part of the framework where most of the time is spent; this is known to be fixed in the 3.x version of OGNL, so you can grab the newer OGNL jar and test your application with it.
Second, use a profiler; it will help you catch the culprit, such as any thread blocking (see the thread-dump sketch below).
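If you don't have a full profiler to hand, thread dumps taken with the JDK's own tools are a cheap way to spot blocked threads; a sketch (<pid> is the Tomcat process id reported by jps):
# List running JVMs and their process ids
jps -l
# Take a few dumps while the app feels slow, then look for BLOCKED threads
jstack -l <pid> > threaddump-1.txt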
What OS is Tomcat running on?
If it's Linux, you may have run into a lack of entropy issue.
If this command returns something less than 200, it could explain your issue:
cat /proc/sys/kernel/random/entropy_avail
If it is low (or watch it during startup / while making requests), try pointing /dev/random at /dev/urandom. (Not for a secure production box, but it should be fine for testing in dev):
mv /dev/random /dev/random.orig
ln -s /dev/urandom /dev/random
And try starting Tomcat again.
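A less invasive alternative to replacing /dev/random is to point the JVM's SecureRandom at /dev/urandom. A sketch, assuming your Tomcat reads bin/setenv.sh (create the file if it doesn't exist, or add the flag wherever you already set CATALINA_OPTS/JAVA_OPTS):
# The "/dev/./urandom" spelling works around an old JDK quirk that silently
# maps a plain "/dev/urandom" value back to the blocking /dev/random.
CATALINA_OPTS="$CATALINA_OPTS -Djava.security.egd=file:/dev/./urandom"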

Why does my Rails website timeout on Windows XP?

YayMyLife.com is my first Rails site. I am using Apache/2.2.8 (Ubuntu) with Phusion_Passenger/2.2.2.
The site works fine on Linux/Mac/phones. However, it does not load in any browser on XP, and the same behavior shows up on other XP machines. The browser seems to wait for more content and then times out. I have checked the headers with Live HTTPHeaders (they look okay) and also flushed the DNS cache on the XP box.
Can you please help me fix the problem?
Are you sure it doesn't work? I just tried it using IE7 and Firefox 3 within one of my Windows XP virtual machines and the site loads fine. I get a JavaScript error in IE but not in Firefox.
I got browser shots for those who are interested in solving this case:
http://browsershots.org/http://www.yaymylife.com/
This gentleman was on #rubyonrails previously and asked the same question, with little feedback
What is the error that you are getting? If you look at all the browsers, they haven't finished loading ... could it be excessive load on the server?
Have you tried getting a Windows machine and trying to test it? If so, what is the error (with screenshot and/or stack trace from your log).
If it was a problem with Rails, it would not load in any browser; if it was a CSS problem, it would give you crap on the screen.
This looks to be an excessive load problem and something that you should try and address by looking at the web server end at the amount of time it takes to load the page and whether you need some sort of template caching or to improve the performance of DB queries that are running.
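One way to see whether the server side really is the slow part is to time the response from the command line on a non-Windows box (the URL comes from the question; exact numbers will vary per run):
# Prints the HTTP status, time to first byte, and total transfer time
curl -o /dev/null -s -w 'HTTP %{http_code}  first byte: %{time_starttransfer}s  total: %{time_total}s\n' http://www.yaymylife.com/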
I started using Mongrel instead of Passenger and this problem is fixed. Thanks to everybody who took an interest, especially Omar Qureshi.

PHP Command-line scripts are ignoring php.ini and ini_set('memory_limit',...) directives

I am facing the common "Fatal error: Out of memory (allocated 30408704) (tried to allocate 24 bytes)..." PHP Fatal error. Pages served via Apache are not exhibiting this behavior.
I've tried the following:
Increasing the memory_limit in php.ini to a much larger value.
Increasing memory_limit within the script itself via calls to ini_set('memory_limit', -1), ini_set('memory_limit', '-1'), ini_set('memory_limit', 100000000), ini_set('memory_limit', '128M'), etc.
unset()ing unneeded arrays and objects to encourage the garbage collector to free up memory.
Contacting the web host. They are normally very capable and knowledgeable, but have not been able to help me with this issue either.
I've tried explicitly including a php.ini file using the -c command-line flag to hand-pick specific php.ini files with various values.
I've tried setting memory_limit in php.ini using both raw numbers of bytes and values such as 64M, 128M, etc.
The hosting provider was able to run the script as root with no issues, but experiences the same issue I do when running it using my non-root user. Perhaps there is some kind of permissions issue involved?
Regardless of what I try, the error message is the same. It appears that my command line scripts are ignoring changes to memory_limit.
I tend to try to make sure my scripts are memory efficient, but I'm currently needing to parse large amounts of HTML via Simple HTML DOM and it is in the parser that I'm experiencing out of memory issues. In an attempt to reduce the memory footprint of the script, I've tried using DOMDocument instead. This does not help either. In fact, the out of memory error is now triggered elsewhere in the script.
My question: has anyone experienced this or a similar issue? Do you have any recommendations?
Thank you.
It turns out that the problem was caused by shell fork bomb protection being enabled on the server, which placed a hard memory limit on all command-line scripts. This had been enabled by my web host without my knowledge.
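If you suspect something similar on your own host, one quick check is whether the shell session itself imposes per-process limits (the exact mechanism a host uses may differ, but ulimit-style caps are the usual suspect):
# Show all limits for the current shell; "max memory size", "virtual memory"
# and "data seg size" are the ones that would cap a CLI PHP script
ulimit -a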
Your PHP on the CLI may be using a different php.ini from your Apache PHP. Try a phpinfo() and check that it's using the ini file you think it's using.
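Two quick commands will confirm both which ini files the CLI binary actually loads and the memory_limit it ends up with:
# Show which php.ini (and any extra .ini files) the CLI loads
php --ini
# Print the effective memory_limit as the CLI sees it
php -r 'echo ini_get("memory_limit"), PHP_EOL;'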
