Ruby on Rails: not closing HTTP response when Asian characters are displayed

Everyone:
I have a nagging problem that I cannot seem to fix, so please help me out!
So, I have a page that renders a partial. The page renders correctly within a couple of seconds; however, Chrome keeps waiting (showing the "loading" icon) for another 30 seconds or so and then reports an error (failed to load resource) in the Chrome Inspector. It seems the response is not closed correctly. If I take out the line in the partial that renders Asian characters, it works fine, meaning the page renders and the request finishes properly.
The problem gets worse if the partial is rendered as part of an AJAX call via jQuery: the partial is not rendered at all, because the response never ends properly.
I would appreciate your help.
Thanks.
Here are the HTTP headers:
Request Method:GET
Status Code:200 OK
Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Cookie:XXXXX
Host:localhost:3000
Referer:https://localhost:3000/home
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.106 Safari/535.2
Response Headers
Cache-Control:max-age=0, private, must-revalidate
Connection:Keep-Alive
Content-Length:55118
Content-Type:text/html; charset=utf-8
Date:Wed, 02 Nov 2011 23:07:52 GMT
Etag:"77d774b3b119012c5fabbd5c625a98a8"
P3p:CP="CAO PSA OUR"
Server:WEBrick/1.3.1 (Ruby/1.9.2/2011-07-09) OpenSSL/0.9.8r
X-Runtime:1.070787
X-Ua-Compatible:IE=Edge
UPDATE:
I just installed Firefox and Firebug. They give more info than Chrome does; what a pleasant surprise! Firebug confirmed my theory that the Content-Length is somehow wrong: if the rendered partial contains Asian characters, the Content-Length in the response header and the actual size of the response body differ. If no Asian characters are present, they match. Has anybody seen this problem before?
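My guess at the mechanism (an assumption on my part, not something Firebug shows directly): multibyte UTF-8 characters make a string's character count and byte count diverge, and Content-Length has to be the byte count. A quick Ruby check:

# encoding: utf-8   (magic comment needed on Ruby 1.9)
ascii = "hello"
asian = "こんにちは"          # five characters of Japanese text

puts ascii.length     # => 5
puts ascii.bytesize   # => 5
puts asian.length     # => 5
puts asian.bytesize   # => 15  (3 bytes per character in UTF-8)

# A correct Content-Length header must be computed from the byte count,
# i.e. body.bytesize, never body.length.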

OK, we finally figured this out!
This was caused by WEBrick's inability to handle HTTPS correctly. WEBrick was not serving the pages correctly, which caused a discrepancy between the Content-Length in the response header and the actual body size. When that happens, the browser keeps waiting for the request to complete and throws an error (usually a "Failed to load resource" error in Chrome) after 30 seconds or so.
So, if you want to use HTTPS on your local machine (localhost), use Thin as your server and nginx as a reverse proxy in front of it. Even though it sounds complicated, it isn't. Thin serves up your pages just as WEBrick would. When an HTTPS request comes in, say on port 443 or whatever port you have set up, nginx handles the TLS side of the request and forwards it to Thin, which then does the rendering.
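A minimal sketch of that setup (certificate paths, server_name, and ports are placeholders, not taken from this post): nginx terminates SSL on 443 and proxies plain HTTP to Thin on 3000.

# /etc/nginx/conf.d/myapp.conf -- sketch only, all paths and ports are placeholders
server {
  listen 443 ssl;
  server_name localhost;

  ssl_certificate     /etc/nginx/ssl/localhost.crt;
  ssl_certificate_key /etc/nginx/ssl/localhost.key;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;  # lets Rails know the request came in over SSL
    proxy_pass http://127.0.0.1:3000;          # Thin listening on plain HTTP
  }
}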
I hope this post helps someone.

Related

POST request does not work on external servers

I am trying to send data from an Arduino to a web server (LAMP) using the ESP8266 module. When I POST to a server on my local network, the server receives the data and returns 200. However, when I POST to an external server (shared hosting or Google Cloud), Apache logs a 400 error and nothing is returned, yet the same kind of request sent from Postman works fine. Because of this I don't know whether the fault is in how I build or execute the request, or whether the external servers are blocking something, since the HTTP server on my own network works.
I'm using this lib to work with ESP: https://github.com/itead/ITEADLIB_Arduino_WeeESP8266
This is the request string:
POST /data/sensor_test.php HTTP/1.1
Host: xxxxxxxxx.com
Accept: */*
Content-Length: 188
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache
temperatureAir1=19.70&humidityAir1=82.30&temperatureAir2=19.40&humidityAir2=78.60&externalTemperature=19.31&illumination05=898&illumination10=408&humiditySoilXD28=6&humiditySoilYL69=5
I found the problem: when concatenating the strings that make up the request, I was using \n for the line breaks. I switched to \r\n and it worked!
The byte count (Content-Length) is indeed wrong; I'm still working on fixing that, but the good news is that the request itself is now correct.
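For reference, a minimal Ruby sketch of assembling such a request by hand (the real code in this question is Arduino C++ using the ITEAD library, and example.com stands in for the real host): header lines must be joined with \r\n, and Content-Length must be the body's byte count.

require 'socket'

body = "temperatureAir1=19.70&humidityAir1=82.30"   # shortened sample body

request = [
  "POST /data/sensor_test.php HTTP/1.1",
  "Host: example.com",
  "Content-Type: application/x-www-form-urlencoded",
  "Content-Length: #{body.bytesize}",  # byte count of the body, not a guess
  "",                                  # blank line separates headers from body
  body
].join("\r\n")                         # HTTP needs CRLF; bare \n breaks some servers

socket = TCPSocket.new("example.com", 80)
socket.write(request)
puts socket.readpartial(4096)          # status line and response headers
socket.close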

URL Shorten: how is this achieved? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
How do short URLs services work?
I often see shortened URLs from bitly.com such as http://bit.ly/abcd. How is this "bit.ly" realized on the server side? Is it some DNS trick?
Yes. Actually, if you go to https://bitly.com/ you will notice that it provides this URL-shortening service.
Going to http://bit.ly/abcd simply redirects you to a URL of your choice. You can figure this out by looking at the HTTP request and response headers:
Request URL:http://bit.ly/abcd
Request Method:GET
Status Code:301 Moved
Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Host:bit.ly
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.24 Safari/535.1
Response Headers
Cache-control:private; max-age=90
Connection:keep-alive
Content-Length:145
Content-Type:text/html; charset=utf-8
Date:Thu, 16 Jun 2011 21:14:04 GMT
Location:http://macthemes2.net/forum/viewtopic.php?id=16786044
MIME-Version:1.0
Server:nginx
Set-Cookie:_bit=4dfa721c-001f7-011f8-c8ac8fa8;domain=.bit.ly;expires=Tue Dec 13 16:14:04 2011;path=/; HttpOnly
http://www.w3.org/Protocols/HTTP/HTRESP.html talks about status codes, and 301 is the one you should be looking for.
No, it's just an HTTP server that looks up abcd in a database, finds http://example.com/long/url, and sends an HTTP redirect answer, like
HTTP/1.1 301 Moved Permanently
Location: http://example.com/long/url
Have you gone to http://bit.ly/? The URL shortener stores the long URL in a database; when the short URL is used, the service performs an HTTP redirect to the long URL.
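As a minimal sketch (not bit.ly's actual implementation, and the data store here is just a hash), the lookup-and-redirect step looks roughly like this as a Rack app:

# config.ru sketch; run with `rackup`
LONG_URLS = { "abcd" => "http://example.com/long/url" }   # hypothetical store

run lambda { |env|
  code = env["PATH_INFO"].sub(%r{\A/}, "")                # "abcd" from "/abcd"
  if (target = LONG_URLS[code])
    [301, { "Location" => target }, []]                   # 301 Moved Permanently
  else
    [404, { "Content-Type" => "text/plain" }, ["Not found\n"]]
  end
}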
.ly is the top-level domain for Libya; bit.ly is a separate domain from bitly.com.
bit.ly is just a domain like any other (e.g. .com, .net, .fr).
In this case, .ly belongs to Libya.
It looks like they use A-Za-z0-9 for generating their codes, so with 62 possible characters an n-character code gives 62^n combinations: about 14.8 million for 4 characters (like abcd above), 916 million for 5, and 56.8 billion for 6. Given that links can expire or people can delete links they have made, it's safe to assume they won't run out of possibilities any time soon, and if they ever do, they can just make the codes one character longer. =P
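One common way to generate such codes (an assumption for illustration, not necessarily what bit.ly does) is to base-62 encode an auto-incrementing database id:

ALPHABET = [*"0".."9", *"a".."z", *"A".."Z"]   # 62 characters

# Turn an integer id (e.g. the row id of the stored long URL) into a short code.
def encode62(id)
  return ALPHABET.first if id.zero?
  code = ""
  while id > 0
    code.prepend(ALPHABET[id % 62])
    id /= 62
  end
  code
end

puts encode62(125)   # => "21"  (2 * 62 + 1 = 125)
puts 62 ** 6         # => 56800235584 possible six-character codes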

Rails 3 is changing session ID on POST from AIR

I have a REST API in Rails 3 being accessed sometimes from an AIR application and sometimes from the browser.
I think this is a Rails 3 problem but it might be a Flex/AIR problem.
The Rails app uses omniauth for authentication, cancan for authorization, and active_record_store. I use the session model to store the identity of the user.
(There is a reason I'm not using cookie sessions, having to do with AIR for Android, OAuth, and StageWebView.)
I'm using Charles to monitor HTTP traffic.
Most requests work fine. The browser (or the AIR client) sends the session ID to the server in the Cookie HTTP header, like this:
_session_id=950dee7eca6732aa62b5f91876f66d15
And Rails finds the session, figures out who the user is, and does its thing.
But under certain circumstances, Rails generates a new session before sending the response. It adds a session to the sessions table, and returns a Set-Cookie header to the client with a new session ID. Like this:
_session_id=e1489a6b610c0a1d13cec1454228ae47; path=/; HttpOnly
The circumstances under which this happens are:
The request comes from the AIR client
The request is a POST
This is obviously a problem, because on subsequent requests, Rails can't find the user information. It created a new session without that information.
So I'm looking at the HTTP headers for the POST request. Here's a copy/paste from Charles; I inserted the colon after the header name to make it readable.
Host: localhost.seti.hg94.com:3000
Content-Type: application/x-www-form-urlencoded
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en) AppleWebKit/531.9 (KHTML, like Gecko) AdobeAIR/2.6
Referer: app:/AndroidApplication.swf
X-Flash-Version: 10,2,152,22
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Cookie: _session_id=950dee7eca6732aa62b5f91876f66d15
Content-Length: 84
Connection: keep-alive
Does anyone have any insight into why Rails would generate a new session under those circumstances? It seems to be happening after my controller code executes, since I have the correct session information in the controller.
I'm busy trying to isolate the problem further, control the headers from within AIR, and so on. I've been working on this bug for almost a week. So any insight or suggestions from the community would be greatly appreciated.
Only a guess, but it seems like you're not bringing across the CSRF token that Rails generates for all POST-based requests:
http://guides.rubyonrails.org/security.html#cross-site-request-forgery-csrf
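If that is the cause, the behaviour matches: in Rails 3, protect_from_forgery handles an unverified POST by resetting the session, which is exactly the symptom described. A hedged sketch of the two usual remedies for a token-less API client (the Api::SessionsController name is hypothetical):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  protect_from_forgery   # Rails 3 default: an unverified POST gets reset_session
end

# Hypothetical controller serving the AIR client: either skip CSRF verification
# for these actions, or have the client send the token (form_authenticity_token)
# it obtained from an earlier GET.
class Api::SessionsController < ApplicationController
  skip_before_filter :verify_authenticity_token, :only => [:create]

  def create
    # session data set here now survives, because the session is no longer reset
  end
end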

Why is http-response in chunked transfer-encoding only for some clients

I have an ASP.NET MVC application running on IIS 7. The problem I'm having is that, depending on the client, the response may be received (as seen through Fiddler) with chunked transfer-encoding. What I can't understand is why this happens only for some of my clients and not for everyone, even when two computers are on the same network with the same browser (IE 8), or vice versa.
Could anyone explain this to me?
Sorry for the late update, but the problem turned out to be a result of how the user reached the server. If the user was connected to the local LAN through a VPN connection, the proxy was sidestepped; otherwise the proxy was used. That produced the two different behaviours.
Chunked encoding is enabled on the server side if you prematurely flush the output stream. Do you have any user-agent-specific code that might be calling Flush()?
RFC 2616 says:
All HTTP/1.1 applications MUST be able to receive and decode the "chunked" transfer-coding
Transfer-Encoding: chunked is defined for HTTP/1.1. Are some of your clients using HTTP/1.0 or even (shudder) 0.9? In that case, the server must not use transfer-encoding, as it's not a part of the protocol.
Although most modern clients understand HTTP/1.1, most have an option to downgrade to 1.0 when using a proxy (for historical reasons - some older proxies had buggy 1.1 implementations). So, although the browser may understand 1.1, it can request 1.0 if so instructed.
Example: MSIE 6+ has this in the Internet Options dialog - tab Advanced - HTTP 1.1 settings - checkboxes "Use HTTP 1.1" and "Use HTTP 1.1 through proxy connections".
Also, chunked encoding is not activated for all responses - usually the server switches it on when Content-Length is not set, or when the output buffer is flushed.
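The app in question is ASP.NET MVC, but that last point is server-agnostic; as a sketch in Ruby/Rack of the two cases (names and bodies are illustrative only):

# config.ru sketch: same content, two delivery strategies.
fixed = lambda do |env|
  body = "hello world\n"
  # Content-Length known up front: no chunked encoding needed.
  [200,
   { "Content-Type" => "text/plain", "Content-Length" => body.bytesize.to_s },
   [body]]
end

streamed = lambda do |env|
  # Body produced piece by piece with no Content-Length: an HTTP/1.1 server will
  # typically deliver this with Transfer-Encoding: chunked, while an HTTP/1.0
  # client instead gets the connection closed to mark the end of the body.
  chunks = Enumerator.new { |y| 3.times { |i| y << "chunk #{i}\n" } }
  [200, { "Content-Type" => "text/plain" }, chunks]
end

run fixed   # swap in `run streamed` to observe the chunked variant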

Why would the browser cache assets (images, js, etc.) while GETting but re-request everything after a POST + 302 redirect?

We've got our ETags and expiry headers set up properly, and when browsing around the site without posting it is really very snappy. However, after any POST (which is almost invariably followed by a 302) you can see the browser re-request all the images. Is there something that could be causing this? Is there a setting that handles this?
I believe you typically still get a request, but with If-Modified-Since in the header; at least that's what I've observed, even with Expires headers. The response should be a 304 (Not Modified), which is ultra-quick. Or are you saying all the images are reloaded entirely?
Read more about If-Modified-Since in the HTTP RFC.
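The question doesn't say which framework is in use; as one concrete, hedged example, here is how the conditional-GET round trip described above looks in a Rails controller using the standard fresh_when helper (ImagesController and Image are hypothetical names):

class ImagesController < ApplicationController
  def show
    @image = Image.find(params[:id])
    # fresh_when sets the ETag/Last-Modified headers; when the browser revalidates
    # with If-None-Match / If-Modified-Since and nothing has changed, Rails replies
    # 304 Not Modified without re-sending the body.
    fresh_when etag: @image, last_modified: @image.updated_at, public: true
  end
end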
