Rails AJAX post form from HTTP side to HTTPS - ruby-on-rails

I'm having a problem submitting an AJAX form (it is a sign-up modal form that, ideally, should be available on all of the pages) from the HTTP (unsecured) to the HTTPS (secured) part of the site.
The problem is: I'm getting status "200 OK", but no actual response is available (hard to debug, actually) and nothing happens after it (but it should).
If I do the same request from one of the secured (HTTPS -> HTTPS) pages - it works totally fine.
Also, in the log file I see the following message: "WARNING: Can't verify CSRF token authenticity".
I tried skipping this CSRF filter, but no difference.
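For reference, skipping the CSRF check for a single action looks roughly like this (the controller and action names here are placeholders, and older Rails versions use skip_before_filter instead of skip_before_action):

class RegistrationsController < ApplicationController
  # Skip Rails' CSRF verification only for the AJAX sign-up action.
  skip_before_action :verify_authenticity_token, only: [:create]
end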
Can it be fixed somehow, apart from enabling SSL on the entire site?

Fixed the problem myself by applying an after_filter to the corresponding controller actions:
def access_control_header
  headers["Access-Control-Allow-Origin"] = "*"
  headers["Access-Control-Request-Method"] = "*"
end
Easy :)
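For context, a minimal sketch of how such a filter might be wired up (the controller and action names are placeholders; newer Rails versions call the hook after_action rather than after_filter):

class RegistrationsController < ApplicationController
  # Add the CORS headers to the response after the AJAX sign-up action runs.
  after_filter :access_control_header, only: [:create]

  private

  def access_control_header
    headers["Access-Control-Allow-Origin"] = "*"
    # Note: the response-side counterpart of the Access-Control-Request-Method
    # request header is normally called Access-Control-Allow-Methods.
    headers["Access-Control-Request-Method"] = "*"
  end
end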

Related

How to handle unauthorized accesses gracefully in backend?

I have a Ruby on Rails application which redirects users to the start or login page if they end up at a resource they are not authorized for.
For that, it redirects through a 302 Found.
This does not feel right to me, as for example a successful creation of a resource via POST also returns a 302, with the only difference being that it redirects to the created resource.
On the other hand, it does not seem possible to redirect a user without returning a 30X status code (401/403 in this case).
Am I missing something here, or am I already doing it correctly and this is just the way to go?
Well, I'd say that it depends on the context. For an API I'd go your way: if the user is trying to reach an endpoint without authentication or without enough permissions, I'd return a 401 or 403 respectively.
But for a web application without a separate frontend app, you have no choice but to tell the browser where it has to go next, and the only way of doing that is a redirection (i.e. the 3xx HTTP codes => https://developer.mozilla.org/en-US/docs/Web/HTTP/Status#redirection_messages).
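As a rough Rails-flavoured sketch of that split (the current_user helper and login_path route are illustrative, not from the question): browsers get a redirect, API clients get a bare status code.

class ApplicationController < ActionController::Base
  private

  # Illustrative helper: redirect human users, return a bare status to API clients.
  def deny_access
    respond_to do |format|
      format.html { redirect_to login_path }                          # browser: 302 to the login page
      format.json { head current_user ? :forbidden : :unauthorized }  # API: 403 if authenticated, 401 otherwise
    end
  end
end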

Setting a cookie on all subdomains in Rails 4 and Ember

We have a Rails application that is the API component running on api.domain.com and a front-end application in Ember.js running on www.domain.com.
When Ember.js sends a POST request to a route in the API, /events, we want the API to set a cookie to remember a unique user identifier.
Hence this method in the Events controller:
def set_tracking_cookie
  cookies[:our_company_distinct] = {
    value: create_identifier,
    expires: 1.year.from_now,
    domain: :all
  }
end
As you see, the cookie is set on the entire domain, and is set to expire in a year.
The point is that the next time Ember queries the API, it will be able to read this cookie. However, this is not the case.
Each time the front-end queries the API, the API is unable to find the cookie, nor does it show in the cookies in my developer tools.
The Ember requests set the Access-Control-Allow-Credentials header to true, and I can confirm that the cookie is indeed sent in the response from the API with the correct values for domain, name, path, expiry, etc.
Am I missing something?
Thanks!
For anyone else running into a problem sending/receiving cookies in this way, here are some things I found helpful when I was debugging and ultimately fixing the problem:
When you use Chrome's (or any other browser's) devtools, examine each request to check whether the cookie is being sent from the API on the request where you expect it to be sent, and whether all subsequent requests from Ember (or whatever other JS framework, for that matter) send this cookie back. In Chrome, go to "Network > [request] > Cookies". Ensure that the front end is sending the cookie properly and that it indeed receives it properly.
We found that the best way of adding support for CORS cookies to Ember was the approach described here: http://discuss.emberjs.com/t/ember-data-and-cors/3690
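On the Rails side of such a setup, the CORS piece is often handled with the rack-cors gem; a rough sketch (the origin and path mirror the question, everything else is illustrative), keeping in mind that credentialed requests cannot use a wildcard origin:

# config/application.rb -- sketch assuming the rack-cors gem
config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins "www.domain.com"            # "*" is not allowed when credentials are sent
    resource "/events",
             headers: :any,
             methods: [:get, :post, :options],
             credentials: true          # lets the browser send and accept the cookie
  end
end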

Ajax requests stop working after many successful requests

In my Rails + Devise app I have a table of links to multiple "contacts", each being a simple jquery $.get AJAX request which calls Contact#show.
Inevitably after clicking anywhere from 3-25 of the links (successfully!), a request will fail (response status 0 or failed to load resource depending on browser), after which it will never work again until the browser tab is closed or cache is cleared.
Here's the javascript for the request
$.get('/contacts/1312')
Details...
I do have csrf_meta_tags at the top of my layout
The request heads do include a "X-CSRF-Token" with the correct CSRF token from the meta tag
On Chrome, the failed requests do not show up in the server logs as a request; it's as if they never made it. The only error reported is in the Chrome console, which reports a failure. This leads me to believe it is browser related.
On Safari, upon first failure, it seems to destroy the session; any subsequent requests result in a request for the sign_in page, which leads me to believe it may have something to do with Devise.
Update 3/30/13: After looking at many of the related questions on SO (this one: Rails not reloading session on ajax post), which have to do with CSRF not being set correctly, I don't believe this issue is related to CSRF because it works properly several times before it fails.
I eventually did figure it out. I was using a feature of datatables.js (a table library) that saved its state in a cookie. However, the data it was trying to save in the cookie exceeded the 4 KB maximum, so my cookie was getting corrupted, resulting in different behaviours across different browsers.

CSRF token problem on requests from outside the browser to a Rails server

I need to make an HTTP POST request from outside the browser, but the Rails back-end is not accepting the authentication (error 401). I know I need to pass a CSRF token in such cases, but it's not working.
When I make the request through a form on a browser, it works as expected, but when I try to simulate an identical request (in terms of headers and cookies) from outside the browser (using curl, for example), the authentication doesn't work.
Two small changes allowed me to succeed without a browser: (1) turning off protect_from_forgery, which validates the CSRF token, or (2) using GET instead of POST for the request. In both cases, passing the cookie is enough. That means the problem is definitely related to the CSRF protection.
So, my question is: how can I make a CSRF-protected HTTP POST to a Rails server without using a browser?
To clarify, the process is broken into three steps:
Login: returns a cookie to identify the session;
New: a GET request that returns the CSRF token to be used later (uses the cookie);
Create: a POST request that submits the information I want to create (uses both the session cookie and the CSRF token).
The only step which fails is the third one.
Assuming your CSRF token is cookie-based, then the program you use to make your requests needs to track cookies. Check out the --cookie-jar option in curl.
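If curl is not a hard requirement, the same flow can be scripted; here is a rough Ruby sketch of the "New" and "Create" steps (the host, paths and form fields are made up; the login step and full cookie handling are omitted for brevity):

require "net/http"
require "uri"

base = URI("https://example.com")   # made-up host

Net::HTTP.start(base.host, base.port, use_ssl: true) do |http|
  # "New" step: GET the form, keep the session cookie and pull the CSRF
  # token out of the csrf_meta_tags output in the page head.
  res    = http.request(Net::HTTP::Get.new("/things/new"))
  cookie = res["Set-Cookie"]
  token  = res.body[/name="csrf-token" content="([^"]+)"/, 1]

  # "Create" step: POST with both the session cookie and the token.
  post = Net::HTTP::Post.new("/things")
  post["Cookie"]       = cookie
  post["X-CSRF-Token"] = token
  post.set_form_data("thing[name]" => "example")   # made-up params
  puts http.request(post).code
end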

What's the correct response to unauthorized HTTP request?

I am writing a web application and I am not sure what the correct response to an unauthorized request is. For the user it is convenient when the server responds with a 302 and redirects him to the login page. However, somewhere deep inside I feel that 401 is more correct. I am also a little afraid that the 302 could be misinterpreted by search engines.
So how do you respond to your unauthorized requests?
Edit
I am using ASP.NET MVC. This is not important from a theoretical point of view; however, ASP.NET forms authentication uses the 302 approach.
I also like the behavior where, after a successful login, the user is redirected to the page he originally requested. I am not sure if this can be implemented easily with the 401 approach.
I think the correct response is entirely dependent on the context of the request. In a web application intended for human (not machine) consumption, I prefer to redirect to the login page if the user is not authenticated, or render an error page if the user is authenticated but not authorized. I won't typically return an unauthorized response, as it contains too little information for the typical user to help them use the application.
For a web service, I would probably use the unauthorized response. Since it is typically consumed by a program on the other end, there is no need to provide a descriptive error message or redirection. The developer using the service should be able to discern the correct changes to make to their code to use the service properly -- assuming I've done a good job of documenting interface usage with examples.
As for search engines, a properly constructed robots.txt file is probably more useful in restricting it to public pages.
401 seems grammatically correct; however, a 401 is actually a statement presented back to the browser to ask for credentials: the browser would then be expected to check the WWW-Authenticate header so that it could challenge the user to enter the correct details.
To quote the spec:
The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information. HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication" [43].
If you do a 302 you at least guarantee that the user will be directed to a page where they can log in, if a non-standard login mechanism is being used. I wouldn't care much what search engines and the like think about 401s.
Send a 401 response, and include a login form on the page you return with it. (i.e. don't just include a link to the login page, include the whole form right there.)
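(The question is about ASP.NET MVC, but the idea is framework-agnostic; in Rails terms a sketch might look like this, with current_user and the sessions/new template as assumed names:)

class ApplicationController < ActionController::Base
  private

  # Illustrative: render the login form in the body of a 401 response
  # instead of redirecting with a 302.
  def require_login
    return if current_user
    render "sessions/new", status: :unauthorized
  end
end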
I have to agree with you that the 401 result is actually the correct response.
That said, why not have a well-designed custom 401 page which shows the unauthorised message as well as a link to the login page, along with a 15-second JavaScript countdown that automatically sends the user there?
This way you give the correct 401 response to a bot, which is told that the page is restricted, while a real user gets redirected after being told that they are accessing a secured resource.
Don't bother about the search engines if your site is mainly used by humans. The ideal approach when a user reaches a protected page is to redirect them to a login page, so that they can be forwarded to the protected page after successful login.
You cannot accomplish that with a 401 error, unless you are planning to include a login form in the error page. From a usability point of view, the first case (302) is more reasonable.
Besides, you could write code to redirect humans to your login page and return a 401 to search engines.
How are the search engines going to be indexing the secured pages in the first place? Unauthorized users, such as bots, shouldn't be getting that far anyway, IMHO.
