Uploadify and rails 3 authenticity tokens - ruby-on-rails

I'm trying to get a file upload progress bar working in a Rails 3 app using Uploadify (http://www.uploadify.com) and I'm stuck at authenticity tokens. My current Uploadify config looks like this:
<script type="text/javascript" charset="utf-8">
  $(document).ready(function() {
    $("#zip_input").uploadify({
      'uploader': '/flash/uploadify.swf',
      'script': $("#upload").attr('action'),
      'scriptData': { 'format': 'json', 'authenticity_token': encodeURIComponent('<%= form_authenticity_token if protect_against_forgery? %>') },
      'fileDataName': "world[zip]",
      //'scriptAccess': 'always', // Uncomment this if, for some reason, it doesn't work
      'auto': true,
      'fileDesc': 'Zip files only',
      'fileExt': '*.zip',
      'width': 120,
      'height': 24,
      'cancelImg': '/images/cancel.png',
      'onComplete': function(event, data) { $.getScript(location.href) }, // We assume that we can refresh the list by doing a js get on the current page
      'displayData': 'speed'
    });
  });
</script>
But I am getting this response from rails:
Started POST "/worlds" for 127.0.0.1 at 2010-04-22 12:39:44
ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):
Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms)
Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (6.6ms)
Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (12.2ms)
This appears to be because I'm not sending the session cookie along with the request. Does anyone know how I can get the values I should be sending there, and how I can make Rails read the token from the HTTP POST rather than trying to find it in a cookie?

Skipping authenticity token checking is not ideal, as it opens up CSRF attack vectors.
Another way to make this work is described here: http://metautonomo.us/2010/07/09/uploadify-and-rails-3/
Note that you may need to double URL-encode things. In that example the Rails u helper is used as well as encodeURIComponent(). However, if you have a fancier Rails 3-style setup and source the session data/authenticity token from meta tags in the page header, you will need to call encodeURIComponent() twice.

This seems to be a bug in Rails 3.
https://rails.lighthouseapp.com/projects/8994-ruby-on-rails/tickets/3913
This meant I had to change how I was skipping the authenticity token checking:
Changed from
protect_from_forgery :except => :upload
To
skip_before_filter :verify_authenticity_token, :only => :upload
This still seems to work fine.
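For context, here is a minimal sketch of how that filter might sit in the uploads controller. The WorldsController name and the upload action are assumptions based on the POST "/worlds" log above, not code from the original post:
class WorldsController < ApplicationController
  # Rails 3 beta raised InvalidAuthenticityToken even with
  # protect_from_forgery :except => :upload, so skip verification for this action only.
  skip_before_filter :verify_authenticity_token, :only => :upload

  def upload
    # handle the uploaded world[zip] file here
  end
end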

Well, I figured out how to get around that. Is there a form on the view where you want to upload the files? If there is, just use jQuery to get the value of the hidden authenticity token field and pass it into the scriptData var:
var token = ($('input[name=authenticity_token]').val());
scriptData : {'authenticity_token':token}
Hope this works for you.

How to add the CSRF token to the HTTP header using fetch API and VanillaJS

I am trying to send a POST request to a server that is configured to use Spring Security. When submitting my request, I get a 403 error. This issue is due to CSRF protection: when I disable CSRF in my Spring Security configuration, the POST request works fine.
I am using the fetch API to send my POST request. This lets me specify the HTTP headers that accompany the body containing the JSON object I am trying to POST. I am now trying to add the CSRF token to my HTTP headers. For this purpose, I have added the following two meta tags to the head section of my HTML:
<meta name="_csrf" content="${_csrf.token}"/>
<meta name="_csrf_header" content="${_csrf.headerName}"/>
I have then added the following lines to my JavaScript:
const token = document.querySelector('meta[name="_csrf"]').content;
const header = document.querySelector('meta[name="_csrf_header"]').content;
let responsePromise = fetch("myEndpoint", {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', header: token },
  body: JSON.stringify(myJSON)
});
I am assuming that Spring Security fills in the content of my CSRF token, but so far the error message persists. I think there might be nothing inside my _csrf meta tag: when logging the content (token and header), I just get:
${_csrf.headerName}
${_csrf.token}
I am expecting to see the header name "X-CSRF-Token" and the actual token instead. So could it be that Spring Security does not automatically fill in this content?
Update:
My project is a Maven project, and I have added the Thymeleaf dependency to my pom.xml. I don't know much about Thymeleaf, but it seemed to be the easiest way to implement the login mechanism with Spring Security. Anyway, when I replace content with th:content in my meta tags, the header and the token are actually found and logged in my console. I am still not able to POST my request, though; the 403 remains.
Add the 'th' prefix to the attribute so it is processed by Thymeleaf:
<meta name="_csrf" th:content="${_csrf.token}"/>
<meta name="_csrf_header" th:content="${_csrf.headerName}"/>

Turbolinks to Turbo upgrade has broken form redirection

I'm attempting to upgrade to Turbo from Turbolinks and I've found that the client is not rendering redirects for form submissions.
Versions:
rails 6.1.4
hotwire-rails 0.1.2
@hotwired/turbo-rails 7.0.0-beta.8
I've ignored the incompatibility between Turbo and Devise for now - just trying to get regular forms working without having to disable Turbo on them.
Here's an example action:
def update
  authorize @label
  @label.update(label_params)
  if @label.save
    redirect_to document_labels_path(document_id: @document.id)
  else
    render :new, status: :unprocessable_entity
  end
end
Here's a rendered form:
<form class="simple_form new_label" id="label_form" novalidate="novalidate" action="/documents/72/labels" accept-charset="UTF-8" method="post">
...
</form>
When submitting a valid form, the server says Processing by LabelsController#create as TURBO_STREAM and correctly serves a 302, and then serves the 200 for the redirect location. The browser, however, is left just looking at the submitted form. Changing the redirect status to 303 doesn't change anything.
I added a console.log for every Turbo event:
document.addEventListener("turbo:load", function () {
console.log('TURBO:LOAD')
})
document.addEventListener("turbo:click", function () {
console.log('TURBO:CLICK')
})
document.addEventListener("turbo:before-visit", function () {
console.log('TURBO:BEFORE-VISIT')
})
document.addEventListener("turbo:visit", function () {
console.log('TURBO:VISIT')
})
document.addEventListener("turbo:submit-start", function () {
console.log('TURBO:SUBMIT-START')
})
document.addEventListener("turbo:before-fetch-request", function () {
console.log('TURBO:BEFORE-FETCH-REQUEST')
})
document.addEventListener("turbo:before-fetch-response", function () {
console.log('TURBO:BEFORE-FETCH-RESPONSE')
})
document.addEventListener("turbo:submit-end", function (event) {
console.log('TURBO:SUBMIT-END')
// event.detail
})
document.addEventListener("turbo:before-cache", function () {
console.log('TURBO:BEFORE-CACHE')
})
document.addEventListener("turbo:before-stream-render", function () {
console.log('TURBO:BEFORE-STREAM-RENDER')
})
document.addEventListener("turbo:render", function () {
console.log('TURBO:RENDER')
})
This is what the output is for a successful form submission:
TURBO:BEFORE-FETCH-REQUEST
TURBO:SUBMIT-START
TURBO:BEFORE-FETCH-RESPONSE
TURBO:SUBMIT-END
There is no render event. Investigating event.detail.fetchResponse.response for turbo:submit-end, Turbo seems perfectly aware that the client should redirect; it just didn't render anything.
Response {type: "basic", url: "http://lvh.me:3000/documents/72/labels", redirected: true, status: 200, ok: true, …}
body: (...)
bodyUsed: true
headers: Headers {}
ok: true
redirected: true
status: 200
statusText: "OK"
type: "basic"
url: "http://lvh.me:3000/documents/72/labels"
__proto__: Response
Update: It is actually performing the redirect and the server is generating the response. The issue is that the client is not rendering the redirect response.
What is happening here is that your application is specifying that it prefers turbo-stream responses over text/html responses. If you were to look at your request headers for the redirect page, you'll likely see the following:
Accept: text/vnd.turbo-stream.html, text/html, application/xhtml+xml
As a result, Rails returns the data with the first type it recognizes, which is text/vnd.turbo-stream.html. Turbo in your browser sees this and, since it's not interpretable as a Turbo Stream, unhelpfully ignores it quietly.
The solution (workaround?) is to make sure you are redirecting to the html version of your page:
redirect_to document_labels_path(document_id: @document.id, format: :html)
This will return the page with a Content-Type of text/html, and Turbo will replace the whole page with the contents.
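Applied to the update action from the question, it is roughly a one-line change. This is only a sketch: status: :see_other is the 303 status mentioned elsewhere in the thread and may or may not be needed in your setup.
def update
  authorize @label
  if @label.update(label_params)
    # Request the HTML representation explicitly so the redirect response
    # comes back as text/html rather than text/vnd.turbo-stream.html.
    redirect_to document_labels_path(document_id: @document.id, format: :html),
                status: :see_other
  else
    render :new, status: :unprocessable_entity
  end
end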
Jeff's answer is correct but I wanted to share the specific fix for the issue I was having.
If you use HAML or Slim, I've seen more than one codebase where developers name all template files .haml instead of .html.haml (the same goes for Slim). It had never bitten me before using Turbo, but without .html in the filename, Rails doesn't know what format to serve the response in, so it defaults to the request format.
Turbo makes a turbo-stream request when submitting a form, but if the response is a redirect, it expects it to be text/html in order to render it. If it receives a turbo-stream response to a redirect request, Turbo just sits there doing nothing, with no console errors or warnings (terrible default behavior IMO).
So if your templates do not include .html, just add it back and Turbo will render redirects. You may still need status: :see_other.
More information:
https://github.com/hotwired/turbo-rails/issues/122
https://github.com/hotwired/turbo-rails/issues/287
Adding to Jeff Seifert's excellent answer:
If you don't need turbo streams, you may also unregister the turbo-stream content type altogether by putting this into an initializer e.g. config/initializers/turbo.rb:
Rails.application.config.after_initialize do
  Mime::Type.unregister(:turbo_stream)
end

Rails 4 API gives 401 Unauthorized response after successful login using angular2-token package

I have a setup in which a Rails 4 API, using the devise_token_auth gem, is hosted as a separate application, so I also have rack-cors configured to handle cross-origin requests. Using angular2-token on my front-end Angular 2 application I have been able to successfully sign up, sign in, and sign out users via my API.
The issue I have encountered, however, occurs only when the user is signed in: upon refreshing the browser I get this error in the Rails API console as well as in the browser (checked in Firefox as well as Chrome):
Started GET "/api/v1/auth/validate_token" for 127.0.0.1 at 2017-02-06 17:42:49 +0500
Processing by DeviseTokenAuth::TokenValidationsController#validate_token as JSON
followed by
SELECT "users".* FROM "users" WHERE "users"."uid" = $1 LIMIT 1 [["uid", "abc#xyz.com"]]
Completed 401 Unauthorized in 76ms (Views: 0.2ms | ActiveRecord: 0.3ms)
My initial assumption while configuring this package in my Angular 2 app was that it would implicitly include the authentication headers in each request. After repeatedly going through the gem's documentation, I also added the headers myself when initializing the token service in my app.component.ts file.
this._tokenService.init({
  apiPath: API_PATH,
  globalOptions: {
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json',
      "access_token_name": localStorage.getItem('accessToken'),
      "client_name": localStorage.getItem('client'),
      "uid_name": localStorage.getItem('uid')
    }
  }
});
Even after that, the response to the request hasn't changed, and I was unable to receive these headers on the server end either.
However, after hours of inspection an idea finally came to me: inspect the headers I'm getting on the server. When I used request.headers.inspect in my server-side application, I got the following output, which contains the information required to validate the token, but it seems that the keys of these header values are different from what devise_token_auth expects when validating the token (I went through the source of the devise_token_auth gem here):
"HTTP_ACCESS_TOKEN_NAME"=>"xxxxxxxxxxxxxxxxxx", "HTTP_EXPIRY"=>"xxxxxxxxxxxxxxxxxx", "HTTP_UID"=>"abc#xyz.com", "HTTP_CLIENT_NAME"=>"xxxxxxxxxxxxxxxxxx", "HTTP_TOKEN_TYPE"=>"Bearer"
What I believe is that the user is not being set by the devise_token_auth gem based on the headers that are being passed.
After repeatedly going through the documentation of angular2-token as well as the devise_token_auth gem, I am confused about whether or not to manually add headers for authentication, because I believe they are already being passed, just with different keys.
I would just like to know if that is the case. It's been almost a full day and I cannot figure out a way to pinpoint the reason behind the 401 response.
Thanks a lot.
EDITED:
Moreover, I am also getting nil when accessing current_user or any Devise helper after a successful sign-in on the server end.
Here is the rack-cors configuration for my Rails API application as well.
application.rb
config.middleware.use Rack::Cors do
  allow do
    origins '*'
    resource '/cors',
      :headers => :any,
      :methods => [:post],
      :credentials => true,
      :max_age => 0
    resource '*',
      :headers => :any,
      :expose => ['access-token', 'expiry', 'token-type', 'uid', 'client'],
      :methods => [:get, :post, :options, :delete, :put]
  end
end
The headers I get upon inspection are the following:
HTTP_ACCESS_TOKEN
HTTP_CLIENT
HTTP_EXPIRY
HTTP_TOKEN_TYPE
HTTP_UID
These headers are sent even if I don't specify any headers while configuring the angular2-token package.
I am confused about why it lets me log in in the first place and later throws a 401 with the response
{"success":false,"errors":["Invalid login credentials"]}
This happens when I manually check the token's validity using the following code:
this._tokenService.validateToken().subscribe(
  res => console.log(res),
  error => console.log(error)
);
You should also pass Expiry and Token-type on requests for devise_token_auth to authenticate, something like this:
let headers = new Headers();
headers.append('Content-Type', 'application/json');
headers.append('Uid', this.uid);
headers.append('Client', this.client);
headers.append('Access-Token', this.access_token);
headers.append('Expiry', this.expiry);
headers.append('Token-Type', 'Bearer');
this.http.post('http://my-api.com/', JSON.stringify(resource), { headers: headers }).subscribe((res) => {
  // your logic here
});
This example is for generic HTTP requests, but you can apply the same rule to your Angular token plugin, i.e.:
this._tokenService.init({
  apiPath: API_PATH,
  globalOptions: {
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json',
      "access_token_name": localStorage.getItem('accessToken'),
      "client_name": localStorage.getItem('client'),
      "uid_name": localStorage.getItem('uid'),
      "expiry_name": localStorage.getItem('expiry'),
      "token-type_name": "Bearer"
    }
  }
});
Have you set custom header names for devise_token_auth? The first example works with the default configuration, without the _name suffix at the end of the header names; try modifying that if it applies to your setup.
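If custom header names are in play, newer versions of devise_token_auth also let you configure them on the server side. This is only a sketch and assumes your gem version supports the headers_names setting; check your installed version before relying on it:
# config/initializers/devise_token_auth.rb
DeviseTokenAuth.setup do |config|
  # Maps the header names the gem looks for to the names your client actually sends.
  config.headers_names = {
    :'access-token' => 'access-token',
    :'client'       => 'client',
    :'expiry'       => 'expiry',
    :'uid'          => 'uid',
    :'token-type'   => 'token-type'
  }
end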
After spending a few days on this issue and going through multiple related threads, I came across the following issue and realized I was on Rails 4 and had used the rails-api gem to generate my API.
After that I created a Rails 5 API with the --api option (without the rails-api gem), and with devise_token_auth and rack-cors on my API end I was able to send authorized requests using the angular2-token package. Along with that, I was also able to send authorized HTTP POST requests with the authorization headers access-token, token-type, expiry, and uid, as mentioned in the devise_token_auth gem's documentation.
This might not be the exact solution, and I may not have pinpointed the cause of the issue, but this is what worked for me.

Googlebot causes an invalid Cross Origin Request (COR) on Rails 4.1

How do I prevent Google from causing this error while crawling the site? I am not interested in turning off "protect_from_forgery" unless it is safe to do so.
[fyi] method=GET path=/users format=*/* controller=users action=show status=200 duration=690.32 view=428.25 db=253.06 time= host= user= user_agent=Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) session= params={""} ()
[hmm] Security warning: an embedded <script> tag on another site requested protected JavaScript. If you know what you're doing, go ahead and disable forgery protection on this action to permit cross-origin JavaScript embedding. (pid:)
[fyi] method=GET path=/users/123/flag format=*/* controller=users action=flag status=500 error='ActionController::InvalidCrossOriginRequest:Security warning: an embedded <script> tag on another site requested protected JavaScript. If you know what you're doing, go ahead and disable forgery protection on this action to permit cross-origin JavaScript embedding.' duration=26.50 time= host= user= user_agent= session= params= (pid)
[omg] ActionController::InvalidCrossOriginRequest (Security warning: an embedded <script> tag on another site requested protected JavaScript. If you know what you're doing, go ahead and disable forgery protection on this action to permit cross-origin JavaScript embedding.):
actionpack (4.1.4) lib/action_controller/metal/request_forgery_protection.rb:217:in `verify_same_origin_request'
The controller responds with this
respond_to do |format|
  format.js { render template: 'users/flag', layout: "some_layout" }
end
I am unable to recreate the bug, and it seems to work fine when I do it through my browser.
So far I've looked at the following resources, but most seem to suggest just blindly turning off CSRF protection, or are unanswered.
Using layout specific javascript in comfy leads to InvalidCrossOriginRequest
Invalid Cross Origin Request After Upgrading to Rails 4.1
How to avoid ActionController::InvalidCrossOriginRequest exception?
Googlebot asks for png and then my whole Heroku site crashes. What is going on?
https://github.com/rails/rails/pull/13345
http://myownpirateradio.com/tag/rails-authentication-token/
https://gist.github.com/aishek/8535082
Why does Google prepend while(1); to their JSON responses?
http://www.tsheffler.com/blog/?p=428
http://edgeapi.rubyonrails.org/classes/ActionController/RequestForgeryProtection.html
To clarify:
The action should be protected from CSRF, but I want to prevent Google from crawling it or from generating an error when it crawls the page. That is, I want the false-positive security warnings to go away without actually compromising my security features.
Googlebot is using the format */* (http://apidock.com/rails/Mime) and the application renders the js response since it's the only thing available. Since the request is remote, it correctly triggers the InvalidCrossOriginRequest error.
This was reproducible using:
curl -H "Accept: */*" https://www.example.com/users/123/flag
The fix is to have an HTML fallback resource for the spider to crawl:
respond_to do |format|
  format.html { render template: 'users/flag' }
  format.js { render template: 'users/flag', layout: "some_layout" }
end
As per "CSRF protection from remote tags " from the rails guide:
In the case of tests, where you also doing the client, change from:
get :index, format: :js
To:
xhr :get, :index, format: :js
http://edgeguides.rubyonrails.org/upgrading_ruby_on_rails.html#csrf-protection-from-remote-script-tags
In case you want this route to skip the CSRF check, whitelist it using something like:
protect_from_forgery :except => :create

InvalidAuthenticityToken after using Jquery File Upload

I have a Rails 4.0 app with an Ember.js frontend. I'm using Ember-Auth in conjunction with Devise to handle authentication. For the most part, everything works. However, if I use jQuery File Upload, all subsequent queries to the server result in an InvalidAuthenticityToken error. The file upload itself works perfectly, but if, for instance, I visit the Organizations index page afterwards, I'll get the error. If I reload the page, the errors stop and everything works fine again until I perform another upload.
The uploader looks like this:
didInsertElement: ->
  $('#image_upload').fileupload
    url: "/images"
    formData: [{ name: 'auth_token', value: Whistlr.Auth.get('authToken') }]
    success: (response) =>
      @get('parentView').get('controller').set('image_token', response.token)
Even if I remove everything but the url, I get the InvalidAuthenticityToken afterwards. Any idea what's happening?
Following my suspicion that the session was getting reset, I tried passing the authenticity_token back to the client side from the server. Sure enough, it was changing. So I manually placed the new authenticity_token in the csrf-token meta tag. The code looks like this:
def create
  image = Image.create(image_params)
  render json: {image: image, authenticity_token: form_authenticity_token}, status: 201
end
$('#image_upload').fileupload
  url: "/images"
  dataType: "json"
  formData: [{ name: 'auth_token', value: Whistlr.Auth.get('authToken') }]
  success: (response) =>
    @get('parentView').get('controller').set('image_token', response.image.token)
    $('meta[name="csrf-token"]').attr('content', response.authenticity_token)
Note the last line of the JavaScript, which replaces the csrf-token. Thankfully, this works. As far as I can tell, it's also secure (if you see a security flaw, please let me know!). But it still seems weird that this is necessary. Nowhere else in my app do I have to manually replace the csrf-token. Why is this happening here? Is it because the session is getting reset, and if so, why does that happen here but not elsewhere?
