In Rails, how can I set a cookie's Max-Age?

In Rails 3.2.8, you can write a cookie like this:
cookies[:login] = {
  :value   => "XJ-122",
  :expires => 1.hour.from_now
}
The docs say that these are the available option symbols:
:value
:path
:domain
:expires
:secure
:httponly
I would expect :max_age to be available as an option too, but perhaps user agent support is not widespread enough yet (?) to warrant including it.
So how should I set a cookie's Max-Age in Rails?

I read over the Rails source code for ActionDispatch::Cookies. If you look at how the handle_options method is used, you will see that even options not specified in the documentation are passed through. Rails usually passes options around quite liberally, with the philosophy that, somewhere down the line, a method will know what to do with the leftover options.
So, I would suggest that you give it a try with the :max_age option, even though it is not documented, and see what happens.
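Something along these lines (an untested sketch; whether the attribute survives depends on your Rack version, as noted below):

# Untested: :max_age is undocumented, but handle_options should pass it
# through to Rack. Max-Age is a lifetime in seconds, not a Time; older Rack
# versions concatenate the raw value into the header, so a string may be safer.
cookies[:login] = {
  :value   => "XJ-122",
  :max_age => "3600"   # one hour, in seconds
}

You can then check the response with curl -D - to see whether a max-age attribute actually shows up in the Set-Cookie header.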
Note: Rails relies on Rack to set the cookie header, so if for some reason the Max-Age attribute is being passed to Rack but is not making it into the Set-Cookie header, I would ask over on the GitHub Rack issue tracker.
Update #1: there has been at least one pull request having to do with Max-Age and Rack, but I'm not sure it is relevant. If the above doesn't work, you may want to discuss it on the Rack issue tracker, as I mention above.
Update #2: Have you looked at the Rack::Cache middleware? It may be of use.

Related

(JSON::ParserError) "{N}: unexpected token at 'alihack<%eval request(\"alihack.com\")%>

I have the website on Ruby on Rails 3.2.11 and Ruby 1.9.3.
What can cause the following error:
(JSON::ParserError) "{N}: unexpected token at 'alihack<%eval request(\"alihack.com\")%>
I have several errors like this in the logs. All of them try to eval request(\"alihack.com\").
Part of the log file:
"REMOTE_ADDR" => "10.123.66.198",
"REQUEST_METHOD" => "PUT",
"REQUEST_PATH" => "/ali.txt",
"PATH_INFO" => "/ali.txt",
"REQUEST_URI" => "/ali.txt",
"SERVER_PROTOCOL" => "HTTP/1.1",
"HTTP_VERSION" => "HTTP/1.1",
"HTTP_X_REQUEST_START" => "1407690958116",
"HTTP_X_REQUEST_ID" => "47392d63-f113-48ba-bdd4-74492ebe64f6",
"HTTP_X_FORWARDED_PROTO" => "https",
"HTTP_X_FORWARDED_PORT" => "443",
"HTTP_X_FORWARDED_FOR" => "23.27.103.106, 199.27.133.183".
199.27.133.183 is a CloudFlare IP.
"REMOTE_ADDR" => "10.93.15.235", "10.123.66.198" and the others are, I think, fake proxy IPs.
Here's a link where a guy has the same issue with his website, from the same IP address (23.27.103.106).
To sum up, the common IP across all the errors is 23.27.103.106, and they try to run a script using Ruby's eval.
So my questions are:
What type of vulnerability are they trying to find?
What should I do? Block the IP?
Thank you in advance.
Why does it happen?
It seems like an attempt to at least test for, or exploit, a remote code execution vulnerability. Potentially a generic one (targeting a platform other than Rails), or one that existed in earlier versions.
The actual error, however, stems from the fact that the request is an HTTP PUT with application/json headers, but the body isn't valid JSON.
To reproduce this on your dev environment:
curl -D - -X PUT --data "not json" -H "Content-Type: application/json" http://localhost:3000
More details
Rails' ActionDispatch tries to parse any JSON request by passing the body to the JSON decoder:
# lib/action_dispatch/middleware/params_parser.rb
def parse_formatted_parameters(env)
  ...
  strategy = @parsers[mime_type]
  ...
  case strategy
  when Proc
    ...
  when :json
    data = ActiveSupport::JSON.decode(request.body)
    ...
In this case the body isn't valid JSON, so the error is raised, causing the server to report a 500.
Possible solutions
I'm not entirely sure what the best strategy is for dealing with this. There are several possibilities:
you can block the IP address using iptables
filter (PUT or all) requests to /ali.txt within your nginx or apache configs.
use a tool like the rack-attack gem and apply the filter there (see this rack-attack issue).
use the request_exception_handler gem to catch the error and handle it from within Rails (See this SO answer and this github issue)
block PUT requests within Rails' routes.rb to all URLs but those that are explicitly allowed. It looks like, in this case, the error is raised even before the request reaches Rails' routes, so this might not be possible.
Use the rack-robustness middleware and catch the JSON parse error with something like the configuration sketched after this list, placed in config/application.rb.
Write your own middleware. Something along the lines of the stuff on this post
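For option #6, a sketch of what the rack-robustness configuration could look like (a sketch based on the gem's README; double-check the DSL against the version you install, and note that the exact exception class to rescue depends on your Rails and multi_json versions):

# config/application.rb
config.middleware.insert_before ActionDispatch::ParamsParser, Rack::Robustness do |g|
  g.no_catch_all                        # only rescue the errors listed below
  g.status 400                          # answer bad JSON with a 400 instead of a 500
  g.content_type 'text/plain'
  g.body 'Bad request: malformed JSON'
  g.on(MultiJson::LoadError) { |ex| 400 }
  g.on(JSON::ParserError)    { |ex| 400 }
end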
I'm currently leaning towards options #3, #4 or #6, all of which might come in handy for other types of bots/scanners or other invalid requests that might pop up in the future...
Happy to hear what people think about the various alternative solutions.
I saw some weird log entries on my own site (which doesn't use Ruby) and Google took me to this thread. The IP on my entries was different (120.37.236.161).
After poking around a bit more, here is my educated guess (mostly speculation):
First, in my own logs I saw a reference to http://api.alihack.com/info.txt - I checked this link out; it looked like an attempt at a PHP injection.
There's also a "register.php" page there - submitting it takes you to an "invite.php" page.
Further examination of this domain took me to http://www.alihack.com/2014/07/10/168.aspx (page is in Chinese but Google Translate helped me out here)
I expect this "Black Spider" tool has been modified by script kiddies and is being used as a carpet bomber to attempt to find any sites which are "vulnerable."
It might be prudent to just add an automatic denial of any attempt including the "alihack" substring to your configuration.
I had a similar issue show up in my Rollbar logs, a PUT request to /ali.txt
Best just to block that IP; I only saw one request on my end with this error. The request I received came from France -> http://whois.domaintools.com/37.187.74.201
If you use nginx, add this to your nginx conf file:
deny 23.27.103.106/32;
deny 199.27.133.183/32;
For Rails-3 there is a special workaround-gem: https://github.com/infopark/robust_params_parser

How to view content security policy violation reports in rails app?

I used the secure_headers gem (https://github.com/twitter/secureheaders) and I configured the CSP as:
config.csp = {
  :enforce     => true,
  :default_src => 'http://* inline',
  :report_uri  => "/report",
  :connect_src => 'self',
  :style_src   => 'self inline',
  :script_src  => 'self inline eval',
  :font_src    => 'self'
}
but I still can't view the reports at http://localhost:3000/report, and the page is not redirecting.
EDIT:
https://report-uri.io/ offers CSP reporting capabilities. They give you a report-uri, and they manage the incoming reports!
Currently the gem does not have any built-in support for aggregating/viewing the reports. This question got me thinking so I filed https://github.com/twitter/secureheaders/issues/71
Please add your thoughts. I don't think it is a trivial task to build something meaningful, but I'm beginning to see how valuable it could be. There's also a good amount of low hanging fruit that might be good enough for the time being.
The Secure Headers Gem does not provide a reporting endpoint for CSP violations. It is something you would have to build yourself or use a solution that provides both out of the box.
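A minimal, hypothetical endpoint for the /report URI configured above might look like this (the route, controller name, and logging strategy are all assumptions, not part of secure_headers):

# config/routes.rb
post "/report", :to => "csp_reports#create"

# app/controllers/csp_reports_controller.rb
class CspReportsController < ApplicationController
  # Browsers POST violation reports without a CSRF token
  # (use skip_before_action on newer Rails versions)
  skip_before_filter :verify_authenticity_token

  def create
    report = JSON.parse(request.body.read)
    Rails.logger.warn("CSP violation: #{report['csp-report'].inspect}")
    head :ok
  end
end

From there you can aggregate the logged reports however you like.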
I posted an overview of the different ways of deploying a Content Security Policy with Ruby on Rails, including SecureHeaders Gem and Templarbit (which includes a reporting endpoint): https://www.templarbit.com/blog/2018/03/14/content-security-policy-with-ruby-on-rails

Rails memcache store default auto expiration time

I have been struggling for a while to find out whether Rails sets a default expiration time when we don't provide one while storing a key-value pair in memcache.
e.g. Rails.cache.write('some-key', 'some-value')
Would Rails set some expiration time by default if we haven't specified one?
If you're using the default, built-in MemCacheStore class provided by Rails, then no. It won't assume an expiry time when you create new cache entries. You can read the applicable code to verify that. It checks to see if you've passed an expires_in option to the #write method like
Rails.cache.write("key", "content", expires_in: 2.hours)
and if you haven't, simply passes 0 to memcache indicating no expiry time. Hope this helps!
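For reference, the check in the Rails 3.x MemCacheStore source looks roughly like this (paraphrased from memory; details vary by version):

# ActiveSupport::Cache::MemCacheStore#write_entry, paraphrased
def write_entry(key, entry, options)
  method = options && options[:unless_exist] ? :add : :set
  expires_in = options[:expires_in].to_i   # nil.to_i == 0, i.e. "never expire"
  @data.send(method, escape_key(key), entry, expires_in, options)
end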
If you are using the newer (and I think better) Dalli memcached gem, you can configure it at the adapter-level using a line like the following:
config.cache_store = :dalli_store, 'cache-1.example.com', 'cache-2.example.com',
  { :namespace => NAME_OF_RAILS_APP, :expires_in => 1.day }
See the README for a detailed explanation of the :expires_in option. Overall, I think Dalli is worth checking out for more than just this feature; it's also faster and supports some newer authentication features, etc.

Suppressing ActionView::MissingTemplate exception for Rails 3.x

Starting with Rails 3.0, from time to time, I've been receiving an exception notification like this:
ActionView::MissingTemplate: Missing template [...] with {:locale=>[:en],
:formats=>[:text], :handlers=>[:erb, :builder, :haml]}. Searched in: * [...]
For instance, an arbitrary hand-written URL like http://example.com/some/path/robots.txt raises the error. Not fun.
I reported the problem in this ticket quite a long time ago, and have been using the patch mentioned here, but the problem persists.
https://rails.lighthouseapp.com/projects/8994/tickets/6022-content-negotiation-fails-for-some-headers-regression
A fix is suggested in this blog post,
http://trevorturk.wordpress.com/2011/12/09/handling-actionviewmissingtemplate-exceptions/
To use this:
respond_to do |format|
  format.js
end
But it doesn't feel right to me, as I'm not interested in overloading an action with multiple formats. In my app, there are separate URLs for HTML and the JSON API, so a simple render should be sufficient.
Should I just swallow the exception by rescue_from ActionView::MissingTemplate and return 406 myself?
Is there a better way to handle this situation?
Or I can ask this way - in the first place, is there any real-world usefulness in raising this kind of exception on production?
If you've no need for formatted routes you can disable them with :format => false in your route specification, e.g.
get '/products' => 'products#index', :format => false
This will generate a RoutingError which gets converted to a 404 Not Found. Alternatively you can restrict it to a number of predefined formats:
get '/products' => 'products#index', :format => /(?:|html|json)/
If you want a formatted url but want it restricted to a single format then you can do this:
get '/products.json' => 'products#index', :format => false, :defaults => { :format => 'json' }
There are a number of valid reasons to raise this error in production - a missing file from a deploy, for example, or perhaps you'd want notification of someone trying to hack your application's URLs.
The approach that worked best for me is this, in application_controller.rb:
rescue_from ActionView::MissingTemplate, with: :not_found
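This assumes you define the handler yourself; a minimal version (the method body is up to you) could be:

# application_controller.rb, continued
private

def not_found
  head :not_found   # respond with a 404 and an empty body
end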
After some source diving I found another way. Put this in an initializer.
ActionDispatch::ExceptionWrapper.rescue_responses.merge! 'ActionView::MissingTemplate' => :not_found
If you have a resource that will only ever be served in one format and you want to ignore any Accept header and simply force it to always output the default format you can remove the format from the template filename. So for instance, if you have:
app/views/some/path/robots.txt.erb
You can change it to simply
app/views/some/path/robots.erb
Some schools of thought would say this is a bad thing, since you are returning data in a different format from what was requested; however, in practice there are a lot of misbehaving user agents, not every site carefully filters content-type requests, and consistently returning the same thing is predictable behavior, so I think this is a reasonable way to go.
Try adding
render nothing: true
at the end of your method.
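For instance (a sketch; the action name is illustrative):

def robots
  # Renders an empty body; no template lookup takes place,
  # so ActionView::MissingTemplate can't be raised.
  render nothing: true
end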
If there are specific paths that periodically get called and generate errors, and they are the same set of URLs that get called regularly (i.e., robots.txt or whatever), the best thing to do, if you can, is to keep them from hitting your Rails server to begin with.
How to do this depends on your server stack. One way is to block the URL directly in Rack, before it is passed into Rails.
Another way may be to block it in nginx or Unicorn, depending on which web listener you're using for your app.
I'd recommend looking into this and then coming back and posting an additional question here on "How to block URLs using Rack?" (or Unicorn or nginx, or wherever you think it makes sense to block access).
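As a starting point, a bare-bones Rack middleware along those lines might look like this (the class name and path list are illustrative, not taken from any of the posts above):

# lib/block_paths.rb
class BlockPaths
  def initialize(app, paths)
    @app   = app
    @paths = paths
  end

  def call(env)
    # Answer known noise paths directly, without ever touching Rails
    if @paths.include?(env["PATH_INFO"])
      [404, { "Content-Type" => "text/plain" }, ["Not Found"]]
    else
      @app.call(env)
    end
  end
end

# config/application.rb
# config.middleware.insert_before Rails::Rack::Logger, BlockPaths, ["/some/path/robots.txt"]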

How do I set the HttpOnly flag on a cookie in Ruby on Rails

The page Protecting Your Cookies: HttpOnly explains why making HttpOnly cookies is a good idea.
How do I set this property in Ruby on Rails?
Set the 'http_only' option in the hash used to set a cookie
e.g. cookies["user_name"] = { :value => "david", :httponly => true }
or, in Rails 2:
e.g. cookies["user_name"] = { :value => "david", :http_only => true }
Re Laurie's answer:
Note that the option was renamed from :http_only to :httponly (no underscore) at some point.
In actionpack 3.0.0, that is, Ruby on Rails 3, all references to :http_only are gone.
That threw me for a while.
Just set :http_only to true as described in the changelog.
If you’ve a file called config/session_store.rb including this line (Rails 3+), then it’s automatically set already.
config/initializers/session_store.rb:
# Be sure to restart your server when you modify this file.
Rails.application.config.session_store :cookie_store, key: "_my_application_session"
Rails also allows you to set the following keys:
:expires - The time at which this cookie expires, as a Time object.
:secure - Whether this cookie is only transmitted to HTTPS servers. Default is false.
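Combining several of these (the values are illustrative):

cookies["user_name"] = {
  :value    => "david",
  :expires  => 1.week.from_now,   # a Time object
  :secure   => true,              # only sent over HTTPS
  :httponly => true               # not readable from JavaScript
}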
I also wrote a patch that is included in Rails 2.2, which defaults the CookieStore session to be http_only.
Unfortunately session cookies are still by default regular cookies.
