I have a Rails application that has to co-exist with a very old legacy application. The legacy application looks for a cookie whose value contains a specific string of characters, and those characters often include slashes. The problem is that when the Rails application writes the cookie it first URL-encodes it, which breaks the legacy app because the cookie value is no longer correct.
I had this working in Rails 1.13.5 by editing the file cookie_performance_fix.rb (Path: ./actionpack-1.13.5/lib/action_controller/cgi_ext/cookie_performance_fix.rb)
In order to get this to work I changed the code as shown:
def to_s
  buf = ""
  buf << @name << '='
  if @value.kind_of?(String)
    # original Rails code:
    # buf << CGI::escape(@value)
    buf << @value
  else
    # buf << @value.collect{|v| CGI::escape(v) }.join("&")
    buf << @value.collect{|v| v }.join("&")
  end
  # (rest of the method is unchanged)
end
This actually worked fine until I decided to upgrade Rails to version 2.3.2.
In Rails 2.3.2 the cookie_performance_fix.rb file no longer exists. I looked in the same directory and found a file called cookie.rb, which I tried modifying in a similar fashion:
def to_s
  buf = ''
  buf << @name << '='
  # buf << (@value.kind_of?(String) ? CGI::escape(@value) : @value.collect{|v| CGI::escape(v) }.join("&"))
  buf << (@value.kind_of?(String) ? @value : @value.collect{|v| v }.join("&"))
  buf << '; domain=' << @domain if @domain
  buf << '; path=' << @path if @path
  buf << '; expires=' << CGI::rfc1123_date(@expires) if @expires
  buf << '; secure' if @secure
  buf << '; HttpOnly' if @http_only
  buf
end
This unfortunately does not seem to work; the cookie still gets URL-encoded in Rails 2.3.2. I know that turning off URL-encoding is not the best idea, but I don't have much choice until the legacy application is retired. I do not have access to the legacy code to add support for URL-decoding the cookie, so I have to make sure the legacy cookie is written with the correct character sequence, slashes included. If anyone can tell me how to turn off URL-encoding in Rails 2.3.2 it would be greatly appreciated.
Thanks.
After doing some digging I have found the answer to my question and I am documenting it here in case it is of use to anyone else.
In order to turn off URL-encoding in Rails 2.3.2 it is necessary to edit the following file: actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/response.rb
Around line 70 the key and value of the cookie are set. I made the following change to turn off URL-encoding:
cookie = Utils.escape(key) + "=" +
         # value.map { |v| Utils.escape v }.join("&") +
         value.map { |v| v }.join("&") +
         "#{domain}#{path}#{expires}#{secure}#{httponly}"
NOTE: This modification only affects standard cookies - not the cookies used as session data by Rails in version 2.3.2.
DISCLAIMER: I am in no way recommending this modification as a best practice. It was done only to handle a legacy requirement that expected the cookie in a particular format. A better option would have been to modify the legacy code to handle URL-encoded cookies. Unfortunately, that option was closed to me, so I was forced to hack on the underlying Rails code, which is not something I would generally recommend. It should go without saying that this type of modification risks having to be re-addressed every time you upgrade your Rails installation, since the underlying code may change; that is exactly what happened in my case. There are also probably good reasons (security, standards compliance, etc.) for keeping the URL-encoding if at all possible.
A simple way to do this is to set the cookie header directly on the Rack response:
response["set-cookie"] = "id_cookie=this cookie will not be escaped"
For a PoC I'm building I need to emulate the TRACE method in lighttpd. Using mod_magnet and Lua I can reconstruct the request using the method/version/headers functions. But by doing so, the header order, case, etc. will differ from the original.
Is it possible to improve upon this by accessing the raw headers, as received? Or is there a better way to achieve the desired outcome?
https://wiki.lighttpd.net/DebugVariables
debug.log-request-header = "enable"
If you need access to the raw bytes as lighttpd processes them, then you will need to modify the lighttpd source code.
In the end I just used the mod_magnet module plus Lua, and whilst not perfect, it is actually close enough not to matter. Et voilà, a TRACE response:
lighty.header["Content-Type"] = "message/http"
lighty.r.resp_body.add(lighty.r.req_attr["request.method"] .. " " .. lighty.r.req_attr["request.orig-uri"] .. " HTTP/1.1\n")
for k, v in pairs(lighty.r.req_header) do
  lighty.r.resp_body.add( k .. ": " .. v .. "\n")
end
lighty.r.resp_body.add( "\n" )
return 200
I want to check if the URL that the user inputs is in fact a valid page.
I tried:
if Nokogiri::HTML(open("http://example.com"))
  # DO REQUIRED TASK
end
But that immediately throws an error when it attempts to open the page. I want to get back a result telling me whether it is a document of any kind.
I either get the error:
no such file or directory
or:
getaddrinfo: Name or service not known
depending on how I try to make the check.
I'd start with something like:
require 'nokogiri'
require 'open-uri'
begin
  doc = Nokogiri.HTML(open(url))
rescue Exception => e
  puts "Couldn't read \"#{ url }\": #{ e }"
  exit
end

puts doc.errors.empty? ? "No problems found" : doc.errors
Nokogiri sets the document's errors array to the values of any errors that occurred during the parsing process.
This only addresses one part of the issue though. Malicious people like to break things, and this would be very easy to break. In general, be very careful about anything a user gives you, especially if your site is exposed to the wild internet.
Prior to telling OpenURI to load the file to give to Nokogiri, you should sniff that URL and do some sanity checks using an HTTP HEAD request to find out the size and MIME type of the content being retrieved. Once you know those, you can try loading the file.
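A minimal sketch of that HEAD-request check with Net::HTTP from the standard library; the content-type and size limits below are arbitrary examples, not requirements:

require 'net/http'
require 'uri'

# Returns true if the URL answers a HEAD request and looks like a
# reasonably sized HTML page. Limits here are illustrative only.
def looks_fetchable?(url)
  uri = URI.parse(url)
  response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.head(uri.request_uri)
  end
  response.is_a?(Net::HTTPSuccess) &&
    response['Content-Type'].to_s.include?('text/html') &&
    response['Content-Length'].to_i < 5_000_000
rescue SocketError, Errno::ECONNREFUSED, URI::InvalidURIError
  false
end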
Firstly, it's bad style to 'rescue Exception => e' in Ruby.
[Refer: http://daniel.fone.net.nz/blog/2013/05/28/why-you-should-never-rescue-exception-in-ruby/ ]
Secondly, for this case, "rescue OpenURI::HTTPError => e" would be more suitable.
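Applied to the snippet above, that narrower rescue might look like this sketch:

require 'nokogiri'
require 'open-uri'

begin
  doc = Nokogiri::HTML(open(url))
  puts doc.errors.empty? ? "No problems found" : doc.errors
rescue OpenURI::HTTPError => e
  # Raised for non-2xx HTTP responses; other network failures would need
  # their own rescue clauses.
  puts "Couldn't read \"#{url}\": #{e.message}"
end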
I'm not that familiar with handling exceptions, but something like:

begin
  page = Nokogiri::HTML(open("http://example.com"))
rescue
  puts "not a document of any kind"
end

do_something_with(page) if page
...should do the trick.
or (after reading your comment):

begin
  page = open("http://example.com")
rescue
  puts "not a document of any kind"
end

Nokogiri::HTML(page) if page
Using Rails 3.1.1 and Heroku.
I have 1,000 products in my app. They all go through a very slow controller action, which is effectively solved by fragment caching. Although the data doesn't change very often, it still needs to expire periodically (which I do by sweeping), in my case once a week.
Now, after sweeping the cached views I don't want my users to be the ones regenerating the fragments by accessing the products one after another (the first load takes about 6-8 seconds, a cached load 2-3 seconds). I assume I can avoid that with some sort of script that loads each product page one by one and thus makes the server create those fragments.
I can imagine this can be handled in three ways:
Run a script on my local machine that accesses each URL with some sort of GET request. Downside: not very pretty, and it will skew my visitor stats in a way I would rather avoid.
Run the same kind of script on the server after the sweeper, loading each product. How would I do that, in that case?
Use a smart Rails command to do this automatically. Is there such an elegant command?
I made this script and it works. The product.slug part is because I have friendly_id installed; it produces URLs such as www.mydomain.com/productabc-123/, which are then read by Nokogiri (the Nokogiri gem is needed for this solution).
PLEASE NOTE THAT I SWITCHED FROM FRAGMENT CACHING TO ACTION CACHING IN THIS SOLUTION (as opposed to the question, where I am using fragment caching). The important difference is the cache check Rails.cache.exist?('views/www.mydomain.com/' + product.slug); for fragment caching the fragment name should go there instead.
require 'nokogiri'
require 'open-uri'

Product.all.each do |product|
  url = 'http://www.mydomain.com/' + product.slug
  begin
    if Rails.cache.exist?('views/www.mydomain.com/' + product.slug)
      puts url + " is already in cache"
    else
      doc = Nokogiri::HTML(open(url))
      puts "Reads " + url
      # Verifies that the caching worked. Only for troubleshooting.
      if Rails.cache.exist?('views/www.mydomain.com/' + product.slug)
        puts "--->" + url + " is NOW in the cache"
      else
        puts "--->" + url + " is still not in the cache!"
      end
      sleep 1
    end
  rescue Timeout::Error
    puts 'Timeout rescue of ' + url
    puts 'Sleep for 5 sec'
    sleep 5
    retry
  rescue
    puts 'Normal rescue of ' + url
  end
end
Create a script that runs as a rake task, or better yet a worker, and curls each page. There is no need to include a gem when you can just call curl:
`curl -A "CacheRefresher" #{ENV['HOSTNAME']}/api/v1/#{klass.name.underscore.pluralize}/#{id} >/dev/null 2>&1`
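As a rough sketch of how that could be wrapped in a rake task (the namespace, task name, and URL scheme here are hypothetical, and this variant hits the product pages from the question rather than the API route in the line above):

# lib/tasks/cache_warm.rake (hypothetical file)
namespace :cache do
  desc "Re-warm the product page cache after the sweeper runs"
  task :warm => :environment do
    Product.find_each do |product|
      url = "http://#{ENV['HOSTNAME']}/#{product.slug}"
      # Output is discarded; the request itself rebuilds the cache.
      `curl -A "CacheRefresher" #{url} >/dev/null 2>&1`
      sleep 1
    end
  end
end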
I am trying to interact with a third-party real-time web messaging system created and maintained by Pusher.com. I cannot send anything through the API unless I produce an HMAC SHA256 hex digest of my data. A sample written in Ruby illustrates this:
# Dependencies
# gem install ruby-hmac
#
require 'rubygems'
require 'hmac-sha2'
secret = '7ad3773142a6692b25b8'
string_to_sign = "POST\n/apps/3/channels/test_channel/events\nauth_key=278d425bdf160c739803&auth_timestamp=1272044395&auth_version=1.0&body_md5=7b3d404f5cde4a0b9b8fb4789a0098cb&name=foo"
hmac = HMAC::SHA256.hexdigest(secret, string_to_sign)
puts hmac
# >> 309fc4be20f04e53e011b00744642d3fe66c2c7c5686f35ed6cd2af6f202e445
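(As an aside, and assuming a Ruby with the OpenSSL standard library available, the same hex digest can be produced without the ruby-hmac gem; this is only a side note, not part of the original question:)

require 'openssl'

secret = '7ad3773142a6692b25b8'
string_to_sign = "POST\n/apps/3/channels/test_channel/events\nauth_key=278d425bdf160c739803&auth_timestamp=1272044395&auth_version=1.0&body_md5=7b3d404f5cde4a0b9b8fb4789a0098cb&name=foo"

# Should print the same lowercase hex digest as the sample above.
puts OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA256'), secret, string_to_sign)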
I checked the Erlang crypto library and I cannot even generate a SHA256 hex digest "directly".
How do I do this whole thing in Erlang? Help...
UPDATE:
I have found solutions here: sha256 encryption in erlang, and they have led me to erlsha2. But still, how do I generate the HMAC of a SHA256 hex digest output with this module?
With erlsha2, use the following to get the equivalent of your Ruby code:
1> hmac:hexlify(hmac:hmac256(<<"7ad3773142a6692b25b8">>, <<"POST\n/apps/3/channels/test_channel/events\nauth_key=278d425bdf160c739803&auth_timestamp=1272044395&auth_version=1.0&body_md5=7b3d404f5cde4a0b9b8fb4789a0098cb&name=foo">>)).
"309FC4BE20F04E53E011B00744642D3FE66C2C7C5686F35ED6CD2AF6F202E445"
I just stumbled through this myself and finally managed it just using crypto, so thought I would share. For your usage I think you would want:
:crypto.hmac(:sha256, secret, string_to_sign) |> Base.encode16
The hmac portion should take care of the digest + HMAC, and piping to Base.encode16 provides the hex part. I imagine you probably moved on some time ago, but since I just had the same issue and wanted to figure it out with stdlib tools, I thought I would share.
The same project (erlsha2) has a module for this:
https://github.com/vinoski/erlsha2/blob/master/src/hmac.erl
If you're using Elixir, you can use
:crypto.hash(:sha256, [secret, string_to_sign])
|> Base.encode16
|> String.downcase
This is a one-liner (Erlang 24):
[begin if N < 10 -> 48 + N; true -> 87 + N end end ||
<<N:4>> <= crypto:mac(hmac, sha256, Secret1, StringToSign1)].
>>> "309fc4be20f04e53e011b00744642d3fe66c2c7c5686f35ed6cd2af6f202e445"
No need for external libs.
I have been using open_uri to pull down an ftp path as a data source for some time, but suddenly found that I'm getting nearly continual "530 Sorry, the maximum number of allowed clients (95) are already connected."
I am not sure if my code is faulty or if it is someone else accessing the server, and unfortunately there's no real way for me to know for sure who's at fault.
Essentially I am reading FTP URI's with:
def self.read_uri(uri)
  begin
    uri = open(uri).read
    uri == "Error" ? nil : uri
  rescue OpenURI::HTTPError
    nil
  end
end
I'm guessing that I need to add some additional error handling code in here...
I want to be sure I take every precaution to close down all connections so that mine are not the problem in question; however, I thought that open_uri + read would take this precaution, versus using the Net::FTP methods.
The bottom line is I've got to be 100% sure that these connections are being closed and that I don't somehow have a bunch of open connections lying around.
Can someone please advise as to correctly using read_uri to pull in ftp with a guarantee that it's closing the connection? Or should I shift the logic over to Net::FTP which could yield more control over the situation if open_uri is not robust enough?
If I do need to use the Net::FTP methods instead, is there a read method that I should be familiar with vs pulling it down to a tmp location and then reading it (as I'd much prefer to keep it in a buffer vs the fs if possible)?
I suspect you are not closing the handles. OpenURI's docs start with this comment:
It is possible to open http/https/ftp URL as usual like opening a file:
open("http://www.ruby-lang.org/") {|f|
  f.each_line {|line| p line}
}
I looked at the source and the open_uri method does close the stream if you pass a block, so, tweaking the above example to fit your code:
uri = ''
open("http://www.ruby-lang.org/") {|f|
  uri = f.read
}
Should get you close to what you want.
Here's one way to handle exceptions:
# The list of URLs to pass in to check if one times out or is refused.
urls = %w[
http://www.ruby-lang.org/
http://www2.ruby-lang.org/
]
# the method
def self.read_uri(urls)
  content = ''
  open(urls.shift) { |f| content = f.read }
  content == "Error" ? nil : content
rescue OpenURI::HTTPError
  retry if urls.any?
  nil
end
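If you do end up shifting to Net::FTP for more control, here is a minimal sketch of reading a file straight into a string buffer (host, credentials, and path are placeholders; the block form of Net::FTP.open closes the connection when the block exits):

require 'net/ftp'

def self.read_ftp(host, path)
  buffer = ''
  Net::FTP.open(host) do |ftp|
    ftp.login                # anonymous login; pass credentials if needed
    ftp.passive = true
    # Stream the file into memory instead of a temp file on disk.
    ftp.retrbinary("RETR #{path}", 4096) { |chunk| buffer << chunk }
  end
  buffer
rescue Net::FTPError, SocketError
  nil
end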
Try using a block:
data = open(uri){|f| f.read}