Heroku Geolocation Always Returns "Seattle, WA"

In Rails 3, both the geo_ip and geo_location gems return accurate results on my local machine, but once deployed to Heroku they persistently return "Seattle, WA" (I'm located in Pennsylvania).
I did some digging, and found that the Heroku shared database I'm using is located in Seattle. Can anyone point me in the right direction for how to handle this situation? Again, while running locally the geolocation is working as intended.
Thanks!!

If you are using hostname-based SSL on Heroku, there is currently no way to get the request's original IP. See this thread:
http://groups.google.com/group/heroku/browse_thread/thread/8cd2cba55f9aeb19
On that thread, someone mentioned http://jsonip.com/, which does what you'd expect:
ultramarine:~ jdc$ curl http://jsonip.com/
{"ip":"131.247.152.2"}
My plan is to make an Ajax request to jsonip, then pass the IP back to the server and geolocate it using geoip or geokit. I tried it with geoip thusly:
ruby-1.9.2-p136 :004 > c = GeoIP.new('GeoIP.dat').country('131.247.152.2')
=> ["131.247.152.2", "131.247.152.2", 225, "US", "USA", "United States", "NA"]
(It seems like geokit will be easier to deal with because it doesn't require me to manage a .dat file of IP mappings. I expect to end up using that.)
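To sketch that plan end to end (the controller, route, and parameter names below are made up for illustration, not part of the original answer): the page asks jsonip.com for its public IP via Ajax, posts it back, and the server geocodes it with geokit's IP geocoder.
# Hypothetical endpoint; assumes the geokit gem and a route to locations#create,
# with the client posting back the :ip it got from http://jsonip.com/.
class LocationsController < ApplicationController
  def create
    loc = Geokit::Geocoders::IpGeocoder.geocode(params[:ip])
    if loc.success
      render :json => { :city => loc.city, :state => loc.state, :country => loc.country_code }
    else
      render :json => { :error => 'lookup failed' }, :status => 422
    end
  end
end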

I don't know how Heroku works, but you might be behind a load balancer. You'll need to set TRUSTED_PROXIES to get request.remote_ip to return the HTTP_X_FORWARDED_FOR address.
You can check to see if this is your problem by adding an action to one of your controllers like this:
def remote_ip
  render :text => "REMOTE_ADDR: %s<br/>remote_ip: %s<br/>HTTP_X_FORWARDED_FOR: %s" %
    [ request.env['REMOTE_ADDR'],
      request.remote_ip,
      request.env['HTTP_X_FORWARDED_FOR'] ]
end
If you've got an HTTP_X_FORWARDED_FOR, then you need to tell Rails about trusted proxies. Once you do that, your request.remote_ip and your HTTP_X_FORWARDED_FOR ips will be the same.
In your production.rb, add these lines, where the allowed_ips regex includes your load balancer IPs. Replace a.b.c. with the load balancer IP prefix you get from Heroku.
# Setup Trusted Proxies
allowed_ips = /^a\.b\.c\.|^127\.0\.0\.1$|^(10|172\.(1[6-9]|2[0-9]|30|31)|192\.168)\./i
ActionController::Request.const_set("TRUSTED_PROXIES", allowed_ips)

Interesting. Being a Rails noob, the only thing I can ask is: where is it getting the IP from? Is there a way to make sure it is getting it from the user? (You may already be doing this and I just don't know.)
I also found a plugin called GeoKit that seems to be well made. Link: http://geokit.rubyforge.org/ -- maybe it will work better?

Related

(JSON::ParserError) "{N}: unexpected token at 'alihack<%eval request(\"alihack.com\")%>

I have a website on Ruby on Rails 3.2.11 and Ruby 1.9.3.
What can cause the following error?
(JSON::ParserError) "{N}: unexpected token at 'alihack<%eval request(\"alihack.com\")%>
I have several errors like this in the logs. All of them try to eval request(\"alihack.com\").
Part of the log file:
"REMOTE_ADDR" => "10.123.66.198",
"REQUEST_METHOD" => "PUT",
"REQUEST_PATH" => "/ali.txt",
"PATH_INFO" => "/ali.txt",
"REQUEST_URI" => "/ali.txt",
"SERVER_PROTOCOL" => "HTTP/1.1",
"HTTP_VERSION" => "HTTP/1.1",
"HTTP_X_REQUEST_START" => "1407690958116",
"HTTP_X_REQUEST_ID" => "47392d63-f113-48ba-bdd4-74492ebe64f6",
"HTTP_X_FORWARDED_PROTO" => "https",
"HTTP_X_FORWARDED_PORT" => "443",
"HTTP_X_FORWARDED_FOR" => "23.27.103.106, 199.27.133.183".
199.27.133.183 is a CloudFlare IP.
"REMOTE_ADDR" => "10.93.15.235" and "10.123.66.198" and the others are, I think, fake proxy IPs.
Here's a link to a guy who has the same issue with his web site, coming from the same IP address (23.27.103.106).
To sum up, the common IP across all the errors is 23.27.103.106, and they try to run a script using Ruby's eval.
So my questions are:
What type of vulnerability are they trying to find?
What should I do? Block the IP?
Thank you in advance.
Why it happens
It seems like an attempt to at least test for, or exploit, a remote code execution vulnerability: potentially a generic one (targeting a platform other than Rails), or one that existed in earlier versions.
The actual error, however, stems from the fact that the request is an HTTP PUT with application/json headers, but the body isn't valid JSON.
To reproduce this on your dev environment:
curl -D - -X PUT --data "not json" -H "Content-Type: application/json" http://localhost:3000
More details
Rails action_dispatch tries to parse any json requests by passing the body to be decoded
# lib/action_dispatch/middleware/params_parser.rb
def parse_formatted_parameters(env)
  ...
  strategy = @parsers[mime_type]
  ...
  case strategy
  when Proc
    ...
  when :json
    data = ActiveSupport::JSON.decode(request.body)
    ...
In this case the body isn't valid JSON, so the error is raised and the server reports a 500.
Possible solutions
I'm not entirely sure what's the best strategy to deal with this. There are several possibilities:
1. Block the IP address using iptables.
2. Filter (PUT or all) requests to /ali.txt within your nginx or apache configs.
3. Use a tool like the rack-attack gem and apply the filter there (see this rack-attack issue).
4. Use the request_exception_handler gem to catch the error and handle it from within Rails (see this SO answer and this github issue).
5. Block PUT requests within Rails' routes.rb to all URLs but those that are explicitly allowed. It looks like in this case the error is raised even before it reaches Rails' routes, so this might not be possible.
6. Use the rack-robustness middleware and catch the JSON parse error with something like this configuration in config/application.rb.
7. Write your own middleware, something along the lines of the stuff in this post.
I'm currently leaning towards options #3, #4 or #6, all of which might also come in handy for other types of bots/scanners or other invalid requests that might pop up in the future...
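As an illustration of option #3 (not part of the original answer), a rack-attack rule could look roughly like this; depending on your rack-attack version the method may be called blacklist rather than blocklist, and older versions also need the middleware added manually with config.middleware.use Rack::Attack.
# config/initializers/rack_attack.rb -- hypothetical sketch
class Rack::Attack
  # Drop the scanner's requests before they ever reach the params parser.
  blocklist('block alihack scanner') do |req|
    req.path == '/ali.txt' || req.ip == '23.27.103.106'
  end
end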
Happy to hear what people think about the various alternative solutions
I saw some weird log entries on my own site [which doesn't use Ruby] and Google took me to this thread. The IP on my entries was different. [120.37.236.161]
After poking around a bit more, here is my mostly speculation/educated guess:
First, in my own logs I saw a reference to http://api.alihack.com/info.txt - checked this link out; looked like an attempt at a PHP injection.
There's also a "register.php" page there - submitting takes you to an "invite.php" page.
Further examination of this domain took me to http://www.alihack.com/2014/07/10/168.aspx (page is in Chinese but Google Translate helped me out here)
I expect this "Black Spider" tool has been modified by script kiddies and is being used as a carpet bomber to attempt to find any sites which are "vulnerable."
It might be prudent to just add an automatic denial of any attempt including the "alihack" substring to your configuration.
I had a similar issue show up in my Rollbar logs: a PUT request to /ali.txt.
Best just to block that IP; I only saw one request on my end with this error. The request I received came from France -> http://whois.domaintools.com/37.187.74.201
If you use nginx, add this to your nginx conf file:
deny 23.27.103.106/32;
deny 199.27.133.183/32;
For Rails-3 there is a special workaround-gem: https://github.com/infopark/robust_params_parser

GeoIP / IP name & location resolution in unicorn logs?

I'm at a point in my web application where it is useful to see IP addresses resolved to a location (for example with MaxMind) and to a DNS name via reverse lookup. I'm running unicorn on Heroku and other servers. Is there a prebuilt method of doing this, or do I need to write some kind of filter to pass my logs through in order to provide this additional information?
You can use something like http://www.rubygeocoder.com/ to determine the location of the user based on their IP address and then use that within your application. An exact example of what it sounds like you're trying to do is on their homepage:
request.ip # => "81.137.210.82"
request.location.city # => "Erith"
request.location.country # => "United Kingdom"
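Geocoder covers the location half; for the reverse-DNS half of the question, Ruby's standard library can resolve an IP back to a hostname. A minimal sketch (wrapped in a rescue, since many IPs have no PTR record):
require 'resolv'

def hostname_for(ip)
  Resolv.getname(ip)
rescue Resolv::ResolvError
  nil  # no reverse DNS entry for this IP
end

hostname_for('8.8.8.8')  # => something like "google-public-dns-a.google.com", depending on the resolver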

Support for multiple domains/subdomains in Rails

I have a Rails app that has a similar setup to Tumblr, that is, you can have either:
(1) Subdomain hosting (your-username.myapp.com)
(2) Domain hosting (your-username.com)
Both would forward to a personalized website for that user, created with my application.
How can I accomplish this in Rails? I have been able to get (1) working with subdomain-fu, but I'm not sure how to get (2) working. Any pointers (plugins, gems, tutorials, etc.) would be greatly helpful; I can't seem to find any.
Thanks!
The principle for domains is the same as for subdomains: find the domain, map it to an account.
The details will depend on how your hosting handles the DNS.
I am currently using Heroku and its wildcard service.
In this case, the custom domain is mapped with a CNAME to the subdomain hosted by my Heroku app. From there I can work out the associated account and details.
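A minimal sketch of that lookup (the model, column, and domain names are assumptions for illustration, not from the original answer):
class ApplicationController < ActionController::Base
  before_filter :find_account

  private

  # Map either your-username.myapp.com or your-username.com to an account.
  def find_account
    @account = if request.domain == 'myapp.com'
      Account.find_by_subdomain(request.subdomains.first)
    else
      Account.find_by_custom_domain(request.host)
    end
    render :text => 'Site not found', :status => :not_found unless @account
  end
end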
EDIT: I've found a much easier way: http://www.arctickiwi.com/blog/7-host-and-domain-based-routing-in-ruby-on-rails
Not exactly an answer but this is the best I can give. Maybe this'll help you too.
Ideally, this blog post from transfs.com and subdomain-fu should do the trick. I've been trying to implement it, however, and they don't seem to play nicely together.
Basically, if I don't include the initializer, the subdomain route works fine. If I include the initializer, the subdomain route breaks (everything gets caught by map.root). I have a feeling the problem is with the way it builds the condition string in the initializer. If you can figure out how it breaks, then you'll have a working app.
My initializer:
module ActionController
  module Routing
    class RouteSet
      def extract_request_environment(request)
        env = { :method => request.method }
        env[:domain] = request.domain if request.domain
        env[:host] = request.host if request.host
        env
      end
    end

    class Route
      alias_method :old_recognition_conditions, :recognition_conditions
      def recognition_conditions
        result = old_recognition_conditions
        [:host, :domain].each do |key|
          if conditions[key]
            operator = "==="
            if conditions[key].is_a?(Regexp)
              operator = "=~"
            end
            result << "conditions[:#{key.to_s}] #{operator} env[:#{key.to_s}]"
          end
        end
        result
      end
    end # end class Route
  end
end
My routes (just for development). You'll see my local development domain, stiltify.dev. Sorry, I tried to make it look good in here but I couldn't get the code block to look nice. I put it on pastie instead: http://pastie.org/940619.
The comments section on Ryan Bates' screencast was very helpful, and got me to figure out the subdomain => false issue and the other errors people were running into. It still didn't fix the problem, though!

How should I test request-related logic in Rails development?

I have several before_filters defined in my application controller to handle requests I don't like. One representative example is:
before_filter :reject_www

private

def reject_www
  if request.subdomains.include? 'www'
    redirect_to 'http://example.com' + request.path, :status => 301
    false
  end
end
(Returning false skips any following before_filters and simply returns the redirection immediately)
So, two questions:
One, how should I test this functionality? The only testing framework I've used so far is Cucumber + Webrat, which isn't really set up to handle this kind of thing. Is there another framework I should also use to fake requests like this?
Two, is there any way I can try out this functionality myself in my development environment? Since I'm simply browsing the site at localhost:3000, I can't ensure that the above code works in my browser - I'd have to push it to production, hope it works and hope it doesn't mess up anything for anyone in the meantime, which makes me nervous. Is there an alternative?
In a functional test, you can explicitly set the request host. I'm not sure what testing framework you prefer, so here is an example in good ole' Test::Unit.
def test_should_redirect_to_non_www
  @request.host = 'www.mydomain.com'
  get :index
  assert_redirected_to 'http://mydomain.com/'
end
To address #2:
You can add entries to your hosts file so that www.mydomain.com points to your local host, and then test the logic in your development environment (see the example below).
You can also try hosting the app with Passenger so that it runs on Apache.
Hope that helps.
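For reference, the hosts-file entries would look something like this (mydomain.com is just a placeholder for whatever domain your filter checks):
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
127.0.0.1   mydomain.com
127.0.0.1   www.mydomain.com
With those in place you can browse to www.mydomain.com:3000 locally and watch the redirect happen.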

How do I view the HTTP response to an ActiveResource request?

I am trying to debug an ActiveResource call that is not working.
What's the best way to view the HTTP response to the request ActiveResource is making?
Monkey patch the connection to enable Net::HTTP debug mode. See https://gist.github.com/591601 - I wrote it to solve precisely this problem. Adding this gist to your rails app will give you Net::HTTP.enable_debug! and Net::HTTP.disable_debug! that you can use to print debug info.
Net::HTTP debug mode is insecure and shouldn't be used in production, but is extremely informative for debugging.
Add a new file to config/initializers/ called 'debug_connection.rb' with the following content:
class ActiveResource::Connection
  # Creates new Net::HTTP instance for communication with
  # remote service and resources.
  def http
    http = Net::HTTP.new(@site.host, @site.port)
    http.use_ssl = @site.is_a?(URI::HTTPS)
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE if http.use_ssl
    http.read_timeout = @timeout if @timeout
    # Here's the addition that allows you to see the output
    http.set_debug_output $stderr
    return http
  end
end
This will print the whole network traffic to $stderr.
It's easy. Just look at the response that comes back. :)
Two options:
You have the source file on your computer. Edit it. Put a puts response.inspect at the appropriate place. Remember to remove it.
Ruby has open classes. Find the right method and redefine it to do exactly what you want, or use aliases and call chaining to do this. There's probably a method that returns the response -- grab it, print it, and then return it.
Here's a silly example of the latter option.
# Somewhere buried in ActiveResource:
class Network
  def get
    return get_request
  end

  def get_request
    "I'm a request!"
  end
end

# Somewhere in your source files:
class Network
  def print_request
    request = old_get_request
    puts request
    request
  end

  alias :old_get_request :get_request
  alias :get_request :print_request
end
Imagine the first class definition is in the ActiveResource source files. The second class definition is in your application somewhere.
$ irb -r openclasses.rb
>> Network.new.get
I'm a request!
=> "I'm a request!"
You can see that it prints it and then returns it. Neat, huh?
(And although my simple example doesn't use it since it isn't using Rails, check out alias_method_chain to combine your alias calls.)
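For completeness, a rough sketch of what the alias_method_chain version of the same idea might look like (alias_method_chain comes from ActiveSupport and was deprecated in later Rails versions):
# Somewhere in your source files:
class Network
  def get_request_with_logging
    # Call the original method, print the result, then return it unchanged.
    get_request_without_logging.tap { |request| puts request }
  end
  alias_method_chain :get_request, :logging
end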
I like Wireshark because you can start it listening on the web browser client end (usually your development machine) and then do a page request. Then you can find the HTTP packets, right click and "Follow Conversation" to see the HTTP with headers going back and forth.
This only works if you also control the server:
Follow the server log and fish out the URL that was called:
Completed in 0.26889 (3 reqs/sec) | Rendering: 0.00036 (0%) | DB: 0.02424 (9%) | 200 OK [http://localhost/notifications/summary.xml?person_id=25738]
and then open that in Firefox. If the server is truly RESTful (i.e. stateless) you will get the same response as ARes did.
Or, my method of getting into things when I don't know the exact internals is literally just to throw in a "debugger" statement, start up the server using "script/server --debugger", and then step through the code until I'm at the place I want, then start some inspecting right there in IRB. That might help. (Hey Luke, btw.)
Maybe the best way is to use a traffic sniffer.
(Which would totally work...except in my case the traffic I want to see is encrypted. D'oh!)
I'd use TCPFlow here to watch the traffic going over the wire, rather than patching my app to output it.
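For what it's worth, a typical tcpflow invocation looks something like this (the interface and port are assumptions about your setup):
# Print both sides of the HTTP conversations on the loopback interface to the console.
tcpflow -c -i lo port 3000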
The Firefox plugin Live HTTP Headers (http://livehttpheaders.mozdev.org/) is great for this. Or you can use a website tool like http://www.httpviewer.net/.
