I have a Rails application that makes calls to another server via Net::HTTP to retrieve documents.
I have set up Nginx with secure_link.
The nginx config has:
secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri$remote_addr mySecretCode";
On the client side (which is in fact my Rails server) I have to create the secure URL, something like:
time = (Time.now + 5.minutes).to_i
hmac = Digest::MD5.base64digest("#{time}/#{file_path}#{IP_ADDRESS} mySecretCode").tr("+/", "-_").delete("=")
return "#{DOCUMENT_BASE_URL}/#{file_path}?md5=#{hmac}&expires=#{time}"
What I want to know is the best way to get the value of IP_ADDRESS above.
There are multiple answers on SO about how to get the IP address, but a lot of them do not seem as reliable as actually making a request to a web service that returns the IP address of the request, since that is what the nginx secure link will see (we don't want some sort of localhost address).
I put the following method on my staging server:
def get_client_ip
  data = {}
  begin
    data[:ip_address] = request.ip
    data[:error] = nil
  rescue StandardError => ex  # rescuing Exception is too broad
    data[:error] = ex.message
  end
  render json: data
end
I then called the method from the requesting server:
response = Net::HTTP.get_response(URI("http://myserver.com/web_service/get_client_ip"))
if response.is_a?(Net::HTTPOK)
  response_hash = JSON.parse(response.body)
  # JSON.parse returns string keys, so check "error", not :error
  ip = response_hash["ip_address"] unless response_hash["error"]
else
  # deal with error
end
After getting the IP address successfully, I just cached it rather than calling the web service on every request.
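A minimal sketch of that memoization (the method name and URL are illustrative, not from the original code):

require 'net/http'
require 'json'

# Hit the lookup web service once, then keep the result for the process lifetime.
def client_ip
  @client_ip ||= begin
    response = Net::HTTP.get_response(URI("http://myserver.com/web_service/get_client_ip"))
    JSON.parse(response.body)["ip_address"] if response.is_a?(Net::HTTPOK)
  end
end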
I have a data scraper in Ruby that retrieves article data.
Another dev on my team needs my scraper to spin up a web server he can make a request to, so that he can import the data into a Node application he's built.
Being a junior, I do not understand the following:
a) Is there a proper convention in Rails that tells me where to place my scraper.rb file?
b) Once that file is properly placed, how would I get the server to accept connections with the scraped data?
c) What (functionally) is the relationship between the ports, sockets, and routing?
I understand this may be a rookie question, but I honestly don't know.
Can someone please break this down?
I have already:
i) Set up a server.rb file listening on localhost:2000, but I'm not sure how to create a proper route or connection that would let someone use Postman to hit a valid route and get my data.
require 'socket'
require 'mechanize'
require 'awesome_print'

port = ENV.fetch("PORT", 2000).to_i
server = TCPServer.new(port)

puts "Listening on port #{port}..."
puts "Current Time : #{Time.now}"

loop do
  client = server.accept
  client.puts "= Running Web Server ="

  general_sites = [
    "https://www.lovebscott.com/",
    "https://bleacherreport.com/",
    "https://balleralert.com/",
    "https://peopleofcolorintech.com/",
    "https://afrotech.com/",
    "https://bossip.com/",
    "https://www.itsonsitetv.com/",
    "https://theshaderoom.com/",
    "https://shadowandact.com/",
    "https://hollywoodunlocked.com/",
    "https://www.essence.com/",
    "http://karencivil.com/",
    "https://www.revolt.tv/"
  ]

  holder = []
  agent = Mechanize.new

  # Collect every sufficiently long link from each site.
  general_sites.each do |site|
    page = agent.get(site)
    page.search('a').each do |anchor|
      href = anchor.attr('href').to_s
      holder.push(href) if href.length > 50
    end
    pp "#{holder.length} [posts total] ==> Now Scraping --> #{site}"
  end

  # Write one link per line instead of the array's Ruby inspect output.
  client.write(holder.join("\n"))
  client.close
end
In Rails you don't spin up a web server manually; that's done for you by rackup, Unicorn, Puma, or any other compatible application server.
Rails itself never "talks" to HTTP clients directly. It is just a specific application that exposes a Rack-compatible API: basically an object that responds to call(env_hash) and returns [integer, hash, enumerable_of_strings]. The app server reads the data from Unix/TCP sockets and calls your application.
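To make that contract concrete, here is a minimal Rack application (an illustration, not part of the scraper):

# config.ru - serve with `rackup`
class HelloApp
  # The app server calls this once per request; env is a hash describing the request.
  def call(env)
    [200, { 'Content-Type' => 'text/plain' }, ["Hello from Rack\n"]]
  end
end

run HelloApp.new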
If you want to expose your scraper to an external consumer (provided it's fast enough), you can create a controller with a method that accepts some data, runs the scraper, and finally renders back the scraping results in some structured way. Then in the router you connect some URL to your controller method.
# config/routes.rb
post 'scrape/me', to: 'my_controller#scrape'

# app/controllers/my_controller.rb
class MyController < ApplicationController
  def scrape
    site = params[:site]
    results = MyScraper.run(site)
    render json: results
  end
end
and then with a simple POST to yourserver/scrape/me?site=www.example.com you will get your data back.
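For example, exercising the endpoint from Ruby (Postman or curl works just as well; yourserver is a placeholder):

require 'net/http'

uri = URI('http://yourserver/scrape/me?site=www.example.com')
response = Net::HTTP.post(uri, '')  # empty body; the site comes from the query string
puts response.body                  # the JSON rendered by the controller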
I have a Rails form and am trying to capture a client's IP address and ultimately convert that into a ZIP code. I've done the following:
Controller
def create
  ...
  begin
    response = open('https://jsonip.com/').read  # open-uri
    data = JSON.parse(response)
    ip_address = data['ip']
    ip_type = 'jsonip'
  rescue
    ip_type = 'request.remote_ip'
    ip_address = request.remote_ip
  end

  if ip_address
    zip = Geocoder.search(ip_address)
    p "IP Address (#{ip_type}): #{ip_address}, zip: #{zip}"
    # potential_client.zip_code = zip.first.try(:postal) if zip.present?
  end
  ...
end
This code came from here because request.remote_ip kept returning the same IP address. It seemed to work, but once I pushed to Heroku it seems like everyone is still coming from the same IP address.
What else am I missing?
With the call to jsonip.com, it is your server making that request, so it makes sense that the server's IP is returned.
With Heroku and other PaaS providers, the remote IP will be the IP of the router or load balancer. Heroku does provide an "X-Forwarded-For" header that contains the list of IPs the request has passed through.
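A minimal sketch of reading that header in a controller (the helper name is illustrative; treat the header as untrusted input, since clients can spoof it):

# The client address is conventionally the first entry in the chain.
def client_ip
  forwarded = request.headers['X-Forwarded-For']
  forwarded ? forwarded.split(',').first.strip : request.remote_ip
end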
I am working on a project whereby we have sites (developed with Ruby on Rails) hosted on an Ubuntu server using Tomcat. We want these sites to make HTTP calls to a service developed using Nancy. We have this working locally, where the service is hosted on a machine that we can call within our network. We cannot, however, get it working when live. Here is an example call:
def get_call(routePath)
  started_at = Time.now
  enc_url = URI.encode("#{settings.service_endpoint}#{routePath}")
  uri = URI.parse(enc_url)
  http = Net::HTTP.new(uri.host, uri.port)
  req = Net::HTTP::Get.new(uri.request_uri)
  resp = http.request(req)
  logger.bench 'SERVICE - GET', started_at, routePath
  return resp if response_ok?(resp)
end
When working locally the settings are as follows:
settings.service_endpoint = http://10.10.10.27:7820
routePath = /Customers
When we upload it to the server we use the following:
settings.service_endpoint = http://127.0.0.1:24099
routePath = /Customers
We currently get the following error:
SocketError at /register
initialize: name or service not known
with the following line being highlighted:
resp = http.request(req)
Are we completely wrong with the IP being called? Should it be 127.0.0.1, localhost, 10.10.10.27, or something entirely different? The strange thing is that we can do a GET call via telnet on our Ubuntu server (telnet 127.0.0.1 24099), so the server can make the calls, but the site hosted on the server cannot. Do we need to include an HTTP proxy? (I have read some references to that but don't really know if it's needed.)
Apologies if it's obvious, but we have never tried anything like this before, so it's all very perplexing. If any further information is required, just let me know.
We changed the service_endpoint to localhost and it worked. Not sure if this is because it didn't like "http://" or some other reason. Any explanation as to why this is the case would be much appreciated, just so we know. Thanks!
I am currently trying to write an auxiliary module for Metasploit. The module basically tries multiple default credentials to gain access to the router's management page. Authentication is done via the web, i.e. an HTTP POST.
Currently the module works as expected for plain HTTP connections, i.e. unsecured connections; however, every connection attempt via HTTPS (port 443) returns nil. Below is the function used within the Metasploit class to retrieve the login page:
def get_login_page(ip)
  begin
    response = send_request_cgi(
      'uri'    => '/',
      'method' => 'GET'
    )

    # Some models of ZyXEL ZyWALL return a 200 OK response
    # and use javascript to redirect to the rpAuth.html page.
    if response && response.body =~ /changeURL\('rpAuth.html'\)/
      vprint_status "#{ip} - Redirecting to rpAuth.html page..."
      response = send_request_cgi(
        'uri'    => '/rpAuth.html',
        'method' => 'GET'
      )
    end
  rescue ::Rex::ConnectionError
    vprint_error "#{ip} - Failed to connect to Web management console."
  end

  return response
end
When trying to connect via HTTPS, the first send_request_cgi call returns nil. No exceptions are caught or thrown. I have tried with 3 different hosts to make sure the issue was not with a specific endpoint; all 3 attempts failed to return a response. On every attempt I set the RPORT option to 443:
RHOSTS 0.0.0.0 yes The target address range or CIDR identifier
RPORT 443 yes The target port
Note that I have replaced the real IP with 0.0.0.0. Using a web browser, I can actually connect to the router via HTTPS with no issue (other than having to add an exception, since the certificate is untrusted) and am presented with the login page. With Wireshark, I looked at the generated traffic. I can clearly see that nothing is sent back by the router. I see the 3-way handshake completing and the HTTP GET request being made:
GET / HTTP/1.1
Host: 0.0.0.0
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
Content-Type: application/x-www-form-urlencoded
Content-Length: 0
There are 3-4 ACKs after that, and then a FIN/PSH sent by the server.
Based on this page on Metasploit's GitHub, I was under the impression that connections to HTTPS websites were handled by the underlying framework. I have not seen any article/tutorial/source that leads me to believe otherwise. The doc about send_request_cgi does not specify any particular requirement for establishing an HTTPS connection, and other posts did not have the exact same issue I'm having. At this point I suspect the OS, the framework, or me forgetting to enable something. Other modules I have looked at either target only HTTP websites - which I doubt - or do not have any special handling for HTTPS connections.
Any help determining the cause would be greatly appreciated.
Version of Metasploit:
Framework: 4.9.3-2014060501
Console : 4.9.3-2014060501.15168
Version of OS:
SMP Debian 3.14.5-1kali1 (2014-06-07)
As per this post on SecurityStreet, the solution was to set SSL to true in the DefaultOptions in the initialize function:
def initialize
  super(
    ...
    'DefaultOptions' =>
      {
        ...
        'SSL' => true
      }
  )
  ...
end
Connections to routers using HTTPS worked afterwards.
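For a quick test without editing the module, the same option can usually be set per-run from msfconsole (assuming the module mixes in the standard HTTP client, which exposes SSL as a datastore option):

set RPORT 443
set SSL true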
I know how to set the open/read timeout for a request going through the proxy. My problem, however, is that occasionally my proxy goes down, so I am never able to connect to it at all. I want to be able to set a timeout for connecting to the proxy, and then handle the timeout by trying something else. Any idea how I can set the timeout value for connecting to an HTTP proxy? Thanks!
First the code, then a bit of explanation below:
# Get a class that behaves like Net::HTTP but has the proxy settings embedded.
# See the source: http://ruby-doc.org/stdlib-1.9.3/libdoc/net/http/rdoc/Net/HTTP.html#method-c-Proxy
proxy_class = Net::HTTP::Proxy("proxy_host")

# Create a new instance for the host you want to connect to.
# NOTE: no connection is attempted yet.
proxy_instance = proxy_class.new("google.com")

# Make your setting changes, specifically the timeouts.
proxy_instance.open_timeout = 5
proxy_instance.read_timeout = 5

# Now attempt to connect through the proxy with the desired timeout settings.
proxy_instance.start do |http|
  # do something with the http instance
end
The key is realizing that open_timeout and read_timeout are per-instance settings, and that Net::HTTP::Proxy actually returns a decorated Net::HTTP class.
You would run into this same problem with plain Net::HTTP usage: the timeouts must be set before the connection is opened, so you have to construct the object the "long" way rather than using the Net::HTTP.start() class-method shortcut.
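To then "try something else" as the question asks, rescue the failed connect around start (a sketch; on Ruby 2.0+ a connect timeout raises Net::OpenTimeout, on 1.9 it is Timeout::Error):

begin
  proxy_instance.start do |http|
    http.get('/')
  end
rescue Net::OpenTimeout, Errno::ECONNREFUSED
  # The proxy is down or unreachable: fall back to a direct
  # connection, another proxy, or whatever else makes sense.
end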