How do I transfer data using a web server/TCP sockets in Ruby?

I have a data scraper in Ruby that retrieves article data.
Another dev on my team needs my scraper to spin up a web server he can make a request to, so that he can import the data into a Node application he's built.
Being a junior, I do not understand the following:
a) Is there a proper convention in Rails that tells me where to place my scraper.rb file?
b) Once that file is properly placed, how would I get the server to accept connections and serve the scraped data?
c) What (functionally) is the relationship between the ports, sockets, and routing?
I understand this may be a rookie question, but I honestly don't know.
Can someone please break this down?
I have already:
i) Set up a server.rb file that listens on localhost:2000, but I'm not sure how to create a proper route or connection that lets someone use Postman to hit a valid route and get my data.
require 'socket'
require 'mechanize'
require 'awesome_print'

port = ENV.fetch("PORT", 2000).to_i
server = TCPServer.new(port)

puts "Listening on port #{port}..."
puts "Current Time : #{Time.now}"

loop do
  client = server.accept
  client.puts "= Running Web Server ="

  general_sites = [
    "https://www.lovebscott.com/",
    "https://bleacherreport.com/",
    "https://balleralert.com/",
    "https://peopleofcolorintech.com/",
    "https://afrotech.com/",
    "https://bossip.com/",
    "https://www.itsonsitetv.com/",
    "https://theshaderoom.com/",
    "https://shadowandact.com/",
    "https://hollywoodunlocked.com/",
    "https://www.essence.com/",
    "http://karencivil.com/",
    "https://www.revolt.tv/"
  ]

  holder = []
  agent = Mechanize.new

  # Visit each site and collect every link whose href is longer than 50 characters.
  general_sites.each do |site|
    page = agent.get(site)
    links = page.search('a')
    links.each do |link|
      data = link.attr('href').to_s
      holder.push(data) if data.length > 50
    end
    pp "#{holder.length} [ posts total] ==> Now Scraping --> #{site}"
  end

  # Write the collected links back to whoever connected, then close the socket.
  client.write(holder)
  client.close
end

In Rails you don't spin up a web server manually; that is done for you by rackup, Unicorn, Puma, or any other compatible application server.
Rails itself never "talks" to the HTTP clients directly: it is just a specific application that exposes a Rack-compatible API (basically, an object that responds to call(hash) and returns [integer, hash, enumerable_of_strings]); the app server reads the data from Unix/TCP sockets and calls your application.
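To make that Rack contract concrete, here is a minimal sketch (the class name and file are purely illustrative, not part of the question's app):
# config.ru -- a bare Rack application: any object responding to call(env)
# and returning [status, headers, body] can be served by Puma, Unicorn, Thin, etc.
class HelloApp
  def call(env)
    [200, { "Content-Type" => "text/plain" }, ["hello from rack\n"]]
  end
end

run HelloApp.new
# Start it with `rackup`; it listens on http://localhost:9292 by default.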
If you want to expose your scraper to an external consumer (provided it's fast enough), you can create a controller with a method that accepts some data, runs the scraper, and finally renders back the scraping results in some structured way. Then in the router you connect some URL to your controller method.
# config/routes.rb
post 'scrape/me', to: 'my_controller#scrape'

# app/controllers/my_controller.rb
class MyController < ApplicationController
  def scrape
    site = params[:site]
    results = MyScraper.run(site)
    render json: results
  end
end
Then a simple POST to yourserver/scrape/me?site=www.example.com will give you back your data.
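As for where the scraper itself lives: a common convention is app/services (anything under app/ is autoloaded) or lib/. A rough sketch of the MyScraper used above, adapted from the question's Mechanize code (the class name and API are only placeholders to match the controller example):
# app/services/my_scraper.rb
require 'mechanize'

class MyScraper
  # Returns an array of href strings longer than 50 characters found on the site.
  def self.run(site)
    agent = Mechanize.new
    page = agent.get(site)
    links = page.search('a').map { |link| link.attr('href').to_s }
    links.select { |href| href.length > 50 }
  end
end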

Related

json-rpc event-machine stand alone service

What am I doing wrong?
I am trying to run the example code from the json-rpc documentation, together with EventMachine:
require 'json-rpc'
require 'thin'

class AsyncApp
  include JsonRpc

  AsyncResponse = [-1, {}, []].freeze

  def call(env)
    rpc_call(env)
  end

  def rpc_sum(a, b)
    result = Rpc::AsyncResult.new
    EventMachine::next_tick do
      result.reply a + b
      result.succeed
    end
    result
  end
end

EM::run do
  Thin::Server.start('0.0.0.0', 8999) do
    map('/') { run AsyncApp.new }
  end
end
No error appears on the server console.
The result at the transport layer, on the json-rpc client side, is:
500 Internal Server Error
I've tried the same client with the jimson gem implementation; it works fine, but it does not support EventMachine and async calls. (Please show an example if you know how that is possible.)
The problem was the default "welcome" page assigned to the route "/".
I never opened "/" in a browser; I only connected with the RPC client.
Somehow the default "welcome" page rule for "/" is not overridden by the map("/"){...} rule.
The solution is to rewrite the route rule as map("/rpc"){...}.
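Applied to the example above, only the mount point changes (everything else stays as in the question):
EM::run do
  Thin::Server.start('0.0.0.0', 8999) do
    # Mount the RPC handler away from "/" so it cannot collide with the
    # default welcome page route.
    map('/rpc') { run AsyncApp.new }
  end
end
# The client then posts its JSON-RPC calls to http://localhost:8999/rpc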

How to use $remote_addr with rails and nginx secure_link

I have a Rails application that makes calls to another server via Net::HTTP to retrieve documents.
I have set up Nginx with secure_link.
The nginx config has
secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri$remote_addr mySecretCode";
On the client side (which is in fact my Rails server) I have to create the secure URL, something like:
time = (Time.now + 5.minute).to_i
hmac = Digest::MD5.base64digest("#{time}/#{file_path}#{IP_ADDRESS} mySecretCode").tr("+/","-_").gsub("==",'')
return "#{DOCUMENT_BASE_URL}/#{file_path}?md5=#{hmac}&expires=#{time}"
What I want to know is the best way to get the value above for IP_ADDRESS.
There are multiple answers on SO about how to get the IP address, but a lot of them do not seem as reliable as actually making a request to a web service that returns the IP address of the request, since that is what the nginx secure link will see (we don't want some sort of localhost address).
I put the following method on my staging server:
def get_client_ip
  data = Hash.new
  begin
    data[:ip_address] = request.ip
    data[:error] = nil
  rescue Exception => ex
    data[:error] = ex.message
  end
  render :json => data
end
I then called the method from the requesting server:
response = Net::HTTP.get_response(URI("myserver.com/web_service/get_client_ip"))
if response.class==Net::HTTPOK
response_hash=JSON.parse response.body
ip=response_hash["ip_address"] unless response_hash[:error]
else
#deal with error
end
After getting the ip address successfully I just cached it and did not keep on calling the web service method.
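That caching step might look roughly like this (the wrapper class and memoization approach are mine, not part of the original setup):
require 'net/http'
require 'json'

# Asks the staging endpoint for the public IP once, then memoizes the answer
# so subsequent secure-link generations don't hit the web service again.
class ClientIpResolver
  def self.ip_address
    @ip_address ||= fetch_ip
  end

  def self.fetch_ip
    response = Net::HTTP.get_response(URI("http://myserver.com/web_service/get_client_ip"))
    raise "Could not resolve client IP" unless response.is_a?(Net::HTTPOK)
    JSON.parse(response.body).fetch("ip_address")
  end
  private_class_method :fetch_ip
end
# IP_ADDRESS = ClientIpResolver.ip_address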

Having trouble with WebMock, not stubbing correctly

Ruby 1.9.3, RSpec 2.13.0, WebMock 1.17.4, Rails 3
I am writing tests for a company app. The controller in question displays a table of a customer's placed calls, and allows for sort/filter options.
EDIT: The test fails because, with my current setup, the path does not render: the recorder_server is either not running locally or not set up correctly. Please help with this, too.
A Errno::ECONNREFUSED occurred in recordings#index:
Connection refused - connect(2)
/usr/local/lib/ruby/1.9.1/net/http.rb:763:in `initialize'
-------------------------------
Request:
-------------------------------
* URL : http://www.recorder.example.com:8080/recorded_calls
* IP address: 127.0.0.1
* Parameters: {"controller"=>"recordings", "action"=>"index"}
* Rails root: /var/www/rails/<repository>
As a call is placed, its data is added to an XML file created by an external API called Recorder.
The RecordingsController takes the XML file and parses it into a hash.
When you visit the associated path, you see the results of the hash -- a table of placed calls, their attributes, and parameters for sort/filter.
Here is my spec so far.
require 'spec_helper'
include Helpers

feature 'Exercise recordings controller' do
  include_context "shared admin context"

  background do
    canned_xml = File.open("spec/support/assets/canned_response.xml").read
    stub_request(:post, "http://recorder.example.com:8080/recorder/index").
      with(body: {"durations"=>["1"], "durations_greater_less"=>["gt"], "filter_from_day"=>"29", "filter_from_hour"=>"0", "filter_from_minute"=>"0", "filter_from_month"=>"12", "filter_from_year"=>"2014", "filter_prefix"=>true, "filter_to_day"=>"29", "filter_to_hour"=>"23", "filter_to_minute"=>"59", "filter_to_month"=>"12", "filter_to_year"=>"2014"}, # "shared_session_id"=>"19f9a08807cc70c1bf41885956695bde"},
           headers: {'Accept'=>'*/*', 'Content-Type'=>'application/x-www-form-urlencoded', 'User-Agent'=>'Ruby'}).
      to_return(status: 200, body: canned_xml, headers: {})
    uri = URI.parse("http://recorder.example.com:8080/recorder/index")
    visit recorded_calls_path
  end

  scenario 'show index page with 1 xml result' do
    # page.save_and_open_page
    expect(title).to eq("Recorded Calls")
  end
end
And here is the RecordingsController
class RecordingsController < ApplicationController
  # before_filter options

  def index
    test_session_id = request.session_options[:id]
    # Make request to recording app for xml of files
    uri = URI.parse("http://#{Rails.application.config.recorder_server}:#{Rails.application.config.recorder_server_port}/recorder/index")
    http = Net::HTTP.new(uri.host, uri.port)
    xml_request = Net::HTTP::Post.new(uri.request_uri)
    xml_request_data = Hash.new
    # sorting params
    xml_request_data[:shared_session_id] = request.session_options[:id]
    xml_request.set_form_data(xml_request_data)
    response = http.request(xml_request)
    if response.class == Net::HTTPOK
      @recordings_xml = XmlSimple.xml_in(response.body)
      @recordings_sorted = @recordings_xml["Recording"].sort { |a,b| Time.parse("#{a["date"]} #{a["time"]}") <=> Time.parse("#{b["date"]} #{b["time"]}") } unless @recordings_xml["Recording"].nil?
    else
      @recordings_xml = Hash.new
    end
  end

  # other defs
end
Any and all advice is much appreciated. Thank you.
How I configured WebMock
I am answering my own question, with the help of B-Seven and a string of comments. File by file, I will list the changes made in order to properly use WebMock.
Add WebMock to the Gemfile under group :test, :development.
Run bundle install to resolve dependencies.
My current setup included Ruby 1.9.3, Rails 3, RSpec 2.13.0, WebMock 1.17.4.
Set up spec_helper.rb to disable "real HTTP connections" (this was a backtrace error received later on in this puzzling process). This allows, to my understanding, all "real connections" to be treated as localhost connections and to work offline... which is great since, ideally, I do not want the external app's server to have to run at the same time.
require 'webmock/rspec'
WebMock.disable_net_connect!(allow_localhost: true)
In my test.rb environment file, the configuration for recorder_server and its port was commented out... If left commented out, the controller would raise an exception stating uninitialized constants. I used the test server/port (with the company name replaced by example) as the layout for the spec stubbing.
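For reference, the relevant lines in config/environments/test.rb would look something like the following (the application name, host, and port here are placeholders matching the stub URL, not the real settings):
# config/environments/test.rb
YourApp::Application.configure do
  # Custom settings read by RecordingsController when it builds the recorder URI.
  config.recorder_server      = "recorder.example.com"
  config.recorder_server_port = 8080
end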
In recordings_controller_spec.rb, I had already figured out how to make a canned XML response. With the changes above, my spec was able to correctly stub a response from the external, secondary app, and use that response to correctly render the view associated with the controller being tested.
require 'spec_helper'
include Helpers

feature "Exercise recordings_controller" do
  include_context "shared admin context"

  # A background is currently not used, because I have 3 scenario types... No xml
  # results, 1 result, and 2 results. I will later DRY this out with a background,
  # but the heavy lifting is over, for now.

  scenario "show index page with 1 xml result" do
    canned_xml_1 = File.open("spec/support/assets/canned_response_1.xml").read
    stub_request(:post, "http://recorder.example.com:8080/recorder/index").
      with(headers: {'Accept'=>'*/*', 'User-Agent'=>'Ruby'}).
      to_return(status: 200, body: canned_xml_1, headers: {})
    uri = URI.parse("http://recorder.example.com:8080/recorder/index")
    visit recorded_calls_path

    title.should == "Recorded Calls"
    page.should have_content("Search Results")
    page.should have_content("Inbound", "5551230000", "175", "December 24 2014", "12:36:24", "134")
  end
end
Advice/Resources that helped
With B-Seven's suggestion on my original question (see revisions), I was initially stubbing localhost:3000. He said this was incorrect. After further research, I agree, since stubbing with WebMock is typically reserved for outside HTTP connections.
In comments after his answer, B-Seven listed articles to refer to. I will list the ones that helped me the most.
http://robots.thoughtbot.com/how-to-stub-external-services-in-tests
http://railscasts.com/episodes/275-how-i-test
https://github.com/bblimke/webmock
http://www.agileventures.org/articles/testing-with-rspec-stubs-mocks-factories-what-to-choose
It is very important to read the backtrace generated from an error. What took me so long to figure out how to mock was mainly reading them incorrectly. As you can see from my question, I was making a :get stub request. A coworker pointed out that the backtrace suggested using :post. That was the final piece to make my spec pass.
I decided not to build my stub request from the configuration variables, as that would result in long lines of code. This is also why I needed to uncomment those configurations in test.rb.
Why are you stubbing localhost? I think you want to
stub_request(:get, "http://#{Rails.application.config.recorder_server}:#{Rails.application.config.recorder_server_port}/recorder/index").

Can I use a Request / Reply - RPC pattern in Rails 3 with AMQP?

For reasons similar to the ones in this discussion, I'm experimenting with messaging in lieu of REST for a synchronous RPC call from one Rails 3 application to another. Both apps are running on thin.
The "server" application has a config/initializers/amqp.rb file based on the Request / Reply pattern in the rubyamqp.info documentation:
require "amqp"
EventMachine.next_tick do
connection = AMQP.connect ENV['CLOUDAMQP_URL'] || 'amqp://guest:guest#localhost'
channel = AMQP::Channel.new(connection)
requests_queue = channel.queue("amqpgem.examples.services.time", :exclusive => true, :auto_delete => true)
requests_queue.subscribe(:ack => true) do |metadata, payload|
puts "[requests] Got a request #{metadata.message_id}. Sending a reply..."
channel.default_exchange.publish(Time.now.to_s,
:routing_key => metadata.reply_to,
:correlation_id => metadata.message_id,
:mandatory => true)
metadata.ack
end
Signal.trap("INT") { connection.close { EventMachine.stop } }
end
In the 'client' application, I'd like to render the results of a synchronous call to the 'server' in a view. I realize this is a bit outside the comfort zone of an inherently asynchronous library like the amqp gem, but I'm wondering if there's a way to make it work. Here is my client config/initializers/amqp.rb:
require 'amqp'

EventMachine.next_tick do
  AMQP.connection = AMQP.connect 'amqp://guest:guest@localhost'
  Signal.trap("INT") { AMQP.connection.close { EventMachine.stop } }
end
Here is the controller:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
end
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
When I start both applications on different ports and visit the welcome#index view, @message is nil in the view, since the result has not yet returned. The result arrives a few milliseconds after the view is rendered and is displayed on the console:
$ thin start
>> Using rack adapter
>> Thin web server (v1.5.0 codename Knife)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:3000, CTRL+C to stop
[request] Sending a request...
[response] Response for 3877031: "2012-11-27 22:04:28 -0600"
No surprise here: subscribe is clearly not meant for synchronous calls. What is surprising is that I can't find a synchronous alternative in the AMQP gem source code or in any documentation online. Is there an alternative to subscribe that will give me the RPC behavior I want? Given that there are other parts of the system in which I'd want to use legitimately asynchronous calls, the bunny gem didn't seem like the right tool for the job. Should I give it another look?
edit in response to Sam Stokes
Thanks to Sam for the pointer to throw :async / async.callback. I hadn't seen this technique before and this is exactly the kind of thing I was trying to learn with this experiment in the first place. send_response.finish is gone in Rails 3, but I was able to get his example to work for at least one request with a minor change:
render :text => @message
rendered_response = response.prepare!
Subsequent requests fail with !! Unexpected error while processing request: deadlock; recursive locking. This may have been what Sam was getting at with the comment about getting ActionController to allow concurrent requests, but the cited gist only works for Rails 2. Adding config.allow_concurrency = true in development.rb gets rid of this error in Rails 3, but leads to This queue already has default consumer. from AMQP.
I think this yak is sufficiently shaven. ;-)
While interesting, this is clearly overkill for simple RPC. Something like this Sinatra streaming example seems a more appropriate use case for client interaction with replies. Tenderlove also has a blog post about an upcoming way to stream events in Rails 4 that could work with AMQP.
As Sam points out in his discussion of the HTTP alternative, REST / HTTP makes perfect sense for the RPC portion of my system that involves two Rails apps. There are other parts of the system involving more classic asynchronous event publishing to Clojure apps. For these, the Rails app need only publish events in fire-and-forget fashion, so AMQP will work fine there using my original code without the reply queue.
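For the fire-and-forget case, none of the reply plumbing is needed; a trimmed-down sketch using the same amqp gem (the exchange name and payload are illustrative):
# Somewhere after the connection from config/initializers/amqp.rb is established:
channel  = AMQP::Channel.new(AMQP.connection)
exchange = channel.fanout("events.articles")

# Publish and move on: no reply queue, no correlation_id, no waiting.
exchange.publish({ event: "article.scraped", id: 42 }.to_json)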
You can get the behaviour you want - have the client make a simple HTTP request, to which your web app responds asynchronously - but you need more tricks. You need to use Thin's support for asynchronous responses:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
# Trigger Rails response rendering now we have the message.
# Tested in Rails 2.3; may or may not work in Rails 3.x.
rendered_response = send_response.finish
# Pass the response to Thin and make it complete the request.
# env['async.callback'] expects a Rack-style response triple:
# [status, headers, body]
request.env['async.callback'].call(rendered_response)
end
# This unwinds the call stack, skipping the normal Rails response
# rendering, all the way back up to Thin, which catches it and
# interprets as "I'll give you the response later by calling
# env['async.callback']".
throw :async
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
As far as the client is concerned, the result is indistinguishable from your web app blocking on a synchronous call before returning the response; but now your web app can process many such requests concurrently.
CAUTION!
Async Rails is an advanced technique; you need to know what you're doing. Some parts of Rails do not take kindly to having their call stack abruptly dismantled. The throw will bypass any Rack middlewares that don't know to catch and rethrow it (here is a rather old partial solution). ActiveSupport's development-mode class reloading will reload your app's classes after the throw, without waiting for the response, which can cause very confusing breakage if your callback refers to a class that has since been reloaded. You'll also need to ask ActionController nicely to allow concurrent requests.
Request/response
You're also going to need to match up requests and responses. As it stands, if Request 1 arrives, and then Request 2 arrives before Request 1 gets a response, then it's undefined which request would receive Response 1 (messages on a queue are distributed round-robin between the consumers subscribed to the queue).
You could do this by inspecting the correlation_id (which you'll have to explicitly set, by the way - RabbitMQ won't do it for you!) and re-enqueuing the message if it's not the response you were waiting for. My approach was to create a persistent Publisher object which would keep track of open requests, listen for all responses, and lookup the appropriate callback to invoke based on the correlation_id.
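That Publisher idea, roughly sketched (this is an illustration of the pattern, not the code from the answer; it reuses the queue and id conventions from the question):
# Keeps one reply queue subscribed, remembers pending requests by correlation_id,
# and dispatches each response to whichever callback registered it.
class Publisher
  def initialize(channel, service_routing_key)
    @exchange = channel.default_exchange
    @service  = service_routing_key
    @pending  = {}   # correlation_id => callback
    @replies  = channel.queue("reply", :exclusive => true, :auto_delete => true)
    @replies.subscribe do |metadata, payload|
      callback = @pending.delete(metadata.correlation_id)
      callback.call(payload) if callback   # ignore replies we are not waiting for
    end
  end

  def request(payload, &callback)
    correlation_id = Kernel.rand(10101010).to_s
    @pending[correlation_id] = callback
    @exchange.publish(payload,
                      :routing_key    => @service,
                      :message_id     => correlation_id,
                      :correlation_id => correlation_id,
                      :reply_to       => @replies.name)
  end
end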
Alternative: just use HTTP
You're really solving two different (and tricky!) problems here: persuading Rails/thin to process requests asynchronously, and implementing request-response semantics on top of AMQP's publish-subscribe model. Given you said this is for calling between two Rails apps, why not just use HTTP, which already has the request-response semantics you need? That way you only have to solve the first problem. You can still get concurrent request processing if you use a non-blocking HTTP client library, such as em-http-request.
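For completeness, a minimal em-http-request call looks something like this (the URL is a placeholder):
require 'eventmachine'
require 'em-http-request'

EventMachine.run do
  # Fire a non-blocking GET; the callbacks run on the reactor when it completes.
  http = EventMachine::HttpRequest.new('http://other-rails-app.example.com/time').get

  http.callback do
    puts "Got #{http.response_header.status}: #{http.response}"
    EventMachine.stop
  end

  http.errback do
    puts "Request failed"
    EventMachine.stop
  end
end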

What's the best way to use SOAP with Ruby?

A client of mine has asked me to integrate a 3rd party API into their Rails app. The only problem is that the API uses SOAP. Ruby has basically dropped SOAP in favor of REST. They provide a Java adapter that apparently works with the Java-Ruby bridge, but we'd like to keep it all in Ruby, if possible. I looked into soap4r, but it seems to have a slightly bad reputation.
So what's the best way to integrate SOAP calls into a Rails app?
I built Savon to make interacting with SOAP webservices via Ruby as easy as possible.
I'd recommend you check it out.
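A minimal Savon call looks roughly like this (a sketch against Savon 2.x; the WSDL URL, operation name, and message are placeholders):
require 'savon'

# Build a client from the service's WSDL, then call one of its operations.
client = Savon.client(wsdl: "http://example.com/service?wsdl")

response = client.call(:get_stock_quote, message: { symbol: "AAPL" })
puts response.body   # the parsed SOAP response as a Ruby hash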
We used the built-in soap/wsdlDriver class, which is actually SOAP4R.
It's dog slow, but really simple. The SOAP4R that you get from gems/etc is just an updated version of the same thing.
Example code:
require 'soap/wsdlDriver'
client = SOAP::WSDLDriverFactory.new( 'http://example.com/service.wsdl' ).create_rpc_driver
result = client.doStuff();
That's about it
We switched from Handsoap to Savon.
Here is a series of blog posts comparing the two client libraries.
I also recommend Savon. I spent too many hours trying to deal with Soap4R, without results. Big lack of functionality, no doc.
Savon is the answer for me.
Try SOAP4R
SOAP4R
Getting Started with SOAP4R
And I just heard about this on the Rails Envy Podcast (ep 31):
WS-Deathstar SOAP walkthrough
Just got my stuff working within 3 hours using Savon.
The Getting Started documentation on Savon's homepage was really easy to follow - and actually matched what I was seeing (not always the case)
Kent Sibilev from Datanoise had also ported the Rails ActionWebService library to Rails 2.1 (and above).
This allows you to expose your own Ruby-based SOAP services.
He even has a scaffold/test mode which allows you to test your services using a browser.
I have used an HTTP call like the one below to call a SOAP method:
require 'net/http'

class MyHelper
  def initialize(server, port, username, password)
    @server = server
    @port = port
    @username = username
    @password = password
    puts "Initialised My Helper using #{@server}:#{@port} username=#{@username}"
  end

  def post_job(job_name)
    puts "Posting job #{job_name} to update order service"
    job_xml = "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:ns=\"http://test.com/Test/CreateUpdateOrders/1.0\">
      <soapenv:Header/>
      <soapenv:Body>
        <ns:CreateTestUpdateOrdersReq>
          <ContractGroup>ITE2</ContractGroup>
          <ProductID>topo</ProductID>
          <PublicationReference>#{job_name}</PublicationReference>
        </ns:CreateTestUpdateOrdersReq>
      </soapenv:Body>
    </soapenv:Envelope>"

    @http = Net::HTTP.new(@server, @port)
    puts "server: " + @server + " port: " + @port
    request = Net::HTTP::Post.new('/XISOAPAdapter/MessageServlet?/Test/CreateUpdateOrders/1.0',
                                  'Content-Type' => 'text/xml')
    request.basic_auth(@username, @password)
    request.body = job_xml
    response = @http.request(request)
    puts "request was made to server " + @server
    validate_response(response, "post_job_to_pega_updateorder job", '200')
  end

  private

  def validate_response(response, operation, required_code)
    if response.code != required_code
      raise "#{operation} operation failed. Response was [#{response.inspect} #{response.to_hash.inspect} #{response.body}]"
    end
  end
end
# Example usage:
#   test = MyHelper.new("mysvr.test.test.com", "8102", "myusername", "mypassword")
#   test.post_job("test_201601281419")
Hope it helps. Cheers.
I used SOAP in Ruby when I had to make a fake SOAP server for my acceptance tests. I don't know if this was the best way to approach the problem, but it worked for me.
I used the Sinatra gem (I wrote about creating mock endpoints with Sinatra here) for the server, and Nokogiri for the XML handling (SOAP works with XML).
To begin with, I created two files (e.g. config.rb and responses.rb) in which I put the predefined answers that the SOAP server will return.
In config.rb I put the WSDL file, as a string.
@@wsdl = '<wsdl:definitions name="StockQuote"
  targetNamespace="http://example.com/stockquote.wsdl"
  xmlns:tns="http://example.com/stockquote.wsdl"
  xmlns:xsd1="http://example.com/stockquote.xsd"
  xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
  xmlns="http://schemas.xmlsoap.org/wsdl/">
  .......
</wsdl:definitions>'
In responses.rb I put sample responses that the SOAP server will return for different scenarios.
@@login_failure = '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Body>
    <LoginResponse xmlns="http://tempuri.org/">
      <LoginResult xmlns:a="http://schemas.datacontract.org/2004/07/WEBMethodsObjects" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
        <a:Error>Invalid username and password</a:Error>
        <a:ObjectInformation i:nil="true"/>
        <a:Response>false</a:Response>
      </LoginResult>
    </LoginResponse>
  </s:Body>
</s:Envelope>'
Now let me show you how I actually created the server.
require 'sinatra'
require 'json'
require 'nokogiri'
require_relative 'config/config.rb'
require_relative 'config/responses.rb'

after do
  # cors
  headers({
    "Access-Control-Allow-Origin"  => "*",
    "Access-Control-Allow-Methods" => "POST",
    "Access-Control-Allow-Headers" => "content-type",
  })
  # json
  content_type :json
end

# When accessing the /HAWebMethods route, the server returns either the WSDL
# file or an XSD (I don't know exactly how to explain this, but it is a WSDL dependency).
get "/HAWebMethods/" do
  case request.query_string
  when 'xsd=xsd0'
    status 200
    body @@xsd0
  when 'wsdl'
    status 200
    body @@wsdl
  end
end

post '/HAWebMethods/soap' do
  request_payload = request.body.read
  request_payload = Nokogiri::XML request_payload
  request_payload.remove_namespaces!

  if request_payload.css('Body').text != ''
    if request_payload.css('Login').text != ''
      # EXPECTED_EMAIL / EXPECTED_PASSWORD stand in for the real credentials here.
      if request_payload.css('email').text == EXPECTED_EMAIL && request_payload.css('password').text == EXPECTED_PASSWORD
        status 200
        body @@login_success
      else
        status 200
        body @@login_failure
      end
    end
  end
end
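With that file running (Sinatra listens on port 4567 by default), the app under test just needs its SOAP endpoint pointed at the fake server. A quick manual check might look like this (a sketch; the envelope body is elided):
require 'net/http'

uri = URI("http://localhost:4567/HAWebMethods/soap")
request = Net::HTTP::Post.new(uri.request_uri, 'Content-Type' => 'text/xml')
request.body = "<s:Envelope>...a Login request envelope...</s:Envelope>"
response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
puts response.body   # one of the canned responses, e.g. @@login_failure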
I hope you'll find this helpful!
I was having the same issue, switched to Savon and then just tested it on an open WSDL (I used http://www.webservicex.net/geoipservice.asmx?WSDL) and so far so good!
https://github.com/savonrb/savon
