Why is it returning an empty array while it has content? - ruby-on-rails

I am trying to get auto-corrected spelling from Google's home page using Nokogiri.
For example, if I am typing "hw did" and the correct spelling is "how did", I have to get the correct spelling.
I tried with the xpath and css methods, but in both cases, I get the same empty array.
I got the XPath and CSS paths using Firebug.
Here is my Nokogiri code:
@requ = params[:search]
@requ_url = @requ.gsub(" ", "+") # encode the URL (if the user inputs a space it should be converted into +)
@doc = Nokogiri::HTML(open("https://www.google.co.in/search?q=#{@requ_url}"))
binding.pry
binding.pry
Here are my XPath and CSS selectors:
Using XPath:
pry(#<SearchController>)> @doc.xpath("/html/body/div[5]/div[2]/div[6]/div/div[4]/div/div/div[2]/div/p/a").inspect
=> "[]"
Using CSS:
pry(#<SearchController>)> @doc.css('html body#gsr.srp div#main div#cnt.mdm div.mw div#rcnt div.col div#center_col div#taw div div.med p.ssp a.spell').inner_text()
=> ""

First, use the right tools to manipulate URLs; they'll save you headaches.
Here's how I'd find the right spelling:
require 'nokogiri'
require 'uri'
require 'open-uri'
requ = 'hw did'
uri = URI.parse('https://www.google.co.in/search')
uri.query = URI.encode_www_form({'q' => requ})
doc = Nokogiri::HTML(open(uri.to_s))
doc.at('a.spell').text # => "how did"
It works fine with "how did", but check it with "bnglore" or any one-word string and it gives an error, the same one I was facing with my previous code: undefined method `text' for nil.
It's not that hard to figure out. They're changing the HTML so you have to change your selector. "Inspect" the suggested word "bangalore" and see where it exists in relation to the previous path. Once you know that, it's easy to find a way to access the word:
doc.at('span.spell').next_element.text # => "bangalore"
Don't trust Google to do things the easy way, or even the best way, or be consistent. Just because they return HTML one way for words with spaces, doesn't mean they're going to do it the same way for a single word. I would do it consistently, but they might be trying to discourage you from mining their pages so don't be surprised if you see variations.
Now, you need to figure out how to write code that knows when to use one selector/method or the other. That's for you to do.

Related

Detect and replace URLs in text

I want to detect and replace URLs in texts input by users. An example worth thousand words:
Here's a link to stackoverflow.com, so is http://stackoverflow.com.
=>
Here's a link to [stackoverflow.com](http://stackoverflow.com), so is [http://stackoverflow.com](http://stackoverflow.com).
All I found from Google is how to detect URLs and change them to <a> tags. Is there a way that I can detect URLs, and replace them with custom code blocks to generate something as the example above? Thanks a lot!
The tricky part of this is finding a regexp which will match all URLs. E.g. this one might work, from http://ryanangilly.com/post/8654404046/grubers-improved-regex-for-matching-urls-written
regexp = /\b((?:https?:\/\/|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}\/?)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s\`!()\[\]{};:\'\".,<>?«»“”‘’]))/i
Once you've got your regexp, then use gsub with a block, eg
text = "Here's a link to stackoverflow.com, so is http://stackoverflow.com."
=> "Here's a link to stackoverflow.com, so is http://stackoverflow.com."
text.gsub(regexp){|url| "FOO#{url}BAR"}
=> "Here's a link to stackoverflow.com, so is FOOhttp://stackoverflow.comBAR."
Note that this doesn't do anything with the first one in the text (the one without the protocol), because it's not a URL. If you were expecting it to pick up the first one too, that's going to be much harder.
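To produce the markdown-style output the question asked for, the same gsub-with-a-block pattern works. Here with a deliberately simplified regexp (scheme-prefixed URLs only), not Gruber's full pattern:

```ruby
# Simplified: only matches URLs carrying an explicit http(s) scheme,
# and refuses to end the match on trailing punctuation.
regexp = %r{\bhttps?://[^\s<>()\[\]]+[^\s<>()\[\].,;:!?'"]}

text = "Here's a link to stackoverflow.com, so is http://stackoverflow.com."
text.gsub(regexp) { |url| "[#{url}](#{url})" }
# => "Here's a link to stackoverflow.com, so is [http://stackoverflow.com](http://stackoverflow.com)."
```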

Nokogiri- Parsing HTML <a href> and displaying only part of the URL

So basically I am scraping a website, and I want to display only part of the address. For instance, if it is www.yadaya.com/nyc/sales/manhattan and I want to only put "sales" in a hash or an array.
{
:listing_class => listings.css('a').text
}
That will give me the whole URL. Would I want to gsub to get the partial output?
Thanks!
When you are dealing with URLs, you should start with URI, then, to mess with the path, switch to using File.dirname and/or File.basename:
require 'uri'
uri = URI.parse('http://www.yadaya.com/nyc/sales/manhattan')
dir = File.dirname(uri.path).split('/').last
which sets dir to "sales".
No regex is needed, except what parse and split do internally.
Using that in your code's context:
File.dirname(URI.parse(listings.css('a').text).path).split('/').last
but, personally, I'd break that into two lines for clarity and readability, which translate into easier maintenance.
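Spelled out over two lines against the example URL:

```ruby
require 'uri'

url  = 'http://www.yadaya.com/nyc/sales/manhattan'
path = URI.parse(url).path           # => "/nyc/sales/manhattan"
File.dirname(path).split('/').last   # => "sales"
```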
A warning though:
listings.css('a')
returns a NodeSet, which is akin to an Array. If the DOM you are searching has multiple <a> tags, you will get more than one Node being passed to text, which will then be concatenated into the text you are treating as a URL. That's a bug in waiting:
require 'nokogiri'
html = '<div><a>foo</a><a>bar</a></div>'
doc = Nokogiri::HTML(html)
doc.at('div').css('a').text
Which results in:
"foobar"
Instead, your code needs to be:
listings.at('a')
or
listings.at_css('a')
so only one node is returned. In the context of my sample code:
doc.at('div').at('a').text
# => "foo"
Even if the code that sets up listings only results in a single <a> node being visible, use at or at_css for correctness.
Since you have the full URL using listings.css('a').text, you could parse out a section of the path using a combination of the URI class and a regular expression, using something like the following:
require 'uri'
uri = URI.parse(listings.css('a').text)
=> #<URI::HTTP:0x007f91a39255b8 URL:http://www.yadaya.com/nyc/sales/manhattan>
match = %r{^/nyc/([^/]+)/}.match(uri.path)
=> #<MatchData "/nyc/sales/" 1:"sales">
match[1]
=> "sales"
You may need to tweak the regular expression to meet your needs, but that's the gist of it.

using 'puts' to get information from external domain

I've just started with Ruby on Rails the other day and I was wondering: is it possible to use the puts function to get the content of a div from a page on an external domain?
something like puts "http://www.example.com #about"
Would something like this work? Or would you have to get the entire page and then puts the section you wanted?
Additionally, if the content of the "example.com" #about div is constantly changing, would puts constantly update its output, or would it only run the script each time the page is refreshed?
The open-uri library (for fetching the page) and the Nokogiri gem (for parsing and retrieving specific content) can assist with this.
require 'open-uri'
require 'nokogiri'
doc = Nokogiri::HTML(open('http://www.example.com/'))
puts doc.at('#about').text
puts will not work that way. Ruby makes parsing HTML fairly easy, though. Take a look at the Nokogiri library; you can use XPath queries to get to the div you want to print out. I believe you would need to reopen the page if the div changes, but I'm not positive about that - you can easily test it (or someone here can confirm or reject that statement).

Extracting email addresses in an html block in ruby/rails

I am creating a parser that wards off against spamming and harvesting of emails from a block of text that comes from tinyMCE (so it may or may not have html tags in it)
I've tried regexes and so far this has been successful:
/\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b/i
Problem is, I need to ignore all email addresses inside mailto hrefs. For example:
test@mail.com
should only return the second email address.
To give some background on what I'm doing: I'm reversing the email addresses in a block, so the above example would look like this:
moc.liam@tset
The problem with my current regex is that it also replaces the one in the href. Is there a way for me to do this with a single regex, or do I have to check for one and then the other? Can I do this just with gsub, or do I have to use some Nokogiri/Hpricot magic to parse the mailtos? Thanks in advance!
Here were my references btw:
so.com/questions/504860/extract-email-addresses-from-a-block-of-text
so.com/questions/1376149/regexp-for-extracting-a-mailto-address
I'm also testing using this:
http://rubular.com/
edit
here's my current helper code:
def email_obfuscator(text)
  text.gsub(/\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b/i) { |m|
    "<span class='anti-spam'>#{m.reverse}</span>"
  }
end
which results in this:
<a target="_self" href="mailto:<span class='anti-spam'>moc.liamg@tset</span>"><span class="anti-spam">moc.liamg@tset</span></a>
Another option if lookbehind doesn't work:
/\b(mailto:)?([A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4})\b/i
This would match all emails, then you can manually check if first captured group is "mailto:" then skip this match.
Would this work?
/\b(?<!mailto:)[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b/i
The (?<!mailto:) is a negative lookbehind, which will ignore any matches starting with mailto:
I don't have Ruby set up at work, unfortunately, but it worked with PHP when I tested it...
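For what it's worth, Ruby (1.9+) does support negative lookbehind, and a quick check suggests the pattern behaves as described; the sample HTML below is made up for illustration:

```ruby
email_re = /\b(?<!mailto:)[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b/i

html = %q{<a href="mailto:test@mail.com">test@mail.com</a>}
html.scan(email_re) # => ["test@mail.com"]  (only the visible one; the href copy is skipped)
```

The copy inside the href is preceded by `mailto:`, so the lookbehind rejects it, while the visible anchor text still matches.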
Why not just store all the matched emails in an array and remove any duplicates? You can do this easily with the ruby standard library and (I imagine) it's probably quicker/more maintainable than adding more complexity to your regex.
emails = ["email_one@example.com", "email_one@example.com", "email_two@example.com"]
emails.uniq # => ["email_one@example.com", "email_two@example.com"]

How to use ruby to get string between HTML <cite> tags?

Greetings everyone:
I would love to get some information from a huge collection of Google Search Result pages.
The only thing I need is the URLs inside a bunch of <cite></cite> HTML tags.
I haven't found a proper solution any other way to handle this problem, so now I am moving to Ruby.
This is so far what I have written:
require 'net/http'
require 'uri'
url=URI.parse('http://www.google.com.au')
res= Net::HTTP.start(url.host, url.port){|http|
http.get('/#hl=en&q=helloworld')}
puts res.body
Unfortunately I cannot use the recommended hpricot ruby gem (because it misses a make command or something?)
So I would like to stick with this approach.
Now that I can get the response body as a string, the only thing I need is to retrieve whatever is inside the <cite> HTML tags.
How should I do that? using regular expression? Can anyone give me an example?
Here's one way to do it using Nokogiri:
Nokogiri::HTML(res.body).css("cite").map {|cite| cite.content}
I think this will solve it:
res.body.scan(/<cite>([^<>]*)<\/cite>/imu).flatten
# This one ignores empty tags:
res.body.scan(/<cite>([^<>]*)<\/cite>/imu).flatten.select { |x| !x.empty? }
If you're having problems with hpricot, you could also try nokogiri which is very similar, and allows you to do the same things.
Split the string on the tag you want. Assuming only one instance of tag (or specify only one split) you'll have two pieces I'll call head and tail. Take tail and split it on the closing tag (once), so you'll now have two pieces in your new array. The new head is what was between your tags, and the new tail is the remainder of the string, which you may process again if the tag could appear more than once.
An example that may not be exactly correct, but you get the idea:
head1, tail1 = str.split('<tag>', 2)    # finds the opening tag
head2, tail2 = tail1.split('</tag>', 2) # finds the closing tag
(Note the limit argument must be 2, not 1: split with a limit of 1 returns the whole string in one piece.)
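Extended into a loop so it handles any number of tags; `between_tags` is a made-up helper name for illustration:

```ruby
# Repeatedly peel off "<tag>...</tag>" pairs from the front of the string.
def between_tags(str, tag)
  results = []
  tail = str
  while tail && tail.include?("<#{tag}>")
    _, tail    = tail.split("<#{tag}>", 2)   # drop everything up to the opening tag
    head, tail = tail.split("</#{tag}>", 2)  # head is the text between the tags
    results << head
  end
  results
end

between_tags('<cite>a.com/x</cite> and <cite>b.com/y</cite>', 'cite')
# => ["a.com/x", "b.com/y"]
```

This only works for literal, unnested tags without attributes; for anything messier, a real parser like Nokogiri is the safer choice.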