Using nokogiri I need to search through some HTML for something like:
new GLatLng(-14.468352,132.270434)
and then assign the latitude and longitude values in that code to two variables.
You haven't shown us any example HTML. Nokogiri seems to be the wrong tool for this job if you're just searching for plain text. You could simply do:
require 'open-uri'
html = open('http://stackoverflow.com/questions/6739202/find-google-map-line-w-nokogiri').read
match = /new GLatLng\((?<lat>.+?),(?<long>.+?)\)/.match html
p match[:lat].to_f
#=> -14.468352
Or, if you need an array of all such matches, say the page also has new GLatLng(17.3,42.1) on it:
matches = html.scan /new GLatLng\((.+?),(.+?)\)/
p matches
#=> [["-14.468352", "132.270434"],["17.3", "42.1"]]
The only reason you might want to use Nokogiri would be to limit your searching to a particular HTML element (e.g. some <script> block).

I wrote a script to extract the price from HTML:
sign = '$'
doc.css('*:contains("'+sign+'")').each do |element|
  # Code that I wrote to extract the price from the text
end
The problem is that for some sites Nokogiri finds all the elements containing a dollar sign, while for other sites it doesn't find even one.
Example for site that it doesn't find even one element: http://www.urbanoutfitters.com/urban/catalog/productdetail.jsp?id=39101399&category=W-SOUTH
What am I doing wrong?
I am trying to get auto-corrected spelling from Google's home page using Nokogiri.
For example, if I am typing "hw did" and the correct spelling is "how did", I have to get the correct spelling.
I tried with the xpath and css methods, but in both cases, I get the same empty array.
I got the XPath and CSS paths using FireBug.
Here is my Nokogiri code:
@requ = params[:search]
@requ_url = @requ.gsub(" ", "+") # to encode the URL (if the user inputs a space it should be converted into +)
@doc = Nokogiri::HTML(open("https://www.google.co.in/search?q=#{@requ_url}"))
binding.pry
Here are my XPath and CSS selectors:
Using XPath:
pry(#<SearchController>)> @doc.xpath("/html/body/div[5]/div[2]/div[6]/div/div[4]/div/div/div[2]/div/p/a").inspect
=> "[]"
Using CSS:
pry(#<SearchController>)> @doc.css('html body#gsr.srp div#main div#cnt.mdm div.mw div#rcnt div.col div#center_col div#taw div div.med p.ssp a.spell').inner_text()
=> ""
First, use the right tools to manipulate URLs; they'll save you headaches.
Here's how I'd find the right spelling:
require 'nokogiri'
require 'uri'
require 'open-uri'
requ = 'hw did'
uri = URI.parse('https://www.google.co.in/search')
uri.query = URI.encode_www_form({'q' => requ})
doc = Nokogiri::HTML(open(uri.to_s))
doc.at('a.spell').text # => "how did"
It works fine with "how did", but check it with "bnglore" or any one-word string; it gives an error, the same one I was facing in my previous code: undefined method `text'.
It's not that hard to figure out. They're changing the HTML so you have to change your selector. "Inspect" the suggested word "bangalore" and see where it exists in relation to the previous path. Once you know that, it's easy to find a way to access the word:
doc.at('span.spell').next_element.text # => "bangalore"
Don't trust Google to do things the easy way, or even the best way, or be consistent. Just because they return HTML one way for words with spaces, doesn't mean they're going to do it the same way for a single word. I would do it consistently, but they might be trying to discourage you from mining their pages so don't be surprised if you see variations.
Now, you need to figure out how to write code that knows when to use one selector/method or the other. That's for you to do.
So basically I am scraping a website, and I want to display only part of the address. For instance, if it is www.yadaya.com/nyc/sales/manhattan and I want to only put "sales" in a hash or an array.
{
:listing_class => listings.css('a').text
}
That will give me the whole URL. Would I want to gsub to get the partial output?
Thanks!
When you are dealing with URLs, start with the URI class; then, to work with the path, switch to File.dirname and/or File.basename:
require 'uri'
uri = URI.parse('http://www.yadaya.com/nyc/sales/manhattan')
dir = File.dirname(uri.path).split('/').last
which sets dir to "sales".
No regex is needed, except what parse and split do internally.
Using that in your code's context:
File.dirname(URI.parse(listings.css('a').text).path).split('/').last
but, personally, I'd break that into two lines for clarity and readability, which translate into easier maintenance.
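Broken out that way, with a literal URL standing in for the href text scraped from the page, it might look like:

```ruby
require 'uri'

url = 'http://www.yadaya.com/nyc/sales/manhattan' # stand-in for the scraped link text
path = URI.parse(url).path
listing_class = File.dirname(path).split('/').last
# listing_class => "sales"
```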
A warning though:
listings.css('a')
returns a NodeSet, which is akin to an Array. If the DOM you are searching has multiple <a> tags, you will get more than one Node being passed to text, which will then be concatenated into the text you are treating as a URL. That's a bug waiting to happen:
require 'nokogiri'
html = '<div><a>foo</a><a>bar</a></div>'
doc = Nokogiri::HTML(html)
doc.at('div').css('a').text
Which results in:
"foobar"
Instead, your code needs to be:
listings.at('a')
or
listings.at_css('a')
so only one node is returned. In the context of my sample code:
doc.at('div').at('a').text
# => "foo"
Even if the code that sets up listings only results in a single <a> node being visible, use at or at_css for correctness.
Since you have the full URL using listings.css('a').text, you could parse out a section of the path using a combination of the URI class and a regular expression, using something like the following:
require 'uri'
uri = URI.parse(listings.css('a').text)
=> #<URI::HTTP:0x007f91a39255b8 URL:http://www.yadaya.com/nyc/sales/manhattan>
match = %r{^/nyc/([^/]+)/}.match(uri.path)
=> #<MatchData "/nyc/sales/" 1:"sales">
match[1]
=> "sales"
You may need to tweak the regular expression to meet your needs, but that's the gist of it.
I have an RSS document that has a few tags, let's say named <foo> and <bar>, where I want to replace/massage the content. What's the most efficient way of doing this? Do I parse the entire feed and replace content inline? If so, how would the block look like if I want to do it for the two sibling nodes above?
Does it require parsing the document sequentially and creating a new one as I go through content?
The document is getting created with something like:
doc = Nokogiri::XML(open("http://example.com/rss.xml"))
What's the best way to iterate over doc and modify the contents of <foo> and <bar> from that point?
You can edit the XML document directly in memory. If you're looking for a simple way to do it, you can use CSS selectors. The following code will change the content of the foo and bar elements no matter where they are located within the document:
doc = Nokogiri::XML(open("http://example.com/rss.xml"))
doc.css('foo, bar').each do |element|
  element.content = "something"
end
You can also use multiple CSS selectors or an XPath query; have a look at the Nokogiri documentation:
http://nokogiri.org
http://nokogiri.org/Nokogiri/XML/Node.html#method-i-css
http://nokogiri.org/Nokogiri/XML/Node.html#method-i-xpath
xml = "<r>
<foo>Hello<b>World</b></foo>
<x>It's <bar>Nice</bar> to see you.</x>
<foo>Here's another</foo>
<y>Don't touch me.</y>
</r>"
require 'nokogiri'
doc = Nokogiri::XML(xml)
doc.search('foo,bar').each do |node|
node.inner_html = "I am #{node.name} and I used to say #{node.text.inspect}"
end
puts doc
#=> <?xml version="1.0"?>
#=> <r>
#=> <foo>I am foo and I used to say "HelloWorld"</foo>
#=> <x>It's <bar>I am bar and I used to say "Nice"</bar> to see you.</x>
#=> <foo>I am foo and I used to say "Here's another"</foo>
#=> <y>Don't touch me.</y>
#=> </r>
You can also use doc.xpath('//foo|//bar') to find all the foo and bar elements at any depth. (The CSS syntax is shorter and sufficiently powerful, though.)
In the future, you should supply an actual sample of the XML you are parsing, and an actual sample of the sort of transformation you wish to apply.
Greetings everyone:
I would love to get some information from a huge collection of Google Search Result pages.
The only thing I need is the URLs inside a bunch of <cite></cite> HTML tags.
I could not find a proper way to handle this problem with anything else, so now I am moving to Ruby.
This is so far what I have written:
require 'net/http'
require 'uri'
url = URI.parse('http://www.google.com.au')
res = Net::HTTP.start(url.host, url.port) { |http|
  http.get('/#hl=en&q=helloworld')
}
puts res.body
Unfortunately I cannot use the recommended hpricot Ruby gem (its installation fails because of a missing make command or something similar).
So I would like to stick with this approach.
Now that I can get the response body as a string, the only thing I need is to retrieve whatever is inside those <cite> tags.
How should I do that? using regular expression? Can anyone give me an example?
Here's one way to do it using Nokogiri:
Nokogiri::HTML(res.body).css("cite").map {|cite| cite.content}
I think this will solve it:
res.body.scan(/<cite>([^<>]*)<\/cite>/imu).flatten
# This one ignores empty tags:
res.body.scan(/<cite>([^<>]*)<\/cite>/imu).flatten.select { |x| !x.empty? }
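A quick self-contained check of that scan, using a made-up string standing in for the response body:

```ruby
body = 'a <cite>www.example.com/one</cite> b <cite></cite> c <cite>www.example.com/two</cite>'
# Capture the text inside each <cite>, then drop the empty ones
urls = body.scan(/<cite>([^<>]*)<\/cite>/imu).flatten.select { |x| !x.empty? }
# urls => ["www.example.com/one", "www.example.com/two"]
```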
If you're having problems with hpricot, you could also try nokogiri which is very similar, and allows you to do the same things.
Split the string on the tag you want. Assuming only one instance of the tag (or limiting the split to one occurrence), you'll have two pieces I'll call head and tail. Take tail and split it on the closing tag, so you'll again have two pieces. The new head is what was between your tags, and the new tail is the remainder of the string, which you may process again if the tag can appear more than once.
An example that may not be exactly correct but you get the idea:
head1, tail1 = str.split('<tag>', 2)   # split once at the opening tag
head2, tail2 = tail1.split('</tag>', 2) # split once at the closing tag
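A concrete run of that idea (note that String#split's second argument is the maximum number of resulting pieces, so it must be 2 to split just once):

```ruby
str = 'junk <cite>www.example.com/one</cite> more junk'
head, tail = str.split('<cite>', 2)    # head = leading junk, tail = the rest
inner, rest = tail.split('</cite>', 2) # inner = text between the tags
# inner => "www.example.com/one"
```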