Confused how to use open tables in YQL

I am trying to get access to WhitePages using YQL. Unfortunately, I don't have much experience with open tables.
I was directed to the WhitePages XML table definition at:
http://github.com/spullara/yql-tables/blob/c63212b2ac9db6feb77ae3cecace51ed52e08c01/whitepages/whitepages.search.xml
Does anyone know how to use this table to extract meaningful information using YQL?
Specifically, I'm not sure how to make a query in YQL using this table to search for a person's name.
Help?

Go to the YQL console http://developer.yahoo.com/yql/console/ and click Show Community Tables. That should make the list on the right much bigger, and will include your table.

Have you tried putting
&env=http%3A%2F%2Fdatatables.org%2Falltables.env
at the end of your request?
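For example, with the community environment loaded you should be able to run something like the following in the YQL console (the firstname/lastname keys here are an assumption on my part; open the table's description in the console to confirm which keys whitepages.search actually expects):
select * from whitepages.search where firstname='John' and lastname='Smith'
The same query should also work against the public REST endpoint once you URL-encode it and append the env parameter shown above.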

Related

How do I use importXML/importHTML to retrieve a stock options chain that is hidden?

I am trying to use the importXML or importHTML function to retrieve a stock options chain, specifically to pull a table. So far, this is what I have:
=importhtml("https://bigcharts.marketwatch.com/quickchart/options.asp?symb=TSLA","table",3)
The problem I'm running into is that I can't retrieve "hidden" tables. For example, if you go to the website https://bigcharts.marketwatch.com/quickchart/options.asp?symb=TSLA and scroll down the page, the hidden tables are only revealed if you click "Show April 2022", "Show May 2022", etc. I am trying to retrieve all of this information.
The end result is that I would like to create a table that looks like this:
https://www.barchart.com/stocks/quotes/tsla/put-call-ratios
And a table that looks like this:
https://www.barchart.com/stocks/quotes/TSLA/options?moneyness=10&view=stacked&expiration=2022-04-14-m
In short, there are two things I am trying to create: the two tables shown above on Barchart.
I have tried to use importHTML and importXML on Barchart, but it looks like it's not allowed. If there is a way to retrieve the information directly from Barchart, that would be a much better solution than having to import all of the data separately from a different website.
Please note that I have beginner-level knowledge, so a step-by-step solution would be very helpful. Thank you.

ImportXML not returning entire table

I cannot get an entire table to populate with ImportXML. At best I get the first column and I cannot figure this out.
The website I am trying to scrape is: https://classic.warcraftlogs.com/character/us/kromcrush/chills
Do I have any options to retrieve the table, whether column by column or as a whole?
I have tried all the following plus several others.
=IMPORTXML("https://classic.warcraftlogs.com/character/us/kromcrush/chills","//table[#id='boss-table-1010']/tbody/tr")
=IMPORTXML("https://classic.warcraftlogs.com/character/us/kromcrush/chills","//tbody/tr")
=IMPORTXML("https://classic.warcraftlogs.com/character/us/kromcrush/chills","//tbody/tr/td")
=IMPORTXML("https://classic.warcraftlogs.com/character/us/kromcrush/chills","//tr")
=IMPORTXML("https://classic.warcraftlogs.com/character/us/kromcrush/chills","//tr/td")
=IMPORTXML("https://classic.warcraftlogs.com/character/us/kromcrush/chills","//tr/td[1]")
=IMPORTXML("https://classic.warcraftlogs.com/character/us/kromcrush/chills","//tr/td[2]")
Anything outside of column one says "Imported content is empty." Please help!
P.S. I have scoured this website and Google for answers, and every case I find seems to be a syntax error. Starting at the table itself doesn't return the entire table, which tells me I need a cleverer method.
It seems that's an issue with the website: when you use Inspect you can see the table with id "boss-table-1010", but if you use View Source that ID is not there. The ID is applied dynamically when the page renders, so Sheets doesn't find it.
I've checked it and I can get the data by doing
=IMPORTXML("https://classic.warcraftlogs.com/character/us/kromcrush/chills", "//table/tbody//td")
But if you want a more robust solution, it would be better to do it programmatically, for example with Python for web scraping.
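As a rough sketch of that programmatic route (this is not the poster's code; it assumes the table markup is present in the raw HTML, which the working IMPORTXML formula above suggests, and uses the requests and lxml libraries):
import requests
from lxml import html

# Fetch the raw HTML; like IMPORTXML, this does not run any JavaScript
url = "https://classic.warcraftlogs.com/character/us/kromcrush/chills"
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})

# Same idea as the formula above: every cell under any table body
tree = html.fromstring(resp.content)
cells = tree.xpath("//table/tbody//td")
print([cell.text_content().strip() for cell in cells])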

Attempting to import from an XPath always seems to yield blank information

Currently, in my Google Sheet, I'm working on a database for my cards' worth, and it seems like it doesn't want to grab the information no matter what XPath I attempt.
The website I'm trying to take information from is available here. *This is the hyperlink I'm feeding in.
In the top right corner I'm attempting to grab the worth-box information; here are the XPaths I've attempted so far:
"//a[#id='worthBox']/h4"
"/html/body/div[4]/div[1]/div[2]/form/div[1]/div[2]/div/a/h4"
"/h4"
"/h4[0-20]"
"//a[#id='worthBox'][1]/h4"
"//div[#id='estimate-box']/a/h4"
"//div[#id='estimate-box']/a[1]/h4"
Can someone explain to me why it doesn't seem to want to fetch? Is it even possible?
Thank you so much for your time and help!
At that URL, the value is inserted using JavaScript. But IMPORTXML cannot retrieve results that only exist after JavaScript has run; it retrieves the HTML without running JavaScript. I think your XPaths describe the page as it looks after JavaScript has run, so they cannot be used. However, it seems the value you expect can be retrieved with a different XPath.
Modified xpath:
//input[@id='medianHiddenField']/@value
Sample formula:
=IMPORTXML(A1,"//input[@id='medianHiddenField']/@value")
In this case, the URL https://mavin.io/search?q=Lugia%20NM%209%2F111%20-PSA&bt=sold# is put in cell A1.
Reference:
IMPORTXML

Using Lucene.Net_2_9_1/contrib/Spatial.Net with Umbraco

I've tried doing this with Umbraco 6.1.6. I've pretty much implemented what Drew did here: http://our.umbraco.org/forum/developers/extending-umbraco/23200-Lucene-with-spatialnet?p=0
I'm storing lat and long data in Umbraco nodes. The nodes are being indexed as encoded values with tiers. But I am not getting any results back when I add the DistanceFilter to the query.
I'm just wondering if anyone else has tried this and got it working. Perhaps you can post some code.
Thanks.
Josh,
Have you inspected the index using Luke, just to ensure that the fields are there? Also, have you tried running raw queries against the index in Luke (get the generated query written out from your code)?
Regards
Ismail

Creating a pdf invoice in rails/ruby

I want to create an invoice in PDF format using rails/ruby.
So a company header at the top, client information, then line items with pricing, and a total at the bottom, etc.
What do you guys advise?
Is it tricky to get the formatting correct so it prints out correctly?
I recommend a gem called prawn:
http://prawn.majesticseacreature.com/
I've used it before and liked the results.
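A minimal sketch of an invoice with Prawn could look like this (the company name and line-item values are placeholders, and on recent Prawn versions the table helper comes from the separate prawn-table gem):
require "prawn"
require "prawn/table" # table helper lives in the prawn-table gem

Prawn::Document.generate("invoice.pdf") do
  # Company header at the top
  text "Acme Corp", size: 20, style: :bold
  text "123 Example Street, Springfield"
  move_down 20

  # Client information
  text "Bill to: Jane Customer"
  move_down 20

  # Line items with pricing
  line_items = [
    ["Item", "Qty", "Price"],
    ["Widget", "2", "$10.00"],
    ["Gadget", "1", "$25.00"],
  ]
  table(line_items, header: true, width: bounds.width)

  # Total at the bottom
  move_down 20
  text "Total: $45.00", align: :right, style: :bold
end
Prawn gives you the layout primitives directly, so getting the output to print correctly is mostly a matter of tweaking positions and column widths rather than fighting HTML-to-PDF conversion.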
There is this great alternative too:
http://railscasts.com/episodes/220-pdfkit
