Hi, I am using the "Place Search" requests from Google:
A Place Search request is an HTTP URL of the following form:
https://maps.googleapis.com/maps/api/place/search/output?parameters
The problem is that I have to make hundreds of queries to Google, and this makes the app very slow.
Could I somehow bundle the requests into one? For example, could I send all the place names to Google at once and get the results back in a single response?
Regards, Yashu
You can only do what the documentation says will work.
Side note: Bulk downloads of data are forbidden by the Terms. It's open to question why you need to make hundreds of queries, and indeed whether your use case is allowed.
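For what it's worth, a single request against that URL form looks roughly like the sketch below (Ruby with net/http). The name/sensor parameters and the PLACES_KEY variable are just placeholders, not something from your setup, and there is no documented way to fold several searches into one call, so each place costs one round trip.

require "net/http"
require "json"
require "uri"

# Placeholder parameters -- substitute whatever your Place Search actually needs.
params = {
  name:   "Sydney Opera House",
  sensor: "false",
  key:    ENV["PLACES_KEY"]      # your own API key
}

uri       = URI("https://maps.googleapis.com/maps/api/place/search/json")
uri.query = URI.encode_www_form(params)

response = Net::HTTP.get_response(uri)   # one HTTP round trip per place
result   = JSON.parse(response.body)
puts result["status"]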
Related
I am new to Twitter and need some tips.
I need to display tweet feed from multiple users on some webpage.
The first thing I stumbled upon is Embedded Timelines. It allows you to display tweets from a list of users, but the gotcha is that those lists have to be maintained on the Twitter side (i.e. I cannot specify #qwe and #asd only on my side and get a timeline without adding those users to a list on the Twitter side).
The thing is that the list of users that should be included in the timeline is dynamic, and managing those lists through the Twitter API will probably be painful. Not to mention that my website will probably generate tons of those lists, and I feel that I will violate some API quotas sooner or later.
So, my question is: am I stuck with using Embedded Timelines that refer to some user list on the Twitter side and managing those lists through, say, the Twitter REST API, or is there a simpler way to do what I want?
It's pretty simple to display tweets for multiple users.
Links to start with
This post explains some of the search queries you can make
This post is a simple library to make requests to the twitter API that 'just works'
Your Query
Okay, so you want multiple users. The endpoint you're looking at using is the search/tweets one: https://api.twitter.com/1.1/search/tweets.json.
The query string uses the from: operator, and you can combine multiple from: clauses with OR.
An example query for the GET request:
?q=from:user1+OR+from:user2
Read more about the search API queries here.
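As a rough sketch of that query from Ruby, using the twitter gem (any OAuth-capable client will do; the credential names below are just environment-variable placeholders):

require "twitter"

client = Twitter::REST::Client.new do |config|
  config.consumer_key        = ENV["TWITTER_CONSUMER_KEY"]
  config.consumer_secret     = ENV["TWITTER_CONSUMER_SECRET"]
  config.access_token        = ENV["TWITTER_ACCESS_TOKEN"]
  config.access_token_secret = ENV["TWITTER_ACCESS_TOKEN_SECRET"]
end

# Same query as above: recent tweets from either user.
client.search("from:user1 OR from:user2").take(20).each do |tweet|
  puts "#{tweet.user.screen_name}: #{tweet.text}"
end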
Your "over-the-quote" issue
This is something you're going to need to figure out yourself. Depending on the number of requests you expect to make and the rate limits Twitter imposes, you may want some sort of caching: save results as you fetch them, and serve from the cache whenever you're at your limit.
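A minimal sketch of that idea, assuming the twitter gem client from the earlier snippet (the cache here is just an in-memory hash; swap in whatever store you prefer):

CACHE = {}   # query string => last successful result set

def tweets_for(query, client)
  results = client.search(query).take(20)
  CACHE[query] = results
  results
rescue Twitter::Error::TooManyRequests
  # Over the rate limit: serve whatever we fetched last time (possibly empty).
  CACHE.fetch(query, [])
end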
I'm building a portal that lists certain products and automatically gets the prices from the product pages of listed vendors. To get the URL for a product page on a vendor's website, I've been using the Google search API, and it's been working great: the first result is invariably the page for the product. However, now I'm getting errors saying that Google has blocked my website (actually my development machine's IP) from the API because I've been making automated requests such as scraping (the only listed reason that applies).
Fine, Google can go jump off a cliff, but... how do product portals generally get URLs for their products? I can enter the URLs manually, but that becomes a problem if a vendor's website changes its URL scheme. I obviously need an automated way to do this.
I'm making no more than 50-60 requests per day, so I don't get what Google wants. Do they want money?
First, they want you to use one of their APIs, not scrape their web page directly. Their Custom Search API is documented here. Once you register, they'll give you an API key. You can get results in JSON format by requesting:
https://www.googleapis.com/customsearch/v1?q=SEARCH_TERMS&key=YOUR_KEY
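A quick sketch of that request from Ruby (SEARCH_TERMS and YOUR_KEY become real values; note the API also wants a cx parameter identifying the custom search engine you registered):

require "net/http"
require "json"
require "uri"

uri       = URI("https://www.googleapis.com/customsearch/v1")
uri.query = URI.encode_www_form(
  q:   "some product name",       # your SEARCH_TERMS
  key: ENV["GOOGLE_API_KEY"],     # your YOUR_KEY
  cx:  ENV["GOOGLE_CSE_ID"]       # id of your custom search engine
)

results = JSON.parse(Net::HTTP.get(uri))
first   = results["items"] && results["items"].first
puts first["link"] if first       # URL of the top hit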
Second, they do like money, but you might be okay. You're allowed 100 searches per day for free; beyond that you're going to be charged $5 per thousand searches.
Is there any built-in way to get the long, expanded URLs from a Twitter RSS feed? Right now the feed lists all the URLs as http://t.co.... I'd like to do what Twitter's own display does and show the long URLs; I'd also like to avoid having to make either an API call or an HTTP request for each URL in the feed. Ideally, I'd also like to avoid using the Twitter API directly, but if that's the only way, so be it.
Clarification
I'm not interested in doing a separate request for every single t.co link, or in calling the Twitter API. I was hoping there was a single request I could make that would include the long URLs in the metadata (or even provide the tweet in its full expanded form, as it appears on Twitter). It turns out the way to do this is to request the JSON version from search.twitter.com rather than the RSS feed, and tack on include_entities=true.
Rewrite; hopefully this makes it clearer
I'm using http://search.twitter.com/search.rss to get a feed of tweets matching a search term. The feed contains only the shortened t.co urls. Is there a way to modify my request so that the tweets contain the expanded URLs instead?
The goal is to make just one request rather than having to go through the tweets and resolve each t.co URL separately (especially since, for a feed with several dozen t.co URLs, that means several dozen separate requests). If necessary, I'm willing to use the Twitter API directly to do the search instead of using RSS, but for my purposes a feed is more convenient.
No, Twitter does not offer a urls entity in its RSS responses, nor does the include_entities option appear to work there. You'll have to use a different response format, e.g. JSON (where the include_entities option does work and adds an entities['urls'][n]['expanded_url'] field), or "unshorten" the URLs yourself after the fact.
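A rough sketch of that JSON route against the search.twitter.com endpoint the question mentions (the query term is just an example):

require "net/http"
require "json"
require "uri"

uri       = URI("http://search.twitter.com/search.json")
uri.query = URI.encode_www_form(q: "stackoverflow", include_entities: true)

data = JSON.parse(Net::HTTP.get(uri))
data["results"].each do |tweet|
  urls = tweet.fetch("entities", {}).fetch("urls", [])
  urls.each { |u| puts "#{u['url']} -> #{u['expanded_url']}" }   # t.co link and its long form
end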
There is a way to do this without using the Twitter API directly. You can use one of several resources:
http://expandurl.appspot.com/
API call prototype: http://expandurl.appspot.com/expand?url=
http://longurl.org
API call prototype: http://api.longurl.org/v2/expand?url=
http://unshort.me/
API call prototype: http://api.unshort.me/?r=http://
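For illustration, a call to the LongURL service from Ruby might look like this; the format parameter and the "long-url" response field are taken from my reading of their v2 docs, so verify them before relying on it:

require "net/http"
require "json"
require "uri"

def expand(short_url)
  uri       = URI("http://api.longurl.org/v2/expand")
  uri.query = URI.encode_www_form(url: short_url, format: "json")
  JSON.parse(Net::HTTP.get(uri))["long-url"]   # field name per the LongURL docs
end

puts expand("http://t.co/example")   # hypothetical t.co link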
Of course, you can also use the Twitter API directly for this, as Jordan mentioned, by including &include_entities=1 (or true) as a parameter on some calls.
You can also try cURLing the URL and seeing what information you can glean from the response. I think that pretty much exhausts the options.
I'm having a hard time figuring out where to start with this one. I pull information from an external website and put some of the content on my page. I think I need two things: 1. A Google search that returns the URL of the top result, given the name of my current object. 2. A way to examine the source of that result and output the contents of a tag with a specific class.
To better explain this, I'll create a hypothetical situation: say I have a website that lists mattresses and gives reviews. Say I want to add other websites' reviews, and on one of those websites there's a tag like 3.5/5. I then want to display this rating along with a link to the external page. Is there a way to search the site like "site:http://mattressreviewsite/ #mattress.name", pull that top URL, and then search the source for the string "class='rating'" and display it in my view?
Thanks for any help or guidance. I'm using Rails 3.
You need an HTTP client (httparty, or the built-in net/http) for that, plus some parsing to get the required results.
Go study Google's URL patterns (as far as I remember it was google.com/search?q=search_string) and use the HTTP client for the requests (GET/POST). Parse the result (there are many HTML parser gems available too) to get what you need and to drive any subsequent HTTP requests. And don't forget Google's 'I'm Feeling Lucky' feature, which returns only one result.
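A rough sketch of that flow with net/http and Nokogiri, using the hypothetical mattress example from the question. Whether Google still answers the I'm Feeling Lucky request (btnI) with a plain redirect, and whether the target page really uses class="rating", are assumptions you'd have to check:

require "net/http"
require "nokogiri"
require "uri"

product_name = "SomeMattress 3000"    # hypothetical product name

# 1. "I'm Feeling Lucky" search: Google should redirect straight to the top result.
lucky = URI("https://www.google.com/search")
lucky.query = URI.encode_www_form(q: "site:mattressreviewsite.com #{product_name}", btnI: "1")
top_url = Net::HTTP.get_response(lucky)["location"]

# 2. Fetch that page and pull out the element with class="rating".
if top_url
  page   = Nokogiri::HTML(Net::HTTP.get(URI(top_url)))
  rating = page.at_css(".rating")
  puts "#{rating.text.strip} -- #{top_url}" if rating
end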
All the best!
Is there a way to treat Google search results as an RSS feed?
For example, say I worked for Stack Overflow and wanted to monitor how the results from the following search URL change from day to day: http://www.google.com/search?hl=en&q=stackoverflow
It would be cool if I could append &output=rss to the URL and get back a feed, like with Google News. But that does not seem to be supported.
Anyone have ideas? (Note: I am programming with Ruby and Rails, if that matters.)
Thanks!
Jonathan
Google has the Google Alerts service, which notifies you whenever it finds new content matching a certain query. Besides sending results to an email address (as they happen, daily, or weekly), it lets you create an RSS feed from them.
No, Google doesn't offer that feature.
If you need to parse/convert the result of a query on Google, you can use a (X)HTML parser such as Nokogiri.
Beware that automated requests to Google may violate its TOS.
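If you do go the parsing route, here is a rough sketch with Nokogiri plus Ruby's built-in rss library, assuming you've already saved the results page to a file (mind the TOS caveat above) and that result titles still sit under an h3 > a element, which is a guess about markup that changes often:

require "nokogiri"
require "rss"

html = File.read("google_results.html")    # the fetched results page; obtain it however you see fit
doc  = Nokogiri::HTML(html)

# "h3 a" as the selector for result titles is an assumption about Google's current markup.
results = doc.css("h3 a").map { |a| { title: a.text, link: a["href"] } }

feed = RSS::Maker.make("2.0") do |maker|
  maker.channel.title       = "Google results for 'stackoverflow'"
  maker.channel.link        = "http://www.google.com/search?hl=en&q=stackoverflow"
  maker.channel.description = "Daily snapshot of the search results"
  results.each do |r|
    maker.items.new_item do |item|
      item.title = r[:title]
      item.link  = r[:link]
    end
  end
end

puts feed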