I am using Twitter Search API to get 10 entries for a particular search term.
I issued:
http://search.twitter.com/search.atom?q=%40cldmgc&rpp=10
But I am getting only 5 entries.
Is there any way I can resolve this that doesn't require authentication?
Your problem isn't rate limiting; instead, you've run into one of the annoying limitations of the Search API: it only returns results from roughly the last four days. For your query, there have only been five status updates in the last four days, and that's all you will be able to fetch.
If, however, your search terms yield more than five results (like this query for the word "purple": http://search.twitter.com/search.atom?q=purple&rpp=10), you will find that Twitter has nicely included in the results the query you need to get the next page of results.
<link type="application/atom+xml" href="http://search.twitter.com/search.atom?max_id=86674031113289728&page=2&q=purple&rpp=10" rel="next"/>
Just read the href from the <link> element with type="application/atom+xml" and rel="next". After your XML parser decodes the href, you'll have a URL of the form: http://search.twitter.com/search.atom?max_id=86674031113289728&page=2&q=purple&rpp=10
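If it helps, here is a rough sketch of following those rel="next" links in Python (the Atom endpoint above has since been retired, so treat this purely as an illustration of the pattern):

import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def fetch_all_pages(start_url, max_pages=5):
    # Follow rel="next" links until the feed stops providing one.
    url = start_url
    entries = []
    for _ in range(max_pages):
        with urllib.request.urlopen(url) as resp:
            feed = ET.fromstring(resp.read())
        entries.extend(feed.findall(ATOM_NS + "entry"))
        next_url = None
        for link in feed.findall(ATOM_NS + "link"):
            if link.get("rel") == "next" and link.get("type") == "application/atom+xml":
                next_url = link.get("href")
                break
        if next_url is None:  # no next link means this was the last page
            break
        url = next_url
    return entries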
Not getting next page token on third page of Google Places API?
I am using the Google Places Text Search API in my Ruby on Rails application. Everything works fine, but after the third page I am not getting a next page token, so for every text search I get only 60 results. Am I missing something? Any help would be appreciated. This happens for every search text.
My request for the first page:
https://maps.googleapis.com/maps/api/place/textsearch/json?key=#{my_key}&query=#{my_query}
My request for the other pages, with the token:
https://maps.googleapis.com/maps/api/place/textsearch/json?key=#{my_key}&pagetoken=#{next_page_token}
Also, when I search on Google itself it shows hundreds of results for the same place text. How can I get more than 60 results?
This is intended behavior for Google's Places API, as you can only get up to 60 places, split across 3 pages (3 queries). This is why there is no next_page_token on the third page.
By default, each Nearby Search or Text Search returns up to 20 establishment results per query; however, each search can return as many as 60 results, split across three pages. If your search will return more than 20, then the search response will include an additional value, next_page_token.
Reference here.
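A rough sketch of that paging loop in Python (assuming the requests package; the key and query values are placeholders):

import time
import requests

SEARCH_URL = "https://maps.googleapis.com/maps/api/place/textsearch/json"

def text_search_all(api_key, query):
    results = []
    params = {"key": api_key, "query": query}
    while True:
        data = requests.get(SEARCH_URL, params=params).json()
        results.extend(data.get("results", []))
        token = data.get("next_page_token")
        if not token:  # the third page carries no token, so we stop at roughly 60 results
            break
        time.sleep(2)  # the docs mention a short delay before a new token becomes valid
        params = {"key": api_key, "pagetoken": token}
    return results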
I'm making a new iOS (Swift) app to test some concepts, and I'm using the GitHub Search API to retrieve a list of filtered repositories.
The request is working fine so far, but I'm having trouble understanding the pagination process and how to know I reached the end of the results.
From what I've seen, the Search API returns a maximum of 1,000 results, broken into pages of at most 100 results each. But the total-count field in the returned JSON shows far more available results (I imagine it shows the total number of repositories that satisfy the query, not the maximum the API will return).
The only way I have found so far to obtain information about the pages (and the pagination process) in the GitHub documentation is in the headers of the response, like:
Status: 200 OK
Link: <https://api.github.com/resource?page=2>; rel="next",
<https://api.github.com/resource?page=5>; rel="last"
X-RateLimit-Limit: 20
X-RateLimit-Remaining: 19
Can anyone suggest the best approach to detect the end of the pages in this case?
Should I try to parse the information from the headers, or infer it somehow from the returned JSON? I even got the "Link" header value but don't know how to parse it.
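My current idea is to parse the Link header and treat the disappearance of the rel="next" entry as the end of the pages. Here is a rough sketch of that logic (in Python rather than Swift, and assuming the requests package, but the parsing should translate directly):

import requests

def parse_link_header(link_header):
    # Turn '<url>; rel="next", <url>; rel="last"' into {"next": url, "last": url}.
    links = {}
    if not link_header:
        return links
    for part in link_header.split(","):
        url_part, _, rel_part = part.partition(";")
        url = url_part.strip().strip("<>")
        rel = rel_part.strip().replace('rel="', "").rstrip('"')
        links[rel] = url
    return links

def search_all_repos(query):
    url = "https://api.github.com/search/repositories"
    params = {"q": query, "per_page": 100}
    items = []
    while url:
        resp = requests.get(url, params=params)
        items.extend(resp.json().get("items", []))
        url = parse_link_header(resp.headers.get("Link")).get("next")
        params = None  # the "next" URL already carries its own parameters
    return items

Is that the right approach, or is there something better I should infer from the JSON itself?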
Using this webpage as an example http://forums.macrumors.com/showthread.php?t=1688317
In a Google spreadsheet, the following DO NOT work with importxml():
//a[contains(#href,"showpost")]/#href
//a[contains(#href,"showcount")]/#href
//*[#id="postcount18545482"]
The last one (//*[#id="postcount18545482"]) was copied directly from Chrome's element viewer.
The following DO work, but they exclude any results containing the words "showcount", "postcount", or "showpost":
//div[contains(@id,"post_message")]/@id
//a[contains(@href,"show")]/@href
//a[contains(@href,"post")]/@href
Is there something special about the word "count" when working with importxml() or XPath? How can I get the missing entries?
The ImportXML function in a Google Docs spreadsheet cannot process data that is created in a two-step process, for example when an authentication token must be retrieved before making the URL request, or when the URL tells the server to dynamically create an XML output after which the user is redirected to that output, even when the URL stays the same. You might want to look into Google Apps Script (http://code.google.com/googleapps/appsscript/index.html) to handle this case.
Taken from here
In your particular case the anchor parameters get set by the vbulletin_post_loader.js script, which is called after the page container is loaded.
...
pc_obj=fetch_object("postcount"+this.postid);
openWindow("showpost.php?"+(SESSIONURL?"s="+SESSIONURL:"")
+(pc_obj!=null?"&postcount="+PHP.urlencode(pc_obj.name):"")+"&p="+A)
...
In other words, when importXML() scans the page, the nodes containing 'showpost' or 'postcount' in their href are not yet on the page.
It looks like importXML() works with static pages only and cannot handle dynamically loaded content.
Try to find another way of obtaining the number of posts in a thread.
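If you want to verify that, here is a quick Python sketch that fetches the raw HTML (which is all importXML() ever sees) and checks for those strings; the thread URL from the question may no longer resolve, so it is only illustrative:

import urllib.request

url = "http://forums.macrumors.com/showthread.php?t=1688317"
html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

for needle in ("showpost", "postcount", "post_message"):
    print(needle, "found in static HTML:", needle in html)
# If 'showpost' and 'postcount' come back False while 'post_message' is True,
# those links really are injected later by vbulletin_post_loader.js.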
I would like to know how I can get all tweets for a certain hashtag.
I am currently using the following code:
xhr.open("GET","http://search.twitter.com/search.json?q=%23PrayForJapan");
This only returns me 15 tweets. Does anyone know how to make it return more?
Also, I have code that gets the tweets of a certain screen name, but it only returns 20 tweets. How can I request the next 20 tweets?
The code I used for that is:
xhr.open("GET","http://api.twitter.com/1/statuses/user_timeline.json?screen_name=Eminem");
I'm using Titanium to create this, but I don't think that is the issue.
Thanks!
You can usually add count=x as a parameter to the query string to get up to x tweets (for the Search API the equivalent parameter seems to be rpp). Query string parameters are added to the base URL after a ?, and each additional parameter is separated by &, as in http://api?user=1&count=4
Most of the time, though, it is better to remember the last tweets you saw and then add since_id=x, as this way you only get tweets you have not seen before.
Have a look at the API documentation.
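As a rough Python sketch of how those parameters are assembled (the v1 endpoints in the question have since been retired, so this only illustrates the query string):

from urllib.parse import urlencode

base = "http://api.twitter.com/1/statuses/user_timeline.json"
params = {
    "screen_name": "Eminem",
    "count": 100,             # ask for up to 100 tweets instead of the default 20
    # "since_id": 123456789,  # or only fetch tweets newer than the last one you saw
}
print(base + "?" + urlencode(params))
# -> http://api.twitter.com/1/statuses/user_timeline.json?screen_name=Eminem&count=100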
How would I go about displaying tweets that contain a certain hashtag using the Twitter API? Thanks
I'd also like to know if there is a way to get all tweets for a certain hashtag into a separate file, including the ones that no longer show up in your feed. I suppose that's what the earlier question was about, too.
This answer was written in 2010. The API it uses has since been retired. It is kept for historical interest only.
Search for it.
Make sure include_entities is set to true to get hashtag results. See Tweet Entities.
Returns 5 mixed results with Twitter.com user IDs plus entities for the term "blue angels":
GET http://search.twitter.com/search.json?q=blue%20angels&rpp=5&include_entities=true&with_twitter_user_id=true&result_type=mixed
UPDATE for v1.1:
Rather than passing q="search_string", pass q="#hashtag" in URL-encoded form to return results for the hashtag only. So your query would become:
GET https://api.twitter.com/1.1/search/tweets.json?q=%23freebandnames
%23 is the URL-encoded form of #. Try the link out in your browser and it should work.
You can refine the query by adding the since_id and max_id parameters detailed here. Hope this helps!
Note: the Search API is now an OAuth-authenticated call, so please include your access tokens with the above call.
Updated
Twitter Search doc link:
https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets.html
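For completeness, here is a minimal Python sketch of the v1.1 hashtag search using app-only (Bearer) authentication, one of the available auth options (assuming the requests package; BEARER_TOKEN is a placeholder for a token from your own Twitter app):

import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder; obtain this from your Twitter app

resp = requests.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    params={"q": "#freebandnames", "count": 100},  # requests URL-encodes '#' as %23
    headers={"Authorization": "Bearer " + BEARER_TOKEN},
)
for tweet in resp.json().get("statuses", []):
    print(tweet["id_str"], tweet["text"])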
The answer here worked better for me, as it isolates the search to the hashtag rather than just returning results that contain the search string. With the answer above you would still need to parse the JSON response to check that the entities.hashtags array is not empty.