I'm making a new iOS (Swift) app to test some concepts, and I'm using the GitHub Search API to retrieve a list of filtered repositories.
The request is working fine so far, but I'm having trouble understanding the pagination process and how to know I reached the end of the results.
From what I've seen, the Search API returns a maximum of 1,000 results, broken into pages of up to 100 results each. But the total-count field in the returned JSON shows far more available results (I imagine it reflects the total number of repositories that satisfy the query, not the maximum the API will actually return).
The only information about the pages (and the pagination process) I found so far in the GitHub documentation comes in the response headers, like:
Status: 200 OK
Link: <https://api.github.com/resource?page=2>; rel="next",
<https://api.github.com/resource?page=5>; rel="last"
X-RateLimit-Limit: 20
X-RateLimit-Remaining: 19
Can anyone suggest the best approach to detect the end of the pages in this case?
Should I try to parse the information from the headers or infer it somehow from the returned JSON? I even got the "Link" header value, but I don't know how to parse it.
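For what it's worth, here is a minimal sketch of a Link-header parser in Swift. It assumes the header has exactly the shape shown above (comma-separated <url>; rel="name" entries); a production parser would want to be more forgiving:

import Foundation

// Parse a GitHub-style Link header into a [rel: URL] dictionary.
func parseLinkHeader(_ header: String) -> [String: URL] {
    var links: [String: URL] = [:]
    for part in header.components(separatedBy: ",") {
        let sections = part.components(separatedBy: ";")
        guard sections.count >= 2 else { continue }
        // The URL section looks like: <https://api.github.com/resource?page=2>
        let urlString = sections[0]
            .trimmingCharacters(in: .whitespaces)
            .trimmingCharacters(in: CharacterSet(charactersIn: "<>"))
        // The rel section looks like: rel="next"
        let rel = sections[1]
            .trimmingCharacters(in: .whitespaces)
            .replacingOccurrences(of: "rel=", with: "")
            .trimmingCharacters(in: CharacterSet(charactersIn: "\""))
        if let url = URL(string: urlString) {
            links[rel] = url
        }
    }
    return links
}

With that, detecting the end of the results is just checking parseLinkHeader(header)["next"] == nil: GitHub omits the rel="next" entry on the last page.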
Related
Not getting next page token on third page (Google Places API)?
I am using the Google Places text search API in my Ruby on Rails application. Everything is working fine, but after the third page I am not getting any next page token, so for every text search I get only 60 results. Am I missing something? Please suggest; any help would be appreciated. This happens for every text search.
My request for the first page:
https://maps.googleapis.com/maps/api/place/textsearch/json?key=#{my_key}&query=#{my_query}
My request for the other pages, with the token:
https://maps.googleapis.com/maps/api/place/textsearch/json?key=#{my_key}&pagetoken=#{next_page_token}
And usually when I search on Google itself it shows hundreds of results for the same place text. How can I get more than 60 results?
This is intended behavior for Google's Places API, as you can only get up to 60 places, split across 3 pages (3 queries). This is why there is no next_page_token on the third page.
By default, each Nearby Search or Text Search returns up to 20 establishment results per query; however, each search can return as many as 60 results, split across three pages. If your search will return more than 20, then the search response will include an additional value, next_page_token.
Reference here.
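In code, the paging loop just follows next_page_token until it disappears. A minimal Swift sketch, assuming a hypothetical fetchPlacesJSON(url:) helper that performs the request and decodes the JSON into a dictionary:

import Foundation

// Collect every available page of Text Search results (at most 3 pages / 60 places).
// Assumes apiKey and query are already URL-encoded.
func fetchAllPlaces(apiKey: String, query: String) async throws -> [[String: Any]] {
    var results: [[String: Any]] = []
    var url = URL(string: "https://maps.googleapis.com/maps/api/place/textsearch/json?key=\(apiKey)&query=\(query)")!
    while true {
        let json = try await fetchPlacesJSON(url: url)          // hypothetical helper
        results += json["results"] as? [[String: Any]] ?? []
        guard let token = json["next_page_token"] as? String else {
            break                                               // no token: the 60-result cap is reached
        }
        // The token reportedly takes a short moment to become valid.
        try await Task.sleep(nanoseconds: 2_000_000_000)
        url = URL(string: "https://maps.googleapis.com/maps/api/place/textsearch/json?key=\(apiKey)&pagetoken=\(token)")!
    }
    return results
}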
I am working with the YouTube Content ID API to fetch assets that were added today.
First of all, I tried exploring the API in the YouTube Content ID API explorer and found that asset search suits my criteria. So I provided the required parameters and got the response, but the response includes only 25 results each time, so I used the nextPageToken
received from the response to get the next assets. So far so good, but across all these responses I noticed that resultsPerPage varies for each request, which confused me, as I had assumed that resultsPerPage indicates all the assets for the particular content owner. I started coding under that assumption, but now I'm unable to decide how to proceed.
Can anyone help me understand this?
Both totalResults and resultsPerPage are unreliable, according to employees at YouTube. You can only rely on the data that actually comes through. You can verify this with your TAM/Partner Manager.
In order to get the real count of your assets (I'm assuming you're using AssetSearch?), you have to keep paginating until there's no "nextPageToken" in the response, and count your results.
By the way, if you set the parameter maxResults=50 in your request, you'll get 50 per page (until there are fewer than 50 left to display, which should only happen on the last page, assuming your number of assets isn't divisible by 50).
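As a sketch of that counting loop in Swift, where fetchAssetPage(pageToken:) is a hypothetical helper wrapping an AssetSearch request with maxResults=50:

// Keep paginating until nextPageToken disappears, counting items as we go.
func countAllAssets() async throws -> Int {
    var total = 0
    var pageToken: String? = nil
    repeat {
        let page = try await fetchAssetPage(pageToken: pageToken)   // hypothetical helper
        total += (page["items"] as? [[String: Any]])?.count ?? 0
        pageToken = page["nextPageToken"] as? String
    } while pageToken != nil
    return total
}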
After a lot of debugging, it finally occurred to me that YouTube seemingly only issues the first 100 comments when using the v2 YouTube API for getting comments. I finally tried using:
curl -Lk -X GET "http://gdata.youtube.com/feeds/api/videos/MShbP3OpASA/comments?alt=json&start-index=100&max-results=50"
And all I get is a response without an entry parameter. That is to say, I do not receive an error response or something like that - I get a perfectly good response, but without the entry parameter.
Digging a little deeper, in my response the value for openSearch$totalResults is 100, so according to this resource this seems to be the expected result (although it mentions some kind of error message, which I don't get?).
But here comes the kicker: When I use
curl -Lk -X GET "http://gdata.youtube.com/feeds/api/videos/MShbP3OpASA/comments?alt=json&start-index=1&max-results=50&orderby=published"
openSearch$totalResults equals 3141, the actual count of the comments.
Now here is my question: since the v2 API was officially deprecated about a week ago, is it possible that Google just set a limit on the comments, so that only the first 100 are accessible? Since the v3 API does not allow for comment retrieval, that would be a real bummer for me.
Does anyone have any ideas?
I've figured out how to retrieve all the comments using the navigation links embedded in the json response.
Suppose you retrieve the first page using a link like (Python here, but you get the point):
r'https://gdata.youtube.com/feeds/api/videos/' + aVideoID + r'/comments?alt=json&start-index=1&max-results=50&prettyprint=true&orderby=published'
Embedded in the JSON under "feed" (and before the comments) will be a four-element array called "link". The fourth element will have "rel": "next", and under "href" there will be a link you can use to get the next 50 comments. The link will look something like:
https://gdata.youtube.com/feeds/api/videos/fH0cEP0mvlU/comments?alt=json&orderby=published&alt=json&start-token=EgkI2NqyoZDRvgIosK%2FPosPRvgIw653cmsXRvgI4AUAC&max-results=50&orderby=published
for an original URL of:
https://gdata.youtube.com/feeds/api/videos/fH0cEP0mvlU/comments?alt=json&start-index=1&max-results=50&prettyprint=true&orderby=published
If you follow the next link, it will return JSON similar to the original link, with another 50 comments. Continue this process over and over until you get all the comments (in my code I check both for the absence of this item in the JSON and for zero comments in the JSON to determine when to stop).
You need the "&orderby=published" in the original URL because otherwise the "next" links eventually grow too large and cause an error (something in the token the API uses to track which comments you've seen under the default ordering takes a lot of space). The published ordering keeps the "start-token" small, whereas after about 500 comments with the default ordering you will start getting 414 Request-URI Too Long errors.
Hope this helps.
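A Swift translation of that loop, assuming a hypothetical fetchJSON(url:) helper that performs the GET and decodes the JSON into a dictionary:

import Foundation

// Walk the v2 comments feed by following the embedded "rel": "next" link.
func fetchAllComments(videoID: String) async throws -> [[String: Any]] {
    var comments: [[String: Any]] = []
    var url = URL(string: "https://gdata.youtube.com/feeds/api/videos/\(videoID)/comments?alt=json&start-index=1&max-results=50&orderby=published")
    while let current = url {
        let json = try await fetchJSON(url: current)            // hypothetical helper
        guard let feed = json["feed"] as? [String: Any] else { break }
        let entries = feed["entry"] as? [[String: Any]] ?? []
        if entries.isEmpty { break }                            // zero comments: done
        comments += entries
        // Look for the "rel": "next" element in the feed's "link" array.
        let links = feed["link"] as? [[String: Any]] ?? []
        let next = links.first { ($0["rel"] as? String) == "next" }
        url = (next?["href"] as? String).flatMap(URL.init(string:))
    }
    return comments
}

As in the answer above, the loop stops either when the next link is absent or when a page comes back with no entries.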
I'm trying to use the YouTube API to return videos that were recently published, but the filter I'm using doesn't seem to work as expected.
This API call only returns two videos whereas there should be tons more that were published after March 1st:
https://gdata.youtube.com/feeds/api/videos?q=&fields=entry[xs:dateTime(published)%20%3E%20xs:dateTime('2013-03-01T12:00:00.000Z')]
However, if I add a query string, then many more results are returned. For example:
https://gdata.youtube.com/feeds/api/videos?q=surfing&fields=entry[xs:dateTime(published)%20%3E%20xs:dateTime('2013-03-01T12:00:00.000Z')]
Anyone know why? Is there another approach I should be using to just get me the latest videos published regardless of query string?
I understand your confusion, but that's not what the fields= parameter is used for. The documentation should hopefully clear things up, but to summarize, using fields= in that manner is equivalent to making a request without the fields= parameter and then filtering the results of that request so that it only includes the entries that match your filter.
So if your request without fields= would normally return 25 specific videos, adding fields= to it will give you a response that includes somewhere between 0 and 25 videos—all the non-matching videos are filtered out.
You can request a feed of recently published videos without any other filters using http://gdata.youtube.com/feeds/api/videos?v=2&orderby=published
I'm using the Twitter list API to get tweets from a set of accounts that I've added to a list. However, I'm noticing that for some reason I'm not receiving the correct number of tweets in the response from Twitter. Here is my URL:
https://api.twitter.com/1/lists/statuses.xml?list_id=68707107&per_page=30
I'm clearly asking for 30 results there, however if you just type that into a web browser you'll see it does not return 30 results. Does anyone know why this is?
Thanks!!
The per_page attribute is an "up to" value. If you use since_id you may get better results, and pages in the API are being deprecated, as you can read in the docs.
Work out your solution using the since_id and max_id arguments of the API.
Check https://dev.twitter.com/docs/working-with-timelines
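A minimal sketch of that max_id cursoring in Swift, where fetchListPage(listID:maxID:) is a hypothetical helper hitting lists/statuses with the page size set:

// Cursor backwards through the list timeline until a page comes back empty.
func fetchListTimeline(listID: String) async throws -> [[String: Any]] {
    var tweets: [[String: Any]] = []
    var maxID: Int64? = nil
    while true {
        let page = try await fetchListPage(listID: listID, maxID: maxID)  // hypothetical helper
        guard !page.isEmpty else { break }          // empty page: reached the end
        tweets += page
        // Ask next for tweets strictly older than the oldest one just received.
        guard let lowest = page.compactMap({ $0["id"] as? Int64 }).min() else { break }
        maxID = lowest - 1
    }
    return tweets
}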
Check this answer: GET lists/statuses per_page returning unexpected results
Twitter may be limiting the number of tweets per request to 20, so you could try downloading 20 and then loading the next 10 after that.