I am using Twitter's REST API for the first time and I am a little confused by their documentation. I want to poll their API every ten minutes (to stay within the rate limits), retrieve the previous ten minutes of tweets, and then do some processing on them.
I am using "GET statuses/home_timeline" to do this. The first part of the documentation says it will return the most recent 20 tweets, but then says it will return up to 800, and later says it will return 200.
Could someone advise me on the correct method to use?
Thanks
EDIT: Documentation link: http://dev.twitter.com/doc/get/statuses/home_timeline
To get the home timeline (assuming you've already authenticated), you will have to GET the home timeline endpoint as follows:
For XML:
http://api.twitter.com/1/statuses/home_timeline.xml
For JSON:
http://api.twitter.com/1/statuses/home_timeline.json
For RSS:
http://api.twitter.com/1/statuses/home_timeline.rss
For ATOM:
http://api.twitter.com/1/statuses/home_timeline.atom
It will return the latest 20 statuses (if no count parameter is passed), but the furthest back it can go is capped at 800 statuses in total, and retweets are counted toward that cap when they are included.
The count parameter lets you request more than the default 20 statuses, but the most you can ask for in a single request is 200 (retweets count toward these limits even when they are filtered out of the results). To go beyond that, up toward the 800 cap, you page through the timeline with further requests.
Does that make sense?
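For illustration, here's a minimal Python sketch of that call, assuming the third-party tweepy library (my choice for the example; any OAuth-capable client works the same way) and placeholder credentials:

import tweepy

# Placeholder credentials; replace with your own app's keys
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# count raises the default of 20 up to the per-request maximum of 200
for tweet in api.home_timeline(count=200):
    print(tweet.id, tweet.text)

From there, paging back toward the 800-status cap is a matter of repeating the call with max_id, as described in the other answers here.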
I'm making a new iOS (Swift) app to test some concepts, and I'm using the GitHub Search API to retrieve a list of filtered repositories.
The request is working fine so far, but I'm having trouble understanding the pagination process and how to know when I've reached the end of the results.
From what I've seen, the Search API returns a maximum of 1,000 results, broken into pages of at most 100 results each. But the total-count field in the returned JSON shows far more available results (I imagine it shows the total number of repositories that satisfy the query, not the maximum the API will return).
The only information about the pages (and the pagination process) I've found so far in the GitHub documentation comes in the headers of the response, like:
Status: 200 OK
Link: <https://api.github.com/resource?page=2>; rel="next",
<https://api.github.com/resource?page=5>; rel="last"
X-RateLimit-Limit: 20
X-RateLimit-Remaining: 19
Can anyone suggest the best approach to detect the end of the pages in this case?
Should I try to parse the information from the header, or infer it somehow from the returned JSON? I did manage to get the "Link" header value, but I don't know how to parse it.
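One approach, sketched here in Python for illustration rather than Swift: the requests library parses the Link header for you and exposes it as response.links, so you can simply loop until there is no rel="next" entry. The search query below is a made-up example:

import requests

url = "https://api.github.com/search/repositories"
params = {"q": "language:swift", "per_page": 100}

while url:
    response = requests.get(url, params=params)
    response.raise_for_status()
    items = response.json()["items"]
    # ... process this page of up to 100 repositories ...

    # requests exposes the parsed Link header as a dict keyed by rel;
    # no "next" entry means this was the last page
    url = response.links.get("next", {}).get("url")
    params = None  # the "next" URL already carries the query string

The same rel="next"/rel="last" parsing can be done by hand in Swift; the point is that the Link header, not total_count, is what tells you when to stop.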
I want to get all tweets, all followers, and all followings from 2015-12-01 to 2016-03-20 using the Twitter API.
When I use the following code it always gives me the latest 20 tweets. I am passing the 'until' parameter, but what I want is to pass a date range such as from 2015-12-01 to 2016-03-20.
How is it possible to get Twitter data from 2015-12-01 to 2016-03-20?
$connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET, $access_token['oauth_token'], $access_token['oauth_token_secret']);
$twtrdata = $connection->get("https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=" . $twitteruser . "&until:2016-03-01");
There is no until parameter for the Twitter API's user_timeline endpoint.
The Twitter API is pretty limited. For example, user_timeline will only return a maximum of 200 tweets at a time, so you will have to call it multiple times to get all of a user's tweets. Each returned tweet carries a created_at attribute that you can use to filter out the tweets within your time frame, but you'll probably still have to run multiple requests to collect all of them. In addition to the 200-per-request limit, there is also a restriction in place so that you cannot fetch more than the 3,200 most recent tweets of a user.
For more info on this API call, and the rest of the Twitter API, have a look at Twitter's documentation.
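To make the created_at filtering concrete, here is a minimal Python sketch; the sample tweets are made up, and in practice they would come from your paged user_timeline calls:

from datetime import datetime, timezone

start = datetime(2015, 12, 1, tzinfo=timezone.utc)
end = datetime(2016, 3, 20, tzinfo=timezone.utc)

# Stand-ins for tweets returned by user_timeline
tweets = [
    {"id": 1, "created_at": "Tue Mar 01 12:00:00 +0000 2016"},
    {"id": 2, "created_at": "Sat Jun 04 09:30:00 +0000 2016"},
]

# Twitter's v1.1 timestamps use this fixed format
fmt = "%a %b %d %H:%M:%S %z %Y"
in_range = [t for t in tweets
            if start <= datetime.strptime(t["created_at"], fmt) <= end]
print(in_range)  # only the tweet from 2016-03-01 survives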
I am working with the YouTube Content ID API to fetch assets which were added today.
First of all I tried exploring the API in the YouTube Content ID API explorer, and found the asset search method suitable for my criteria. I provided the required parameters and got a response, but the response included only 25 results each time, so I used the nextPageToken received from the response to get the next assets. So far so good, but across these responses I noticed that resultsPerPage varies for each request, which confused me, as I had assumed that resultsPerPage indicates all the assets for the particular content owner. I had started coding under that assumption, and now I'm unable to decide how I should proceed.
Can anyone help me understand this?
Neither totalResults nor resultsPerPage is reliable, according to employees at YouTube. You can only rely on the data that actually comes through. You can verify this with your TAM/Partner Manager.
In order to get the real count of your assets (I'm assuming you're using AssetSearch?), you have to keep paginating until there's no "nextPageToken" in the response, and count the results as you go.
By the way, if you set the parameter maxResults=50 in your request, you'll get 50 per page (until there are fewer than 50 left to return, which should only happen on the last page, assuming your total number of assets isn't divisible by 50).
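The counting loop might look like this minimal Python sketch. fetch_page is a hypothetical stand-in for your real assetSearch request, backed here by canned pages so the sketch runs on its own:

# Canned responses standing in for the real API
PAGES = {
    None: {"items": [1] * 50, "nextPageToken": "p2"},
    "p2": {"items": [1] * 50, "nextPageToken": "p3"},
    "p3": {"items": [1] * 13},  # no nextPageToken: the last page
}

def fetch_page(page_token=None):
    # In real code: issue the assetSearch request with
    # pageToken=page_token and maxResults=50, return the JSON body
    return PAGES[page_token]

def count_all_assets():
    total, token = 0, None
    while True:
        page = fetch_page(token)
        total += len(page.get("items", []))
        token = page.get("nextPageToken")
        if not token:
            return total

print(count_all_assets())  # 113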
I have a problem with the Twitter API. I tweeted in the past (around 400 tweets) but recently I haven't tweeted anything. When I try to fetch tweets by me using the Twitter API, there are no results. How can I retrieve the older tweets?
Twitter doesn't return tweets older than a week through the Search API. Take a look at the limitations section at the link below:
https://dev.twitter.com/docs/using-search
I had the same problem as you, so after seeing that Twitter's web search works, I started to implement my own solution, which you can see on my GitHub. It is implemented in Java, but I will write a post on my blog explaining how to do it in other languages. I've downloaded tweets without any problems; in my last test I parsed more than 600k tweets from 2014 from some specific users.
You can use the REST API resource GET statuses/user_timeline to retrieve the most recent 3200 tweets from any public timeline.
This is possible in Twitter's web search portal but not through their API. Bummer
https://twitter.com/search-home
This elaborates on @bennett-mcelwee's answer, where getting up to the 3200 most recent tweets of a user can be done in a series of API calls. Currently the maximum number of tweets you can get from a user in one request is 200, using the GET statuses/user_timeline call. To get all the tweets a user has posted to their timeline, do the following:
STEP 1
Make a GET call to this endpoint, passing the parameter count=200.
STEP 2
From the data returned in step 1, get the ID of the last (oldest) tweet.
Make the same GET call, but this time pass the parameter max_id= set to the ID of the last tweet returned by the first call, minus 1 (max_id is inclusive, so subtracting 1 keeps that tweet from being returned twice). So for example max_id=9987999
STEP 3
Repeat step 2 until you don't get any new (older) data.
For my purposes, I was able to do this in Ruby using https://github.com/sferik/twitter
Once a client object is instantiated, it's as simple as:
# Fetch the user's 200 most recent tweets
tweets = client.user_timeline('foobar', count: 200)
# max_id is inclusive, so subtract 1 to avoid fetching the last tweet again
max_id = tweets.last.id - 1
tweets += client.user_timeline('foobar', count: 200, max_id: max_id)
From here you get the idea, and it's fairly trivial to write a loop until you've grabbed all the tweets the API will give you.
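For comparison, the full loop might look like this in Python, assuming the tweepy library instead of the Ruby gem:

import tweepy

def all_user_tweets(api, screen_name):
    tweets = api.user_timeline(screen_name=screen_name, count=200)
    while tweets:
        # max_id is inclusive, so subtract 1 to avoid re-fetching
        # the oldest tweet we already have
        older = api.user_timeline(screen_name=screen_name, count=200,
                                  max_id=tweets[-1].id - 1)
        if not older:  # empty batch: we've hit the ~3200-tweet cap
            break
        tweets.extend(older)
    return tweets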
I'm using the Twitter lists API to get tweets from a set of accounts that I've added to a list. However, I'm noticing that for some reason I'm not receiving the correct number of tweets in the response from Twitter. Here is my URL:
https://api.twitter.com/1/lists/statuses.xml?list_id=68707107&per_page=30
I'm clearly asking for 30 results there; however, if you type that into a web browser you'll see it does not return 30 results. Does anyone know why this is?
Thanks!!
The per_page attribute is an "up to" value. If you use since_id you may get better results. And pages in the API are being deprecated, as you can read in the docs.
Work out your solution using the since_id and max_id arguments in the API.
Check https://dev.twitter.com/docs/working-with-timelines
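To sketch what the since_id approach looks like in practice, here's a minimal Python polling loop, assuming the tweepy library (whose list_timeline wraps GET lists/statuses); the count and sleep interval are just examples, and the loop runs forever by design:

import time
import tweepy

def poll_list(api, list_id=68707107):
    newest_id = None
    while True:
        kwargs = {"list_id": list_id, "count": 30}
        if newest_id:
            kwargs["since_id"] = newest_id  # only tweets newer than last poll
        tweets = api.list_timeline(**kwargs)
        if tweets:
            newest_id = tweets[0].id  # results arrive newest-first
            # ... process the new tweets ...
        time.sleep(60)  # wait before polling again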
Check this answer: GET lists/statuses per_page returning unexpected results
Twitter may be limiting the number of tweets per request to 20, so you could try downloading 20 and then loading the next 10 after that.