How to get the total number of retweets of my account? - Twitter

It looks like there is a method that gives the retweets of a particular tweet. Is there any way to find out the total number of all retweets of my tweets?

The answer is no. There may be a few hacks to get an approximation, but the answer is still no.
Twitter urges developers to think of timelines as an infinite stream rather than a finite list of tweets. You cannot count something when it has infinite length, so you cannot get the total number of retweets.
What you can do is take a small piece of the timeline (1000 tweets?) and say "I was retweeted 200 times in my past 1000 tweets".
When developing Twitter applications, always take this into consideration. There's no such thing as "all tweets", just the last x.
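The "last x" approximation can be sketched in a few lines. Assuming a Tweepy-style timeline where each status object carries a `retweet_count` attribute (as the real Twitter API's statuses do), the counting step is just a sum; fetching the statuses themselves would be something like `tweepy.Cursor(api.user_timeline).items(1000)`:

```python
def count_retweets(statuses):
    """Sum retweet_count over your own statuses, skipping retweets
    you made yourself (those carry someone else's counts)."""
    return sum(
        s.retweet_count
        for s in statuses
        if not getattr(s, "retweeted_status", None)
    )
```

So over your last 1,000 tweets you could report `count_retweets(last_1000)` retweets, with the caveat that this is a window, not a total.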

Related

Check the number of times a word or a phrase was tweeted

I have a general question regarding Twitter APIs in Python: is there a way to get the total number of times a particular word or phrase was tweeted?
Thanks in advance.
You can't get that for the life of Twitter. However, you might be able to use the search API to get an idea of how many times it was tweeted over the last 2 weeks, which is approximately the furthest back the search API goes:
import tweepy

# Authenticate with OAuth to get the authenticated rate limit
auth = tweepy.auth.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)

# One page of recent results for the word or phrase
search_results = api.search(q="<your word>")
Then count the number of tweets you get back for an approximation.
For more info, look at the Tweepy Search API. Also, look at Tweepy Cursors for getting more than the default count of tweets.
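As a sketch of that counting step (assuming a Tweepy `Cursor(...).items()` iterator, or any other iterable of statuses), you can cap how far you walk so a popular word doesn't page forever:

```python
import itertools

def count_matches(statuses, cap=1000):
    """Count statuses from an iterator, stopping at `cap` so a popular
    query doesn't exhaust your rate limit; islice stops pulling items
    once the cap is reached."""
    return sum(1 for _ in itertools.islice(statuses, cap))
```

With real Tweepy this would be roughly `count_matches(tweepy.Cursor(api.search, q="<your word>").items())`; remember the result is a floor over roughly the last two weeks, not a lifetime total.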

How can I get the most liked Tweet of a particular day?

Here is my example query. It specifies that Tweets must be:
Written in English
Tweeted between 23Jan2010 and 24Jan2010
Have at least 100 "favorites" (likes)
My idea is to use something like the binary search algorithm to narrow down the minimum number of likes the Tweet has. Once only one Tweet is returned by a query, I'll know it is the Tweet with the most likes. The problem is, min_faves (the operator that specifies the minimum number of likes) doesn't seem to work. Look at this query. It specifies min_faves as 100. As you can see, this Justin Bieber Tweet appears. It has 1.6k likes. Now, when I attempt to increase the min_faves value to 300 (to narrow down the most liked Tweet), the Justin Bieber Tweet is excluded! I don't know whether I am misunderstanding the query system or it is simply not working, but this seems incorrect: the Justin Bieber Tweet should show up, as it has more than 300 likes. This is just one example of how it doesn't seem to work.
Perhaps this is occurring because, within the specified time range, the Justin Bieber Tweet did not have enough likes to meet the requirement. That would actually suit me, as I am trying to find the most liked Tweet on that particular day, not the Tweet with the most likes right now that happens to have been posted on that day.
But, I do not believe this is the case. For instance, this query includes 3 Tweets from "Rev Run" when min_faves is set to 249, but returns 0 Tweets when min_faves is set to 250. I doubt that these Tweets all had exactly 249 likes on that day (as implied by these symptoms).
Does anyone either:
Understand why these results occur and how I can use this method to find the most liked Tweet of a particular day, or
Know of a better, alternative way to find the most liked Tweet of a particular day?
Thank you all.
@sinanspd requested an example from 2018:
Here is a search with min_faves at 300k. It includes a post with 769k likes and a post with 479k likes. When the query's min_faves is bumped up to 400k, neither is returned.
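For what it's worth, the binary-search idea itself is sound whenever the min_faves operator behaves monotonically. A sketch, where `count_tweets` is a hypothetical function that runs the day's search with a given threshold and returns how many Tweets came back:

```python
def max_threshold(count_tweets, lo=0, hi=10_000_000):
    """Binary-search the largest min_faves value that still returns at
    least one Tweet; that threshold identifies the most liked Tweet.
    Assumes count_tweets(lo) >= 1, i.e. the day has at least one Tweet."""
    while lo < hi:
        mid = (lo + hi + 1) // 2   # bias upward so the loop terminates
        if count_tweets(mid) >= 1:
            lo = mid               # still at least one Tweet: go higher
        else:
            hi = mid - 1           # overshot: come back down
    return lo
```

The symptoms described above suggest the operator isn't monotone in practice, which breaks the precondition this search relies on.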

Is there any limit to the number of rows returned by API?

I am making a bulk call with 30 posts and daily data for all of them. Are there any limits to the number of rows that will be returned by the API?
I am having problems getting the results.
Can anyone please help?
YouTube doesn't return any rows; it's not relational data. That may sound like a pedantic thing to point out, but it's crucial for this next point: the API will return 50 videos at a time, along with tokens to get more results based on the same query, up to a total of about 500. Because the data isn't relational, you can't just "select all rows" that match certain criteria. Rather, the service probabilistically determines relevance to your search parameters, and after about 500 results the algorithms don't have enough certainty to make additional results relevant.
So in your case, where you can change the date as needed (to let the algorithms be more specific), you'll want to do a series of calls, perhaps one date at a time. Since you have to paginate anyway to get more than 50 results, it's probably not much more expensive in terms of network bandwidth.
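The pagination loop looks roughly like this. `fetch_page` is a hypothetical stand-in for one `youtube.search().list(..., maxResults=50, pageToken=token).execute()` call from the google-api-python-client; here it returns the page's items and the `nextPageToken` (or `None` when the API stops issuing tokens):

```python
def collect_results(fetch_page, limit=500):
    """Follow nextPageToken until it runs out or ~500 items are
    collected, the practical ceiling described above."""
    items, token = [], None
    while len(items) < limit:
        page, token = fetch_page(token)
        items.extend(page)
        if not token:   # no further token: the API is done
            break
    return items[:limit]
```

You would then repeat `collect_results` once per date window to cover the whole range.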

Why limited number of next page tokens?

Through a script I can collect the sequence of videos that search.list returns, with maxResults set to 50. The total number of items is large, but there are not enough next-page tokens to retrieve all the desired results. Is there any way to get all the returned items, or is this a YouTube restriction?
Thank you.
No, the results you can retrieve from a search are limited in size.
The total number of results you are allowed to retrieve seems to have been reduced to 500 (in the past it was limited to 1000). The API does not allow you to retrieve more from a single query. To get more, try a number of queries with different parameters, such as publishedAfter, publishedBefore, order, type, or videoCategoryId, or vary the query terms, and keep track of the distinct video IDs returned.
See for a reference:
https://code.google.com/p/gdata-issues/issues/detail?id=4282
By the way, "totalResults" is an estimate and its value can change on the next page call.
See: YouTube API v3 totalResults field is returning 1 000 000 when it shoudn't

Collecting follower/friend Ids of large number of users - Twitter4j

I'm working on a research project which analyses closure patterns in social networks.
Part of my requirement is to collect followers and following IDs of thousands of users under scrutiny.
My problem is exceeding the rate limit of 350 requests/hour.
The limit is exhausted after just 4-5 users, i.e., when the number of followers I have collected passes the 350 mark.
For example, if I have 7 members, each with 50 followers, then collecting the follower details of just those 7 members exceeds my rate limit (7 * 50 = 350).
I found a related question in stackoverflow here - What is the most effective way to get a list of followers using Twitter4j?
The resolution mentioned there was to use the lookupUsers(long[] ids) method, which returns a list of User objects. But I find no way in the API to get the screen names of the friends/followers of a particular User object. Am I missing something here? Is there a way to collect the friends/followers of thousands of users effectively?
(Right now, I'm using the standard approach: OAuth authentication to get 350 requests/hour, followed by a call to twitter.getFollowersIDs.)
It's fairly straightforward to do this with a limited number of API calls.
It can be done with just two endpoints.
Let's say you want to get all my followers
https://api.twitter.com/1/followers/ids.json?screen_name=edent
That will return up to 5,000 user IDs.
You do not need 5,000 calls to look them up!
You simply post those IDs to users/lookup, in batches of up to 100 IDs per request.
You will then get back the full profile of every user following me, including their screen names.
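The batching looks like this (Python here, though Twitter4j's lookupUsers(long[]) follows the same shape); `lookup_users` is a hypothetical stand-in for one users/lookup call, which accepts up to 100 IDs per request:

```python
def batched(ids, size=100):
    """Split a flat list of follower IDs into lookup-sized batches."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def resolve_profiles(ids, lookup_users):
    """Turn follower IDs into full profiles, one lookup call per batch,
    so 5,000 IDs cost 50 requests instead of 5,000."""
    profiles = []
    for batch in batched(ids):
        profiles.extend(lookup_users(batch))
    return profiles
```

At 50 lookup calls per 5,000 followers, the 350 requests/hour budget stretches much further than one-call-per-user.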
