I need to get only the followers I have not fetched before. Currently I can only get the top 200 items (or a given count), but I don't want to fetch the same data more than once.
The only way I know is to cycle through them and follow each one that hasn't already been followed. Looking at the API, I don't believe it's possible:
https://www.geeksforgeeks.org/python-api-followers-in-tweepy/
Make sure you include the third line when setting up the API (it tells tweepy to wait when the rate limit is hit):
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth, wait_on_rate_limit = True)
Here's my snippet for following:
# `friends` is assumed to be a collection of the IDs you already follow,
# e.g. friends = api.get_friend_ids()
followers = tweepy.Cursor(api.get_followers).items()
for follower in followers:
    if follower.id not in friends:
        user = api.get_user(user_id=follower.id)  # optional: fetch the full user object
        follower.follow()
I just discovered that .items() can be passed a number. So you could do something like:
followers = tweepy.Cursor(api.get_followers).items(50)
Additionally, looking at the API documentation for the API.get_followers() method, you can also set the number of followers to go through by passing a value to the count parameter.
API.get_followers(*, user_id, screen_name, cursor, count, skip_status, include_user_entities)
API.get_followers(count=50)
The followers are returned in the order in which they were added.
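To address the original problem of not fetching the same data more than once, one option is to persist the follower IDs you have already processed between runs. This is just a sketch, not a tweepy feature; the seen_followers.json file name, the helper functions, and the follow() call are assumptions based on the snippet above.
import json
import tweepy

SEEN_FILE = "seen_followers.json"  # hypothetical file used to remember processed IDs

def load_seen():
    try:
        with open(SEEN_FILE) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()

def save_seen(seen):
    with open(SEEN_FILE, "w") as f:
        json.dump(sorted(seen), f)

def process_new_followers(api):
    # api is an authenticated tweepy.API instance, as set up above
    seen = load_seen()
    for follower in tweepy.Cursor(api.get_followers).items():
        if follower.id in seen:
            continue
        follower.follow()  # or whatever per-follower work you need to do
        seen.add(follower.id)
    save_seen(seen)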
When a user logs in using Facebook, I need to collect the list of all movies liked by the user and his/her friends.
user = FbGraph::User.fetch('me', :access_token => "access_token")
userFrnd = user.friends
movies=[]
userFrnd.each do | uf |
frnd = FbGraph::User.fetch(uf.raw_attributes['id'], :access_token => "access_token")
movies << frnd.movies
end
final_movie_list = movies.flatten.uniq_by {|track| track.raw_attributes["id"]}
This is my fb_graph function and it works fine, but I need to turn it into a batch request: since I have 360 friends, it takes 360 requests to run the function above. Please help me optimize this and reduce the time it takes.
I came to know that batch requests may help, but I don't know how to do that in fb_graph.
Please help me.
I'm using FbGraph ( github.com/nov/fb_graph ), version 2.7.8, and I'm able to make a batch request for 100 users at a time and get the following information about them by default.
I'm not using any access token, so you might be able to get more information, including movies.
id
name
first_name
last_name
link
username
gender
locale
Here's the demonstration code where ids is an array of Facebook User Ids:
r = FbGraph::User.fetch("?ids=#{ids.join(",")}")
r.raw_attributes #information about the users hashed by their id (String)
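If you are not tied to fb_graph, the same "?ids=" batching can be done over plain HTTP in chunks of 100. This is only a sketch of the idea in Python with the requests library; the access token is assumed, and the fields returned depend on your token's permissions.
import requests

GRAPH_URL = "https://graph.facebook.com"

def fetch_users_in_batches(ids, access_token, batch_size=100):
    users = {}
    for i in range(0, len(ids), batch_size):
        chunk = ids[i:i + batch_size]
        resp = requests.get(GRAPH_URL, params={
            "ids": ",".join(str(uid) for uid in chunk),
            "access_token": access_token,
        })
        resp.raise_for_status()
        users.update(resp.json())  # results are keyed by user id
    return users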
I'm querying the Google API to list all files in Drive using the official Google API gem for Ruby. I'm using the example given on the Google developers page - https://developers.google.com/drive/v2/reference/files/list
The first request I make returns, in "items", an array of Ruby Hashes. Subsequent requests return, in "items", an array of either "Google::APIClient::Schema::Drive::V2::File" or "Google::APIClient::Schema::Drive::V2::ParentReference" objects (the reason behind each type also bugs me).
Does anyone know why this happens? The "files.list" reference page says nothing about the type of the results changing.
def self.retrieve_all_files(client)
  drive = client.discovered_api('drive', 'v2')
  result = Array.new
  page_token = nil
  begin
    parameters = {}
    if page_token.to_s != ''
      parameters['pageToken'] = page_token
    end
    api_result = client.execute(
      :api_method => drive.files.list,
      :parameters => parameters)
    if api_result.status == 200
      files = api_result.data
      result.concat(files.items)
      page_token = files.next_page_token
    else
      puts "An error occurred: #{api_result.data['error']['message']}"
      page_token = nil
    end
  end while page_token.to_s != ''
  result
end
EDIT:
I couldn't solve the problem yet, but I managed to understand it better:
When the first request to the API is made, after authorization is granted by the user, "files.list" returns an array of Hashes in the "items" attribute of the result. Each of these Hashes is like a File resource, with all the attributes of the File; the difference is just in how the attributes are accessed. For example, the title of the file is accessed like this: File['title'].
After the first request, all subsequent requests return an array of File resources, which are accessed like this: File.title.
FYI, this was a bug in the client lib. Using the latest version should fix it.
How do I query the contents of a specific collection using the Python client for Google Docs API?
This is how far I've come:
client = gdata.docs.service.DocsService()
client.ClientLogin('myuser', 'mypassword')
FOLDER_FEED1 = "/feeds/documents/private/full/-/folder"
FOLDER_FEED2 = "/feeds/default/private/full/folder%3A"
feed = client.Query(uri=FOLDER_FEED1 + "?title=MyFolder&title-exact=true")
full_id = feed.entry[0].resourceId.text
(res_type, res_id) = full_id.split(":")
feed = client.Query(uri=FOLDER_FEED2 + res_id + "/contents")
for entry in feed.entry:
    print entry.title.text
The first call to Client.Query succeeds and seems to provide a valid resource ID. The second call, however, returns:
{'status': 400, 'body': 'Invalid request URI', 'reason': 'Bad Request'}
How can I correct this to get it working?
It is much easier, once you have a folder entry, to call client.GetResources(entry.content.src) rather than generating the URI yourself and using a Query.
In your case: client.GetResources(feed.entry[0].content.src).
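For illustration, a minimal sketch of that suggestion, assuming the newer gdata.docs.client.DocsClient (which is where GetResources lives) and that feed.entry[0] is the folder found above:
# assumes client is a gdata.docs.client.DocsClient, not the older DocsService
folder = feed.entry[0]
contents = client.GetResources(uri=folder.content.src)
for entry in contents.entry:
    print entry.title.text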
Is there a way in the Twitter API to get the replies to a particular tweet? Thanks
Here is the procedure to get the replies to a tweet:
When you fetch the tweet, store the tweet ID, i.e., id_str.
Using the Twitter Search API, do the following query:
[q="to:$tweeterusername", since_id = $tweetId]
Loop over all the results; the results whose in_reply_to_status_id_str matches $tweetId are the replies to the post.
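A sketch of that procedure in Python with tweepy (any HTTP client works the same way); api is assumed to be an authenticated tweepy.API instance, and search_tweets is the tweepy v4 name of the search endpoint.
import tweepy

def get_replies(api, screen_name, tweet_id):
    replies = []
    # q="to:<screen_name>" plus since_id narrows the search to candidate replies
    for status in tweepy.Cursor(api.search_tweets,
                                q="to:{}".format(screen_name),
                                since_id=tweet_id).items():
        if status.in_reply_to_status_id_str == str(tweet_id):
            replies.append(status)
    return replies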
From what I understand, there's not a way to do that directly (at least not now). It seems like something that should be added; they recently added some 'retweet' capabilities, so it seems logical to add this as well.
Here's one possible way to do this. First, some sample tweet data (from statuses/show):
<status>
<created_at>Tue Apr 07 22:52:51 +0000 2009</created_at>
<id>1472669360</id>
<text>At least I can get your humor through tweets. RT #abdur: I don't mean this in a bad way, but genetically speaking your a cul-de-sac.</text>
<source>TweetDeck</source>
<truncated>false</truncated>
<in_reply_to_status_id></in_reply_to_status_id>
<in_reply_to_user_id></in_reply_to_user_id>
<favorited>false</favorited>
<in_reply_to_screen_name></in_reply_to_screen_name>
<user>
<id>1401881</id>
...
From statuses/show you can find the user's ID. Then statuses/mentions_timeline will return a list of statuses for that user. Just parse that result looking for an in_reply_to_status_id matching the original tweet's ID.
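As a rough sketch of that idea with tweepy (one possible client, not the only way to call the endpoint); note that mentions_timeline only covers the authenticated user's mentions, so this works when you authored the original tweet.
import tweepy

def replies_via_mentions(api, original_tweet_id):
    # api is an authenticated tweepy.API instance
    return [
        status
        for status in tweepy.Cursor(api.mentions_timeline,
                                     since_id=original_tweet_id).items()
        if status.in_reply_to_status_id == original_tweet_id
    ]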
The Twitter API v2 supports this now using a conversation_id field. You can read more in the docs.
First, request the conversation_id field of the tweet.
https://api.twitter.com/2/tweets?ids=1225917697675886593&tweet.fields=conversation_id
Second, search tweets using the conversation_id as the query.
https://api.twitter.com/2/tweets/search/recent?query=conversation_id:1225912275971657728
This is a minimal example, so you should add other fields as you need to the URL.
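For example, the two calls above could look like this in Python with requests; the bearer token is a placeholder and the tweet ID is the one from the example URLs.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder app bearer token
headers = {"Authorization": "Bearer " + BEARER_TOKEN}

# 1) look up the conversation_id of the tweet
tweet = requests.get(
    "https://api.twitter.com/2/tweets",
    params={"ids": "1225917697675886593", "tweet.fields": "conversation_id"},
    headers=headers,
).json()
conversation_id = tweet["data"][0]["conversation_id"]

# 2) search recent tweets (last 7 days on this endpoint) in that conversation
replies = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    params={"query": "conversation_id:" + conversation_id},
    headers=headers,
).json()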
Twitter has an undocumented API called related_results. It will give you replies for the specified tweet ID. Not sure how reliable it is as it's experimental; however, this is the same API call that is made on the Twitter web client.
Use at your own risk. :)
https://api.twitter.com/1/related_results/show/172019363942117377.json?include_entities=1
For more info, check out this discussion on dev.twitter:
https://dev.twitter.com/discussions/293
Here is my solution. It utilizes Abraham's Twitter Oauth PHP library: https://github.com/abraham/twitteroauth
It requires you to know the Twitter user's screen_name attribute as well as the id_str attribute of the tweet in question. This way, you can get an arbitrary conversation feed from any arbitrary user's tweet:
*UPDATE: Refreshed code to reflect object access vs array access:
// $connection must be an authenticated Abraham\TwitterOAuth\TwitterOAuth instance
function get_conversation($connection, $id_str, $screen_name, $return_type = 'json', $count = 100, $result_type = 'mixed', $include_entities = true) {
    $params = array(
        'q' => 'to:' . $screen_name, // no need to urlencode this!
        'count' => $count,
        'result_type' => $result_type,
        'include_entities' => $include_entities,
        'since_id' => $id_str
    );

    $feed = $connection->get('search/tweets', $params);

    $comments = array();
    for ($index = 0; $index < count($feed->statuses); $index++) {
        if ($feed->statuses[$index]->in_reply_to_status_id_str == $id_str) {
            array_push($comments, $feed->statuses[$index]);
        }
    }

    switch ($return_type) {
        case 'array':
            return $comments;
        case 'json':
        default:
            return json_encode($comments);
    }
}
Here I am sharing simple R code to fetch the replies to a specific tweet.
userName = "SrBachchan"
##fetch tweets from #userName timeline
tweets = userTimeline(userName,n = 1)
## converting tweets list to DataFrame
tweets <- twListToDF(tweets)
## building queryString to fetch replies
queryString = paste0("to:",userName)
## retrieving tweet ID for which reply is to be fetched
Id = tweets[1,"id"]
## fetching all the reply to userName
rply = searchTwitter(queryString, sinceID = Id)
rply = twListToDF(rply)
## eliminate all replies other than replies to the required tweet Id
rply = rply[!rply$replyToSID > Id,]
rply = rply[!rply$replyToSID < Id,]
rply = rply[complete.cases(rply[,"replyToSID"]),]
## now rply DataFrame contains all the required replies.
You can use the twarc package in Python to collect all the replies to a tweet.
twarc replies 824077910927691778 > replies.jsonl
Also, it is possible to collect all the reply chains (replies to the replies) to a tweet using the command below:
twarc replies 824077910927691778 --recursive
Not in an easy, pragmatic way. There is a feature request in for it:
http://code.google.com/p/twitter-api/issues/detail?id=142
There are a couple of third-party websites that provide APIs but they often miss statuses.
I've implemented this in the following way:
1) statuses/update returns the id of the posted status (if include_entities is true)
2) Then you can request statuses/mentions and filter the result by in_reply_to_status_id. The latter should be equal to the id from step 1.
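A sketch of those two steps in Python against the raw v1.1 endpoints; the OAuth credentials and the status text are placeholders, and any OAuth 1.0a client would do.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

# 1) statuses/update returns the posted status, including its id_str
posted = requests.post(
    "https://api.twitter.com/1.1/statuses/update.json",
    data={"status": "Hello world", "include_entities": "true"},
    auth=auth,
).json()
status_id = posted["id_str"]

# 2) statuses/mentions_timeline, filtered by in_reply_to_status_id_str
mentions = requests.get(
    "https://api.twitter.com/1.1/statuses/mentions_timeline.json",
    params={"since_id": status_id, "count": 200},
    auth=auth,
).json()
replies = [m for m in mentions if m.get("in_reply_to_status_id_str") == status_id]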
As satheesh states, it works great. Here is the REST API code I used:
ini_set('display_errors', 1);
require_once('TwitterAPIExchange.php');
/** Set access tokens here - see: https://dev.twitter.com/apps/ **/
$settings = array(
'oauth_access_token' => "xxxx",
'oauth_access_token_secret' => "xxxx",
'consumer_key' => "xxxx",
'consumer_secret' => "xxxx"
);
// Your specific requirements
$url = 'https://api.twitter.com/1.1/search/tweets.json';
$requestMethod = 'GET';
$getfield = '?q=to:screen_name&since_id=twitter_id'; // replace screen_name and twitter_id with real values
// Perform the request
$twitter = new TwitterAPIExchange($settings);
$b = $twitter->setGetfield($getfield)
->buildOauth($url, $requestMethod)
->performRequest();
$arr = json_decode($b,TRUE);
echo "Replies <pre>";
print_r($arr);
die;
I came across the same issue a few months ago at work, as I was previously using their related_tweets endpoint in REST V1.
So I had to create a workaround, which I have documented here:
http://adriancrepaz.com/twitter_conversations_api Mirror - Github fork
This class should do exactly what you want.
It scrapes the HTML of the mobile site, and parses a conversation. I've used it for a while and it seems very reliable.
To fetch a conversation...
Request
<?php
require_once 'acTwitterConversation.php';
$twitter = new acTwitterConversation;
$conversation = $twitter->fetchConversion(324215761998594048);
print_r($conversation);
?>
Response
Array
(
    [error] => false
    [tweets] => Array
        (
            [0] => Array
                (
                    [id] => 324214451756728320
                    [state] => before
                    [username] => facebook
                    [name] => Facebook
                    [content] => Facebook for iOS v6.0 - Now with chat heads and stickers in private messages, and a more beautiful News Feed on iPad itunes.apple.com/us/app/faceboo...
                    [date] => 16 Apr
                    [images] => Array
                        (
                            [thumbnail] => https://pbs.twimg.com/profile_images/3513354941/24aaffa670e634a7da9a087bfa83abe6_normal.png
                            [large] => https://pbs.twimg.com/profile_images/3513354941/24aaffa670e634a7da9a087bfa83abe6.png
                        )
                )
            [1] => Array
                (
                    [id] => 324214861728989184
                    [state] => before
                    [username] => michaelschultz
                    [name] => Michael Schultz
                    [content] => #facebook good April Fools joke Facebook... chat hasn't changed. No new features.
                    [date] => 16 Apr
                    [images] => Array
                        (
                            [thumbnail] => https://pbs.twimg.com/profile_images/414193649073668096/dbIUerA8_normal.jpeg
                            [large] => https://pbs.twimg.com/profile_images/414193649073668096/dbIUerA8.jpeg
                        )
                )
            ....
        )
)
Since statuses/mentions_timeline returns only the 20 most recent mentions by default, it isn't that efficient to call, and it has limitations such as 75 requests per 15-minute window. Instead of this we can use user_timeline.
The best way:
1. Get the screen_name or user_id parameter from statuses/show.
2. Now use user_timeline:
GET https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=screen_name&count=count
(screen_name == the name we got from statuses/show)
(count == 1 up to a maximum of 200)
count: Specifies the number of Tweets to try and retrieve, up to a maximum of 200 per distinct request.
From the result, just parse the return looking for an in_reply_to_status_id matching the original tweet's ID.
Obviously, it's not ideal, but it will work.
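A sketch of that approach with tweepy; api is assumed to be an authenticated tweepy.API instance, and note that this only surfaces replies that appear in that user's own timeline.
import tweepy

def replies_from_user_timeline(api, screen_name, original_tweet_id, count=200):
    # screen_name is the author we got from statuses/show
    timeline = api.user_timeline(screen_name=screen_name, count=count)
    return [s for s in timeline if s.in_reply_to_status_id == original_tweet_id]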
If you need all replies related to one user for ANY DATE RANGE, and you only need to do it once (like for downloading your stuff), it is doable.
Make a Twitter development application
Apply for elevated credentials. You will instantly get them after filling out the forms. At least I did on two separate accounts today.
Your development account now has access to the v1.1 API search in the "Sandbox" tier. You get 50 requests against the tweets/search/fullarchive endpoint maxing out at 5000 returned tweets.
Make an environment for your development application.
Make a script to query https://api.twitter.com/1.1/tweets/search/fullarchive/<env name>.json where <env name> is the name of your environment. Make your query to:your_twitter_username, with fromDate set to when you created your account and toDate set to today.
Iterate over the results
This will not get your replies recursively
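As a sketch of the querying step above in Python, assuming the environment name, bearer token, and dates are placeholders you fill in; the premium full-archive endpoint paginates with a "next" token.
import requests

ENV_NAME = "my_env"          # placeholder: your dev environment name
BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder
URL = "https://api.twitter.com/1.1/tweets/search/fullarchive/{}.json".format(ENV_NAME)

def fetch_all_replies(username, from_date, to_date):
    # from_date/to_date in YYYYMMDDHHmm format, e.g. "200603210000"
    headers = {"Authorization": "Bearer " + BEARER_TOKEN}
    body = {"query": "to:" + username, "fromDate": from_date, "toDate": to_date}
    tweets = []
    next_token = None
    while True:
        if next_token:
            body["next"] = next_token
        resp = requests.post(URL, json=body, headers=headers).json()
        tweets.extend(resp.get("results", []))
        next_token = resp.get("next")
        if not next_token:
            break
    return tweets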