I'm trying to build an application similar to Trendsmap (trendsmap.com).
So far I have used Twitter's streaming API to get tweets, filtering by geocoordinates so that only geotagged tweets are returned, and I am storing them in CouchDB.
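For context, here is a rough sketch in Python/tweepy (pre-4.0 API) of the kind of pipeline I mean, where save_to_couchdb is a stand-in for my actual storage code:

import tweepy

class GeoListener(tweepy.StreamListener):
    def on_status(self, status):
        # keep only geotagged tweets
        if status.coordinates:
            save_to_couchdb(status._json)  # hypothetical storage helper

stream = tweepy.Stream(auth, GeoListener())
stream.filter(locations=[-180, -90, 180, 90])  # worldwide bounding box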
Now I need to find the most trending topics based on location.
I can't figure out how to do this.
Is my approach right?
The Twitter trends API gives only the top ten trending topics for a given WOEID (or for the world), and at most the 30 top daily trends. I need to find the topics that are trending per location and then map them onto some visualization.
Can anyone help me with any idea?
If you're willing to use Java, you could use Twitter4J and get trends like so...
import java.util.ArrayList;
import java.util.List;
import twitter4j.*;
import twitter4j.auth.AccessToken;

public List<String> getTrends(AccessToken atoken) throws TwitterException {
    Twitter twitter = TwitterFactory.getSingleton();
    twitter.setOAuthAccessToken(atoken);
    // WOEID 1 = worldwide; pass a different WOEID for a specific location
    Trends trends = twitter.getLocationTrends(1);
    List<String> currentTrends = new ArrayList<String>();
    for (Trend t : trends.getTrends()) {
        currentTrends.add(t.getName());
    }
    return currentTrends;
}
The above function returns the current worldwide trends. Simply change the getLocationTrends() argument to a different WOEID to get the trends for that area. If you're looking to map particular tweets onto precise areas, you'd filter your tweets to only those with geolocations tagged.
If you're not using Java, just use the Twitter API directly or via an alternative library. The same process applies.
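For example, here's a minimal sketch of the same process in Python with tweepy (assuming tweepy < 4.0, where the call is named trends_place; the credential constants are placeholders):

import tweepy

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)

def get_trends(woeid=1):
    # WOEID 1 = worldwide; pass e.g. 2459115 (New York) for a city
    result = api.trends_place(woeid)
    return [trend["name"] for trend in result[0]["trends"]]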
I have a general question regarding the Twitter APIs in Python - is there a way to get the total number of times a particular word or phrase was tweeted?
Thanks in advance.
You can't get that over the entire life of Twitter. However, you might be able to use the search API to get an idea of how many times it was tweeted over the last 2 weeks, which is approximately as far back as the search API goes:
auth = tweepy.auth.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
search_results = api.search(q="<your word>")
Then count the number of tweets you get back for an approximation.
For more info, look at the Tweepy Search API. Also, look at Tweepy Cursors for getting more than the default count of tweets.
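For instance, a rough sketch building on the snippet above, using a Cursor to page past the default result count and tally matches (still limited to roughly the last 2 weeks, and subject to rate limits):

count = 0
for tweet in tweepy.Cursor(api.search, q="<your word>").items():
    count += 1
print("approximate number of tweets:", count)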
This is a more comprehensive and complete version of a question I already asked a while ago at Get location with Wikimedia API. I have dug through the MediaWiki API, GeoData API and Wikidata Query SPARQL Service documentation for days, and published my question on Stack Overflow and several Wikimedia talk boards, but didn't find a satisfying answer.
The question is as follows: I am trying to use the GeoData API to perform the aforementioned task - country and city attribution of a geolocated item. The short description of my task: get a list of Wikipedia pages around a certain location defined by coordinates, get some page properties (page views, main image), then get the country and the city (human-readable names, not IDs) which each page item belongs to. Example: imagine I have some geo coordinate near the Sagrada Familia as input. I want to receive a list of N Wikipedia pages within a 1 km radius around this coordinate, the number of page views and the main image for each of these pages, and, for each item described on a page, to have it determined that the item is located in Barcelona, Spain. I could perform this with one Wikimedia call plus N Wikibase Query Service calls, but it is crucial to perform the request in one call.
I found the GeoData API very clean, simple and user-friendly for retrieving various data according to the geolocation of an item. But there are difficulties with retrieving the country/city affiliation of an item. While the country can theoretically be obtained in a single request as a parameter of the GeoData API itself (and even then not always: only when it has been specified, and not as a name but as its alphabetic country code), the city can only be obtained for items which are themselves cities. On the other hand, this information does exist for every geotagged item and is available, for example, through the Wikibase SPARQL query service. But then I would need to perform secondary requests to Wikidata, which I would like to avoid by all means. I have tried every way around this:
To call the Wikimedia API (GeoData extension) from within a Wikibase SPARQL request - but it doesn't seem to work.
To retrieve Wikidata items around certain coordinates with a Wikibase SPARQL request - but then I can't get page view information from Wikipedia.
To produce a list of pages around a geolocation with "generator=geosearch" and pass it to several props and pageprops of the Wikimedia API, calling for the related Wikidata item (a sketch of this call is below). But then I only get the IDs of Wikidata properties, while I need human-readable labels.
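For reference, a minimal sketch of that third attempt in Python (the coordinates are an example point near the Sagrada Familia; this shows the parameter set I mean, not a working solution):

import requests

params = {
    "action": "query",
    "format": "json",
    "generator": "geosearch",
    "ggscoord": "41.4036|2.1744",  # lat|lon near Sagrada Familia
    "ggsradius": 1000,             # metres
    "ggslimit": 10,
    "prop": "pageimages|pageviews|pageprops",
    "ppprop": "wikibase_item",
}
r = requests.get("https://en.wikipedia.org/w/api.php", params=params)
for page in r.json()["query"]["pages"].values():
    # pageprops only yields the Q-ID, not a readable country/city label
    print(page["title"], page.get("pageprops", {}).get("wikibase_item"))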
I'd like to extract all tweets in the Arabic language in all countries.
I modified the code in this tutorial.
This is my search query:
api.search(q="*", count=tweetsPerQry, lang='ar', tweet_mode='extended')
I expect to find a very large number of tweets, but I only collected about 7,000.
I checked the content of some of them and noticed that they were posted in my country even though I did not specify the location/country (can anyone explain why this happens?).
I tried to find the reason for the limited number of tweets, so I modified the query by replacing the lang parameter with geocode to find tweets in a city. That fetched more than 65,000 Arabic tweets. After that, I used the lang parameter together with the geocode and again found a very limited number of tweets.
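The three variants I tried look roughly like this (the coordinates and radius are placeholders for the city I searched):

api.search(q="*", lang='ar', tweet_mode='extended')                                   # few results
api.search(q="*", geocode="24.7136,46.6753,50km", tweet_mode='extended')              # 65,000+ tweets
api.search(q="*", lang='ar', geocode="24.7136,46.6753,50km", tweet_mode='extended')   # few results again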
Can anyone help me understand why I'm not able to get a large number of tweets when I use the lang parameter?
The free Twitter APIs are good for small projects, but keep in mind that they don't return all tweets. Twitter has paid APIs that are much more powerful, though what you are trying to achieve should be possible. I ran the query attached below and it seemed to work; I was able to find a considerable number of tweets. This method also seemed to work for #ebt_dev; I think it was just that the structure of your request was laid out like the stream listener version rather than the cursor search.
# Search query: change the X of .items(X) to the number of tweets you are looking for
for tweet in tweepy.Cursor(api.search, q='*', tweet_mode='extended', lang='de').items(9999999):
    # the tweet's text; encoding gets rid of characters that may not be displayable
    tweettext = str(tweet.full_text.lower().encode('ascii', errors='ignore'))
    # the tweet's id
    tweetid = tweet.id
    # printing the text and id of the tweet
    print('\ntweet text: ' + str(tweettext))
    print('tweet id: ' + str(tweetid))
I am trying to connect to the streaming API of Twitter and retrieve tweets using specific keywords. I am using the phirehose library for this. The Twitter documentation says that "commas as logical ORs, while spaces are equivalent to logical ANDs (e.g. 'the twitter' is the AND twitter, and 'the,twitter' is the OR twitter)".
But I want to search for keywords with the AND operator even if there are other words in between. Meaning, if we search for tweets having Keyword1 AND Keyword2, tweets which contain only one of the keywords should not be retrieved.
Using the setTrack function of the phirehose library,
setTrack(array('the,twitter'));
retrieves tweets containing either "the" or "twitter", while
setTrack(array('the twitter'));
retrieves tweets with the phrase "the twitter" and does not retrieve tweets like "the busy twitter", for example.
Please help.
140dev by Adam Green gives a solution for this by adding a type enum('words','phrase') NOT NULL DEFAULT 'words' column to the keywords table.
Please see http://140dev.com/twitter-api-programming-blog/streaming-api-enhancements-part-2-keyword-collection-database-changes/ and
http://140dev.com/twitter-api-programming-blog/streaming-api-enhancements-part-3-collecting-tweets-based-on-table-of-keywords/
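The gist of the 'words' matching type, sketched as hypothetical Python rather than the actual 140dev PHP/MySQL code: every keyword must appear in the tweet, in any order, with any other words in between.

def matches_all_words(text, keywords):
    # True if every keyword appears as a word, regardless of order
    words = text.lower().split()
    return all(k.lower() in words for k in keywords)

matches_all_words("the very busy twitter feed", ["the", "twitter"])  # True
matches_all_words("the busy bird", ["the", "twitter"])               # False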
Is there any way to get updated lists of the top 100 YouTube videos by genre and/or by country?
Any kind of resource, like JSON or XML files, would work.
You'll want to use the chart parameter of the videos.list endpoint. Set chart to mostPopular, and then include a regionCode parameter and a videoCategoryId parameter to narrow results down further. For example,
GET https://www.googleapis.com/youtube/v3/videos?part=snippet&chart=mostPopular&regionCode=UA&key={YOUR_API_KEY}
will retrieve the 5 most popular videos in Ukraine (5 being the default for maxResults).
GET https://www.googleapis.com/youtube/v3/videos?part=snippet&chart=mostPopular&maxResults=25&regionCode=DE&videoCategoryId=1&key={YOUR_API_KEY}
will retrieve the 25 most popular videos in Germany that relate to Film/Animation. And so on.
Note that if you don't include a videoCategoryId parameter, it will return results from all categories. If you don't include a regionCode, it returns the most popular videos across all regions. You can only set videoCategoryId to a value that's valid in the region you're searching in (you can use the videoCategories.list endpoint to find valid categories for regions, languages, etc.).
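If you'd rather call it from code, here's a minimal sketch in Python with the requests library (YOUR_API_KEY is a placeholder):

import requests

params = {
    "part": "snippet",
    "chart": "mostPopular",
    "maxResults": 25,
    "regionCode": "DE",
    "videoCategoryId": "1",  # Film & Animation
    "key": "YOUR_API_KEY",
}
resp = requests.get("https://www.googleapis.com/youtube/v3/videos", params=params)
for item in resp.json().get("items", []):
    print(item["snippet"]["title"])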