For me,
https://api.twitter.com/1.1/search/tweets.json?geocode=9.98%2C76.28%2C5km
returns tweets, but
https://api.twitter.com/1.1/search/tweets.json?geocode=9.98%2C76.28%2C15km
returns none. The only difference is that the radius in the second request is increased from 5km to 15km.
Why is it not working? I tried it on
https://dev.twitter.com/rest/tools/console
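For reference, here is a minimal sketch of the same two queries made outside the console, using Python's requests and requests_oauthlib (the four OAuth credentials are placeholders, not real values):

    # Query the 1.1 search API with a geocode filter at both radii.
    import requests
    from requests_oauthlib import OAuth1

    auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
                  "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    url = "https://api.twitter.com/1.1/search/tweets.json"

    for radius in ("5km", "15km"):
        resp = requests.get(url, auth=auth,
                            params={"geocode": "9.98,76.28," + radius})
        resp.raise_for_status()
        print(radius, len(resp.json().get("statuses", [])))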
Some people (including myself) have been experiencing issues with existing geo-related searches since approximately 2014-11-20 23:00 UTC. Perhaps this relates to your problem.
It looks like the Twitter developers have been aware of this issue for at least the last four days and are working on it. There is no ETA though; the last response from Twitter, published 12 hours ago, says:
...we are aware of the issue and working to resolve it, but it may be a few days. We apologise for the disruption to your applications, and appreciate your patience. We're hopeful that we can get a fix deployed soon, but some things take time
Please see these discussions for more details:
[1] Search API returning (very) sparse geocode results: https://twittercommunity.com/t/search-api-returning-very-sparse-geocode-results/27998
[2] Twitter Advanced Search Not Working: https://twittercommunity.com/t/twitter-advanced-search-not-working/28114
I've been able to implement the functionality and everything works fine with the test data. My concerns are with the limitations and possible problems when tracking more users. My questions are:
How many users are we allowed to track?
I've found that the max value for hub.lease_seconds is 828000 here, and am wondering if this essentially means that I have to renew the subscription every (say) 9 days, since 828,000 seconds is roughly 9.6 days? (See the sketch after this list.)
Any other limitations one should be concerned about?
I couldn't find information about this online so even pointing me in the direction of some docs would be highly appreciated!
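To make the renewal arithmetic concrete, here is a small sketch, assuming a PubSubHubbub-style hub (the hub, topic, and callback URLs are placeholders):

    # 828,000 seconds is the documented maximum lease; renewing every 9 days
    # stays safely inside it.
    import requests

    MAX_LEASE_SECONDS = 828000
    print(MAX_LEASE_SECONDS / 86400.0)  # ~9.58 days

    # Re-subscribing before the lease expires renews it; placeholder URLs below.
    resp = requests.post("https://hub.example.com/subscribe", data={
        "hub.mode": "subscribe",
        "hub.topic": "https://example.com/feeds/user123",
        "hub.callback": "https://myapp.example.com/pubsub/callback",
        "hub.lease_seconds": MAX_LEASE_SECONDS,
    })
    print(resp.status_code)  # hubs typically answer 202 Accepted and verify asynchronously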
If I add more than 100 products under s.products in Adobe Analytics, I am seeing a 414 (Request-URI Too Long) status code.
First, since you got this error, and I see in your screenshot that it is a GET request, it sounds like you may not be using the latest Adobe Analytics AppMeasurement library (AppMeasurement.js) and Experience Cloud ID Service (VisitorAPI.js) - or at least a version that supports POST requests. So the first thing I suggest is to update to the latest libraries.
But second - and perhaps more importantly - as @RobertSim commented - what are you doing that requires pushing 100+ products to an AA hit? I've been doing this for over 10 years with countless clients, both working with them directly and indirectly on help sites such as this, and this is the first time I have ever seen someone try to push so many products at a time. I'm a little impressed.
But nonetheless you are almost certainly going about things the wrong way. Are you trying to do product impression tracking on a category/product listing page? There is no way a visitor is viewing 100+ products at a time. The standard is to do top 5 or top 10 on a category/product listing page.
Are you trying to push metadata about products to AA? You definitely should not be doing it like this. You should probably be using SAINT classification uploads instead.
Provide more details about what you're trying to do here, what the goal of this is, etc., and perhaps a better answer can be given.
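If it is product impression tracking, here is a rough sketch of the "top N" idea (shown in Python purely for illustration - in practice you would build the string in JavaScript before assigning it to s.products):

    # Cap impressions at the top N items instead of sending 100+ products.
    # s.products entries are "category;product" pairs joined by commas;
    # the category slot is left empty here.
    def build_products_string(product_ids, top_n=10):
        return ",".join(";" + pid for pid in product_ids[:top_n])

    ids = ["sku%03d" % i for i in range(1, 120)]  # 119 products on the page
    print(build_products_string(ids))             # only the first 10 are sent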
I have a standard Rails 4.1 application with Grape on top providing an API.
Recently I found that Application#call takes quite a big part of my response time; the following is what I see in New Relic:
Maleskine::Application#call takes the largest share, 67.6 ms and 29.8% of the average response time. I don't think this is right, is it? But I don't know where to look.
I need help. Thank you very much.
I am working with the YouTube API's ability to capture a list of videos. I am curious as to why the response from the API does not match up with what I see on YouTube. I have read the reference material at https://developers.google.com/youtube/2.0/reference and I do not believe this to be a caching issue, as the videos in question are years old.
As an example, consider the following link: http://www.youtube.com/results?search_query=Gramatik
The top 10 results returned:
Just Jammin
So Much For Love
Muy Tranquilo
Orchestrated Incident
Liquified
While I Was Playin' Fair
Solidified
Defying Gravity
Still Night (Gramatik Remix)
So Much For Love (again, different video though)
By contrast, consider the following API query: https://gdata.youtube.com/feeds/api/videos?q=Gramatik&alt=json&prettyprint=true
The top 10 results:
Just Jammin'
So Much For Love
Muy Tranquilo
Liquified
While I Was Playin' Fair
Solidified
Still Night (Gramatik Remix)
So Much For Love (different)
Hit That Jive
Knight of Cydonia (Gramatik Remix)
Why the discrepancy? As far as I can tell, my query should return identical results.
Any advice would be appreciated.
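For what it's worth, the API-side list above can be reproduced with a short script along these lines, assuming the v2 gdata endpoint is still available (in the v2 gdata JSON, titles live under a "$t" key):

    # Fetch the gdata v2 search feed and print the top-10 titles.
    import requests

    resp = requests.get("https://gdata.youtube.com/feeds/api/videos",
                        params={"q": "Gramatik", "alt": "json",
                                "max-results": 10})
    resp.raise_for_status()
    for entry in resp.json()["feed"]["entry"]:
        print(entry["title"]["$t"])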
The services are slightly different. The signals are weighted differently for each service, so you shouldn't expect the top videos to be exactly the same.
I'd like to get a big list (say 1,000 or more) of word phrases that people have searched for on the internet recently (anything from the most recent month, week, or day is OK). Results from Google or any of the bigger search sites would be fine. Is there a way to do this programmatically? Python would be my first choice; shell scripts work too. Thanks!
Bonus points for historical results too.
http://www.google.com/trends
Google is pretty data-friendly;
they even provide RSS feeds:
http://www.google.com/trends/hottrends/atom/hourly
Yes, it's Python-friendly, with an API and easy_install to boot!
http://pypi.python.org/pypi/pyGTrends/0.81
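Here's a minimal sketch for pulling the phrases out of that hourly feed, assuming it is still served at the URL above (feedparser comes from PyPI):

    # Parse the hourly hot-trends Atom feed and print each entry's title.
    import feedparser  # pip install feedparser

    feed = feedparser.parse("http://www.google.com/trends/hottrends/atom/hourly")
    for entry in feed.entries:
        print(entry.title)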
Along with what TelsaBoil posted, Google Insights looks to give historical results too:
http://www.google.com/insights/search/
I think you should check out these:
http://www.google.com/trends/hottrends
http://www.google.com/trends/hottrends/atom/hourly [RSS Hourly Feed]
http://pypi.python.org/pypi/pyGTrends/0.81 [Python Google Trends Information Retrieval]