According to the documentation for the IBM Insights for Twitter service:
Use IBM® Insights for Twitter to incorporate Twitter content from the Twitter Decahose into your IBM Bluemix™ applications. The content store is refreshed and indexed in real-time, making searches dynamic and fast. The service enriches Tweets with sentiment and other insights for multiple languages, based on deep natural language processing algorithms from IBM Social Media Analytics.
But whatever queries we run, the data returned is always 2-3 days old; we are not getting the latest real-time data. How do we get real-time data with the Insights for Twitter service?
We are building an application that manages video campaigns on DV 360. It includes uploading and changing a lot of YouTube videos on a daily basis.
We found that the YouTube Data API allows us to upload about 5 videos daily before we exhaust the API quota. We requested an extension via the public form, but there is no public information about the SLA for that form or about how to get additional developer support with the API, which we will need, as our use case is apparently different from that of the typical API user.
Has anybody gone through this process successfully and/or found a way to get Dev Support from Google for the YouTube Data API?
Thanks!
Your questions are indeed pertinent.
I can state -- based on my experience monitoring the tags youtube-data-api and youtube-api for more than three years -- that there's no official info related to SLAs and Dev Support here on SO.
The YouTube Data API is officially a free-of-charge API. But that does not exclude the possibility of Google offering this API under different conditions to clients willing to pay for the services provided to them.
I'd recommend addressing your issues directly to Google, either through its own issue tracker site or through its own support forum.
Basically, I want to get analytics reports through the YouTube API using Python. After hours of searching for how to make this happen, my understanding is that YouTube only supports report building through its graphical interface, which is really limited.
Please advise me: is there any way to get daily/weekly/monthly reports using Python?
FYI, at the moment I am using YouTube's service to automatically load the reports into my database, which is BigTable.
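In case it helps, the YouTube Analytics API can be queried directly from Python rather than only through the web UI. Here is a hedged sketch, assuming google-api-python-client and google-auth-oauthlib are installed and a client_secret.json has been downloaded from the Google API Console; the date range and metrics are placeholders.

```python
# Sketch: pulling a daily report from the YouTube Analytics API (v2).
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/yt-analytics.readonly"]

# Run the OAuth consent flow locally and build the API client.
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
analytics = build("youtubeAnalytics", "v2", credentials=flow.run_local_server())

# Daily views and watch time for the authenticated channel (placeholder dates).
response = analytics.reports().query(
    ids="channel==MINE",
    startDate="2020-01-01",
    endDate="2020-01-31",
    metrics="views,estimatedMinutesWatched",
    dimensions="day",
    sort="day",
).execute()

for row in response.get("rows", []):
    print(row)
```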
I want to develop some things with the Twitter streaming API and twitter4j at university. I have just read about the shutdown of the share-count API (https://blog.twitter.com/2015/hard-decisions-for-a-sustainable-platform). Will this affect the Twitter streaming API and how it works in any way? I need this service for at least 6 months.
The share-count API and the Streaming API do not cross paths; in fact, you can obtain the share count from the Streaming API data, as suggested in this post.
Since they are discontinuing only that service, it will have no effect on the data you're able to obtain from the Streaming API, so it won't affect the progress of your project.
As far as GNIP goes, it's overkill and should not have been suggested at all. For research use, especially during the initial stages and possibly later phases, the Streaming API will provide you with an excellent amount of data. You can even request a limit increase through Twitter's Sales Department, but it's up to them to make the final decision. They can be contacted at data-sales@twitter.com.
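To make the share-count point concrete, here is a rough sketch of counting tweets that mention a given URL via the Streaming API. It is shown with Python's tweepy (3.x interface) rather than twitter4j, and the credentials and tracked URL are placeholders; twitter4j exposes the same statuses/filter endpoint.

```python
# Rough sketch (tweepy 3.x): estimate a share count by counting streamed
# tweets that mention a tracked URL. Credentials and URL are placeholders.
import tweepy

TRACK_URL = "example.com/my-article"  # placeholder

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

class ShareCounter(tweepy.StreamListener):
    def __init__(self):
        super().__init__()
        self.count = 0

    def on_status(self, status):
        self.count += 1  # each matching tweet is one more "share"
        print(self.count, "shares so far")

stream = tweepy.Stream(auth=auth, listener=ShareCounter())
stream.filter(track=[TRACK_URL])  # blocks and runs until interrupted
```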
Share count and streaming are totally separate APIs.
If you need guaranteed access, I suggest paying for Twitter's GNIP service - https://www.gnip.com/
What can the Tin Can API do other than store the state of the agent, and how can we retrieve the publicly stored statements from the Tin Can API?
Thanks in advance
You can do a lot with the Tin Can API (Experience API). The point of the xAPI is to store user experiences, anything from "I completed a course" to "I started watching a video". I've seen or worked on things ranging from using the xAPI to send SCORM tracking to an LRS, to supporting mobile, to tracking sensor data from field exercises, to storing information collected in games and simulations. And the Experience API gives you the ability, as you said, to get data back out in a standard way, to support reporting and evaluation of the data.
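To make that concrete, here is a minimal sketch of what one of those experiences looks like as an xAPI statement and how it could be sent to an LRS. It is illustrated in Python with the requests library; the LRS URL and the client credentials are placeholders, and the verb ID comes from ADL's public verb registry.

```python
# Minimal xAPI statement: "<actor> completed <activity>".
import requests

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {"id": "http://example.com/courses/intro-101"},  # placeholder activity
}

resp = requests.post(
    "https://my-lrs.example.com/xapi/statements",  # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},  # required by the spec
    auth=("client_key", "client_secret"),           # placeholder credentials
)
print(resp.status_code, resp.json())  # a successful POST returns the statement ID(s)
```

The same actor/verb/object shape covers everything from course completions to "started watching a video", which is what makes the reporting side standard.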
There are groups working with the Experience API to do interesting things. https://groups.google.com/a/adlnet.gov/forum/#!forum/xapi-design
There is also a spec working group forum where you can get more resources and answers: https://groups.google.com/a/adlnet.gov/forum/#!forum/xapi-spec
There are also resources and articles talking about what you can do with the Experience API. http://www.adlnet.gov/tla/experience-api/
and http://en.wikipedia.org/wiki/Tin_Can_API
There are some open source projects on ADL's GitHub page that also show how you can use the Experience API. https://github.com/adlnet
For sending and retrieving info from an LRS in web browsers, there's a JavaScript library: https://github.com/adlnet/xAPIWrapper . It's been built and minified; you can just include the xapiwrapper.min.js in your page and use the readme examples to get started.
For reporting and querying data you can look at the new project: https://github.com/adlnet/xAPI-Dashboard
There's a starting Java library to make talking to an LRS easier in Java, which could be used for regular Java apps or for Android apps: https://github.com/adlnet/jxapi
They're also starting a jQuery Mobile plugin: https://github.com/adlnet/xapi-jqm
And there's even an example of using the Experience API with MedBiquitous and Common Core competencies to identify a learner's progress toward becoming competent in some aspect: https://github.com/adlnet/xci
As for your question about getting statements from an LRS, you would just do a GET request to the statements endpoint. The spec currently says that requests must include the Experience API version header: https://github.com/adlnet/xAPI-Spec/blob/master/xAPI.md#62-api-versioning . And you will probably need to authenticate as a client with the LRS. This is generally done by registering on the LRS and getting some sort of credentials. This will vary based on the LRS you use, but they all have instructions on how to obtain and send the credentials. https://github.com/adlnet/xAPI-Spec/blob/master/xAPI.md#64-security
ADL's hosted example LRS opened up the GET statements endpoint so that people new to the Experience API could hit it and see statements without needing to figure out the request rules: https://lrs.adlnet.gov/xapi/statements
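As a rough illustration of that GET (in Python with requests; the client_key/client_secret pair is a placeholder for whatever credentials your LRS issues):

```python
# Sketch: retrieving statements from an LRS. The version header is required
# by the spec; most LRSs also require the Basic auth credentials obtained
# when registering a client (placeholders below).
import requests

resp = requests.get(
    "https://lrs.adlnet.gov/xapi/statements",
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("client_key", "client_secret"),  # may be unnecessary on ADL's open endpoint
    params={"limit": 10},
)
resp.raise_for_status()
result = resp.json()
for stmt in result["statements"]:
    print(stmt["actor"], stmt["verb"]["id"], stmt["object"])
# result.get("more") holds the URL of the next page of results, if any
```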
I want to develop a web application that uses the Twitter API. Before going any further, there are some questions that require answers:
1. Should I store the list of followers/following on my server, or should I query the API each time?
2. Same as 1, but for tweets instead of people.
3. If I store messages in my application, should search be performed on the local database or through the API?
Probably unimportant details: ASP.NET (MVC?) and MSSQL will be used.
I would use the API, and if you find the app is pulling data slowly or you're running into limits, cache some of the results in the session (for example, the followers list could be cached and refreshed if it's more than 10 minutes old). You could also put the cache in MSSQL if you need even greater persistence.
System.Web.Caching.Cache is useful for that...
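The refresh-if-stale idea is the same in any stack; here is a rough sketch of it in Python, where fetch_followers is a stand-in for whatever Twitter client call you use. It maps directly onto an absolute expiration in System.Web.Caching.Cache.

```python
# Sketch of time-based caching: reuse the followers list until it is more
# than 10 minutes old, then refresh it from the API.
import time

CACHE_TTL_SECONDS = 10 * 60
_cache = {}  # user_id -> (fetched_at, followers)

def get_followers(user_id, fetch_followers):
    entry = _cache.get(user_id)
    if entry is not None and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]  # still fresh: no API call needed
    followers = fetch_followers(user_id)  # hit the Twitter API
    _cache[user_id] = (time.time(), followers)
    return followers
```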
The Twitter search API has a lot of options and can search through wider time ranges, so I would use that.
TweetSharp is an easy-to-use Twitter API library for .NET that simplifies a lot of the operations:
http://tweetsharp.com/
Roughly, this can help you make a decision:
Does your application need to keep working even when the API server is down? Do you have an API call count limit?
If you answer "Yes" to either of these questions, cache that information.