The documentation says the jti identifies the event and "is unique to the stream". That means it could be repeated across multiple streams. What differentiates one stream from another? And how can I make sure to only compare jti values that pertain to the same stream when de-duping events?
Google's Cross Account Protection only supports a single stream per GCP project number today. https://developers.google.com/identity/protocols/risc#config_stream.
So you can only set up multiple streams through multiple projects that represent different apps you build.
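In practice that means a received event can be de-duplicated on the pair (GCP project number, jti). A minimal sketch, assuming you record the source project number alongside each security event token you receive:

// De-duplicate events per stream: one stream per GCP project number,
// so the pair (projectNumber, jti) is unique across everything received.
const seen = new Set();

function isDuplicate(projectNumber, jti) {
  const key = `${projectNumber}:${jti}`;
  if (seen.has(key)) return true;
  seen.add(key);
  return false;
}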
Our group is working on a sentiment analysis research project. We are trying to use the Twitter API to collect tweets. Our target dataset involves a lot of query terms and filters. However, since each of us has a developer account, we were wondering if we could pool API access tokens to accelerate the data collection. For example, we would make an app that reads a configuration file containing a list of our access tokens, which the app rotates through when searching for tweets. This app would run on our local computers. Since the app uses our individual access tokens, we believe we are not actually bypassing or changing any Twitter limit, as the record is kept for each access token. Are there any legal or technical problems that may arise from this methodology? Thank you! =D
Here is pseudocode for what we are trying to do:
1. Define a list of search terms such as 'apple', 'banana' and 'oranges' (we have 100 of these search terms; we are okay with the limit of 100 tweets per request).
2. Define a list of frequent emotional adjectives such as 'happy', 'sad', 'crazy', etc. (we have 100 of these, selected using TF-IDF).
3. Take the product of the search terms and emotional adjectives. In total we have 10,000 query terms, and from the rate limit rules we have computed that we would need at least 56 runs of 15-minute windows at 180 requests per window. 56 * 15 = 840 minutes, or 14 hours, to collect this amount of tweets.
4. We were thinking of improving the data collection by pooling access tokens, e.g. by dividing the query terms into subsets and letting each access token work on one subset, trimming the collection time from 14 hours to under 5 hours (see the sketch after this list).
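Roughly, the planning arithmetic looks like this (a sketch only; the term lists and tokens below are placeholders, not our real data):

// Build the query product and estimate collection time.
const searchTerms  = ['apple', 'banana', 'oranges'];     // 100 terms in practice
const adjectives   = ['happy', 'sad', 'crazy'];          // 100 adjectives in practice
const accessTokens = ['TOKEN_A', 'TOKEN_B', 'TOKEN_C'];  // one per group member

// Cartesian product: one query per (term, adjective) pair.
const queries = searchTerms.flatMap(t => adjectives.map(a => `${t} ${a}`));

// Standard search allows 180 requests per 15-minute window per token.
const windows = Math.ceil(queries.length / 180);
console.log(`${queries.length} queries -> ${windows} windows -> ${windows * 15} minutes`);

// Split the queries into one subset per token so they can run in parallel.
const perToken = Math.ceil(queries.length / accessTokens.length);
const subsets = accessTokens.map((token, i) => ({
  token,
  queries: queries.slice(i * perToken, (i + 1) * perToken),
}));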
We were pushing for this simply because we think it is efficient, if it is possible and permitted, and it might help future research as well.
The question is: are we actually breaking any Twitter rules or policies by doing this? By each of the three of us sharing one access token and creating apps named as clones of the research project, we believe we are also giving something up, namely the headroom for one more app that we fully control.
I can't find a specific Twitter rule about this so far. Our concern is that we will publish a paper, along with the source code of the app we program and use, for documentation. Disclaimer: only the app's source code will be published, not the dataset, because of Twitter's explicit rules about datasets.
This is absolutely not allowed under the Twitter Developer Policy and Agreement.
Twitter Developer Policy, section 5a:
Do not do any of the following:
Use a single application API key for multiple use cases or multiple application API keys for the same use case.
Feel free to check with Twitter directly via the developer forums. Stack Overflow is not really the best place for this question, since it is not specifically a coding question.
I've skimmed through the Keywords Performance Report section of the API documentation, and I couldn't work out whether it would be possible to use this report to determine daily keyword costs.
What I want, basically, is to look up a keyword in an API report result and get the cost associated with it. Is such a thing possible? Am I looking in the right place?
Apparently it's not possible for the Display Network: all Display Network costs are reported under a special criterion ID (3000000) that is meant to capture all GDN traffic, so those costs cannot be broken down per keyword.
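For search keywords the report does list cost per criterion, so once you have downloaded the report you can aggregate it yourself. A minimal sketch, assuming the rows have already been parsed into objects with id, criteria, and cost fields (the field names are my assumption; cost values in the report are in micros):

// Sum cost per keyword, skipping the special Display Network row.
function costPerKeyword(rows) {
  const totals = new Map();
  for (const row of rows) {
    if (row.id === '3000000') continue; // GDN aggregate, not a real keyword
    const cost = Number(row.cost) / 1e6; // report costs are in micros
    totals.set(row.criteria, (totals.get(row.criteria) || 0) + cost);
  }
  return totals;
}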
I am aware of the 1M-unit quota for the YouTube API. But is it allowed to create two API keys when they are used for different tasks? Let's say the first key only fetches YouTube video statistics and the second is used for the YT Player?
It is all one API, and you can have as many API keys as you want within a project, but they all draw from the same quota. To spread the quota you need to set up more projects, though I think you are restricted in how many projects you can create.
YouTube has stated that there's a rate limit for their API, and that's totally fine and understandable. However, it appears that even respecting their rate limit and following their best practices may be insufficient. In the YouTube terms of service, section 4H states that "You agree not to use or launch any automated system, including without limitation, "robots," "spiders," or "offline readers," that accesses the Service in a manner that sends more request messages to the YouTube servers in a given period of time than a human can reasonably produce in the same period by using a conventional on-line web browser".
So YouTube has an API to automate certain actions, but you have to limit yourself to an ill-defined notion of some human equivalent. Would following the best practices (in particular, waiting 10 minutes after any "too_many_recent_calls" 403) suffice to obey 4H?
In my particular application I intend to upload tens of thousands of videos to my YouTube channel, and I'm concerned that even obeying the best practices will still result in YouTube terminating my account without explanation.
(For those concerned that tens of thousands of videos is spammy and illegitimate, I assure you that this is not the case. These are not advertising any product, and according to a couple hundred test case uploads, these are videos which people like much more often than dislike and which have high audience retention. For an example of such a channel (not mine), see http://www.youtube.com/user/EmmaSaying)
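For reference, this is roughly the throttling loop I have in mind (a sketch only; uploadVideo and the error shape are placeholders, not the actual client library API):

// Upload sequentially, backing off 10 minutes on "too_many_recent_calls".
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function uploadVideo(video) {
  // Placeholder: the real YouTube upload call would go here.
}

async function uploadAll(videos) {
  for (const video of videos) {
    while (true) {
      try {
        await uploadVideo(video);
        break; // success, move on to the next video
      } catch (err) {
        if (err.status === 403 && err.reason === 'too_many_recent_calls') {
          await sleep(10 * 60 * 1000); // wait 10 minutes, then retry
        } else {
          throw err; // any other error is a real failure
        }
      }
    }
  }
}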
The terms of use you linked (containing section 4H) apply to the YouTube website. The API has a different set of terms, and the quota information is documented separately.
Do Google Analytics virtual pageviews have to be URLs? Are any other string formats valid, and are the allowable formats documented?
As far as I can tell, there is no documented format, and there may not need to be one, since it is just one string parameter. It can be "anything" (for example '/downloads/dynamic-form') that stands in for a unique URL which, for some technical reason, cannot be used directly, but which makes sense to record as a separate pageview in your Google Analytics reports.
Example of a virtual pageview:
_gaq.push(['_trackPageview', '/downloads/dynamic-form']);
There are, however, a lot of use cases where Events would do a better job, since tracking virtual pageviews "will add to your total pageview count" (see the official GA documentation): for example downloads, outbound links, buttons, etc.
Be careful when using Events, however, as they have some limitations as well, and make sure you specify whether an event is an interaction or a non-interaction event (this affects other reports, such as Bounce Rate).
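For example, a download tracked as a non-interaction event (the category, action, and label values here are just illustrative):

// _trackEvent(category, action, opt_label, opt_value, opt_noninteraction)
// Passing true as the last argument marks this as a non-interaction
// event, so it will not affect Bounce Rate.
_gaq.push(['_trackEvent', 'Downloads', 'Click', '/downloads/dynamic-form', undefined, true]);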