We are using the Graph endpoint below to get all the channels of a specified team.
GET /v1.0/teams/{id}/channels?$select=id,displayName,description,membershipType
When querying a team with recently added channels, the results become unreliable. For example, I have a team that had 10 channels and I added 3 more. Fetching the channels for that team in a loop, I sometimes get 10 channels, sometimes 13, and sometimes numbers in between. Most queries return the correct count, but I easily get a wrong result within a couple of requests. The team has a mix of private and standard channels; I tried adding both new private and new standard channels, and I see the issue either way.
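For reference, a minimal sketch of the polling loop described above. The token and team id are placeholders, and `inconsistent` is just an illustrative helper for spotting disagreeing counts across repeated polls:

```python
# Minimal repro sketch for the inconsistent channel counts described above.
# TOKEN and TEAM_ID are placeholders for a real app token and team id.
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def fetch_channel_count(team_id, token):
    url = (f"{GRAPH}/teams/{team_id}/channels"
           "?$select=id,displayName,description,membershipType")
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return len(json.load(resp)["value"])

def inconsistent(counts):
    """True when repeated polls of the same team disagree on the count."""
    return len(set(counts)) > 1
```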
We have no caching whatsoever on our side; we call the endpoint directly. We can see from the response header (x-ms-ags-diagnostic) that the server instance is sometimes different. Could it be a caching issue on one of your servers? I still see the issue 24 hours after adding the channels.
We are in the middle of a migration, so this can be serious: we might fail to move information and would not know unless we check each team once it is migrated.
In our company, we want to develop a simple feature flag system that works beside our API Gateway. Since we expect no more than 100 req/s on the system, we decided that third-party solutions would probably be costly (they require time and effort to understand), and it would also be a fun effort to build it on our own. Here are the requirements:
We have the list of users and the latest app version they are using. We want to apply three kinds of policies to their usage of a given application feature:
Everybody can use the feature
Only users with a version higher than a given version can use the feature
Some random percentage of users can use the feature (this is for A/B testing in our future plans)
These policies are dynamic and can change (maybe we want to remove the policy or edit it in the future)
So, the first problem is: how do we store the policies?
I think we can describe each policy with at most three fields: feature, type, and specification. For instance, A 2 1.5 means that feature A is only active for users on version 1.5 or above; A 3 10 means feature A is active for a random 10 percent of users; and A 1 means that feature A should be active for all users.
Although this describes a policy in a meaningful manner, I can't see how to use it in a system. For instance, how would I store these policies in an RDBMS or in Redis?
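As one illustration (not a recommendation), here is a minimal relational layout for the three-field encoding above, using SQLite as a stand-in for any RDBMS; the table and column names are made up:

```python
import sqlite3

# One row per feature policy: type 1 = everyone, 2 = minimum version,
# 3 = random percentage. spec holds the version or percentage, NULL for type 1.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE policies (
        feature TEXT PRIMARY KEY,
        type    INTEGER NOT NULL,
        spec    TEXT
    )
""")
conn.executemany("INSERT INTO policies VALUES (?, ?, ?)",
                 [("A", 2, "1.5"), ("B", 3, "10"), ("C", 1, None)])

def get_policy(feature):
    return conn.execute("SELECT type, spec FROM policies WHERE feature = ?",
                        (feature,)).fetchone()
```

In Redis the same thing could be a hash per feature (e.g. `HSET policy:A type 2 spec 1.5`); editing or removing a policy is then a single-row or single-key update, which keeps the policies dynamic as required.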
The second problem is how to apply these policies in the real world. In our architecture, we want each request to first visit this service, and then (if authorized) it can proceed to the core services. Ideally, we want to call an endpoint of this service and have it return a simple yes/no (boolean) response. For simplicity, let's assume we get a user id and a feature, like 146 A (user 146 wants to use feature A), and we want to determine whether 146 is allowed to use it or not.
I thought of two approaches. The first is real-time processing: as a request comes in, we do a database call to fetch enough data to determine the result. The second approach is to precompute these conditions beforehand; for instance, when a new policy is added, we can apply that policy to all user records and store the results somewhere. That way, when a request comes in, we can simply get the list of all features the user has access to and check whether the requested feature is in the list.
The first approach requires a DB call and some computation per request, and the second one is probably much more complicated to implement and needs to touch every user record whenever a new policy is added.
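For the real-time approach, a sketch of the per-request check under the encoding above; all names are illustrative. Hashing feature and user together makes the type-3 percentage assignment deterministic per user, so A/B groups stay stable across requests without storing anything:

```python
import hashlib

def parse_version(v):
    """'1.5' -> (1, 5), so tuple comparison orders versions correctly."""
    return tuple(int(x) for x in v.split("."))

def is_enabled(feature, policy_type, spec, user_id, user_version):
    if policy_type == 1:   # everybody can use the feature
        return True
    if policy_type == 2:   # minimum version required
        return parse_version(user_version) >= parse_version(spec)
    if policy_type == 3:   # stable random percentage (for A/B testing)
        digest = hashlib.md5(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100   # deterministic bucket 0-99
        return bucket < int(spec)
    raise ValueError(f"unknown policy type {policy_type}")
```

A nice side effect of hashing `feature:user` rather than just the user id is that a user's A/B group for one feature is independent of their group for another.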
These are our two main problems. I tried to simplify things so that this becomes a more generic problem. I would appreciate it if you could share your thoughts on each of them.
I recently began using the YouTube Data v3 API for a program that I'm writing purely for personal use. To give a brief summary of what it does: it checks the live chat from my most recent (usually ongoing) livestream and performs actions based on certain keywords entered in chat (essentially commands for people to use from live chat). In order to do that, however, I have to constantly send requests to get a refreshed live chat. As it is now, it sends requests at 1 second intervals. I recently did a livestream to test out my program and it only took about 25 minutes for me to reach the daily quota limit of 10,000 units/day.
The request is: youtube.liveChatMessages().list(liveChatId=liveChatId, part="snippet")
It seems like every request I make costs 6 units, according to the math. I want to be able to host livestreams at lengths of up to 3 hours, which would require a significant quota increase. I'm aware that there is an option to fill out a form to request additional quota. However, it asks for business information such as a business name, business website, business mailing address, etc. Like I said before, I'm doing this for my own use only. I'm in no way part of a business, and just made my program as a personal project. Does anyone know if there's any way to apply for additional quota as an individual/hobbyist? If not, do you think just putting n/a in those fields would be acceptable? I did find another post where someone else had the exact same problem, but no one was able to give a helpful answer. Any advice would be greatly appreciated.
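For what it's worth, the arithmetic behind those numbers, assuming the observed 6 units per call and the default 10,000-unit daily quota:

```python
DAILY_QUOTA = 10_000   # default daily quota for the YouTube Data API
COST_PER_POLL = 6      # observed cost per liveChatMessages.list call

def minutes_until_exhausted(poll_interval_s):
    calls = DAILY_QUOTA / COST_PER_POLL   # ~1,666 calls before cutoff
    return calls * poll_interval_s / 60
```

At one poll per second the quota lasts about 28 minutes, which matches the roughly 25 minutes observed; covering a 3-hour stream at that rate would need roughly a 6-7x quota increase.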
Unfortunately, and although this is only tangentially related, it seems Google is after the money here. I also tried to do something similar myself (a very basic chat bot just reading the chat messages). Although some other users on the net got somewhat different results, they all have in common that, per the docs, one should poll at the interval of about once a second that comes back as part of the answer to each poll for new messages. Like you, I and a few others got at most about 5 minutes out of polling once a second; some got a few more minutes. I changed the interval by hand in increments of 5 seconds: 5, 10, 15, and so on. I can't remember which value I finally settled on, but even with a rather long polling interval of about once every 10 seconds I was only able to get about 2 1/2 hours' worth, which is still plenty for a simple chat bot that just reads the chat. Replying as well, though, would have at least doubled the usage and hence halved the time.
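A loop along those lines might look like the sketch below. `fetch_page` stands in for the actual liveChatMessages.list call, and pollingIntervalMillis is the server-suggested wait that comes back with each response:

```python
import time

def next_wait(resp, min_interval_s=10.0):
    """Wait at least the server-suggested interval, but stretch it
    (here to 10 s) to make the daily quota last longer."""
    server_wait = resp.get("pollingIntervalMillis", 0) / 1000
    return max(server_wait, min_interval_s)

def poll_chat(fetch_page, handle):
    """fetch_page(page_token) wraps liveChatMessages.list; handle(item)
    is whatever the bot does with each chat message."""
    token = None
    while True:
        resp = fetch_page(token)
        for item in resp.get("items", []):
            handle(item)
        token = resp.get("nextPageToken")
        time.sleep(next_wait(resp))
```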
It's already a pain to get it working as an individual, as just setting up the required OAuth authentication requires one to provide at least basic information like a fixed callback and some legal and policy information. I always ended up having it rejected with the standard reply "Your project seems to be for internal use only." I even got G Suite working (back when it didn't require payment) to set up an "internal" project (only possible if the account belongs to a G Suite organization account), but after I set up the OAuth login I got the error that the private account I wanted to use the bot on was not part of the organization and hence couldn't be used. TL;DR: just a useless waste of time.
Having been at this for several months now, I'd say there's just no way to get it done as a private individual for personal use. Yes, one can just set it up and have the required verification rejected (as it uses the YouTube Data API scopes), but one is still stuck with that 10,000 units/day quota. Building your own powerful tool capable of doing more than just polling once every 10 to 30 seconds with a minimum of interaction doesn't get you any further than a few minutes, maybe one or two hours if you're lucky. If you want more, you have to set up a business and pay for it. Short and simple: Google wants you to pay for that service.
As Mixer has been officially announced to shut down on July 22nd, you have exactly these two options:
Use one of the publicly available services like Streamlabs, Nightbot, etc. They're backed by their respective "businesses" and so don't seem to have those quota limits (although I just found some complaints about Streamlabs from April, about one month before you posted this question, where they admitted to having reached their limits; I don't know if they have solved it yet).
Don't use YouTube for streaming but rather Twitch, as Twitch doesn't have these limits and anybody is free to set up an API token either on the main account or on a second bot account (which is also explicitly explained in their docs). The downside is, of course, the objective sacrifices one has to make: a) viewers only get the streamer's source quality until one reaches at least affiliate; b) capped at a max of 1080p60 with only 6,000 kbit/s; c) only short-term VOD storage.
I myself wanted to use YouTube as my main platform (and currently do, but without my own tooling at the moment) along with my own bot and such, as streaming on YouTube has some advantages over Twitch. But as YouTube wants me to pay for what others (namely Twitch) offer for free (although overall not at quite the same quality), it's an easy decision to make. Mixer looked promising, as it offered quite a few neat features (overall better quality than Twitch, lower latency), but the requirements for partner status were very high (2,000 followers along with another insanely high number to reach), and Mixer itself was just a small platform (I took the trouble to count all the streamers and viewers: only a few hundred streamers with just a few tens of thousands of viewers, so the whole platform had less than some big Twitch channels on their own). And now it's announced to be shut down soon anyway.
I hope this gives you some input into what a small streamer has to consider and suffer through when choosing a platform. After all I've experienced, my conclusion is: either do it like all the others and stream on Twitch, using YouTube as an archive to export to from Twitch (although Twitch still doesn't have an auto-export of the latest VOD, I guess that could be done with some small script), or, if you want to stay on YouTube, use an existing bot like Nightbot or one of the other services like Streamlabs.
If you get any other information on how to convince Google to increase the limit as an individual please let us know.
We have developed an importing solution for one of our clients. It parses and converts the data contained in many OneNote notebooks into the required proprietary data structures, for the client to store and use within another information system.
There is a substantial amount of data across many notebooks, requiring a considerable number of Graph API queries in order to retrieve all of it.
In essence, we built a bulk-importing solution (essentially a batch process) which goes through all OneNote notebooks under a client's account, parses each one's section and page data, and downloads and stores all page content, including linked documents and images. The linked documents and images require the greatest number of Graph API queries.
When performing these imports, the Graph API throttling issue arises. After a certain time, even though we are sending queries at a relatively low rate, we start getting 429 errors.
Regarding data volume, the average section of a client notebook is 50-70 pages. Each page contains links to about 5 documents for download, on average. Thus it takes up to 70 + 350 = 420 requests to retrieve all the page content and files of a single notebook section. And our client has many such sections per notebook; in turn, there are many notebooks.
In total, there are approximately 150 such sections across several notebooks that we need to import for our client. Considering the stats above, this means our import needs to make an estimated 60,000-65,000 Graph API queries in total.
To avoid flooding the Graph API service and keep within the throttling limits, we have experimented a lot and gradually decreased our request rate to just 1 query every 4 seconds, i.e. at most 900 Graph API requests per hour.
This already makes each section import noticeably slow, but it is endurable, even though it means our full import would take up to 72 continuous hours to complete.
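The arithmetic behind those figures, for reference:

```python
PAGES_PER_SECTION = 70     # upper end of the 50-70 page average
FILES_PER_PAGE = 5
SECTIONS = 150
SECONDS_PER_REQUEST = 4    # the throttled rate described above

requests_per_section = PAGES_PER_SECTION * (1 + FILES_PER_PAGE)   # 420
total_requests = requests_per_section * SECTIONS                  # 63,000
hours = total_requests * SECONDS_PER_REQUEST / 3600               # 70.0
```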
However, even with our throttling logic implemented at this rate and proven working, we still get 429 "too many requests" errors from the Graph API after about 1 hr 10 min, roughly 1,100 consecutive queries. As a result, we are unable to proceed with our import on the remaining, unfinished notebook sections. This lets us import only a few sections consecutively, then forces us to wait some random while before we can manually attempt to continue the import.
So this is the problem we seek help with, especially from Microsoft representatives. Can Microsoft provide a way for us to perform this import of 60-65K pages and documents at a reasonably fast query rate, without getting throttled, so we could just get the job done in one continuous batch process for our client? For example, as a separate access point (dedicated service endpoint), perhaps time-constrained, e.g. configured for our use within a certain period, so that within that period we could perform all the necessary imports?
For additional information: we currently load the data using the following Graph API URLs (placeholders for the actual values are given in uppercase letters between curly braces):
Pages under the notebook section:
https://graph.microsoft.com/v1.0/users/{USER}/onenote/sections/{SECTION_ID}/pages?...
Content of a page:
https://graph.microsoft.com/v1.0/users/{USER}/onenote/pages/{PAGE_ID}/content
A file (document or image) eg link from the page content:
https://graph.microsoft.com/v1.0/users/{USER}/onenote/resources/{RESOURCE_ID}/$value
Which call is most likely to cause the throttling?
What can you retrieve before throttling: just page ids (150 calls total), or page ids plus content (10,000 calls)? If the latter, can you store the results (e.g. in a SQL database) so that you don't have to make those calls again?
If you can get page ids plus content, can you then access the resources using preAuthenticated=true? (Maybe this is less likely to be throttled.) I don't actually store images offline, as I usually deal with ink or print.
I find the OneNote API is very sensitive to multiple calls made without waiting for them to complete; more than 12 simultaneous calls via a curl multi technique is problematic. Once you get throttled, if you don't back off immediately you can be throttled for a long, long time. I usually have my scripts bail if I get too many 429s in a row (I have it set to bail for 10 minutes after 10 consecutive 429s).
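That back-off-and-bail pattern might be sketched like this. The thresholds mirror the ones above; honoring a Retry-After header is an assumption about what the service returns, with exponential backoff as the fallback:

```python
import time
import urllib.error
import urllib.request

def backoff_wait(retry_after_header, attempt):
    """Honor Retry-After when the service sends it; otherwise fall back
    to simple exponential backoff (1, 2, 4, ... seconds)."""
    if retry_after_header is not None:
        return int(retry_after_header)
    return 2 ** attempt

def get_with_bail(url, headers, bail_after=10):
    """Fetch a Graph URL, backing off on each 429 and bailing (for the
    caller to sleep 10 minutes) after bail_after consecutive 429s."""
    for attempt in range(bail_after):
        req = urllib.request.Request(url, headers=headers)
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise
            time.sleep(backoff_wait(err.headers.get("Retry-After"), attempt))
    raise RuntimeError("too many consecutive 429s: bail and retry later")
```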
We now have the solution released and working in production. It turns out that adding ?preAuthenticated=true to the page requests indeed returns the page content with resource links (for the contained documents and images) in a different format. Querying those resource links then seems not to count against the API throttling limits, as we've had no 429 errors since.
We even managed to bring the call interval down from 4 seconds to 2 without any problems. So I have marked codeeye's answer as the accepted one.
I am just curious why YouTube shows 301+ views. I think there must be some logic behind that. What could it be?
I have seen exact view counts lower than 300, as well as counts in the thousands and millions.
It can stay stuck for a while. This is a control procedure for preventing the use of bots: any video getting more than 301 views in a short period gets verified in terms of the source of its traffic. However, views are still being counted (logged) in the back end and will appear when YouTube unlocks the view count.
Answering myself: this is no longer the case; it has been resolved.
Whenever a new video was uploaded, it would receive many views, including from bots. So to verify the legitimacy of the views, the YouTube counter used to stop after 300 while the sources were checked.
Official release: https://mobile.twitter.com/YTCreators/status/628958720953819136
I'm seeing issues where adding multiple entries to a playlist in a short amount of time seems to fail regularly without any error responses.
I'm using the json-c format with version 2.1 of the API. If I send POST requests to add 7 video entries to a playlist, then I see between 3 and 5 of them actually being added to the playlist.
I am getting back a 201 Created response from the API for all requests.
Here's what a request looks like:
{"data":{"position":0,"video":{"duration":0,"id":"5gYXlTe0JTk","itemsPerPage":0,"rating":0,"startIndex":0,"totalItems":0}}}
and here's the response:
{"apiVersion":"2.1","data":{"id":"PLL_faWZNDjUU42ieNrViacdvqvG714P4QjvSDgGRg1kc","position":4,"author":"Lance Andersen","video":{"id":"5gYXlTe0JTk","uploaded":"2012-08-16T19:27:19.000Z","updated":"2012-09-28T20:20:39.000Z","uploader":"usanahealthsciences","category":"Education","title":"What other products does USANA offer?","description":"Discover USANA's other high-quality products: the Sens skin and hair care line, USANA Foods, the RESET weight-management program, and Rev3 Energy.","thumbnail":{"sqDefault":"http://i.ytimg.com/vi/5gYXlTe0JTk/default.jpg","hqDefault":"http://i.ytimg.com/vi/5gYXlTe0JTk/hqdefault.jpg"},"player":{"default":"http://www.youtube.com/watch?v=5gYXlTe0JTk&feature=youtube_gdata_player","mobile":"http://m.youtube.com/details?v=5gYXlTe0JTk"},"content":{"5":"http://www.youtube.com/v/5gYXlTe0JTk?version=3&f=playlists&d=Af8Xujyi4mT-Oo3oyndWLP8O88HsQjpE1a8d1GxQnGDm&app=youtube_gdata","1":"rtsp://v6.cache3.c.youtube.com/CkgLENy73wIaPwk5JbQ3lRcG5hMYDSANFEgGUglwbGF5bGlzdHNyIQH_F7o8ouJk_jqN6Mp3Viz_DvPB7EI6RNWvHdRsUJxg5gw=/0/0/0/video.3gp","6":"rtsp://v7.cache7.c.youtube.com/CkgLENy73wIaPwk5JbQ3lRcG5hMYESARFEgGUglwbGF5bGlzdHNyIQH_F7o8ouJk_jqN6Mp3Viz_DvPB7EI6RNWvHdRsUJxg5gw=/0/0/0/video.3gp"},"duration":72,"aspectRatio":"widescreen","rating":5.0,"likeCount":"6","ratingCount":6,"viewCount":1983,"favoriteCount":0,"commentCount":0,"accessControl":{"comment":"allowed","commentVote":"allowed","videoRespond":"moderated","rate":"allowed","embed":"allowed","list":"allowed","autoPlay":"allowed","syndicate":"allowed"}},"canEdit":true}}
The problem doesn't change if I set the position attribute.
If I send them sequentially with a 5 second delay between them, then the results are more reliable, with 6 of the 7 usually making it onto the playlist.
It seems like there is a race condition happening on the API server side.
I'm not sure how to handle this problem, since I see zero errors in the API call responses.
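One workaround I'm considering, since the API reports success even for lost writes, is to verify after writing: insert sequentially, re-list the playlist, and re-add whatever is missing. A generic sketch of that idea (add_entry and list_playlist are hypothetical wrappers around the actual API calls):

```python
import time

def add_all(video_ids, add_entry, list_playlist, delay_s=5.0, max_passes=3):
    """Insert entries one at a time with a delay, then verify against the
    playlist contents and retry whatever is missing, up to max_passes."""
    for _ in range(max_passes):
        present = set(list_playlist())
        missing = [v for v in video_ids if v not in present]
        if not missing:
            return True
        for vid in missing:
            add_entry(vid)
            time.sleep(delay_s)
    # Final check after the last pass of retries.
    return set(list_playlist()) >= set(video_ids)
```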
I have considered doing batch processing, but can't find any documentation on it for the json-c format. I'm not sure if that would make a difference anyway.
Is there a solution to reliably adding playlist entries to a playlist?
This was fixed in an update to the YouTube Data APIs around the 25th of October.