Using a single application ID for retrieving hundreds of mailboxes from Microsoft Graph API

I'm designing a web application that needs to retrieve and organize emails from at least 100 mailboxes from our company's domain. Something like:
abc@company.com
cde@company.com
efg@company.com
...
My web app needs to check each of these mailboxes every couple of seconds to retrieve new emails and index them. However, I don't want to hit the API limit for them.
Reading the official documentation, it seems like I can use a single app ID to retrieve all of this information without hitting the API limit.
If I have 100 mailboxes and check each mailbox every 10 seconds (with a maximum of 4 concurrent threads), is it safe to say that I won't hit any kind of rate limit?
It might be worth mentioning that I'm going to use the delta query (delta link) feature to check for new emails. This will make things faster, and I'm not sure whether it has any effect on the rate limits.

You really need to decrease the frequency of direct hits, and only query a particular mailbox/folder once you receive a change notification for it.
See https://learn.microsoft.com/en-us/graph/api/subscription-post-subscriptions?view=graph-rest-1.0&tabs=http for details on Graph change notification subscriptions.
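As a rough sketch, creating such a subscription looks like the following (Python with plain requests; the access token, webhook URL, and mailbox are placeholders, and you'd create one subscription per mailbox):

```python
import datetime
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "..."          # app-only token acquired via MSAL (placeholder)
MAILBOX = "abc@company.com"   # one of the mailboxes from the question

# Mail subscriptions expire after a few days at most, so the
# expirationDateTime must be renewed on a schedule.
expiry = (datetime.datetime.utcnow()
          + datetime.timedelta(hours=24)).strftime("%Y-%m-%dT%H:%M:%SZ")

payload = {
    "changeType": "created",
    "notificationUrl": "https://example.com/graph/notifications",  # your webhook
    "resource": f"users/{MAILBOX}/mailFolders('inbox')/messages",
    "expirationDateTime": expiry,
    "clientState": "secret-used-to-verify-incoming-notifications",
}

resp = requests.post(f"{GRAPH}/subscriptions",
                     headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                     json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["id"])  # keep this id so you can renew/delete the subscription
```

When a notification arrives for a mailbox, you then run your stored delta query for just that mailbox, so the number of direct Graph calls scales with actual mail volume rather than with a fixed polling interval.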

Related

Options for combining multiple Amazon Lex bots

I work in a large enterprise where multiple teams are developing Lex bots (on separate accounts). Each bot supports a different domain or application. In some cases, it would be nice for a single user interface to ask a question without needing to know which bot to ask. Is there a way to federate bots, or to forward unrecognized intents to 'backup' bots?
I feel like what I really want to do is treat each bot as a skill is treated in Alexa, except I'm in the position (through entitlements) to know which 'skills' would be appropriate for a given user.
The answer here is that you would need to develop a custom application that delivers a user's input to each bot in your company's fleet.
You'd need to look at the NLU confidence score from each bot's response to decide which response is the most accurate to return to the user. It would also be worthwhile keeping some state in your app to remember which bot the user is currently interacting with, defaulting to that bot for successive user inputs. Should you reach a point where the confidence score is low, that might be a signal to test the user's input across the other bots.
What you'll need to be aware of here is that your costs will increase with each additional bot that you add. So, assuming you have 5 area-specific bots, one inbound message from your user could result in 5 Lex calls. As you start moving into significant volumes of interactions, this could prove to be an obstacle.
An alternative would be to use a custom fallback intent to invoke a Lambda function that calls your bot orchestration function. Assuming you're able to find the correct bot to handle the user's query, you'd need to remember that, so successive messages get routed to that bot.
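A rough sketch of that fan-out, in Python with boto3 against Lex V2 (the bot IDs and aliases are placeholders, and a real orchestrator would also carry session state and prefer the bot the user last matched):

```python
import uuid
import boto3

# region/credentials come from the standard AWS config
lex = boto3.client("lexv2-runtime")

# Hypothetical registry of your area-specific bots.
BOTS = [
    {"botId": "BOT1ID", "botAliasId": "ALIAS1", "localeId": "en_US"},
    {"botId": "BOT2ID", "botAliasId": "ALIAS2", "localeId": "en_US"},
]

def best_response(user_text: str, session_id: str):
    """Fan the utterance out to every bot and keep the reply whose top
    interpretation has the highest NLU confidence score."""
    best = None
    for bot in BOTS:
        resp = lex.recognize_text(
            botId=bot["botId"],
            botAliasId=bot["botAliasId"],
            localeId=bot["localeId"],
            sessionId=session_id,
            text=user_text,
        )
        interpretations = resp.get("interpretations", [])
        if not interpretations:
            continue
        # the fallback intent carries no nluConfidence, so it scores 0.0
        score = interpretations[0].get("nluConfidence", {}).get("score", 0.0)
        if best is None or score > best[0]:
            best = (score, bot, resp)
    return best  # (score, bot, response) -- remember `bot` for follow-ups

print(best_response("I want to reset my password", str(uuid.uuid4())))
```

Note that this is exactly where the cost concern above bites: every user turn fans out into len(BOTS) recognize_text calls until you lock onto a single bot.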

How to account for conversation IDs being the same when retrieving Outlook Messages in Microsoft Graph API

By default, the Graph API endpoint
/me/mailFolders/{folderId}/messages
will return 10 messages. I want to group the messages together much like the Outlook Web UI does, so if 3 messages are from the same conversation, it'll look to the user like only 7 messages came back. Is there a way to account for this through the API? Or should I just bump up the number of messages to make it difficult to notice how many actually came back?
Graph considers each message a distinct object (which, after all, they technically are).
From a user-experience perspective, I would consider switching to "at least 10": if you pull page 1 and get only 7 unique conversations, pull page 2 immediately and treat the two requests as a single operation. If you're pulling 10 per page, at most the user would see 19 messages returned from a single pull.
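Concretely, grouping by conversationId while paging could look like this (a sketch in Python with requests; the access token is a placeholder):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "..."  # placeholder

def pull_conversations(folder_id: str, want: int = 10):
    """Keep paging until we have `want` distinct conversations,
    grouping messages by their conversationId."""
    url = (f"{GRAPH}/me/mailFolders/{folder_id}/messages"
           f"?$top=10&$select=id,subject,conversationId,receivedDateTime")
    conversations = {}  # conversationId -> list of messages
    while url and len(conversations) < want:
        resp = requests.get(
            url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        for msg in data["value"]:
            conversations.setdefault(msg["conversationId"], []).append(msg)
        url = data.get("@odata.nextLink")  # follow server-driven paging
    return conversations
```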

Fetch and refresh Twitter user's friends with Rails

So basically I want to allow my users to connect to my website with Twitter in order to fetch their friends (followings), save these relationships to a table and keep this table updated when there is a change on Twitter (new follow, unfollow).
For the fetch part, I handle it with https://github.com/sferik/twitter. But I don't really know how to get started with the "update" part, considering I could have a large user base and Twitter's rate limits. I thought about using a background job and playing within the rate limits, but that doesn't look like a viable or scalable option.
Any ideas to put me on track?
I don't know the Twitter API in depth, but the ideal setup would be a webhook that receives a notification from Twitter every time a user of your app gains or loses a follower, so that every 100 notifications or so you trigger a background job using Active Job; although I doubt Twitter offers something that convenient.
If that's not an option, you could run a background job for now and move it onto a dedicated dyno (or dynos) once your user base grows large enough (to put it in Heroku terms).
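The core of that background job is a set diff between what Twitter reports now and what you stored last time. A sketch (in Python with tweepy for consistency with the other examples here; the same logic ports straight to the sferik/twitter gem, and the credentials are placeholders):

```python
import tweepy

# wait_on_rate_limit makes tweepy sleep through 429s instead of raising,
# which is what a periodic background job wants.
auth = tweepy.OAuth1UserHandler(
    "consumer_key", "consumer_secret",   # placeholders
    "user_token", "user_token_secret",   # per-user OAuth tokens you stored
)
api = tweepy.API(auth, wait_on_rate_limit=True)

def refresh_friends(screen_name: str, stored_ids: set[int]):
    """Fetch the current following list and diff it against what we
    saved last time, returning the rows to insert and to delete."""
    current = set()
    # friends/ids returns up to 5,000 ids per page, so one or two
    # requests cover most accounts.
    for page in tweepy.Cursor(api.get_friend_ids,
                              screen_name=screen_name).pages():
        current.update(page)
    new_follows = current - stored_ids
    unfollows = stored_ids - current
    return new_follows, unfollows
```

Run that per user on a schedule, spacing users out so the whole sweep stays inside the rate-limit window rather than bursting.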

REST API and clients - removing previously seen items - client-side or server-side?

We have a web app that uses Mahout and collaborative filtering to generate product recommendations, based on users assigning ratings to items.
There is an iOS application that communicates with the web app through a REST API and lets users scroll through items and assign them ratings.
The iOS application will pull a list of ranked products from the web app; this is the list that is displayed to the user. As the user scrolls towards the end, we request items from further down the list.
There is also a requirement that the iOS application not show a user products that they've seen before on that specific device.
My question is - how should we be handling this last requirement?
Should each iOS client be maintaining a list of what they've seen before, and simply remove these from the list that it pulls from the server?
Or should the server maintain a state for each client, and remove them from the list before it sends it?
What pros and cons can you see for either approach?
Cheers,
Victor
First off, if the requirement is to not show a user products they've seen before on any device/platform (for example, if they used the app on their iPhone and then on their iPad), then you'd definitely have to do it server-side, as the app cannot know what the user has seen on other devices unless you are storing it on the server.
Assuming it is a device-specific requirement, I would think your goal is to minimize (potentially unreliable or slow) network traffic to optimize the iPhone experience. Syncing back and forth with the server requires extra network traffic, which may fail at times; that leads to a conclusion of client-side storage. The exception would be if your users will see some huge number of products that would chew up disk space, but I assume the amount of data you'd store per user is nominal.
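The client-side approach boils down to keeping a persistent set of seen product IDs and filtering each pulled page against it. A sketch (in Python for consistency with the other examples here; on iOS the same shape would sit on top of local storage, and fetch_page is a hypothetical stand-in for the REST call):

```python
import json
from pathlib import Path

SEEN_FILE = Path("seen_products.json")  # stands in for iOS local storage

def load_seen() -> set[str]:
    return set(json.loads(SEEN_FILE.read_text())) if SEEN_FILE.exists() else set()

def unseen_items(fetch_page, want: int = 20) -> list[dict]:
    """Pull ranked pages from the server and drop locally-seen product
    ids until we have `want` items to display. `fetch_page(n)` is a
    hypothetical stand-in for the REST call returning a list of
    {"id": ..., ...} dicts, empty once the ranking is exhausted."""
    seen = load_seen()
    results: list[dict] = []
    page = 1
    while len(results) < want:
        items = fetch_page(page)
        if not items:
            break  # server ran out of recommendations
        results.extend(i for i in items if i["id"] not in seen)
        page += 1
    results = results[:want]
    # persist only what we are actually going to display
    seen.update(i["id"] for i in results)
    SEEN_FILE.write_text(json.dumps(sorted(seen)))
    return results
```

The trade-off is visible in the loop: the client may have to over-fetch pages to fill one screen, which is the extra traffic the server-side approach would avoid.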

Searching for a song while using multiple APIs

I'm going to attempt to create an open project which compares the most common MP3 download providers.
This will require a user to enter a track/album/artist name, e.g. Deadmau5, which will then pull the relevant prices from the APIs.
I have a few questions that some of you may have encountered before:
Should I have one server-side page that requests all the data, so everything loads simultaneously? If so, how would you deal with timeouts or any other problems that may arise? Or should the page load first, with each price pulled in one by one (via Ajax)? What are your experiences with running comparison checks?
The main feature will be to compare prices, but how can I be sure that the products are the same? I was thinking of matching running time and track numbers, but I would still have to set one source as my primary.
I'm making this a wiki; please add and edit any issues that you can think of.
Thanks for your help. Look out for a future blog!
I would check Amazon first. They will give you a SKU (the barcode on the back of the album; I think Amazon calls it an EAN). If the other providers use this, you can make sure they are looking at the right item.
I would cache all results in a database and expire them after a reasonable time. That way, when you get 100 requests for Britney Spears, you don't have to hammer the other sites and slow down your application.
You should also make sure you are multithreading whatever requests you do server-side. cURL, for instance, lets you pull multiple URLs in parallel and assign a user-defined callback; I'd have the callback send back some data for each URL as its connection completes, so you can update your page as the results come in and parse them on the client side.
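The same pattern in Python (a sketch using a thread pool as the equivalent of curl_multi; the provider endpoints are hypothetical, and the cache here is in-process where a real one would be the database mentioned above):

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

# Hypothetical provider endpoints keyed by name.
PROVIDERS = {
    "amazon": "https://example.com/amazon/price?q={q}",
    "itunes": "https://example.com/itunes/price?q={q}",
}

_cache: dict[str, tuple[float, dict]] = {}  # query -> (fetched_at, prices)
CACHE_TTL = 15 * 60  # expire cached results after 15 minutes

def fetch_one(name, url):
    try:
        resp = requests.get(url, timeout=5)  # hard timeout per provider
        resp.raise_for_status()
        return name, resp.json()
    except requests.RequestException:
        return name, None  # one slow/broken provider shouldn't sink the page

def compare_prices(query: str) -> dict:
    cached = _cache.get(query)
    if cached and time.time() - cached[0] < CACHE_TTL:
        return cached[1]
    prices = {}
    # fan out to all providers at once instead of serially
    with ThreadPoolExecutor(max_workers=len(PROVIDERS)) as pool:
        futures = [pool.submit(fetch_one, n, u.format(q=query))
                   for n, u in PROVIDERS.items()]
        for fut in as_completed(futures):
            name, data = fut.result()
            prices[name] = data
    _cache[query] = (time.time(), prices)
    return prices
```

Because as_completed yields results as each provider responds, you can also stream partial results to the page instead of waiting for the slowest API.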
