How to make a request to get all information of a tenant like microsoft teams does? - microsoft-graph-api

I was reading the Microsoft Graph API documentation on batching queries and did not find what I need.
Basically, I need to combine two or more requests where one depends on a value returned by another. I know there is a "dependsOn" feature to make one request wait for another, but that is not what I am looking for.
Request one: GET '/me/joinedTeams';
Request two: GET 'teams/{groupId}/channels';
"Request one" returns an array of groups, and each of those array values has an id property. Can I batch these two requests, using the value from the first one to make the second?
I am searching for a way to do a GET that returns all the data of a tenant the way the Microsoft Teams application does: all teams, all chats, etc. Batching requests is the closest we can get, I think.
Or is there another way to generate the token for the https://chatsvcagg.teams.microsoft.com/api/v1/teams/users/me URL like Microsoft does?

@Gaspar, multiple API calls can be batched using JSON batching, but batching cannot handle interdependent calls.
If one request depends on the result of another, you have to make separate calls.
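In practice that means two round trips: first fetch /me/joinedTeams on its own, then batch the per-team channel queries together. A minimal sketch of building the $batch body (the team ids here are placeholders, and Graph caps a batch at 20 requests):

```python
import json

GRAPH_BATCH_URL = "https://graph.microsoft.com/v1.0/$batch"  # POST the body here

def build_batch(requests):
    """Build a JSON batch body for Microsoft Graph (max 20 requests per batch)."""
    return {
        "requests": [
            {"id": str(i + 1), "method": method, "url": url}
            for i, (method, url) in enumerate(requests)
        ]
    }

def channel_requests(team_ids):
    """Second round trip: once /me/joinedTeams has returned the team ids,
    batch the dependent per-team channel queries together."""
    return build_batch([("GET", f"/teams/{tid}/channels") for tid in team_ids])

# First call: GET /me/joinedTeams (separate, unbatched request).
# Then batch the dependent channel lookups:
body = channel_requests(["team-a-id", "team-b-id"])
print(json.dumps(body, indent=2))
```

Each response in the batch reply carries the matching "id", so the results can be correlated back to their teams.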

Related

How to retrieve all threads which have replies since a given timestamp?

I am ideally looking for an API that returns all the messages posted (including replies) since a given timestamp.
conversations.history is supposedly the one I should be using, but it does not return replies; it only returns the parent message, and only if the parent's timestamp satisfies the "oldest" param I supply. That is, if the oldest value in the query is later than the parent's ts but earlier than the replies', neither the parent nor the replies are returned.
Hence, I am trying to find if there is any other API that returns the threads based on "oldest" timestamp. i.e. all the threads which have replies since a given timestamp.
Note: I looked at conversations.replies, it is only useful if you know which thread's replies you are fetching.
Currently there is no API that does what you want.
The best workaround is to fetch all thread data manually, hold it in memory, and then apply the filter.
Did you find an alternative solution to this question? I have the same use case, and when I contacted Slack support I received the same response: we need to use the combination of conversations.history and conversations.replies. That means an intensive and continuously growing number of calls if we have to call conversations.replies for every threaded message just to filter out the timestamps that fit the date range; in the long run it would be catastrophic.
Ideally Slack needs to update the conversations.replies API to support getting all replies between the oldest and latest parameters, just as history does.
Another alternative I am considering is to change the implementation to use the Events API instead of the Web Client API, with queueing to store all incoming messages. That ensures every message is captured and stored, and the required filters can be applied afterwards.
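The history-plus-replies workaround can be sketched as a small filter. Here fetch_history and fetch_replies are stand-ins for your actual calls to conversations.history and conversations.replies; note that history must be fetched over a wider window than oldest, since a parent older than oldest can still have new replies:

```python
def threads_with_replies_since(fetch_history, fetch_replies, oldest):
    """Return thread_ts values for threads that have at least one reply
    with ts >= oldest.

    fetch_history()          -> iterable of parent messages (dicts with 'ts'
                                and, for threaded parents, 'reply_count')
    fetch_replies(thread_ts) -> list of messages in that thread, parent included
    """
    matching = []
    for msg in fetch_history():
        if not msg.get("reply_count"):
            continue  # no thread under this message
        thread_ts = msg["ts"]
        replies = fetch_replies(thread_ts)
        # Keep the thread if any reply (parent excluded) is recent enough.
        if any(float(r["ts"]) >= oldest for r in replies if r["ts"] != thread_ts):
            matching.append(thread_ts)
    return matching
```

As the answer above warns, this still costs one conversations.replies call per threaded message, which is why the Events API route may scale better.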

Trying to get analytics on Microsoft teams calls

I am trying to put together analytics on Microsoft teams calls. I would like to get hold times, number of transfers, call time, etc. I came across this call https://graph.microsoft.com/beta/communications/callRecords and it gives me a list of calls with call times, but I can't find a way to get hold times, what line it came in on, etc. Greatly appreciate any pointers.
First of all, the https://graph.microsoft.com/beta/communications/callRecords endpoint only allows you to query a single call record by its ID; it does not support querying a list of call records for the whole tenant or for a specific user.
Currently, the only way to find the ID needed to query a call record is to set up a webhook to receive change notifications. Refer to the documentation on change notifications for more info.
To directly address your question: I am not one hundred percent sure, but I believe the information you're looking for can be found in the list of sessions and segments inside a call record.
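Assuming the record is fetched expanded, e.g. GET /communications/callRecords/{id}?$expand=sessions($expand=segments), walking the sessions and segments might look like the sketch below. The per-segment duration arithmetic is my own assumption about how you would derive timing metrics; there is no documented "hold time" field:

```python
from datetime import datetime

def segment_durations(call_record):
    """Walk sessions -> segments in an expanded callRecord and return each
    segment's duration in seconds (a rough building block for call analytics)."""
    durations = []
    for session in call_record.get("sessions", []):
        for seg in session.get("segments", []):
            # startDateTime/endDateTime are ISO 8601 strings in the Graph payload;
            # fromisoformat handles "+00:00" offsets (pre-3.11 Python rejects "Z").
            start = datetime.fromisoformat(seg["startDateTime"])
            end = datetime.fromisoformat(seg["endDateTime"])
            durations.append((end - start).total_seconds())
    return durations
```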

Get all user (+ direct manager, photo meta data and properties like 'AboutMe')

Until now we used a SharePoint on-premises custom web service which delivered all users (approx. 15,000) including properties like aboutMe, skills, etc., plus the direct manager. That job took approx. 15 minutes.
All the data was stored in a Lucene search index.
Now we have to switch to O365.
I am able to get all the desired information from Microsoft Graph but it would take way too long (3 - 5 hours):
Fetch all users via /v1.0/users (with paging)
Iterate through the collection and
get the manager for a given user via /v1.0/users/[user-id]/manager
get properties like aboutMe and skills for a given user via /v1.0/users/[user-id]?$select=aboutMe,skills
Is there any efficient way to do that task?
Ideally, you should just call Microsoft Graph for the data you want on-demand rather than attempting to sync it to your own database.
Assuming you can't do that, you can reduce the time this takes by using the /delta endpoint (see "Get incremental changes for users"). When you use a delta token, you only get back resources that have changed (adds, deletes, edits) since your previous request. So your first pass might take a few hours, but subsequent passes should take seconds.
You can control which properties you're "tracking changes" against using the $select query parameter. For example, if you only care about changes to the displayName then using /v1.0/users/delta?$select=displayName will ensure you only receive changes to that property. From the documentation:
If a $select query parameter is used, the parameter indicates that the client prefers to only track changes on the properties or relationships specified in the $select statement. If a change occurs to a property that is not selected, the resource for which that property changed does not appear in the delta response after a subsequent request.
Also, consider batching requests to improve your processes' overall performance. Batching allows you to send multiple queries to Microsoft Graph in a single request and get the complete results back in a single response.
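A delta sync loop might look like this sketch, where fetch_page stands in for an authenticated GET: the loop follows @odata.nextLink pages until Graph hands back an @odata.deltaLink, which you persist and use as the starting URL of the next incremental run.

```python
def sync_users(fetch_page, first_url):
    """Drain a Microsoft Graph delta query.

    fetch_page(url) -> parsed JSON page (dict with 'value' and either
                       '@odata.nextLink' or '@odata.deltaLink')
    Returns (changed_users, delta_link); persist delta_link for the next run.
    """
    users, url = [], first_url
    while True:
        page = fetch_page(url)
        users.extend(page.get("value", []))
        if "@odata.nextLink" in page:
            url = page["@odata.nextLink"]  # more pages in this round
        else:
            return users, page.get("@odata.deltaLink")

# First run:  sync_users(get, "/v1.0/users/delta?$select=displayName,aboutMe,skills")
# Later runs: sync_users(get, saved_delta_link)  # only changed users come back
```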

Surveymonkey: Get all responses from a single day on a single transaction

Is there a way to get ALL the responses for a single day in one transaction for a specific survey? From the API docs I know there is the /surveys/{id}/responses/bulk option, and I can even send the start_created_at variable.
But I think the API response has a maximum number of records it can return; in that case, what would the solution be? Paging through the results?
I'm using the .NET API wrapper found at this site, but I can build my own wrapper if necessary.
Reference link to API doc: /Surveys/SURVEY_ID/responses/bulk
Yes, you're right: the /surveys/{id}/responses/bulk endpoint is what you're looking for, and you can use start_created_at and end_created_at to filter the data to a date range.
The SurveyMonkey API doesn't allow a full dump of all your data; it will always be paginated. By default it paginates 50 records at a time, but you can change that with the per_page GET parameter.
The maximum per_page varies by endpoint; for bulk responses it is 100. So you'll have to fetch 100 at a time, looping through the pages to get all your data.
One alternative is to use webhooks and set up a subscriber, so you get new responses in real time and can fetch them one by one. That way you keep your data updated on your side as responses come in, rather than running a script or endpoint to bulk-dump everything. But this depends on your use case: if you're building something like an export feature, you'll have to go the paginated route.
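Paging through the bulk endpoint might look like this sketch; fetch_page is a stand-in for an authenticated GET against the SurveyMonkey v3 API, and the links.next URL in each page drives the loop:

```python
def bulk_responses(fetch_page, survey_id, start_created_at, per_page=100):
    """Yield every response from /surveys/{id}/responses/bulk, 100 per page.

    fetch_page(url) -> parsed JSON page (dict with 'data' and 'links')
    """
    url = (f"/v3/surveys/{survey_id}/responses/bulk"
           f"?per_page={per_page}&start_created_at={start_created_at}")
    while url:
        page = fetch_page(url)
        yield from page.get("data", [])
        url = page.get("links", {}).get("next")  # None on the last page
```

For "a single day", pass matching start_created_at and end_created_at values in the initial URL; the loop itself is unchanged.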

If I call Twitter API to get all of my followers, how many calls to the API is that?

If I want to download a list of all of my followers by calling the twitter API, how many calls is it? Is it one call or is it the number of followers I have?
Thanks!
Sriram
If you just need the IDs of your followers, you can specify:
http://api.twitter.com/1/followers/ids.json?screen_name=yourScreenName&cursor=-1
The documentation for this call is here. This call will return up to 5,000 follower IDs per call, and you'll have to keep track of the cursor value on each call. If you have less than 5,000 followers, you can omit the cursor parameter.
If, however, you need to get the full details for all your followers, you will need to make some additional API calls.
I recommend using statuses/followers to fetch the follower profiles since you can request up to 100 profiles per API call.
When using statuses/followers, you just specify which user's followers you wish to fetch. The results are returned in the order that the followers followed the specified user. This method does not require authentication; however, it does use a cursor, so you'll need to manage the cursor ID for each call. Here's an example:
http://api.twitter.com/1/statuses/followers.json?screen_name=yourScreenName&cursor=-1
Alternatively, you can use users/lookup to fetch the follower profiles by specifying a comma-separated list of user IDs. You must authenticate in order to make this request, but you can fetch any user profiles you want, not just those following the specified user. An example call would be:
http://api.twitter.com/1/users/lookup.json?user_id=123123,5235235,456243,4534563
So, if you had 2,000 followers, you would use just one call to obtain all of your follower IDs via followers/ids, if that was all you needed. If you needed the full profiles, you would burn 20 calls using statuses/followers, and you would use 21 calls when alternatively using users/lookup due to the additional call to followers/ids necessary to fetch the IDs.
Note that for all Twitter API calls, I recommend using JSON since it is a much more lightweight document format than XML. You will typically transfer only about 1/3 to 1/2 as much data over the wire, and I find that (in my experience) Twitter times-out less often when serving JSON.
http://dev.twitter.com/doc/get/followers/ids
Reading this, it looks like it should be only one call, since you're just pulling back an XML or JSON page. Unless you have more than 5,000 followers, in which case you would have to make a call for each page of the paginated values.
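The cursoring loop for followers/ids can be sketched like this; fetch_page stands in for the actual HTTP call, and a next_cursor of 0 signals the last page:

```python
def all_follower_ids(fetch_page, screen_name):
    """Collect all follower IDs via followers/ids cursoring.

    fetch_page(screen_name, cursor) -> parsed JSON page, a dict with
    'ids' (up to 5,000 per page) and 'next_cursor' (0 on the last page).
    Total API calls used = number of pages = ceil(followers / 5000).
    """
    ids, cursor = [], -1  # -1 requests the first page
    while cursor != 0:
        page = fetch_page(screen_name, cursor)
        ids.extend(page["ids"])
        cursor = page["next_cursor"]
    return ids
```

So 2,000 followers is one page (one call), while 12,000 followers would take three.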
