I am trying to get only new emails with Microsoft Graph.
I am doing this by checking the date, like:
GET https://graph.microsoft.com/v1.0/me/messages?$filter=receivedDateTime+gt+2016-06-06T08:08:08Z
Is there any way to build a query that gets new messages based on id instead of receivedDateTime? Something like: get messages until you find id=....?
I think the delta query solution is pretty good (as suggested in a different answer). However, for my purposes it has two major drawbacks: 1) it's in preview (beta) right now, which makes it less than ideal for production code, and 2) it doesn't seem to support monitoring all messages, just those in a particular folder.
I actually prefer the solution you're working with. The timestamp in the header of the response can be used to reset the time field in your query: if you filter with "receivedDateTime gt 12:00:00" and the server time in the response is 12:01:00, you can filter with "receivedDateTime gt 12:01:00" next time.
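A minimal sketch of that approach in Java, assuming a valid access token is available; it reads the response's Date header (RFC 1123) and converts it to ISO 8601 for the next $filter. The class and names are illustrative, not an official sample:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class NewMailPoller {
    private static final String BASE =
        "https://graph.microsoft.com/v1.0/me/messages?$filter=receivedDateTime+gt+";

    private final HttpClient client = HttpClient.newHttpClient();
    private String watermark = "2016-06-06T08:08:08Z"; // last known server time

    public String poll(String accessToken) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(BASE + watermark))
            .header("Authorization", "Bearer " + accessToken)
            .GET()
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());

        // Use the server's own clock as the next watermark, so messages that
        // arrive while this request is in flight are not skipped next time.
        response.headers().firstValue("Date")
            .ifPresent(d -> watermark = toIso8601(d));
        return response.body(); // JSON containing only the newly received messages
    }

    private static String toIso8601(String rfc1123) {
        return ZonedDateTime.parse(rfc1123, DateTimeFormatter.RFC_1123_DATE_TIME)
            .toInstant().toString();
    }
}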
The scenario you're looking for is specifically what the new Delta query is designed to support. Deltas allow you to retrieve the changes to a given folder (e.g. Inbox) since you last polled that folder. Message IDs are neither static nor consecutive, so they're not a suitable property for determining new vs. old messages.
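For illustration, the flow looks roughly like this (the messages delta endpoint was still in preview at the time of writing, so treat the exact URLs as indicative):

GET https://graph.microsoft.com/beta/me/mailFolders/inbox/messages/delta

Pages of changes are linked by @odata.nextLink, and the final page carries an @odata.deltaLink. Store that link and issue it on the next poll to receive only the messages added, changed, or removed since the previous round:

GET https://graph.microsoft.com/beta/me/mailFolders/inbox/messages/delta?$deltatoken=...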
I am migrating from EWS to Microsoft Graph and am having a problem replacing the "ExchangeService.syncFolderItems" flow.
ChangeCollection<ItemChange> changedItems = exchangeService.syncFolderItems(
        calendarFolder.getId(), PropertySet.FirstClassProperties,
        null /* no ignored item IDs */, 512 /* max changes per call */,
        SyncFolderItemsScope.NormalItems, syncState /* null on first sync */);
This gives me all the changes since the last sync state, together with the change type of each item.
Now I need to replace this with Microsoft Graph.
I saw the Get delta API in Microsoft Graph and also how to call it repeatedly using the returned delta token. My problem is that the Get delta API does not return the change type of each item. Could someone suggest the best way to implement this in Microsoft Graph, and perhaps the APIs that I need to use for it?
Note: this flow will be called by my service to get changes after a fixed interval of time. I also saw subscriptions (https://learn.microsoft.com/en-us/graph/api/resources/subscription?view=graph-rest-1.0), but I am not sure they can be used in my case, as my service will be making a call to get the changes for a meeting room at a fixed scheduled interval.
I am stuck here. Please help. Thanks in advance.
Deleted events will have an @removed property and only provide the id of the object, as documented here: https://learn.microsoft.com/en-us/graph/delta-query-overview#resource-representation-in-the-delta-query-response
Updated objects will only include the updated properties, and added objects should include all the data available. It's up to the 3rd-party application (your app) to maintain state on its side to be able to differentiate between updated and created objects when using delta queries.
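A minimal sketch of that bookkeeping in Java, assuming each page of the delta response has been parsed into Jackson JsonNode items; the in-memory set stands in for whatever persistence your service actually uses:

import com.fasterxml.jackson.databind.JsonNode;
import java.util.HashSet;
import java.util.Set;

public class DeltaClassifier {
    // IDs seen in earlier sync rounds; persist this between runs in practice.
    private final Set<String> knownIds = new HashSet<>();

    // Classify one entry from the "value" array of a delta response page.
    public void classify(JsonNode item) {
        String id = item.get("id").asText();
        if (item.has("@removed")) {
            knownIds.remove(id);
            System.out.println("deleted: " + id);
        } else if (knownIds.contains(id)) {
            System.out.println("updated: " + id); // only changed properties are present
        } else {
            knownIds.add(id);
            System.out.println("created: " + id); // full object is present
        }
    }
}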
Combining webhooks with delta queries to trigger the sync, instead of relying on a timer, is a good approach to provide an end-user experience that feels more real-time.
We have an OData v4 API that we are putting behind an Azure API Management (AAM) service, but have run into a problem configuring the routes/operations. In a nutshell, the issue is that AAM will reject a query for a route/operation unless it is explicitly configured (you get a 404 error), but with OData there can be a route for every attribute (property) of every operation (endpoint). The problem quickly becomes unmanageable.
OData allows you to query an individual attribute/property (e.g. GET ~/api/Person(1234)/FirstName). If we put this behind AAM, we need to define it as an operation. That's OK as long as there are only a few of these, but it potentially means you quickly have to define hundreds or thousands of operations (unless I've missed something). We have an API with about 35 top-level operations. Each resource has 20 attributes on average. That's 700 operations we would need to define. Apart from the work involved, that would be a shocking experience for users of the AAM developer portal.
I'm hoping someone can tell me an easy way to get around this problem. I know I can script the creation of these operations. You can also get around the problem to some degree by using an OData $select query parameter (which is what I've suggested in the meantime). I can't get over the feeling that I've missed something here. Is there a way of defining some sort of wildcard portion of the operation (e.g. /Person/*)? I can't find anything like that in the AAM documentation.
Try using URL templates instead of writing operations out explicitly, i.e. define an operation for /{entity}/{property}; that way it will match every property of every entity. You could use wildcards as well if you want to capture multiple segments at the end of the URL, as sketched below.
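For example, a handful of templated operations along these lines could replace the hundreds of explicit ones (the paths are illustrative; check the exact template and wildcard syntax against the AAM documentation):

GET /{entity}                     matches /Person, /Order, ...
GET /{entity}({id})               matches /Person(1234)
GET /{entity}({id})/{property}    matches /Person(1234)/FirstName
GET /{entity}/*                   catch-all for deeper OData paths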
So, having been to Ignite in 2017, I was really excited about the new possibilities of webhooks in Microsoft Graph bound to Azure Functions.
Recently I got the chance to really explore this for myself.
I am looking at this from the perspective of identity management: I really want to see what kind of user onboarding/change management we can react to and process with Graph webhooks and Azure Functions. So I started looking at the beta endpoint and the webhooks available for "/users".
The first thing that struck me was that in the beta only "updated" and "deleted" are valid changetypes. I would really like "created", since that is when the most work would get done on a user (for example, generating some unique attribute values).
OK, I thought, and just tried looking at "/users" with only changetype="updated". I created a subscription and a basic Azure Function to handle the requests, updated a user in Azure AD (just changed the "Last name" attribute), and sure enough a trigger was sent to my Azure Function.
Now come my biggest problems - this is really unusable in its current form.
This seems to react to all changes under /users, and I guess the trigger response could contain several users.
It would really be preferable to get individual triggers for each object changed in /users, even if they were changed at the same time.
Looking at the actual information sent, here lies the BIG problem:
I get the id of the user changed (good, but also expected)
I get the organizationId (ok..)
I get the eventTime (good)
I get a sequenceNumber (unsure what this is?)
I get subscriptionExpirationDateTime (ok fine good to have)
I get subscriptionId for the webhook (ok fine good to have)
... but WHERE is the information about what data was changed??? Nowhere to be found is what attributes of the user were changed (in my case "Last name"). This makes the triggers totally unusable for "/users", and I can't really think of anyone who could use this function as is.
Sure, I know the object was changed, but I have no idea WHAT happened or whether the change was relevant to my function.
Please tell me there are plans to include the actual changes in the trigger response?
EDIT: OK, right - this is more a feature request for the actual developers of Microsoft Graph; I will look for a better place to get this answered.
Please provide feature requests here (ex: richer data in notifications, "created" change type) : https://officespdev.uservoice.com/forums/224641-feature-requests-and-feedback?category_id=101632
Here are answers to other questions.
Microsoft Graph doesn't guarantee the ordering of events when sending notifications (ex: your webhook endpoint could be down; we will retry events delayed by up to 4 hours, or drop them if the outage is longer than 4 hours). Hence "SequenceNumber" can be used to track whether an event is in order and can be used as is, or whether it is out of order and needs a query to Graph to get the current state.
Currently, we provide the ids of objects and associations (member, manager) that have changed, and whether the object/association was deleted or updated, but not the details of the other properties that were changed. In its current form, the webhook is best used together with delta query: instead of polling delta query every X minutes and in most cases receiving zero changes, developers can create a subscription and perform a delta query only when a notification is received. This helps scale when there are many tenants that need to be polled; a sketch of the pattern follows.
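A minimal sketch of that notification-driven pattern in Java; the in-memory map standing in for persistent storage, the naive JSON handling, and the class name are all illustrative assumptions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NotificationDrivenSync {
    private final HttpClient http = HttpClient.newHttpClient();
    // deltaLink per subscription; use durable storage in a real service.
    private final Map<String, String> deltaLinks = new ConcurrentHashMap<>();

    // Called from the webhook endpoint instead of on a fixed-interval timer.
    public void onNotification(String subscriptionId, String accessToken) throws Exception {
        String url = deltaLinks.getOrDefault(subscriptionId,
            "https://graph.microsoft.com/v1.0/users/delta"); // first round = full sync
        HttpRequest req = HttpRequest.newBuilder()
            .uri(URI.create(url))
            .header("Authorization", "Bearer " + accessToken)
            .GET().build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        // Process resp.body() here and follow any @odata.nextLink pages, then keep
        // the @odata.deltaLink from the final page for the next notification.
        String next = extractDeltaLink(resp.body());
        if (next != null) {
            deltaLinks.put(subscriptionId, next);
        }
    }

    // Naive extraction; a real implementation should use a JSON parser.
    private static String extractDeltaLink(String json) {
        String key = "\"@odata.deltaLink\":\"";
        int i = json.indexOf(key);
        if (i < 0) return null;
        int start = i + key.length();
        return json.substring(start, json.indexOf('"', start));
    }
}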
Delta Query: https://developer.microsoft.com/en-us/graph/docs/concepts/delta_query_overview
Also FYI, webhook notifications for user/group are now also available in v1.0.
I have a web test for the following request:
{{WebServer1}}/Listing/Details/{{Active Listing IDs.AWE2_RandomActiveAuctionIds#csv.ListingID}}
And I receive the error message:
The maximum number of unique Web test request URLs to report on has been exceeded; performance data for other requests will not be available
because there are thousands of unique URLs (I'm testing with different values in the URL). Does anyone know how to fix this?
There are a few features within Visual Studio (2010 and above) that will help with this problem. I am speaking for 2010 specifically, and assuming that the same or similar options are available in later versions as well.
1. Maximum Request URLs Reported:
This is a General Option available in the Run Setting Properties of the load test. The default value set here is 1,000. This default value is usually sufficient... but not always high enough. Depending on the load test size, it may be necessary to increase this. Based on personal experience, if you are thinking about adjusting this value, go through your tests first and get an idea of how many requests you are expecting to see reported. For me, a rough guideline that is helpful:
number_of_requests_in_webtest * number_of_users_in_load_test = total_estimated_requests
If your load test contains multiple web tests, adjust the above accordingly: figure out the number of requests in each individual test, sum those values, and multiply by the number of users.
This option is more appropriate for large load tests that reference several web tests and/or have a very high user count. One reference for a problem/resolution use case of this option is here:
https://social.msdn.microsoft.com/Forums/en-US/ffc16064-c6fc-4ef7-93b0-4decbd72167e/maximum-number-of-unique-web-test-requests-exceeded?forum=vstswebtest
In the specific case mentioned in the originally posted question, this would not resolve the problem entirely. In fact, it could create a new one, where you end up running out of virtual memory. Visual Studio will continue to create new request metrics and counters for every single request that has a unique AWE2_RandomActiveAuctionIds.
2. Reporting Name:
Another option, which @AdrianHHH already touched on, is the "Reporting Name" option. It is found in the request properties, inside the web test. It defaults to nothing, which in turn results in Visual Studio deriving the name it will use from the request itself. This behavior creates the issue you are experiencing.
This option is the one that will directly resolve the issue of a new request being reported for every unique request report.
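For instance, in the .webtest XML the name can be pinned on the request itself, so every parameterized URL from the data source is reported under a single label (the ReportingName attribute is quoted from memory; verify it against your own .webtest file):

<Request Method="GET" Version="1.1" ReportingName="Listing Details"
    Url="{{WebServer1}}/Listing/Details/{{Active Listing IDs.AWE2_RandomActiveAuctionIds#csv.ListingID}}" />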
If you have a good idea of the expected number of requests to be seen in the load test (and I think it is a good idea to know this information, for debugging purposes, when running into this exception) a debugging step would be to set the "Maximum Request URLs Reported" DOWN to that value. This would force the exception you are seeing to pop up more quickly. If you see it after adjusting this value, then there is likely a request that is having a new reported value generated each time a virtual user executes the test.
Speaking from experience, this debugging step can save you some time and hair when dealing with requests that contain sessionId, GUID, and other similar types of information in them. Unless you are explicitly defining a Reporting Name for every single request... it can be easy to miss a request that has dynamic data in it.
3. Record Results:
Depending on the request, and its importance to your test, you can opt to remove it from your test results by setting this value to false. It is accessed under the request properties, within the web test itself. I personally do not use this option, but it may also be used to directly resolve the issue you are experiencing, given that it would just remove the request from the report altogether.
A holy-grail-of-sorts document can be found on Codeplex that covers the Reporting Name option in a small amount of detail. At the time of writing this answer, the following link can be used to get to it:
http://vsptqrg.codeplex.com/releases/view/42484
I hope this information helps.
I'm going to attempt to create an open project which compares the most common MP3 download providers.
This will require a user to enter a track/album/artist name (e.g. Deadmau5); this will then pull the relevant prices from the APIs.
I have a few questions that some of you may have encountered before:
Should I have one server-side page that requests all the data, so that it is all loaded simultaneously? If so, how would you deal with timeouts or any other problems that may arise? Or should the page load first, and then each price get pulled in one by one (Ajax)? What are your experiences when running a comparison check?
The main feature will be to compare prices, but how can I be sure that the products are the same? I was thinking of using running time and track numbers, but I would still have to set one source as my primary.
I'm making this a wiki, please add and edit any issues that you can think of.
Thanks for your help. Look out for a future blog!
I would check Amazon first. They will give you a SKU (the barcode on the back of the album; I think Amazon calls it an EAN). If the other providers use this, you can make sure they are looking at the right item.
I would cache all results into a database and expire them after a reasonable time. This way, when you get 100 requests for Britney Spears, you don't have to hammer the other sites and slow down your application.
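A minimal in-memory sketch of that idea in Java (a database-backed cache would follow the same shape; the one-hour TTL is an arbitrary assumption):

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class PriceCache {
    private record Entry(String payload, Instant fetchedAt) {}

    private static final Duration TTL = Duration.ofHours(1); // arbitrary expiry
    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    // Return the cached result for a search term, refetching only when stale.
    public String lookup(String term, Function<String, String> fetchFromProviders) {
        Entry e = cache.get(term);
        if (e == null || e.fetchedAt().plus(TTL).isBefore(Instant.now())) {
            e = new Entry(fetchFromProviders.apply(term), Instant.now());
            cache.put(term, e);
        }
        return e.payload();
    }
}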
You should also make sure you are multithreading whatever requests you are doing server-side. Curl, for instance, allows you to pull multiple URLs and assign a user-defined callback. I'd have the callback send some data so you can update your page as the results come back: GETTUNES => the curl callback returns some data for each URL while the connection is open, and you parse it on the client side.
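The same fan-out-with-callbacks idea in Java, using the JDK's asynchronous HTTP client (the provider URLs are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ProviderFanOut {
    public static void main(String[] args) {
        HttpClient http = HttpClient.newHttpClient();
        List<String> providerUrls = List.of(          // placeholders, one per store
            "https://provider-a.example/search?q=deadmau5",
            "https://provider-b.example/search?q=deadmau5");

        // Fire all requests at once; each invokes its callback as it completes,
        // so results can be pushed to the page as they arrive.
        List<CompletableFuture<Void>> inFlight = providerUrls.stream()
            .map(url -> http.sendAsync(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString())
                .thenAccept(resp -> System.out.println(url + " -> " + resp.statusCode())))
            .toList();

        // Wait for the stragglers before exiting.
        inFlight.forEach(CompletableFuture::join);
    }
}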