MS Graph [V1.0]: $orderBy completely ignores the $skip param - microsoft-graph-api

According to the docs, you should be able to use $top, $skip, and $orderBy at the same time.
$top and $skip work as expected, but as soon as you add $orderBy, the $skip constraint is ignored.
You can reproduce the examples below in Graph Explorer:
https://graph.microsoft.com/v1.0/me/messages?$select=id,subject&$orderBy=lastModifiedDateTime%20asc&$top=1&$skip=0
https://graph.microsoft.com/v1.0/me/messages?$select=id,subject&$orderBy=lastModifiedDateTime%20asc&$top=1&$skip=1

Your query string is wrong. You are missing the $ before skip. The correct query string resembles the following:
https://graph.microsoft.com/v1.0/me/messages?$select=id,subject,bodyPreview&$orderBy=lastModifiedDateTime asc&$top=1&$skip=6
Note:
Test with some real data (whether real business data or data from an O365 trial tenant) and the API will work as expected. I'm not sure why the default mock data doesn't work; perhaps the product group limits some queries against the mock data. So treat the mock test data as a reference only; developers need to create their own data source.
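For reference, here is a minimal Python sketch of the combined $orderBy/$top/$skip call, assuming you already have a valid access token for a mailbox that contains real data (the token placeholder below is hypothetical):

```python
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"  # hypothetical; acquire one via MSAL in practice


def get_messages_page(skip, top=1):
    """Fetch one ordered page of messages, skipping `skip` items."""
    url = (
        f"{GRAPH_BASE}/me/messages"
        f"?$select=id,subject"
        f"&$orderBy=lastModifiedDateTime%20asc"
        f"&$top={top}&$skip={skip}"
    )
    resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    return resp.json()["value"]


# With real mailbox data, skip=0 and skip=1 should return two different messages.
print(get_messages_page(skip=0))
print(get_messages_page(skip=1))
```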

Related

What is the "Alternative Operator" to "Replace Multiple OR statement" in $Filter query of "Microsoft Graph API"

I need to find details of more than 20 users in one query, so I am trying to use the query below, but a maximum of 15 "or" clauses is allowed in one query. So please let me know which operator I can use instead of multiple "or" statements:
https://graph.microsoft.com/v1.0/users?$filter=startsWith(userPrincipalName, 'user1') or startsWith(userPrincipalName, 'user2').......
That's a per-request threshold and I don't think it can be directly tackled with a drop-in replacement, but here are some other ideas for you to consider (a sketch of the second one follows below):
1) If your application stores the user ids, you can use the get-directory-objects-by-ids endpoint, which returns up to 1000 users in one request: https://learn.microsoft.com/en-us/graph/api/directoryobject-getbyids?view=graph-rest-1.0&tabs=http
2) Keep your current query, but use MS Graph $batch requests to send multiple of these queries in one call to MS Graph. This requires you to construct the JSON batch payload and parse the response: https://learn.microsoft.com/en-us/graph/json-batching
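If you go the $batch route, a minimal Python sketch might look like the following. The access token and user name prefixes are placeholders; Graph's JSON batching accepts up to 20 inner requests per call, and each inner $filter can still carry several startsWith clauses:

```python
from urllib.parse import quote

import requests

ACCESS_TOKEN = "<access-token>"  # hypothetical; acquire one via MSAL in practice
user_prefixes = [f"user{i}" for i in range(1, 31)]  # placeholder user names


def chunk(seq, size):
    """Split a sequence into consecutive slices of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]


# Build one inner GET /users request per group of prefixes (well under the
# 15-"or" filter limit and the 20-requests-per-batch limit).
batch_payload = {
    "requests": [
        {
            "id": str(n),
            "method": "GET",
            "url": "/users?$filter=" + quote(
                " or ".join(
                    f"startsWith(userPrincipalName,'{p}')" for p in group
                ),
                safe="(),'",
            ),
        }
        for n, group in enumerate(chunk(user_prefixes, 10), start=1)
    ]
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/$batch",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=batch_payload,
)
resp.raise_for_status()

# Each inner response carries its own status code and body.
for inner in resp.json()["responses"]:
    if inner["status"] == 200:
        for user in inner["body"]["value"]:
            print(user["userPrincipalName"])
```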

Microsoft Graph API using filter on get sharepoint lists

I'm trying to filter on the SharePoint lists, but the semantics seem to be different from the default semantics.
What I already tried was (with $ and without; single quotes and no quotes):
https://graph.microsoft.com/v1.0/sites/root/lists?filter=name eq 'Something'
https://graph.microsoft.com/v1.0/sites/root/lists?$filter=name eq 'Something'
https://graph.microsoft.com/v1.0/sites/root/lists?filter=id eq 'CFFF1460-B4D7-419C-A921-61B5279BBDDC'
https://graph.microsoft.com/v1.0/sites/root/lists?$filter=id eq 'CFFF1460-B4D7-419C-A921-61B5279BBDDC'
https://graph.microsoft.com/v1.0/sites/root/lists?filter=id eq CFFF1460-B4D7-419C-A921-61B5279BBDDC
https://graph.microsoft.com/v1.0/sites/root/lists?$filter=id eq CFFF1460-B4D7-419C-A921-61B5279BBDDC
But everything returns an array containing all lists and not only the subset matching the desired criteria.
So how can I filter on sharepoint lists?
If you know the id of the list you want to filter and get a response for, you can run a Graph API query like this:
https://graph.microsoft.com/v1.0/sites/root/lists/{list-id}
This will give you data about that list.
Let me know if you need further details on this.
Unfortunately (as Marc already mentioned in the comments), it is not possible to filter on SharePoint lists. If you need to, you have to do it on the client side: read the lists without any filter and run a LINQ statement (or something similar) over the received collection. Be aware that (like all collections in Graph) you don't always get all elements at once; you may have to call the next-link request from the last response and wade through more requests and elements until you find what you need (see the sketch below).
Once you have found what you need, it is a good idea to store the list id in a memory cache or a Redis cache for faster lookups the next time you need this information.
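Here is a minimal Python sketch of that client-side approach, assuming you already have a valid access token. It pages through /sites/root/lists via @odata.nextLink, filters on the list name on the client, and caches the resolved id in a plain dict standing in for a memory or Redis cache:

```python
import requests

ACCESS_TOKEN = "<access-token>"  # hypothetical; acquire one via MSAL in practice
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Simple in-process cache standing in for Redis or a memory cache.
_list_id_cache = {}


def find_list_id(target_name):
    """Page through all lists on the root site and filter client-side."""
    if target_name in _list_id_cache:
        return _list_id_cache[target_name]

    url = "https://graph.microsoft.com/v1.0/sites/root/lists?$select=id,name,displayName"
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        payload = resp.json()
        for sp_list in payload["value"]:
            if sp_list.get("name") == target_name:
                _list_id_cache[target_name] = sp_list["id"]
                return sp_list["id"]
        # Follow server-driven paging until no @odata.nextLink remains.
        url = payload.get("@odata.nextLink")
    return None


print(find_list_id("Something"))
```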

SurveyMonkey: Where is qtype and respondent_id in the get_survey_details extract?

I'm trying to replicate the SurveyMonkey relational database format ("A relational database view of your data with a separate file created for each database table. Knowledge of SQL (Structured Query Language) is necessary.") to download responses for our reporting analytics using the SurveyMonkey API. However, I'm not able to find the QType and respondent_id data in the get_survey_details API extract method. Can someone help?
1. QType is found in the Questions.xls data in the current relational database format download.
I was able to find all of the other data in the Questions.xls data in the get_survey_details API (question_id, page_id, position, heading), but not QType.
2. Respondent_id is found in the Responses.xls data in the relational database format download.
I can see that respondent_id is in the get_responses API method, but that does not have the associated Key1 data that I also need. Key1 data is the answer_id data in the get_survey_details API, which is why I expected to find the corresponding respondent_id there as well.
SurveyMonkey's deprecated relational database download (RDD) format and API provide data using very different paradigms. Using the API to recreate the RDD format in order to work with an old integration is probably a poor use of time. A more productive idea would be to use the API to build a more modern integration from the ground-up taking advantage of things like real-time data availability to modernize the functionality. But if you're determined:
You will need to map the family and subtype of the question type to the QTypes you're used to. The information you need to build the mapping can be found on SurveyMonkey's developer portal in Data Types.
get_responses returns answer_id as row and/or col. For matrix question types you will have both, and they cross-reference to an answer and answer items from get_survey_details. For matrix questions, you might consider concatenating the row and col to create a single unique key value like the Key1 you're accustomed to, as sketched below.
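As a rough illustration of the concatenation idea (the field names and sample ids below are assumptions for the get_responses answer shape, not an exact API contract):

```python
def composite_answer_key(answer):
    """Build a single Key1-style value from a matrix answer's row and col."""
    row = answer.get("row", "0")
    col = answer.get("col", "0")
    return f"{row}_{col}"


# Hypothetical answers: one matrix cell (row and col) and one single-choice answer.
answers = [
    {"row": "3024965133", "col": "3024965139"},
    {"row": "3024965140"},
]
for a in answers:
    print(composite_answer_key(a))
```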
I've done this. It got over the immediate need when the RDD format was withdrawn.
Now that I have more time, I'm looking at a better design but as always backwards compatibility with a large code base is the drag.
To answer your question on Qtype, see my reply at
What are the expected values for the various "ENUM" types returned by the SurveyMonkey API?

OData - applying filters to SQL queries

I am relatively new to OData services and I am trying to explore whether OData is feasible for my project.
In all the examples/demos I have come across, the demo loads all data into the repository and then applies OData filters over that data.
Is there a way to avoid loading all the data from SQL (i.e., apply the OData filters at the SQL level)? Loading everything would obviously be highly inefficient for N requests coming in per second.
So for example if I had a movies service :
localhost:4502/OdataService/movies(55)
The above example just filters for movie id 55 from the "entire" set of movies. Is there a way to make this filter happen at the SQL level instead of bloating memory with all movies first and then letting OData filter them?
Can anyone guide me in the right direction?
I found out after doing a small POC that Entity Framework takes care of building a dynamic query based on the request, so the OData filter is translated into SQL rather than applied in memory. A conceptual sketch of the difference follows below.
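For intuition only, here is a tiny Python/SQLite sketch (hypothetical movies table and data) of what "filtering at the SQL level" means: the key lookup becomes a parameterised WHERE clause executed by the database, instead of loading every row and filtering in memory. With an OData controller exposing an IQueryable backed by Entity Framework, this translation is done for you.

```python
import sqlite3


def get_movie(conn, movie_id):
    # The filter is pushed down to SQL; only the matching row is materialised.
    cur = conn.execute("SELECT id, title FROM movies WHERE id = ?", (movie_id,))
    return cur.fetchone()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany(
    "INSERT INTO movies VALUES (?, ?)",
    [(54, "First"), (55, "Second"), (56, "Third")],
)

print(get_movie(conn, 55))  # analogous to /OdataService/movies(55)
```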

Limit serverside results from WebApi controller and ODATA w/ Nhibernate

I've created a WebApi project in VS 2012, using NHibernate as my ORM, and I intend to enable OData support on it. So I've created a test controller with a single Get method that returns a list of entities from a table in my database.
Everything works fine, I can use OData to filter and order my results, etc. The problem is I couldn't find a way to limit the amount of data that's being returned from the database to the controller, and this table has millions of records in it.
Using the PageSize property of the Queryable attribute only seems to limit the amount of data returned to the client, but not the amount of data returned from the DB.
I've tried applying a Take(n) on the IQueryable inside the Get method before returning it, and it limits the results brought back from the DB, but it breaks the OData filtering: if you query an entity that's not in the first n results, you just get an empty collection.
I know you can use the $top parameter in OData to accomplish this, but I would prefer not to depend on the client/consumer providing it, to ensure that I'm not unnecessarily bringing back thousands or even millions of records that I'm not going to use.
I've also tried manually checking whether the client provided a $top parameter on the query string, applying the OData transformation to my IQueryable, and then applying Take(n) over the transformed query. This approach lets me filter for any entity through OData, but it breaks pagination: if I use the $skip=n parameter, it again returns an empty collection.
So, is there any way to reliably limit the results fetched from the DB while not breaking the OData support?
We recently found that too. We are not applying a Take(pageSize) when server-driven paging is enabled, because we have to figure out whether a next-page link should be generated or not. We just enumerate the result set for pageSize entities and then check whether there are more. We assumed that most providers bring back only a partial set of results, since IQueryable is generally a lazy implementation. It turns out that is not true. Also, the database can optimize the query if it knows that only pageSize results are required; the general pattern is sketched below.
This is the issue that was opened for it. The good news is that Youssef has already fixed it :). This is the commit that fixed it. So if you grab the nightly builds, you should be good.
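For intuition, here is a minimal, framework-agnostic sketch of that pattern in Python, with a hypothetical fetch_page helper and an in-memory list standing in for the database query: asking the store for pageSize + 1 rows limits the database work while still revealing whether a next-page link is needed.

```python
def fetch_page(query_rows, page_size, skip=0):
    """Pull at most page_size + 1 rows after the skip offset."""
    window = query_rows[skip:skip + page_size + 1]
    page = window[:page_size]
    has_next_page = len(window) > page_size
    return page, has_next_page


rows = list(range(1, 26))  # pretend result set of 25 records

page, more = fetch_page(rows, page_size=10, skip=0)
print(page, "next link needed:", more)   # rows 1..10, True

page, more = fetch_page(rows, page_size=10, skip=20)
print(page, "next link needed:", more)   # rows 21..25, False
```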
