Counting ALL rows in Dynamics CRM Online web api (ODATA) - odata

Is it possible to count all rows in a given entity, bypassing the 5000-row limit and the page-size limit?
I do not want to return more than 5000 rows in one request, but only want the count of all the rows in that given entity.
According to Microsoft, you cannot do it in the request URI:
The count value does not represent the total number of entities in the system.
It is limited by the maximum number of entities that can be returned.
I have tried this:
GET [Organization URI]/api/data/v9.0/accounts/?$count=true
Any other way?

Use the RetrieveTotalRecordCount function:
If you want to retrieve the total number of records for an entity beyond 5000, use the RetrieveTotalRecordCount function.
Your query will look like this (note that EntityNames takes entity logical names):
https://<your api url>/RetrieveTotalRecordCount(EntityNames=['account'])
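For illustration, here is a minimal sketch of calling the function from code (Python with the requests library; the org URL, API version and access token are placeholders, and the response is assumed to be serialized as an EntityRecordCountCollection with parallel Keys/Values arrays):

import requests

# Placeholders: substitute your own org URL and a valid OAuth bearer token.
ORG_URI = "https://yourorg.api.crm.dynamics.com"
HEADERS = {
    "Authorization": "Bearer <access token>",
    "Accept": "application/json",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
}

# RetrieveTotalRecordCount takes a collection of entity logical names.
url = f"{ORG_URI}/api/data/v9.1/RetrieveTotalRecordCount(EntityNames=['account'])"
response = requests.get(url, headers=HEADERS)
response.raise_for_status()

# The counts are expected to come back as parallel Keys/Values arrays.
collection = response.json()["EntityRecordCountCollection"]
counts = dict(zip(collection["Keys"], collection["Values"]))
print(counts)  # e.g. {'account': 123456}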

Update:
The latest release, v9.1, has a direct function to achieve this: RetrieveTotalRecordCount
————————————————————————————
Unfortunately, we have to pick one of these routes to get the record count, depending on the expected result size and the platform limits.
1. If the count is less than 5,000, use this (you already tried it):
GET [Organization URI]/api/data/v9.0/accounts/?$count=true
2. If it is less than 50,000, use this:
GET [Organization URI]/api/data/v8.2/accounts?fetchXml=[URI-encoded FetchXML query]
Exceeding that limit returns the error: AggregateQueryRecordLimit exceeded. Cannot perform this operation.
Sample query:
<fetch version="1.0" mapping="logical" aggregate="true">
<entity name="account">
<attribute name="accountid" aggregate="count" alias="count" />
</entity>
</fetch>
Do a browser address bar test with URI:
[Organization URI]/api/data/v8.2/accounts?fetchXml=%3Cfetch%20version=%221.0%22%20mapping=%22logical%22%20aggregate=%22true%22%3E%3Centity%20name=%22account%22%3E%3Cattribute%20name=%22accountid%22%20aggregate=%22count%22%20alias=%22count%22%20/%3E%3C/entity%3E%3C/fetch%3E
The only way to get around this is to partition the dataset based on some property so that you get smaller subsets of records to aggregate individually.
3. The last resort is to iterate through the @odata.nextLink of each page and count the records in each page in code; see the sketch below.
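A minimal sketch of that last option (Python with requests; the org URL and token are placeholders), requesting a page size via the odata.maxpagesize preference and following @odata.nextLink until it disappears:

import requests

ORG_URI = "https://yourorg.api.crm.dynamics.com"      # placeholder
HEADERS = {
    "Authorization": "Bearer <access token>",          # placeholder
    "Accept": "application/json",
    "Prefer": "odata.maxpagesize=5000",                # records per page
}

# Select a single column to keep the payload small and count page by page.
url = f"{ORG_URI}/api/data/v9.0/accounts?$select=accountid"
total = 0
while url:
    page = requests.get(url, headers=HEADERS).json()
    total += len(page["value"])
    url = page.get("@odata.nextLink")  # absent on the last page

print(f"Total accounts: {total}")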

The XrmToolBox has a counting tool that can help with this.
Also, we here at MetaTools Inc. have just released an online tool called AggX that runs aggregates on any number of records in a Dynamics 365 Online org, and it's free during the beta release.

You may try OData's $inlinecount query option.
Adding only $inlinecount=allpages to the query string will still return all records, so also add $top=1 to the URI to fetch just one record along with the count of all records.
Your URL will look like /accounts/?$inlinecount=allpages&$top=1
For example, the response XML will contain the count in an element such as <m:count>11</m:count>
Note: This query option is only supported in OData version 2.0 and above (in OData 4.0 it was replaced by $count=true).
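As a rough illustration (Python; the service URL is a placeholder for any OData v2/v3 endpoint that returns Atom XML), the count can be read from the m:count element of the response:

import requests
import xml.etree.ElementTree as ET

# Placeholder OData v2/v3 service; substitute your own entity set URL.
url = "https://example.com/odata/accounts?$inlinecount=allpages&$top=1"
xml = requests.get(url).content

# m:count lives in the OData metadata namespace used by Atom responses.
ns = {"m": "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"}
count = ET.fromstring(xml).find("m:count", ns).text
print(count)  # e.g. 11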

This works:
[Organization URI]/api/data/v8.2/accounts?$count

Related

Google Ads API: getMetrics doesn't get results for all passed keywords

I'm on migration from the old Google AdWords API to the new Google Ads API, using PHP-SDK by Google.
This is the use case, where I'm stuck:
I feed a set of keywords (paginated into keyword plans of 10k each) to generateHistoricalMetrics($keywordPlanResource) and collect the results.
To do so I followed the instructions at https://developers.google.com/google-ads/api/docs/keyword-planning/generate-historical-metrics and, especially, https://developers.google.com/google-ads/api/docs/keyword-planning/generate-historical-metrics#mapping_to_the_ui, using KeywordPlanAdGroupKeywords (with a single ad group) and not passing a specific date range for now, relying on the default value.
Further, I had to apply some filters to my keywords because of KEYWORD_HAS_INVALID_CHARS and KEYWORD_TEXT_TOO_LONG, but all the errors I'm aware of are gone now.
Now I found out that the KeywordPlanHistoricalMetrics object does not contain any keyword id (of the form customers/{customer_id}/keywordPlanAdGroupKeywords/{keyword_plan_ad_group_keyword_id}), so I have to rely on the correct ordering. This seems OK, as the original ordering of keywords appears to be preserved within the results, as described at https://developers.google.com/protocol-buffers/docs/encoding#optional
But I still have the problem that
count($keywordPlanServiceClient->generateHistoricalMetrics($keywordPlanResource)->getMetrics()) is lower than count($passedKeywords), where each of $passedKeywords was passed to
new KeywordPlanAdGroupKeyword([
    'text' => $passedKeyword,
    'match_type' => KeywordMatchType::EXACT,
    'keyword_plan_ad_group' => $planAdGroupResource,
]);
Q: So I have two questions here:
Why does getMetrics() not return the same number of results as the number of passed keywords?
I'm struggling with debugging at this point: say I want to know which keywords were left out, either to provide more information here or just to skip them and let my customer know that these particular keywords were not queried. How can I do this when, although I have a keyword id for every passed keyword, I cannot match the returned metrics to them, because the KeywordPlanHistoricalMetrics object does not contain any keyword id?
Detail: While testing I found that reducing the number of queried keywords reduces the amount of lost keyword data:
10k queried keywords - 4.72% loss,
5k - 2.12%,
2.5k - 0.78%,
1.25k - 0.43%,
625 - 0.3%,
500 - 0.24%,
250 - 0.03%,
200 - 0.03% of lost keywords.
But I can't imagine that keywords should be queried one by one.

Combined Select and Filter MS-Graph query parameters not working as expected for signInActivity/lastSignInDateTime

Query: https://graph.microsoft.com/beta/users?$select=id,displayName,signInActivity&$filter=signInActivity/lastSignInDateTime le 2020-03-01T00:00:00Z
I am trying to query for users based on "lastSignInDateTime". When I do this, the response gives all the properties for every user returned. I then try to reduce this response by adding a "select" parameter to reduce the properties returned, but it seems to have no effect. Is it possible to combine the "filter" and "select" query parameters?
We have a bug for collection enumeration in that beta endpoint, due to be fixed within the next couple of months. As a workaround, you can export your dataset into a data structure and filter in memory (preferred, see the sketch below), or you can query specific users (expensive and not recommended).
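For what it's worth, a minimal sketch of the in-memory workaround (Python with requests; the bearer token is a placeholder, and the beta endpoint and properties are the ones from the question):

from datetime import datetime, timezone
import requests

HEADERS = {"Authorization": "Bearer <access token>"}   # placeholder token
url = "https://graph.microsoft.com/beta/users?$select=id,displayName,signInActivity"

cutoff = datetime(2020, 3, 1, tzinfo=timezone.utc)
stale_users = []

# Page through all users, then filter on lastSignInDateTime in memory.
while url:
    page = requests.get(url, headers=HEADERS).json()
    for user in page.get("value", []):
        last = (user.get("signInActivity") or {}).get("lastSignInDateTime")
        if last and datetime.fromisoformat(last.replace("Z", "+00:00")) <= cutoff:
            stale_users.append((user["id"], user["displayName"]))
    url = page.get("@odata.nextLink")

print(len(stale_users), "users with no sign-in since", cutoff.date())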

Why limited number of next page tokens?

Through a script I can collect a sequence of videos that search.list returns. The maxResults parameter was set to 50. The total number of items is large, but there are not enough next page tokens to retrieve all the desired results. Is there any way to get all the returned items, or is this a YouTube restriction?
Thank you.
No, retrieving the results of a search is limited in size.
The total number of results you are allowed to retrieve seems to have been reduced to 500 (in the past it was limited to 1,000). The API does not allow you to retrieve more from a single query. To try to get more, use a number of queries with different parameters, like publishedAfter, publishedBefore, order, type, or videoCategoryId, or vary the query terms and keep track of the different video IDs returned.
See for a reference:
https://code.google.com/p/gdata-issues/issues/detail?id=4282
By the way, "totalResults" is an estimate and its value can change on the next page call.
See: YouTube API v3 totalResults field is returning 1 000 000 when it shouldn't
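To make the ceiling concrete, here is a small sketch (Python with requests; the API key and query are placeholders) that pages through search.list with pageToken until the API stops handing out tokens:

import requests

API_KEY = "<your API key>"   # placeholder
url = "https://www.googleapis.com/youtube/v3/search"
params = {"part": "snippet", "q": "example query", "type": "video",
          "maxResults": 50, "key": API_KEY}

video_ids = []
while True:
    data = requests.get(url, params=params).json()
    video_ids += [item["id"]["videoId"] for item in data.get("items", [])]
    token = data.get("nextPageToken")
    if not token:
        break                    # no more pages; the cap is roughly 500 results
    params["pageToken"] = token

print(len(video_ids), "videos collected")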

YQL, returning only 100 values. Can I get more?

I'm using YQL with JSON in order to retrieve a Twitter search. It only returns 100 values. Any chance to get more than that?
Doesn't look good, friend: "The maximum number of results that can be returned by a YQL query on this table is 100, which is defined by the attribute max."
From: http://developer.yahoo.com/yql/guide/yql-tutorials.html
The maximum number of items returned by a SELECT statement with YQL is 5,000. If the table in the query does not return enough results by default (assuming there are more available), you can ask for more by using a remote limit.
select * from twitter.search(250) where q="lol"
For more details, see Paging and Table Limits in the YQL Guide.
Be aware that many data providers will rate limit queries against their services; Twitter certainly does.

Twitter search API results

I'm using the Twitter API atom format
http://search.twitter.com/search.atom?q=Name&:)&since:year-month-date&rpp=1500
but it only returns 100 tweets. I tried the JSON format as well, but it also returned only 100 results. Is there anything I'm doing wrong that limits me to 100 results?
Yes, you're limited on the number of results per page. In order to get more results, you have to use the page parameter like so:
http://search.twitter.com/search.atom?q=Name&:)&since:year-month-date&rpp=1500&page=2
EDIT
rpp: the number of tweets to return per page, up to a max of 100. E.g.,
http://search.twitter.com/search.atom?lang=en&q=devo&rpp=15
page: the page number to return, up to a max of roughly 1500 results (based on rpp * page)
Source: http://search.twitter.com/api/
In other words your rpp won't work as you expect because the max is 100.
My suggestion:
Make a request to the API and retrieve 100 results at a time.
Use a loop to check whether the result count equals 100.
If true, make a new request for page 2.
Test again and keep checking the number of items until the result set contains fewer than 100.
The Twitter Search API has changed, including in the naming of the parameters: for instance, rpp is now count and the page parameter was removed in favor of max_id, a parameter based on a timeline concept:
"To use max_id correctly, an application’s first request to a
timeline endpoint should only specify a count. When processing this
and subsequent responses, keep track of the lowest ID received. This
ID should be passed as the value of the max_id parameter for the next
request, which will only return Tweets with IDs lower than or equal to
the value of the max_id parameter."
https://developer.twitter.com/en/docs/tweets/timelines/guides/working-with-timelines
The updated link to the Twitter search api is:
https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets.html
Remember that not all tweets are indexed and if you are using the non-commercial version, you are limited to a 7-day search.
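A rough sketch of the max_id pattern described above (Python with requests against the standard v1.1 search endpoint; the bearer token is a placeholder and rate limits are ignored):

import requests

HEADERS = {"Authorization": "Bearer <bearer token>"}   # placeholder
url = "https://api.twitter.com/1.1/search/tweets.json"
params = {"q": "devo", "count": 100}   # count is capped at 100 per request

tweets = []
while True:
    batch = requests.get(url, headers=HEADERS, params=params).json().get("statuses", [])
    if not batch:
        break
    tweets += batch
    # Ask for tweets strictly older than the lowest ID seen so far.
    params["max_id"] = min(t["id"] for t in batch) - 1

print(len(tweets), "tweets collected (standard search covers roughly the last 7 days)")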
