I have an API which exports .xlsx data.
The SQL conditions need two ids, and I also need to pass the column name(s). I currently pass the ids and the column names as query parameters in a GET request. Depending on the data collected there can be 1, 10, or 100 column names, so the URL can become very long.
Should I change it to a POST and send the columns in the body, or should I keep sending it the way I am now?
If there is a possibility of the URL becoming long, you are better off going with a POST request rather than GET; browsers do impose URL length limits (Internet Explorer, for example, caps URLs at 2,083 characters).
You can refer to a similar question asked here:
How long may parameters in a get request be?
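For example, a minimal sketch of the POST variant in Python (the endpoint and field names here are illustrative, not from the question):

import requests

# Hypothetical endpoint; the two ids and the column list travel in the JSON body.
resp = requests.post(
    "https://example.com/api/export-xlsx",
    json={
        "id1": 42,
        "id2": 7,
        "columns": ["name", "email", "created_at"],  # may be 1, 10, or 100 names
    },
)
resp.raise_for_status()
with open("export.xlsx", "wb") as f:
    f.write(resp.content)  # the .xlsx payload returned by the API

A side benefit of POST is that the column list stays out of server access logs, which typically record full URLs.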
I am trying to build a paginated report with a large number of parameters (8) and a huge number of values (100-1000) for each parameter. Because of the complexity in the UI, I intend to develop two reports:
Report 1 [.pbix], where the user can select the parameters and values.
Report 2, the paginated report [.rdl], which is the actual result with pages of data.
Report 1 calls Report 2 via a generated URL. This works for a limited number of parameters, but since the list is huge the reports fail to generate because of the browser's URL length limit (Chrome and IE). I am looking for a solution that can work with an indefinite number of parameters, and I am trying to make this work with a FORM and the POST method so the parameters are sent in the request body instead of the URL.
I have looked into https://community.powerbi.com/t5/Service/Paginated-Report-Using-URL-Parameters-and-Select-ALL/td-p/8... but that solution doesn't always work because the URL length is huge.
Is there a solution that can work with any number of parameters, preferably something that uses the FORM/POST method? I am open to any other suggestions; please let me know.
Regards,
Sasi.
Unfortunately, according to Microsoft there are some strong limitations on transferring selected parameters when directing the user into Report Builder:
You can define no more than 10 filter conditions.
The URL length in bytes is very limited.
For example, even if you use only one slicer, a user selecting 24 different values in it can already break the URL sent to Report Builder.
What is the exact business need? Either the business accepts some strong limitations, or you need to think of another implementation.
One possible limitation to propose to the business: Power BI passes only Year and Month to Report Builder, and the user then re-selects the additional parameters inside Report Builder.
Is it possible to count all rows in a given entity, bypassing both the 5,000-row limit and the page-size limit?
I do not want to return more than 5000 rows in one request, but only want the count of all the rows in that given entity.
According to Microsoft, you cannot do it in the request URI:
The count value does not represent the total number of entities in the system.
It is limited by the maximum number of entities that can be returned.
I have tried this:
GET [Organization URI]/api/data/v9.0/accounts/?$count=true
Any other way?
Use the RetrieveTotalRecordCount function:
If you want to retrieve the total number of records for an entity beyond 5000, use the RetrieveTotalRecordCount Function.
Your query will look like this:
https://<your api url>/RetrieveTotalRecordCount(EntityNames=['accounts'])
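A minimal sketch of calling it from Python (the organization URL and token are placeholders, and the Keys/Values response shape noted in the comment is how the Web API usually returns it):

import requests

ORG_URI = "https://yourorg.crm.dynamics.com"  # placeholder organization URL
ACCESS_TOKEN = "<bearer token>"               # assumed already acquired

url = ORG_URI + "/api/data/v9.1/RetrieveTotalRecordCount(EntityNames=['account'])"
resp = requests.get(url, headers={"Authorization": "Bearer " + ACCESS_TOKEN})
resp.raise_for_status()

# The response pairs entity logical names with counts in parallel Keys/Values lists.
result = resp.json()["EntityRecordCountCollection"]
print(dict(zip(result["Keys"], result["Values"])))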
Update:
Latest release v9.1 has the direct function to achieve this - RetrieveTotalRecordCount
————————————————————————————
Unfortunately, we have to pick one of these routes to get the record count, depending on the expected result size and the limits involved.
1. If the count is less than 5,000, use this (you already tried it):
GET [Organization URI]/api/data/v9.0/accounts/?$count=true
2. If it is less than 50,000, use this:
GET [Organization URI]/api/data/v8.2/accounts?fetchXml=[URI-encoded FetchXML query]
Exceeding the limit produces the error: AggregateQueryRecordLimit exceeded. Cannot perform this operation.
Sample query:
<fetch version="1.0" mapping="logical" aggregate="true">
<entity name="account">
<attribute name="accountid" aggregate="count" alias="count" />
</entity>
</fetch>
You can test this from the browser address bar with the URI:
[Organization URI]/api/data/v8.2/accounts?fetchXml=%3Cfetch%20version=%221.0%22%20mapping=%22logical%22%20aggregate=%22true%22%3E%3Centity%20name=%22account%22%3E%3Cattribute%20name=%22accountid%22%20aggregate=%22count%22%20alias=%22count%22%20/%3E%3C/entity%3E%3C/fetch%3E
The only way to get around this is to partition the dataset based on some property so that you get smaller subsets of records to aggregate individually.
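A sketch of issuing route 2 from Python, URL-encoding the same FetchXML as above (organization URL and token are placeholders):

import requests
from urllib.parse import quote

ORG_URI = "https://yourorg.crm.dynamics.com"  # placeholder organization URL
ACCESS_TOKEN = "<bearer token>"               # assumed already acquired

# The same aggregate FetchXML as above, collapsed to one string.
fetch_xml = (
    '<fetch version="1.0" mapping="logical" aggregate="true">'
    '<entity name="account">'
    '<attribute name="accountid" aggregate="count" alias="count" />'
    '</entity>'
    '</fetch>'
)

url = ORG_URI + "/api/data/v8.2/accounts?fetchXml=" + quote(fetch_xml)
resp = requests.get(url, headers={"Authorization": "Bearer " + ACCESS_TOKEN})
resp.raise_for_status()
# The alias "count" from the FetchXML appears on the single aggregate row.
print(resp.json()["value"][0]["count"])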
3. The last resort is iterating through @odata.nextLink and counting the records on each page with a counter variable; a sketch follows below.
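A sketch of that loop in Python (same placeholders as above; the Prefer header asks for the largest allowed page):

import requests

ORG_URI = "https://yourorg.crm.dynamics.com"  # placeholder organization URL
ACCESS_TOKEN = "<bearer token>"               # assumed already acquired

headers = {
    "Authorization": "Bearer " + ACCESS_TOKEN,
    "Prefer": "odata.maxpagesize=5000",  # request the maximum page size
}

url = ORG_URI + "/api/data/v9.0/accounts?$select=accountid"
total = 0
while url:
    page = requests.get(url, headers=headers).json()
    total += len(page["value"])
    url = page.get("@odata.nextLink")  # absent on the last page
print(total)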
The XrmToolBox has a counting tool that can help with this.
Also, we here at MetaTools Inc. have just released an online tool called AggX that runs aggregates on any number of records in a Dynamics 365 Online org, and it's free during the beta release.
You may try OData's $inlinecount query option.
Adding only $inlinecount=allpages to the query string will return all records, so also add $top=1 to the URI to fetch just one record along with the count of all records.
Your URL will look like /accounts/?$inlinecount=allpages&$top=1
For example, a request in that form returns the count in the response XML as <m:count>11</m:count>.
Note: This query option is only supported in OData version 2.0 and above.
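A sketch of that request in Python (the organization-data endpoint path is the classic OData 2.0 one, and authentication is omitted for brevity):

import re
import requests

url = ("https://yourorg.crm.dynamics.com/XRMServices/2011/OrganizationData.svc"
       "/AccountSet?$inlinecount=allpages&$top=1")
resp = requests.get(url)  # add authentication as appropriate

# Pull the total out of the Atom XML without parsing the whole feed.
match = re.search(r"<m:count>(\d+)</m:count>", resp.text)
print(match.group(1) if match else "count element not found")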
This works:
[Organization URI]/api/data/v8.2/accounts?$count
I want to build an action in Zapier that winds up sending a random email. The body of the email would be randomly pulled from any source that would be appropriate for storing HTML. I was thinking Google Sheets or Knack.
My problem is, I can't figure out how to ask Zapier to get a random record from the source.
Have any of you done something similar?
David here, from the Zapier Platform team.
It really depends on how you store that data. Your first step is probably to add a code step that generates a random integer. From there, you need a way to turn that id into a piece of data. In the case of Sheets, you could put each HTML snippet in a row and give it an integer "id", then do a search to find the row with the random id and pull the data out of that row.
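As a sketch, that first code step could be a Python "Code by Zapier" step like this (the row_count input is a hypothetical field mapped from an earlier step):

import random

# input_data is supplied by Zapier; its values always arrive as strings.
row_count = int(input_data["row_count"])  # hypothetical mapped field

# output is what later steps (e.g., the Sheets lookup) can reference.
output = {"random_id": random.randint(1, row_count)}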
Alternatively (depending on how much data you're storing), you could use Storage by Zapier and a code step to do the whole process.
Hopefully that points you in the right direction!
I want to build a Rails API for a mobile app, and there is the following situation: my app will authorize using a phone number (like Viber/WhatsApp); it can also automatically detect which contacts from the phone book already have my app installed. If I understand correctly, I should create some GET method that takes an array of numbers and returns the numbers of users who are already in my system. I have no problem with a GET method and arrays in GET params, but a user's phone book can be very big, and sending all the numbers in GET params is not a good solution. How can I do it right? Should I divide the numbers into parts and send the first 10 numbers, then the next 10, and so on? Thanks in advance.
Just use a POST request instead. You don't always have to use GET when you're searching for stuff.
You could also optimize your query string:
?ph=5551112222,5552223333...
This at least minimizes the request size. I think Rails should give you params[:ph] as an array; if not, splitting the string on a comma is just one extra line of code.
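On the client side, batching the POST keeps any single request small. A sketch in Python (the endpoint, field names, and response shape are all hypothetical):

import requests

phone_numbers = ["5551112222", "5552223333"]  # the user's phone book
BATCH_SIZE = 500  # arbitrary; tune as needed

registered = []
for i in range(0, len(phone_numbers), BATCH_SIZE):
    batch = phone_numbers[i:i + BATCH_SIZE]
    resp = requests.post(
        "https://example.com/api/contacts/match",  # hypothetical endpoint
        json={"numbers": batch},
    )
    resp.raise_for_status()
    registered.extend(resp.json()["numbers"])  # assumed response shape

print(registered)  # the numbers already registered in the system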
I am currently trying to pull data about videos from a YouTube user upload feed. This feed contains all of the videos uploaded by a certain user, and is accessed from the API by a request to:
http://gdata.youtube.com/feeds/api/users/USERNAME/uploads
Where USERNAME is the name of the YouTube user who owns the feed.
However, I have encountered problems when trying to access feeds longer than 1000 videos. Since each request to the API can return at most 50 items, I am iterating through the feed using max-results and start-index as follows:
http://gdata.youtube.com/feeds/api/users/USERNAME/uploads?start-index=1&max-results=50&orderby=published
http://gdata.youtube.com/feeds/api/users/USERNAME/uploads?start-index=51&max-results=50&orderby=published
And so on, incrementing start-index by 50 on each call. This works perfectly up until:
http://gdata.youtube.com/feeds/api/users/USERNAME/uploads?start-index=1001&max-results=50&orderby=published
At which point I receive a 400 error informing me that 'You cannot request beyond item 1000.' This confused me, as I assumed the query would only have returned 50 videos: 1001-1050 in order of most recent publication. Having looked through the documentation, I discovered this:
Limits on result counts and accessible results
...
For any given query, you will not be able to retrieve more than 1,000
results even if there are more than that. The API will return an error
if you try to retrieve greater than 1,000 results. Thus, the API will
return an error if you set the start-index query parameter to a value
of 1001 or greater. It will also return an error if the sum of the
start-index and max-results parameters is greater than 1,001.
For example, if you set the start-index parameter value to 1000, then
you must set the max-results parameter value to 1, and if you set the
start-index parameter value to 980, then you must set the max-results
parameter value to 21 or less.
I am at a loss about how to access a generic user's 1001st most recent upload and beyond in a consistent fashion, since they cannot be indexed using only max-results and start-index. Does anyone have any useful suggestions for how to avoid this problem? I hope that I've outlined the difficulty clearly!
Getting all the videos for a given account is supported, but you need to make sure that your request for the uploads feed is going against the backend database and not the search index. Because you're including orderby=published in your request URL, you're going against the search index. Search index feeds are limited to 1000 entries.
Get rid of the orderby=published and you'll get the data you're looking for. The default ordering of the uploads feed is reverse-chronological anyway.
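A sketch of the paging loop without orderby=published (alt=json is the gdata option for JSON output instead of Atom; the v2 API shown here has since been retired):

import requests

BASE = "http://gdata.youtube.com/feeds/api/users/USERNAME/uploads"

start_index = 1
while True:
    resp = requests.get(BASE, params={
        "start-index": start_index,
        "max-results": 50,
        "alt": "json",
    })
    resp.raise_for_status()
    entries = resp.json()["feed"].get("entry", [])
    if not entries:
        break  # past the end of the uploads feed
    # ... process the 50 (or fewer) entries here ...
    start_index += 50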
This is a particularly easy mistake to make, and we have a blog post up explaining it in more detail:
http://apiblog.youtube.com/2012/03/keeping-things-fresh.html
The nice thing is that this is something that will no longer be a problem in version 3 of the API.