I need to edit multiple (10-20) noncontiguous rows in an Excel table via the Microsoft Graph API. My application receives a list of 10-20 strings as input. It then needs to find the rows associated with those strings (they are all in the same column) and update each row (in a separate column) with different values. I am able to update the rows using individual PATCH requests that specify the row index to update; however, sending 10-20 separate HTTP requests is too slow.
Here is what I have tried so far:
JSON batching. I created a JSON batch request where each request in the batch updates a row of data at a specific row index. However, only a few of the calls actually succeed, while the rest fail because they cannot acquire a lock to edit the Excel document. Using the dependsOn feature in JSON batching fixed the issue, but performance was hardly better than sending the update requests separately.
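A sequential $batch of row updates, as described above, can be sketched like this (the drive-item path, row indexes, and values are illustrative; dependsOn makes request "2" run only after request "1" completes, which is what serializes the lock acquisition):

```json
POST https://graph.microsoft.com/v1.0/$batch
{
  "requests": [
    {
      "id": "1",
      "method": "PATCH",
      "url": "/me/drive/items/{item-id}/workbook/tables('tableName')/rows/itemAt(index=4)",
      "headers": { "Content-Type": "application/json" },
      "body": { "values": [["a", "b", "c"]] }
    },
    {
      "id": "2",
      "dependsOn": ["1"],
      "method": "PATCH",
      "url": "/me/drive/items/{item-id}/workbook/tables('tableName')/rows/itemAt(index=9)",
      "headers": { "Content-Type": "application/json" },
      "body": { "values": [["d", "e", "f"]] }
    }
  ]
}
```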
Concurrent PATCH requests. If I use multiple threads to make the PATCH requests concurrently, I run into the same issue as above: a few of them succeed while the others fail because they cannot acquire a lock to edit the Excel document.
Filtering/sorting the table in order to perform a range update on the specific rows currently visible. I was able to apply a table filter using the Microsoft Graph API; however, it appears that you can only define two criteria to filter on, and I need to filter the data on 10-20 different values. Thus it does not seem like I can accomplish this with a range update, since I cannot filter on enough values at the same time and the rows cannot be sorted in a way that would leave them all in a contiguous block.
Is there any feature in the Microsoft Graph API I am not aware of that would enable me to do what I am proposing? Or any other idea/approach I am not thinking of? I would think that making bulk edits to noncontiguous rows in a range/table would be a common problem. I have searched through the API documentation/forums/etc. and cannot seem to find anything else that would help.
Any help/information in the right direction would be greatly appreciated!
After much trial and error I was able to solve my problem using filtering. I stumbled across this readme on filter apply: https://github.com/microsoftgraph/microsoft-graph-docs/blob/master/api-reference/v1.0/api/filter_apply.md which has an example request body of:
{
  "criteria": {
    "criterion1": "criterion1-value",
    "criterion2": "criterion2-value",
    "color": "color-value",
    "operator": {
    },
    "icon": {
      "set": "set-value",
      "index": 99
    },
    "dynamicCriteria": "dynamicCriteria-value",
    "values": {
    },
    "filterOn": "filterOn-value"
  }
}
Although this didn't help me immediately, it got me thinking in the right direction. I was unable to find any more documentation about how the request format works, but I started playing with the request body until I finally got something working. I changed "values" to an array of strings and "filterOn" to "values". Now, rather than being limited to criterion1 and criterion2, I can filter on whatever values I pass in the "values" array.
{
  "criteria": {
    "values": [
      "1",
      "2",
      "3",
      "4",
      "5"
    ],
    "filterOn": "values"
  }
}
After applying the filter I retrieve the visibleView range, which I discovered here: https://developer.microsoft.com/en-us/excel/blogs/additions-to-excel-rest-api-on-microsoft-graph/, like this:
/workbook/tables('tableName')/range/visibleView?$select=values
Lastly, I perform a bulk edit on the visibleView range with a PATCH request like this:
/workbook/tables('tableName')/range/visibleView
and a request body whose "values" array matches the dimensions (rows × columns) of the visible range I am updating.
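For example, with a three-column table where only the middle column changes across three visible rows, the PATCH body might look like this (a sketch; per the Excel REST range-update semantics, a null cell leaves the existing value unchanged):

```json
{
  "values": [
    [null, "new-value-1", null],
    [null, "new-value-2", null],
    [null, "new-value-3", null]
  ]
}
```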
Unfortunately this simple task was made difficult by a lack of Microsoft Graph API documentation, but hopefully the information here can help someone else.
I'm migrating from the old Google AdWords API to the new Google Ads API, using Google's PHP SDK.
This is the use case, where I'm stuck:
I feed a batch of keywords (paginated into keyword plans of 10k each) to generateHistoricalMetrics($keywordPlanResource) and collect the results.
To do so I followed the instructions at https://developers.google.com/google-ads/api/docs/keyword-planning/generate-historical-metrics and, especially, https://developers.google.com/google-ads/api/docs/keyword-planning/generate-historical-metrics#mapping_to_the_ui, using KeywordPlanAdGroupKeywords (with a single ad group) and not passing a specific date range for now, relying on the default value.
Further, I had to filter some of my keywords because of KEYWORD_HAS_INVALID_CHARS and KEYWORD_TEXT_TOO_LONG errors, but all the errors I'm aware of are gone now.
Now I found out that the KeywordPlanHistoricalMetrics object does not contain any keyword id (of the form customers//keywordPlanAdGroupKeywords/), so I have to rely on correct ordering. This seems OK, as the original ordering of keywords appears to be preserved within the results, per https://developers.google.com/protocol-buffers/docs/encoding#optional
But I still have the problem that count($keywordPlanServiceClient->generateHistoricalMetrics($keywordPlanResource)->getMetrics()) is lower than count($passedKeywords), where each of $passedKeywords was passed to
new KeywordPlanAdGroupKeyword([
    'text' => $passedKeyword,
    'match_type' => KeywordMatchType::EXACT,
    'keyword_plan_ad_group' => $planAdGroupResource
]);
Q: So I have two questions here:
Why does getMetrics() not yield the same number of results as the number of passed keywords?
I'm struggling with debugging at this point. Say I want to know which keywords were left out, either to provide more information here or just to skip them and let my customer know that these particular keywords were not queried. How can I do this when, although I have a keyword id for every passed keyword, I cannot match the returned metrics to them, because the KeywordPlanHistoricalMetrics object does not contain any keyword id?
Detail: While testing, I found that reducing the number of queried keywords reduces the share of lost keyword data:
10k of queried keywords - 4,72% loss,
5k - 2,12%,
2,5k - 0,78%,
1,25k - 0,43%,
625 - 0,3%,
500 - 0,24%,
250 - 0,03%
200 - 0,03% of lost keywords.
But I can't imagine that keywords are supposed to be queried one by one.
I have an app which manages a Google Drive structure and updates existing Google Sheets. On all of those Sheets documents I came across strange behaviour: once I add a certain amount of data (around 80-100 rows), the spreadsheet stops applying some formatting to the data.
App is C# / .NET Standard (4.7.2), using Google.Apis.Sheets.v4 nuget and GoogleSheetsApi-V4 as endpoint.
The application creates and sends a batch update which contains both data and formatting (styles). A batch update is a special kind of request in that it is transactional: if any of the sub-requests cannot be applied (e.g. a request is not parsed by Google's server), the whole request is rejected (meaning all sub-requests get dropped).
I'm aware of the request-count limits and I'm not hitting those. This behaviour occurs on all tested files (around 50 in total): the first 80-100 rows are formatted without issues, the rest has problems.
The reply from Google's API (after the batch update request) is empty and doesn't report any error.
For the formatting I'm using a batchRequest with several requests. I'm listing them in the order they are sent and applied:
UpdateCells with CellData.UserEnteredValue
MergeCells
UpdateCells with CellData.UserEnteredFormat
UpdateBorders
RepeatCell
...
Does Google spreadsheets API have some limits for formatting? What might be the cause that formatting is lost at this part? Is there something I should be aware about that might be limiting this behaviour?
Screenshot of broken formatting after a certain amount of data (e.g. WORD_WRAP and TEXT_ALIGNMENT are not set, but cells are still "bordered" and the background color is set for the columns):
I believe your goal is as follows.
From your comment, the formatting which is not applied comes from the "last request" in the batch update.
You want to apply WRAP of wrapStrategy to column 3 of rows 3 to 96 using the batchUpdate method of the Sheets API.
Modification points:
When I saw your request body, I thought that the range might not be correct. For example, suppose you want to apply WRAP of wrapStrategy as follows:
From 3 to 96 for rows.
Column 3.
Please modify range as follows.
Modified request body:
Please modify repeatCell of the last request in your request body as follows, and test it again.
{
  "repeatCell": {
    "cell": {
      "userEnteredFormat": {
        "horizontalAlignment": "CENTER",
        "verticalAlignment": "MIDDLE",
        "wrapStrategy": "WRAP"
      }
    },
    "fields": "userEnteredFormat.wrapStrategy,userEnteredFormat.verticalAlignment,userEnteredFormat.horizontalAlignment",
    "range": {
      "startRowIndex": 3,
      "endRowIndex": 96,
      "startColumnIndex": 3,
      "endColumnIndex": 4,
      "sheetId": ### <--- Please set sheet ID.
    }
  }
}
Note:
By the way, regarding the 2nd request (mergeCells) in your request body: in this case the situation is not changed, because only a single cell is used. Please be careful about this.
References:
Method: spreadsheets.batchUpdate
GridRange
I want to get only results that are related to health, and for that I used the API below.
https://maps.googleapis.com/maps/api/place/search/json?location=23.0225,72.5714&radius=500&types=hospital&sensor=false&key="API_KEY"
The API above gives me results related to health, but I don't want to pass the location (lat/long) parameter.
Instead, I want to search with the "input" param, like below.
https://maps.googleapis.com/maps/api/place/autocomplete/json?input=Ahmedabad&types=hospital&radius=500&key="API_KEY"
But the above gives me an error like this:
{
  "predictions" : [],
  "status" : "INVALID_REQUEST"
}
How can I get this type of results?
Thanks in Advance.
The places autocomplete request from your question has several issues.
https://maps.googleapis.com/maps/api/place/autocomplete/json?input=Ahmedabad&types=hospital&radius=500&key=API_KEY
If you remove the location parameter, you should remove the radius parameter as well. It doesn't make sense without location.
hospital is not an allowed value in the types filter of autocomplete. If you check the documentation, you will see that the only possible values are:
geocode
address
establishment
(regions)
(cities)
Also, Place Autocomplete returns at most 5 suggestions, so I think you are looking for something different.
Also, be aware that radar search mentioned in the comments is now deprecated and will stop working in June 2018.
https://maps-apis.googleblog.com/2017/06/announcing-deprecation-of-place-add.html
I would suggest having a look at Places API text search. Your query with Places API text search might be
https://maps.googleapis.com/maps/api/place/textsearch/json?query=Ahmedabad&type=hospital&key=YOUR_API_KEY
The text search can return up to 60 results, divided into pages of 20 results. For your particular example, this query returned the expected hospital results.
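As a sketch, building and paging that text-search call could look like this in Python (per the Places API docs, the next_page_token from one response is passed back as pagetoken to fetch the next page of 20 results, and the token becomes valid only a short delay after it is issued):

```python
from urllib.parse import urlencode

TEXTSEARCH = "https://maps.googleapis.com/maps/api/place/textsearch/json"

def textsearch_url(query, place_type=None, api_key="YOUR_API_KEY", pagetoken=None):
    """Build a Places text-search URL. Pass the next_page_token from a
    previous response as `pagetoken` to request the next page of results."""
    params = {"query": query, "key": api_key}
    if place_type:
        params["type"] = place_type
    if pagetoken:
        params["pagetoken"] = pagetoken
    return TEXTSEARCH + "?" + urlencode(params)
```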
Hope this helps!
I'm using Firebase for my iOS application and I'm having trouble implementing infinite scroll and data filtering together.
What I need to do is:
Display items ordered/filtered on multiple properties (location, category, status, ...)
Implement infinite scroll when the user scrolled to bottom of the screen.
I tried to think about some solutions:
First, I thought I'd query the data with the necessary conditions, limit the number of records using queryLimitedToFirst(N), and increase N when I need to load the next items. But Firebase can only filter on one property at a time, and it's also wasteful to reload the data, so I was thinking about a second solution.
As approaches are suggested from Frank van Puffelen (Query based on multiple where clauses in firebase):
filter most on the server, do the rest on the client
Yes, exactly like that. I'll execute queryOrderedByKey, queryStartingAtValue, and queryEndingAtValue to implement infinite scroll, pull down the remaining data, and filter it on the client. But there is one problem: after filtering on the client I might not have enough items to display to the user.
For example: each time the query runs, I receive 10 items. After filtering on the client, I may be left with only 5 (possibly 0) items that meet the conditions to display to the user.
I don't want this, because the user may think there is a problem.
Can I please get some pointers on this? If I didn't structure the data properly, can I also get some tips there?
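The "filter most on the server, do the rest on the client" loop described above can be sketched generically (Python used as pseudocode; `fetch_page` is a placeholder for a queryOrderedByKey/queryStartingAtValue page fetch, and the idea is simply to keep pulling pages until the filtered page is full):

```python
def fill_page(fetch_page, keep, want):
    """Pull pages from the backend and filter client-side until `want`
    items pass the filter or the data runs out.

    fetch_page(cursor) -> (items, next_cursor_or_None) stands in for a
    keyed range query; `keep(item)` is the client-side filter.
    """
    results, cursor = [], None
    while len(results) < want:
        items, cursor = fetch_page(cursor)
        results.extend(i for i in items if keep(i))
        if cursor is None:  # no more data on the server
            break
    return results[:want], cursor
```

This trades extra reads for a consistently full screen, which addresses the "only 5 of 10 items survive the filter" problem at the cost of unbounded fetching when matches are sparse.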
jqGrid is powered by remote json data in ASP .NET MVC2 application.
On page load, two requests are sent to the server: one to retrieve the whole HTML page with the colModel, and a second, invoked by jqGrid, to retrieve the data.
The colModel is stored in the database and depends on user rights and user configuration. Creating the colModel requires a number of SQL calls, which take a while.
Both requests require building the colModel on the server. For data retrieval, the colModel is required to get the correct number of columns to build the select statement.
Currently this colModel is built twice, once for each request. The total number of records also has to be returned, which is slow on large data sets (it causes a scan of the whole result in the PostgreSQL server).
How can I speed things up?
How can I build the colModel only once and send it and the data in the same request?
I agree that an extension of jqGrid to support loading some parts of colModel in one Ajax request would be very helpful. I posted a feature request for this about a year ago, for example.
What you can do now:
If I understand your requirements correctly, you need to show the user a subset of the columns depending on the user's permissions. One can implement this requirement in one Ajax request. What you can do is: first, don't send the "hidden" data, or send it as empty strings. Second, send the list of columns which should be hidden to the client, for example inside the userData part of the JSON response. In this way you can implement a variable number of columns in jqGrid. To get better performance with many hidden columns, I would recommend calling showCol or hideCol inside beforeProcessing and hiding/showing the columns while the grid is still empty; it speeds up showCol and hideCol dramatically. If needed, you can include an additional call of clearGridData.
You should optimize the retrieval of the colModel. I don't understand why it should be slow; it all depends on your implementation. In any case, I am sure the retrieval can be made really quick.
To improve the performance of the database request, you can consider not filling the records field of the JSON response and setting total to page + 1. This enables the "Next" button of the pager. You should set total equal to page only when the server returns fewer rows than rowNum (the number of rows per page); in most cases this is a good criterion to detect the last page. You can additionally hide some fields on the pager, like the "Last" button and the sp_1_... span which shows the total number of pages. You can do this by using the pgtext: "Page {0}" option, or use pginput: false to have no pager input at all. viewrecords should be false (its default value). After all these customizations you will no longer need to calculate the total number of records, which improves the performance of the database request for large data.
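Putting those suggestions together, a server response could be shaped like this (a sketch: the field names follow jqGrid's default jsonReader, hiddenColumns inside userData is an invented name you would read in beforeProcessing, records is intentionally omitted, and total stays at page + 1 while a full page comes back):

```json
{
  "page": 2,
  "total": 3,
  "rows": [
    { "id": "17", "cell": ["", "visible value 1", "visible value 2"] }
  ],
  "userData": {
    "hiddenColumns": ["secretColumn"]
  }
}
```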