Grails ExtJS pagination

I am using ExtJS 3.3.1 with Grails 2.0 to paginate results on screen, but it does not work as I expect.
I followed the tip posted here: Grails extJS grid paging
JS page
paramNames: {start:'offset',limit:'max',sort:'sort',dir:'order'},
baseParams: {offset:0,max:10},
The pagingToolbar:
this.gridBBar = new Ext.PagingToolbar({
    pageSize: 10,
    store: this.gridStore,
    displayInfo: true,
    displayMsg: 'Hiển thị {0} - {1} mục tìm được của {2} kết quả', // 'Displaying {0} - {1} items found of {2} results'
    emptyMsg: 'Không tìm thấy dữ liệu' // 'No data found'
});
Controller:
def result = Floor.createCriteria().list(
    max: params.int('max') ?: 100,
    offset: params.int('offset') ?: 0
)
render([count: result.totalCount, data: result] as JSON)
But the paging button (Next) was disabled, because the store contained only 10 items and assumed there was nothing more to retrieve.
When I change the offset to 10:
paramNames: {start:'offset',limit:'max',sort:'sort',dir:'order'},
baseParams: {offset:10,max:10},
the pagination works, except for one strange thing: the grid always displays the next 10 results (records 10-20 after the 1st click, 20-30 after the 2nd), never the current first 10. I don't know what the correct way to combine ExtJS and Grails pagination is. If you have experience with this problem, could you please share some information?
Thank you so much.

Oh how lucky I am. I've got it!
Based on these 2 articles:
1. http://grails.1312388.n4.nabble.com/Find-Count-for-pagination-And-Objects-for-Criteria-td1368528.html
and
2. http://blog.jeffshurts.com/2010/04/grails-pagination-and-criteriabuilder/
I have found the explanation. Because the "count" property returned in the JSON came from result.size(), it always equaled the pageSize of the grid store's PagingToolbar, so the store concluded there were no more results to retrieve and disabled the navigation buttons.
The key here is to return the real total count of the query (without the pagination constraints applied). Normally, createCriteria().list {} returns an ArrayList. But by passing the paging params to list as below (refer to link 1):
DomainClass.createCriteria().list(max: x, offset: y) {
    // do not pass max: x, offset: y here inside the body
}
Grails implicitly returns the result as a PagedResultList (refer to link 2), which provides getTotalCount(). There is no official Grails documentation that mentions this magical behaviour.
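For completeness, here is a minimal sketch (not from the original post) of how the ExtJS 3 store can be wired so that the PagingToolbar sees that total: the count and data keys returned by the controller above are mapped to the reader's totalProperty and root. The URL and the field list are assumptions for illustration.
// Sketch only: an ExtJS 3 JsonStore mapping the Grails JSON ({count: ..., data: [...]}).
var gridStore = new Ext.data.JsonStore({
    url: '/floor/list',               // hypothetical controller action
    root: 'data',                     // the 'data' key from render(...)
    totalProperty: 'count',           // the 'count' key -> lets the toolbar enable Next/Prev
    fields: ['id', 'name'],           // assumed fields
    paramNames: {start: 'offset', limit: 'max', sort: 'sort', dir: 'order'},
    baseParams: {offset: 0, max: 10}  // first page starts at offset 0
});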
And my problem was solved.

Related

Vaadin Flow Grid with Row-Index

How do I add a row-index column to a grid that will not be sorted along with user sorting of the rows?
The solution must not involve changes to any Polymer template; it should be done in Java.
Index starting at 0
grid.addColumn(TemplateRenderer.of("[[index]]"));
This works because the grid's frontend already exposes an index property for each row.
Index starting at 1
Edit: this is actually a much simpler way to achieve it than the way I proposed before. You can set a client-side renderer for the Web Component with executeJs.
Yes, it is still a bit "hacky", but it is miles better than my own approach.
grid.addColumn(item -> "").setKey("rowIndex");
grid.addAttachListener(event -> {
    grid.getColumnByKey("rowIndex").getElement().executeJs(
        "this.renderer = function(root, column, rowData) {root.textContent = rowData.index + 1}"
    );
});
Related GitHub and Vaadin Forum threads:
https://vaadin.com/forum/thread/17471146/grid-start-row-count-from-1,
https://github.com/vaadin/vaadin-grid/issues/1386,
https://vaadin.com/forum/thread/18287678/vaadin-grid-exclude-specific-column-from-sorting,
https://github.com/vaadin/vaadin-grid-flow/issues/803

Spring-data-elasticsearch: Result window is too large (index.max_result_window)

We retrieve information from Elasticsearch 2.7.0 and allow the user to page through the results. When the user requests a high page number, we get the following error message:
Result window is too large, from + size must be less than or equal to:
[10000] but was [10020]. See the scroll api for a more efficient way
to request large data sets. This limit can be set by changing the
[index.max_result_window] index level parameter
The thing is we use pagination in our requests so I don't see why we get this error:
@Autowired
private ElasticsearchOperations elasticsearchTemplate;
...
elasticsearchTemplate.queryForPage(buildQuery(query, pageable), Document.class);
...
private NativeSearchQuery buildQuery(String query, Pageable pageable) {
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    boolQueryBuilder.should(QueryBuilders.boolQuery().must(QueryBuilders.termQuery(term, query.toUpperCase())));
    NativeSearchQueryBuilder nativeSearchQueryBuilder = new NativeSearchQueryBuilder().withIndices(DOC_INDICE_NAME)
            .withTypes(indexType)
            .withQuery(boolQueryBuilder)
            .withPageable(pageable);
    return nativeSearchQueryBuilder.build();
}
I don't understand the error, because we retrieve pageable.size (20 elements) every time... Do you have any idea why we get this?
Unfortunately, even when paging results, Spring Data Elasticsearch asks Elasticsearch for a much larger result window. So you have two options: the first is to change the value of this parameter.
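The limit is an index-level setting, so for the first option it is changed on the index itself rather than in the Spring code. A minimal sketch of raising it, shown here as a plain REST call from Node.js; the index name documents and the new value are assumptions:
// Sketch: raise index.max_result_window on the index (the default is 10000).
// Requires Node 18+ for the built-in fetch; index name and value are examples.
const raiseResultWindow = async () => {
    const res = await fetch('http://localhost:9200/documents/_settings', {
        method: 'PUT',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({index: {max_result_window: 50000}})
    });
    console.log(await res.json()); // expect {"acknowledged": true}
};
raiseResultWindow();
Keep in mind that raising the window only postpones the problem; deep from + size pagination gets more expensive the deeper you go, which is why the error message itself points at the scroll API for large data sets.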
The second is to use the scan/scroll API. However, as far as I understand, in this case the pagination is done manually, as it is meant for infinite sequential reading (like scrolling with your mouse).
A sample:
List<Pessoa> allItens = new ArrayList<>();
// 'build' is the NativeSearchQuery built as in the question; scan() opens the scroll
// context and returns its id, scroll() then fetches the hits page after page.
String scrollId = elasticsearchTemplate.scan(build, 1000, false, Pessoa.class);
Page<Pessoa> page = elasticsearchTemplate.scroll(scrollId, 5000L, Pessoa.class);
while (true) {
    if (!page.hasContent()) {
        break;
    }
    allItens.addAll(page.getContent());
    page = elasticsearchTemplate.scroll(scrollId, 5000L, Pessoa.class);
}
This code shows how to read ALL the data from your index; you then have to slice out the requested page yourself while scrolling.

Is there any limit for source in typeahead?

I am using the jQuery typeahead from RunningCoder. Typeahead works well if I have a few records in my source, but it does not work if my source has around 500 records.
It is not related to the result count, which can be managed by the maxItem parameter. There is also no issue in getting the JSON string from the server, as I can print it without any problem.
I know that ideally I should not preload all the records in my page and should search based on the input instead, but in my case hitting the server for each search is not an option, and I want to search the static data I already have in my view. Here is my code:
$.typeahead({
    input: "#List .typeahead",
    minLength: 3,
    templateValue: "{{Text}}",
    display: ["Text", "Subtext"],
    emptyTemplate: 'No results for "{{query}}"',
    template: '<span>' +
        '<span class="result" id="{{Value}}">{{Text}}</span>' +
        '</span>',
    source: {
        Issuer: {
            data: @Html.Raw(Model.EveryThing)
        }
    }
});
In the code above, if Model.EveryThing has 40-50 records everything works fine, but it does not work with around 500 records.
ADDITIONAL INFO:
After figuring out the issue, I would like to explain it a bit, as it may help someone. With the code above you can search the list on two fields, i.e. Text and Subtext, but the user sees only Text in the results and can then pick from the matching options. This is very useful when you want to search on more than one field but show just one.
I figured it out after creating sample data of my own rather than relying on the server response. The issue is not the length of the result, but null entries in it.
In my data a few objects had Subtext as NULL, and that caused the issue. I fixed it by replacing the NULL with an empty string, and it works as expected now.
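A minimal sketch of that normalization done on the client, assuming the serialized list is available as a plain array before it is handed to the source (the array name is hypothetical; the same replacement can of course be done on the server before serializing):
// Sketch: replace null Subtext values with empty strings before building the source.
var everything = [];                   // assume this holds the array rendered by Html.Raw(Model.EveryThing)
everything.forEach(function (item) {
    if (item.Subtext == null) {        // == null catches both null and undefined
        item.Subtext = "";             // an empty string instead of NULL keeps typeahead happy
    }
});
// then pass it on as: source: { Issuer: { data: everything } }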

Swagger UI Sort not working for version v2.1.4

Hi, I have implemented Swagger UI using a JSON object, but the issue is that the "paths" (API calls) are not shown in alphanumeric order, even though I have set "apisSorter" to alpha in the JavaScript.
The JSON output shows up correctly sorted in the web console when I use the debug tool, but on the web page the paths are shown in the order defined in the annotations, which is not alphanumeric.
Below is the code:
docExpansion: "none",
jsonEditor: false,
apisSorter: "alpha",
defaultModelRendering: 'schema',
showRequestHeaders: false
I even tried to sort the JSON on the server side, but that did not help.
Below is the code:
usort($swg_result, function ($a, $b) { // sort the array using a user-defined function
    return $b->paths > $a->paths ? -1 : 1; // compare the paths
});
Any help is appreciated.
Changing to this solved the issue:
apisSorter: "alpha",
operationsSorter : "alpha",
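In Swagger UI 2.x both sorters are options of the SwaggerUi constructor. A minimal sketch of where they go; the url, dom_id and the other options are placeholders based on the question:
// Sketch: swagger-ui 2.x initialization with both sorters set to "alpha".
window.swaggerUi = new SwaggerUi({
    url: "/api-docs/swagger.json",     // hypothetical spec URL
    dom_id: "swagger-ui-container",
    docExpansion: "none",
    jsonEditor: false,
    apisSorter: "alpha",               // sorts the top-level API groups
    operationsSorter: "alpha",         // sorts the operations (paths) within each group
    defaultModelRendering: "schema",
    showRequestHeaders: false
});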

How to implement pagination when using amazon Dynamo DB in rails

I want to use Amazon DynamoDB with Rails, but I have not found a way to implement pagination.
I will use AWS::Record::HashModel as the ORM.
This ORM supports limits like this:
People.limit(10).each {|person| ... }
But I could not figure out how to implement the following MySQL query in DynamoDB.
SELECT *
FROM `People`
LIMIT 1 , 30
You issue queries using LIMIT. If the subset returned does not contain the full table, a LastEvaluatedKey value is returned. You use this value as the ExclusiveStartKey in the next query. And so on...
From the DynamoDB Developer Guide.
You can provide 'page-size' in your query to set the result set size.
The DynamoDB response contains 'LastEvaluatedKey', which indicates the last key of that page. If the response doesn't contain 'LastEvaluatedKey', there are no results left to fetch.
Use the 'LastEvaluatedKey' as the 'ExclusiveStartKey' when fetching the next time.
I hope this helps.
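A minimal sketch of that loop, shown here with the AWS SDK for JavaScript (the question is about Rails, but the Limit / ExclusiveStartKey / LastEvaluatedKey parameters are the same in the Ruby SDK; the table name and page size are assumptions):
// Sketch: page through the People table using Limit + ExclusiveStartKey.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function getPage(lastKey) {
    const params = {
        TableName: 'People',   // hypothetical table name
        Limit: 30              // page size
    };
    if (lastKey) {
        params.ExclusiveStartKey = lastKey; // continue right after the previous page
    }
    const res = await docClient.scan(params).promise();
    // res.Items is the page; res.LastEvaluatedKey (if present) is the cursor
    // to pass back in for the next page.
    return { items: res.Items, nextKey: res.LastEvaluatedKey };
}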
DynamoDB Pagination
Here's a simple copy-paste-run proof of concept (Node.js) for stateless forward/reverse navigation with DynamoDB. In summary: each response includes the navigation history, allowing the user to explicitly and consistently request either the next or the previous page (while next/prev params exist):
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page
Considerations:
If your ids are large and/or users might do a lot of navigation, then the potential size of the next/prev params might become too large.
Yes you do have to store the entire reverse path - if you only store the previous page marker (per some other answers) you will only be able to go back one page.
It won't handle changing pageSize midway, consider baking pageSize into the next/prev value.
base64 encode the next/prev values, and you could also encrypt.
Scans are inefficient, while this suited my current requirement it won't suit all!
// demo.js
const mockTable = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
const getPagedItems = (pageSize = 5, cursor = {}) => {
// Parse cursor
const keys = cursor.next || cursor.prev || [] // fwd first
let key = keys[keys.length-1] || null // eg ddb's PK
// Mock query (mimic dynamodb response)
const Items = mockTable.slice(parseInt(key) || 0, pageSize+key)
const LastEvaluatedKey = Items[Items.length-1] < mockTable.length
? Items[Items.length-1] : null
// Build response
const res = {items:Items}
if (keys.length > 0) // add reverse nav keys (if any)
res.prev = keys.slice(0, keys.length-1)
if (LastEvaluatedKey) // add forward nav keys (if any)
res.next = [...keys, LastEvaluatedKey]
return res
}
// Run test ------------------------------------
const runTest = () => {
const PAGE_SIZE = 6
let x = {}, i = 0
// Page to end
while (i == 0 || x.next) {
x = getPagedItems(PAGE_SIZE, {next:x.next})
console.log(`Page ${++i}: `, x.items)
}
// Page back to start
while (x.prev) {
x = getPagedItems(PAGE_SIZE, {prev:x.prev})
console.log(`Page ${--i}: `, x.items)
}
}
runTest()
I faced a similar problem.
The generic pagination approach is to use a "start index" or "start page" and a "page length".
The "ExclusiveStartKey" and "LastEvaluatedKey" based approach is very DynamoDB specific.
I feel this DynamoDB-specific implementation of pagination should be hidden from the API client/UI.
Also, if the application is serverless, using a service like Lambda, it will not be possible to maintain state on the server. On the other side, the client implementation would become very complex.
I came up with a different approach, which I think is generic (and not specific to DynamoDB); a sketch follows after the drawbacks below.
1. When the API client specifies the start index, fetch all the keys from the table and store them in an array.
2. Find the key for the start index, which is specified by the client, from the array.
3. Make use of the ExclusiveStartKey and fetch the number of records specified in the page length.
4. If the start index parameter is not present, the above steps are not needed and we don't need to specify the ExclusiveStartKey in the scan operation.
This solution has some drawbacks:
We will need to fetch all the keys when the user requests pagination with a start index.
We will need additional memory to store the ids and the indexes.
There are additional database scan operations (one or more, to fetch the keys).
But I feel this is a very easy approach for the clients using our APIs. Backward navigation works seamlessly, and if the user wants to see the "nth" page, that is possible.
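A minimal sketch of that approach with the AWS SDK for JavaScript; the table name, the key attribute id, and the page size are assumptions for illustration:
// Sketch: emulate "start index" pagination by first collecting all keys,
// then scanning from the key just before the requested index.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function getPageByIndex(startIndex, pageLength) {
    const params = { TableName: 'People', Limit: pageLength }; // hypothetical table
    if (startIndex > 0) {
        // 1. Fetch only the keys of the whole table (several scans may be
        //    needed, since each scan returns at most 1 MB of data).
        const keys = [];
        let lastKey;
        do {
            const keyParams = { TableName: 'People', ProjectionExpression: 'id' }; // 'id' = assumed key attribute
            if (lastKey) keyParams.ExclusiveStartKey = lastKey;
            const res = await docClient.scan(keyParams).promise();
            keys.push(...res.Items);
            lastKey = res.LastEvaluatedKey;
        } while (lastKey);
        // 2. The key just before the requested start index becomes the ExclusiveStartKey.
        params.ExclusiveStartKey = keys[startIndex - 1];
    }
    // 3. Fetch pageLength records starting at the requested index.
    const page = await docClient.scan(params).promise();
    return page.Items;
}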
In fact I faced the same problem, and I noticed that LastEvaluatedKey and ExclusiveStartKey do not work well, especially when using Scan, so I solved it like this:
GET /?page_no=1&page_size=10 =====> first page
The response will contain the count of records and the first 10 records;
retry and increase the page number until all records have come.
The code is below (PS: I am using Python):
# 'response' is assumed to be the result of the full table Scan mentioned above.
first_index = (page_no - 1) * page_size
second_index = page_no * page_size
if second_index > len(response['Items']):
    second_index = len(response['Items'])
return {
    'statusCode': 200,
    'count': response['Count'],
    'response': response['Items'][first_index:second_index]
}
