I am using PagedList to display paging on my search payment results page. I want to display only 5 payments per page. The search criteria I am testing returns 15 records. I expect only 5 records on the first page, with page numbers 1, 2, 3 at the bottom. I see the page numbers as expected, but all 15 records are displayed on every page. I debugged the code and found that StaticPagedList is returning 15 records instead of 5. My controller action code is given below:
public ViewResult ViewPayment(int? billerId, int? billAccount, int? page)
{
var pageIndex = (page ?? 1) - 1;
var pageSize = 5;
List<Payment> paymentList = new List<Payment>();
paymentList = _paymentBusiness.GetPayments(billerId, billAccount);
var paymentsAsIPagedList = new StaticPagedList<Payment>(paymentList, pageIndex + 1, pageSize, paymentList.Count);
ViewBag.OnePageOfPayments = paymentsAsIPagedList;
return View(paymentList);
}
Please let me know where I have gone wrong.
You should be querying only 5 records from your business layer. Right now you are not passing the page number or page size to it, so it returns all 15 records, and StaticPagedList expects to be given only the items for the current page (plus the total count), which is why every page shows the full list. It is also a waste to query all of the records if you are only going to display some of them anyway.
public ViewResult ViewPayment(int? billerId, int? billAccount, int? page)
{
int pageNum = page ?? 1;
int pageSize = 5;
IPagedList<Payment> paymentPage = _paymentBusiness.GetPayments(billerId, billAccount, pageNum, pageSize);
return View(paymentPage);
}
// Business layer
public IPagedList<Payment> GetPayments(int? billerId, int? billAccount, int page, int pageSize)
{
IQueryable<Payment> payments = db.Payments.Where(p => ....).OrderBy(p => ...);
return new PagedList<Payment>(payments, page, pageSize);
}
I would suggest you do something like the above: change it so the business/data layer gives you back the paged list. It can get the 5 results and the total count with two queries, and return the page model to your controller.
The example builds the page using PagedList<T>, which runs Skip() and Take() internally. Remember to order your results before creating the page.
Importantly, we no longer fetch all the items from the database, only the small subset we are interested in.
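For illustration, this is roughly what PagedList<T> runs under the hood (a sketch only; the Id ordering property is an assumption):
// Rough equivalent of what PagedList<T> does internally (illustrative).
int page = 2, pageSize = 5;
var pageItems = payments                  // the IQueryable<Payment> from above
    .OrderBy(p => p.Id)                   // always order before paging
    .Skip((page - 1) * pageSize)          // skip the earlier pages
    .Take(pageSize)                       // take one page worth of rows
    .ToList();
int totalCount = payments.Count();        // second query: total for the pager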
If you are using e.g. plain ADO.NET, which requires you to write the SQL yourself, you can use a query like:
SELECT * FROM Payments ORDER BY id OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;
The OFFSET should be set to (page - 1) * pageSize, and the number after FETCH NEXT is the page size. Note this syntax only works on SQL Server 2012+; other databases have similar capabilities.
Also, with ADO.NET you will have to make the two queries (page + total count) manually, and use StaticPagedList instead of PagedList, which lets you hand it the subset directly.
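Here is a minimal sketch of that approach against SQL Server. The MapPayment row-mapping helper is a hypothetical name, not part of any library:
using System.Collections.Generic;
using System.Data.SqlClient;
using PagedList;

// Sketch: the two manual queries (total count + one page), combined
// into a StaticPagedList for the view.
public StaticPagedList<Payment> GetPaymentPage(string connectionString, int page, int pageSize)
{
    var items = new List<Payment>();
    int totalCount;
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        // Query 1: the total count, needed to render the pager
        using (var countCmd = new SqlCommand("SELECT COUNT(*) FROM Payments", conn))
            totalCount = (int)countCmd.ExecuteScalar();
        // Query 2: only the rows for the requested page
        const string sql = @"SELECT * FROM Payments ORDER BY id
                             OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY";
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@Offset", (page - 1) * pageSize);
            cmd.Parameters.AddWithValue("@PageSize", pageSize);
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    items.Add(MapPayment(reader)); // hypothetical row mapper
        }
    }
    return new StaticPagedList<Payment>(items, page, pageSize, totalCount);
}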
An alternate approach to PagedList (which does not provide async methods) is to use DataTables.net (https://datatables.net).
It is a client-side JavaScript framework for paged tables that can be configured down to very low levels. It would let you do what you need, and also gives you custom sorting, caching, searching, and many other features out of the box.
Just a suggestion: I have used the PagedList library myself in the past, and since discovering DataTables.net I have not looked back. It is a great library and makes your life easy.
We retrieve information from Elasticsearch 2.7.0 and we allow the user to go through the results. When the user requests a high page number we get the following error message:
Result window is too large, from + size must be less than or equal to: [10000] but was [10020]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter
The thing is, we use pagination in our requests, so I don't see why we get this error:
@Autowired
private ElasticsearchOperations elasticsearchTemplate;
...
elasticsearchTemplate.queryForPage(buildQuery(query, pageable), Document.class);
...
private NativeSearchQuery buildQuery(String query, Pageable pageable) {
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
boolQueryBuilder.should(QueryBuilders.boolQuery().must(QueryBuilders.termQuery(term, query.toUpperCase())));
NativeSearchQueryBuilder nativeSearchQueryBuilder = new NativeSearchQueryBuilder().withIndices(DOC_INDICE_NAME)
.withTypes(indexType)
.withQuery(boolQueryBuilder)
.withPageable(pageable);
return nativeSearchQueryBuilder.build();
}
I don't understand the error, because we retrieve pageable.size (20 elements) every time... Do you have any idea why we get this?
Unfortunately, even when paging results, Spring Data Elasticsearch requests a much larger result window from Elasticsearch. So you have two options. The first is to increase the value of the index.max_result_window parameter.
The second is to use the scan/scroll API; however, as far as I understand, in this case the pagination has to be done manually, since scrolling is meant for infinite sequential reading (like scrolling with your mouse).
A sample:
List<Pessoa> allItens = new ArrayList<>();
// 'build' is the NativeSearchQuery; scan() opens the scroll and returns its id
String scrollId = elasticsearchTemplate.scan(build, 1000, false, Pessoa.class);
Page<Pessoa> page = elasticsearchTemplate.scroll(scrollId, 5000L, Pessoa.class);
while (page.hasContent()) {
    allItens.addAll(page.getContent());
    // fetch the next batch until an empty page comes back
    page = elasticsearchTemplate.scroll(scrollId, 5000L, Pessoa.class);
}
This code shows how to read ALL the data from your index; you have to extract the requested page yourself while scrolling.
The ASP.NET_SessionState table grows all the time, already at 18 GB, with no sign of expired sessions ever being deleted.
We have tried to execute DynamoDBSessionStateStore.DeleteExpiredSessions, but it seems to have no effect.
Our system is running fine; sessions are created and end-users are not aware of the issue. However, it doesn't make sense that the table keeps growing all the time...
We have triple-checked permissions/security and everything seems to be in order. We use SDK version 3.1.0. What else remains to be checked?
Your table, at over 18 GB, is quite large in this context, and after looking at the code for the DeleteExpiredSessions method on GitHub, it does not surprise me that this isn't working.
Here is the code:
public static void DeleteExpiredSessions(IAmazonDynamoDB dbClient, string tableName)
{
LogInfo("DeleteExpiredSessions");
Table table = Table.LoadTable(dbClient, tableName, DynamoDBEntryConversion.V1);
ScanFilter filter = new ScanFilter();
filter.AddCondition(ATTRIBUTE_EXPIRES, ScanOperator.LessThan, DateTime.Now);
ScanOperationConfig config = new ScanOperationConfig();
config.AttributesToGet = new List<string> { ATTRIBUTE_SESSION_ID };
config.Select = SelectValues.SpecificAttributes;
config.Filter = filter;
DocumentBatchWrite batchWrite = table.CreateBatchWrite();
Search search = table.Scan(config);
do
{
List<Document> page = search.GetNextSet();
foreach (var document in page)
{
batchWrite.AddItemToDelete(document);
}
} while (!search.IsDone);
batchWrite.Execute();
}
The above algorithm is executed in two parts. First it performs a Search (table scan) using a filter to identify all expired records. These are then added to a DocumentBatchWrite request that is executed as the second step.
Since your table is so large, the table scan step will take a very, very long time to complete before a single record is deleted. Basically, the above algorithm is useful for lazy garbage collection on small tables, but does not scale well for large tables.
As best I can tell, the execution never actually gets past the table scan, and you may be consuming all of the read throughput of your table.
A possible solution would be to run a slightly modified version of the above method yourself: execute the DocumentBatchWrite inside the do-while loop, so that records start to be deleted before the table scan has concluded.
That would look like:
public static void DeleteExpiredSessions(IAmazonDynamoDB dbClient, string tableName)
{
LogInfo("DeleteExpiredSessions");
Table table = Table.LoadTable(dbClient, tableName, DynamoDBEntryConversion.V1);
ScanFilter filter = new ScanFilter();
filter.AddCondition(ATTRIBUTE_EXPIRES, ScanOperator.LessThan, DateTime.Now);
ScanOperationConfig config = new ScanOperationConfig();
config.AttributesToGet = new List<string> { ATTRIBUTE_SESSION_ID };
config.Select = SelectValues.SpecificAttributes;
config.Filter = filter;
Search search = table.Scan(config);
do
{
// Perform a batch delete for each page returned
DocumentBatchWrite batchWrite = table.CreateBatchWrite();
List<Document> page = search.GetNextSet();
foreach (var document in page)
{
batchWrite.AddItemToDelete(document);
}
batchWrite.Execute();
} while (!search.IsDone);
}
Note: I have not tested the above code, but it is just a simple modification of the open source version, so it should work correctly; it would, however, need to be tested to ensure the pagination works correctly on a table whose records are being deleted as it is scanned.
I am not using Entity Framework; I have created a custom Data Access Layer.
I want to do custom filtering and custom pagination.
Does grid.mvc support custom pagination and custom filtering?
I have checked the documentation at http://gridmvc.codeplex.com/documentation but cannot find how to do it.
I have tried
EnablePaging = true;
Pager.PageSize = 10;
but it does pagination on already-paged data.
If anybody has done this, please suggest how it can be done.
Thanks in advance.
I haven't worked with grid.mvc, but you can do paging and filtering on virtually any sequence in .NET using LINQ.
var items = new List<string>();
// Add 10 items for testing
for (int i = 0; i < 10; i++)
{
items.Add("item" + i.ToString());
}
var pagedAndFilteredItems = items
.Where(x => x != "item0") // Filter the first item from the sequence
.Skip(3) // Number of items to skip over
.Take(3) // Number of items to put into the current view.
.ToList(); // Executes the query
// pagedAndFilteredItems now contains
// "item4"
// "item5"
// "item6"
// Which corresponds to the second page if your page size is 3
You could then provide only one page at a time to grid.mvc. You might have to build your own next and back links, but if you are pulling data from an external source, this is far more scalable than handing the entire dataset to the grid.
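For the pager links themselves, the arithmetic is simple; here is a sketch (page is 1-based, and the names are illustrative):
// Translating a 1-based page number into Skip/Take, plus the values
// you would need to build your own next/back links.
int page = 2, pageSize = 3;
var filtered = items.Where(x => x != "item0").ToList();
int totalPages = (int)Math.Ceiling((double)filtered.Count / pageSize);
var pageItems = filtered
    .Skip((page - 1) * pageSize)
    .Take(pageSize)
    .ToList();
bool hasPrevious = page > 1;      // render a "back" link?
bool hasNext = page < totalPages; // render a "next" link?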
Of course, another option is to do the filtering in LINQ as shown with the Where clause and then have grid.mvc do the pagination.
I have an app that uses Breeze to query data. I want to check the local cache first, and then the server if no results are returned (I followed John Papa's SPA JumpStart course). However, I have found a flaw in my logic which I am not sure how to fix. Assume I have 10 items that match my query.
Situation 1 (which works): I go to the list page (Page A) displaying all 10. It hits the server, as the cache is empty, and adds all 10 to the cache. Then I go to the page displaying 1 result (Page B), which is found in the cache. So all good.
Situation 2 (the problem): I go to the page displaying 1 record first (Page B). Then I go to my list page (Page A), which checks the cache, finds 1 record, and because of the line if (recordsInCache.length > 0) it exits and shows only that 1 record.
I somehow need to know that there are more records on the server (9) that are NOT in the cache, i.e. the total record count for this query is actually 10; I have 1, therefore I have to hit the server for the other 9.
Here is my query for Page A:
function getDaresToUser(daresObservable, criteria, forceServerCall)
{
var query = EntityQuery.from('Dares')
.where('statusId', '!=', enums.dareStatus.Deleted)
.where('toUserId', '==', criteria.userId)
.expand("fromUser, toUser")
.orderBy('deadlineDate, changedDate');
return dataServiceHelper.executeQuery(query, daresObservable, false, forceServerCall);
}
and here is my query for Page B (single item)
function getDare(dareObservable, criteria, forceServerCall)
{
var query = EntityQuery.from('Dares')
.expand("fromUser, toUser")
.where('dareId', '==', criteria.dareId);
return dataServiceHelper.executeQuery(query, dareObservable, true, forceServerCall);
}
function executeQuery(query, itemsObservable, singleEntity, forceServerCall)
{
//check local cache first
if (!manager.metadataStore.isEmpty() && !forceServerCall)
{
var recordsInCache = executeLocalQuery(query, itemsObservable, singleEntity);
if (recordsInCache.length > 0)
{
callCompleted();
return Q.resolve();
}
}
return manager.executeQuery(query)
.then(querySucceeded)
.fail(queryFailed);
}
function executeLocalQuery(query, itemsObservable, singleEntity)
{
var recordsInCache = manager.executeQueryLocally(query);
if (recordsInCache.length > 0)
{
processQueryResults(recordsInCache, itemsObservable, singleEntity, true);
}
return recordsInCache;
}
Any advice appreciated...
If you want to hit the server just for comparison purposes, then at some point (either when loading up your app or when you hit the list page) call inlineCount to compare the total on the server with what you already have, as shown in this answer: stackoverflow.com/questions/16390897/counts-in-breeze-js/…
A way you can use this creatively while querying for the single record would be like this:
Set a variable in your view model (or somewhere similar) to hold the total count:
var totalCount = 0;
When you query the single record, get the inline count:
var query = EntityQuery.from('Dares')
.expand("fromUser, toUser")
.where('dareId', '==', criteria.dareId)
.inlineCount(true);
and set totalCount = data.inlineCount. Do the same when you get the full items list: set totalCount from the inlineCount there too, so you always know whether you have all of the entities.
I've been thinking about this problem more over the last year (and have since moved from Durandal + Breeze to Angular + Breeze): in Angular you can cache the service call easily using
return $resource(xyz + params, {'query': { method: 'GET', cache: true, isArray: true }}).query(successArrayDataLoad, errorDataLoad);
I guess Angular caches the params of this query and knows when it already has the result. So when I switch this method to use Breeze, I lose this functionality, and all my list calls hit the server every time.
So the real problem here is list data. Single entities can always check the local cache, and if nothing is returned, check the server (because you expect exactly 1).
However, list data varies by params. For example, if I have a GetGames call which takes a CreatedByUserId, then every time I supply a new CreatedByUserId I have to go back to the server.
So I think what I really need to do to cache my list calls is to cache a key for each call, which is a combination of the query name and the params.
For example, GetGames1 for UserId 1 and then GetGames2 for UserId 2.
The logic would be: check the Angular cache to see whether this call has been made before in this session. If it has, check the local cache first; if nothing is returned, check the server.
If it has not, check the server, as the local cache MIGHT have some data in it for this query, but it is not guaranteed to be the full set.
The only other way around it would be to hit the server first each time to get a count for that query + params, then hit the local cache and compare the counts, but that is less efficient.
Thoughts?
I want to use Amazon DynamoDB with Rails, but I have not found a way to implement pagination.
I will use AWS::Record::HashModel as ORM.
This ORM supports limits like this:
People.limit(10).each {|person| ... }
But I could not figure out how to implement the following MySQL query in DynamoDB:
SELECT *
FROM `People`
LIMIT 1 , 30
You issue queries using LIMIT. If the subset returned does not contain the full table, a LastEvaluatedKey value is returned. You then use this value as the ExclusiveStartKey in the next query, and so on...
From the DynamoDB Developer Guide.
You can provide 'page-size' in your query to set the result set size.
The DynamoDB response contains 'LastEvaluatedKey', which indicates the last key of the page. If the response doesn't contain 'LastEvaluatedKey', there are no results left to fetch.
Use the 'LastEvaluatedKey' as the 'ExclusiveStartKey' on the next fetch.
I hope this helps.
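To make that loop concrete, here is a sketch using the AWS SDK for .NET low-level API (the same pattern applies in any SDK; the table name and page size are illustrative):
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

// Sketch: paging through a table with Limit + ExclusiveStartKey.
var client = new AmazonDynamoDBClient();
Dictionary<string, AttributeValue> lastKey = null;
do
{
    var request = new ScanRequest
    {
        TableName = "People",
        Limit = 30,                      // page size
        ExclusiveStartKey = lastKey      // null on the first request
    };
    var response = client.Scan(request);
    // ...process response.Items (one page of results)...
    lastKey = response.LastEvaluatedKey; // empty once the table is exhausted
} while (lastKey != null && lastKey.Count > 0);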
DynamoDB Pagination
Here's a simple copy-paste-and-run proof of concept (Node.js) for stateless forward/reverse navigation with DynamoDB. In summary: each response includes the navigation history, allowing the user to explicitly and consistently request either the next or the previous page (while next/prev params exist):
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page
Considerations:
If your ids are large and/or users might do a lot of navigation, the potential size of the next/prev params might become too large.
Yes, you do have to store the entire reverse path; if you only store the previous page marker (as some other answers do), you will only be able to go back one page.
It won't handle changing pageSize midway; consider baking pageSize into the next/prev value.
Base64-encode the next/prev values; you could also encrypt them.
Scans are inefficient; while this suited my current requirement, it won't suit all!
// demo.js
const mockTable = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
const getPagedItems = (pageSize = 5, cursor = {}) => {
// Parse cursor
const keys = cursor.next || cursor.prev || [] // fwd first
let key = keys[keys.length-1] || null // eg ddb's PK
// Mock query (mimic dynamodb response)
const Items = mockTable.slice(parseInt(key) || 0, pageSize+key)
const LastEvaluatedKey = Items[Items.length-1] < mockTable.length
? Items[Items.length-1] : null
// Build response
const res = {items:Items}
if (keys.length > 0) // add reverse nav keys (if any)
res.prev = keys.slice(0, keys.length-1)
if (LastEvaluatedKey) // add forward nav keys (if any)
res.next = [...keys, LastEvaluatedKey]
return res
}
// Run test ------------------------------------
const runTest = () => {
const PAGE_SIZE = 6
let x = {}, i = 0
// Page to end
while (i == 0 || x.next) {
x = getPagedItems(PAGE_SIZE, {next:x.next})
console.log(`Page ${++i}: `, x.items)
}
// Page back to start
while (x.prev) {
x = getPagedItems(PAGE_SIZE, {prev:x.prev})
console.log(`Page ${--i}: `, x.items)
}
}
runTest()
I faced a similar problem.
The generic pagination approach is to use a "start index" (or "start page") and a "page length".
The "ExclusiveStartKey" and "LastEvaluatedKey" based approach is very DynamoDB-specific, and I feel this DynamoDB-specific implementation of pagination should be hidden from the API client/UI.
Also, if the application is serverless, using a service like Lambda, it will not be possible to maintain state on the server. The flip side is that the client implementation would become very complex.
I came up with a different approach, which I think is generic (and not specific to DynamoDB):
When the API client specifies a start index, fetch all the keys from the table and store them in an array.
From the array, find the key for the start index specified by the client.
Use it as the ExclusiveStartKey and fetch the number of records specified by the page length.
If the start index parameter is not present, the above steps are not needed; we don't need to specify an ExclusiveStartKey in the scan operation.
This solution has some drawbacks:
We need to fetch all the keys whenever the user requests pagination with a start index.
We need additional memory to store the ids and the indexes.
There are additional database scan operations (one or more, to fetch the keys).
But I feel this is a very easy approach for the clients using our APIs. Backward paging works seamlessly, and if the user wants to see the "nth" page, that is possible.
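A minimal sketch of the idea, using the AWS SDK for .NET (the table and key names are illustrative, and on a large table the key fetch in step 1 may itself need the LastEvaluatedKey loop):
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

// Sketch: map a client-supplied start index onto an ExclusiveStartKey.
public List<Dictionary<string, AttributeValue>> GetPage(
    IAmazonDynamoDB client, int startIndex, int pageLength)
{
    // Step 1: fetch only the keys of the table
    var allKeys = client.Scan(new ScanRequest
    {
        TableName = "People",
        ProjectionExpression = "id" // the key attribute only
    }).Items;
    // Steps 2-3: the key just before the start index becomes the
    // ExclusiveStartKey; a start index of 0 needs no start key at all
    var request = new ScanRequest { TableName = "People", Limit = pageLength };
    if (startIndex > 0)
        request.ExclusiveStartKey = allKeys[startIndex - 1];
    return client.Scan(request).Items;
}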
In fact, I faced the same problem, and I noticed that LastEvaluatedKey and ExclusiveStartKey do not work well, especially when using Scan, so I solved it like this:
GET /?page_no=1&page_size=10 =====> first page
The response will contain the count of records and the first 10 records.
Retry with an increasing page number until all records have been returned.
The code is below.
PS: I am using Python.
# 'response' is the result of a full table scan; the requested page is
# sliced out of it in memory
first_index = (page_no - 1) * page_size
second_index = page_no * page_size
if second_index > len(response['Items']):
    second_index = len(response['Items'])
return {
    'statusCode': 200,
    'count': response['Count'],
    'response': response['Items'][first_index:second_index]
}