Sales order total different from actual total - ERP

Just need to know if any of you are experiencing this issue with the sales order document in Acumatica ERP 4.2:
The header-level total is wrong when compared to the total of the lines. Is there any way we can recalculate the totals in code, as I couldn't find a fix from Acumatica yet?

If the document is not yet closed, you can just modify a quantity or add/remove a line to force the totals to recalculate.
If the document is closed, I do not see any possible way except changing the data in the DB.

I am adding my recent experience to this topic in hopes it might help others.
Months ago, I wrote the code shown below anticipating that it would be needed when called by RESTful services. It turned out not to be needed and, even worse, was simply written and forgotten...
The code was from a SalesOrderEntryExt graph extension.
By removing the code block, the doubling of Order Total was resolved.
It's also an example of backing out custom code until finding the problem.
protected void _(Events.RowInserted&lt;SOLine&gt; e, PXRowInserted del)
{
    // call the base BLC event handler...
    del?.Invoke(e.Cache, e.Args);
    SOLine row = e.Row;
    if (!Base.IsExport) return;
    if (row != null && row.OrderQty > 0m)
    {
        // via RESTful API, raise event
        SOLine copy = Base.Transactions.Cache.CreateCopy(row) as SOLine;
        copy.OrderQty = 0m;
        Base.Transactions.Cache.RaiseRowUpdated(row, copy);
    }
}

Related

Why is the server response after a Drag&Drop so large and slow

I'm having some performance issues dragging cards between grids. From a backend perspective, storing the data from the grids after a change takes about 200 ms.
But then, when the backend work seems to be done, it takes another 2.5 seconds for the frontend to get the response from the request. The request that's taking so long contains two RPC events: grid-drop and grid-dragend.
The response is also unusually large, I think. Just to give you an idea, see the screenshot ... notice the tiny scrollbar at the right. 🙂
TTFB is 2.42 s, and the download size is about half a MB.
Any ideas what's going on here and how I can eliminate this?
I'm using Vaadin 21.0.4 and Spring Boot 2.5.4.
Steps I've taken to optimise performance:
Optimize the DB query + indexing
Use @Cacheable where possible
Implement the cards using LitElement
This is the drop listener:
ComponentEventListener&lt;GridDropEvent&lt;Task&gt;&gt; dropListener = event -> {
    if (dragSource != null) {
        // The item on top of or below which the source item is dropped.
        // Used to calculate the index of the newly dropped item(s).
        Optional&lt;Task&gt; targetItem = event.getDropTargetItem();
        // If the item is dropped on an existing row and the dragged items contain the drop target, do nothing.
        if (targetItem.isPresent() && draggedItems.contains(targetItem.get())) {
            return;
        }
        // Add the dragged items to the grid of the target room.
        Grid&lt;Task&gt; targetGrid = event.getSource();
        Optional&lt;Room&gt; room = dayPlanningView.getRoomForGrid(targetGrid);
        // The items of the target grid. Using the list data view so this does not retrigger the query.
        List&lt;Task&gt; targetItems = targetGrid.getListDataView().getItems().toList();
        // Calculate the position of the dropped item.
        int index = targetItem.map(task -> targetItems.indexOf(task)
                + (event.getDropLocation() == GridDropLocation.BELOW ? 1 : 0))
            .orElse(0);
        room.ifPresent(r -> service.plan(draggedItems, r, index, dayPlanningView.getSelectedDate()));
        // Send an event to update other users.
        Optional&lt;ScheduleUpdatedEvent&gt; scheduleUpdatedEvent =
            room.map(r -> new ScheduleUpdatedEvent(PlanningMasterDetailView.this, r.getId()));
        scheduleUpdatedEvent.ifPresent(Broadcaster::broadcast);
        // Remove the items from the source grid. Using the list data view so items can be removed without a DB round-trip.
        productionOrderGrid.getListDataView().removeItems(draggedItems);
    }
};
I'm a bit stuck now, as I'm kinda out of ideas 😦
Thanks
You should use a TemplateRenderer/LitRenderer instead of a ComponentRenderer, because the generated server-side components are affecting performance.
Read more here: https://vaadin.com/blog/top-5-most-common-vaadin-performance-pitfalls-and-how-to-avoid-them
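For what it's worth, here is a minimal sketch of what that could look like for the cards, assuming a Task bean with getTitle()/getAssignee() getters (those names are made up; adapt them to your model). The template is rendered client-side, so no server-side component tree is built per card:
import com.vaadin.flow.component.grid.Grid;
import com.vaadin.flow.data.renderer.LitRenderer;

private void configureCardColumn(Grid&lt;Task&gt; grid) {
    grid.addColumn(LitRenderer.&lt;Task&gt;of(
            "&lt;div class=\"card\"&gt;"
            + "&lt;b&gt;${item.title}&lt;/b&gt;&lt;br/&gt;"
            + "&lt;span&gt;${item.assignee}&lt;/span&gt;"
            + "&lt;/div&gt;")
        .withProperty("title", Task::getTitle)       // hypothetical getter
        .withProperty("assignee", Task::getAssignee) // hypothetical getter
    );
}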

Acumatica Purchase Receipt Return: Open PO Line Default to false

After clicking the "Return" action on a selected item from a completed Purchase Receipt, we're trying to get the "Open PO Line" value to default to false. Does anyone know how to customize this?
The field defaulting seems to be overwritten when we press the "Return" button. We've tried several different events in the grid, but none of them seem to work.
The desired result is to default "Open PO Line" to false after a return, and once the return is released the Purchase Order line associated with the return should remain completed.
Research
The AllowOpen field on POReceiptLine is a PXBool. This means that the value must be populated via a PXDBScalar, PXFormula, etc. or via some business logic in the graph. To see what is happening, let's look at the DAC for POReceiptLine...
#region AllowOpen
public abstract class allowOpen : PX.Data.BQL.BqlBool.Field&lt;allowOpen&gt; { }
protected Boolean? _AllowOpen;
[PXBool()]
[PXUIField(DisplayName = "Open PO Line", Visibility = PXUIVisibility.Service, Visible = true)]
public virtual Boolean? AllowOpen
{
    get
    {
        return this._AllowOpen;
    }
    set
    {
        this._AllowOpen = value;
    }
}
#endregion
As you can see, there isn't any logic to this field in the DAC, so we need to turn to the graph to see how it is used. (Even if there were logic in the DAC, we would need to see if the graph does something. However, logic in the DAC might have been an easy override with CacheAttached - unfortunately, not in this case.)
Let's turn to POReceiptEntry, where the return is handled. We find AllowComplete and AllowOpen being set in the POReceiptLine_RowSelected event, as we would expect: with no logic in the DAC, the field must be populated on the graph side.
if ((row.AllowComplete == null || row.AllowOpen == null) && fromPO)
{
    POLineR source = PXSelect&lt;POLineR,
        Where&lt;POLineR.orderType, Equal&lt;Required&lt;POLineR.orderType&gt;&gt;,
            And&lt;POLineR.orderNbr, Equal&lt;Required&lt;POLineR.orderNbr&gt;&gt;,
            And&lt;POLineR.lineNbr, Equal&lt;Required&lt;POLineR.lineNbr&gt;&gt;&gt;&gt;&gt;&gt;
        .Select(this, row.POType, row.PONbr, row.POLineNbr);

    // Acuminator disable once PX1047 RowChangesInEventHandlersForbiddenForArgs [Legacy, moved the exception here from PX.Objects.acuminator because the condition was changed]
    row.AllowComplete = row.AllowOpen = (row.Released == true) ?
        (row.ReceiptType != POReceiptType.POReturn ? source?.Completed == true : source?.Completed != true) :
        (source?.AllowComplete ?? false);
The field is populated in the row.AllowComplete = row.AllowOpen = (row.Released == true) ?... section of code.
Subsequently, we see that the CopyFromOrigReceiptLine method sets this value to false on the "destLine" being created.
destLine.AllowOpen = false;
Since that sets the value to false rather than true, we know this isn't our spot. Continuing on, we see in UpdatePOLineCompletedFlag that AllowComplete and AllowOpen are being set. This could be our spot (or one of them).
row.AllowComplete = row.AllowOpen = poLineCurrent.AllowComplete;
Side note: It is worth noting that this line appears twice, once in the if branch and once in the else. In both cases it is going to be executed, so it would be better coding practice to place this identical statement after the if/else, since both branches execute it regardless.
This particular use appears to be pulling the value from the AllowComplete field of the PO Line being received. At this point, you should consider if you need to look upstream at the PO Line to see if the field there needs to be manipulated as well. I cannot answer that for you as your business case will drive the decision.
There also is a line in the Copy method that sets AllowComplete and AllowOpen.
aDest.AllowComplete = aDest.AllowOpen = (aSrc.CompletePOLine == CompletePOLineTypes.Quantity);
The Copy method is overloaded, and the other signature of the method sets the values to true.
aDest.AllowComplete = true;
aDest.AllowOpen = true;
Both of these cases may need customization as well, but I don't think it's the primary issue.
Next Steps
At this point, we see that either the field is being set in UpdatePOLineCompletedFlag or in methods that seem related to copying records. You will need to investigate further if the copy related methods warrant a change as well. However, I think the initial focus should be on the UpdatePOLineCompletedFlag method.
If we find the other points identified require customization, we likely will handle them all the same way... Override the base method/event, invoke the original method/event in our override, and then force the values to fit our business case. Careful testing will be needed since forcibly altering these values may create unforeseen negative ripples.
Something to try
We want to update (or create) a graph extension for POReceiptEntry to override the UpdatePOLineCompleteFlag method. This compiles, but it is completely untested on my part. We need to create a delegate and specify the PXOverride attribute. Then we want to execute the base method before we override the field(s) in question.
Note the extra code (commented out) as a reminder that you need to be careful: methods (typically event handlers) can update our record in the cache, and the updated record then needs to be located so that we don't use a stale copy. I don't think that's necessary in this case, but it seems to be somewhat obscure in the code samples I see. Of course, that is because I'm always looking at the code repository, which rarely has graph extensions overriding event handlers!
#region CreateReceiptEntry Override
public delegate void UpdatePOLineCompleteFlagDelegate(POReceiptLine row, bool isDeleted, POLine aOriginLine);
[PXOverride]
public virtual void UpdatePOLineCompleteFlag(POReceiptLine row, bool isDeleted, POLine aOriginLine, UpdatePOLineCompleteFlagDelegate baseMethod)
{
    // Execute the original logic
    baseMethod(row, isDeleted, aOriginLine);

    /* If the base method has updated the cache, then we would need to locate the updated record in the cache to proceed.
     * This tends to be the case more often with event handlers, so it probably isn't needed in this case.
     * This is just for reference as a training reminder.
    // If the row has been updated in the baseMethod, let's go get the updated cache values
    POReceiptLine locateRow = Base.transactions.Locate(row);
    if (locateRow != null) row = locateRow;
    */

    // Override the fields to false - need to test to see if this creates any issues with breaking existing business logic
    row.AllowComplete = row.AllowOpen = false;
}
#endregion
If this doesn't get you the specific answer you need, I hope it at least gives you some insight into how to hunt down "the spot" to change. I suspect you may need to update the POLine for a complete solution as hinted above. (See the event handler POReceiptLine_AllowOpen_FieldUpdated for the code that leads me to that conclusion.)
Good luck with your customization, and happy coding!

BigQueryIO loads not offloading rows to GCS when early trigger occurs

I'm playing around with BigQueryIO write using loads. My load trigger is set to 18 hours. I'm ingesting data from Kafka with a fixed daily window.
Based on https://github.com/apache/beam/blob/v2.2.0/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BatchLoads.java#L213-L231 it seems that the intended behavior is to offload rows to the filesystem when at least 500k records are in a pane.
I managed to produce ~600K records and waited for around 2 hours to see if the rows were uploaded to GCS; however, nothing was there. I noticed that the "GroupByDestination" step in "BatchLoads" shows 0 under "Output collections" size.
When I use a smaller load trigger all seems fine. Shouldn't AfterPane.elementCountAtLeast(FILE_TRIGGERING_RECORD_COUNT) be triggered?
Here is the code for writing to BigQuery
BigQueryIO
  .writeTableRows()
  .to(new SerializableFunction[ValueInSingleWindow[TableRow], TableDestination]() {
    override def apply(input: ValueInSingleWindow[TableRow]): TableDestination = {
      val startWindow = input.getWindow.asInstanceOf[IntervalWindow].start()
      val dayPartition = DateTimeFormat.forPattern("yyyyMMdd").withZone(DateTimeZone.UTC).print(startWindow)
      new TableDestination("myproject_id:mydataset_id.table$" + dayPartition, null)
    }
  })
  .withMethod(Method.FILE_LOADS)
  .withCreateDisposition(CreateDisposition.CREATE_NEVER)
  .withWriteDisposition(WriteDisposition.WRITE_APPEND)
  .withSchema(BigQueryUtils.schemaOf[MySchema])
  .withTriggeringFrequency(Duration.standardHours(18))
  .withNumFileShards(10)
The job id is 2018-02-16_14_34_54-7547662103968451637. Thanks in advance.
Panes are per key per window, and BigQueryIO.write() with dynamic destinations uses the destination as the key under the hood, so the "500k elements in a pane" threshold applies per destination per window.

Observed Lists and Maps, and Firebase, oh my. How can I improve this mess?

So this is what I ended up with to get realtime starring/liking (of communities, in my case) working with a Firebase datastore. It's a mess, and surely I'm missing some fundamentals.
Here my element gets the communities, each as a Map (community) stored in an observed List (communities). It has to rewrite that List several times as it changes each community Map based on the changed star count, the user's starred state, and some other fun:
getCommunities() {
  // Since we call this method a second time after user
  // signed in, clear the communities list before we recreate it.
  if (communities.length > 0) { communities.clear(); }

  var firebaseRoot = new db.Firebase(firebaseLocation);
  var communityRef = firebaseRoot.child('/communities');

  // TODO: Undo the limit of 20; https://github.com/firebase/firebase-dart/issues/8
  communityRef.limit(20).onChildAdded.listen((e) {
    var community = e.snapshot.val();

    // snapshot.name is Firebase's ID, i.e. "the name of the Firebase location",
    // so we'll add that to our local item list.
    community['id'] = e.snapshot.name();
    print(community['id']);

    // If the user is signed in, see if they've starred this community.
    if (app.user != null) {
      firebaseRoot.child('/users/' + app.user.username + '/communities/' + community['id']).onValue.listen((e) {
        if (e.snapshot.val() == null) {
          community['userStarred'] = false;
          // TODO: Add community star_count?!
        } else {
          community['userStarred'] = true;
        }
        print("${community['userStarred']}, star count: ${community['star_count']}");

        // Replace the community in the observed list w/ our updated copy.
        communities
          ..removeWhere((oldItem) => oldItem['alias'] == community['alias'])
          ..add(community)
          ..sort((m1, m2) => m1["updatedDate"].compareTo(m2["updatedDate"]));
        communities = toObservable(communities.reversed.toList());
      });
    }

    // If no updated date, use the created date.
    if (community['updatedDate'] == null) {
      community['updatedDate'] = community['createdDate'];
    }

    // Handle the case where no star count yet.
    if (community['star_count'] == null) {
      community['star_count'] = 0;
    }

    // The live-date-time element needs parsed dates.
    community['updatedDate'] = DateTime.parse(community['updatedDate']);
    community['createdDate'] = DateTime.parse(community['createdDate']);

    // Listen for realtime changes to the star count.
    communityRef.child(community['alias'] + '/star_count').onValue.listen((e) {
      int newCount = e.snapshot.val();
      community['star_count'] = newCount;

      // Replace the community in the observed list w/ our updated copy.
      // TODO: Re-writing the list each time is ridiculous!
      communities
        ..removeWhere((oldItem) => oldItem['alias'] == community['alias'])
        ..add(community)
        ..sort((m1, m2) => m1["updatedDate"].compareTo(m2["updatedDate"]));
      communities = toObservable(communities.reversed.toList());
    });

    // Insert each new community into the list.
    communities.add(community);

    // Sort the list by the item's updatedDate, then reverse it.
    communities.sort((m1, m2) => m1["updatedDate"].compareTo(m2["updatedDate"]));
    communities = toObservable(communities.reversed.toList());
  });
}
Here we toggle the star, which again replaces the observed communities List a few times as we update the count in the affected community Maps and thus rewrite the List to reflect that:
toggleStar(Event e, var detail, Element target) {
  // Don't fire the core-item's on-click, just the icon's.
  e.stopPropagation();

  if (app.user == null) {
    app.showMessage("Kindly sign in first.", "important");
    return;
  }

  bool isStarred = (target.classes.contains("selected"));
  var community = communities.firstWhere((i) => i['id'] == target.dataset['id']);

  var firebaseRoot = new db.Firebase(firebaseLocation);
  var starredCommunityRef = firebaseRoot.child('/users/' + app.user.username + '/communities/' + community['id']);
  var communityRef = firebaseRoot.child('/communities/' + community['id']);

  if (isStarred) {
    // If it's starred, time to unstar it.
    community['userStarred'] = false;
    starredCommunityRef.remove();

    // Update the star count.
    communityRef.child('/star_count').transaction((currentCount) {
      if (currentCount == null || currentCount == 0) {
        community['star_count'] = 0;
        return 0;
      } else {
        community['star_count'] = currentCount - 1;
        return currentCount - 1;
      }
    });

    // Update the list of users who starred.
    communityRef.child('/star_users/' + app.user.username).remove();
  } else {
    // If it's not starred, time to star it.
    community['userStarred'] = true;
    starredCommunityRef.set(true);

    // Update the star count.
    communityRef.child('/star_count').transaction((currentCount) {
      if (currentCount == null || currentCount == 0) {
        community['star_count'] = 1;
        return 1;
      } else {
        community['star_count'] = currentCount + 1;
        return currentCount + 1;
      }
    });

    // Update the list of users who starred.
    communityRef.child('/star_users/' + app.user.username).set(true);
  }

  // Replace the community in the observed list w/ our updated copy.
  communities.removeWhere((oldItem) => oldItem['alias'] == community['alias']);
  communities.add(community);
  communities.sort((m1, m2) => m1["updatedDate"].compareTo(m2["updatedDate"]));
  communities = toObservable(communities.reversed.toList());
  print(communities);
}
There's also some other craziness where we have to get the list of communities again when app.changes fires, because we only load app.user after the app and list initially load, and now that we have the user we need to turn on the appropriate stars. So my attached() looks like:
attached() {
  app.pageTitle = "Communities";
  getCommunities();
  app.changes.listen((List&lt;ChangeRecord&gt; records) {
    if (app.user != null) {
      getCommunities();
    }
  });
}
There, it seems I could just be getting the stars and updating each affected community Map, then repopulating the observed communities List, but that's the least of it.
The full thing: https://gist.github.com/DaveNotik/5ccdc9e74429cf87d641
How can I improve all this Map/List management, e.g. where every time I change a community Map, I have to rewrite the whole communities List? Should I be thinking of it differently?
What about all this querying of Firebase? Surely there's a better way, but it seems I need to do a lot to keep it realtime, and since the element gets attached and detached, it seems I need to run getCommunities() each time. Unless the OOP way is that objects get created and are always there to be observed whenever the element is attached? I'm missing those fundamentals.
This app.changes business handles the case where we load the list before we have app.user (which then means we want to load her stars) - is there a better way?
Other ridiculousness?
Big question, I know. Thank you for helping me get a handle on the right approach as I move forward!
I think there are two different approaches to choose from if you want to keep your application's data in real-time sync with the server database:
1 Polling (pull method, i.e. the client pulls the data from the server)
The application polls, i.e. requests the updated data from the server. Polling can be automatic (for example at an interval of 60 s) or requested by the user (= refresh). A short automatic interval will cause a high load on the server, while with a long interval you lose the real-time feeling.
2 Full-duplex (push method, i.e. the server can push the data to the client)
The application and the server have a full-duplex connection between them, and the server is able to send the data, or a notification that data is available, to the client. The client can then decide whether or not to retrieve the data.
This is the modern method, because it keeps the network traffic and the server load to a minimum while still providing real-time updates.
Firebase boasts this kind of updating, but I'm not sure whether it is full-duplex or just a clever way of polling. The WebSocket protocol is a real full-duplex connection, and the Dart server supports it.
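To make the push approach (option 2) a bit more concrete, here is a rough sketch using the Jakarta WebSocket API; it is in Java purely for illustration, and the endpoint path and JSON payload are made up:
import jakarta.websocket.OnClose;
import jakarta.websocket.OnOpen;
import jakarta.websocket.Session;
import jakarta.websocket.server.ServerEndpoint;
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

// The server keeps the open sessions and pushes a message whenever a record
// changes, so clients never have to poll.
@ServerEndpoint("/sync")
public class SyncEndpoint {
    private static final Set&lt;Session&gt; SESSIONS = new CopyOnWriteArraySet&lt;&gt;();

    @OnOpen
    public void onOpen(Session session) {
        SESSIONS.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        SESSIONS.remove(session);
    }

    // Call this from server-side code whenever data changes.
    public static void pushUpdate(String json) throws IOException {
        for (Session session : SESSIONS) {
            if (session.isOpen()) {
                session.getBasicRemote().sendText(json);
            }
        }
    }
}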
The updated data from a server can include:
1 A full dataset
Basically the server sends a full dataset (= the initial query), and the server doesn't "know" anything about what was updated. This is the easiest way to go if you have reasonably small datasets. You'll often have very small datasets among the big ones, so this can be useful.
2 A dataset including only the new data
The server can send a dataset based on a modified timestamp, i.e. every time a record in the database changes, an update timestamp is saved, and the query can be filtered on that timestamp. In other words, the application knows when it last updated the data and then requests only newer data.
3 Only the changed records
The server keeps track of updated data and sends it to the application. The data can be sent record by record as changes occur, or the server can collect it into bigger chunks to send. This method requires the server to keep track of every connected client in order to send the correct data to each one. Once you add an authentication process for clients, i.e. not all data can be sent to everyone, it can get quite complicated.
I think the easiest way is to use method number 2 for the updated data.
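A rough sketch of option 2, in Java for illustration only; the model type, field name, and in-memory filtering are all assumptions, and in practice the filter would be a server-side query on the update timestamp:
import java.time.Instant;
import java.util.List;

// Hypothetical model; only meant to show the shape of a timestamp-based delta fetch.
record Community(String id, String name, Instant updatedAt) {}

class DeltaSync {
    private Instant lastSync = Instant.EPOCH;

    // In a real backend this would be a query such as "WHERE updated_at > :lastSync"
    // rather than an in-memory scan.
    List&lt;Community&gt; fetchChanges(List&lt;Community&gt; allServerRows) {
        Instant since = lastSync;
        lastSync = Instant.now();
        return allServerRows.stream()
                .filter(c -> c.updatedAt().isAfter(since))
                .toList();
    }
}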
One last thing... what to do with the data received?
1 Handle everything as new
If the application receives updated data, it destroys/clears all the lists and maps and recreates/refills them with the new data. Typical problems with this are that the user loses their current position on the page, or the data the user was looking at jumps around. If the application has modified or extended the old data for some reason, all those modifications are lost. This method works OK if the user explicitly requests a refresh.
2 Update only the changed data
The application never clears the initial lists or maps; it just updates them with the newly received data. Typically you will construct a new combined map from the queried data for a specific need (for example a certain view). The combined map already has all the information you want to show in that view (including default values even where the initial queries had no data for a field), and you just update the new values in it (see the sketch after this answer).
If the updated information needs a new member in the list, you just add it at the end.
If the updated information requires a deletion from the list, it may be better to use an extra "active" field and filter the list/map on it. With filtering you won't lose any references.
If you need to sort or filter the data, it should be done by the view or on user request. Basically, the data is stored in the application and updated as needed; when the user needs to see the data in a specific way, the view should present it appropriately. This is called model-view-controller, and the main idea is to separate the data from the view.
I'm sorry this long answer didn't directly answer any of your questions, but I tried to cut this challenge into smaller chunks. Often you can see an interface between these chunks, and you can design and organize your code nicely around those interfaces.
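Here is the promised sketch of the "update only the changed data" idea (again in Java purely for illustration; the store and field names are made up). The observed view would watch this one long-lived, keyed structure instead of a list that gets cleared and rebuilt:
import java.util.LinkedHashMap;
import java.util.Map;

// Each community is kept as a Map keyed by its id, mirroring the Dart code's
// Map-per-community shape; only the changed entry is touched on an update.
class CommunityStore {
    private final Map&lt;String, Map&lt;String, Object&gt;&gt; byId = new LinkedHashMap&lt;&gt;();

    // Add a new community or replace a changed one in place.
    void upsert(String id, Map&lt;String, Object&gt; community) {
        byId.put(id, community);
    }

    // Update a single field (e.g. star_count) without rebuilding the whole collection.
    void updateField(String id, String field, Object value) {
        Map&lt;String, Object&gt; community = byId.get(id);
        if (community != null) {
            community.put(field, value);
        }
    }
}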

How to implement pagination when using Amazon DynamoDB in Rails

I want to use Amazon DynamoDB with Rails, but I have not found a way to implement pagination.
I will use AWS::Record::HashModel as the ORM.
This ORM supports limits like this:
People.limit(10).each {|person| ... }
But I could not figure out how to implement the following MySQL query in DynamoDB.
SELECT *
FROM `People`
LIMIT 1 , 30
You issue queries using LIMIT. If the subset returned does not contain the full table, a LastEvaluatedKey value is returned. You use this value as the ExclusiveStartKey in the next query. And so on...
From the DynamoDB Developer Guide.
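To make that loop concrete, here is a rough sketch using the AWS SDK for Java v2 (the table name "People" and page size 30 come from the question; everything else is an assumption). The same LastEvaluatedKey/ExclusiveStartKey pattern applies in the Ruby SDK:
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ScanRequest;
import software.amazon.awssdk.services.dynamodb.model.ScanResponse;

public class PeoplePager {
    public static void main(String[] args) {
        DynamoDbClient ddb = DynamoDbClient.create();
        Map&lt;String, AttributeValue&gt; startKey = null;    // null => start at the beginning
        do {
            ScanRequest.Builder request = ScanRequest.builder()
                    .tableName("People")
                    .limit(30);                          // page size
            if (startKey != null) {
                request.exclusiveStartKey(startKey);     // resume where the last page ended
            }
            ScanResponse page = ddb.scan(request.build());
            page.items().forEach(System.out::println);
            // LastEvaluatedKey is absent on the final page
            startKey = page.hasLastEvaluatedKey() ? page.lastEvaluatedKey() : null;
        } while (startKey != null);
        ddb.close();
    }
}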
You can provide a 'page-size' in your query to set the result set size.
The DynamoDB response contains a 'LastEvaluatedKey', which indicates the last key for that page size. If the response doesn't contain a 'LastEvaluatedKey', there are no results left to fetch.
Use the 'LastEvaluatedKey' as the 'ExclusiveStartKey' when fetching the next page.
I hope this helps.
DynamoDB Pagination
Here's a simple copy-paste-run proof of concept (Node.js) for stateless forward/reverse navigation with DynamoDB. In summary: each response includes the navigation history, allowing the user to explicitly and consistently request either the next or the previous page (while next/prev params exist):
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page
Considerations:
If your ids are large and/or users might do a lot of navigation, then the potential size of the next/prev params might become too large.
Yes you do have to store the entire reverse path - if you only store the previous page marker (per some other answers) you will only be able to go back one page.
It won't handle changing pageSize midway; consider baking pageSize into the next/prev value.
Base64-encode the next/prev values, and you could also encrypt them.
Scans are inefficient; while this suited my current requirement, it won't suit all!
// demo.js
const mockTable = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]

const getPagedItems = (pageSize = 5, cursor = {}) => {
  // Parse cursor
  const keys = cursor.next || cursor.prev || [] // fwd first
  let key = keys[keys.length - 1] || null // eg ddb's PK

  // Mock query (mimic dynamodb response)
  const Items = mockTable.slice(parseInt(key) || 0, pageSize + key)
  const LastEvaluatedKey = Items[Items.length - 1] &lt; mockTable.length
    ? Items[Items.length - 1] : null

  // Build response
  const res = { items: Items }
  if (keys.length > 0) // add reverse nav keys (if any)
    res.prev = keys.slice(0, keys.length - 1)
  if (LastEvaluatedKey) // add forward nav keys (if any)
    res.next = [...keys, LastEvaluatedKey]
  return res
}

// Run test ------------------------------------
const runTest = () => {
  const PAGE_SIZE = 6
  let x = {}, i = 0

  // Page to end
  while (i == 0 || x.next) {
    x = getPagedItems(PAGE_SIZE, { next: x.next })
    console.log(`Page ${++i}: `, x.items)
  }

  // Page back to start
  while (x.prev) {
    x = getPagedItems(PAGE_SIZE, { prev: x.prev })
    console.log(`Page ${--i}: `, x.items)
  }
}

runTest()
I faced a similar problem.
The generic pagination approach is to use a "start index" (or "start page") and a "page length".
The "ExclusiveStartKey" and "LastEvaluatedKey" based approach is very DynamoDB-specific.
I feel this DynamoDB-specific implementation of pagination should be hidden from the API client/UI.
Also, if the application is serverless, using a service like Lambda, it will not be possible to maintain state on the server, and the alternative would make the client implementation very complex.
I came up with a different approach, which I think is generic (and not specific to DynamoDB):
When the API client specifies a start index, fetch all the keys from the table and store them in an array.
Find the key for the start index specified by the client in that array.
Use it as the ExclusiveStartKey and fetch the number of records specified by the page length.
If the start index parameter is not present, the above steps are not needed; we don't need to specify an ExclusiveStartKey in the scan operation.
This solution has some drawbacks:
We will need to fetch all the keys whenever the user requests pagination with a start index.
We will need additional memory to store the IDs and the indexes.
Additional database scan operations (one or more to fetch the keys).
But I feel this will be a very easy approach for the clients using our APIs. Backward scanning will work seamlessly, and if the user wants to see the "nth" page, that will be possible.
In fact I faced the same problem, and I noticed that LastEvaluatedKey and ExclusiveStartKey do not work well, especially when using Scan, so I solved it like this:
GET /?page_no=1&page_size=10 =====> first page
The response will contain the count of records and the first 10 records.
Retry and increase the page number until all records have come back.
The code is below.
PS: I am using Python.
# 'response' here is the result of a prior DynamoDB scan.
first_index = ((page_no - 1) * page_size)
second_index = (page_no * page_size)
if second_index > len(response['Items']):
    second_index = len(response['Items'])
return {
    'statusCode': 200,
    'count': response['Count'],
    'response': response['Items'][first_index:second_index]
}
