How do I reset the ScanStreamTransformer Accumulator? - dart

I am writing a Flutter application using the BLOC pattern. I'm currently trying to search a database and return a list of results on the screen.
The first search will complete fine. The issue is when I hit the back button and try to do a second search. The first search results aren't cleared out. Instead they remain on the screen and the new search results are placed below them.
In a nutshell, here's what I'm doing. (I'm using the RxDart library.)
1.) Defining input and output streams:
PublishSubject<SearchRowModel> _searchFetcher =
    PublishSubject<SearchRowModel>();
BehaviorSubject<Map<int, SearchRowModel>> _searchOutput =
    BehaviorSubject<Map<int, SearchRowModel>>();
2.) Piping the streams together in the class constructor.
_searchFetcher.stream.transform(_resultsTransformer()).pipe(_searchOutput);
3.) Adding the results to the fetcher stream
results.searchRows.forEach((SearchRowModel row) {
  _searchFetcher.sink.add(row);
});
4.) Using a ScanStreamTransformer to create a map of the results.
_resultsTransformer() {
  return ScanStreamTransformer(
    (Map<int, SearchRowModel> cache, SearchRowModel row, index) {
      cache[index] = row;
      return cache;
    },
    <int, SearchRowModel>{},
  );
}
From my debugging, I've found that the code in step 4 appears to be the issue. The cache (i.e. the accumulator) isn't getting reset between searches; it just keeps adding additional results to the map.
I've yet to find a way of resetting the map in the accumulator/cache that works. I even tried completely destroying the streams and recreating them, but the original search data in the ScanStreamTransformer accumulator still persisted.
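To make the behaviour concrete: the seed map in step 4 is a single instance that the accumulator keeps returning, so every search appends to that same map. Below is a minimal sketch of the pattern (class, field, and method names are invented for illustration), together with the kind of reset hook I've been trying to get working — keeping a reference to the seed map so it can be cleared between searches:

import 'package:rxdart/rxdart.dart';

// Minimal sketch, not my actual BLoC: SearchRowModel is assumed to be
// defined elsewhere, and the names here are made up for illustration.
class SearchBloc {
  final _searchFetcher = PublishSubject<SearchRowModel>();
  final _searchOutput = BehaviorSubject<Map<int, SearchRowModel>>();

  // Keep a reference to the seed map so it can be emptied between searches.
  final Map<int, SearchRowModel> _cache = <int, SearchRowModel>{};

  SearchBloc() {
    _searchFetcher.stream
        .transform(ScanStreamTransformer(
          (Map<int, SearchRowModel> cache, SearchRowModel row, int index) {
            cache[index] = row;
            return cache;
          },
          _cache,
        ))
        .pipe(_searchOutput);
  }

  // Hypothetical reset hook: because the scan keeps returning this same map
  // instance, clearing it drops the old results (note the event index still
  // keeps counting up across searches).
  void clearResults() => _cache.clear();
}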

Related

Why is the server response after a Drag&Drop so large and slow

I'm having some performance issues dragging cards between grids. From a backend perspective, storing the data from the grids after a change takes about 200 ms.
But then, when the backend work seems to be done, it takes another 2.5 seconds for the frontend to get the response from the request. The request that's taking so long contains two RPC events: grid-drop and grid-dragend.
The response is also unusually large, I think. Just to give you an idea, see screenshot ... notice the tiny scrollbar at the right. 🙂
TTFB is 2.42 s and the download size is about half a MB.
Any ideas what's going on here and how I can eliminate this?
I'm using Vaadin 21.0.4, spring boot 2.5.4.
Steps I've taken to optimise performance:
Optimised the DB query + indexing
Used @Cacheable where possible
Implemented the cards using LitElement
This is the drop listener:
ComponentEventListener<GridDropEvent<Task>> dropListener = event -> {
    if (dragSource != null) {
        // The item on top of or below which the source item is dropped.
        // Used to calculate the index of the newly dropped item(s).
        Optional<Task> targetItem = event.getDropTargetItem();
        // Bail out if the item is dropped on an existing row and the dragged
        // items already contain the item that is being dropped on.
        if (targetItem.isPresent() && draggedItems.contains(targetItem.get())) {
            return;
        }
        // Add dragged items to the grid of the target room
        Grid<Task> targetGrid = event.getSource();
        Optional<Room> room = dayPlanningView.getRoomForGrid(targetGrid);
        // The items of the target grid. Using the ListDataView so this does not retrigger the query.
        List<Task> targetItems = targetGrid.getListDataView().getItems().toList();
        // Calculate the position of the dropped item
        int index = targetItem.map(task -> targetItems.indexOf(task)
                + (event.getDropLocation() == GridDropLocation.BELOW ? 1 : 0))
                .orElse(0);
        room.ifPresent(r -> service.plan(draggedItems, r, index, dayPlanningView.getSelectedDate()));
        // Send an event to update other users
        Optional<ScheduleUpdatedEvent> scheduleUpdatedEvent =
                room.map(r -> new ScheduleUpdatedEvent(PlanningMasterDetailView.this, r.getId()));
        scheduleUpdatedEvent.ifPresent(Broadcaster::broadcast);
        // Remove items from the source grid. Using the ListDataView so items
        // can be removed without a DB round-trip.
        productionOrderGrid.getListDataView().removeItems(draggedItems);
    }
};
I'm a bit stuck now, as I'm kinda out of ideas 😦
Thanks
You should use TemplateRenderer/LitRenderer instead of ComponentRenderer, because the generated server-side components hurt performance.
Read more here: https://vaadin.com/blog/top-5-most-common-vaadin-performance-pitfalls-and-how-to-avoid-them
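For illustration, a rough sketch of what such a column could look like with TemplateRenderer; this is not the poster's code, and it assumes a Grid<Task> called taskGrid and that Task exposes getName() and getDuration():

// Rough sketch: render the card markup on the client instead of creating
// server-side components per row. Adjust the template and getters to your model.
taskGrid.addColumn(TemplateRenderer.<Task>of(
            "<div class='card'><b>[[item.name]]</b> <span>[[item.duration]]</span></div>")
        .withProperty("name", Task::getName)
        .withProperty("duration", Task::getDuration))
    .setHeader("Tasks");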

Spring-data-elasticsearch: Result window is too large (index.max_result_window)

We retrieve information from Elasticsearch 2.7.0 and we allow the user to go through the results. When the user requests a high page number we get the following error message:
Result window is too large, from + size must be less than or equal to:
[10000] but was [10020]. See the scroll api for a more efficient way
to request large data sets. This limit can be set by changing the
[index.max_result_window] index level parameter
The thing is we use pagination in our requests so I don't see why we get this error:
@Autowired
private ElasticsearchOperations elasticsearchTemplate;
...
elasticsearchTemplate.queryForPage(buildQuery(query, pageable), Document.class);
...
private NativeSearchQuery buildQuery(String query, Pageable pageable) {
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    boolQueryBuilder.should(QueryBuilders.boolQuery()
            .must(QueryBuilders.termQuery(term, query.toUpperCase())));
    NativeSearchQueryBuilder nativeSearchQueryBuilder = new NativeSearchQueryBuilder()
            .withIndices(DOC_INDICE_NAME)
            .withTypes(indexType)
            .withQuery(boolQueryBuilder)
            .withPageable(pageable);
    return nativeSearchQueryBuilder.build();
}
I don't understand the error, because we retrieve pageable.size (20 elements) every time... Do you have any idea why we get this?
Unfortunately, even when paging results, Spring Data Elasticsearch requests a much larger result window from Elasticsearch. So you have two options: the first is to change the value of this parameter.
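For the first option, max_result_window is a dynamic index setting, so it can be raised with a settings update; a sketch only — adjust the index name and limit to your setup, and keep in mind that large windows make deep pagination memory-hungry:

PUT /your_index/_settings
{
  "index": {
    "max_result_window": 20000
  }
}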
The second is to use the scan/scroll API. However, as far as I understand, in that case the pagination is done manually, since the API is intended for sequential reading of large data sets (like scrolling with your mouse).
A sample:
List<Pessoa> allItens = new ArrayList<>();
// "build" is the NativeSearchQuery constructed as shown above
String scrollId = elasticsearchTemplate.scan(build, 1000, false, Pessoa.class);
Page<Pessoa> page = elasticsearchTemplate.scroll(scrollId, 5000L, Pessoa.class);
while (true) {
    if (!page.hasContent()) {
        break;
    }
    allItens.addAll(page.getContent());
    page = elasticsearchTemplate.scroll(scrollId, 5000L, Pessoa.class);
}
This code shows you how to read ALL the data from your index; you then have to pick the requested page out of the scrolled results yourself.
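For instance, a quick sketch of slicing that page out afterwards (not part of the original answer; pageNumber and pageSize are assumed to come from the original Pageable):

// Slice the requested page out of the fully scrolled list.
int from = pageNumber * pageSize;
int to = Math.min(from + pageSize, allItens.size());
List<Pessoa> requestedPage = from < allItens.size()
        ? allItens.subList(from, to)
        : java.util.Collections.<Pessoa>emptyList();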

AWS DynamoDB session table keeps growing, can't delete expired sessions

The ASP.NET_SessionState table grows all the time, already at 18 GB, with no sign of expired sessions ever being deleted.
We have tried to execute DynamoDBSessionStateStore.DeleteExpiredSessions, but it seems to have no effect.
Our system is running fine, sessions are created, and end users are not aware of the issue. However, it doesn't make sense that the table keeps growing all the time...
We have triple-checked permissions/security and everything seems to be in order. We use SDK version 3.1.0. What else remains to be checked?
Having looked at the code for the DeleteExpiredSessions method on GitHub, it does not surprise me that this isn't working with your table being over 18 GB, which is quite large (in this context).
Here is the code:
public static void DeleteExpiredSessions(IAmazonDynamoDB dbClient, string tableName)
{
    LogInfo("DeleteExpiredSessions");
    Table table = Table.LoadTable(dbClient, tableName, DynamoDBEntryConversion.V1);
    ScanFilter filter = new ScanFilter();
    filter.AddCondition(ATTRIBUTE_EXPIRES, ScanOperator.LessThan, DateTime.Now);

    ScanOperationConfig config = new ScanOperationConfig();
    config.AttributesToGet = new List<string> { ATTRIBUTE_SESSION_ID };
    config.Select = SelectValues.SpecificAttributes;
    config.Filter = filter;

    DocumentBatchWrite batchWrite = table.CreateBatchWrite();
    Search search = table.Scan(config);

    do
    {
        List<Document> page = search.GetNextSet();
        foreach (var document in page)
        {
            batchWrite.AddItemToDelete(document);
        }
    } while (!search.IsDone);

    batchWrite.Execute();
}
The above algorithm is executed in two parts. First it performs a Search (table scan) using a filter to identify all expired records. These are then added to a DocumentBatchWrite request that is executed as the second step.
Since your table is so large the table scan step will take a very, very long time to complete before a single record is deleted. Basically, the above algorithm is useful for lazy garbage collection on small tables, but does not scale well for large tables.
The best I can tell is that the execution of this is never actually getting past the table scan and you may be consuming all of the read throughput of your table.
A possible solution for you would be to run a slightly modified version of the above method on your own. You would want to call the DocumentBatchWrite inside of the do-while loop so that records start to be deleted before the table scan is concluded.
That would look like:
public static void DeleteExpiredSessions(IAmazonDynamoDB dbClient, string tableName)
{
    LogInfo("DeleteExpiredSessions");
    Table table = Table.LoadTable(dbClient, tableName, DynamoDBEntryConversion.V1);
    ScanFilter filter = new ScanFilter();
    filter.AddCondition(ATTRIBUTE_EXPIRES, ScanOperator.LessThan, DateTime.Now);

    ScanOperationConfig config = new ScanOperationConfig();
    config.AttributesToGet = new List<string> { ATTRIBUTE_SESSION_ID };
    config.Select = SelectValues.SpecificAttributes;
    config.Filter = filter;

    Search search = table.Scan(config);

    do
    {
        // Perform a batch delete for each page returned
        DocumentBatchWrite batchWrite = table.CreateBatchWrite();
        List<Document> page = search.GetNextSet();
        foreach (var document in page)
        {
            batchWrite.AddItemToDelete(document);
        }
        batchWrite.Execute();
    } while (!search.IsDone);
}
Note: I have not tested the above code; it is just a simple modification of the open-source code, so it should work correctly, but it would need to be tested to ensure the pagination works correctly on a table whose records are being deleted while it is being scanned.
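If it helps, a minimal sketch of invoking the modified method yourself (not from the original answer; it assumes the ASP.NET_SessionState table name from your question and the default client configuration chain for credentials/region):

// Run the modified cleanup against the session-state table.
using (IAmazonDynamoDB client = new AmazonDynamoDBClient())
{
    DeleteExpiredSessions(client, "ASP.NET_SessionState");
}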

API to modify Firefox downloads list

I am looking to write a small Firefox add-on that detects when files that were downloaded are (or have been) deleted locally and removes the corresponding entry from the Firefox download list.
Can anybody point me to the relevant API to manipulate the download list? I cannot seem to find it.
The relevant API is PlacesUtils which abstracts the complexity of the Places database.
If your code runs in the context of a chrome window then you get a PlacesUtils global variable for free. Otherwise (bootstrapped, Add-on SDK, whatever) you have to import PlacesUtils.jsm.
Cu.import("resource://gre/modules/PlacesUtils.jsm");
As far as Places is concerned, downloaded files are nothing more than a special kind of visited pages, annotated accordingly. It's a matter of just one line of code to get an array of all downloaded files.
var results = PlacesUtils.annotations.getAnnotationsWithName("downloads/destinationFileURI");
Since we asked for the destinationFileURI annotation, each element of the results array holds the download location in the annotationValue property as a file: URI spec string.
With that you can check if the file actually exists:
function getFileFromURIspec(fileurispec){
  // if Services is not available in your context: Cu.import("resource://gre/modules/Services.jsm");
  var filehandler = Services.io.getProtocolHandler("file").QueryInterface(Ci.nsIFileProtocolHandler);
  try{
    return filehandler.getFileFromURLSpec(fileurispec);
  }
  catch(e){
    return null;
  }
}
getFileFromURIspec will return an instance of nsIFile, or null if the spec is invalid, which shouldn't happen in this case, but a sanity check never hurts. With that you can call the exists() method, and if it returns false then the associated page entry in Places is eligible for removal. We can tell which page that is by its URI, which conveniently is also a property of each element of the results.
PlacesUtils.bhistory.removePage(result.uri);
To sum it up
var results = PlacesUtils.annotations.getAnnotationsWithName("downloads/destinationFileURI");
results.forEach(function(result){
  var file = getFileFromURIspec(result.annotationValue);
  if(!file){
    // I don't know how you should treat this edge case
    // ask the user, just log, remove, some combination?
  }
  else if(!file.exists()){
    PlacesUtils.bhistory.removePage(result.uri);
  }
});

How to cache many images(in loop) properly using forge.file.cacheURL?

I have a list of products that I download as json file from server. Each item contains a link to the image stored on server.
Now I want to be able to see the products when offline, so I store the downloaded JSON file in forge.prefs http://docs.trigger.io/en/v1.3/modules/prefs.html and pull it out to display the items on screen. It works nicely, but I also need to store the images locally so they are displayed correctly.
To achieve this, I'm trying to use forge.file.cacheURL http://docs.trigger.io/en/v1.3/features/cache.html but can't handle the correct order of operations. To cache the images I go through the JSON file and for each item I call forge.file.cacheURL and store the returned URL back into the JSON item. But here is the problem: forge.file.cacheURL runs asynchronously, so the loop that runs over the items and gathers the local images finishes, and my code goes on to display the images (view the items), while forge.file.cacheURL is still fetching and caching the images in the background. I need some way to detect that the last item has been cached and then refresh the view on screen so it uses the correct image URLs... or something else that will lead to what I need.
Hopefully you understand the concept. How should I handle this properly?
Since v1.4.26 you've been able to permanently store images (see http://docs.trigger.io/en/v1.4/release-notes.html#v1-4-26), rather than just cache them. Depending on your needs, that might be a better option than forge.file.cacheURL and forge.file.isFile.
I don't follow exactly the situation you describe, but something like this will let you wait for several asynchronous things to finish before doing something:
// e.g.
var jsonCache = {
  one: "http://example.com/one.jpg",
  two: "http://example.com/two.jpg",
  three: "http://example.com/three.jpg"
};
var cacheCount = 0;
for (var name in jsonCache) {
  if (jsonCache.hasOwnProperty(name)) {
    // capture name/imageURL per iteration so the async callbacks
    // don't all see the last value of the loop variable
    (function (name, imageURL) {
      cacheCount += 1;
      forge.file.cacheURL(imageURL, function (file) {
        forge.prefs.set(name, file, function () {
          cacheCount -= 1; // race condition, but should be fine (!)
          if (cacheCount <= 0) {
            alert('all cached');
          }
        });
      });
    })(name, jsonCache[name]);
  }
}
