Streams in Dart lag a step behind

I read an article by Filip Hracek (@filiph) that says:
One disadvantage that stems from the asynchronicity of streams is that when you build a StreamBuilder, it always shows initialData first (because it is available synchronously) and only then does it show the first event coming from the Stream. This can lead to a flash (one frame) of missing data. There are ways to prevent this — stay tuned for a more detailed article.
I am facing this issue and need to know how to prevent the flash. It would be great if someone knows a way. I am adding new data to my sink and then accessing my StreamBuilder snapshot to get the data. However, it returns the initial data, which is null (or the seed value if I use RxDart). I want the latest data. I tried using async/await, but it didn't help. I am storing the snapshot.data value in a Map, keyed by the index used in my itemBuilder.
Here is some sample code:
return ListView.builder(
  itemBuilder: (context, int index) {
    return CheckboxListTile(
      title: Text(xyz),
      value: valueMap[key[index]],
      onChanged: (value) {
        bloc.myController.sink.add(value);
        valueMap[key[index]] = snapshot.data; // still holds the previous event here
      },
    );
  },
);
snapshot.data returns old values, so it basically lags one step behind. I understand it's an issue of asynchronous versus synchronous delivery, but I need a way around it.
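One way around both the flash and the one-step lag (a minimal sketch, assuming RxDart's BehaviorSubject; myController is the name from the question) is to seed the subject and use its cached value as initialData, so the StreamBuilder always has the latest real value synchronously:

// Sketch only: BehaviorSubject caches the latest event, so new listeners
// (including StreamBuilder) receive it synchronously.
final myController = BehaviorSubject<bool>.seeded(false);

StreamBuilder<bool>(
  stream: myController.stream,
  // initialData comes from the subject's cached value, not a stale seed,
  // so there is no one-frame flash of missing data.
  initialData: myController.value,
  builder: (context, snapshot) {
    return CheckboxListTile(
      title: Text('xyz'),
      value: snapshot.data,
      // Read the new value from the callback argument; snapshot.data still
      // holds the previous event at this point.
      onChanged: (value) => myController.sink.add(value),
    );
  },
);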

In my case it was hit or miss because I was relying on data being updated by Firestore Cloud Functions. Depending on how quickly the function executed, the data was not always as expected.

Related

How should BloC be structured in Flutter?

I'm new to BloC and Flutter. For a single simple screen it works without problems, but let's take a look at my case; I'm confused about how to use the BloC pattern here.
I have a single screen called Container, which contains a PageView of Content screens. Say I have 5 pages within that PageView; the page count is dynamic. The pages differ only in their data.
I'm thinking about 2 ways of implementing this:
1/ Using ONE single bloc and passing it to my 5 child Content screens.
2/ Using one bloc for the Container and another bloc for each Content. So this seems to be nested blocs: the ContainerBloc will contain the list of ContentBlocs.
For the 1st approach, the problem I see is re-rendering. I will create a list of the data of each page:
List<List<String>> allData = [];
BehaviorSubject<List<List<String>>> _allData = BehaviorSubject<List<List<String>>>();
Observable<List<String>> getData(index) => _allData.stream.map((list) => list[index]); //This stream returns the list at the index
and each page will listen to the data by:
//StreamBuilder in the UI
stream: widget.bloc.getData(index),
and the update method for the data should be like:
void updateData(int index, List<String> newData) {
  List<String> temp = allData[index];
  temp.addAll(newData); // add() would insert the list itself, not its elements
  allData[index] = temp;
  _allData.sink.add(allData);
}
As I understand it, once one page is updated, all the other pages will re-render, because they all listen to the getData(index) stream, which is triggered by _allData.sink.add(allData). So I think all pages will be re-rendered even though their own data has not changed.
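A sketch of one way to avoid those redundant rebuilds (an assumption on my part, not established best practice): emit a fresh inner list only for the changed index, and make each page's stream distinct so unchanged slices are filtered out:

// Emit a new list object for the changed index so the default
// identity-based == can detect the change.
void updateData(int index, List<String> newData) {
  allData[index] = List<String>.from(allData[index])..addAll(newData);
  _allData.sink.add(allData);
}

// distinct() drops consecutive equal events; unchanged slices keep their
// old identity, so those pages never rebuild.
Observable<List<String>> getData(int index) =>
    _allData.stream.map((list) => list[index]).distinct();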
For the 2nd approach, I don't know if it would be best practice to have nested blocs like that. There may be some cases where the ContainerBloc must listen to some of the ContentBlocs' output.
I'm kinda confused now.
Thank you for your time.

How to reload a ResourceTable programmatically in Laravel Nova?

I have a custom resource-tool (a ledger entry tool) that modifies values of a resource as well as inserts additional rows into related resources.
"Account" is the main resources.
"AccountTransaction" and "AccountLog" both get written to when a ledger entry is created. And through events, the account.balance value is updated.
After a successful post of a ledger entry (using Nova.request) in the resource-tool, I would like the new balance value to be shown in the account detail panel, and the new entries in AccountTransaction and AccountLog to be visible.
The simple way would be to simply reload the page, but I am looking for a more elegant solution.
Is it possible to ask these components to refresh themselves from within my resource-tool vue.js component?
Recently had the same issue, until I came across the block of code below.
Nova has Vuex store modules, in which storeFilters is defined.
Committing the filters with an extra empty object and then committing the original filters again "reloads" the resources. I haven't done much more research on this matter, but if you are looking for what I think you are looking for, this should be it.
async reloadResources() {
  this.resourceName = this.$router.currentRoute.params.resourceName || this.$router.currentRoute.name;
  if (this.resourceName) {
    // Back up the current filters, commit a modified copy, then restore the
    // backup: the two mutations make Nova re-fetch the resource tables.
    let filters_backup = _.cloneDeep(this.$store.getters[`${this.resourceName}/filters`]);
    let filters_to_change = _.cloneDeep(filters_backup);
    filters_to_change.push({});
    await this.$store.commit(`${this.resourceName}/storeFilters`, filters_to_change);
    await this.$store.commit(`${this.resourceName}/storeFilters`, filters_backup);
  }
},
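For context, a hypothetical call site inside the resource-tool's Vue component (the method name, endpoint, and resourceId property are illustrative, not part of Nova's API):

// Hypothetical usage: refresh the tables after the ledger entry posts.
async submitLedgerEntry(entry) {
  await Nova.request().post(`/nova-vendor/ledger-entry/${this.resourceId}`, entry);
  await this.reloadResources(); // Account, AccountTransaction, AccountLog re-fetch
}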

Operations on a stream produce a result, but do not modify its underlying data source

I'm unable to understand how "Operations on a stream produce a result, but do not modify its underlying data source" holds, with reference to Java 8 streams.
shapes.stream()
.filter(s -> s.getColor() == BLUE)
.forEach(s -> s.setColor(RED));
As per my understanding, forEach is setting the color of objects from shapes, so how does the statement above hold true?
The reference s isn't being altered in this example; however, no deep copy is taken, and there is nothing to stop you from altering the object it refers to.
You can alter an object via a reference in any context in Java, and there isn't anything to prevent it. You can only prevent the shallow values (the references themselves) from being altered.
NOTE: Just because you are able to do this doesn't mean it's a good idea. Altering an object inside a lambda is likely to be dangerous, as functional programming models assume you are not altering the data being processed (always creating new objects instead).
If you are going to alter an object, I suggest you use a loop (non-functional style) to minimise confusion.
An example of where using a lambda to alter an object has dire consequences is the following.
map.computeIfAbsent(key, k -> {
    // Re-entrant modification of the same map from inside the mapping function.
    // (The inner parameter is renamed to k2; reusing k would not even compile.)
    map.computeIfAbsent(key, k2 -> 1);
    return 2;
});
The behaviour is not deterministic: it can result in both key/value pairs being added, and for a ConcurrentHashMap, the call will never return.
As mentioned here:
Most importantly, a stream isn't a data structure.
You can often create a stream from collections to apply a number of functions on a data structure, but a stream itself is not a data structure. That's so important, I mentioned it twice! A stream can be composed of multiple functions that create a pipeline that data flows through. This data cannot be mutated. That is to say, the original data structure doesn't change. However, the data can be transformed and later stored in another data structure, or perhaps consumed by another operation.
And as per the Java docs:
This is possible only if we can prevent interference with the data source during the execution of a stream pipeline.
And the reason is:
Modifying a stream's data source during execution of a stream pipeline can cause exceptions, incorrect answers, or nonconformant behavior.
That's all theory; live examples are always good. So here we go:
Assume we have a List<String> (say, names) and a stream over it, names.stream(). We can apply .filter(), .reduce(), .map(), etc., but we can never change the source. Meaning: if you try to modify the source (names), you will get a java.util.ConcurrentModificationException.
public static void main(String[] args) {
    List<String> names = new ArrayList<>();
    names.add("Joe");
    names.add("Phoebe");
    names.add("Rose");
    names.stream().map((obj) -> {
        names.add("Monika"); // modifying the source of the stream -> ConcurrentModificationException
        /**
         * If we comment out the above line, we are transforming the data (upper-casing it).
         * However, the original list still holds the lower-case names
         * (the source of the stream never changes).
         */
        return obj.toUpperCase();
    }).forEach(System.out::println);
}
I hope that helps!
I understood the part "do not modify its underlying data source" as: it will not add elements to, or remove them from, the source. I think you are safe, since you alter an element rather than add or remove one.
You can read comments from Tagir and Brian Goetz here, where they do agree that this is sort of fine.
The more idiomatic way to do what you want would be replaceAll, for example:
shapes.replaceAll(x -> {
    if (x.getColor() == BLUE) {
        x.setColor(RED);
    }
    return x;
});
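To illustrate the quoted principle the other way around (a minimal sketch; the withColor copy method is an assumption, not part of the question's Shape class), the purely functional route produces new objects and collects them, leaving the source list untouched:

import java.util.List;
import java.util.stream.Collectors;

// Sketch: assumes Shape has a copy method withColor(...) that returns a new
// object. The pipeline builds a new list and never mutates its source.
List<Shape> recolored = shapes.stream()
        .map(s -> s.getColor() == BLUE ? s.withColor(RED) : s)
        .collect(Collectors.toList());
// 'shapes' still holds the original objects with their original colors.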

Relationship between GraphQL and database when using connection and pagination?

It is very easy to set up pagination with Relay; however, there's a small detail that is unclear to me.
Both of the relevant parts in my code are marked with comments; the other code is there for additional context.
const postType = new GraphQLObjectType({
name: 'Post',
fields: () => ({
id: globalIdField('Post'),
title: {
type: GraphQLString
},
}),
interfaces: [nodeInterface],
})
const userType = new GraphQLObjectType({
name: 'User',
fields: () => ({
id: globalIdField('User'),
email: {
type: GraphQLString
},
posts: {
type: postConnection,
args: connectionArgs,
resolve: async (user, args) => {
// getUserPosts() is in next code block -> it gets the data from db
// I pass args (e.g "first", "after" etc) and user id (to get only user posts)
const posts = await getUserPosts(args, user._id)
return connectionFromArray(posts, args)
}
},
}),
interfaces: [nodeInterface],
})
const {connectionType: postConnection} =
connectionDefinitions({name: 'Post', nodeType: postType})
exports.getUserPosts = async (args, userId) => {
try {
// using MongoDB and Mongoose but question is relevant with every db
// .limit() -> how many posts to return
const posts = await Post.find({author: userId}).limit(args.first).exec()
return posts
} catch (err) {
return err
}
}
Cause of my confusion:
If I pass the first argument and use it in the db query to limit the returned results, hasNextPage is always false. This is efficient, but it breaks hasNextPage (and hasPreviousPage if you use last).
If I don't pass the first argument and don't use it in the db query to limit the returned results, hasNextPage works as expected, but it will return all the items I queried (which could be thousands).
Even if the database is on the same machine (which isn't the case for bigger apps), this seems very, very inefficient and awful. Please prove me wrong!
As far as I know, GraphQL doesn't have any server-side caching, so there wouldn't be any point in returning all the results (and even if it did, users don't browse 100% of the content).
What's the logic here?
One solution that comes to my mind is to add 1 to the first value in getUserPosts; it would retrieve one excess item, and hasNextPage would probably work. But this feels like a hack, and there's always an excess item returned; the waste would grow relatively quickly if there are many connections and requests.
Are we expected to hack it like that? Is it expected to return all the results?
Or did I misunderstand the whole relationship between the database and GraphQL / Relay?
What if I used FB DataLoader and Redis? Would that change anything about that logic?
Cause of my confusion
The utility function connectionFromArray of the graphql-relay-js library is NOT the solution to all kinds of pagination needs; we need to adapt our approach based on our preferred pagination model.
The connectionFromArray function derives the values of hasNextPage and hasPreviousPage from the given array. So what you observed and mentioned in "Cause of my confusion" is the expected behavior.
As for your confusion about whether to load all the data or not, it depends on the problem at hand. Loading all items may make sense in several situations, such as:
the number of items is small and you can afford the memory required to store those items.
the items are frequently requested and you need to cache them for faster access.
Two common pagination models are numbered pages and infinite scrolling. The GraphQL connection specification is not opinionated about pagination model and allows both of them.
For numbered pages, you can use an extra field, totalPost, in your GraphQL type, which can be used to display links to numbered pages in your UI. On the back-end, you can use a feature like skip to fetch only the needed items (sketched below). The totalPost field and the current page number eliminate the dependency on hasNextPage or hasPreviousPage.
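A minimal sketch of that back-end query (Mongoose-flavored, matching the question's getUserPosts; the page and pageSize inputs are assumptions):

// Sketch for numbered pages: totalPost powers the page links, while
// skip/limit fetch only the slice that is actually displayed.
exports.getUserPostsPage = async (page, pageSize, userId) => {
  const totalPost = await Post.countDocuments({author: userId})
  const posts = await Post.find({author: userId})
    .skip((page - 1) * pageSize)
    .limit(pageSize)
    .exec()
  return {posts, totalPost, totalPages: Math.ceil(totalPost / pageSize)}
}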
For infinite scrolling, you can use the cursor field, which can be used as the value for after in your query. On the back-end, you can use the value of cursor to retrieve the next items (the value of first). See the example of using cursor in the Relay documentation on GraphQL connections. See this answer about GraphQL connections and cursors. See this and this blog post, which will help you better understand the idea of the cursor.
What's the logic here?
Are we expected to hack it like that?
No, ideally we're not expected to hack it and forget about it; that would leave technical debt in the project, which is likely to cause more problems in the long term. You may consider implementing your own function to return a connection object. You will get ideas of how to do that from the implementation of array-connection in graphql-relay-js.
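For example, a minimal sketch of such a function (the cursor encoding and the forward-only pageInfo are simplifying assumptions, not the graphql-relay-js implementation): fetch first + 1 items so that hasNextPage is accurate without loading everything:

// Sketch: the one excess item is only a signal and is never returned.
async function getUserPostsConnection(args, userId) {
  const posts = await Post.find({author: userId})
    .limit(args.first + 1)
    .exec()
  const hasNextPage = posts.length > args.first
  const edges = posts.slice(0, args.first).map((post) => ({
    node: post,
    cursor: Buffer.from(post._id.toString()).toString('base64'),
  }))
  return {
    edges,
    pageInfo: {
      hasNextPage,
      hasPreviousPage: false, // forward-only in this sketch
      startCursor: edges.length ? edges[0].cursor : null,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null,
    },
  }
}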
Is it expected the return all the results?
Again, depends on the problem.
What if I used FB DataLoader and Redis? Would that change anything about that logic?
You can use Facebook's dataloader library to cache and batch your queries. Redis is another option for caching the results. If you (1) load all items using dataloader, or store all items in Redis, and (2) the items are lightweight, you can easily create an array of all items (following the KISS principle). If the items are heavyweight, creating the array may be an expensive operation.

How to wait for Angular2 to finish rendering

In case it matters, here's the reason I want to do this:
My application will have many elements in various places on the same page that each individually trigger tiny HTTP requests for small pieces of data (less than a kilobyte, sometimes only tens of bytes). To avoid excessive per-request overhead, I want to combine them all into one larger request. I've got the code for combining and handling the combined request and response done, but I'm not sure how to tell when the requests have stopped coming and it's time to send the batch.
The idea I'm using right now is to use new Future.value().whenComplete() (I'm coding in Dart) to simply wait for the event loop to run, but I don't know whether Angular2's rendering spans multiple iterations of the event loop or not. Is this enough to guarantee that Angular2 has invoked every property binding on the page before my HTTP request goes out, and if not, how can I get such a guarantee?
I don't think there is a better way.
Instead of
new Future.value().whenComplete()
You can just use
new Future(() {
// delayed code here
});
or to delay a bit more
new Future.delayed(const Duration(milliseconds: 10), () {
// delayed code here
});
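To make the batching idea concrete (a sketch under assumptions; _pending, _flushScheduled, and sendBatch are illustrative names, not from any library): queue each tiny request and flush them all once per event-loop turn:

// Sketch: coalesce many tiny requests into one batch per event-loop turn.
final List<String> _pending = [];
bool _flushScheduled = false;

void request(String key) {
  _pending.add(key);
  if (!_flushScheduled) {
    _flushScheduled = true;
    // new Future(...) runs after the current event-loop turn and its
    // microtasks, by which time every binding in this turn has queued
    // its request.
    new Future(() {
      _flushScheduled = false;
      sendBatch(new List.from(_pending)); // one combined HTTP request
      _pending.clear();
    });
  }
}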
