I'm trying to understand how Relay works.
1.) Let's say I have UpdateProductMutation (FIELDS_CHANGE type) that updates the fields of a single product, e.g. 'title', 'description', etc. I can send the mutation with those field changes and it works. (Say the product here is Banana.)
2.) Now I add a product_categories field of type ProductCategoryConnection to the Product type. And in the UpdateProductMutation mutation, I send a "catIds" array as an additional input field of the mutation (i.e. catIds: {type: new GraphQLList(GraphQLID)}). It also successfully mutates the Product with the product_categories field.
3.) To make it easier to follow, let's say the Banana product had the Fruit and Yellow categories, and I've added Healthy in step 2.)
4.) The problem is, if I queried the Healthy page before step 2 and Relay already has it cached in the store, visiting the Healthy page again after step 2 doesn't show the newly associated product, Banana.
This is my mutation config and fat query.
getFatQuery() {
  return Relay.QL`
    fragment on UpdateProductPayload {
      updatedProduct,
    }
  `;
}

getConfigs() {
  console.warn('getConfigs', this.props);
  return [{
    type: 'FIELDS_CHANGE',
    fieldIDs: {updatedProduct: this.props.id},
  }];
}
I'm not sure if I understand this totally wrong. It seems like FIELDS_CHANGE is not the correct type to use here, because the mutation itself works fine and the mutation payload contains the changed product-category edge. The problem occurs only when I go to a previously fetched related product-category page and don't see the updated product there (Relay doesn't even send the query again). From my understanding, RANGE_ADD and RANGE_DELETE don't allow you to change the normal fields either (e.g. 'title', 'description', etc.). Also, in my case the catIds array could call for either a RANGE_ADD or a RANGE_DELETE, depending on the selection on the page.
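For reference, this is roughly the RANGE_ADD config I'd expect to need if I went that route (the parent, connection, and edge names here are my guesses, not something from my actual schema):

```javascript
// Hypothetical RANGE_ADD config for appending a product to a category's
// products connection; all field names below are assumptions.
function rangeAddConfig(categoryId) {
  return {
    type: 'RANGE_ADD',
    parentName: 'category',      // payload field pointing at the parent node
    parentID: categoryId,        // client-side ID of the category node
    connectionName: 'products',  // connection being appended to
    edgeName: 'productEdge',     // edge returned in the mutation payload
    rangeBehaviors: {
      '': 'append',              // for unfiltered ranges, append the new edge
    },
  };
}
```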
Any advice on which direction I should take?
I'm using Angular 5 and mat-accordion to show a list of authors. Each author has written multiple books and articles. The author's name appears in the panel-header and the content of the panel shows all of the books, articles, etc.
Because I want to display 100+ authors each with 50+ entries, I don't want to populate the entire accordion and content at once. What I'd like to have happen is that when the user clicks on an author, it kicks off a service that queries the database and then fills the panel content as appropriate. If the panel is closed, the content should remain so re-expanding the panel doesn't kick off another database query.
So when I visit the page, I see the authors Alice, Bob, and Eve. When I click on Alice, the app queries the database, gets back Alice's entries, renders the content, then the accordion expands. When I click on Eve, the app should close Alice's panel, query the db, get Eve's entries, render the content, and finally expand the panel.
If I click on Alice again, Eve's panel closes, but since the content is already there for Alice, there is no db query or rendering; it just expands. The docs say to use ng-template, but I'm not sure how to do that, and really not sure how to do it so the content remains after the panel is closed. I'm not worried about the data changing in a way that would require fetching Alice's data again.
Any examples out there of the best way to handle this?
thanks!
G. Tranter's answer was correct; I was on the right path. If anyone else ends up on this page, here is what I ended up doing.
ngOnInit() {
  this.authorsRetrieved.subscribe(authors => {
    this.allAuthors = authors as Array<any>;
    // as authors are added and deleted, the author_id won't equal the number of
    // authors, so get the highest id number, create an array that long (+1 so
    // the highest id is a valid index), then fill it with blanks so the keys
    // have some value
    this.worksRetrieved = new Array(
      Math.max.apply(Math, this.allAuthors.map(a => a.author_id)) + 1
    );
    this.worksRetrieved.fill([{}]);
  });
}

showAuthorsWorks(authorID: number = -1) {
  if (authorID >= this.worksRetrieved.length) {
    const tempArray = new Array(authorID - this.worksRetrieved.length + 1);
    tempArray.fill([{}]);
    this.worksRetrieved = this.worksRetrieved.concat(tempArray);
  }
  // only make the network call if we have to;
  // because we filled the id array, we can't just use length
  if (typeof this.worksRetrieved[authorID][0]['manuscript_id'] === 'undefined') {
    this.authorWorksService.getAuthorWorks(authorID).subscribe(works => {
      // replace the placeholder at this index with the retrieved works
      this.worksRetrieved.splice(authorID, 1, works as Array<any>);
    });
  }
}
I added a check for the almost impossible situation where the array length is less than the max author_id. You have to create an empty array of N elements and then fill it; if you don't, the slots are just holes, and you can't reliably write data to an element that was never assigned, even though the Chrome console shows the length as N with the elements there, just empty.
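One more caveat I noticed along the way: Array.prototype.fill stores the same reference in every slot, so all the placeholders are literally the same array. Mutating one placeholder in place would change all of them, so always replace the element at an index instead:

```javascript
// fill() copies the reference, not the value: every slot holds the SAME array.
const slots = new Array(3).fill([{}]);
slots[0].push('x');                  // mutates the shared placeholder array
console.log(slots[1].length);        // 2 - the "other" slots changed too

// Replacing the element at an index avoids the shared reference.
slots.splice(1, 1, [{ manuscript_id: 42 }]);
console.log(slots[1] === slots[2]);  // false - slot 1 is now a distinct array
```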
Thanks again!
If you are referring to the MatExpansionPanelContent directive used with ng-template, all that does is delay loading content until the panel is opened. It doesn't know whether or not it has already been loaded. So if you are using a bound expression for content such as {{lazyContent}} that will be evaluated every time the tab is opened. You need to manage content caching yourself. One easy way to do that is via a getter.
In your component:
_lazyContent: string;

get lazyContent() {
  if (!this._lazyContent) {
    this._lazyContent = fetchContent();
  }
  return this._lazyContent;
}
Plus in your HTML:
<mat-expansion-panel>
  ...
  <ng-template matExpansionPanelContent>
    {{lazyContent}}
  </ng-template>
  ...
</mat-expansion-panel>
So the ng-template takes care of the lazy loading, and the getter takes care of caching the content.
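When the content is per-panel (one entry per author), the same getter idea generalizes to a keyed cache. A minimal framework-free sketch, where fetchWorks stands in for your real service call:

```javascript
// Minimal per-key content cache: fetch once per author, then reuse.
// fetchWorks is a placeholder for the real data service, not a real API.
function makeWorksCache(fetchWorks) {
  const cache = new Map();
  return function worksFor(authorId) {
    if (!cache.has(authorId)) {
      cache.set(authorId, fetchWorks(authorId)); // only hit on first access
    }
    return cache.get(authorId);
  };
}
```

Calling `worksFor(author.id)` inside the ng-template then re-evaluates on every open, but only the first call per author actually fetches.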
It is very easy to set up pagination with Relay; however, there's a small detail that is unclear to me.
Both of the relevant parts in my code are marked with comments; the other code is for additional context.
const postType = new GraphQLObjectType({
  name: 'Post',
  fields: () => ({
    id: globalIdField('Post'),
    title: {
      type: GraphQLString
    },
  }),
  interfaces: [nodeInterface],
})

const userType = new GraphQLObjectType({
  name: 'User',
  fields: () => ({
    id: globalIdField('User'),
    email: {
      type: GraphQLString
    },
    posts: {
      type: postConnection,
      args: connectionArgs,
      resolve: async (user, args) => {
        // getUserPosts() is in the next code block -> it gets the data from the db
        // I pass args (e.g. "first", "after", etc.) and the user id (to get only that user's posts)
        const posts = await getUserPosts(args, user._id)
        return connectionFromArray(posts, args)
      }
    },
  }),
  interfaces: [nodeInterface],
})

const {connectionType: postConnection} =
  connectionDefinitions({name: 'Post', nodeType: postType})

exports.getUserPosts = async (args, userId) => {
  try {
    // using MongoDB and Mongoose, but the question is relevant with every db
    // .limit() -> how many posts to return
    const posts = await Post.find({author: userId}).limit(args.first).exec()
    return posts
  } catch (err) {
    return err
  }
}
Cause of my confusion:
If I pass the first argument and use it in the db query to limit the returned results, hasNextPage is always false. This is efficient, but it breaks hasNextPage (and hasPreviousPage if you use last).
If I don't pass the first argument and don't use it in the db query to limit the returned results, hasNextPage works as expected, but the query returns all the items (could be thousands).
Even if the database is on the same machine (which isn't the case for bigger apps), this seems very inefficient. Please prove me wrong!
As far as I know, GraphQL doesn't do any server-side caching, so there would be no point in returning all the results (and even if it did, users don't browse 100% of the content).
What's the logic here?
One solution that comes to my mind is to add +1 to the first value in getUserPosts; it would retrieve one excess item, and hasNextPage would probably work. But this feels like a hack, and there's always an excess item returned; the waste would grow relatively quickly with many connections and requests.
Are we expected to hack it like that? Is it expected to return all the results?
Or did I misunderstand the whole relationship between the database and GraphQL / Relay?
What if I used FB DataLoader and Redis? Would that change anything about that logic?
Cause of my confusion
The utility function connectionFromArray of graphql-relay-js library is NOT the solution to all kinds of pagination needs. We need to adapt our approach based on our preferred pagination models.
The connectionFromArray function derives the values of hasNextPage and hasPreviousPage from the given array. So, what you observed and mentioned in "Cause of my confusion" is the expected behavior.
As for your confusion whether to load all data or not, it depends on the problem at hand. Loading all items may make sense in several situations such as:
the number of items is small and you can afford the memory required to store those items.
the items are frequently requested and you need to cache them for faster access.
Two common pagination models are numbered pages and infinite scrolling. The GraphQL connection specification is not opinionated about pagination model and allows both of them.
For numbered pages, you can use an extra field totalPost in your GraphQL type, which can be used to display links to numbered pages in your UI. On the back-end, you can use a feature like skip to fetch only the needed items. The totalPost field and the current page number eliminate the dependency on hasNextPage or hasPreviousPage.
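As an illustration, the skip/limit window for a given page number can be computed like this (plain JavaScript sketch; totalPost follows the field name above, everything else is assumed):

```javascript
// Page-number pagination: derive skip/limit for the db query and the
// total page count for the UI from the item count.
function pageWindow(totalPost, page, pageSize) {
  const totalPages = Math.max(1, Math.ceil(totalPost / pageSize));
  const current = Math.min(Math.max(1, page), totalPages); // clamp to range
  return {
    skip: (current - 1) * pageSize, // e.g. Post.find(...).skip(skip).limit(limit)
    limit: pageSize,
    totalPages,
  };
}
```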
For infinite scrolling, you can use the cursor field, which can be used as the value for after in your query. On the back-end, you can use the value of cursor to retrieve the next items (value of first). See an example of using cursor in Relay documention on GraphQL connection. See this answer about GraphQL connection and cursor. See this and this blog posts, which will help you better understand the idea of cursor.
What's the logic here?
Are we expected to hack it like that?
No, ideally we're not expected to hack and forget about it. That will leave technical debt in the project, which is likely to cause more problems in the long term. You may consider implementing your own function to return a connection object. You will get ideas of how to do that in the implementation of array-connection in graphql-relay-js.
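A simplified sketch of such a function, based on the idea of querying first + 1 items so that hasNextPage can be derived without loading everything (cursor encoding omitted; this is not the graphql-relay-js implementation):

```javascript
// rows should be the result of a db query with limit = first + 1,
// e.g. Post.find(...).limit(first + 1). The one excess row tells us
// whether another page exists; it is trimmed before building edges.
function connectionFromSlice(rows, first) {
  const hasNextPage = rows.length > first;
  return {
    edges: rows.slice(0, first).map(node => ({ node })),
    pageInfo: { hasNextPage },
  };
}
```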
Is it expected to return all the results?
Again, depends on the problem.
What if I used FB DataLoader and Redis? Would that change anything about that logic?
You can use Facebook's dataloader library to cache and batch your queries. Redis is another option for caching the results. If (1) you load all items using dataloader or store all items in Redis, and (2) the items are lightweight, you can easily create an array of all items (following the KISS principle). If the items are heavyweight, creating the array may be an expensive operation.
We are running into an issue with updating a non-scalar complex type.
Our simplified metadata looks like this:
Table

  shortName: 'Table',
  defaultResourceName: "/table",
  dataProperties: {
    name: { max: 50, required: true },
    description: { max: 500 },
    columns: { complexType: 'Column', isScalar: false }
    ...
  }

Column

  shortName: 'Column',
  isComplexType: true,
  dataProperties: {
    name: { max: 50, required: true },
    description: { max: 500 },
    ...
  }
Our backend is a document-based NoSQL db.
In the UI, we display all the tables and their columns in a tree-like structure. The user can click on a table name or a column and edit its properties in a property editor. Table and Column have separate editors, even though internally they are part of the same object.
Initially, when we display the tree, we only fetch a limited number of data properties from the JSON, only the ones that need to be displayed in the tree. But when the user clicks on a particular table or column, we fetch the entire table (columns are complex types) from the server and update the cache, to display them in the editor.
We are using Knockout as the model library, so all the properties are Knockout observables. We are passing the same Breeze entity objects to the viewmodels of these editors as well as to the tree.
If the user clicks on a table in the tree and edits the name or description in the editor, we see the table name in the tree change as well, as it should, since they are both the same observable. But when I click on a column name in the tree and edit the column name in the editor, I do not see the change reflected in the tree.
What I think is happening is this: when I re-query the same object from the server to display it in the editor, the table name, which is a simple property, is updated with the new value from the server, but the column name, which is part of a complex object, is actually replaced with new values from the server, since each complex object in the array is itself replaced. So it seems the tree and the editor end up with different observables.
The funny thing is, if after clicking on a particular column and having it displayed in the editor I move to a different module (we are using a SPA) and then come back to the original module, clicking on the same column and updating the name does reflect the change in the tree. But not the first time.
Could this be a bug, or am I missing something? Is there a workaround?
We are using breeze js 1.5.6
Thanks!
Let's say there is a mutation which updates more than one node server-side.
What is the preferred way to let Relay update the nodes in the local store?
In other words:
Is it possible to return a list of updated nodes and tell Relay to apply some kind of FIELDS_CHANGE for every returned node?
Yes, it is possible to update multiple nodes and to have Relay update its client store accordingly. There are two cases:
1) Updated nodes belong to the same parent:
The changed nodes do not need to be updated individually with a FIELDS_CHANGE mutator configuration; we just need to update the parent of those nodes with FIELDS_CHANGE. The child fields, which should be fetched and updated, are specified in the fat query.
A good example is MarkAllTodosMutation in Relay's todo example, where viewer is the parent of the todos connection (the todo nodes). Since this mutation changes multiple nodes under viewer, in the getConfigs() function we specify that viewer should be updated:
getConfigs() {
  return [{
    type: 'FIELDS_CHANGE',
    fieldIDs: {
      viewer: this.props.viewer.id,
    },
  }];
}
In the getFatQuery() function, we specify that the todos and completedCount fields of viewer should be fetched and updated in the store. The fields of each todo to be updated in the store are specified in fragments or in the fat query.
getFatQuery() {
  return Relay.QL`
    fragment on MarkAllTodosPayload #relay(pattern: true) {
      viewer {
        completedCount,
        todos,
      },
    }
  `;
}
2) Updated nodes belong to different parents:
On the client-side mutation, the nodes to be updated are passed as props to the mutation. In the getConfigs() function, each node needs to be configured with FIELDS_CHANGE. The getFatQuery() function specifies which fields of the changed nodes should be updated in the store; ideally, it should have all the fields that can be affected by the mutation.
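For instance, if the client-side IDs of the affected nodes are available as props, getConfigs() can collect them into a single FIELDS_CHANGE entry. A sketch (the payload field names used as keys are illustrative):

```javascript
// nodesById maps payload field names (as they appear in the mutation
// payload) to the client-side IDs of the nodes Relay should refetch.
function multiNodeConfig(nodesById) {
  return [{
    type: 'FIELDS_CHANGE',
    fieldIDs: Object.assign({}, nodesById),
  }];
}
```

In a mutation class this would be called from getConfigs(), e.g. `multiNodeConfig({ updatedPost: this.props.post.id, updatedAuthor: this.props.author.id })`.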
I have a Firebase array called products that contains items like this one:
JhMIecX5K47gt0VaoQN: {
  brand: "Esprit",
  category: "Clothes",
  gender: "Boys",
  name: "Pants",
}
Is it possible to query products from this array using multiple filters with the Firebase API? For example, I might want to filter by brand and category (all "Pants" by "Esprit"). So far I've tried ordering by a child key and then limiting the start and end of this ordering, but I can't figure out how to apply more filters.
I'm using the iOS SDK.
Firebase can only order/filter on one property (or value) at a time. If you call an orderBy... method multiple times in a single query, it will raise an error to indicate this is not allowed.
Actually, in Firebase, when you order by a specific key there is another index involved: push IDs (if you use them). They are almost perfectly ordered chronologically, so you are effectively ordering by field XXX plus time.
More details here
In your case this is not about time, so the only solution is to use additional composite indexes:
JhMIecX5K47gt0VaoQN: {
  brand: "Esprit",
  category: "Clothes",
  brandcategoryIndex: "EspritClothes",
  categorybrandIndex: "ClothesEsprit",
  brandnameIndex: "EspritPants",
  namebrandIndex: "PantsEsprit",
  gender: "Boys",
  name: "Pants",
}
In the above example, you can query by:
Category > Brand (and the other way)
Name > Brand (and the other way)
Just make sure you add those fields to your Firebase indexes in the rules section, and that you maintain them any time the items are modified.
(This process of accepting the cost of redundant data for the benefit of performance, which Firebase forces on you in this case, is called denormalization.)
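A small helper can derive the composite fields from the item's own properties, so they never drift out of sync when items are modified (plain JavaScript sketch; field names follow the example above):

```javascript
// Derive the composite index fields from brand/category/name so every
// write keeps them consistent with the source properties.
function withCompositeIndexes(item) {
  return Object.assign({}, item, {
    brandcategoryIndex: item.brand + item.category,
    categorybrandIndex: item.category + item.brand,
    brandnameIndex: item.brand + item.name,
    namebrandIndex: item.name + item.brand,
  });
}
```

Running every item through this helper before saving it to Firebase keeps the redundant fields maintained automatically.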