I have two Relay mutations that I'm nesting to first add an object and then set its name. I believe what I'm passing to the second mutation is in fact data fetched by Relay, but Relay appears to disagree. The code in the React view is as follows:
Relay.Store.update(
  new AddCampaignFeatureLabelMutation({
    campaign: this.props.campaign
  }),
  {
    onSuccess: (data) => {
      Relay.Store.update(
        new FeatureLabelNameMutation({
          featureLabel: data.addCampaignFeatureLabel.featureLabelEdge.node,
          name: this.addLabelInputField.value
        })
      );
    },
    onFailure: () => {}
  }
);
This does work, but gives me a warning:
Warning: RelayMutation: Expected prop `featureLabel` supplied to `FeatureLabelNameMutation` to be data fetched by Relay. This is likely an error unless you are purposely passing in mock data that conforms to the shape of this mutation's fragment.
Why does Relay think the data isn't fetched? Do I maybe need to explicitly return the new featureLabel in the payload somehow?
I ran into the same problem and it took me some time to figure out what was going on, so this might help others:
As the warning says, you have to provide an entity to the mutation that was fetched by Relay. BUT what the warning does not say is that it has to be fetched with the mutation in mind.
So basically, in the initial query, you have to add the mutation you are going to execute on the entity later, like this:
fragment on Person {
  firstname,
  lastname,
  language,
  ${UpdatePersonMutation.getFragment('person')}
}
This will add the necessary pieces to the entity in the store which are needed by the mutation.
In your case, what you have to do is add the FeatureLabelNameMutation getFragment to your AddCampaignFeatureLabelMutation query. This will bring back your featureLabel entity with the information the FeatureLabelNameMutation needs to succeed without the warning.
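For illustration, a hedged sketch of that change (the payload field names are assumptions based on the question's onSuccess handler; here the fragment is composed into the first mutation's fat query):

// Inside AddCampaignFeatureLabelMutation: include the second mutation's
// fragment so the new node comes back with the data it declares it needs.
getFatQuery() {
  return Relay.QL`
    fragment on AddCampaignFeatureLabelPayload {
      featureLabelEdge {
        node {
          ${FeatureLabelNameMutation.getFragment('featureLabel')}
        }
      }
    }
  `;
}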
The Relay documentation is very, very poor in this and many other areas.
Relay expects any fragments for your mutation to come from your props. Since you're using data coming from your callback and not something from your container props, Relay raises that warning.
Take a look at the source: https://github.com/facebook/relay/blob/master/src/mutation/RelayMutation.js#L289-L307
I am playing around with an OData service and I am very confused about when to use this:
var oModel = new sap.ui.model.odata.ODataModel("proxy/http/services.odata.org/V3/(S(k42qhed3hw4zgjxfnhivnmes))/OData/OData.svc");
this.getView().setModel(oModel);
vs
var oModel = new sap.ui.model.odata.ODataModel("odatserviceurl", true);
var productsModel = new JSONModel();
// read(sPath, oContext, oUrlParams, bAsync, fnSuccess, fnError)
oModel.read("/Products",
    null,   // no binding context
    null,   // no additional URL parameters
    false,  // synchronous request
    function _OnSuccess(oData, response) {
        // copy the service results into a client-side JSON model
        var data = { "ProductCollection": oData.results };
        productsModel.setData(data);
    },
    function _OnError(error) {
        console.log(error);
    }
);
this.getView().setModel(productsModel);
I have two working examples using both approaches, but I am not able to figure out why I would use the read method if I can achieve the same with the first version. Please explain, or point me to documentation that can clear up my confusion.
OK, let's start with the models:
JSON model: The JSON model is a client-side model and, therefore, intended for small datasets which are completely available on the client. The JSON model supports two-way binding. NOTE: no server-side call is made on filtering, searching, or refresh.
OData model: The OData model is a server-side model: the dataset is only available on the server, and the client only knows the currently visible rows and fields. This also means that sorting and filtering on the client are not possible; for this, the client has to send a request to the server. In other words, searching/filtering calls the OData service again.
Now, let's look at scenarios where we would use these models:
Scenario 1: Showing data to the user in a list/table/display form, where data manipulation is limited to searching and filtering. Here, I would bind the OData model directly to the controls, as only fetching of data is required (your method 1). (NOTE: one-way binding.) Remember, here every change requires a call to the server.
Scenario 2: I have an application with multiple inputs; the user can make changes, and some fields are calculated and mandatory. All in all, many user changes are made which may be temporary, and the user might not want to save them. Here, you don't want to send these temporary changes to the backend yet; you may want to manipulate and validate the data before sending it. In this case we use the JSON model after reading data from the OData model (your method 2): store the changes in the local JSON model, validate and manipulate them, and finally send the data using OData create/update, as sketched below. Remember, here changes DO NOT require a call to the server, as the data is present in the local JSON model.
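As a hedged sketch of that final step (the entity set and payload are illustrative, and the exact create signature varies across UI5 versions, so check the API reference for yours):

// After validating the local JSON model, push the data back to the
// server through the OData model with a create (POST) request.
var oPayload = productsModel.getProperty("/ProductCollection/0");
oModel.create("/Products", oPayload, {
    success: function (oData) {
        // e.g. refresh bindings or show a confirmation
    },
    error: function (oError) {
        console.log(oError);
    }
});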
Let me know if this helps you. :)
EDIT: Additional information:
As per your comment:
Documentation says `oModel.read` triggers a GET request, but `new sap.ui.model.odata.ODataModel("proxy/http/services.odata.org/V3/(S(k42qhed3hw4zgjxfnhivnmes))/OData/OData.svc")` does the same thing, so why and when should one use `oModel.read`?
Here is where you misunderstood. The code
new sap.ui.model.odata.ODataModel("proxy/http/services.odata.org/V3/(S(k42qhed3hw4zgjxfnhivnmes))/OData/OData.svc") will NOT send a read/GET request. It calls the OData service and fetches the metadata of the service. A service can have multiple entities.
For example, the service http://services.odata.org/Northwind/Northwind.svc/ has multiple entity sets such as Categories, Customers, Employees etc. So, when I declare new sap.ui.model.odata.ODataModel("http://services.odata.org/Northwind/Northwind.svc/"), it will fetch the metadata for the service (not the actual data). Only when you call the desired entity set will it fetch the data. The entity set is specified either:
When you call the read method (as you have done with '/Products'), or
By binding the entity set name directly to a control such as a List or Table (items='{/Products}'), as sketched below.
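For example, a minimal sketch of option 2 (sap.m.List and the Name field are assumptions based on the OData.org demo service):

// Binding the entity set to a list control; the control itself
// triggers the GET request for /Products when it renders.
var oList = new sap.m.List({
    items: {
        path: "/Products",
        template: new sap.m.StandardListItem({ title: "{Name}" })
    }
});
oList.setModel(oModel);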
It is very easy to set up pagination with Relay; however, there's a small detail that is unclear to me.
Both of the relevant parts in my code are marked with comments; the other code is there for additional context.
const postType = new GraphQLObjectType({
  name: 'Post',
  fields: () => ({
    id: globalIdField('Post'),
    title: {
      type: GraphQLString
    },
  }),
  interfaces: [nodeInterface],
})

const userType = new GraphQLObjectType({
  name: 'User',
  fields: () => ({
    id: globalIdField('User'),
    email: {
      type: GraphQLString
    },
    posts: {
      type: postConnection,
      args: connectionArgs,
      resolve: async (user, args) => {
        // getUserPosts() is in the next code block -> it gets the data from the db
        // I pass args (e.g. "first", "after" etc) and user id (to get only user posts)
        const posts = await getUserPosts(args, user._id)
        return connectionFromArray(posts, args)
      }
    },
  }),
  interfaces: [nodeInterface],
})

const {connectionType: postConnection} =
  connectionDefinitions({name: 'Post', nodeType: postType})
exports.getUserPosts = async (args, userId) => {
  try {
    // using MongoDB and Mongoose but the question is relevant for every db
    // .limit() -> how many posts to return
    const posts = await Post.find({author: userId}).limit(args.first).exec()
    return posts
  } catch (err) {
    return err
  }
}
Cause of my confusion:
If I pass the first argument and use it in the db query to limit the returned results, hasNextPage is always false. This is efficient, but it breaks hasNextPage (and hasPreviousPage if you use last).
If I don't pass the first argument and don't use it in the db query to limit the returned results, hasNextPage works as expected, but the query returns all the items (could be thousands).
Even if the database is on the same machine (which isn't the case for bigger apps), this seems very, very, very inefficient and awful. Please prove me wrong!
As far as I know, GraphQL doesn't have any server-side caching, so there wouldn't be any point in returning all the results (and even if it did, users don't browse 100% of the content).
What's the logic here?
One solution that comes to my mind is to add 1 to the first value in getUserPosts; it would retrieve one excess item, and hasNextPage would probably work. But this feels like a hack, and there's always an excess item returned; that would add up relatively quickly with many connections and requests.
Are we expected to hack it like that? Is it expected to return all the results?
Or did I misunderstand the whole relationship between the database and GraphQL / Relay?
What if I used FB DataLoader and Redis? Would that change anything about that logic?
Cause of my confusion
The utility function connectionFromArray of the graphql-relay-js library is NOT the solution to all kinds of pagination needs. We need to adapt our approach based on our preferred pagination model.
The connectionFromArray function derives the values of hasNextPage and hasPreviousPage from the given array. So, what you observed and mentioned in "Cause of my confusion" is the expected behavior.
As for your confusion whether to load all data or not, it depends on the problem at hand. Loading all items may make sense in several situations such as:
the number of items is small and you can afford the memory required to store those items.
the items are frequently requested and you need to cache them for faster access.
Two common pagination models are numbered pages and infinite scrolling. The GraphQL connection specification is not opinionated about pagination model and allows both of them.
For numbered pages, you can use an extra field totalPost in your GraphQL type, which can be used to display links to numbered pages in your UI. On the back-end, you can use a feature like skip to fetch only the needed items. The totalPost field and the current page number eliminate the dependency on hasNextPage or hasPreviousPage.
For infinite scrolling, you can use the cursor field, which can be used as the value for after in your query. On the back-end, you can use the value of cursor to retrieve the next items (the value of first). See the example of using cursor in the Relay documentation on GraphQL connections. See this answer about GraphQL connections and cursors. See this and this blog post, which will help you better understand the idea of a cursor.
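To make the cursor idea concrete, a hedged sketch against the question's Mongoose setup (it assumes the cursor encodes the last item's _id, unlike the offset cursors produced by connectionFromArray):

exports.getUserPosts = async (args, userId) => {
  const query = { author: userId }
  if (args.after) {
    // Only fetch items that come after the cursor boundary.
    query._id = { $gt: args.after }
  }
  return Post.find(query).limit(args.first).exec()
}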
What's the logic here?
Are we expected to hack it like that?
No, ideally we're not expected to hack and forget about it. That will leave technical debt in the project, which is likely to cause more problems in the long term. You may consider implementing your own function to return a connection object. You will get ideas of how to do that in the implementation of array-connection in graphql-relay-js.
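As one possible shape for such a function, a hedged sketch that fetches one extra row to derive hasNextPage without loading the whole collection (the cursor encoding is illustrative, not part of graphql-relay-js):

async function getUserPostsConnection(args, userId) {
  // Ask for one more row than requested; its presence tells us
  // whether a next page exists.
  const posts = await Post.find({ author: userId })
    .limit(args.first + 1)
    .exec()
  const hasNextPage = posts.length > args.first
  const nodes = hasNextPage ? posts.slice(0, args.first) : posts
  const edges = nodes.map(node => ({ node, cursor: node._id.toString() }))
  return {
    edges,
    pageInfo: {
      hasNextPage,
      hasPreviousPage: false, // forward-only pagination in this sketch
      startCursor: edges.length ? edges[0].cursor : null,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null
    }
  }
}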
Is it expected to return all the results?
Again, it depends on the problem.
What if I used FB DataLoader and Redis? Would that change anything about that logic?
You can use Facebook's DataLoader library to cache and batch-process your queries. Redis is another option for caching the results. If (1) you load all items using DataLoader or store all items in Redis, and (2) the items are lightweight, you can easily create an array of all items (following the KISS principle). If the items are heavyweight, creating the array may be an expensive operation.
I have a custom entity definition like:
var Card = function () {};

var cardInitializer = function (card) {
    // card.fields is defined in the metadata.
    // card._cfields is an in-memory only field
    // that breeze will not, and should not, track.
    // Thus it is being added in the initializer.
    card._cfields = card.fields.slice();
};
When the data loads from the server everything is fine. The card.fields array has the corresponding data.
EDITED: Added more info and code showing how the manager is being set up.
But when the data is round-tripped through local storage via exportEntities and importEntities, the child data defined in the metadata (represented by the card.fields property in this example) is not loaded during the initializer call (the array has length 0), though it is available on the entity after the load has completed.
Here is how the manager is being initialized:
var metadataStore = new breeze.MetadataStore();
metadataStore.importMetadata(options.metadata);

var queryOptions = new breeze.QueryOptions({
    fetchStrategy: breeze.FetchStrategy.FromLocalCache
});

var dataService = new breeze.DataService({
    serviceName: "none",
    hasServerMetadata: false
});

manager = new breeze.EntityManager({
    dataService: dataService,
    metadataStore: metadataStore,
    queryOptions: queryOptions
});

entityExtensions.registerExtensions(manager, breeze);

var entities = localStorage[storage];
if (entities && entities !== 'null') {
    manager.importEntities(entities);
}
Wow. You ask for free support from the harried developer of a free OSS product that you presumably value and then you shit on him because you think he was being flippant? And downgrade his answer.
Could you have responded more generously? Perhaps you might recognize that your question was a bit unclear. I guess that occurred to you, because you edited your question such that I can now see what you're driving at.
Two suggestions for next time. (1) Be nice. (2) Provide a running code sample that illustrates your issue.
I'll meet you halfway. I wrote a plunker that I believe demonstrates your complaint.
It shows that the navigation properties may not be wired up when importEntities calls an initializer even though the related entities are in cache.
They do appear to be wired up during query result processing when the initializer is called.
I cannot explain why they are different in this respect. I will ask.
My personal preference is to be consistent and to have the entities wired up. But it may be that there are good reasons why we don't do that or why it is indeterminate even when processing query results. I'll try to get an answer as I said.
Meanwhile, you'll have to work around this ... which you can do by processing the values returned from the import:
var imported = em2.importEntities(exported);
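A hedged sketch of that post-processing (in the Breeze versions I've worked with, importEntities returns an object whose entities array holds the imported entities; the Card-specific logic is illustrative):

imported.entities.forEach(function (entity) {
    // By this point the navigation properties are wired up, so the
    // in-memory-only field can be populated safely.
    if (entity.entityType.shortName === 'Card') {
        entity._cfields = entity.fields.slice();
    }
});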
FWIW, the documentation is silent on this question.
Look at the "Extending Entities" documentation topic again.
You will see that, by design, breeze does not know about any properties created in an initializer and therefore ignores such properties during serialization, such as entity export. This is a feature, not a limitation.
If you want breeze to "know" about an unmapped property, you must define it in the entity constructor (Card) ... even if you later populate it in the initializer function.
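A minimal sketch of that approach using the question's Card type (registerEntityTypeCtor is Breeze's standard registration hook; the empty-array default is an assumption):

var Card = function () {
    // Declared in the constructor, so breeze registers _cfields as an
    // unmapped property and round-trips it through export/import.
    this._cfields = [];
};

var cardInitializer = function (card) {
    card._cfields = card.fields.slice();
};

manager.metadataStore.registerEntityTypeCtor('Card', Card, cardInitializer);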
Again, best to look at the docs and at examples before setting out on your own.
The object extraMetadata is undefined, which throws an error on line 247 of breeze.labs.dataservice.sharepoint.js:
rawEntity.__metadata = { 'type': aspect.extraMetadata.type };
I suspect it is because I have not defined the type, found in the __metadata object, on my entity definitions for Breeze. Any suggestions on how to define my type correctly would be very welcome! Here is my type definition for one of the objects.
models.Project = {
    name: 'Project',
    defaultResourceName: 'getbytitle(\'Projects\')/items',
    dataProperties: {
        ID: {
            type: breeze.DataType.Int32
        },
        Title: {
            nullable: false
        },
        StatusId: {
            type: breeze.DataType.Int32,
            nullable: false
        },
        SelectedApproverId: {
            type: breeze.DataType.Int32,
            nullable: false
        },
        Created: {
            type: breeze.DataType.DateTime
        },
        Modified: {
            type: breeze.DataType.DateTime
        }
    },
    navigationProperties: {
        Status: {
            type: "Status",
            foreignKeyNames: ['StatusId'],
            hasMany: false
        },
        SelectedApprover: {
            type: "User",
            foreignKeyNames: ["SelectedApproverId"]
        }
    }
};
UPDATE: 11/11/2013
If I run the following query:
return breeze.EntityQuery
    .from(metadataStore.getEntityType('Project').defaultResourceName)
    .orderBy('Created desc')
    .using(manager)
    .execute()
    .then(function (data) {
        console.log(data.results);
        return data.results;
    });
the results are an array of plain JavaScript objects, not Breeze entities, and they lack a __metadata property. I'm trying to figure out why this is the case.
Update: 11/12/2014
I have confirmed that this issue presents itself when I have multiple entities defined under navigationProperties.
Please be sure you are using BreezeJS v.1.4.12 or later.
To be clear, the code to which you refer is on line 147 (not 247) of the breeze.labs.dataservice.sharepoint.js file in my possession.
It's located within _createChangeRequest, where the adapter is preparing to save a modified entity. I'll assume that you queried a Project entity, made changes to it, and are saving it back when the error occurs.
I don't believe the problem will be traced to how you defined the metadata for your Project type.
You should NOT define a __metadata property for your type. The __metadata property is something we expect SharePoint (any OData source in fact) to add to the JSON entity data that it sends to the client when you query that OData source for entities.
__metadata wouldn't be defined for results returned by a projection, but your issue concerns a modified entity, so I'm assuming that you acquired this entity through a normal query, one that did not have a select clause.
I'd like to know if you see the __metadata property in the JSON payload of a query that retrieved the entity you were modifying. Please examine the network traffic from the query request. If you don't see it there, we have to find out why the server didn't send it.
Background
The __metadata property on the JSON node is a crucial part of the contract with the SharePoint OData server. That's how the Breeze client learns about the entity's type and its etag.
Look at the jsonResultsAdapter.visitNode and updateEntityNode methods. You'll see how the adapter uses __metadata to determine the EntityType for that data. You'll also see that the adapter moves the __metadata to the adapter result's extraMetadata property. BreezeJS subsequently moves that "extra metadata" from this result object to the entity's entityAspect.extraMetadata property.
Does this seem tortured? It is tortured. OData requires extra information to be carried around with the entity (specifically the etag) without which the server simply will not update or delete the entity. We have to squirrel that info away somewhere, out of your hair, and then bring it back when we make save requests to the server. We put it on the entityAspect in keeping with that property's role as the keeper of the "entity-ness" that has nothing to do with your object's business purpose and everything to do with how it is persisted.
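For reference, a typical verbose-JSON node in a SharePoint OData response looks roughly like this (values invented for illustration):

{
    "__metadata": {
        "id": "Web/Lists/getbytitle('Projects')/items(1)",
        "uri": "https://server/_api/Web/Lists/getbytitle('Projects')/items(1)",
        "etag": "\"2\"",
        "type": "SP.Data.ProjectsListItem"
    },
    "ID": 1,
    "Title": "Sample project"
}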
So much for the why. Where is the bug?
The bug
The underlying bug is that this __metadata from the SharePoint OData source has disappeared. We don't know how it disappeared yet. But we're in big trouble without it.
The sharepoint adapter should give a better message when extraMetadata is missing. We actually look for that problem a few lines later; see adjustUpdateDeleteRequest:
var extraMetadata = aspect.extraMetadata;
if (!extraMetadata) {
    throw new Error("Missing the extra metadata for an update/delete entity");
}
That test appears too late. I'll make a note to move the test up.
But such a fix will only cause the save to fail with a better message. It won't tell you how to fix it.
So let's work on finding where the __metadata disappeared ... starting with whether it ever arrived in the first place.
I await your report.
Update 17 July 2014
I'm still waiting to hear if you are seeing the __metadata property in the payload of the response to the original entity query.
Meanwhile, I checked the OData specs (plural) for references to the __metadata property. It appears that the __metadata property has always been optional. It follows that an OData provider need not send or honor the etag ... and we know that this is possible because Web API 2 OData didn't support etags ... a defect soon to be corrected.
See the OData v.2 spec where it describes JSON format. Search for the term "__metadata".
The OData v.3 spec also calls for the __metadata property in a JSON response (at least a JSON verbose response).
But ... heavy sigh ... it appears that the __metadata property is gone from the v.4 spec and that the metadata information is supplied entirely through JSON annotations. The DataJS library (used by many but not all BreezeJS OData adapters) may map those annotations into the node's __metadata property but I can't confirm it yet. We have some work to do coping with all of these variations.
In the meantime, I think all BreezeJS OData dataservice adapters should take a more defensive position regarding extra metadata and simply ignore the omission rather than throw exceptions.
We'll make these defensive changes very soon.
Of course the server will reject your update or delete request if the OData service actually requires an etag or other metadata. I don't know what we can do about that just yet.
There hasn't been a post in a while, but I am going to share what I found the problem to be and how I resolved it (because it took me a long time).
Basically the breeze.labs.dataservice.sharepoint adapter has a function serverTypeNameToClientDefault() that expects the SharePoint custom list type, as returned by REST/OData in the __metadata "type" field, to be in the exact format:
SP.Data.<entityname>sListItem (notice the "sListItem" suffix; e.g. SP.Data.CustomersListItem)
This function does a string regex to extract the Breeze entity name from the SharePoint type and uses that name to look up the entity in the metadata store ("Customer" in the above example). If there is no match, Breeze will not find your entity and will return a basic object instead of a Breeze entity. Therefore your REST JSON result returned from SharePoint, even though it does have the __metadata property, is not converted into a Breeze entity containing the entityAspect.extraMetadata property, among other things. This is what leads to the error "Unable to get property 'type' of undefined or null reference".
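Paraphrased, the name mapping behaves roughly like this (a hedged sketch, not the adapter's actual code):

function serverTypeNameToClientDefault(serverTypeName) {
    // "SP.Data.CustomersListItem" -> "Customer"
    var match = /^SP\.Data\.(.+)sListItem$/.exec(serverTypeName);
    return match ? match[1] : null;
}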
For my solution, since in my case I don't care that much what the URLs of my custom lists are, I just made sure that when my custom list was provisioned by SharePoint it resulted in a name matching what Breeze expects. You accomplish this by setting the Url attribute of the ListInstance element like this:
<!-- Note: Url="Lists/Customer" (singular) will not work -->
<ListInstance
    Title="My Customers"
    OnQuickLaunch="TRUE"
    TemplateType="10000"
    Url="Lists/Customers"
    Description="My List Instance">
    ...
The better solution would be to make the serverTypeNameToClientDefault() function more robust, or fix it to my needs locally, but hopefully this can be addressed in a future version of the adapter.
Note that I have tested this solution with the following configurations (not all dependencies listed):
Breeze.Client 1.4.9 with Breeze.DataService.SharePoint 0.2.3
Breeze.Client 1.5.0 with Breeze.DataService.SharePoint 0.3.2
Also note that the 0.3.2 version of the adapter now displays a better error message when this happens as mentioned above -- "Missing the extra metadata for an update/delete entity"; but it doesn't fix the problem.
Hope this helps someone.
For Breeze v1.4.14 and breeze labs sharepoint 2013 v0.2.3, I am using a small fix in the file breeze.labs.dataservice.sharepoint.js.
At the end of the function
function visitNode(node, mappingContext, nodeContext)
just before
return result;
I simply set the extraMetadata property like this:
result.extraMetadata = node.__metadata;
This seems to fix the problem when I try to save a modified entity back to SharePoint.
Sorry folks for the long overdue aspect of this, but I got the bug with the extra "s" resolved today... FINALLY. You can track the issue here: https://github.com/andrewconnell/breeze.js.labs/issues/6
This all stemmed from a very incorrect assumption I made. It's been fixed in version 0.6.2 of the data service adapter for SharePoint. Note that you MUST use the same name for your entity when creating it in the metadata store as the list where the data is coming from.
I resolved my issue with multiple navigationProperties on an entity by editing line 319 of breeze.labs.dataservice.sharepoint.js v.0.10.0
I changed:
if (entityType._mappedPropertiesCount <= Object.keys(node).length - 1)
to:
if (entityType.dataProperties.length <= Object.keys(node).length - 1)
It looks like _mappedPropertiesCount includes the navigationProperties count too, i.e. dataProperties.length + navigationProperties.length.
The query node was then thought not to contain a full set of properties for the entity (it was assumed to be the result of a partial projection).
It therefore wasn't being treated as an entity, its metadata wasn't being set, and it ultimately wasn't being added to the cache.
It worked with only one navigationProperty because there were two extra items in Object.keys(node): __metadata and ID. So it would still pass the test with one navigationProperty, but not with two or more.
I have a view model that will always create a new entity (Score); it doesn't need to wait for nor query the repository to know this.
I (currently) want to create the new entity as the page loads and use it to populate the (KnockoutJS) view model.
I believe that the entity manager lazily populates the metadata and have spoofed the behavior I want by making an unnecessary query purely to force the metadata population. The API docs don't cover this:
http://www.breezejs.com/sites/all/apidocs/classes/EntityManager.html#property_metadataStore
Question
Is there a way to force the manager to populate the metadata without issuing a redundant query?
Here's the spoofed code flattened to show its intent:
manager.executeQuery(redundantQuery).then(function (data) {
    var viewScore = manager.metadataStore.getEntityType("Score").createEntity();
    viewScore.ID(breeze.core.getUuid());
    viewScore.Value(57);
    ko.applyBindings(viewScore, $ViewScore);
}).fail(function (e) {
    ...
})
I'd be happy with:
manager.metadataStore.then(function() {
})...
Okay, lesson learned.... try it before asking.
This actually works:
manager.metadataStore.fetchMetadata(manager.dataService).then(function () {
    var viewScore = manager.metadataStore.getEntityType("Score").createEntity();
    viewScore.ID(breeze.core.getUuid());
    viewScore.Value(57);
    ko.applyBindings(viewScore, $ViewScore);
}).fail(function (e) {
    ...
});
My only remaining question is: why is it necessary to pass manager.dataService?
A single MetadataStore can hold metadata for multiple DataServices. The manager.dataService is simply the 'default' DataService for an EntityManager; there may be others.
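To illustrate why the argument matters, a hedged sketch (the service names are invented):

// One MetadataStore shared by managers that talk to different services.
var store = new breeze.MetadataStore();
var ordersManager = new breeze.EntityManager({
    serviceName: "api/orders",
    metadataStore: store
});
var billingManager = new breeze.EntityManager({
    serviceName: "api/billing",
    metadataStore: store
});
// The store alone can't know which service's metadata you want, hence:
store.fetchMetadata(ordersManager.dataService).then(function () {
    // metadata for api/orders is now in the shared store
});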