I'm an experienced developer, but rather new to GraphQL.
I use this Neo4j tutorial as a basis for orientation, and I am developing in TypeScript (backend and frontend).
Can anyone tell me how to rename the endpoints generated by @neo4j/graphql? The corresponding docs don't help with that.
I use the following GraphQL schema for demonstration (it's German: "Person" means "person" and "Tier" means "animal"):
type Person {
  name: String
}

type Tier {
  name: String
}
I first process the schema using the gql function (from apollo-server); the result of that looks OK.
Afterwards, when I call const neoSchema = new Neo4jGraphQL({ typeDefs, driver: drv }) (with drv being my Neo4j driver), the resulting neoSchema.schema shows some unexpected naming in the generated endpoints, e.g.:
CreatePeopleMutationResponse: CreatePeopleMutationResponse,
UpdatePeopleMutationResponse: UpdatePeopleMutationResponse,
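For completeness, this is roughly my setup (a sketch; the connection details are placeholders, everything else is as described above):

import { gql } from "apollo-server";
import { Neo4jGraphQL } from "@neo4j/graphql";
import neo4j from "neo4j-driver";

// The demo schema from above.
const typeDefs = gql`
  type Person { name: String }
  type Tier { name: String }
`;

// Placeholder connection details.
const drv = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "password"));

const neoSchema = new Neo4jGraphQL({ typeDefs, driver: drv });
// Inspecting neoSchema.schema now shows the generated names,
// e.g. CreatePeopleMutationResponse and UpdatePeopleMutationResponse.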
Somehow the German "Person" (plural: "Personen") got implicitly pluralised to "people". I can also think of some "false friends" where this effect would be even worse.
The API should be in my own language (following DDD), so that's a smell I want to fix by renaming, also with API users and generated clients in mind. How is that possible?
Thank you!
I have the following structure in my RTDB (I'm using TypeScript interface notation to communicate the structure):
interface MyDB {
  customers: {
    [id: string]: {
      firstName: string;
      lastName: string;
    };
  };
  projects: {
    [id: string]: {
      created: string;
      customerId: string;
      phase: string;
    };
  };
}
Given that I have two "tables" or document nodes, I'm not certain what the correct format for getting a project, as well as its associated customer, should be.
I was thinking this:
db.ref('projects').once('value', projectsSnap => {
  db.ref('customers').once('value', customersSnap => {
    const project = projectsSnap.val()[SOME_PROJECT_ID];
    const customer = customersSnap.val()[project.customerId];
    // Proceed to do cool stuff with our customer and project...
  });
});
Now, there are plenty of ways to express this. To be honest, I did it this way in this example for simplicity; I would actually not serialize the db.ref calls - I would combine them and have them go out in parallel (see the sketch below), but that doesn't really matter, because the inner code wouldn't change.
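For concreteness, this is roughly what I mean by going out in parallel (a sketch using the promise form of once(); SOME_PROJECT_ID is a placeholder as above):

Promise.all([
  db.ref('projects').once('value'),
  db.ref('customers').once('value'),
]).then(([projectsSnap, customersSnap]) => {
  const project = projectsSnap.val()[SOME_PROJECT_ID];
  const customer = customersSnap.val()[project.customerId];
  // Proceed to do cool stuff with our customer and project...
});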
My question is -- is this how one is expected to handle multi-document lookups that need to be joined in the Realtime Database, or is there a "better", more "RTDB-y" way of doing it?
The issue I see here is that, as I understand it, we're selecting ALL projects and ALL customers. If I want to only get customers that have associated projects, is there a more efficient way to do that? I have seen that you might want to track project IDs on each customer and filter there. But I'm not sure of the best way to track multiple project IDs (as a string with some kind of separator, or is there an array-search function, etc.?).
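For reference, the "track project IDs on each customer" idea would, as far as I can tell, look like this in my interface notation (a keyed map of flags rather than a delimited string or an array; the projects field name is my own invention):

interface CustomerWithProjectIndex {
  firstName: string;
  lastName: string;
  // Fan-out index: one entry per associated project ID. Presence of the
  // key is what matters, since RTDB has no array-search query.
  projects?: { [projectId: string]: true };
}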
Thanks
Firebase Realtime Database doesn't offer any type of SQL-like join. If you have two locations to read, it requires two queries. How you do those two queries is entirely up to you. The database and its SDK are not opinionated about how you do that. If what you have works, then go with it.
Our system consists of multiple microservices that emit and consume events encoded in Avro format (see the schemas at the bottom). A particular use case is the following: Service A emits an event (of type InvoiceEvents) on topic T1, and Services B and C (different dev teams) consume from T1. E.g. Service B is part of the Tax team, while Service C is part of the Product Fulfilment team.
I was expecting the following to be true (but it seems not to be):
1. The schema could evolve from version 1 (v1) to version 2 (v2) by adding a new type to the union (i.e. InvoiceCreated for the "payload" field) - check out the sample schemas at the bottom.
2. The producing Service A upgrades to v2 (i.e. starts producing events that follow v2).
3. Some consuming services (e.g. Service C) could still use v1, as they are not interested in the new event type (i.e. InvoiceCreated). In this case, the "payload" field would fall back to its default (null) value when de-serialised.
4. Eventually, and only if required for business reasons, Service C can upgrade to v2 if there is a requirement to react to the new event type (i.e. InvoiceCreated).
But Service C cannot de-serialize the new events of type InvoiceCreated. Specifically, it throws:
org.apache.avro.AvroTypeException: Found com.elsevier.q2c.schema.avro.invoice.InvoiceCreated, expecting union
    at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:292)
    at ...
Are Avro union types not forward compatible (as described above)? Are they only backward compatible, as implied by the Confluent Schema Registry tests? What is the proposed way to avoid this coupling of microservices? I guess Avro unions cannot be used...
Thanks!!
Related link with no definite answer: Avro-union-compatibility-mode-enhancement-proposal
schema v1:
[
  ...
  {
    "type": "record",
    "name": "InvoiceEvents",
    "namespace": "bla.bla.schema.avro.invoice",
    "fields": [
      {
        "name": "payload",
        "type": [
          "null",
          "bla.bla.schema.avro.invoice.InvoiceDrafted"
        ],
        "default": null
      }
    ]
  }
]
schema v2 (added a new union type: InvoiceCreated):
[
  ...
  {
    "type": "record",
    "name": "InvoiceEvents",
    "namespace": "bla.bla.schema.avro.invoice",
    "fields": [
      {
        "name": "payload",
        "type": [
          "null",
          "bla.bla.schema.avro.invoice.InvoiceDrafted",
          "bla.bla.schema.avro.invoice.InvoiceCreated"
        ],
        "default": null
      }
    ]
  }
]
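To see the failure outside the JVM, here is a minimal sketch using the avsc npm package (the invoiceId field and the record shapes are made up for the demo; the real schemas are larger):

import * as avro from "avsc";

// Minimal stand-ins for the two payload record types.
const drafted = { type: "record", name: "InvoiceDrafted", fields: [{ name: "invoiceId", type: "string" }] };
const created = { type: "record", name: "InvoiceCreated", fields: [{ name: "invoiceId", type: "string" }] };

// v1 knows only InvoiceDrafted; v2 adds InvoiceCreated to the union.
const v1 = avro.Type.forSchema({ type: "record", name: "InvoiceEvents", fields: [{ name: "payload", type: ["null", drafted], default: null }] } as any);
const v2 = avro.Type.forSchema({ type: "record", name: "InvoiceEvents", fields: [{ name: "payload", type: ["null", drafted, created], default: null }] } as any);

// A v2 writer emits the new union branch...
const buf = v2.toBuffer({ payload: { InvoiceCreated: { invoiceId: "42" } } });

try {
  // ...and a v1 reader cannot resolve it: the InvoiceCreated branch has no
  // counterpart in v1's union, so resolution fails (at createResolver or at
  // fromBuffer, depending on where the mismatch is detected) instead of
  // falling back to the null default.
  const resolver = v1.createResolver(v2);
  v1.fromBuffer(buf, resolver);
} catch (err) {
  console.error(err);
}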
After some thought, we will probably go for option 3, as not skipping/losing events is more important to the project than decoupling:
1. Handle the exception in a custom deserialiser and skip the event (may lose interesting events - to avoid losing events, all consuming services would have to be upgraded before all producing services).
2. Convert all custom record unions to separate optional fields (may lose interesting events, as the change is forward compatible and consuming services will not block).
3. Accept the de-serialisation error / block consumption, and bump the version in all consuming services that use the schema whenever a new custom record type is added (this guarantees that no interesting event is lost).
Please comment if there is a better option out there and I have missed it!
UPDATE:
It seems that option (2) is now possible in a much cleaner way, as you can now have multi-type topics (https://github.com/confluentinc/schema-registry/pull/680). This means that a topic can carry different value types (e.g. InvoiceCreated, InvoiceEdited, ...) without using an Avro union, while each type gets its own evolution line!
The extraMetadata object is undefined and throws an error on line 247 of breeze.labs.dataservice.sharepoint.js:
rawEntity.__metadata = { 'type': aspect.extraMetadata.type };
I suspect it is because I have not defined, in my Breeze entity definitions, the type found in the __metadata object. Any suggestions on how to define my type correctly would be very welcome! Here is my type definition for one of the objects:
models.Project = {
  name: 'Project',
  defaultResourceName: 'getbytitle(\'Projects\')/items',
  dataProperties: {
    ID: {
      type: breeze.DataType.Int32
    },
    Title: {
      nullable: false
    },
    StatusId: {
      type: breeze.DataType.Int32,
      nullable: false
    },
    SelectedApproverId: {
      type: breeze.DataType.Int32,
      nullable: false
    },
    Created: {
      type: breeze.DataType.DateTime
    },
    Modified: {
      type: breeze.DataType.DateTime
    }
  },
  navigationProperties: {
    Status: {
      type: "Status",
      foreignKeyNames: ['StatusId'],
      hasMany: false
    },
    SelectedApprover: {
      type: "User",
      foreignKeyNames: ["SelectedApproverId"]
    }
  }
};
UPDATE: 11/11/2013
If I run the following query:
return breeze.EntityQuery
  .from(metadataStore.getEntityType('Project').defaultResourceName)
  .orderBy('Created desc')
  .using(manager)
  .execute()
  .then(function (data) {
    console.log(data.results);
    return data.results;
  });
the results are an array of plain JavaScript objects, not Breeze entities, and they lack a __metadata property. I'm trying to figure out why this is the case.
Update: 11/12/2014
I have confirmed that this issue presents itself when I have multiple entities defined under navigationProperties.
Please be sure you are using BreezeJS v.1.4.12 or later.
To be clear, the code to which you refer is on line 147 (not 247) of the breeze.labs.dataservice.sharepoint.js file in my possession.
It's located within _createChangeRequest, where the adapter is preparing to save a modified entity. I'll assume that you have queried a Project entity, made changes to it, and are saving it back when the error occurs.
I don't believe the problem will be traced to how you defined the metadata for your Project type.
You should NOT define a __metadata property for your type. The __metadata property is something we expect SharePoint (any OData source in fact) to add to the JSON entity data that it sends to the client when you query that OData source for entities.
__metadata wouldn't be defined for results returned by a projection, but your issue concerns a modified entity, so I'm assuming that you acquired this entity through a normal query ... one that did not have a select clause.
I'd like to know if you see the __metadata property in the JSON payload of a query that retrieved the entity you were modifying. Please examine the network traffic from the query request. If you don't see it there, we have to find out why the server didn't send it.
Background
The __metadata property on the JSON node is a crucial part of the contract with the SharePoint OData server. That's how the Breeze client learns about the entity's type and its etag.
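For orientation, the node shape in question looks roughly like this (the values here are made up):

const node = {
  __metadata: {
    uri: ".../_api/web/lists/getbytitle('Projects')/items(1)",
    etag: "\"1\"",
    type: "SP.Data.ProjectsListItem" // how the adapter infers the EntityType
  },
  ID: 1,
  Title: "Sample project"
};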
Look at the jsonResultsAdapter.visitNode and updateEntityNode methods. You'll see how the adapter uses __metadata to determine the EntityType for that data. You'll also see that the adapter moves the __metadata to the adapter result's extraMetadata property. BreezeJS subsequently moves that "extra metadata" from this result object to the entity's entityAspect.extraMetadata property.
Does this seem tortured? It is tortured. OData requires extra information to be carried around with the entity (specifically the etag) without which the server simply will not update or delete the entity. We have to squirrel that info away somewhere, out of your hair, and then bring it back when we make save requests to the server. We put it on the entityAspect in keeping with that property's role as the keeper of the "entity-ness" that has nothing to do with your object's business purpose and everything to do with how it is persisted.
So much for the why. Where is the bug?
The bug
The underlying bug is that this __metadata from the SharePoint OData source has disappeared. We don't know how it disappeared yet. But we're in big trouble without it.
The sharepoint adapter should give a better message when extraMetadata is missing. We actually look for that problem a few lines later; see adjustUpdateDeleteRequest:
var extraMetadata = aspect.extraMetadata;
if (!extraMetadata) {
    throw new Error("Missing the extra metadata for an update/delete entity");
}
That test appears too late. I'll make a note to move the test up.
But such a fix will only cause the save to fail with a better message. It won't tell you how to fix it.
So let's work on finding where the __metadata disappeared ... starting with whether it ever arrived in the first place.
I await your report.
Update 17 July 2014
I'm still waiting to hear if you are seeing the __metadata property in the payload of the response to the original entity query.
Meanwhile, I checked the OData specs (plural) for references to the __metadata property. It appears that the __metadata property has always been optional. It follows that an OData provider need not send or honor the etag ... and we know that this is possible because Web API 2 OData didn't support etags ... a defect soon to be corrected.
See the OData v.2 spec where it describes JSON format. Search for the term "__metadata".
The OData v.3 spec also calls for the __metadata property in a JSON response (at least a JSON verbose response).
But ... heavy sigh ... it appears that the __metadata property is gone from the v.4 spec and that the metadata information is supplied entirely through JSON annotations. The DataJS library (used by many but not all BreezeJS OData adapters) may map those annotations into the node's __metadata property but I can't confirm it yet. We have some work to do coping with all of these variations.
In the meanwhile, I think all BreezeJS OData dataservice adapters should take a more defensive position regarding extra metadata and should simply ignore the omission rather than throw exceptions.
We'll make these defensive changes very soon.
Of course the server will reject your update or delete request if the OData service actually requires an etag or other metadata. I don't know what we can do about that just yet.
There hasn't been a post here in a while, but I'm going to share what I found to be the problem and how I resolved it (because it took me a long time).
Basically, the breeze.labs.dataservice.sharepoint adapter has a function serverTypeNameToClientDefault() that expects the SharePoint custom list type, as returned by REST/OData in the __metadata "type" field, to be in the exact format:
SP.Data.<mylistname>sListItem (note the "s" appended to the list name; e.g. SP.Data.CustomersListItem)
This function does a string regex to extract the Breeze entity name from the SharePoint type and uses that name to look up the entity in the metadata store ("Customer" in the above example). If there is no match, Breeze will not find your entity and will return a basic object instead of a Breeze entity. So your REST JSON result returned from SharePoint, even though it does have the __metadata property, is not converted into a Breeze entity carrying the entityAspect.extraMetadata property, among other things. This is what leads to the error "Unable to get property 'type' of undefined or null reference".
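To illustrate the mapping (this is a sketch of the behaviour described above, not the adapter's actual code):

// "SP.Data.CustomersListItem" -> "Customer"; anything that doesn't match
// the pattern yields no entity name, so Breeze returns a plain object.
function serverTypeNameToClientDefault(serverTypeName: string): string | null {
  const match = /^SP\.Data\.(.*)sListItem$/.exec(serverTypeName);
  return match ? match[1] : null;
}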
For my solution, since in my case I don't care much what the URLs of my custom lists are, I just made sure that when my custom list was provisioned by SharePoint, it resulted in a name matching what Breeze expects. You accomplish this by setting the Url attribute of the ListInstance element like this:
<!-- Url="Lists/Customer" (singular) would not work -->
<ListInstance
    Title="My Customers"
    OnQuickLaunch="TRUE"
    TemplateType="10000"
    Url="Lists/Customers"
    Description="My List Instance">
    ...
The better solution would be to make the serverTypeNameToClientDefault() function more robust, or to fix it to my needs locally, but hopefully this can be addressed in a future version of the adapter.
Note that I have tested this solution with the following configurations (not all dependencies listed):
Breeze.Client 1.4.9 with Breeze.DataService.SharePoint 0.2.3
Breeze.Client 1.5.0 with Breeze.DataService.SharePoint 0.3.2
Also note that the 0.3.2 version of the adapter now displays a better error message when this happens as mentioned above -- "Missing the extra metadata for an update/delete entity"; but it doesn't fix the problem.
Hope this helps someone.
For Breeze v1.4.14 and Breeze Labs SharePoint 2013 v0.2.3, I am using a small fix in breeze.labs.dataservice.sharepoint.js.
At the end of the function
function visitNode(node, mappingContext, nodeContext)
just before
return result;
I simply set the extraMetadata property like this:
result.extraMetadata = node.__metadata;
This seems to fix the problem when I try to save a modified entity back to SharePoint.
Sorry folks for the long overdue aspect of this, but I got the bug with the extra "s" resolved today... FINALLY. You can track the issue here: https://github.com/andrewconnell/breeze.js.labs/issues/6
This all stemmed from a very incorrect assumption I made. It's been fixed in version 0.6.2 of the data service adapter for SharePoint. Note that you MUST use the same name for your entity when creating it in the metadata store as the list where the data is coming from.
I resolved my issue with multiple navigationProperties on an entity by editing line 319 of breeze.labs.dataservice.sharepoint.js v.0.10.0
I changed:
if (entityType._mappedPropertiesCount <= Object.keys(node).length - 1)
to:
if (entityType.dataProperties.length <= Object.keys(node).length - 1)
It looks like _mappedPropertiesCount includes the navigationProperties count too, i.e. dataProperties.length + navigationProperties.length.
The query node was then thought to not contain a full set of properties for the entity (it was assumed to be the result of a partial projection).
It therefore wasn't being treated as an entity, its metadata wasn't being set, and it ultimately wasn't being added to the cache.
It worked with only one navigationProperty because there were two extra items in Object.keys(node), __metadata and ID. So the node would still pass the test with one navigationProperty, but not with two or more.
I apologize if I'm missing something really obvious here but I've been pulling my hair out with this issue.
I have a command object:
class MyCommand {
    Long id
    String value
}
I bind to this in my controller:
public update(MyCommand myCmd) {
}
Everything is fine in this scenario. Now I'm trying to add the version field, which is passed in the request parameters to the command object:
class MyCommand {
    Long id
    Long version
    String value
}
Now, however, when the binding happens, the id and version are always null, even though they are present in the params object.
I suspected that there may be some special handling for the id/version attributes related to how Grails handles optimistic locking (which is ultimately why I'm doing this), but the issue is present on the command object independent of any domain object.
I'm baffled why this is not working. Is there some special case when version is present on a command object?
It seems this is by design, per Jeff Brown in the JIRA:
The data binding explicitly avoids binding id or version [if] they both
exist and does this by design. This is a shield against potential
security problems relevant to data binding as it relates to domain
classes. A simple work around for command objects would be to name the
properties with something like "idValue" and "versionValue" or
anything other than "id" and "version".
I'm working on an application at the moment in ASP.NET MVC which has a number of look-up tables, all of the form
LookUp {
    Id
    Text
}
As you can see, this just maps the Id to a textual value. These are used for things such as Colours. I now have a number of these, currently 6 and probably soon to be more.
I'm trying to put together an API that can be used via AJAX to allow the user to add/list/remove values from these lookup tables, so for example I could have something like:
http://example.com/Attributes/Colours/[List/Add/Delete]
My current problem is that clearly, regardless of which lookup table I'm using, everything else happens exactly the same. So really there should be no repetition of code whatsoever.
I currently have a custom route which points to an 'AttributeController', which figures out the attribute/look-up table in question based upon the URL (i.e. http://example.com/Attributes/Colours/List would want the 'Colours' table). I pass the attribute ('Colours' - a string) and the operation (List/Add/Delete), as well as any other parameters required (say "Red" if I want to add red to the list), back to my repository, where the actual work is performed.
Things start getting messy here: at the moment I've resorted to a switch/case on the attribute string, which then grabs the LINQ-to-SQL entity corresponding to the particular lookup table. I find this pretty dirty though, as I find myself having to write the same operations on each of the look-up entities, ugh!
What I'd really like to do is have some sort of mapping, which I could simply pass in the attribute name and get out some form of generic lookup object, which I could perform the desired operations on without having to care about type.
Is there some way to do this with my LINQ-to-SQL entities? I've tried making them implement a basic interface (IAttribute), which simply specifies the Id/Text properties; however, doing things like this fails:
System.Data.Linq.Table<IAttribute> table = GetAttribute("Colours");
As I cannot convert System.Data.Linq.Table<Colour> to System.Data.Linq.Table<IAttribute>.
Is there a way to make these look-up tables 'generic'?
Apologies that this is a bit of a brain-dump. There's surely information missing here, so just let me know if you'd like any further details. Cheers!
You have two options:
1. Use Expression Trees to dynamically create your lambda expression
2. Use Dynamic LINQ as detailed on Scott Gu's blog
I've looked at both options and have successfully implemented Expression Trees as my preferred approach.
Here's an example function that I created (NOT TESTED):
private static bool ValueExists<T>(string value) where T : class
{
    // Builds the predicate p => p.ColumnName == value
    // ("ColumnName" is the property you want to match on).
    ParameterExpression pe = Expression.Parameter(typeof(T), "p");
    Expression body = Expression.Equal(Expression.Property(pe, "ColumnName"), Expression.Constant(value));
    Expression<Func<T, bool>> predicate = Expression.Lambda<Func<T, bool>>(body, pe);
    return MyDataContext.GetTable<T>().Where(predicate).Count() > 0;
}
Instead of using a switch statement, you can use a lookup dictionary. This is pseudocode-ish, but it is one way to get the table in question. You'll have to maintain the dictionary manually, but that should be much easier than a switch.
It looks like the DataContext.GetTable() method could be the answer to your problem. You can get a table if you know the type of the linq entity that you want to operate upon.
Dictionary<string, Type> lookupDict = new Dictionary<string, Type>
{
    { "Colour", typeof(MatchingLinqEntity) }
    // ...
};
Type entityType = lookupDict[AttributeFromRouteValue];
YourDataContext db = new YourDataContext();
// Note: the non-generic GetTable(Type) returns an ITable, so cast its rows
// (e.g. via .Cast<IAttribute>()) before querying by Id.
var entityTable = db.GetTable(entityType);
var entity = entityTable.Cast<IAttribute>().Single(x => x.Id == IdFromRouteValue);
// or whatever operations you need
db.SubmitChanges();
The Suteki Shop project has some very slick work in it. You could look into their implementation of IRepository<T> and IRepositoryResolver for a generic repository pattern. This really works well with an IoC container, but you could create them manually with reflection if the performance is acceptable. I'd use this route if you have or can add an IoC container to the project. You need to make sure your IoC container supports open generics if you go this route, but I'm pretty sure all the major players do.