I am wondering what the best way is to perform an "in" clause query with Breeze. I've used a series of 'column = blah1 or column = blah2', etc. I've also made a named server method that accepts an array of parameters and called it using the withParameters syntax. The problem I'm facing is that when the list gets long, I can bump up against the HTTP GET querystring length limit.
Is there a better way to perform "in" clause queries with Breeze, and what is the best way to deal with long lists of terms when performing these queries (is HTTP POST possible when querying)?
Thanks for your time,
Mathias
It's a good question. We are looking at providing a simplified query mechanism that uses POST instead of GET for exactly these reasons. This will probably be available within the next release or two.
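In the meantime, one workaround is to bypass the querystring entirely with a plain Web API action that accepts the term list in a POST body. A minimal sketch, assuming an EF-backed Web API controller; GetByIds, Jobs, and _context are illustrative names, not part of Breeze:

[HttpPost]
public IQueryable<Job> GetByIds([FromBody] int[] ids)
{
    // The list travels in the request body, so the GET querystring
    // length limit no longer applies; EF translates Contains to SQL IN.
    return _context.Jobs.Where(j => ids.Contains(j.Id));
}

The client can call such an endpoint with an ordinary AJAX POST until native POST query support ships.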
I think I'm doing something wrong, but I noticed that sometimes it is better to split a single query into two separate queries and pass the result of the first one as a parameter to the second. Does this make sense? Is there any magic under the hood for external parameters vs. a query variable passed from one part of the same query to another?
Yes, it makes sense, although you can chain parts of your queries together using WITH. When you do that, make sure that you watch the cardinality, especially when you have UNWIND clauses.
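For illustration, a minimal sketch of WITH chaining (the labels and properties are made up); collect() pins the cardinality to a single row before the second half of the query runs:

MATCH (u:User {name: 'alice'})-[:FRIEND]->(f)
WITH collect(f) AS friends        // aggregates down to one row here
UNWIND friends AS friend          // back to one row per friend
MATCH (friend)-[:LIKES]->(p:Product)
RETURN p.name, count(*) AS likes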
When you decide to use multiple queries, and depending on their type (WRITE vs. READ), it may be wise to ensure that WRITE queries are all part of the same transaction. See https://neo4j.com/docs/cypher-manual/4.0/introduction/transactions/
I am looking for an OData query syntax that can express Sum(DateDiff(minute, StartDate, EndDate)), which we do in SQL Server. Is it possible to do such things using OData v4?
I tried the aggregate function but was not able to use the sum operator on the duration type. Any idea?
You can't execute a query like that directly in a standards-compliant v4 service, as the built-in aggregates all operate on single fields; for instance, there is no support for creating a new arbitrary column to project the results into. This is mainly because the new column would be undefined; by restricting the specification to columns that are pre-defined in the resource itself, we can have a strong level of certainty about the structure of the data that will be returned.
If you are the author of the API, there are three common approaches that can achieve a query similar to your request.
Define a Custom Data Aggregate. This is far more involved than is necessary here, but it means you could define the aggregate once and use it in many resource queries.
Only research this solution if you truly need to reuse the same aggregate on multiple resources.
Define a Custom Function to compute the result of all or some elements in your query.
Think of a Function as similar to a SQL view; it is really just a way of expressing a custom query and a custom response object that is associated with a resource.
It is common to use Functions to apply complex filter conditions that still return the resource that they are bound to, but you can return an entirely different structure of data if you want.
Exploit Open Types. This can sometimes be more effort than you expect, but it can be manageable if there is only a small number of common transformations you want to apply to the resource, projecting their results as discrete properties in addition to the standard resource definition.
In your case you could project DateDiff(minute, StartDate, EndDate) into its own discrete column, perhaps called Minutes or Duration. Then you could $apply a simple SUM across this new field.
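For example, against a hypothetical Sessions resource that exposes such a Minutes property, the aggregate would look something like:

GET /odata/Sessions?$apply=aggregate(Minutes with sum as TotalMinutes)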
Exposing a custom Function is usually the least-effort approach because you are not constrained by the shape of the result at all, and it can be maintained in relative isolation from the main resource. As with Open Types, the useful thing about Functions is that the caller can still apply OData aggregates to the result of the Function.
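To make that concrete, here is a minimal sketch of the Function approach using ASP.NET Web API OData and EF6; the Session entity, its date properties, and the TotalMinutes name are assumptions for illustration:

// Model configuration (using System.Web.OData.Builder):
builder.EntityType<Session>().Collection
       .Function("TotalMinutes")
       .Returns<int>();

// Controller action (using System.Data.Entity for DbFunctions).
// Invoked by convention as: GET /odata/Sessions/Default.TotalMinutes()
[HttpGet]
public IHttpActionResult TotalMinutes()
{
    // SQL Server evaluates this as SUM(DATEDIFF(minute, StartDate, EndDate)).
    var total = _db.Sessions
        .Sum(s => DbFunctions.DiffMinutes(s.StartDate, s.EndDate)) ?? 0;
    return Ok(total);
}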
If the original post is updated with some more detailed code examples, I can elaborate on the function implementation; in the meantime, I hope this information sets you on the right path.
I'd like to know just what the title says.
The reason I'd want this is to permit constrained read-only cypher queries to be executed; the data results would later be interpreted and serialized by a separate API layer.
I've seen code that makes basic assumptions in an attempt to mimic this behavior, e.g. the code might filter out any Cypher query that contains certain special words associated with write query structures (merge, create, delete, set, and so on).
This approach tends to be limited and naive though; if it very simply looks for those tokens, it would prevent a query like MATCH n WHERE n.label =~ '.*create.*' RETURN n even though it's a read-only query.
I'd really prefer not to do a full parse on a candidate query and then descend through the AST trying to figure out whether something is read-only or not (although I would gladly accept an answer that shows how to do this easily in Java).
EDIT - I'm aware it's possible to start the entire database in read-only mode via the configuration property read_only=true, but this would be undesirable; no other aspect of the Java API would be able to change the database.
EDIT 2 - I found another possible strategy, but I'm not sure of its advisability. Comments welcome on this, and potential downsides:
try (Transaction ignore = graphDb.beginTx()) {
    ExecutionResult result = executionEngine.execute(query);
    // Do nifty stuff with result, then...
    // Force transaction to fail.
    ignore.failure();
}
The idea here is that if queries happen within transactions and the transaction is always force-failed, then nothing can ever be written to the DB no matter what the result.
Read-only Cypher is not (yet) directly supported. However, I can think of two workarounds:
1) Assuming you're running a Neo4j enterprise cluster: you can set read_only=true on one instance. That instance is then used for the read-only queries, while the other cluster instances are used for read/write. A load balancer in front of the cluster can be set up to send the requests to the right instance.
2) Use a TransactionEventHandler that vetoes a transaction if its TransactionData contains write operations. Just for fun, I've spent a few minutes implementing that; see https://github.com/sarmbruster/read-only-cypher - feedback is appreciated.
I am using the Neo4j .NET Client's ExecuteGetCypherResults to run Cypher. It expects everything to come back in a single column. I have a simple class, JobType, which contains a list of JobSpecialties. In the database this is modeled as the Types having a relationship to the Specialties.
I need a Cypher query that returns the results as such, in a single column, with the related Specialties as a child property of the Type node. I would expect the query to look like this:
start s=node:node_auto_index(StartType='JobTypes')
match s-[:starts]->t, t-[:SubTypes]->ts
return {Id: t.Id, Name: t.Name, JobSpecialties: ts}
But this doesn't work, and I can't figure out from the docs if it's even possible. If there is a better way to get the result back to the .NET client, I am open to suggestions. A flattened, multi-column version would look like this:
start s=node:node_auto_index(StartType='JobTypes')
match s-[:SubTypes]->js
return s.Id, s.Name, js;
ExecuteGetCypherResults does support multiple columns, you just need to kick our deserializer into a different mode. This is an implementation detail generally hidden behind our higher level APIs, which is why this isn't obvious.
When you call new CypherQuery, pass CypherResultMode.Projection instead of CypherResultMode.Set.
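A minimal sketch against the older low-level API (the Row class and the column aliases are illustrative):

public class Row
{
    public long Id { get; set; }
    public string Name { get; set; }
    public Node<JobSpecialty> Specialty { get; set; }
}

var query = new CypherQuery(
    "start s=node:node_auto_index(StartType='JobTypes') " +
    "match s-[:SubTypes]->js " +
    "return s.Id as Id, s.Name as Name, js as Specialty",
    new Dictionary<string, object>(),
    CypherResultMode.Projection);   // instead of CypherResultMode.Set

// Each result row is deserialized onto the POCO, one property per column.
var rows = graphClient.ExecuteGetCypherResults<Row>(query);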
I actually can't remember why we have this. Sometime, I'll need to dig through the lower levels and try and kill it. Pull requests welcomed. :)
As a preference though, we always prefer people to use the higher level APIs (but we recognise there are some limitations).
It sounds like the .NET client needs some updating for Cypher. Cypher doesn't support building maps on the fly yet, although it is already on the feature request list...
You can create an array with your results (but as of 1.9.M04, they need to be the same type to be merged into the array):
http://console.neo4j.org/r/xo7voi
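Along those lines, one way to get one row per type is to collect() the specialties into an array (property names assumed from the question):

start s=node:node_auto_index(StartType='JobTypes')
match s-[:starts]->t-[:SubTypes]->ts
return t.Id, t.Name, collect(ts.Name) as JobSpecialties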
I've actually submitted a pull request (through back channels, since it broke some unit tests) to fix that (so you can have multiple types in an array built on the fly), but I think there are some concerns about whether merging different types is a good idea.
https://github.com/wfreeman/neo4j/commit/ca457ace0df4732376833b8694e4affac4143244
Update: this will be fixed in 1.9.M05/1.9.GA. Now you can build an array with mixed types:
http://console.neo4j.org/r/vm4f83
My objective:
I have built a working controller action in MVC which takes user input for various filter criteria and, using PredicateBuilder (part of LinqKit; sorry, I'm not allowed enough links yet), builds the appropriate LINQ query to return rows from a "master" table in SQL with a couple hundred thousand records. My implementation of the predicates is totally inelegant, as I'm new to a lot of this and under a very tight deadline, but it did make life easier. The page operates perfectly as-is.
To this, I need to add a full-text search filter. Understanding the way LINQ translates Contains to LIKE '%...%', and using the advice in Simon Blog: LINQ-to-SQL - Enabling Full-Text Searching, I've already prepared table-valued functions in SQL to run FREETEXT queries on the relevant columns. I have 4 functions, to match the query against 4 separate tables.
My approach:
At the moment, I'm building the predicates (I'll spare you the details) for the initial IQueryable data object and running a LINQ command to return the rows, like so:
var MyData = DB.Master_Items.Where(outer);
Then, I'm attempting to further filter MyData on the Keys returned by my full-text search functions:
var FTS_Matches_Subtable_1 = (from tbl in DB.Subtable_1
                              join fts in DB.udf_Subtable_1_FTSearch(KeywordTerms)
                                  on tbl.ID equals fts.ID
                              select tbl.ForeignKey);
... I have 4 of those sets of matches, which I've tried to use to filter my original dataset in several ways with no success. For instance:
MyNewData = MyData.Where(d => FTS_Matches_Subtable_1.Contains(d.Key) ||
                              FTS_Matches_Subtable_2.Contains(d.Key) ||
                              FTS_Matches_Subtable_3.Contains(d.Key) ||
                              FTS_Matches_Subtable_4.Contains(d.Key));
I just get the error: "The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Too many parameters were provided in this RPC request. The maximum is 2100."
I get that it's because I'm trying to pass a relatively large set of data into the Contains function, and LINQ converts each record into a separate SQL parameter, exceeding the limit.
I just don't know how to get around it.
I found another post, linq expression to return property value, which seemed SO promising. I tried ifwdev's solution (the 2nd highest ranked answer): using LinqKit to build an extension that breaks the queries into manageable chunks. But I can't figure out how to implement it. Maybe I'm out of my depth right now?
Is there another approach that I'm missing? Some simpler way to accomplish this that I've overlooked?
Sorry for the long post. But thank you for any help you can provide!
This is a perfect time to go back to raw ADO.NET.
Twisting things around just to use LINQ to SQL probably takes as much time as writing the query and hydration by hand.
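That said, if you'd rather stay in LINQ, one alternative sketch (distinct from the raw ADO.NET suggestion above) is to materialize the full-text keys and batch them so each Contains() stays under the 2100-parameter cap; shown here for one subtable, with Master_Item and d.Key standing in for your real types:

const int batchSize = 2000; // safely below SQL Server's 2100-parameter limit

// Materialize the keys returned by the full-text TVF join once.
var keys = FTS_Matches_Subtable_1.ToList();

var results = new List<Master_Item>();
for (int i = 0; i < keys.Count; i += batchSize)
{
    var batch = keys.Skip(i).Take(batchSize).ToList();
    // Each iteration issues one query with at most batchSize parameters.
    results.AddRange(MyData.Where(d => batch.Contains(d.Key)));
}

To preserve the OR across all four subtables, union the four key lists before batching.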