Document example
{
"_id": 1,
"test": {
"item_obj": {
"item1": ["a", "b"],
"item2": ["c"],
"item3": ["a", "d"]
}
}
}
I want to fetch documents where "a" exists somewhere inside test.item_obj. "a" may exist in any of the arrays, and we don't know which keys are present inside item_obj (no idea whether item1, item2, or item3 exists).
I need a Rails/Mongoid query.
If this is your search case, then whatever way you look at it you need the JavaScript evaluation of a $where clause to resolve your current structure. As a shell example (since you need to use a JavaScript expression anyway):
db.collection.find(function() {
var root = this.test.item_obj;
return Object.keys(root).some(function(key) {
return root[key].indexOf("a") !== -1;
});
})
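For clarity, passing a function to find() in the shell is effectively shorthand for an explicit $where, so the same thing can be written like this (a sketch of the equivalent form):
db.collection.find({
    "$where": function() {
        var root = this.test.item_obj;
        // true if any of the unknown keys holds an array containing "a"
        return Object.keys(root).some(function(key) {
            return root[key].indexOf("a") !== -1;
        });
    }
})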
Or for mongoid that is something like:
func = <<-eof
var root = this.test.item_obj;
return Object.keys(root).some(function(key) {
return root[key].indexOf("a") !== -1;
});
eof
Model.for_js(func)
However, if you simply change your structure to define "item_objects" as an array as follows:
{
"_id": 1,
"test": {
"item_objects": [
{ "name": "item1", "data": ["a","b"] },
{ "name": "item2", "data": ["c"] },
{ "name": "item3", "data": ["a","d"] }
]
}
}
Then asking for what you want here is as basic as:
db.collection.find({
"test.item_objects.data": "a"
})
Or for mongoid:
Model.where( "test.item_objects.data" => "a" )
Nested arrays are not really a great idea though, so perhaps live with:
{
"_id": 1,
"test": {
"item_objects": [
{ "name": "item1", "data": "a" },
{ "name": "item1", "data": "b" },
{ "name": "item2", "data": "c" },
{ "name": "item3", "data": "a" },
{ "name": "item3", "data": "d" }
]
}
}
This is basically the same thing, just a bit more long-winded, but ultimately much easier to deal with in atomic updates. And of course the query to find the values in the document is exactly the same.
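As a rough sketch of why the flattened form is friendlier for atomic updates (the item4/"e" values here are made up for illustration), adding or removing a single entry becomes a plain $push or $pull:
// add one new entry without rewriting the whole sub-document
db.collection.update(
    { "_id": 1 },
    { "$push": { "test.item_objects": { "name": "item4", "data": "e" } } }
)

// remove a single entry just as easily
db.collection.update(
    { "_id": 1 },
    { "$pull": { "test.item_objects": { "name": "item3", "data": "a" } } }
)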
I have the following OData response:
{
"d": {
"results": [
{
"__metadata": {
"id": ....,
"URI": .....,
"type": .....
},
"SchemaId": "ABC"
},
{
"__metadata": {
"id": ....,
"URI": .....,
"type": .....
},
"SchemaId": "DEF"
}
]
}
}
I want to filter for all the SchemaId values. Can anyone help with the filter query?
"I want to filter for all the schemaId" - it is not entirely clear what you mean, but if you want to filter on a specific SchemaId value you have to do it like this:
.../EntitySet?$filter=SchemaId eq 'Whateva'
https://www.odata.org/getting-started/basic-tutorial/
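If what you actually want is to read the SchemaId of every entity rather than filter on one value, the $select system query option may be closer to what you are after; a sketch against the same entity set:
.../EntitySet?$select=SchemaId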
I have a set of Author nodes. An Author node is the single parent of multiple Book nodes.
My goal: To print all Author nodes with no order and no limit, with each author's first three books in alphabetical order.
Desired output: (let's pretend book names are a single letter)
[
{
"name" : "Leo Tolstoy",
"books": [
{ "name": "A" },
{ "name": "B" },
{ "name": "D" }
]
},
{
"name": "Charles Dickens",
"books": [
{ "name": "C" },
{ "name": "E" },
{ "name": "F" }
]
},
{
"name": "Oscar Wilde
...
]
My Problem:
I tried this:
MATCH(author:Author)
WITH author
OPTIONAL MATCH(author)-[:WROTE]->(book:Book)
WITH author, book
ORDER BY book.name
LIMIT 3
WITH author, collect(book) AS books
RETURN collect (
{
name: author.name,
books: books
}
);
But this gives:
[
{
"name" : "Leo Tolstoy",
"books": [
{ "name": "A" },
{ "name": "B" },
]
},
{
"name": "Charles Dickens",
"books": [
{ "name": "C" }
]
}
]
How could I achieve my desired output in Neo4j v3.5?
This should work:
MATCH (author:Author)
OPTIONAL MATCH (author)-[:WROTE]->(book:Book)
// order rows by book name so each author's collected list is alphabetical, then keep the first three
WITH author, book.name AS bookName
ORDER BY bookName
WITH author, COLLECT({name: bookName})[..3] AS bookNames
RETURN COLLECT({name: author.name, books: bookNames}) AS result
I have a SwiftyJSON object and I cannot filter the array below. I have tried this solution https://stackoverflow.com/a/37497170/4831567, but my JSON format is different, which is why it is not working.
[
{
"name": "19860",
"header": {
"start_time": "1519270200",
"end_time": "1519299000",
"state": "lunch"
},
"alerts": "1",
"venue": {
"location": "Delhi, India",
"timezone": "+05:30"
},
"srs_category": [
0,
1
]
},
{
"name": "19861",
"header": {
"start_time": "1519270200",
"end_time": "1519299000",
"state": "Dinner"
},
"alerts": "1",
"venue": {
"location": "Mumbai, India",
"timezone": "+05:30"
},
"srs_category": [
1,
3
]
},
{
"name": "19862",
"header": {
"start_time": "1519270200",
"end_time": "1519299000",
"state": "lunch"
},
"alerts": "1",
"venue": {
"location": "Surat, India",
"timezone": "+05:30"
},
"srs_category": [
0,
2
]
}
]
I want to find the objects whose srs_category contains 1. I know this is possible with a loop and a condition, but I want to do it via NSPredicate. If that is possible, please help me.
Thank you.
Here is an easy way to do it with SwiftyJSON:
let filtered = JSON(yourArray).arrayValue.filter({
$0["srs_category"].arrayValue.map({ $0.intValue }).contains(1)
})
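If you then only need the matching name values, you can map over the filtered result with the same SwiftyJSON API (a small follow-up sketch):
let names = filtered.map { $0["name"].stringValue }
print(names) // ["19860", "19861"] for the sample data above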
Use a native Swift function rather than NSPredicate.
data represents the Data object received from somewhere:
do {
if let json = try JSONSerialization.jsonObject(with:data) as? [[String:Any]] {
let srsCategory1 = json.first(where: { dict -> Bool in
guard let array = dict["srs_category"] as? [Int] else { return false }
return array.contains(1)
})
print(srsCategory1 ?? "not found")
}
} catch {
print(error)
}
If there are multiple items that can match the condition, replace first(where:) with filter. The result is then a non-optional array.
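A minimal sketch of that filter variant, reusing the json value from inside the if let block above:
let matching = json.filter { dict -> Bool in
    guard let array = dict["srs_category"] as? [Int] else { return false }
    return array.contains(1)
}
print(matching.count) // 2 for the sample data ("19860" and "19861")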
I'm using default analyzers and indexing. So let's say I have this simple mapping:
"question": {
"properties": {
"title": {
"type": "string"
},
"answer": {
"properties": {
"text": {
"type": "string"
}
}
}
}
}
(that was an example. sorry if it has typos)
Now, I perform the following search.
GET _search
{
"query": {
"query_string": {
"query": "yes correct",
"fields": ["answer.text"]
}
}
}
The results will rank a text value like "yes correct." (doc id 1) ahead of simply "yes correct" (without a period, doc id 181). Both hits have the same score value, but the hits array lists the one with the smaller doc id first. I understand that the default index option includes sorting by doc id, so how do I exclude that one attribute and still use the rest of the default options?
I'm not setting any custom analyzers, so everything is using default values for Elasticsearch 2.0.
This is probably a use case for Dis Max Query
A query that generates the union of documents produced by its
subqueries, and that scores each document with the maximum score for
that document as produced by any subquery, plus a tie breaking
increment for any additional matching subqueries.
So following that, you need to make an exact match on your answer field score highest by giving it the biggest boost. You'll have to use a custom analyzer for that. These would be your mappings:
PUT /test
{
"settings": {
"analysis": {
"analyzer": {
"my_keyword": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"asciifolding",
"lowercase"
]
}
}
}
},
"mappings": {
"question": {
"properties": {
"title": {
"type": "string"
},
"answer": {
"type": "object",
"properties": {
"text": {
"type": "string",
"analyzer": "my_keyword",
"fields": {
"stemmed": {
"type": "string",
"analyzer": "standard"
}
}
}
}
}
}
}
}
}
Your test data:
PUT /test/question/1
{
"title": "title nr1",
"answer": [
{
"text": "yes correct."
}
]
}
PUT /test/question/2
{
"title": "title nr2",
"answer": [
{
"text": "yes correct"
}
]
}
Now when you query for "yes correct." using a query such as this:
POST /test/_search
{
"query": {
"dis_max": {
"tie_breaker": 0.7,
"boost": 1.2,
"queries": [
{
"match": {
"answer.text": {
"query": "yes correct.",
"type": "phrase"
}
}
},
{
"match": {
"answer.text.stemmed": {
"query": "yes correct.",
"operator": "and"
}
}
}
]
}
}
}
You get this output:
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 2,
"max_score": 0.37919715,
"hits": [
{
"_index": "test",
"_type": "question",
"_id": "1",
"_score": 0.37919715,
"_source": {
"title": "title nr1",
"answer": [
{
"text": "yes correct."
}
]
}
},
{
"_index": "test",
"_type": "question",
"_id": "2",
"_score": 0.11261705,
"_source": {
"title": "title nr2",
"answer": [
{
"text": "yes correct"
}
]
}
}
]
}
}
If you run the very same query without the trailing dot, so it becomes "yes correct", you get this result:
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 2,
"max_score": 0.37919715,
"hits": [
{
"_index": "test",
"_type": "question",
"_id": "2",
"_score": 0.37919715,
"_source": {
"title": "title nr2",
"answer": [
{
"text": "yes correct"
}
]
}
},
{
"_index": "test",
"_type": "question",
"_id": "1",
"_score": 0.11261705,
"_source": {
"title": "title nr1",
"answer": [
{
"text": "yes correct."
}
]
}
}
]
}
}
Hopefully this is what you're looking for.
By the way, I'd recommend always using the match family of queries when performing text search. Taken from the documentation:
Comparison to query_string / field
The match family of queries does not go through a "query parsing" process. It does not support field name prefixes, wildcard characters, or other "advanced" features. For this reason, chances of it failing are very small / non existent, and it provides an excellent behavior when it comes to just analyze and run that text as a query behavior (which is usually what a text search box does). Also, the phrase_prefix type can provide a great "as you type" behavior to automatically load search results.
Elasticsearch, or rather Lucene, scoring does not take into account the relative positioning of the tokens. It uses three different criteria:
Term frequency - the frequency with which the search term appears in the document.
Inverse document frequency - the number of occurrences of the search term across the entire index; the more common the term, the less weight it carries in the score.
Field length normalization - the number of tokens present in the target field; shorter fields weigh more.
You can learn more about it here.
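For reference, a rough sketch of the classic Lucene TF/IDF practical scoring function (the default similarity in Elasticsearch 2.0), ignoring the query norm, coordination factor, and any boosts:
\[
\mathrm{score}(q, d) \;\approx\; \sum_{t \in q} \sqrt{\mathrm{freq}(t, d)} \cdot \left( 1 + \ln \frac{N}{\mathrm{docFreq}(t) + 1} \right)^{2} \cdot \frac{1}{\sqrt{\mathrm{numTerms}(d)}}
\]
Here the first factor is the term frequency, the squared factor is the inverse document frequency (N being the number of documents in the index), and the last factor is the field length norm. Since "yes correct." and "yes correct" both analyze to the same two tokens with the standard analyzer, all three factors are identical for both documents, which is why they end up with exactly the same score and fall back to doc id order.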
I am trying to get the relationship type of a very simple Cypher query, like the following
MATCH (n)-[r]-(m) RETURN n, r, m;
Unfortunately this returns an empty object for r. This is troublesome since I can't distinguish between the different types of relationships. I can monkey-patch this by adding a property like [r:KNOWS {type:'KNOWS'}], but I am wondering if there isn't a direct way to get the relationship type.
I even followed the official Neo4j tutorial (as shown below), which demonstrates the problem.
Graph Setup:
create (_0 {`age`:55, `happy`:"Yes!", `name`:"A"})
create (_1 {`name`:"B"})
create _0-[:`KNOWS`]->_1
create _0-[:`BLOCKS`]->_1
Query:
MATCH p=(a { name: "A" })-[r]->(b)
RETURN *
JSON RESPONSE BODY:
{
"results": [
{
"columns": [
"a",
"b",
"p",
"r"
],
"data": [
{
"row": [
{
"name": "A",
"age": 55,
"happy": "Yes!"
},
{
"name": "B"
},
[
{
"name": "A",
"age": 55,
"happy": "Yes!"
},
{},
{
"name": "B"
}
],
{}
]
},
{
"row": [
{
"name": "A",
"age": 55,
"happy": "Yes!"
},
{
"name": "B"
},
[
{
"name": "A",
"age": 55,
"happy": "Yes!"
},
{},
{
"name": "B"
}
],
{}
]
}
]
}
],
"errors": []
}
As you can see, I get an empty object for r, which makes it impossible to distinguish between the relationships.
NOTE: I am running Neo4J v.2.2.2
Use the type() function.
MATCH (n)-[r]-(m) RETURN type(r);
To list each relationship type only once, add DISTINCT:
MATCH (n)-[r]-(m) RETURN distinct type(r);
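If you also want the nodes themselves, as in the original query, you can return the type alongside them (a small sketch):
// return both endpoint nodes together with the relationship type
MATCH (n)-[r]-(m)
RETURN n, type(r) AS relationshipType, m;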