Suppose I have :items with a has_many association to :properties. I can then search for all items that have a property with name 'a_name' and value 'a_value' like this:
q: { properties_name_eq: 'a_name', properties_value_eq: 'a_value' }
Now what if I want to search for all items that have a property with name 'a_name' and value 'a_value' and also a property with name 'another_name' and value 'another_value'?
The following doesn't work, because it joins the properties table only once:
q: {
g: {
'0' => { properties_name_eq: 'a_name', properties_value_eq: 'a_value' },
'1' => { properties_name_eq: 'another_name', properties_value_eq: 'another_value'}
}
}
The generated SQL looks something like this
SELECT DISTINCT "items".* FROM "items"
LEFT OUTER JOIN "properties" ON "properties"."item_id" = "items"."id"
INNER JOIN ((SELECT "items".* FROM "items")) AS sel_111 on sel_111.id
WHERE
(("properties"."name" = 'a_name' AND "properties"."value" = 'a_value') AND ("properties"."name" = 'another_name' AND "properties"."value" = 'another_value'))
EDIT:
To make it clearer what I am after, I'll paste a spec below.
Item.create name: 'ab', properties_attributes: [{ name: 'a', value: 'a1'}, {name: 'b', value: 'b1'}]
Item.create name: 'a', properties_attributes: [{ name: 'a', value: 'a1'}]
Item.create name: 'b', properties_attributes: [{name: 'b', value: 'b1'}]
Item.create name: 'ax', properties_attributes: [{ name: 'a', value: 'a1'}, {name: 'b', value: 'x'}]
Item.create name: 'bx', properties_attributes: [{ name: 'a', value: 'x'}, {name: 'b', value: 'b1'}]
Item.create name: 'other', properties_attributes: [{ name: 'other', value: '123'}]
get :index, q: { properties_name_eq: 'a', properties_value_eq: 'a1' }
names = JSON.parse(response.body).map{|u| u['name']}
expect(names).to match_array ['ab', 'a', 'ax'] # OK!
get :index,
q: {
m: 'or',
g: {
'0' => { properties_name_eq: 'a', properties_value_eq: 'a1' },
'1' => { properties_name_eq: 'b', properties_value_eq: 'b1'}
}
}
names = JSON.parse(response.body).map{|u| u['name']}
expect(names).to match_array ['ab'] #FAILS!
Just use Model.search(params[:q].try(:merge, m: 'or')). Using your example:
q: {
m: 'or',
g: {
'0' => { properties_name_eq: 'a_name', properties_value_eq: 'a_value' },
'1' => { properties_name_eq: 'another_name', properties_value_eq: 'another_value'}
}
}
You can find more information in the Ransack documentation.
You need an OR at the WHERE level of your query, because properties.name can't equal 'a_name' and 'another_name' at the same time. A second alias for the table is not required.
You can solve this by using multiple queries:
1. For each name + value pair, query the properties table directly for the item IDs that match (so the predicates are name_eq / value_eq rather than properties_name_eq).
2. Intersect the resulting ID arrays into item_ids.
3. In the final query on :items, add the clause WHERE id IN (item_ids).
Here's a code example that does steps 1 & 2:
def property_item_ids(conditions)
  conditions.inject([]) do |result, (_key, condition)|
    # ransack returns a Search object; call .result to get the relation to pluck from
    ids = Property.ransack(condition).result.pluck(:item_id)
    # the first group seeds the set; every later group is intersected with it
    result.empty? ? ids : result & ids
  end
end
Get the item IDs that have all properties:
conditions = {
  '0' => { name_eq: 'a', value_eq: 'a1' },
  '1' => { name_eq: 'b', value_eq: 'b1' }
}
item_ids = property_item_ids(conditions)
For step 3, invoke ransack with item_ids:
q: {
  id_in: item_ids
}
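Putting the three steps together, here is a minimal sketch of an index action (hypothetical code, assuming the client posts the grouped conditions keyed against Property in params[:q][:g], as in the conditions hash above):
def index
  # assumed param shape: q: { g: { '0' => { name_eq: 'a', value_eq: 'a1' }, ... } }
  groups   = params[:q].try(:[], :g) || {}
  item_ids = property_item_ids(groups)
  items    = groups.empty? ? Item.all : Item.ransack(id_in: item_ids).result
  render json: items
end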
Related
I have a question: how can I get nodes that share the same property value (for example, the same name property)? In SQL I would use GROUP BY, but in Cypher I don't know what to use to group them. Below I've added my simple input and an example of the output I want, to illustrate my problem.
[
{
id:1,
name: 'name1'
},
{
id:2,
name: 'name2'
},
{
id:3,
name: 'name2'
},
{
id:4,
name: 'name3'
},
{
id:5,
name: 'name3'
},
{
id:6,
name: 'name3'
},
{
id:7,
name: 'name4'
},
{
id:8,
name: 'name5'
},
{
id:9,
name: 'name6'
},
{
id:10,
name: 'name6'
}
]
My solution should give me this:
[
{
count:2,
name: 'name2'
},
{
count:3,
name: 'name3'
},
{
count:2,
name: 'name6'
}
]
Thank you in advance for your help
In Cypher, when you aggregate (in the straightforward cases), the grouping key is formed from the non-aggregation terms.
If nodes have already been created from the input (let's say they're using the label :Entry), then we can get the output you want with this:
MATCH (e:Entry)
WITH e.name AS name, count(e) AS count
WHERE count > 1
RETURN name, count
The grouping key here is name, which becomes distinct as a result of the aggregation: there is one row per distinct name value, with the count of nodes carrying that name. The WHERE count > 1 filter then keeps only the names that occur more than once, as in your expected output.
I want to merge the ids of entries that have the same name value:
hashes = [{
id: 3456824,
name: 'John'
},{
id: 6578954,
name: 'Vicky'
},{
id: 987456,
name: 'John'
}]
Expected:
[{
id: [3456824,987456],
name: 'John'
},{
id: 6578954,
name: 'Vicky'
}]
How can I achieve this in Ruby on Rails?
Here is a one-liner:
hashes = [{
id: 3456824,
name: 'John'
},{
id: 6578954,
name: 'Vicky'
},{
id: 987456,
name: 'John'
}]
result = hashes.group_by{|h| h[:name] }.map{|k, v| {id: v.map{|x| x[:id]}, name: k}}
puts result
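For the sample hashes, result is (note that :id is always wrapped in an array here, so Vicky comes back as [6578954] rather than the bare 6578954 shown in the expected output):
result
#=> [{:id=>[3456824, 987456], :name=>"John"}, {:id=>[6578954], :name=>"Vicky"}]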
Check this repl: https://repl.it/repls/ShadowyCornyVideogames
Here are two ways to compute the desired result.
Use the form of Hash::new that takes a block
hashes.each_with_object(Hash.new { |h,k| h[k] = [] }) do |g,h|
h[g[:name]] << g[:id]
end.map { |name,id| { id: id, name: name } }
#=> [{:id=>[3456824, 987456], :name=>"John"},
# {:id=>[6578954], :name=>"Vicky"}]
The first step of this calculation[1] is
hashes.each_with_object(Hash.new { |h,k| h[k] = [] }) do |g,h|
h[g[:name]] << g[:id]
end
#=> {"John"=>[3456824, 987456], "Vicky"=>[6578954]}
If a hash is defined
h = Hash.new { |h,k| h[k] = [] }
and (possibly after having added key-value pairs) h has no key k, h[k] in
h[k] << v
causes the block { |h,k| h[k] = [] } to be executed, resulting in the key-value pair k=>[] being added to h; then << v is executed, changing h[k] from [] to [v].
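A quick illustration of that behaviour:
h = Hash.new { |h,k| h[k] = [] }
h[:a] << 1   # :a is absent, so the block first sets h[:a] = [], then 1 is appended
h            #=> {:a=>[1]}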
Notice that this returns :id=>[6578954], whereas the question asked for :id=>6578954. Having every value of :id be an array avoids the need for subsequent code that processes the result to check whether :id holds an array or an integer.
If :id=>6578954, were desired, one could write
hashes.each_with_object(Hash.new { |h,k| h[k] = [] }) do |g,h|
h[g[:name]] << g[:id]
end.transform_values { |v| v.size==1 ? v.first : v }.
map { |name,id| { id: id, name: name } }
#=> [{:id=>[3456824, 987456], :name=>"John"},
# {:id=>6578954, :name=>"Vicky"}]
See Hash#transform_values.
Use the form of Hash#update (a.k.a. merge!) that employs a block to determine the values of keys that are present in both hashes being merged
hashes.each_with_object({}) do |g,h|
h.update(g[:name]=>[g[:id]]) { |_,o,n| o+n }
end.map { |name,id| { id: id, name: name } }
#=> [{:id=>[3456824, 987456], :name=>"John"},
# {:id=>[6578954], :name=>"Vicky"}]
If :id=>6578954, rather than :id=>[6578954], were desired:
hashes.each_with_object({}) do |g,h|
h.update(g[:name]=>g[:id]) { |_,o,n| [*o,n] }
end.map { |name,id| { id: id, name: name } }
#=> [{:id=>[3456824, 987456], :name=>"John"},
# {:id=>6578954, :name=>"Vicky"}]
Notice that here update's argument is g[:name]=>g[:id] whereas it was previously g[:name]=>[g[:id]].
The first step is as follows.
hashes.each_with_object({}) do |g,h|
h.update(g[:name]=>g[:id]) { |_,o,n| [*o,n] }
end
#=> {"John"=>[3456824, 987456], "Vicky"=>6578954}
In general, one or both of these approaches can be taken whenever Enumerable#group_by can be used; the reverse is often true as well. The choice among these methods is a matter of personal taste.
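For comparison, the group_by counterpart of the first calculation is:
hashes.group_by { |g| g[:name] }
      .map { |name, a| { id: a.map { |g| g[:id] }, name: name } }
  #=> [{:id=>[3456824, 987456], :name=>"John"}, {:id=>[6578954], :name=>"Vicky"}]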
[1] A variant of the first part of this calculation is hashes.each_with_object({}) { |g,h| (h[g[:name]] ||= []) << g[:id] } #=> {"John"=>[3456824, 987456], "Vicky"=>[6578954]}.
You can do it like this:
args = [{
id: 3456824,
name: 'John'
},{
id: 6578954,
name: 'Vicky'
},{
id: 987456,
name: 'John'
}]
value_pairs = args.map { |h| h.values_at(:name, :id) }
grouped_by_name = value_pairs.group_by(&:first).transform_values { |arr| arr.map(&:last) }
as_hashes = grouped_by_name.map { |name, ids| { id: ids, name: name } }
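For the sample args this yields:
as_hashes
#=> [{:id=>[3456824, 987456], :name=>"John"}, {:id=>[6578954], :name=>"Vicky"}]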
One more possible solution is:
array = [
{
id: 3456824,
name: "John"
},
{
id: 6578954,
name: "Vicky"
},
{
id: 987456,
name: "John"
}
]
grouped_by_name = array.each_with_object(Hash.new {|h,k| h[k] = [] }) do |hash, result|
result[hash[:name]] << hash[:id]
end
=> {"John"=>[3456824, 987456], "Vicky"=>[6578954]}
grouped_by_name.map do |grouped_hash|
{
id: grouped_hash.last,
name: grouped_hash.first
}
end
=> [{:id=>[3456824, 987456], :name=>"John"}, {:id=>[6578954], :name=>"Vicky"}]
I have a graph like the one below:
What Cypher query must I use to return the graph as an object, where each parent node contains an array of its child nodes?
Example:
state: {
name: 'New York'
publishingCompanies: [
{
name: 'Penguin'
authors: [
{
name: 'George Orwell',
books: [
{ name: 'Why I Write'},
{ name: 'Animal Farm'},
{ name: '1984' }
]
},
{
name: 'Vladimir Nobokov'
books: [
{ name: 'Lolita' }
]
},
...
]
},
{
name: 'Random House',
authors: [
...
]
}
]
}
I tried to use apoc.convert.toTree, but it returned an array of paths from State to Book.
This should return the state object (assuming the state name is passed in via the stateName parameter):
MATCH (s:State)
WHERE s.name = $stateName
OPTIONAL MATCH (s)-[:IS_STATE_OF]->(c)
OPTIONAL MATCH (c)-[:PUBLISHED_FOR]->(a)
OPTIONAL MATCH (a)-[:WROTE]->(b)
WITH s, c, a, CASE WHEN b IS NULL THEN [] ELSE COLLECT({name: b.name}) END AS books
WITH s, c, CASE WHEN a IS NULL THEN [] ELSE COLLECT({name: a.name, books: books}) END AS authors
WITH s, CASE WHEN c IS NULL THEN [] ELSE COLLECT({name: c.name, authors: authors}) END AS pcs
RETURN {name: s.name, publishingCompanies: pcs} AS state
I want to create a nested hash from four values: type, name, year, and value. That is, the key of the outer hash will be type, its value will be another hash keyed by name, and that hash's value will be a third hash whose key is year and whose value is value.
The array of objects I'm iterating looks like this:
elements = [
{
year: '2018',
items: [
{
name: 'name1',
value: 'value1',
type: 'type1',
},
{
name: 'name2',
value: 'value2',
type: 'type2',
},
]
},
{
year: '2019',
items: [
{
name: 'name3',
value: 'value3',
type: 'type2',
},
{
name: 'name4',
value: 'value4',
type: 'type1',
},
]
}
]
And I'm getting all values together using two loops like this:
elements.each do |element|
  year = element[:year]
  element[:items].each do |item|
    name = item[:name]
    value = item[:value]
    type = item[:type]
    # TODO: create nested hash
  end
end
Expected output is like this:
{
"type1" => {
"name1" => {
"2018" => "value1"
},
"name4" => {
"2019" => "value4"
}
},
"type2" => {
"name2" => {
"2018" => "value2"
},
"name3" => {
"2019" => "value3"
}
}
}
I tried out some methods but it doesn't seem to work as expected. How can I do this?
elements.each_with_object({}) { |g,h| g[:items].each { |f|
h.update(f[:type]=>{ f[:name]=>{ g[:year]=>f[:value] } }) { |_,o,n| o.merge(n) } } }
#=> {"type1"=>{"name1"=>{"2018"=>"value1"}, "name4"=>{"2019"=>"value4"}},
# "type2"=>{"name2"=>{"2018"=>"value2"}, "name3"=>{"2019"=>"value3"}}}
This uses the form of Hash#update (aka merge!) that employs a block (here { |_,o,n| o.merge(n) }) to determine the values of keys that are present in both hashes being merged. See the doc for definitions of the three block variables (here _, o and n). Note that in performing o.merge(n), o and n will have no common keys, so a block is not needed for that operation.
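To isolate the update-with-block step, here is the same idea on two tiny hashes that share a key:
{ a: { x: 1 } }.update(a: { y: 2 }) { |_key, old, new| old.merge(new) }
  #=> {:a=>{:x=>1, :y=>2}}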
Assuming you want to preserve the references (unlike in your desired output), here you go:
elements = [
{
year: '2018',
items: [
{name: 'name1', value: 'value1', type: 'type1'},
{name: 'name2', value: 'value2', type: 'type2'}
]
},
{
year: '2019',
items: [
{name: 'name3', value: 'value3', type: 'type2'},
{name: 'name4', value: 'value4', type: 'type1'}
]
}
]
Just iterate over everything and reduce into the hash. On structures of known shape it's a trivial task:
elements.each_with_object(
Hash.new { |h, k| h[k] = Hash.new(&h.default_proc) } # auto-vivify nested hashes on demand
) do |h, acc|
h[:items].each do |item|
acc[item[:type]][item[:name]][h[:year]] = item[:value]
end
end
#⇒ {"type1"=>{"name1"=>{"2018"=>"value1"},
# "name4"=>{"2019"=>"value4"}},
# "type2"=>{"name2"=>{"2018"=>"value2"},
# "name3"=>{"2019"=>"value3"}}}
I am trying to left join the following arrays of hashes:
input:
a = [{id: 1, name: 'Bob'}, {id: 2, name: 'Jack'}, {id: 3, name: 'Tom'}]
b = [{id: 3, age: 12}, {id: 2, age: 7}]
output:
[{id: 1, name: 'Bob', age: nil}, {id: 2, name: 'Jack', age: 7}, {id: 3, name: 'Tom', age: 12}]
Currently I am doing something along these lines:
a.map do |x|
{
id: x[:id],
name: x[:name],
age: (b.detect{|y| x[:id] == y[:id]} || {age: nil}).fetch(:age)
}
end
It works, but it is super slow when the data set is large.
Is there any better way to perform the "join" operation more efficiently?
[a, b].map { |list| list.group_by { |e| e[:id] } }
      .reduce { |left, right| left.merge(right) { |_, v1, v2| v1.first.merge(v2.first) } }
      .values
      .map { |e| Array === e ? { age: nil, name: nil }.merge(e.first) : e }
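For the sample a and b above this returns (the key order differs from the expected output, but the content is the same):
#=> [{:age=>nil, :name=>"Bob", :id=>1},
#    {:id=>2, :name=>"Jack", :age=>7},
#    {:id=>3, :name=>"Tom", :age=>12}]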
The preparation (grouping) step is O(N), the merge is O(N), and the finalization is O(N), so the whole join is linear, unlike the detect-per-element version, which is quadratic.
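Another approach: build a hash mapping each id in b to its age, then merge that age into every element of a: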
h = b.each_with_object({}) { |g,h| h[g[:id]] = g[:age] }
#=> {3=>12, 2=>7}
a.map { |g| g.merge(age: h[g[:id]]) }
#=> [{:id=>1, :name=>"Bob", :age=>nil},
# {:id=>2, :name=>"Jack", :age=>7},
# {:id=>3, :name=>"Tom", :age=>12}]
If a is to be modified in place, change the second line to
a.each { |g| g[:age] = h[g[:id]] }
a #=> [{:id=>1, :name=>"Bob", :age=>nil},
# {:id=>2, :name=>"Jack", :age=>7},
# {:id=>3, :name=>"Tom", :age=>12}]