I want to perform multiple calculations using one query to the price_histories table, and then render some statistics on those prices, such as the average, minimum, and maximum.
price_histories_controller.rb
price_stats = PriceHistory.where('created_at >= ? AND cast(item_id as integer) = ?', 1.day.ago, params['item_id'])
avg_array = price_stats.group(:day).average(:price).to_a
min_array = price_stats.group(:day).min(:price).to_a
max_array = price_stats.group(:day).max(:price).to_a
count_array = price_stats.group(:day).count(:price).to_a
This is the relevant code that causes the error. I'd like to perform several calculations on a set of grouped data, but after the first calculation is done, I keep getting:
TypeError (no implicit conversion of Symbol into Integer)
Ideally I would end up with an object like this one to be rendered:
@all_stats = {
  average: avg_array,
  min: min_array,
  max: max_array,
  count: count_array
}
render json: @all_stats
This sums up my intentions pretty well. I'm new to Ruby, and I'd like a solution or a better approach, which I'm sure exists.
The following code works fine on its own, and I'd like someone to point me in the right direction as to why it works here but fails as soon as I add an extra calculation:
price_stats = PriceHistory.where('created_at >= ? AND cast(item_id as integer) = ?', 1.day.ago, params['item_id'])
avg_array = price_stats.group(:day).average(:price).to_a
and leads to:
{
  "average": [
    [null, "11666.666666666667"],
    ["24/4/2019", "11666.666666666667"],
    ["24", "11666.6666666666666667"],
    ["2051", "11666.6666666666666667"]
  ],
  "min": [],
  "max": [],
  "count": []
}
Other approach:
PriceHistory.select(
"AVG(price) AS average_score,
MIN(price) AS average_min,
MAX(price) AS average_max,
COUNT(*) AS price_count"
).where(
'created_at >= ? AND cast(item_id as integer) = ?',
1.day.ago, params['item_id']
).group(:day)
Error:
ArgumentError (Call `select' with at least one field):
I think this should work:
PriceHistory.where(
'created_at >= ? AND cast(item_id as integer) = ?',
1.day.ago,
params['item_id']
).group(:day).select(
"SUM(price) AS sum_price",
"MAX(price) AS max_price",
"MIN(price) AS min_price",
"AVG(price) AS avg_price",
"day"
)
This will return an array of records, each of which has the methods day, sum_price, max_price, min_price, and avg_price.
Note that the names of the SQL functions might differ depending on your database.
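As a side note on the original error: ActiveRecord's grouped calculation methods are average, minimum, maximum, and count; min and max (without the "imum") fall through to Enumerable and expect an integer argument, which is the likely source of the "no implicit conversion of Symbol into Integer" TypeError. A minimal sketch of both routes to the rendered stats, under that assumption (the scope variable name is mine):

# Option A: per-metric grouped calculations, with the corrected method names.
scope = PriceHistory
          .where('created_at >= ? AND cast(item_id as integer) = ?',
                 1.day.ago, params['item_id'])
          .group(:day)
@all_stats = {
  average: scope.average(:price).to_a,
  min:     scope.minimum(:price).to_a,
  max:     scope.maximum(:price).to_a,
  count:   scope.count(:price).to_a
}
render json: @all_stats

# Option B: one query with SQL aggregates (as in the answer above), then shape
# each row for rendering. Commented out so that only one render runs.
# rows = scope.select('day, AVG(price) AS avg_price, MIN(price) AS min_price,
#                      MAX(price) AS max_price, COUNT(*) AS price_count')
# render json: rows.map { |r|
#   { day: r.day, average: r.avg_price, min: r.min_price,
#     max: r.max_price, count: r.price_count }
# }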
Related
I need a RoR MongoDB query to list articles within a given radius, sorted by created_at.
The challenge is that addresses are saved in a separate table and referenced by key/id from articles, and I don't know how to build a geoNear query for this scenario.
Pagination is also needed, and a performant query is desirable.
My current approach is:
1. Get addresses within the defined radius.
2. Get articles associated with the address results from step 1.
3. sort_by address (geoNear default).
Pagination makes use of last_address_id. There is also an issue here, as the last page loads in a loop.
# searches_controller.rb
def index
  @addresses = Address.get_addresses_with_radius(article_search_params).to_a
  @address_hash = @addresses.group_by { |a| a['_id'].to_s }
  @articles = Article.includes(:gift, :category)
                     .where(
                       transaction_status: { '$nin' => ["concluded"] },
                       address_id: { :$in => @addresses.map { |a| a['_id'].to_s } }
                     )
                     .to_a
                     .sort_by { |m| @addresses.map { |a| a['_id'] }.index(m['address_id']) }
end
# address.rb
def self.get_addresses_with_radius(params, additional_query = {})
  # Raw aggregate query with geoNear
  last_maximum_distance = params[:last_maximum_distance] || 0 # in meters
  radius = params[:radius] || 5000000 # in meters
  query_params = additional_query
  if params[:last_address_id]
    query_params[:_id] ||= {}
    query_params[:_id] = query_params[:_id].merge(
      '$ne' => BSON::ObjectId(params[:last_address_id])
    )
  end
  addresses_in_radius = Address.collection.aggregate([
    {
      '$geoNear': {
        near: {
          type: "Point",
          coordinates: [params[:lat].to_f, params[:lon].to_f]
        },
        distanceField: "distance_from", # geoNear automatically writes the distance to this field
        minDistance: last_maximum_distance.to_f,
        maxDistance: radius,
        query: query_params,
        # query: { 'location.0' => { '$ne' => params[:last_lat].to_f },
        #          'location.1' => { '$ne' => params[:last_lon].to_f } },
        spherical: true
      }
    },
    { "$limit": params[:per_page].to_i }
  ])
  addresses_in_radius
end
Currently I'm getting the list of articles sorted by address/distance, per the default geoNear behavior => it should be sorted by created_at.
Pagination is currently based on addresses => it should ideally be based on articles.
Pagination is buggy, as the last page loads in a loop => the loop bug needs to go away.
I'm not sure whether it's better to first search for articles and then addresses, or first addresses and then the articles; either way, everything should stay within the defined radius.
I have two tables connected with a HABTM relation (through a join table).
Table1
id : integer
name: string
Table2
id : integer
name: string
Table3
id : integer
table1_id: integer
table2_id: integer
I need to group Table1 records by similar records from Table2. Example:
userx = Table1.create()
user1.table2_ids = 3, 14, 15
user2.table2_ids = 3, 14, 15, 16
user3.table2_ids = 3, 14, 16
user4.table2_ids = 2, 5, 7
user5.table2_ids = 3, 5
The grouping result I want looks something like:
=> [ [ [1,2], [3, 14, 15] ], [ [2,3], [3,14, 16] ], [ [ 1, 2, 3, 5], [3] ] ]
where the first array contains the user ids and the second contains the table2_ids.
Is there any possible SQL solution, or do I need to create some kind of algorithm?
Updated:
OK, I have code that works as described. Maybe whoever can help me will find it useful for understanding my idea.
def self.compare
  hash = {}
  Table1.find_each do |table_record|
    Table1.find_each do |another_table_record|
      if table_record != another_table_record
        results = table_record.table2_ids & another_table_record.table2_ids
        hash["#{table_record.id}_#{another_table_record.id}"] = results if !results.empty?
      end
    end
  end
  # hash = hash.delete_if { |k, v| v.empty? }
  hash.sort_by { |k, v| v.count }.to_h
end
But I bet you can imagine how long it takes to produce the output. For my 500 Table1 records it's somewhere near 1-2 minutes. With more records the time grows accordingly, so I need an elegant solution or an SQL query.
Table1.find_each do |table_record|
Table1.find_each do |another_table_record|
...
The code above has a performance issue: it queries the database N*N times, which can be reduced to a single query.
# Query table3, constructing the data useful to us
# { table1_id: [table2_ids], ... }
records = Table3.all.group_by { |t| t.table1_id }.map { |t1_id, t3_records|
[t1_id, t3_records.map(&:table2_id)]
}.to_h
Then you can run exactly the same pairwise comparison on records to get the final result hash.
UPDATE:
@AKovtunov You misunderstood me. My code is the first step. With records, which is a {t1_id: t2_ids} hash, you can do something like this:
hash = {}
records.each do |t1_id, t2_ids|
  records.each do |tt1_id, tt2_ids|
    if t1_id != tt1_id
      inter = t2_ids & tt2_ids
      hash["#{t1_id}_#{tt1_id}"] = inter if !inter.empty?
    end
  end
end
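Putting the two steps together, here is a minimal sketch (my own consolidation of the answer above; it assumes Table3 has table1_id and table2_id columns, and uses pluck/transform_values rather than instantiating Table3 models):

def self.compare
  # One query: { table1_id => [table2_ids], ... }
  records = Table3.pluck(:table1_id, :table2_id)
                  .group_by(&:first)
                  .transform_values { |pairs| pairs.map(&:last) }

  # Pairwise intersection in memory, with no further database access
  hash = {}
  records.each do |t1_id, t2_ids|
    records.each do |tt1_id, tt2_ids|
      next if t1_id == tt1_id
      inter = t2_ids & tt2_ids
      hash["#{t1_id}_#{tt1_id}"] = inter unless inter.empty?
    end
  end
  hash.sort_by { |_key, ids| ids.count }.to_h
end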
Initially, when I was trying to build a histogram of all Items whose Order start falls between a given pair of dates, keyed by exactly what the item was (:name_id) and the frequency of that :name_id, I was using the following code:
dates = ["May 27, 2016", "May 30, 2016"]
items = Item.joins(:order).where("orders.start >= ?", dates.first).where("orders.start <= ?", dates.last)
histogram = {}
items.pluck(:name_id).uniq.each do |name_id|
  histogram[name_id] = items.where(name_id: name_id).count
end
This code worked FINE.
Now, however, I'm trying to build a more expansive histogram. I still want to capture the frequency of :name_id over a period of time, but now I want to bound that time by both Order start and end. I'm having trouble, however, combining the ActiveRecord relations that result from the queries. Specifically, if my queries are as follows:
items_a = Item.joins(:order).where("orders.start >= ?", dates.first).where("orders.start <= ?", dates.last)
items_b = Item.joins(:order).where("orders.end >= ?", dates.first).where("orders.end <= ?", dates.last)
How do I join the two queries so that my code below, which acts on the query objects, still works?
items.pluck(:name_id).each do |name_id|
  histogram[name_id] = items.where(name_id: name_id).count
end
What I've tried:
+, but of course that doesn't work because it turns the result into an Array where methods like pluck don't work:
(items_a + items_b).pluck(:name_id)
=> error
merge, this is what all the SO answers seem to say... but it doesn't work for me because, as the docs say, merge figures out the intersection, so my result is like this:
items_a.count
=> 100
items_b.count
=> 30
items_a.merge(items_b)
=> 15
FYI, I've currently worked around this with the code below, but it's far from ideal. Thanks for the help!
name_ids = (items_a.pluck(:name_id) + items_b.pluck(:name_id)).uniq
name_ids.each do |name_id|
# from each query object, return the ids of the item objects that meet the name_id criterion
item_object_ids = items_a.where(name_id:name_id).pluck(:id) + items_b.where(name_id:name_id).pluck(:id) + items_c.where(name_id:name_id).pluck(:id)
# then check the item objects for duplicates and then count up. btw I realize that with the uniq here I'm SOMEWHAT doing an intersection of the objects, but it's nowhere near as severe... the above example where merge yielded a count of 15 is not that off from the truth, when the count should be maybe -5 from the addition of the 2 queries
histogram[name_id] = item_object_ids.uniq.count
end
You can combine your two queries into one:
items = Item.joins(:order).where(
"(orders.start >= ? AND orders.start <= ?) OR (orders.end >= ? AND orders.end <= ?)",
dates.first, dates.last, dates.first, dates.last
)
This might be a little more readable:
items = Item.joins(:order).where(
"(orders.start >= :first AND orders.start <= :last) OR (orders.end >= :first AND orders.end <= :last)",
{ first: dates.first, last: dates.last }
)
Rails 5 will support an or method that might make this a little nicer:
items = Item.joins(:order).where(
  "orders.start >= :first AND orders.start <= :last",
  { first: dates.first, last: dates.last }
).or(
  Item.joins(:order).where(
    "orders.end >= :first AND orders.end <= :last",
    { first: dates.first, last: dates.last }
  )
)
Or maybe not any nicer in this case
Maybe this will be a bit cleaner:
date_range = "May 27, 2016".to_date.."May 30, 2016".to_date
items = Item.joins(:order).where('orders.start' => date_range).or(
  Item.joins(:order).where('orders.end' => date_range)
)
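Whichever form you use, you end up with a single relation again, so the per-name_id loop from the question still works. As a small aside (my own note, not part of the answers above), the same histogram can also come out of one grouped query:

histogram = items.group(:name_id).count
# => a hash of name_id => count, e.g. { 3 => 12, 7 => 5 }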
I need an array that gives me @idea.id sorted by @idea.created_at.month.
For example:
[[1,2,3], [4,5,6], [7,8,9], [], [], [], [], [], [], [], [], []]
where ids 1, 2, and 3 have @idea.created_at.month = 1 and so on through month = 12.
@ideas_by_month = Array.new(12) { Array.new }
@ideas.each do |idea|
  month = idea.created_at.month
  @ideas_by_month[month - 1] << idea.id
end
By the example, I'd need @ideas_by_month[0] to give me ids 1, 2, 3.
This currently adds all of the ideas into one slot [], and isn't sorting properly. How can I change it to make my array look like the example?
Array.new(12,[]) gives you 12 references to the same array. Array.new(12){Array.new} creates 12 different arrays.
The issue is not in your << call, but in your creation of the @ideas_by_month Array. From the Ruby API...
Array.new(3, true) #=> [true, true, true]
Note that the second argument populates the array with references to
the same object. Therefore, it is only recommended in cases when you
need to instantiate arrays with natively immutable objects such as
Symbols, numbers, true or false.
So when you push into any of the nested Arrays, they all reference the same object in memory, which is why every id appears to get pushed into every nested Array.
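A quick console illustration of the difference (not from the original answer; it just demonstrates the documented Array.new behavior):

a = Array.new(3, [])    # three references to one and the same array
a[0] << 1
a  # => [[1], [1], [1]]

b = Array.new(3) { [] } # three distinct arrays
b[0] << 1
b  # => [[1], [], []]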
Instead declare your Array with a block:
@ideas_by_month = Array.new(12) { Array.new }
...which would look like this fully implemented as a class method:
idea.rb
class Idea < ActiveRecord::Base
  ...
  def self.ideas_by_month
    @ideas_by_month = Array.new(12) { Array.new }
    Idea.all.each do |idea|
      month = idea.created_at.month
      @ideas_by_month[month - 1] << idea.id
    end
    return @ideas_by_month
  end
  ...
end
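For reference, the same result can be built with Enumerable#group_by; a minimal sketch (my own variant, not part of the answer above):

def self.ideas_by_month
  by_month = Idea.all.group_by { |idea| idea.created_at.month }
  (1..12).map { |month| (by_month[month] || []).map(&:id) }
end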
I have data in the following format
"stats": {
"team": [],
"outcome": [],
"rank": []
}
I need to determine whether two or more of the collections in the structure above are present, and if so, print something.
So the idea is:
(stats.team.present? && stats.outcome.present?) || (stats.outcome.present? && stats.rank.present?) || (stats.team.present? && stats.rank.present?)
A better way is to create a method with a counter that is incremented whenever team, outcome, or rank has a count greater than 0.
Then check whether the counter is 2 or more.
Eg:
def my_count
  count = 0
  count += 1 if stats.team.count > 0
  count += 1 if stats.outcome.count > 0
  count += 1 if stats.rank.count > 0
  count > 1
end
Are these the only 2 options or is there a cleaner way?
Are these the only 2 options or is there a cleaner way?
A ton of cleaner ways, but the best ones will use many?, part of ActiveSupport.
many? is essentially like any?, but instead of asking if "one or more" meet a condition, it asks if two or more. It's by far the most semantically correct implementation of your question:
stats = { team: [], outcome: [], rank: [] }
stats.many? { |k,v| v.present? } # false
stats = { team: [1], outcome: [1], rank: [] }
stats.many? { |k,v| v.present? } # true
You can get slightly more clever with stats.values and Symbol#to_proc to shorten this further, but I don't see the need:
stats.values.many?(&:present?)
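If ActiveSupport isn't available (plain Ruby), the same two-or-more check can be written with count; a small sketch along the same lines:

stats = { team: [1], outcome: [2], rank: [] }
stats.values.count { |v| !v.empty? } >= 2  # => true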
No need to transform it into an array:
data = { stats: { team: [], outcome: [], rank: [] } }
if data[:stats].reject { |k, v| v.empty? }.size > 1
  # print something
end
You can do it as:
data = { "stats" => { "team" => [], "outcome" => [1], "rank" => [] } }
if data["stats"].values.count { |v| !v.empty? } > 1
  # code
end
First of all, is it a hash or an object? I'll assume a hash.
Given your question, some kind of map/reduce may look better:
["team", "outcome", "rank"].map{|key| stats[key].present? }.count(true) > 1
You can try map/reduce and read more here.
After the map/reduce step you can check the output to see whether any such combination is present.
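Applied to the sample data from the question (a usage sketch, assuming string keys as in the snippet above):

stats = { "team" => [], "outcome" => [1], "rank" => [] }
%w[team outcome rank].map { |key| stats[key].present? }.count(true) > 1  # => false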