I have a query:
Model.group(:p_id).pluck("AVG(desired)")
=> [0.77666666666666667e1, 0.431666666666666667e2, ...]
but when I run the same SQL directly:
SELECT AVG(desired) AS desired
FROM model
GROUP BY p_id
I got
------------------
| desired          |
|------------------|
| 7.76666666666667 |
| 43.1666666666667 |
| ...              |
------------------
What is the reason for this? Sure, I could multiply, but I bet there should be an explanation.
I found that
Model.group(:p_id).pluck("AVG(desired)").map{|a| a.to_f}
=> [7.76666666666667,43.1666666666667, ...]
Now I'm struggling with another task: I need more attributes in the pluck, so my query is:
Model.group(:p_id).pluck("p_id, AVG(desired)")
How do I get the correct AVG value in this case?
0.77666666666666667e1 is (almost) 7.76666666666667; they're the same number in two different representations, with slightly different precision. If you dump the first one into irb, you'll see:
> 0.77666666666666667e1
=> 7.766666666666667
When you perform an AVG in the database, the result has type numeric, which ActiveRecord represents using Ruby's BigDecimal. The BigDecimal values are being displayed in scientific notation, but that shouldn't make any difference when you format your data for display.
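A quick irb sketch of the difference between the representation and the value (the "%.2f" formatting is just one option):
require "bigdecimal"
avg = BigDecimal("0.77666666666666667e1")
avg.to_s     # => "0.77666666666666667e1" (the scientific notation you're seeing)
avg.to_f     # => 7.766666666666667
"%.2f" % avg # => "7.77"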
In any case, pluck isn't the right tool for this job; you want to use average:
Model.group(:p_id).average(:desired)
That will give you a Hash which maps p_id to the averages. You'll still get the averages as BigDecimals, but that really shouldn't be a problem.
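For example (the p_id keys are assumed; the values mirror the output from the question):
Model.group(:p_id).average(:desired)
# => {1 => 0.77666666666666667e1, 2 => 0.431666666666666667e2, ...}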
Finally, I've found a solution:
Model.group(:p_id).pluck("p_id, AVG(Round(desired))")
=> [[1,7.76666666666667],[2,43.1666666666667], ...]
Related
I've never come across this before. I'm working with a table attribute whose value is a string, not a float/int.
Model.first.amount => "58.00"
I need to sum up all the amounts. What I'm used to, with amount being a float, would be:
Model.all.sum(&:amount) => # total value
Took a wild guess with:
Model.all.sum(&:amount.to_i) # undefined method `to_i' for :amount:Symbol
Is there a clean way to sum up the amount? Or convert the database to float?
Processing database data in Ruby is memory-inefficient.
First shot:
Model
  .pluck(:amount) # fires the SQL and returns an Array of strings
  .sum(&:to_f)    # converts each to a Float and sums; operates on the resulting Array, not on AR
But the most effective way to process database data is SQL, of course:
Model.sum("CAST(COALESCE(amount, '0') AS DECIMAL)")
COALESCE replaces NULL values with '0', and SUM adds up all the values cast to DECIMAL.
In pure Ruby you can use the inject method (note to_f rather than to_i, so amounts like "58.75" don't lose their cents):
Model.all.inject(0) { |sum, object| sum + object.amount.to_f }
I don't have commenting permissions, but this should work in Ruby (to_f has to be called on the amount, not on the model object):
Model.all.map { |m| m.amount.to_f }.reduce(0, :+)
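As for the "convert the database" part of the question, a hedged migration sketch (the models table name and PostgreSQL are assumptions; PostgreSQL needs the explicit USING cast, while MySQL's change_column converts the strings implicitly):
class ChangeAmountToDecimal < ActiveRecord::Migration
  def up
    # explicit cast from the string column to decimal (PostgreSQL syntax)
    execute "ALTER TABLE models ALTER COLUMN amount TYPE decimal(10, 2) USING amount::decimal"
  end

  def down
    change_column :models, :amount, :string
  end
end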
I have a Sumo Logic query where I'm taking a numeric field and summing it, but that field's value is in milliseconds, so I want to divide the field by 1000 to get the number in seconds.
parse "DownloadDuration=*," as DownloadTime | sum(downloadtime / 1000) as TotalDownloadTime
but Sumo Logic gives me an error: Parse error: ')' expected but '/' found. when I try to do this (even though their help docs seem to suggest this is totally legit).
I had to add another pipeline expression to alter the field's value first.
parse "DownloadDuration=*," as DownloadTime |
(downloadtime / 1000) as DownloadTime |
sum(downloadtime) as TotalDownloadTime
Works perfectly!
I am new to Splunk and facing an issue comparing values in two columns of two different queries.
Query 1
index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" A_to="**" A_from="**" | transaction call_id keepevicted=true | search "xyz event:" | table _time, call_id, A_from, A_to | rename call_id as Call_id, A_from as From, A_to as To
Query 2
index="abc_ndx" source="*/ jkdhgsdjk.log" call_id="**" B_to="**" B_from="**" | transaction call_id keepevicted=true | search " xyz event:"| table _time, call_id, B_from, B_to | rename call_id as Call_id, B_from as From, B_to as To
These are my two different queries. I want to compare each value in the A_from column with each value in the B_from column, and if a value matches, display those matching values of A_from.
Is it possible?
I have run the two queries separately, exported the results of each to CSV, and used the VLOOKUP function. But the problem is that there is a limit of at most 10000 rows per export, so I miss out on lots of data, as my search returns more than 10000 records.
Any help?
I haven't got any data to test this on at the moment; however, the following should point you in the right direction.
When you have the table for the first query sorted out, you should 'pipe' the search string to an appendcols command with your second search string. This command will allow you to run a subsearch and "import" its columns into your base search.
Once you have the two columns in the same table, you can use the eval command to create a new field which compares the two values and assigns a value as you desire.
Hope this helps.
http://docs.splunk.com/Documentation/Splunk/5.0.2/SearchReference/Appendcols
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Eval
I'm not sure why there is a need to keep this as two separate queries. Everything is coming from the same sourcetype, and is using almost identical data. So I would do something like the following:
index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" (A_to="**" A_from="**") OR (B_to="**" B_from="**")
| transaction call_id keepevicted=true
| search "xyz event:"
| eval to=if(A_from == B_from, A_from, "no_match")
| table _time, call_id, to
This grabs all events from your specified sourcetype and index which have a call_id and either A_to and A_from or B_to and B_from. Then it transactions all of that and lets you filter based on the "xyz event:" (whatever that is).
Then it creates a new field called 'to' which shows A_from when A_from == B_from; otherwise it shows "no_match" (a placeholder, since you didn't specify what should be done when they don't match).
There is also a way to potentially tackle this without using transactions, although without more details on the underlying data I can't say for sure. The basic idea is that if you have a common field (call_id in this case), you can just use stats to collect the values associated with that field instead of an expensive transaction command.
For example:
index="abc_ndx" index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**"
| stats last(_time) as earliest_time first(A_to) as A_to first(A_from) as A_from first(B_to) as B_to first(B_from) as B_from by call_id
Using first() or last() doesn't actually matter if there is only one value per call_id (you can even use min(), max(), or avg() and you'll get the same thing). Perhaps this will help you get to the output you need more easily.
My database has "spine numbers" and I want to sort by them.
@films = Film.all.sort { |a, b| a.id <=> b.id }
That is in my one controller, but the spine numbers go 1, 2, 3 ... 100, 101, etc. instead of 001, 002, 003..., so the sorting is out of whack. There's probably an easy way to handle this, something like:
@films = Film.all.sort { |a, b| a.id.abs <=> b.id.abs }
But I don't know it. Thanks for the help.
PS also, why has the rails wiki been down so often recently?
You should use the Film.order("id DESC") (or "ASC") method, which applies a SQL ORDER BY clause to the query.
By default, records are sorted by the primary key column, at least in MySQL.
If this hasn't answered your question, please provide some more information on your database.
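For example, the controller line could become (a sketch, still ordering by id):
@films = Film.order("id ASC")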
Edited
Yes, I do see. The only thing that comes to mind is that you're using some kind of string datatype for the spine numbers column. In that case, this kind of sorting makes sense, because string values are compared alphabetically, character by character, like this:
1
054
25
143
which will return
054
1
143
25
while numeric values, such as integer or float, are compared by their actual value, not by separate bytes.
So you should create a migration to change the datatype of your spine number column to integer.
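A minimal sketch of such a migration (the films table and spine_number column names are assumptions; MySQL converts the strings implicitly, other databases may need an explicit cast):
class ChangeSpineNumberToInteger < ActiveRecord::Migration
  def up
    change_column :films, :spine_number, :integer
  end

  def down
    change_column :films, :spine_number, :string
  end
end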
I have a standard master-detail relationship between two models in a RoR application. The detail records contain four boolean fields indicating presence / absence of something.
When I display the detail records I want to add a summary indicating the number of records which have their boolean value set to True for each of the four boolean fields.
For example:
Date       | Boolean Field 1 | Boolean Field 2 | etc
2009/08/29 | T               | T               |
2009/08/30 | T               | F               |
2009/08/31 | F               | T               |
2009/09/01 | F               | T               |
Total: 4 2 3
I tried using something like @entries.count(["Boolean Field 1", true])
The way I see it, there are two ways to calculate these values: at the model level by executing an SQL query (ugly), or at the view level by using a counter (ugly again). Is there some other way to achieve what I want?
Thank you for your time,
Angelos Arampatzis
Maybe:
@entries.select { |r| r.bool_field1 }.size
You can either do:
@entries.count(:conditions => { :boolean_field_1 => true })
You can pretty this up with a named scope:
named_scope :booleans, :conditions => { :boolean_field_1 => true }
and then
@entries.booleans.count
Or if you already have ALL the items in an array (rather than a select few) and do not want to hit the database…
Rails provides a ? method for all columns. So while you have:
@entry.boolean_field
You also have:
@entry.boolean_field?
So you can do this:
@entries.select(&:boolean_field?).length
SQL isn't as ugly as Rails makes it out to be, and it is rather efficient; just make it a named_scope and your controller/view will still look pretty.
Because you have all your entries as Rails objects, you can use the shortest form:
#entries.count(&:boolean_field1?)
It's using Enumerable#count.
Keep in mind, though, that it counts using Ruby (as opposed to SQL). If you ever want to count without reading all the records from the DB, you will need to use something different for efficiency.
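For example, a sketch of pushing the count into the database (the Entry model and column names are assumptions, using the era-appropriate finder syntax):
Entry.count(:conditions => { :boolean_field_1 => true })
# or several totals in a single query:
Entry.connection.select_one(
  "SELECT SUM(CASE WHEN boolean_field_1 THEN 1 ELSE 0 END) AS total_1,
          SUM(CASE WHEN boolean_field_2 THEN 1 ELSE 0 END) AS total_2
     FROM entries"
)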