How to transform CSV rows with the same keys into hash - ruby-on-rails

I have parsed the .csv file and received the following headers:
"Date":"Start" "Date":"Due" "Amount":"Total" "Amount":"Left"
There are values for each. I need to map them into a hash dynamically, in the following way:
{date => {start: value, due: value}, amount => {total: value, left: value}}
Could you please suggest a way? I've tried treating it as an array like:
[["Date", "Start"], ["Date", "Due"], ["Amount", "Total"], ["Amount", "Left"]]
and then
.each_slice(2)
but after that I got stuck. Thanks in advance.

You can try something like:
rows.map do |row|
  {
    date: { start: row[1], due: row[3] },
    amount: { total: row[4], left: row[6] }
  }
end

Given:
headers = [["Date", "Start"], ["Date", "Due"], ["Amount", "Total"], ["Amount", "Left"]]
values = [1, 2, 3, 4]
Hash[headers.zip(values)].each_with_object({}) do |(keys, value), memo|
  pk = keys.first.downcase.to_sym
  memo[pk] ||= {}
  memo[pk][keys.last.downcase.to_sym] = value
end
Output:
=> {:date=>{:start=>1, :due=>2}, :amount=>{:total=>3, :left=>4}}
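A sketch generalising the answer above to many CSV rows. It assumes `headers` holds the parsed [group, field] pairs and `rows` holds the raw values for each record (both stubbed here for illustration):

```ruby
# Stubbed stand-ins for the parsed CSV header pairs and the data rows.
headers = [["Date", "Start"], ["Date", "Due"], ["Amount", "Total"], ["Amount", "Left"]]
rows = [[1, 2, 3, 4], [5, 6, 7, 8]]

# Build one nested hash per row: group key => { field key => value }.
result = rows.map do |row|
  headers.zip(row).each_with_object({}) do |((group, field), value), memo|
    (memo[group.downcase.to_sym] ||= {})[field.downcase.to_sym] = value
  end
end
#=> [{:date=>{:start=>1, :due=>2}, :amount=>{:total=>3, :left=>4}},
#    {:date=>{:start=>5, :due=>6}, :amount=>{:total=>7, :left=>8}}]
```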


how to make a deep_slice in a hash on ruby

I was looking for a clean way to do this and found some workarounds, but nothing like slice (some people recommended a gem, but I don't think one is needed for this operation; please correct me if I'm wrong). I have a hash that contains a bunch of hashes, and I want a way to perform the slice operation over it and also get the key/value pairs from the nested hashes. So the question:
Is there something like deep_slice in ruby?
Example:
input: a = {b: 45, c: {d: 55, e: { f: 12}}, g: {z: 90}}, keys = [:b, :f, :z]
expected output: {:b=>45, :f=>12, :z=>90}
Thx in advance! 👍
After looking around for a while I decided to implement this myself; this is how I fixed it:
a = {b: 45, c: {d: 55, e: { f: 12}}, g: {z: 90}}
keys = [:b, :f, :z]
def custom_deep_slice(a:, keys:)
  result = a.slice(*keys)
  a.keys.each do |k|
    if a[k].class == Hash
      result.merge! custom_deep_slice(a: a[k], keys: keys)
    end
  end
  result
end
c_deep_slice = custom_deep_slice(a: a, keys: keys)
p c_deep_slice
The code above is a classic DFS, which takes advantage of the merge! provided by the hash class.
require 'set'
def recurse(h, keys)
  h.each_with_object([]) do |(k,v),arr|
    if keys.include?(k)
      arr << [k,v]
    elsif v.is_a?(Hash)
      arr.concat(recurse(v,keys))
    end
  end
end
hash = { b: 45, c: { d: 55, e: { f: 12 } }, g: { b: 21, z: 90 } }
keys = [:b, :f, :z]
arr = recurse(hash, keys.to_set)
#=> [[:b, 45], [:f, 12], [:b, 21], [:z, 90]]
Notice that hash differs slightly from the example hash given in the question. I added a second nested key :b to illustrate the problem of returning a hash rather than an array of key-value pairs. Were we to convert arr to a hash the pair [:b, 45] would be discarded:
arr.to_h
#=> {:b=>21, :f=>12, :z=>90}
If desired, however, one could write:
arr.each_with_object({}) { |(k,v),h| (h[k] ||= []) << v }
#=> {:b=>[45, 21], :f=>[12], :z=>[90]}
I converted keys from an array to a set merely to speed lookups (keys.include?(k)).
A slightly modified approach could be used if the hash contained nested arrays of hashes as well as nested hashes.
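That modification might look like this (a sketch, not from the original answer): descend into arrays with flat_map and treat anything that is neither a Hash nor an Array as a leaf:

```ruby
require 'set'

# Sketch: like recurse above, but also descends into arrays of hashes.
def recurse_with_arrays(obj, keys)
  case obj
  when Hash
    obj.each_with_object([]) do |(k, v), arr|
      if keys.include?(k)
        arr << [k, v]
      else
        arr.concat(recurse_with_arrays(v, keys))
      end
    end
  when Array
    obj.flat_map { |el| recurse_with_arrays(el, keys) }
  else
    [] # scalar leaf: nothing to collect
  end
end

hash = { b: 45, c: [{ d: 55 }, { e: { f: 12 } }], g: { z: 90 } }
recurse_with_arrays(hash, [:b, :f, :z].to_set)
#=> [[:b, 45], [:f, 12], [:z, 90]]
```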
My version
Maybe it will help:
def deep_slice( obj, *args )
  deep_arg = {}
  slice_args = []
  args.each do |arg|
    if arg.is_a? Hash
      arg.each do |hash|
        key, value = hash
        if obj[key].is_a? Hash
          deep_arg[key] = deep_slice( obj[key], *value )
        elsif obj[key].is_a? Array
          deep_arg[key] = obj[key].map{ |arr_el| deep_slice( arr_el, *value) }
        end
      end
    elsif arg.is_a? Symbol
      slice_args << arg
    end
  end
  obj.slice(*slice_args).merge(deep_arg)
end
Object to slice
obj = {
  "id": 135,
  "kind": "transfer",
  "customer": {
    "id": 1,
    "name": "Admin",
  },
  "array": [
    {
      "id": 123,
      "name": "TEST",
      "more_deep": {
        "prop": "first",
        "prop2": "second"
      }
    },
    {
      "id": 222,
      "name": "2222"
    }
  ]
}
Schema to slice
deep_slice(
  obj,
  :id,
  customer: [
    :name
  ],
  array: [
    :name,
    more_deep: [
      :prop2
    ]
  ]
)
Result
{
  :id=>135,
  :customer=>{
    :name=>"Admin"
  },
  :array=>[
    {
      :name=>"TEST",
      :more_deep=>{
        :prop2=>"second"
      }
    },
    {
      :name=>"2222"
    }
  ]
}

Rails group by column and select column

I have a table DinnerItem with columns id, name, project_id, client_id, item_id and item_quantity.
I want to fetch the data grouped by the item_id column, where each value holds only the item_quantity column values, in the format:
{ item_id1 => [ {item_quantity from row1}, {item_quantity from row2} ],
  item_id2 => [ {item_quantity from row3}, {item_quantity from row4} ] }
How can I achieve it in one single query?
OfferServiceModels::DinnerItem.all.select('item_id, item_quantity').group_by(&:item_id)
But this has the format
{1=>[#<DinnerItem id: nil, item_id: 1, item_quantity: nil>, #<DinnerItem id: nil, item_id: 1, item_quantity: {"50"=>30, "100"=>10}>], 4=>[#<DinnerItem id: nil, item_id: 4, item_quantity: {"100"=>5, "1000"=>2}>]}
Something like this should do the job:
result = OfferServiceModels::DinnerItem
  .pluck(:item_id, :item_quantity)
  .group_by(&:shift)
  .transform_values(&:flatten)
#=> {1 => [10, 20], 2 => [30, 40]}
#   keys are item ids, values are item quantities
A step by step explanation:
# retrieve the item_id and item_quantity for each record
result = OfferServiceModels::DinnerItem.pluck(:item_id, :item_quantity)
#=> [[1, 10], [1, 20], [2, 30], [2, 40]]
#   each pair is [item_id, item_quantity]
# group the records by item id, removing the item id from the array
result = result.group_by(&:shift)
#=> {1 => [[10], [20]], 2 => [[30], [40]]}
#   keys are item ids, values are the grouped item quantities
# flatten the groups since we don't want double nested arrays
result = result.transform_values(&:flatten)
#=> {1 => [10, 20], 2 => [30, 40]}
#   keys are item ids, values are item quantities
references:
pluck
group_by
shift
transform_values
flatten
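Since pluck needs a live database, here is a plain-Ruby sketch of the same chain with pluck's return value stubbed as a literal array, showing why group_by(&:shift) works:

```ruby
# Stub of what pluck(:item_id, :item_quantity) would return.
plucked = [[1, 10], [1, 20], [2, 30], [2, 40]]

result = plucked
  .group_by(&:shift)           # shift removes and returns the item_id, so it becomes the key
  .transform_values(&:flatten) # each value was [[10], [20]]; flatten to [10, 20]
#=> {1 => [10, 20], 2 => [30, 40]}
```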
You can keep the query and the grouping, but append as_json to the operation:
DinnerItem.select(:item_id, :item_quantity).group_by(&:item_id).as_json
# {"1"=>[{"id"=>nil, "item_id"=>1, "item_quantity"=>1}, {"id"=>nil, "item_id"=>1, "item_quantity"=>2}],
# "2"=>[{"id"=>nil, "item_id"=>2, "item_quantity"=>1}, {"id"=>nil, "item_id"=>2, "item_quantity"=>2}]}
Notice that as_json will add the id of each row, which will have a nil value.
I don't know that this is possible without transforming the value returned from the db. If you are able to transform this, the following should work to give you the desired format:
OfferServiceModels::DinnerItem.all.select('item_id, item_quantity').group_by(&:item_id)
  .transform_values { |vals| vals.map(&:item_quantity) }
# => {"1"=>[nil,{"50"=>30, "100"=>10}],"4"=>...}
# or
OfferServiceModels::DinnerItem.all.select('item_id, item_quantity').group_by(&:item_id)
  .transform_values { |vals| vals.map { |val| val.slice(:item_quantity) } }
# => {"1"=>[{:item_quantity=>nil},{:item_quantity=>{"50"=>30, "100"=>10}}],"4"=>...}
I'd argue there's nothing wrong with the output you're receiving straight from the db though. The data is there, so output the relevant field when needed: either through a transformation like above or when iterating through the data.
Hope this helps in some way, let me know :)

Ruby - Generate array of comma separated arrays

I'm trying to create a Google chart using GoogleVisualr.
This input works:
data_table.add_rows([
  ['04/14', 1],
  ['04/15', 2],
  ['04/16', 3],
  ['04/17', 4],
  ['04/18', 5],
  ['04/19', 1],
  ['04/20', 12],
  ['04/21', 13],
  ['04/24', 14],
  ['04/14', 15],
  ['04/24', 16],
  ['04/22', 17],
  ['04/14', 18],
  ['04/4', 19],
])
I am currently using:
Product.find(:all, :order => "created_at ASC").each do |p|
  data_table.add_rows([
    [p.created_at.strftime("%m/%d"), p.rating]
  ])
end
which returns:
01/13
2
01/20
3
02/22
2
03/14
2
03/19
2
04/14
1
04/15
2
04/17
2
05/14
2
05/14
2
05/14
2
05/14
2...
How can I format my array to match what GoogleVisualr requires:
[ [date, value], [date, value] ... ]
No need to use a loop, just use map:
rows = Product.all.order("created_at ASC").map do |p|
  [p.created_at.strftime("%m/%d"), p.rating]
end
data_table.add_rows(rows)
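The map pattern works the same on any collection, so it can be checked without a database. Here Product is stood in by a Struct and the dates are invented for illustration:

```ruby
require 'date'

# Stand-in for the ActiveRecord model (assumption for illustration).
Product = Struct.new(:created_at, :rating)
products = [
  Product.new(Date.new(2013, 4, 14), 1),
  Product.new(Date.new(2013, 4, 15), 2)
]

# One pass over the records builds the nested [date, value] array.
rows = products.map { |p| [p.created_at.strftime("%m/%d"), p.rating] }
#=> [["04/14", 1], ["04/15", 2]]
```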
Instead of Product.find(:all, :order => "created_at ASC") you can create a scope and have your controller assign @products = Product.order_by_created. Then:
@products.inject([]) { |result, p| result << [p.created_at.strftime("%m/%d"), p.rating] }
Since you're already looping over each row, you can just use DataTable#add_row.
Product.find(:all, :order => "created_at ASC").each do |p|
  data_table.add_row([p.created_at.strftime("%m/%d"), p.rating])
end
Make it into a string and interpolate your values within it.
puts "data_table.add_rows(["
Product.find(:all, :order => "created_at ASC").each do |p|
  puts "['#{p.created_at.strftime("%m/%d")}', #{p.rating}],"
end
puts "])"
Try something like this
pry(main)> result = []
=> []
pry(main)> Project.find(:all, :order => "created_at ASC").each do |p|
pry(main)* result << [p.created_at.strftime("%m/%d"), p.id]
pry(main)* end
pry(main)> result
=> [["02/05", 1],
["02/14", 6],
["02/15", 7],
["02/18", 8]]
Probably not the most efficient way, but premature optimisation is the root of all evil:
require 'csv'

@products = Product.all(:order => 'created_at ASC')
@csv = CSV.generate do |csv|
  csv << ["Secret", "Timestamp"]
  @products.each { |secret|
    csv << ["#{secret.story}", "#{secret.updated_at.strftime('%s')}"]
  }
end
@csv
If you look at the GoogleVisualr API (https://github.com/winston/google_visualr) on Winston's GitHub page, you will see that the add_rows method requires a nested array, where each element of the array is another array of length 2. The first element e[0] is the date and the second element e[1] is the data value for that date. Pass your data in this format, and it should work correctly.
It looks like your code should already be doing that, as far as we can tell from here without actually being on your machine. However, your code is calling the add_rows method for each iteration in your each method, and supplying the data_table.add_rows method with a nested array that only has one array inside of it.
So instead of looking like this:
[ ['04/14', 1],
['04/15', 2],
['04/16', 3],
['04/17', 4] ]
and calling add_rows just one time like you would normally do, you are calling add_rows over and over again like this:
add_rows ( [ ['04/14', 1] ] )
add_rows ( [ ['04/15', 2] ] )
add_rows ( [ ['04/16', 3] ] )
add_rows ( [ ['04/17', 4] ] )
once for each data point.
What you should do is use your each iterator to put each date and its corresponding data value into an array, then call add_rows with that array as the parameter. Something like this:
my_array = []
Product.find(:all, :order => "created_at ASC").each do |p|
  my_array << [p.created_at.strftime("%m/%d"), p.rating]
end
data_table.add_rows(my_array)
I hope this helps.

How to group by two conditions in rails 3 and loop through them

OK, so I have a Sale model:
recent_sales = Sale.recent
=> [#<Sale id: 7788, contact_id: 9988, purchasing_contact_id: 876, event_id: 988, #<BigDecimal:7fdb4ac06fe8,'0.0',9(18)>, fulfilled_at: nil, skip_print: nil, convention_id: 6, refund_fee: #<BigDecimal:7fdb4ac06de0,'0.0',9(18)>, processing: false>, #<Sale id: 886166, , contact_id: 7775,
recent_sales.count
=> 32
I know I can do this:
grouped_sales = recent_sales.group_by(&:contact_id).map {|k,v| [k, v.length]}
=> [[9988, 10], [7775, 22]]
But what I really need is to group not just on contact_id but also on event_id, so that the final result looks like this:
=> [[9988, 988, 5], [9988, 977, 5], [7775, 988, 2], [7775, 977, 20]]
so I have the event_id and the grouping splits them up correctly. Any ideas on how to do this?
I tried
recent_sales.group('contact_id, event_id').map {|k,v| [k, k.event, v.length]}
but no go
grouped_sales = recent_sales.group_by { |s| [s.contact_id, s.event_id] }
.map { |k,v| [k.first, k.last, v.length] }
Simply, try
group(['contact_id','event_id'])
It worked for me, so I posted it as an answer to help others as well.
Ask the database to do the grouping
grouped_sales = recent_sales.group([:contact_id, :event_id]).count
The result is a hash: each key is an array of the contact and event ids, and each value is the count.
So if you want arrays of three:
grouped_sales = recent_sales.group([:contact_id, :event_id]).count.map{ |k,v| k << v }
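The Ruby-side two-key grouping from the accepted answer can be exercised without a database; here Sale is stood in by a Struct with invented ids, just for illustration:

```ruby
# Stand-in for the ActiveRecord model (assumption for illustration).
Sale = Struct.new(:contact_id, :event_id)
recent_sales = [Sale.new(9988, 988), Sale.new(9988, 988), Sale.new(7775, 977)]

# Group on the [contact_id, event_id] pair, then flatten to triples.
grouped_sales = recent_sales
  .group_by { |s| [s.contact_id, s.event_id] }
  .map { |(contact_id, event_id), sales| [contact_id, event_id, sales.length] }
#=> [[9988, 988, 2], [7775, 977, 1]]
```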

How to rotate by 90° an Array with ActiveRecord objects

I have got
@my_objects = [ #<MyObject id: 1, title: "Blah1">,
                #<MyObject id: 2, title: "Blah2">,
                #<MyObject id: 3, title: "Blah3">,
                #<MyObject id: 4, title: "Blah4"> ]
I need to turn it into:
@my_objects = { :id => [ 1, 2, 3, 4 ],
                :title => [ "Blah1" ... ] }
Is there a built-in method or some standard approach?
I can only imagine this:
@my_objects.inject({}){ |h, c| c.attributes.each{ |k,v| h[k] ||= []; h[k] << v }; h }
This question was born while I was thinking on this particular question
First, use Enumerable#map (something like @my_objects.map { |e| [e.id, e.title] }) to get the ActiveRecord array into a simplified pure-Ruby object that looks like this:
a = [[1, "Blah1"], [2, "Blah2"], [3, "Blah3"], [4, "Blah4"]]
Then:
a.transpose.zip([:id, :title]).inject({}) { |m, (v,k)| m[k] = v; m }
Alternate solution: It might be less tricky and easier to read if instead you just did something prosaic like:
i, t = a.transpose
{ :id => i, :title => t }
Either way you get:
=> {:title=>["Blah1", "Blah2", "Blah3", "Blah4"], :id=>[1, 2, 3, 4]}
Update: Tokland has a refinement that's worth citing:
Hash[[:id, :title].zip(a.transpose)]
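Both pivots are pure Ruby, so they can be checked side by side:

```ruby
# The simplified array from the answer above.
a = [[1, "Blah1"], [2, "Blah2"], [3, "Blah3"], [4, "Blah4"]]

# inject-based pivot and Tokland's Hash[] refinement.
h1 = a.transpose.zip([:id, :title]).inject({}) { |m, (v, k)| m[k] = v; m }
h2 = Hash[[:id, :title].zip(a.transpose)]

h1 == h2
#=> true
```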
You're on the right track there, there's no custom method for this sort of pivot, and it should work, but remember that ActiveRecord attribute keys are strings:
@my_objects.inject({ }) { |h, c| c.attributes.each { |k,v| (h[k.to_sym] ||= [ ]) << v }; h }
You can use the (x ||= [ ]) << y pattern to simplify that a bit if you're not too concerned with it being super readable to a novice.
Functional approach (no eachs!):
pairs = @my_objects.map { |obj| obj.attributes.to_a }.flatten(1)
Hash[pairs.group_by(&:first).map { |k, vs| [k, vs.map(&:second)] }]
#=> {:title=>["Blah1", "Blah2", "Blah3", "Blah4"], :id=>[1, 2, 3, 4]}
As usual, Facets allows to write nicer code; in this case Enumerable#map_by would avoid using the ugly and convoluted pattern group_by+map+map:
@my_objects.map { |obj| obj.attributes.to_a }.flatten(1).map_by { |k, v| [k, v] }
#=> {:title=>["Blah1", "Blah2", "Blah3", "Blah4"], :id=>[1, 2, 3, 4]}
