Loop two arrays at once - ruby-on-rails

I have an array of orders, each with a date, like:
[
#<Order id: 1, date: '2019-10-07'>,
#<Order id: 2, date: '2019-10-08'>,
#<Order id: 3, date: '2019-10-10'>,
#<Order id: 4, date: '2019-10-10'>,
#<Order id: 5, date: '2019-10-12'>
]
I want to display it like this:
2019-10-05:
2019-10-06:
2019-10-07: id 1
2019-10-08: id 2
2019-10-09:
2019-10-10: id 3, id 4
2019-10-11:
2019-10-12: id 5
2019-10-13:
What is the best way to do this?
I can think of the following options:
date_range.each do ... and check for each date whether there are any corresponding orders.
First sort the array of orders, then do orders.each do ... and check whether any dates were skipped.
Is there some third way that walks through both arrays simultaneously? Like starting with the dates and, when a date has a corresponding order, continuing with the orders until a new date appears?
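For completeness, that two-pointer walk can be sketched like this (a minimal standalone version, assuming the orders are already sorted by date and reduced here to hypothetical [id, iso_date] pairs):

```ruby
require 'date'

# Hypothetical minimal data: sorted orders as [id, iso_date] pairs.
orders = [[1, '2019-10-07'], [2, '2019-10-08'], [3, '2019-10-10'],
          [4, '2019-10-10'], [5, '2019-10-12']]
date_range = Date.new(2019, 10, 5)..Date.new(2019, 10, 13)

i = 0
lines = date_range.map do |date|
  ids = []
  # consume orders while they match the current date
  while i < orders.size && orders[i][1] == date.iso8601
    ids << "id #{orders[i][0]}"
    i += 1
  end
  "#{date.iso8601}:#{ids.empty? ? '' : ' ' + ids.join(', ')}"
end
puts lines
```

Each order is visited exactly once, so this is linear in the range size plus the order count, but in practice the group_by approaches below are easier to read.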

Similar to what Michael Kohl and arieljuod describe in their answers: first group your orders by date, then loop through the date range and grab the groups that are relevant.
# mock
require 'date'
require 'ostruct'
orders = [{id: 1, date: '2019-10-07'}, {id: 2, date: '2019-10-08'}, {id: 3, date: '2019-10-10'}, {id: 4, date: '2019-10-10'}, {id: 5, date: '2019-10-12'}]
orders.map!(&OpenStruct.method(:new))
# solution
orders = orders.group_by(&:date)
orders.default = []
date_range = Date.new(2019, 10, 5)..Date.new(2019, 10, 13)
date_range.map(&:iso8601).each do |date|
ids = orders[date].map { |order| "id: #{order.id}" }.join(', ')
puts "#{date}: #{ids}"
end
# 2019-10-05:
# 2019-10-06:
# 2019-10-07: id: 1
# 2019-10-08: id: 2
# 2019-10-09:
# 2019-10-10: id: 3, id: 4
# 2019-10-11:
# 2019-10-12: id: 5
# 2019-10-13:
#=> ["2019-10-05", "2019-10-06", "2019-10-07", "2019-10-08", "2019-10-09", "2019-10-10", "2019-10-11", "2019-10-12", "2019-10-13"]

I'd start with something like this:
Group the array of orders by date: lookup = orders.group_by(&:date)
Iterate over your date range, use date as key into lookup, so at least you don't need to traverse the orders array repeatedly.
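A minimal sketch of that idea (hypothetical data, with dates kept as ISO strings so they can serve directly as hash keys):

```ruby
require 'date'

# Hypothetical orders; dates are ISO strings so they can act as lookup keys.
orders = [{ id: 1, date: '2019-10-07' }, { id: 2, date: '2019-10-08' }]
lookup = orders.group_by { |o| o[:date] }

lines = (Date.new(2019, 10, 5)..Date.new(2019, 10, 9)).map do |date|
  matches = lookup.fetch(date.iso8601, [])  # O(1) lookup instead of rescanning orders
  "#{date.iso8601}: #{matches.map { |o| "id #{o[:id]}" }.join(', ')}"
end
puts lines
```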

I would do a mix of both:
# rearrange your array of hashes into a hash with [date, ids] pairs
orders_by_date = {}
orders.each do |order|
orders_by_date[order[:date]] ||= []
orders_by_date[order[:date]] << order[:id]
end
# iterate over the range and check if the previous hash has the key
date_range.each do |date|
date_s = date.strftime('%Y-%m-%d')
date_ids = orders_by_date.fetch(date_s, []).map { |x| "id: #{x}" }.join(', ')
puts "#{date_s}: #{date_ids}"
end

Try group_by. You can find the documentation at https://apidock.com/ruby/Enumerable/group_by
grouped_orders = orders.group_by { |order| order[:date] }
(start_date..end_date).each do |order_date|
puts order_date
grouped_orders.fetch(order_date, []).each { |order| puts order[:id] }
end
Passing a default to fetch avoids a KeyError on dates that have no orders.

Data
We are given an array of instances of the class Order:
require 'date'
class Order
attr_reader :id, :date
def initialize(id,date)
@id = id
@date = date
end
end
arr = ['2019-10-07', '2019-10-08', '2019-10-10', '2019-10-10', '2019-10-12'].
map.with_index(1) { |s,i| Order.new(i, Date.iso8601(s)) }
#=> [#<Order:0x00005a49d68ad8b8 @id=1,
# @date=#<Date: 2019-10-07 ((2458764j,0s,0n),+0s,2299161j)>>,
# #<Order:0x00005a49d68ad6d8 @id=2,
# @date=#<Date: 2019-10-08 ((2458765j,0s,0n),+0s,2299161j)>>,
# #<Order:0x00005a49d68ad3b8 @id=3,
# @date=#<Date: 2019-10-10 ((2458767j,0s,0n),+0s,2299161j)>>,
# #<Order:0x00005a49d68ad138 @id=4,
# @date=#<Date: 2019-10-10 ((2458767j,0s,0n),+0s,2299161j)>>,
# #<Order:0x00005a49d68aceb8 @id=5,
# @date=#<Date: 2019-10-12 ((2458769j,0s,0n),+0s,2299161j)>>]
and start and end dates:
start_date = '2019-10-05'
end_date = '2019-10-13'
Assumption
I assume that:
Date.iso8601(start_date) <= arr.first.date &&
arr.first.date <= arr.last.date &&
arr.last.date <= Date.iso8601(end_date)
#=> true
There is no need for the elements of arr to be sorted by date.
Code
h = (start_date..end_date).each_with_object({}) { |d,h| h[d] = d + ':' }
arr.each do |inst|
date = inst.date.strftime('%Y-%m-%d')
h[date] += "#{h[date][-1] == ':' ? '' : ','} id #{inst.id}"
end
h.values
#=> ["2019-10-05:",
# "2019-10-06:",
# "2019-10-07: id 1",
# "2019-10-08: id 2",
# "2019-10-09:",
# "2019-10-10: id 3, id 4",
# "2019-10-11:",
# "2019-10-12: id 5",
# "2019-10-13:"]
Explanation
The first step is to construct the hash h:
h = (start_date..end_date).each_with_object({}) { |d,h| h[d] = d + ':' }
#=> {"2019-10-05"=>"2019-10-05:", "2019-10-06"=>"2019-10-06:",
# "2019-10-07"=>"2019-10-07:", "2019-10-08"=>"2019-10-08:",
# "2019-10-09"=>"2019-10-09:", "2019-10-10"=>"2019-10-10:",
# "2019-10-11"=>"2019-10-11:", "2019-10-12"=>"2019-10-12:",
# "2019-10-13"=>"2019-10-13:"}
Now we will loop through the elements inst (instances of Order) of arr, and for each will alter the value of the key in h that equals inst.date converted to a string:
arr.each do |inst|
date = inst.date.strftime('%Y-%m-%d')
h[date] += "#{h[date][-1] == ':' ? '' : ','} id #{inst.id}"
end
Resulting in:
h #=> {"2019-10-05"=>"2019-10-05:",
# "2019-10-06"=>"2019-10-06:",
# "2019-10-07"=>"2019-10-07: id 1",
# "2019-10-08"=>"2019-10-08: id 2",
# "2019-10-09"=>"2019-10-09:",
# "2019-10-10"=>"2019-10-10: id 3, id 4",
# "2019-10-11"=>"2019-10-11:",
# "2019-10-12"=>"2019-10-12: id 5",
# "2019-10-13"=>"2019-10-13:"}
All that remains is to extract the values of the hash h:
h.values
#=> ["2019-10-05:",
# "2019-10-06:",
# "2019-10-07: id 1",
# "2019-10-08: id 2",
# "2019-10-09:",
# "2019-10-10: id 3, id 4",
# "2019-10-11:",
# "2019-10-12: id 5",
# "2019-10-13:"]

Related

How can I add into a database column array from a Ruby on Rails model?

I have the following method:
before_save :save_each_item_details
def save_each_item_details
items = itemname.length
i = 0
while i < items
items = itemname.length
if !ItemsCensu.exists?(itname: itemname[i], year: "#{date.to_s.split('-').first}")
ItemsCensu.create(itname: itemname[i], monadaM: mm[i], quntity: quantity[i], price: price[i], tax: tax[i], year: "#{date.to_s.split('-').first}", num_invoice << invoice_num)
i += 1
else
puts "test"
i += 1
end
end
end
num_invoice is the array from database. invoice_num is a number like 1239
Each time I save it, I want it to add the invoice_num into num_invoice[] without removing the old values.
For example
1st save:
invoice_num = 1234
num_invoice << invoice_num
# => [1234]
2nd save:
invoice_num = 12345
num_invoice << invoice_num
# => [1234, 12345]
Is there a way to build this into my ItemsCensu.create, something like ItemsCensu.create(num_invoice: << invoice_num)?
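There is no `create(num_invoice: << invoice_num)` syntax; `<<` needs a receiver. One common pattern (a sketch only, not tested against this schema) is to read the stored array and assign back an appended copy, e.g. `record.num_invoice = (record.num_invoice || []) + [invoice_num]` before saving. The core array logic, in plain Ruby without ActiveRecord:

```ruby
# Plain-Ruby sketch of the append logic: attribute assignment replaces the
# column value, so build the combined array first and assign the whole thing.
num_invoice = nil                              # what the column holds at first
num_invoice = (num_invoice || []) + [1234]     # 1st save
num_invoice = (num_invoice || []) + [12345]    # 2nd save
p num_invoice
```

The `(... || [])` guard covers the first save, when the column is still NULL.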

Rails group by column and select column

I have a table DinnerItem with columns id, name, project_id, client_id, item_id and item_quantity.
I want to fetch data group_by item_id column and the value should only have the item_quantity column value in the format
{ item_id1 => [ {item_quantity from row1}, {item_quantity from row2}],
item_id2 => [ {item_quantity from row3}, {item_quantity from row4} ]
}
How can I achieve it in one single query?
OfferServiceModels::DinnerItem.all.select('item_id, item_quantity').group_by(&:item_id)
But this has the format
{1=>[#<DinnerItem id: nil, item_id: 1, item_quantity: nil>, #<DinnerItem id: nil, item_id: 1, item_quantity: {"50"=>30, "100"=>10}>], 4=>[#<DinnerItem id: nil, item_id: 4, item_quantity: {"100"=>5, "1000"=>2}>]}
Something like this should do the job:
result = OfferServiceModels::DinnerItem
.pluck(:item_id, :item_quantity)
.group_by(&:shift)
.transform_values(&:flatten)
#=> {1 => [10, 20], 2 => [30, 40]}
# ^ item id ^^ ^^ item quantity
A step by step explanation:
# retrieve the item_id and item_quantity for each record
result = OfferServiceModels::DinnerItem.pluck(:item_id, :item_quantity)
#=> [[1, 10], [1, 20], [2, 30], [2, 40]]
# ^ item id ^^ item quantity
# group the records by item id, removing the item id from the array
result = result.group_by(&:shift)
#=> {1 => [[10], [20]], 2 => [[30], [40]]}
# ^ item id ^^ ^^ item quantity
# flatten the groups since we don't want double nested arrays
result = result.transform_values(&:flatten)
#=> {1 => [10, 20], 2 => [30, 40]}
# ^ item id ^^ ^^ item quantity
references:
pluck
group_by
shift
transform_values
flatten
You can keep the query and the grouping, but append as_json to the operation:
DinnerItem.select(:item_id, :item_quantity).group_by(&:item_id).as_json
# {"1"=>[{"id"=>nil, "item_id"=>1, "item_quantity"=>1}, {"id"=>nil, "item_id"=>1, "item_quantity"=>2}],
# "2"=>[{"id"=>nil, "item_id"=>2, "item_quantity"=>1}, {"id"=>nil, "item_id"=>2, "item_quantity"=>2}]}
Notice as_json will add the id of each row which will have a nil value.
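If those nil "id" keys are unwanted, they can be stripped afterwards; a plain-Ruby sketch of the cleanup (Hash#except needs Ruby 3.0+, or ActiveSupport on older Rubies):

```ruby
# Hypothetical as_json-style output with the unwanted nil "id" keys.
grouped = { "1" => [{ "id" => nil, "item_id" => 1, "item_quantity" => 1 }] }
# Drop "id" from every row hash in every group.
cleaned = grouped.transform_values { |rows| rows.map { |r| r.except("id") } }
p cleaned
```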
I don't know that this is possible without transforming the value returned from the db. If you are able to transform this, the following should work to give you the desired format:
OfferServiceModels::DinnerItem.all.select('item_id, item_quantity').group_by(&:item_id)
.transform_values { |vals| vals.map(&:item_quantity) }
# => {"1"=>[nil,{"50"=>30, "100"=>10}],"4"=>...}
# or
OfferServiceModels::DinnerItem.all.select('item_id, item_quantity').group_by(&:item_id)
.transform_values { |vals| vals.map { |val| val.slice(:item_quantity) } }
# => {"1"=>[{:item_quantity=>nil}, {:item_quantity=>{"50"=>30, "100"=>10}}], "4"=>...}
I'd argue there's nothing wrong with the output you're receiving straight from the db though. The data is there, so output the relevant field when needed: either through a transformation like above or when iterating through the data.
Hope this helps in some way, let me know :)

Dynamically create hash from array of arrays

I want to dynamically create a Hash, without overwriting keys, from an array of arrays. Each array has a string that contains the nested key that should be created. However, I am running into the issue where I am overwriting keys, and thus only the last key is there.
data = {}
values = [
["income:concessions", 0, "noi", "722300", "purpose", "refinancing"],
["fees:fee-one", "0" ,"income:gross-income", "900000", "expenses:admin", "7500"],
["fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
]
What it should look like:
{
"income" => {
"concessions" => 0,
"gross-income" => "900000"
},
"expenses" => {
"admin" => "7500",
"other" => "0"
}
"noi" => "722300",
"purpose" => "refinancing",
"fees" => {
"fee-one" => 0,
"fee-two" => 0
},
"address" => {
"zip" => "10019"
}
}
This is the code that I currently have; how can I avoid overwriting keys when I merge?
values.each do |row|
Hash[*row].each do |key, value|
keys = key.split(':')
if !data.dig(*keys)
hh = keys.reverse.inject(value) { |a, n| { n => a } }
a = data.merge!(hh)
end
end
end
The code you've provided can be modified to merge hashes on conflict instead of overwriting:
values.each do |row|
Hash[*row].each do |key, value|
keys = key.split(':')
if !data.dig(*keys)
hh = keys.reverse.inject(value) { |a, n| { n => a } }
data.merge!(hh) { |_, old, new| old.merge(new) }
end
end
end
But this code only works for two levels of nesting.
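To see the limitation, take three levels: the inner old.merge(new) still overwrites rather than recursing, so a value is silently lost.

```ruby
# The one-level block merge breaks at depth three: the inner old.merge(new)
# overwrites instead of recursing, so "c" => 1 is lost.
h1 = { "a" => { "b" => { "c" => 1 } } }
h2 = { "a" => { "b" => { "d" => 2 } } }
h1.merge!(h2) { |_key, old, new| old.merge(new) }
p h1
```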
By the way, I noted the ruby-on-rails tag on the question. There's a deep_merge method in ActiveSupport that can fix the problem:
values.each do |row|
Hash[*row].each do |key, value|
keys = key.split(':')
if !data.dig(*keys)
hh = keys.reverse.inject(value) { |a, n| { n => a } }
data.deep_merge!(hh)
end
end
end
values.flatten.each_slice(2).with_object({}) do |(f,v),h|
k,e = f.is_a?(String) ? f.split(':') : [f,nil]
h[k] = e.nil? ? v : (h[k] || {}).merge(e=>v)
end
#=> {"income"=>{"concessions"=>0, "gross-income"=>"900000"},
# "noi"=>"722300",
# "purpose"=>"refinancing",
# "fees"=>{"fee-one"=>"0", "fee-two"=>"0"},
# "expenses"=>{"admin"=>"7500", "other"=>"0"},
# "address"=>{"zip"=>"10019"}}
The steps are as follows.
values = [
["income:concessions", 0, "noi", "722300", "purpose", "refinancing"],
["fees:fee-one", "0" ,"income:gross-income", "900000", "expenses:admin", "7500"],
["fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
]
a = values.flatten
#=> ["income:concessions", 0, "noi", "722300", "purpose", "refinancing",
# "fees:fee-one", "0", "income:gross-income", "900000", "expenses:admin", "7500",
# "fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
enum1 = a.each_slice(2)
#=> #<Enumerator: ["income:concessions", 0, "noi", "722300",
# "purpose", "refinancing", "fees:fee-one", "0", "income:gross-income", "900000",
# "expenses:admin", "7500", "fees:fee-two", "0", "address:zip", "10019",
# "expenses:other","0"]:each_slice(2)>
We can see what values this enumerator will generate by converting it to an array.
enum1.to_a
#=> [["income:concessions", 0], ["noi", "722300"], ["purpose", "refinancing"],
# ["fees:fee-one", "0"], ["income:gross-income", "900000"],
# ["expenses:admin", "7500"], ["fees:fee-two", "0"],
# ["address:zip", "10019"], ["expenses:other", "0"]]
Continuing,
enum2 = enum1.with_object({})
#=> #<Enumerator: #<Enumerator:
# ["income:concessions", 0, "noi", "722300", "purpose", "refinancing",
# "fees:fee-one", "0", "income:gross-income", "900000", "expenses:admin", "7500",
# "fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
# :each_slice(2)>:with_object({})>
enum2.to_a
#=> [[["income:concessions", 0], {}], [["noi", "722300"], {}],
# [["purpose", "refinancing"], {}], [["fees:fee-one", "0"], {}],
# [["income:gross-income", "900000"], {}], [["expenses:admin", "7500"], {}],
# [["fees:fee-two", "0"], {}], [["address:zip", "10019"], {}],
# [["expenses:other", "0"], {}]]
enum2 can be thought of as a compound enumerator (though Ruby has no such concept). The hash being generated is initially empty, as shown, but will be filled in as additional elements are generated by enum2.
The first value is generated by enum2 and passed to the block, and the block values are assigned values by a process called array decomposition.
(f,v),h = enum2.next
#=> [["income:concessions", 0], {}]
f #=> "income:concessions"
v #=> 0
h #=> {}
We now perform the block calculation.
f.is_a?(String)
#=> true
k,e = f.is_a?(String) ? f.split(':') : [f,nil]
#=> ["income", "concessions"]
e.nil?
#=> false
h[k] = e.nil? ? v : (h[k] || {}).merge(e=>v)
#=> {"concessions"=>0}
h[k] equals nil if h does not have a key k. In that case (h[k] || {}) #=> {}. If h does have a key k (and h[k] is not nil), (h[k] || {}) #=> h[k].
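Concretely, with the data above:

```ruby
h = { "income" => { "concessions" => 0 } }
# key absent: h["fees"] is nil, so the fallback {} is what gets merged into
absent  = (h["fees"] || {}).merge("fee-one" => "0")
# key present: the existing sub-hash is extended
present = (h["income"] || {}).merge("gross-income" => "900000")
p absent
p present
```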
A second value is now generated by enum2 and passed to the block.
(f,v),h = enum2.next
#=> [["noi", "722300"], {"income"=>{"concessions"=>0}}]
f #=> "noi"
v #=> "722300"
h #=> {"income"=>{"concessions"=>0}}
Notice that the hash, h, has been updated. Recall it will be returned by the block after all elements of enum2 have been generated. We now perform the block calculation.
f.is_a?(String)
#=> true
k,e = f.is_a?(String) ? f.split(':') : [f,nil]
#=> ["noi"]
e #=> nil
e.nil?
#=> true
h[k] = e.nil? ? v : (h[k] || {}).merge(e=>v)
#=> "722300"
h #=> {"income"=>{"concessions"=>0}, "noi"=>"722300"}
The remaining calculations are similar.
merge overwrites a duplicate key by default.
{ "income" => { "concessions" => 0 } }.merge({ "income" => { "gross-income" => "900000" } }) completely overwrites the original value of "income". What you want is a recursive merge, where instead of just merging the top-level hash you merge the nested values when there's duplication.
merge takes a block where you can specify what to do in the event of duplication. From the documentation:
merge!(other_hash){|key, oldval, newval| block} → hsh
Adds the contents of other_hash to hsh. If no block is specified, entries with duplicate keys are overwritten with the values from other_hash, otherwise the value of each duplicate key is determined by calling the block with the key, its value in hsh and its value in other_hash
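A tiny illustration of the block form before applying it recursively:

```ruby
# merge! with a block: on a duplicate key the block decides the final value;
# here we keep the sum of both values.
a = { "x" => 1, "y" => 2 }
a.merge!("x" => 10) { |_key, old_val, new_val| old_val + new_val }
p a
```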
Using this you can define a simple recursive_merge in one line
def recursive_merge!(hash, other)
hash.merge!(other) { |_key, old_val, new_val| recursive_merge!(old_val, new_val) }
end
values.each do |row|
Hash[*row].each do |key, value|
keys = key.split(':')
if !data.dig(*keys)
hh = keys.reverse.inject(value) { |a, n| { n => a } }
a = recursive_merge!(data, hh)
end
end
end
A few more lines will give you a more robust solution, that will overwrite duplicate keys that are not hashes and even take a block just like merge
def recursive_merge!(hash, other, &block)
hash.merge!(other) do |_key, old_val, new_val|
if [old_val, new_val].all? { |v| v.is_a?(Hash) }
recursive_merge!(old_val, new_val, &block)
elsif block_given?
block.call(_key, old_val, new_val)
else
new_val
end
end
end
h1 = { a: true, b: { c: [1, 2, 3] } }
h2 = { a: false, b: { x: [3, 4, 5] } }
recursive_merge!(h1, h2) { |_k, o, _n| o } # => { a: true, b: { c: [1, 2, 3], x: [3, 4, 5] } }
Note: This method reproduces the results you would get from ActiveSupport's Hash#deep_merge if you're using Rails.
This is how I would handle this:
def new_h
Hash.new{|h,k| h[k] = new_h}
end
values.flatten.each_slice(2).each_with_object(new_h) do |(k,v),obj|
keys = k.is_a?(String) ? k.split(':') : [k]
if keys.count > 1
set_key = keys.pop
obj.merge!(keys.inject(new_h) {|memo,k1| memo[k1] = new_h})
.dig(*keys)
.merge!({set_key => v})
else
obj[k] = v
end
end
#=> {"income"=>{
"concessions"=>0,
"gross-income"=>"900000"},
"noi"=>"722300",
"purpose"=>"refinancing",
"fees"=>{
"fee-one"=>"0",
"fee-two"=>"0"},
"expenses"=>{
"admin"=>"7500",
"other"=>"0"},
"address"=>{
"zip"=>"10019"}
}
Explanation:
Define a method (new_h) for setting up a new Hash with default new_h at any level (Hash.new{|h,k| h[k] = new_h})
First flatten the Array (values.flatten)
then group each 2 elements together as pseudo key-value pairs (.each_slice(2))
then iterate over the pairs using an accumulator where each new element added defaults to a Hash (.each_with_object(new_h) do |(k,v),obj|)
split the pseudo key on a colon (keys = k.is_a?(String) ? k.split(':') : [k])
if there is a split then create the parent key(s) (obj.merge!(keys.inject(new_h) {|memo,k1| memo[k1] = new_h}))
merge the last child key equal to the value (obj.dig(*keys).merge!({set_key => v}))
otherwise set the single key equal to the value (obj[k] = v)
This has infinite depth as long as the depth chain is not broken, say [["income:concessions:other",12],["income:concessions", 0]]; in this case the latter value will take precedence. (Note: this applies to all the answers in one way or another; e.g. in the accepted answer the former wins, but a value is still lost due to the inaccurate data structure.)
repl.it Example

How can you sort an array in Ruby starting at a specific letter, say letter f?

I have a text array.
text_array = ["bob", "alice", "dave", "carol", "frank", "eve", "jordan", "isaac", "harry", "george"]
text_array = text_array.sort would give us a sorted array.
However, I want a sorted array with f as the first letter for our order, and e as the last.
So the end result should be...
text_array = ["frank", "george", "harry", "isaac", "jordan", "alice", "bob", "carol", "dave", "eve"]
What would be the best way to accomplish this?
Try this:
result = (text_array.select{ |v| v =~ /^[f-z]/ }.sort + text_array.select{ |v| v =~ /^[a-e]/ }.sort).flatten
It's not the prettiest but it will get the job done.
Edit per comment. Making a more general piece of code:
before = []
after = []
text_array.sort.each do |t|
if t > term
after << t
else
before << t
end
end
after + before
This code assumes that term is wherever you want to divide the array. If an array value equals term, it will be at the end.
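The same split can be written more compactly with Enumerable#partition (using the question's data, with term = "eve" so everything strictly after it leads, matching the loop above where ties go to the trailing group):

```ruby
# Partition the sorted array around term: elements strictly greater than
# term lead the result; ties and smaller elements trail.
text_array = ["bob", "alice", "dave", "carol", "frank", "eve",
              "jordan", "isaac", "harry", "george"]
term = "eve"
after, before = text_array.sort.partition { |t| t > term }
result = after + before
p result
```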
You can do that using a hash:
alpha = ('a'..'z').to_a
#=> ["a", "b", "c",..."x", "y", "z"]
reordered = alpha.rotate(5)
#=> ["f", "g",..."z", "a",...,"e"]
h = reordered.zip(alpha).to_h
# => {"f"=>"a", "g"=>"b",..., "z"=>"u", "a"=>"v",..., e"=>"z"}
text_array.sort_by { |w| w.gsub(/./,h) }
#=> ["frank", "george", "harry", "isaac", "jordan",
# "alice", "bob", "carol", "dave", "eve"]
A variant of this is:
a_to_z = alpha.join
#=> "abcdefghijklmnopqrstuvwxyz"
f_to_e = reordered.join
#=> "fghijklmnopqrstuvwxyzabcde"
text_array.sort_by { |w| w.tr(f_to_e, a_to_z) }
#=> ["frank", "george", "harry", "isaac", "jordan",
# "alice", "bob", "carol", "dave", "eve"]
I think the easiest would be to rotate the sorted array:
sorted = text_array.sort
sorted.rotate(offset) if offset = sorted.find_index { |e| e.start_with?('f') }
Combining Ryan K's answer and my previous answer, this is a one-liner you can use without any regex:
text_array = text_array.sort.select { |x| x[0] >= "f" } + text_array.sort.select { |x| x[0] < "f" }
If I got your question right, it looks like you want to create a sorted list with biased, predefined patterns.
That is, let's say you want to define specific patterns of text that can completely change the sorting sequence for the array elements.
Here is my proposal; you can surely get better code out of this, but my tired brain got this for now:
an_array = ["bob", "alice", "dave", "carol", "frank", "eve", "jordan", "isaac", "harry", "george"]
# Define your patterns with scores so that the sorting result can vary accordingly
# It's full fledged Regex so you can put any kind of regex you want.
patterns = {
/^f/ => 100,
/^e/ => -100,
/^g/ => 60,
/^j/ => 40
}
# Sort the array with our preferred sequence
sorted_array = an_array.sort do |left, right|
# Find score for the left string
left_score = patterns.find{ |p, s| left.match(p) }
left_score = left_score ? left_score.last : 0
# Find the score for the right string
right_score = patterns.find{ |p, s| right.match(p) }
right_score = right_score ? right_score.last : 0
# Create the comparision score to prepare the right order
# 1 means replace with right and -1 means replace with left
# and 0 means remain unchanged
score = if right_score > left_score
1
elsif left_score > right_score
-1
else
0
end
# For debugging purpose, I added few verbose data
puts "L#{left_score}, R:#{right_score}: #{left}, #{right} => #{score}"
score
end
# Original array
puts an_array.join(', ')
# Biased array
puts sorted_array.join(', ')

How do I add values in an array when there is a null entry?

I want to create a real time-series array. Currently, I am using the statistics gem to pull out values for each 'day':
define_statistic :sent_count, :count => :all, :group => 'DATE(date_sent)',
:filter_on => { :email_id => 'email_id = ?' }, :order => 'DATE(date_sent) ASC'
What this does is create an array with entries only for dates that have values, for example:
[["12-20-2010",1], ["12-24-2010",3]]
But I need it to fill in the null values, so it looks more like:
[["12-20-2010",1], ["12-21-2010",0], ["12-22-2010",0], ["12-23-2010",0], ["12-24-2010",3]]
Notice how the second example has "0" values for the days that were missing from the first array.
#!/usr/bin/ruby1.8
require 'date'
require 'pp'
def add_missing_dates(series)
series.map do |date, value|
[Date.strptime(date, '%m-%d-%Y'), value]
end.inject([]) do |series, date_and_value|
filler = if series.empty?
[]
else
((series.last[0]+ 1)..(date_and_value[0] - 1)).map do |date|
[date, 0]
end
end
series + filler + [date_and_value]
end.map do |date, value|
[date.to_s, value]
end
end
a = [["12-20-2010",1], ["12-24-2010",3]]
pp add_missing_dates(a)
# => [["2010-12-20", 1],
# => ["2010-12-21", 0],
# => ["2010-12-22", 0],
# => ["2010-12-23", 0],
# => ["2010-12-24", 3]]
I would recommend against monkey-patching the base classes to include this method: It's not all that general purpose; even if it were, it just doesn't need to be there. I'd stick it in a module that you can mix in to whatever code needs it:
module AddMissingDates
def add_missing_dates(series)
...
end
end
class MyClass
include AddMissingDates
...
end
However, if you really want to:
def Array.add_missing_dates(series)
...
end
This works:
#!/usr/bin/env ruby
require 'pp'
require 'date'
# convert the MM-DD-YYYY format date string to a Date
DATE_FORMAT = '%m-%d-%Y'
def parse_date(s)
Date.strptime(s, DATE_FORMAT)
end
dates = [["12-20-2010",1], ["12-24-2010",3]]
# build a hash of the known dates so we can skip the ones that already exist.
date_hash = Hash[*dates.map{ |i| [parse_date(i[0]), i[-1]] }.flatten]
start_date_range = parse_date(dates[0].first)
end_date_range = parse_date(dates[-1].first)
# loop over the date range...
start_date_range.upto(end_date_range) do |d|
# ...and adding entries for the missing ones.
date_hash[d] = 0 if (!date_hash.has_key?(d))
end
# convert the hash back into an array with all dates
all_dates = date_hash.keys.sort.map{ |d| [d.strftime(DATE_FORMAT), date_hash[d] ] }
pp all_dates
# >> [["12-20-2010", 1],
# >> ["12-21-2010", 0],
# >> ["12-22-2010", 0],
# >> ["12-23-2010", 0],
# >> ["12-24-2010", 3]]
Most of the code is preparing things, either to build a new array, or return the date objects back to strings.