Dynamically create hash from array of arrays - ruby-on-rails

I want to dynamically create a Hash, without overwriting keys, from an array of arrays. Each array contains strings whose colon-delimited segments describe the nested keys that should be created. However, I am running into an issue where I am overwriting keys, so only the last key survives.
data = {}
values = [
["income:concessions", 0, "noi", "722300", "purpose", "refinancing"],
["fees:fee-one", "0" ,"income:gross-income", "900000", "expenses:admin", "7500"],
["fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
]
What it should look like:
{
"income" => {
"concessions" => 0,
"gross-income" => "900000"
},
"expenses" => {
"admin" => "7500",
"other" => "0"
},
"noi" => "722300",
"purpose" => "refinancing",
"fees" => {
"fee-one" => 0,
"fee-two" => 0
},
"address" => {
"zip" => "10019"
}
}
This is the code I currently have. How can I avoid overwriting keys when I merge?
values.each do |row|
Hash[*row].each do |key, value|
keys = key.split(':')
if !data.dig(*keys)
hh = keys.reverse.inject(value) { |a, n| { n => a } }
a = data.merge!(hh)
end
end
end

The code you've provided can be modified to merge hashes on conflict instead of overwriting:
values.each do |row|
Hash[*row].each do |key, value|
keys = key.split(':')
if !data.dig(*keys)
hh = keys.reverse.inject(value) { |a, n| { n => a } }
data.merge!(hh) { |_, old, new| old.merge(new) }
end
end
end
But this code only works for two levels of nesting.
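To see why, here is a minimal sketch: the block handles the conflict at the first level, but the old.merge(new) inside it is a plain merge, so anything nested one level deeper is still overwritten.

```ruby
# Minimal sketch of the limitation: the merge block is not recursive.
a = { "x" => { "y" => { "p" => 1 } } }
b = { "x" => { "y" => { "q" => 2 } } }
merged = a.merge(b) { |_key, old, new| old.merge(new) }
merged #=> {"x"=>{"y"=>{"q"=>2}}} -- "p" => 1 is lost
```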
By the way, I noticed the ruby-on-rails tag on the question. ActiveSupport provides a deep_merge! method that can fix the problem:
values.each do |row|
Hash[*row].each do |key, value|
keys = key.split(':')
if !data.dig(*keys)
hh = keys.reverse.inject(value) { |a, n| { n => a } }
data.deep_merge!(hh)
end
end
end
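For reference, deep_merge! recursively merges nested hashes instead of replacing them. A minimal hand-rolled sketch of that behaviour (not ActiveSupport's actual implementation, just an illustration that runs outside Rails):

```ruby
# Sketch of what a deep merge does: recurse whenever both sides of a
# key conflict are hashes; otherwise the new value wins.
def deep_merge(a, b)
  a.merge(b) do |_key, old_val, new_val|
    old_val.is_a?(Hash) && new_val.is_a?(Hash) ? deep_merge(old_val, new_val) : new_val
  end
end

deep_merge({ "income" => { "concessions" => 0 } },
           { "income" => { "gross-income" => "900000" } })
#=> {"income"=>{"concessions"=>0, "gross-income"=>"900000"}}
```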

values.flatten.each_slice(2).with_object({}) do |(f,v),h|
k,e = f.is_a?(String) ? f.split(':') : [f,nil]
h[k] = e.nil? ? v : (h[k] || {}).merge(e=>v)
end
#=> {"income"=>{"concessions"=>0, "gross-income"=>"900000"},
# "noi"=>"722300",
# "purpose"=>"refinancing",
# "fees"=>{"fee-one"=>"0", "fee-two"=>"0"},
# "expenses"=>{"admin"=>"7500", "other"=>"0"},
# "address"=>{"zip"=>"10019"}}
The steps are as follows.
values = [
["income:concessions", 0, "noi", "722300", "purpose", "refinancing"],
["fees:fee-one", "0" ,"income:gross-income", "900000", "expenses:admin", "7500"],
["fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
]
a = values.flatten
#=> ["income:concessions", 0, "noi", "722300", "purpose", "refinancing",
# "fees:fee-one", "0", "income:gross-income", "900000", "expenses:admin", "7500",
# "fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
enum1 = a.each_slice(2)
#=> #<Enumerator: ["income:concessions", 0, "noi", "722300",
# "purpose", "refinancing", "fees:fee-one", "0", "income:gross-income", "900000",
# "expenses:admin", "7500", "fees:fee-two", "0", "address:zip", "10019",
# "expenses:other","0"]:each_slice(2)>
We can see what values this enumerator will generate by converting it to an array.
enum1.to_a
#=> [["income:concessions", 0], ["noi", "722300"], ["purpose", "refinancing"],
# ["fees:fee-one", "0"], ["income:gross-income", "900000"],
# ["expenses:admin", "7500"], ["fees:fee-two", "0"],
# ["address:zip", "10019"], ["expenses:other", "0"]]
Continuing,
enum2 = enum1.with_object({})
#=> #<Enumerator: #<Enumerator:
# ["income:concessions", 0, "noi", "722300", "purpose", "refinancing",
# "fees:fee-one", "0", "income:gross-income", "900000", "expenses:admin", "7500",
# "fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
# :each_slice(2)>:with_object({})>
enum2.to_a
#=> [[["income:concessions", 0], {}], [["noi", "722300"], {}],
# [["purpose", "refinancing"], {}], [["fees:fee-one", "0"], {}],
# [["income:gross-income", "900000"], {}], [["expenses:admin", "7500"], {}],
# [["fees:fee-two", "0"], {}], [["address:zip", "10019"], {}],
# [["expenses:other", "0"], {}]]
enum2 can be thought of as a compound enumerator (though Ruby has no such concept). The hash being generated is initially empty, as shown, but will be filled in as additional elements are generated by enum2.
The first value is generated by enum2 and passed to the block, and the block variables are assigned values by a process called array decomposition.
(f,v),h = enum2.next
#=> [["income:concessions", 0], {}]
f #=> "income:concessions"
v #=> 0
h #=> {}
We now perform the block calculation.
f.is_a?(String)
#=> true
k,e = f.is_a?(String) ? f.split(':') : [f,nil]
#=> ["income", "concessions"]
e.nil?
#=> false
h[k] = e.nil? ? v : (h[k] || {}).merge(e=>v)
#=> {"concessions"=>0}
h[k] equals nil if h does not have a key k; in that case, (h[k] || {}) #=> {}. If h does have a key k (and h[k] is not nil), (h[k] || {}) #=> h[k].
A second value is now generated by enum2 and passed to the block.
(f,v),h = enum2.next
#=> [["noi", "722300"], {"income"=>{"concessions"=>0}}]
f #=> "noi"
v #=> "722300"
h #=> {"income"=>{"concessions"=>0}}
Notice that the hash, h, has been updated. Recall it will be returned by the block after all elements of enum2 have been generated. We now perform the block calculation.
f.is_a?(String)
#=> true
k,e = f.is_a?(String) ? f.split(':') : [f,nil]
#=> ["noi"]
e #=> nil
e.nil?
#=> true
h[k] = e.nil? ? v : (h[k] || {}).merge(e=>v)
#=> "722300"
h #=> {"income"=>{"concessions"=>0}, "noi"=>"722300"}
The remaining calculations are similar.

merge overwrites a duplicate key by default.
{ "income" => { "concessions" => 0 } }.merge({ "income" => { "gross-income" => "900000" } }) completely overwrites the original value of "income". What you want is a recursive merge, where instead of just merging the top-level hash you merge the nested values when there's duplication.
merge takes a block where you can specify what to do in the event of duplication. From the documentation:
merge!(other_hash){|key, oldval, newval| block} → hsh
Adds the contents of other_hash to hsh. If no block is specified, entries with duplicate keys are overwritten with the values from other_hash; otherwise the value of each duplicate key is determined by calling the block with the key, its value in hsh and its value in other_hash.
Using this you can define a simple recursive_merge! in one line:
def recursive_merge!(hash, other)
hash.merge!(other) { |_key, old_val, new_val| recursive_merge!(old_val, new_val) }
end
values.each do |row|
Hash[*row].each do |key, value|
keys = key.split(':')
if !data.dig(*keys)
hh = keys.reverse.inject(value) { |a, n| { n => a } }
a = recursive_merge!(data, hh)
end
end
end
A few more lines will give you a more robust solution that overwrites duplicate keys that are not hashes, and even takes a block, just like merge:
def recursive_merge!(hash, other, &block)
hash.merge!(other) do |_key, old_val, new_val|
if [old_val, new_val].all? { |v| v.is_a?(Hash) }
recursive_merge!(old_val, new_val, &block)
elsif block_given?
block.call(_key, old_val, new_val)
else
new_val
end
end
end
h1 = { a: true, b: { c: [1, 2, 3] } }
h2 = { a: false, b: { x: [3, 4, 5] } }
recursive_merge!(h1, h2) { |_k, o, _n| o } # => { a: true, b: { c: [1, 2, 3], x: [3, 4, 5] } }
Note: This method reproduces the results you would get from ActiveSupport's Hash#deep_merge if you're using Rails.

This is how I would handle this:
def new_h
Hash.new{|h,k| h[k] = new_h}
end
values.flatten.each_slice(2).each_with_object(new_h) do |(k,v),obj|
keys = k.is_a?(String) ? k.split(':') : [k]
if keys.count > 1
set_key = keys.pop
obj.merge!(keys.inject(new_h) {|memo,k1| memo[k1] = new_h})
.dig(*keys)
.merge!({set_key => v})
else
obj[k] = v
end
end
#=> {"income"=>{
"concessions"=>0,
"gross-income"=>"900000"},
"noi"=>"722300",
"purpose"=>"refinancing",
"fees"=>{
"fee-one"=>"0",
"fee-two"=>"0"},
"expenses"=>{
"admin"=>"7500",
"other"=>"0"},
"address"=>{
"zip"=>"10019"}
}
Explanation:
Define a method (new_h) for setting up a new Hash with default new_h at any level (Hash.new{|h,k| h[k] = new_h})
First flatten the Array (values.flatten)
then group every 2 elements together as pseudo key/value pairs (.each_slice(2))
then iterate over the pairs using an accumulator where each new element added is defaulted to a Hash (.each_with_object(new_h) do |(k,v),obj|)
split the pseudo key on a colon (keys = k.is_a?(String) ? k.split(':') : [k])
if there is a split then create the parent key(s) (obj.merge!(keys.inject(new_h) {|memo,k1| memo[k1] = new_h}))
merge the last child key equal to the value (obj.dig(*keys).merge!({set_key => v}))
otherwise set the single key equal to the value (obj[k] = v)
This has infinite depth as long as the depth chain is not broken, say [["income:concessions:other", 12], ["income:concessions", 0]]; in this case the latter value will take precedence. (Note: this applies to all the answers in one way or another, e.g. with the accepted answer the former wins, but a value is still lost due to the inconsistent data structure.)
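A quick sketch of that caveat, reusing the new_h helper from the answer (duplicated here so the snippet runs standalone):

```ruby
# Demonstrates the broken-chain caveat: when a deeper path and a shallower
# path collide, the later pair wins and the deeper value is lost.
def new_h
  Hash.new { |h, k| h[k] = new_h }
end

values = [["income:concessions:other", 12], ["income:concessions", 0]]
result = values.flatten.each_slice(2).each_with_object(new_h) do |(k, v), obj|
  keys = k.is_a?(String) ? k.split(':') : [k]
  if keys.count > 1
    set_key = keys.pop
    obj.merge!(keys.inject(new_h) { |memo, k1| memo[k1] = new_h })
       .dig(*keys)
       .merge!(set_key => v)
  else
    obj[k] = v
  end
end
result #=> {"income"=>{"concessions"=>0}} -- the 12 stored under "other" is gone
```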

Related

merging two arrays of hashes wisely

I am trying to combine two arrays of hashes arr1 and arr2:
arr1 = [{"id"=>1, "a"=>1, "c"=>2}, {"id"=>2, "a"=>1}]
arr2 = [{"id"=>1, "a"=>10, "b"=>20}, {"id"=>3, "b"=>2}]
And I want the result to include all elements in both arrays, but the ones that have the same value for the "id" key, should be merged so that if a key exists in both hashes, it should be selected from arr2, otherwise, it just picks the value from any hash that the key exists in. So the combination of the example above would be:
combined = [
{"id"=>1, "a"=>10, "b"=>20, "c"=>2}, # "id"=>1 exists in both, so they are merged
{"id"=>2, "a"=>1},
{"id"=>3, "b"=>2}
]
The code below works, but I am new to Ruby and I am sure there is a better way to do this. Can you suggest a more idiomatic way?
combined = []
# merge items that exist in both and add to combined
arr1.each do |a1|
temp = arr2.select {|a2| a2["id"] == a1["id"]}[0]
if temp.present?
combined << temp.reverse_merge(a1)
end
end
# Add items that exist in arr1 but not in arr2
arr1.each do |a1|
if arr2.pluck("id").exclude? a1["id"]
combined << a1
end
end
# Add items that exist in arr2 but not in arr1
arr2.each do |a2|
if arr1.pluck("id").exclude? a2["id"]
combined << a2
end
end
I assume that no two elements (hashes) of arr1, g and h, have the property that g["id"] == h["id"].
In this case one could write:
(arr1 + arr2).each_with_object(Hash.new { |h,k| h[k] = {} }) { |g,h|
h[g["id"]].update(g) }.values
#=> [{"id"=>1, "a"=>10, "c"=>2, "b"=>20}, {"id"=>2, "a"=>1},
# {"id"=>3, "b"=>2}]
Note that:
(arr1 + arr2).each_with_object(Hash.new { |h,k| h[k] = {} }) { |g,h|
h[g["id"]].update(g) }
#=> {1=>{"id"=>1, "a"=>10, "c"=>2, "b"=>20}, 2=>{"id"=>2, "a"=>1},
# 3=>{"id"=>3, "b"=>2}}
If a hash is defined:
h = Hash.new { |h,k| h[k] = {} }
then, possibly after keys have been added to h, if h does not have a key k, h[k] = {} is executed and the empty hash is returned. See the form of Hash::new that takes a block. See also Hash#update (aka Hash#merge!).
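A quick sketch of that default behaviour: the block runs only when a missing key is read, and it inserts the key as a side effect, so subsequent lookups find it.

```ruby
h = Hash.new { |hash, key| hash[key] = {} }
h.key?(:x)    #=> false
h[:x]         #=> {}   (block ran; :x is now a key)
h.key?(:x)    #=> true
h[:x][:y] = 1
h             #=> {:x=>{:y=>1}}
```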
One may alternatively write:
(arr1 + arr2).each_with_object({}) { |g,h| (h[g["id"]] ||= {}).update(g) }.values
#=> [{"id"=>1, "a"=>10, "c"=>2, "b"=>20}, {"id"=>2, "a"=>1},
# {"id"=>3, "b"=>2}]
Another way is to use Enumerable#group_by, where the grouping is on the value of the key "id":
(arr1 + arr2).group_by { |h| h["id"] }.values.map { |a| a.reduce(&:merge) }
#=> [{"id"=>1, "a"=>10, "c"=>2, "b"=>20}, {"id"=>2, "a"=>1}, {"id"=>3, "b"=>2}]

how to make a deep_slice in a hash on ruby

I was looking around for a clean way to do this and I found some workarounds but did not find anything like the slice (some people recommended to use a gem but I think is not needed for this operations, pls correct me if I am wrong), so I found myself with a hash that contains a bunch of hashes and I wanted a way to perform the Slice operation over this hash and get also the key/value pairs from nested hashes, so the question:
Is there something like deep_slice in ruby?
Example:
input: a = {b: 45, c: {d: 55, e: { f: 12}}, g: {z: 90}}, keys = [:b, :f, :z]
expected output: {:b=>45, :f=>12, :z=>90}
Thx in advance! 👍
After looking around for a while I decided to implement this myself; this is how I solved it:
a = {b: 45, c: {d: 55, e: { f: 12}}, g: {z: 90}}
keys = [:b, :f, :z]
def custom_deep_slice(a:, keys:)
result = a.slice(*keys)
a.keys.each do |k|
if a[k].class == Hash
result.merge! custom_deep_slice(a: a[k], keys: keys)
end
end
result
end
c_deep_slice = custom_deep_slice(a: a, keys: keys)
p c_deep_slice
The code above is a classic DFS, which takes advantage of the merge! method provided by the Hash class.
You can test the code above here
require 'set'
def recurse(h, keys)
h.each_with_object([]) do |(k,v),arr|
if keys.include?(k)
arr << [k,v]
elsif v.is_a?(Hash)
arr.concat(recurse(v,keys))
end
end
end
hash = { b: 45, c: { d: 55, e: { f: 12 } }, g: { b: 21, z: 90 } }
keys = [:b, :f, :z]
arr = recurse(hash, keys.to_set)
#=> [[:b, 45], [:f, 12], [:b, 21], [:z, 90]]
Notice that hash differs slightly from the example hash given in the question. I added a second nested key :b to illustrate the problem of returning a hash rather than an array of key-value pairs. Were we to convert arr to a hash the pair [:b, 45] would be discarded:
arr.to_h
#=> {:b=>21, :f=>12, :z=>90}
If desired, however, one could write:
arr.each_with_object({}) { |(k,v),h| (h[k] ||= []) << v }
#=> {:b=>[45, 21], :f=>[12], :z=>[90]}
I converted keys from an array to a set merely to speed lookups (keys.include?(k)).
A slightly modified approach could be used if the hash contained nested arrays of hashes as well as nested hashes.
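A hedged sketch of that modification (assuming the only containers are Hashes and Arrays): in addition to descending into nested hashes, descend into any hashes found inside array values. The method name recurse_with_arrays is mine, chosen to avoid clashing with the recurse above.

```ruby
require 'set'

# Like recurse above, but also walks hashes nested inside arrays.
def recurse_with_arrays(h, keys)
  h.each_with_object([]) do |(k, v), arr|
    if keys.include?(k)
      arr << [k, v]
    elsif v.is_a?(Hash)
      arr.concat(recurse_with_arrays(v, keys))
    elsif v.is_a?(Array)
      v.grep(Hash).each { |g| arr.concat(recurse_with_arrays(g, keys)) }
    end
  end
end

hash = { b: 45, c: { d: 55, e: [{ f: 12 }, 7] }, g: { z: 90 } }
recurse_with_arrays(hash, [:b, :f, :z].to_set)
#=> [[:b, 45], [:f, 12], [:z, 90]]
```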
Here is my version; maybe it will help:
def deep_slice( obj, *args )
deep_arg = {}
slice_args = []
args.each do |arg|
if arg.is_a? Hash
arg.each do |hash|
key, value = hash
if obj[key].is_a? Hash
deep_arg[key] = deep_slice( obj[key], *value )
elsif obj[key].is_a? Array
deep_arg[key] = obj[key].map{ |arr_el| deep_slice( arr_el, *value) }
end
end
elsif arg.is_a? Symbol
slice_args << arg
end
end
obj.slice(*slice_args).merge(deep_arg)
end
Object to slice
obj = {
"id": 135,
"kind": "transfer",
"customer": {
"id": 1,
"name": "Admin",
},
"array": [
{
"id": 123,
"name": "TEST",
"more_deep": {
"prop": "first",
"prop2": "second"
}
},
{
"id": 222,
"name": "2222"
}
]
}
Schema to slice
deep_slice(
obj,
:id,
customer: [
:name
],
array: [
:name,
more_deep: [
:prop2
]
]
)
Result
{
:id=>135,
:customer=>{
:name=>"Admin"
},
:array=>[
{
:name=>"TEST",
:more_deep=>{
:prop2=>"second"
}
},
{
:name=>"2222"
}
]
}

How to build a hash of directory names as keys and file names as values in Ruby?

I have directories with files and I would like to build a hash of directory names as keys and file names as values. Example:
/app/foo/create.json
/app/foo/update.json
/app/bar/create.json
/app/bar/update.json
Output:
{
"foo" => {
"create.json" => {},
"update.json" => {}
},
"bar" => {
"create.json" => {},
"update.json" => {}
}
}
Currently I'd doing this:
OUTPUT ||= {}
Dir.glob(File.join('app', '**', '*.json')) do |file|
OUTPUT[File.basename(file)] = File.read(file)
end
But it's not working as expected, I'm not sure how to get the parent directory name.
Dir.glob('*/*.json', base: 'app').each_with_object(Hash.new {|g,k| g[k]={}}) do |fname,h|
h[File.dirname(fname)].update(File.basename(fname)=>{})
end
#=> {"foo"=>{"create.json"=>{}, "update.json"=>{}},
# "bar"=>{"update.json"=>{}, "create.json"=>{}}}
@Amadan explains the use of Dir#glob, which is exactly as in his answer. I have employed the version of Hash::new that invokes a block (here {|g,k| g[k]={}}) when g[k] is executed and the hash g does not have a key k [1]. See also Hash#update (aka merge!), File::dirname and File::basename.
The steps are as follows.
a = Dir.glob('*/*.json', base: 'app')
#=> ["foo/create.json", "foo/update.json", "bar/update.json", "bar/create.json"]
enum = a.each_with_object(Hash.new {|g,k| g[k]={}})
#=> #<Enumerator: ["foo/create.json", "foo/update.json", "bar/update.json",
# "bar/create.json"]:each_with_object({})>
The first value is generated by the enumerator and passed to the block, and the block variables are assigned values by the process of array decomposition:
fname, h = enum.next
#=> ["foo/create.json", {}]
fname
#=> "foo/create.json"
h #=> {}
d = File.dirname(fname)
#=> "foo"
b = File.basename(fname)
#=> "create.json"
h[d].update(b=>{})
#=> {"create.json"=>{}}
See Enumerator#next. The next value is generated by enum and passed to the block, the block variables are assigned values and the block calculations are performed. (Notice that the hash being built, h, has been updated in the following.)
fname, h = enum.next
#=> ["foo/update.json", {"foo"=>{"create.json"=>{}}}]
fname
#=> "foo/update.json"
h #=> {"foo"=>{"create.json"=>{}}}
d = File.dirname(fname)
#=> "foo"
b = File.basename(fname)
#=> "update.json"
h[d].update(b=>{})
#=> {"create.json"=>{}, "update.json"=>{}}
Twice more.
fname, h = enum.next
#=> ["bar/update.json", {"foo"=>{"create.json"=>{}, "update.json"=>{}}}]
d = File.dirname(fname)
#=> "bar"
b = File.basename(fname)
#=> "update.json"
h[d].update(b=>{})
#=> {"update.json"=>{}}
fname, h = enum.next
#=> ["bar/create.json",
# {"foo"=>{"create.json"=>{}, "update.json"=>{}}, "bar"=>{"update.json"=>{}}}]
d = File.dirname(fname)
#=> "bar"
b = File.basename(fname)
#=> "create.json"
h[d].update(b=>{})
#=> {"update.json"=>{}, "create.json"=>{}}
h #=> {"foo"=>{"create.json"=>{}, "update.json"=>{}},
# "bar"=>{"update.json"=>{}, "create.json"=>{}}}
[1] This is equivalent to defining the hash as follows: g = {}; g.default_proc = proc {|g,k| g[k]={}}. See Hash#default_proc=.
An alternative to regexp:
output =
Dir.glob('*/*.json', base: 'app').
group_by(&File::method(:dirname)).
transform_values { |files|
files.each_with_object({}) { |file, hash|
hash[File.basename(file)] = File.read(file)
}
}
Note the base: keyword argument to Dir.glob (or Pathname.glob, for that matter), which simplifies things as we don't need to remove app; also that for the purposes of OP's question there only needs to be one directory level, so * instead of **.
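The same pipeline can be exercised without touching the filesystem by feeding it a plain array of relative paths (hypothetical names), substituting an empty hash for the File.read value:

```ruby
# File.dirname / File.basename operate on strings, so no real files needed.
paths = ["foo/create.json", "foo/update.json", "bar/create.json"]
output = paths.group_by(&File.method(:dirname))
              .transform_values { |files| files.to_h { |f| [File.basename(f), {}] } }
output #=> {"foo"=>{"create.json"=>{}, "update.json"=>{}}, "bar"=>{"create.json"=>{}}}
```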

Convert keys in inner hash to keys of outer/parent hash

I have a hash, say,
account = {
name: "XXX",
email: "xxx@yyy.com",
details: {
phone: "9999999999",
dob: "00-00-00",
address: "zzz"
}
}
Now I want to convert account to a hash like this:
account = {
name: "XXX",
email: "xxx@yyy.com",
phone: "9999999999",
dob: "00-00-00",
address: "zzz"
}
I'm a beginner and would like to know if there is any function to do it? (Other than merging the nested hash and then deleting it)
You could implement a generic flatten_hash method which works roughly like Array#flatten in that it allows you to flatten hashes of arbitrary depth.
def flatten_hash(hash, &block)
hash.dup.tap do |result|
hash.each_pair do |key, value|
next unless value.is_a?(Hash)
flattened = flatten_hash(result.delete(key), &block)
result.merge!(flattened, &block)
end
end
end
Here, we are still performing the delete / merge sequence, but it would be required in any such implementation anyway, even if hidden below further abstractions.
You can use this method as follows:
account = {
name: "XXX",
email: "xxx@yyy.com",
details: {
phone: "9999999999",
dob: "00-00-00",
address: "zzz"
}
}
flatten_hash(account)
# => {:name=>"XXX", :email=>"xxx@yyy.com", :phone=>"9999999999", :dob=>"00-00-00", :address=>"zzz"}
Note that with this method, any keys in lower-level hashes overwrite existing keys in upper-level hashes by default. You can however provide a block to resolve any merge conflicts. Please refer to the documentation of Hash#merge! to learn how to use this.
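A quick sketch of that block usage (repeating flatten_hash from above so the snippet runs standalone):

```ruby
# flatten_hash as defined in the answer above.
def flatten_hash(hash, &block)
  hash.dup.tap do |result|
    hash.each_pair do |key, value|
      next unless value.is_a?(Hash)
      flattened = flatten_hash(result.delete(key), &block)
      result.merge!(flattened, &block)
    end
  end
end

conflicted = { a: 1, nested: { a: 2 } }
flatten_hash(conflicted)                               #=> {:a=>2}  (inner wins by default)
flatten_hash(conflicted) { |_k, outer, _inner| outer } #=> {:a=>1}  (block keeps the outer value)
```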
This will do the trick:
account.map{|k,v| k==:details ? v : {k => v}}.reduce({}, :merge)
Case 1: Each value of account may be a hash whose values are not hashes
account.flat_map { |k,v| v.is_a?(Hash) ? v.to_a : [[k,v]] }.to_h
#=> {:name=>"XXX", :email=>"xxx@yyy.com", :phone=>"9999999999",
# :dob=>"00-00-00", :address=>"zzz"}
Case 2: account may have nested hashes
def doit(account)
recurse(account.to_a).to_h
end
def recurse(arr)
arr.each_with_object([]) { |(k,v),a|
a.concat(v.is_a?(Hash) ? recurse(v.to_a) : [[k,v]]) }
end
account = {
name: "XXX",
email: "xxx@yyy.com",
details: {
phone: "9999999999",
dob: { a: 1, b: { c: 2, e: { f: 3 } } },
address: "zzz"
}
}
doit account
#=> {:name=>"XXX", :email=>"xxx@yyy.com", :phone=>"9999999999", :a=>1,
# :c=>2, :f=>3, :address=>"zzz"}
Explanation for Case 1
The calculations progress as follows.
One way to think of Enumerable#flat_map, as it is used here, is that if, for some method g,
[a, b, c].map { |e| g(e) } #=> [f, g, h]
where a, b, c, f, g and h are all arrays, then
[a, b, c].flat_map { |e| g(e) } #=> [*f, *g, *h]
Let's start by creating an enumerator to pass elements to the block.
enum = account.to_enum
#=> #<Enumerator: {:name=>"XXX", :email=>"xxx@yyy.com",
# :details=>{:phone=>"9999999999", :dob=>"00-00-00",
# :address=>"zzz"}}:each>
enum generates an element which is passed to the block and the block variables are set equal to those values.
k, v = enum.next
#=> [:name, "XXX"]
k #=> :name
v #=> "XXX"
v.is_a?(Hash)
#=> false
a = [[k,v]]
#=> [[:name, "XXX"]]
k, v = enum.next
#=> [:email, "xxx@yyy.com"]
v.is_a?(Hash)
#=> false
b = [[k,v]]
#=> [[:email, "xxx@yyy.com"]]
k,v = enum.next
#=> [:details, {:phone=>"9999999999", :dob=>"00-00-00", :address=>"zzz"}]
v.is_a?(Hash)
#=> true
c = v.to_a
#=> [[:phone, "9999999999"], [:dob, "00-00-00"], [:address, "zzz"]]
d = account.flat_map { |k,v| v.is_a?(Hash) ? v.to_a : [[k,v]] }
#=> [*a, *b, *c]
#=> [[:name, "XXX"], [:email, "xxx@yyy.com"], [:phone, "9999999999"],
# [:dob, "00-00-00"], [:address, "zzz"]]
d.to_h
#=> <the return value shown above>

Creating a new hash with default keys

I want to create a hash with an index that comes from an array.
ary = ["a", "b", "c"]
h = Hash.new(ary.each{|a| h[a] = 0})
My goal is to start with a hash like this:
h = {"a"=>0, "b"=>0, "c"=>0}
so that later when the hash has changed I can reset it with h.default
Unfortunately the way I'm setting up the hash is not working... any ideas?
You should instantiate your hash h first, and then fill it with the contents of the array:
h = {}
ary = ["a", "b", "c"]
ary.each{|a| h[a] = 0}
Use the default value feature for the hash
h = Hash.new(0)
h["a"] # => 0
In this approach, the key is not set.
h.key?("a") # => false
Other approach is to set the missing key when accessed.
h = Hash.new {|h, k| h[k] = 0}
h["a"] # => 0
h.key?("a") # => true
Even in this approach, the operations like key? will fail if you haven't accessed the key before.
h.key?("b") # => false
h["b"] # => 0
h.key?("b") # => true
You can always resort to brute force, which has the least boundary conditions.
h = Hash.new.tap {|h| ["a", "b", "c"].each{|k| h[k] = 0}}
h.key?("b") # => true
h["b"] # => 0
You can do it like this where you expand a list into zero-initialized values:
list = %w[ a b c ]
hash = Hash[list.collect { |i| [ i, 0 ] }]
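If you're on Ruby 2.6 or newer, Enumerable#to_h with a block expresses the same expansion directly:

```ruby
# Block form of to_h (Ruby 2.6+): map each key to a [key, value] pair.
h = %w[a b c].to_h { |k| [k, 0] }
h #=> {"a"=>0, "b"=>0, "c"=>0}
```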
You can also make a Hash that simply has a default value of 0 for any given key:
hash = Hash.new { |h, k| h[k] = 0 }
Any new key referenced will be pre-initialized to the default value and this will avoid having to initialize the whole hash.
This may not be the most efficient way, but I always appreciate one-liners that reveal a little more about Ruby's versatility:
h = Hash[['a', 'b', 'c'].collect { |v| [v, 0] }]
Or another one-liner that does the same thing:
h = ['a', 'b', 'c'].inject({}) {|h, v| h[v] = 0; h }
By the way, from a performance standpoint, the one-liners run about 80% of the speed of:
h = {}
ary = ['a','b','c']
ary.each { |a| h[a]=0 }
Rails 6 added index_with to the Enumerable module. This helps in creating a hash from an enumerable with default or computed values.
ary = %w[a b c]
hash = ary.index_with(0) # => {"a"=>0, "b"=>0, "c"=>0}
Another option is to use the Enumerable#inject method, which I'm a fan of for its cleanliness. I haven't benchmarked it compared to the other options, though.
h = ary.inject({}) {|hash, key| hash[key] = 0; hash}
An alternate way of building a hash with the keys actually added:
Hash[[:a, :b, :c].zip([])] # => {:a=>nil, :b=>nil, :c=>nil}
Hash[[:a, :b, :c].zip(Array.new(3, 0))] # => {:a=>0, :b=>0, :c=>0}
