I am returning a response of user fields in JSON. I am creating the JSON as below:
def user_response(users)
  users_array = []
  users.each do |user|
    uhash = {}
    uhash[:id] = user.id,
    uhash[:nickname] = user.nickname,
    uhash[:online_sharing] = user.online_sharing,
    uhash[:offline_export] = user.offline_export,
    uhash[:created_at] = user.created_at,
    uhash[:app_opens_count] = user.app_opens_count,
    uhash[:last_activity] = user.last_activity,
    uhash[:activity_goal] = user.activity_goal,
    uhash[:last_activity] = user.last_activity,
    uhash[:region] = user.region
    users_array << uhash
  end
  users_array
end
But the response is pretty weird: the :id key in the hash contains an array of all the fields, and I don't know why.
{
  "nickname": "adidas",
  "online_sharing": null,
  "offline_export": null,
  "created_at": "2016-08-26T09:03:54.000Z",
  "app_opens_count": 29,
  "last_activity": "2016-08-26T09:13:01.000Z",
  "activity_goal": 3,
  "region": "US",
  "id": [
    9635,
    "adidas",
    null,
    null,
    "2016-08-26T09:03:54.000Z",
    29,
    "2016-08-26T09:13:01.000Z",
    3,
    "2016-08-26T09:13:01.000Z",
    "US"
  ]
}
That's due to the , at the end of each line.
The problem consists of two things:
An assignment evaluates as the value being assigned:
puts (foo = 42) # => prints 42
Multiple values separated by commas on the right-hand side of an assignment form an array:
bar = 1, 2, 3
bar # => [1, 2, 3]
The newlines don't change that, so you are basically doing something like this:
sonne = (foo = :eins), (bar = :zwei), (baz = :drei), (qux = :vier)
sonne # => [:eins, :zwei, :drei, :vier]
The fix is indeed to remove the commas.
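For example, here is the loop from the question with the trailing commas removed (and the duplicate :last_activity assignment dropped):

def user_response(users)
  users_array = []
  users.each do |user|
    uhash = {}
    uhash[:id] = user.id
    uhash[:nickname] = user.nickname
    uhash[:online_sharing] = user.online_sharing
    uhash[:offline_export] = user.offline_export
    uhash[:created_at] = user.created_at
    uhash[:app_opens_count] = user.app_opens_count
    uhash[:last_activity] = user.last_activity
    uhash[:activity_goal] = user.activity_goal
    uhash[:region] = user.region
    users_array << uhash
  end
  users_array
end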
You have a comma at the end of each line:
uhash[:id] = user.id,
Also, you may change the above code to:
def user_response(users)
  users.map do |user|
    # attributes returns a string-keyed hash, so slice with string keys
    user.attributes.slice("id", "nickname", "online_sharing", "offline_export", "created_at", "app_opens_count", "last_activity", "activity_goal", "region")
  end
end
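If these are ActiveRecord models, a similar result can also be had with as_json (a sketch, assuming the standard Rails serialization options):

def user_response(users)
  users.as_json(only: [:id, :nickname, :online_sharing, :offline_export,
                       :created_at, :app_opens_count, :last_activity,
                       :activity_goal, :region])
end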
Related
I spent some time looking for a way to convert an array that comes in an HTTP request into something Rails can read.
Example of the request:
"line_item":[{
"item_id": 1,
"item_name": "Burger",
"item_price": 15.00,
"item_quantity": 2
},
{
"item_id": 2,
"item_name": "Soda",
"item_price": 3.00,
"item_quantity": 1
}]
and then I loop over the array like so:
@not_available_items = []
@line_item = @order.line_item
@line_item.each do |i|
  @item = Item.find(i.item_id)
  if @item.is_active == false
    @not_available_items.push(
      {
        item_id: i.item_id,
        item_name: i.item_name
      }
    )
  end
end
The error message:
"exception": "#<NoMethodError: undefined method `item_id' for {\"item_id\"=> 1, \"item_name\"=>\"Burger\", \"item_price\"=> 15.0, \"item_quantity\"=> 2}:Hash>"
Hope I explained the issue clearly.
Thanks
The variable i is an object of type Hash (and not of type Item). In Ruby, unlike JavaScript, you cannot access a key of a Hash object using the . operator.
You can use the [] operator to fetch the value for a key in a Hash.
In your code you are using i.item_id, but i does not have an attribute or method called item_id; it has a key "item_id". Since i is a Hash, you can access the field with i["item_id"].
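For instance (a tiny illustration, not part of the original code):

i = { "item_id" => 1, "item_name" => "Burger" }
i["item_id"] #=> 1
i.item_id    #=> NoMethodError: undefined method `item_id' for an instance of Hash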
@not_available_items = []
@line_item = @order.line_item
@line_item.each do |i|
  @item = Item.find(i["item_id"])
  if @item.is_active == false
    @not_available_items.push(
      {
        item_id: i["item_id"],
        item_name: i["item_name"]
      }
    )
  end
end
I was looking around for a clean way to do this and found some workarounds, but nothing like slice (some people recommended a gem, but I don't think one is needed for this operation; please correct me if I am wrong). I have a hash that contains a bunch of nested hashes, and I wanted a way to perform the slice operation over this hash and also get the key/value pairs from the nested hashes. So the question:
Is there something like deep_slice in ruby?
Example:
input: a = {b: 45, c: {d: 55, e: { f: 12}}, g: {z: 90}}, keys = [:b, :f, :z]
expected output: {:b=>45, :f=>12, :z=>90}
Thx in advance! 👍
After looking around for a while I decided to implement this myself; this is how I fixed it:
a = {b: 45, c: {d: 55, e: { f: 12}}, g: {z: 90}}
keys = [:b, :f, :z]

def custom_deep_slice(a:, keys:)
  result = a.slice(*keys)
  a.keys.each do |k|
    if a[k].class == Hash
      result.merge! custom_deep_slice(a: a[k], keys: keys)
    end
  end
  result
end
c_deep_slice = custom_deep_slice(a: a, keys: keys)
p c_deep_slice
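With the example input above, this prints {:b=>45, :f=>12, :z=>90}, which matches the expected output.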
The code above is a classic DFS, which takes advantage of the merge! method provided by the Hash class.
You can test the code above here
require 'set'

def recurse(h, keys)
  h.each_with_object([]) do |(k,v),arr|
    if keys.include?(k)
      arr << [k,v]
    elsif v.is_a?(Hash)
      arr.concat(recurse(v,keys))
    end
  end
end
hash = { b: 45, c: { d: 55, e: { f: 12 } }, g: { b: 21, z: 90 } }
keys = [:b, :f, :z]
arr = recurse(hash, keys.to_set)
#=> [[:b, 45], [:f, 12], [:b, 21], [:z, 90]]
Notice that hash differs slightly from the example hash given in the question. I added a second nested key :b to illustrate the problem of returning a hash rather than an array of key-value pairs. Were we to convert arr to a hash the pair [:b, 45] would be discarded:
arr.to_h
#=> {:b=>21, :f=>12, :z=>90}
If desired, however, one could write:
arr.each_with_object({}) { |(k,v),h| (h[k] ||= []) << v }
#=> {:b=>[45, 21], :f=>[12], :z=>[90]}
I converted keys from an array to a set merely to speed lookups (keys.include?(k)).
A slightly modified approach could be used if the hash contained nested arrays of hashes as well as nested hashes.
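For example, one such modification might look like this (a sketch; the name recurse_with_arrays is just illustrative):

def recurse_with_arrays(obj, keys)
  case obj
  when Hash
    obj.each_with_object([]) do |(k,v),arr|
      if keys.include?(k)
        arr << [k,v]
      else
        arr.concat(recurse_with_arrays(v, keys))
      end
    end
  when Array
    obj.flat_map { |el| recurse_with_arrays(el, keys) }
  else
    []
  end
end

recurse_with_arrays({ b: 45, c: [{ f: 12 }, { z: 90 }] }, [:b, :f, :z])
#=> [[:b, 45], [:f, 12], [:z, 90]]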
My version; maybe it will help:
def deep_slice( obj, *args )
  deep_arg = {}
  slice_args = []
  args.each do |arg|
    if arg.is_a? Hash
      arg.each do |hash|
        key, value = hash
        if obj[key].is_a? Hash
          deep_arg[key] = deep_slice( obj[key], *value )
        elsif obj[key].is_a? Array
          deep_arg[key] = obj[key].map{ |arr_el| deep_slice( arr_el, *value) }
        end
      end
    elsif arg.is_a? Symbol
      slice_args << arg
    end
  end
  obj.slice(*slice_args).merge(deep_arg)
end
Object to slice
obj = {
  "id": 135,
  "kind": "transfer",
  "customer": {
    "id": 1,
    "name": "Admin",
  },
  "array": [
    {
      "id": 123,
      "name": "TEST",
      "more_deep": {
        "prop": "first",
        "prop2": "second"
      }
    },
    {
      "id": 222,
      "name": "2222"
    }
  ]
}
Schema to slice
deep_slice(
  obj,
  :id,
  customer: [
    :name
  ],
  array: [
    :name,
    more_deep: [
      :prop2
    ]
  ]
)
Result
{
  :id=>135,
  :customer=>{
    :name=>"Admin"
  },
  :array=>[
    {
      :name=>"TEST",
      :more_deep=>{
        :prop2=>"second"
      }
    },
    {
      :name=>"2222"
    }
  ]
}
I want to dynamically create a Hash without overwriting keys from an array of arrays. Each array has a string that contains the nested key that should be created. However, I am running into the issue where I am overwriting keys, and thus only the last key remains.
data = {}
values = [
  ["income:concessions", 0, "noi", "722300", "purpose", "refinancing"],
  ["fees:fee-one", "0", "income:gross-income", "900000", "expenses:admin", "7500"],
  ["fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
]
What it should look like:
{
  "income" => {
    "concessions" => 0,
    "gross-income" => "900000"
  },
  "expenses" => {
    "admin" => "7500",
    "other" => "0"
  },
  "noi" => "722300",
  "purpose" => "refinancing",
  "fees" => {
    "fee-one" => 0,
    "fee-two" => 0
  },
  "address" => {
    "zip" => "10019"
  }
}
This is the code that I currently have; how can I avoid overwriting keys when I merge?
values.each do |row|
  Hash[*row].each do |key, value|
    keys = key.split(':')
    if !data.dig(*keys)
      hh = keys.reverse.inject(value) { |a, n| { n => a } }
      a = data.merge!(hh)
    end
  end
end
The code you've provided can be modified to merge hashes on conflict instead of overwriting:
values.each do |row|
  Hash[*row].each do |key, value|
    keys = key.split(':')
    if !data.dig(*keys)
      hh = keys.reverse.inject(value) { |a, n| { n => a } }
      data.merge!(hh) { |_, old, new| old.merge(new) }
    end
  end
end
But this code only works for two levels of nesting.
By the way, I noticed the ruby-on-rails tag on the question. There's a deep_merge method (from ActiveSupport) that can fix the problem:
values.each do |row|
  Hash[*row].each do |key, value|
    keys = key.split(':')
    if !data.dig(*keys)
      hh = keys.reverse.inject(value) { |a, n| { n => a } }
      data.deep_merge!(hh)
    end
  end
end
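For reference, deep_merge merges nested hashes instead of replacing them:

require 'active_support/core_ext/hash/deep_merge'

{ "income" => { "concessions" => 0 } }.deep_merge("income" => { "gross-income" => "900000" })
#=> {"income"=>{"concessions"=>0, "gross-income"=>"900000"}}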
values.flatten.each_slice(2).with_object({}) do |(f,v),h|
  k,e = f.is_a?(String) ? f.split(':') : [f,nil]
  h[k] = e.nil? ? v : (h[k] || {}).merge(e=>v)
end
#=> {"income"=>{"concessions"=>0, "gross-income"=>"900000"},
# "noi"=>"722300",
# "purpose"=>"refinancing",
# "fees"=>{"fee-one"=>"0", "fee-two"=>"0"},
# "expenses"=>{"admin"=>"7500", "other"=>"0"},
# "address"=>{"zip"=>"10019"}}
The steps are as follows.
values = [
  ["income:concessions", 0, "noi", "722300", "purpose", "refinancing"],
  ["fees:fee-one", "0", "income:gross-income", "900000", "expenses:admin", "7500"],
  ["fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
]
a = values.flatten
#=> ["income:concessions", 0, "noi", "722300", "purpose", "refinancing",
# "fees:fee-one", "0", "income:gross-income", "900000", "expenses:admin", "7500",
# "fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
enum1 = a.each_slice(2)
#=> #<Enumerator: ["income:concessions", 0, "noi", "722300",
# "purpose", "refinancing", "fees:fee-one", "0", "income:gross-income", "900000",
# "expenses:admin", "7500", "fees:fee-two", "0", "address:zip", "10019",
# "expenses:other","0"]:each_slice(2)>
We can see what values this enumerator will generate by converting it to an array.
enum1.to_a
#=> [["income:concessions", 0], ["noi", "722300"], ["purpose", "refinancing"],
# ["fees:fee-one", "0"], ["income:gross-income", "900000"],
# ["expenses:admin", "7500"], ["fees:fee-two", "0"],
# ["address:zip", "10019"], ["expenses:other", "0"]]
Continuing,
enum2 = enum1.with_object({})
#=> #<Enumerator: #<Enumerator:
# ["income:concessions", 0, "noi", "722300", "purpose", "refinancing",
# "fees:fee-one", "0", "income:gross-income", "900000", "expenses:admin", "7500",
# "fees:fee-two", "0", "address:zip", "10019", "expenses:other", "0"]
# :each_slice(2)>:with_object({})>
enum2.to_a
#=> [[["income:concessions", 0], {}], [["noi", "722300"], {}],
# [["purpose", "refinancing"], {}], [["fees:fee-one", "0"], {}],
# [["income:gross-income", "900000"], {}], [["expenses:admin", "7500"], {}],
# [["fees:fee-two", "0"], {}], [["address:zip", "10019"], {}],
# [["expenses:other", "0"], {}]]
enum2 can be thought of as a compound enumerator (though Ruby has no such concept). The hash being generated is initially empty, as shown, but will be filled in as additional elements are generated by enum2.
The first value is generated by enum2 and passed to the block, where the block variables are assigned values by a process called array decomposition.
(f,v),h = enum2.next
#=> [["income:concessions", 0], {}]
f #=> "income:concessions"
v #=> 0
h #=> {}
We now perform the block calculation.
f.is_a?(String)
#=> true
k,e = f.is_a?(String) ? f.split(':') : [f,nil]
#=> ["income", "concessions"]
e.nil?
#=> false
h[k] = e.nil? ? v : (h[k] || {}).merge(e=>v)
#=> {"concessions"=>0}
h[k] equals nil if h does not have a key k. In that case, (h[k] || {}) #=> {}. If h does have a key k (and h[k] is not nil), (h[k] || {}) #=> h[k].
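A tiny illustration:

h = { "income" => { "concessions" => 0 } }
h["noi"]          #=> nil
h["noi"] || {}    #=> {}
h["income"] || {} #=> {"concessions"=>0}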
A second value is now generated by enum2 and passed to the block.
(f,v),h = enum2.next
#=> [["noi", "722300"], {"income"=>{"concessions"=>0}}]
f #=> "noi"
v #=> "722300"
h #=> {"income"=>{"concessions"=>0}}
Notice that the hash, h, has been updated. Recall it will be returned once all elements of enum2 have been generated. We now perform the block calculation.
f.is_a?(String)
#=> true
k,e = f.is_a?(String) ? f.split(':') : [f,nil]
#=> ["noi"]
e #=> nil
e.nil?
#=> true
h[k] = e.nil? ? v : (h[k] || {}).merge(e=>v)
#=> "722300"
h #=> {"income"=>{"concessions"=>0}, "noi"=>"722300"}
The remaining calculations are similar.
merge overwrites a duplicate key by default.
{ "income"=> { "concessions" => 0 } }.merge({ "income"=> { "gross-income" => "900000" } }) completely overwrites the original value of "income". What you want is a recursive merge, where instead of just merging the top-level hash you merge the nested values when there is duplication.
merge takes a block where you can specify what to do in the event of duplication. From the documentation:
merge!(other_hash){|key, oldval, newval| block} → hsh
Adds the contents of other_hash to hsh. If no block is specified, entries with duplicate keys are overwritten with the values from other_hash, otherwise the value of each duplicate key is determined by calling the block with the key, its value in hsh and its value in other_hash
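A small example of the block form:

{ a: 1, b: 2 }.merge(a: 10, c: 3) { |key, old, new| old + new }
#=> {:a=>11, :b=>2, :c=>3}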
Using this, you can define a simple recursive_merge in one line:
def recursive_merge!(hash, other)
  hash.merge!(other) { |_key, old_val, new_val| recursive_merge!(old_val, new_val) }
end

values.each do |row|
  Hash[*row].each do |key, value|
    keys = key.split(':')
    if !data.dig(*keys)
      hh = keys.reverse.inject(value) { |a, n| { n => a } }
      a = recursive_merge!(data, hh)
    end
  end
end
A few more lines will give you a more robust solution that will overwrite duplicate keys that are not hashes and can even take a block, just like merge:
def recursive_merge!(hash, other, &block)
  hash.merge!(other) do |_key, old_val, new_val|
    if [old_val, new_val].all? { |v| v.is_a?(Hash) }
      recursive_merge!(old_val, new_val, &block)
    elsif block_given?
      block.call(_key, old_val, new_val)
    else
      new_val
    end
  end
end
h1 = { a: true, b: { c: [1, 2, 3] } }
h2 = { a: false, b: { x: [3, 4, 5] } }
recursive_merge!(h1, h2) { |_k, o, _n| o } # => { a: true, b: { c: [1, 2, 3], x: [3, 4, 5] } }
Note: This method reproduces the results you would get from ActiveSupport's Hash#deep_merge if you're using Rails.
This is how I would handle this:
def new_h
  Hash.new{|h,k| h[k] = new_h}
end

values.flatten.each_slice(2).each_with_object(new_h) do |(k,v),obj|
  keys = k.is_a?(String) ? k.split(':') : [k]
  if keys.count > 1
    set_key = keys.pop
    obj.merge!(keys.inject(new_h) {|memo,k1| memo[k1] = new_h})
       .dig(*keys)
       .merge!({set_key => v})
  else
    obj[k] = v
  end
end
#=> {"income"=>{
"concessions"=>0,
"gross-income"=>"900000"},
"noi"=>"722300",
"purpose"=>"refinancing",
"fees"=>{
"fee-one"=>"0",
"fee-two"=>"0"},
"expenses"=>{
"admin"=>"7500",
"other"=>"0"},
"address"=>{
"zip"=>"10019"}
}
Explanation:
Define a method (new_h) for setting up a new Hash whose default at any level is another such Hash (Hash.new{|h,k| h[k] = new_h})
First flatten the Array (values.flatten)
then group each 2 elements together as pseudo key-value pairs (.each_slice(2))
then iterate over the pairs using an accumulator where each new element added defaults to a Hash (.each_with_object(new_h) do |(k,v),obj|)
split the pseudo key on a colon (keys = k.is_a?(String) ? k.split(':') : [k])
if there is a split then create the parent key(s) (obj.merge!(keys.inject(new_h) {|memo,k1| memo[k1] = new_h}))
merge the last child key equal to the value (.dig(*keys).merge!({set_key => v}))
otherwise set the single key equal to the value (obj[k] = v)
This has infinite depth as long as the depth chain is not broken, say [["income:concessions:other",12],["income:concessions", 0]]; in this case the latter value will take precedence. (Note: this applies to all the answers in one way or another; e.g. in the accepted answer the former wins, but a value is still lost due to the inaccurate data structure.)
repl.it Example
I have a field and value whose output looks like
A::field_1_2_3_4_22_5_6_7_8_365 => 6
The field name is "dynamic" because it contains an IP and a port.
How can I use Ruby to get the values out of the field name and add fields for them? It should look like:
A::field => 6
bIP => 1.2.3.4
bport => 22
cIP => 5.6.7.8
cport => 365
Any help will be appreciated! Thanks.
Here is an answer for a very similar question: logstash name fields dynamically
For this precise case, the ruby filter needs to be a bit more involved in order to capture the different parts:
filter {
  ruby {
    code => "
      newhash = {}
      event.to_hash.each { |key, value|
        re = /(\w::[a-z]+)_(\d+_\d+_\d+_\d+)_(\d+)_(\d+_\d+_\d+_\d+)_(\d+)/
        if key =~ re then
          field, bIP, bport, cIP, cport = key.match(re).captures
          newhash[field] = event[key]
          newhash['bIP'] = bIP.gsub('_', '.')
          newhash['bport'] = bport
          newhash['cIP'] = cIP.gsub('_', '.')
          newhash['cport'] = cport
          event.remove(key)
        end
      }
      newhash.each { |key, value|
        event[key] = value
      }
    "
  }
}
So if you have a field like "A::field_1_2_3_4_22_5_6_7_8_365" => "6" in your event, your event will then contain the following fields:
{
  "A::field" => "6",
  "bIP" => "1.2.3.4",
  "bport" => "22",
  "cIP" => "5.6.7.8",
  "cport" => "365"
}
You can use the following code:
field_name = 'A::field_1_2_3_4_22_5_6_7_8_365'
fields = field_name.split("_")
bIP = "#{fields[1]}.#{fields[2]}.#{fields[3]}.#{fields[4]}"
bport = fields[5]
cIP = "#{fields[6]}.#{fields[7]}.#{fields[8]}.#{fields[9]}"
cport = fields[10]
Maybe I've not understood this quite correctly, but you can use arrays and the push method for adding:
arr = [1, 2, 5, 7]
n = [3, 4, 6]
arr.push(*n).sort #=> [1, 2, 3, 4, 5, 6, 7]
I have a text array.
text_array = ["bob", "alice", "dave", "carol", "frank", "eve", "jordan", "isaac", "harry", "george"]
text_array = text_array.sort would give us a sorted array.
However, I want a sorted array with f as the first letter for our order, and e as the last.
So the end result should be...
text_array = ["frank", "george", "harry", "isaac", "jordan", "alice", "bob", "carol", "dave", "eve"]
What would be the best way to accomplish this?
Try this:
result = (text_array.select{ |v| v =~ /^[f-z]/ }.sort + text_array.select{ |v| v =~ /^[a-e]/ }.sort).flatten
It's not the prettiest but it will get the job done.
Edit per comment. Making a more general piece of code:
before = []
after = []
text_array.sort.each do |t|
  if t > term
    after << t
  else
    before << t
  end
end
return (after + before).flatten
This code assumes that term is whatever you want to divide the array. And if an array value equals term, it will be at the end.
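For example, a sketch that wraps the snippet above in a method, with term as a parameter (the method name rotate_around is just illustrative):

def rotate_around(text_array, term)
  before = []
  after = []
  text_array.sort.each do |t|
    if t > term
      after << t
    else
      before << t
    end
  end
  after + before
end

rotate_around(text_array, "f")
#=> ["frank", "george", "harry", "isaac", "jordan",
#    "alice", "bob", "carol", "dave", "eve"]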
You can do that using a hash:
alpha = ('a'..'z').to_a
#=> ["a", "b", "c",..."x", "y", "z"]
reordered = alpha.rotate(5)
#=> ["f", "g",..."z", "a",...,"e"]
h = reordered.zip(alpha).to_h
# => {"f"=>"a", "g"=>"b",..., "z"=>"u", "a"=>"v",..., e"=>"z"}
text_array.sort_by { |w| w.gsub(/./,h) }
#=> ["frank", "george", "harry", "isaac", "jordan",
# "alice", "bob", "carol", "dave", "eve"]
A variant of this is:
a_to_z = alpha.join
#=> "abcdefghijklmnopqrstuvwxyz"
f_to_e = reordered.join
#=> "fghijklmnopqrstuvwxyzabcde"
text_array.sort_by { |w| w.tr(f_to_e, a_to_z) }
#=> ["frank", "george", "harry", "isaac", "jordan",
# "alice", "bob", "carol", "dave", "eve"]
I think the easiest would be to rotate the sorted array:
sorted = text_array.sort
sorted.rotate(offset) if offset = sorted.find_index { |e| e.size > 0 and e[0] == 'f' }
Combining Ryan K's answer and my previous answer, this is a one-liner you can use without any regex:
text_array = text_array.sort!.select {|x| x[0] >= "f"} + text_array.select {|x| x[0] < "f"}
If I got your question right, it looks like you want to create a sorted list biased by predefined patterns.
That is, let's say you want to define specific patterns of text which can completely change the sorting sequence for the array elements.
Here is my proposal; you can get better code out of this, but it's what my tired brain came up with for now:
an_array = ["bob", "alice", "dave", "carol", "frank", "eve", "jordan", "isaac", "harry", "george"]
# Define your patterns with scores so that the sorting result can vary accordingly
# It's full-fledged Regex so you can put any kind of regex you want.
patterns = {
  /^f/ => 100,
  /^e/ => -100,
  /^g/ => 60,
  /^j/ => 40
}

# Sort the array with our preferred sequence
sorted_array = an_array.sort do |left, right|
  # Find the score for the left string
  left_score = patterns.find{ |p, s| left.match(p) }
  left_score = left_score ? left_score.last : 0

  # Find the score for the right string
  right_score = patterns.find{ |p, s| right.match(p) }
  right_score = right_score ? right_score.last : 0

  # Create the comparison score to prepare the right order
  # 1 means replace with right and -1 means replace with left
  # and 0 means remain unchanged
  score = if right_score > left_score
            1
          elsif left_score > right_score
            -1
          else
            0
          end

  # For debugging purposes, I added some verbose output
  puts "L#{left_score}, R:#{right_score}: #{left}, #{right} => #{score}"

  score
end
# Original array
puts an_array.join(', ')
# Biased array
puts sorted_array.join(', ')