How to add a column to an existing row with the axlsx gem? - ruby-on-rails

I am trying to format my xlsx file and I have run into a problem: I need to add a column to an existing row. Here is my code:
wb.add_worksheet(name: "Sums") do |sheet|
  sheet.add_row ["1.", "Rodiklis"], :style => [title]
  sheet.add_row ["1.1", "Rekomendacijų vertė"]
  @departaments.each do |departament|
    sheet.add_row ["", departament.name]
  end
  @all_items.each do |summary|
    tyfcb = 0
    rgi = 0
    rgo = 0
    rgirgo = 0
    total = 0
    rgirgo_per_user = 0
    meeting_1_2_1 = 0
    meeting_1_2_1_per_user = 0
    ceu = 0
    ceu_per_user = 0
    v = 0
    v_per_user = 0
    summary.departament.contacts.each do |c|
      items = c.items.where(summary_id: summary)
      tyfcb += c.item_x(items, 'tyfcb')
      rgi += c.item_x(items, 'rgi')
      rgo += c.item_x(items, 'rgo')
      meeting_1_2_1 += c.item_x(items, '1_2_1')
      total += 1
      ceu += c.item_x(items, 'ceu')
      v += c.item_x(items, 'v')
    end
    rgirgo = rgi + rgo
    rgirgo_per_user = rgirgo.to_f / total.to_f
    meeting_1_2_1_per_user = meeting_1_2_1.to_f / total.to_f
    ceu_per_user = ceu.to_f / total.to_f
    v_per_user = v.to_f / total.to_f
    sheet.add_row [summary.departament.name, summary.id, tyfcb]
    sheet.add_row [summary.departament.name, summary.id, rgirgo]
    sheet.add_row [summary.departament.name, summary.id, sprintf('%.2f', rgirgo_per_user)]
    sheet.add_row [summary.departament.name, summary.id, meeting_1_2_1]
    sheet.add_row [summary.departament.name, summary.id, sprintf('%.2f', meeting_1_2_1_per_user)]
    sheet.add_row [summary.departament.name, summary.id, ceu]
    sheet.add_row [summary.departament.name, summary.id, sprintf('%.2f', ceu_per_user)]
    sheet.add_row [summary.departament.name, summary.id, v]
    sheet.add_row [summary.departament.name, summary.id, sprintf('%.2f', v_per_user)]
  end
end
So from the code you can see that I generate lots of variables: tyfcb, ceu, and so on. Each of them is currently printed in a new row. I want to print all tyfcb values in one row, all ceu values in another row, and so on. How do I do that with the axlsx gem? Thanks for answers.

Use add_cell
add_cell(value = '', options = {}) ⇒ Cell
This adds a single cell to the row based on the data provided and updates the worksheet's autofit data.
http://www.rubydoc.info/github/randym/axlsx/Axlsx/Row
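For example, a minimal sketch of that approach for the question above: build one row per metric up front and append one cell per summary. The row labels and the totals_for helper are placeholders for illustration, not part of the question's code:
tyfcb_row = sheet.add_row ["TYFCB"]   # add_row returns the Axlsx::Row it creates
ceu_row   = sheet.add_row ["CEU"]
@all_items.each do |summary|
  tyfcb, ceu = totals_for(summary)    # totals_for is a hypothetical helper standing in
                                      # for the accumulation loop from the question
  tyfcb_row.add_cell tyfcb            # add_cell appends a new column to the existing row
  ceu_row.add_cell ceu
end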

Just add an empty string "" as the first cell of the row, or wherever you want a blank column.

Related

Group users by age range in rails

Based on the PESEL number I have to group users by their age. I created something like this and it works, but to be honest it looks bad to me.
HELPER:
def years(pesel)
  years = (0..99).to_a
  birth_year = []
  case pesel[2..3].to_i
  when 0..19
    20.times do |index|
      first_number = index % 2 == 0 ? (5 * index) : ((5 * index))
      second_number = index % 2 == 0 ? (5 * index + 4) : ((5 * index) + 4)
      first_year = Date.today.year - second_number.to_s.rjust(4, '1900').to_i
      second_year = Date.today.year - first_number.to_s.rjust(4, '1900').to_i
      birth_year += ["#{first_year}-#{second_year}"]
    end
    multiplied_birth_years = ([birth_year] * 5).inject(&:zip).flatten
    hash = Hash[years.zip multiplied_birth_years]
    hash.fetch(pesel[0..1].to_i)
  when 20..39
    20.times do |index|
      first_number = index % 2 == 0 ? (5 * index) : ((5 * index))
      second_number = index % 2 == 0 ? (5 * index + 4) : ((5 * index) + 4)
      first_year = Date.today.year - second_number.to_s.rjust(4, '2000').to_i
      second_year = Date.today.year - first_number.to_s.rjust(4, '2000').to_i
      birth_year += ["#{first_year}-#{second_year}"]
    end
    multiplied_birth_years = ([birth_year] * 5).inject(&:zip).flatten
    hash = Hash[years.zip multiplied_birth_years]
    hash.fetch(pesel[0..1].to_i)
  when 40..59
    20.times do |index|
      first_number = index % 2 == 0 ? (5 * index) : ((5 * index))
      second_number = index % 2 == 0 ? (5 * index + 4) : ((5 * index) + 4)
      first_year = Date.today.year - second_number.to_s.rjust(4, '2100').to_i
      second_year = Date.today.year - first_number.to_s.rjust(4, '2100').to_i
      birth_year += ["#{first_year}-#{second_year}"]
    end
    multiplied_birth_years = ([birth_year] * 5).inject(&:zip).flatten
    hash = Hash[years.zip multiplied_birth_years]
    hash.fetch(pesel[0..1].to_i)
  end
end
CONTROLLER:
def grouped_by_age
  @yearsbook = @study_participations.includes(user: :profile).group_by do |study_participation|
    years(study_participation.user.profile.pesel)
  end
end
A small explanation and example: I am interested in the first six digits, which correspond, in order, to the year of birth, the month, and the day.
So if my PESEL == '980129(...)', then I was born on the twenty-ninth of January 1998.
If someone was born in the 2000s, then 20 is added to the PESEL month number (for example, '002129(...)' is the twenty-ninth of January 2000). If someone was born in the 2100s, then 40 is added to the PESEL month number.
Now that I have explained what the PESEL number is about, here is what I want to do with it.
I need to group users by their age range. The function above returns a hash like this:
{0=>"118-122",
1=>"118-122",
2=>"118-122",
3=>"118-122",
4=>"118-122",
5=>"113-117",
6=>"113-117",
7=>"113-117",
8=>"113-117",
9=>"113-117",
10=>"108-112",
11=>"108-112",
12=>"108-112",
13=>"108-112",
14=>"108-112",
15=>"103-107",
16=>"103-107",
17=>"103-107",
18=>"103-107",
19=>"103-107",(...)}
Unfortunately this is not very efficient, because for each user (4000 max) I have to run the whole function from scratch. Is there any way to make this more efficient? I thought about storing this hash as a constant and regenerating it once a year, but I don't really know how to do that, or whether it is possible.
EDIT:
Forgot to mention: I need to compare the user's age against the hash so I can extract the age range.
EDIT2:
Based on @yoones' answer I created something like this:
HELPER:
def years_cache
  years = []
  201.times do |index|
    years += [Date.today.year - (1900 + index)]
  end
  birth_year = []
  60.times do |index|
    year = if index < 20
             '1900'
           elsif index < 40
             '2000'
           else
             '2100'
           end
    first_number = 5 * (index % 20)
    second_number = (5 * (index % 20)) + 4
    first_year = Date.today.year - second_number.to_s.rjust(4, year).to_i
    second_year = Date.today.year - first_number.to_s.rjust(4, year).to_i
    birth_year += ["#{first_year}-#{second_year}"]
  end
  multiplied_birth_years = ([birth_year] * 5).inject(&:zip).flatten
  @hash = (years.zip multiplied_birth_years).to_h
end
def years(cache, pesel)
  day = pesel[4..5]
  case pesel[2..3].to_i
  when 0..19
    month = pesel[2..3]
    year = pesel[0..1].prepend('19')
  when 20..39
    month = (pesel[2..3].to_i - 20).to_s
    year = pesel[0..1].prepend('20')
  when 40..59
    month = (pesel[2..3].to_i - 40).to_s
    year = pesel[0..1].prepend('21')
  end
  birth_date = Time.strptime("#{day}/#{month}/#{year}", '%d/%m/%Y')
  age = ((Time.zone.now - birth_date) / 1.year.seconds).floor
  cache.fetch(age)
end
CONTROLLER:
def grouped_by_age
  cache = years_cache()
  @yearsbook = @study_participations.includes(user: :profile).group_by do |study_participation|
    years(cache, study_participation.user.profile.pesel)
  end
end
Instead of doing the complicated calculation of the birth date from the PESEL every time you want to view the page, do it once and store it in the database. Having a birth_date column on the user makes a lot of sense.
Then when you want to group them, you can even do it via the database. If you still need to do it in Ruby, then getting the birth year is as easy as user.birth_date.year.
In order to then group users into ranges of 5 years according to age, add an age_range method to the model and group by that.
@study_participations.includes(user: :profile).group_by do |study_participation|
  study_participation.user.age_range
end
Where age_range can be, for example:
def age_range
  ((Date.today.year - birth_date.year) / 5) * 5
end
Format that however you like.
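A minimal sketch of what that could look like, following the question's PESEL century rules; the migration name, the users/profiles setup, and the backfill loop are only illustrative assumptions:
class AddBirthDateToUsers < ActiveRecord::Migration
  def change
    add_column :users, :birth_date, :date
  end
end
# One-off backfill (e.g. in a rake task), decoding the PESEL as described in the question:
User.includes(:profile).find_each do |user|
  pesel = user.profile.pesel
  offset, century = case pesel[2..3].to_i
                    when 0..19  then [0, 1900]
                    when 20..39 then [20, 2000]
                    when 40..59 then [40, 2100]
                    end
  birth_date = Date.new(century + pesel[0..1].to_i, pesel[2..3].to_i - offset, pesel[4..5].to_i)
  user.update_column(:birth_date, birth_date)
end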
I guess you could at least build the cache once and then use it in your loop. The following code is not pretty; it's just to illustrate what I mean:
def build_year_cache(index, rjust_str)
  first_number = 5 * index
  second_number = (5 * index) + 4
  first_year = Date.today.year - second_number.to_s.rjust(4, rjust_str).to_i
  second_year = Date.today.year - first_number.to_s.rjust(4, rjust_str).to_i
  "#{first_year}-#{second_year}"
end
def build_years_cache
  cache = {}
  years = (0..99).to_a
  [
    [0..19, '1900'],
    [20..39, '2000'],
    [40..59, '2100']
  ].each do |range, rjust_str|
    birth_year = []
    20.times do |index|
      birth_year.append(build_year_cache(index, rjust_str))
    end
    multiplied_birth_years = ([birth_year] * 5).inject(&:zip).flatten
    cache[range] = Hash[years.zip multiplied_birth_years]
  end
  cache
end
def years(pesel, cache)
  year = pesel[0..1].to_i
  month = pesel[2..3].to_i
  range = cache.keys.find { |k| k.include?(month) }
  cache[range].fetch(year)
end
def grouped_by_age
  cache = build_years_cache
  @yearsbook = @study_participations.includes(user: :profile).group_by do |study_participation|
    years(study_participation.user.profile.pesel, cache)
  end
end
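To address the asker's idea of storing the hash "as a const and changing it once a year", you could also cache the built hash across requests instead of rebuilding it per request. A hedged sketch using Rails.cache (the method name is made up, and it assumes a cache store is configured):
def cached_years_cache
  # The key includes the current year, so the cached ranges refresh automatically each year.
  Rails.cache.fetch("years_cache_#{Date.today.year}") { build_years_cache }
end
Then grouped_by_age can call cached_years_cache instead of build_years_cache.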

Sum of values in loop and keeping highest value

I am trying to calculate the shipping dimensions of multiple products. I want to end up with the largest @tempLength, the largest @tempWidth, and the sum of all the @tempHeight values from my loop:
@tempLength = 0
@tempWidth = 0
@tempHeight = 0
params[:rate][:items].each do |item|
  item = item[:product_id]
  puts "Item details"
  puts item
  @productDimensions = Product.where(:product_id => item).first
  @tempLength = @productDimensions.length
  @tempWidth = @productDimensions.width
  @tempHeight = @productDimensions.height
  # tempLength = maximum length value
  # tempWidth = maximum width value
  # tempHeight = sum of all heights
end
I know I have to use sum(&:symbol), but I'm stuck. What's the best approach?
This will probably help:
@tempLength = 0
@tempWidth = 0
@tempHeight = 0
params[:rate][:items].each do |item|
  item = item[:product_id]
  puts "Item details"
  puts item
  @productDimensions = Product.where(:product_id => item).first
  @tempHeight = @tempHeight + @productDimensions.height
  if @productDimensions.length > @tempLength
    @tempLength = @productDimensions.length
  end
  if @productDimensions.width > @tempWidth
    @tempWidth = @productDimensions.width
  end
end
# @tempLength = largest length value
# @tempWidth  = largest width value
# @tempHeight = total of all heights
Another suggestion that may give better performance:
@tempLength = 0
@tempWidth = 0
@tempHeight = 0
product_ids = []
params[:rate][:items].each do |item|
  product_ids << item[:product_id]
end
# Filter products according to the collected product_ids
@products = Product.where(:product_id => product_ids)
# Let ActiveRecord do the work
@tempLength = @products.maximum('length')
@tempWidth = @products.maximum('width')
@tempHeight = @products.sum('height')
# @tempLength = largest length value
# @tempWidth  = largest width value
# @tempHeight = total of all heights
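If you would rather do it in plain Ruby after loading the records (assuming Product exposes length, width and height attributes), Enumerable gives the same result; this is just a sketch, not from the original answers:
product_ids = params[:rate][:items].map { |item| item[:product_id] }
products    = Product.where(:product_id => product_ids).to_a
@tempLength = products.map(&:length).max   # largest length
@tempWidth  = products.map(&:width).max    # largest width
@tempHeight = products.sum(&:height)       # total of all heights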

ruby axlsx gem: assign formula to axlsx worksheet without using add_row

According to the official examples, it is possible to assign formulas to sheets during add_row: https://github.com/randym/axlsx/blob/master/examples/example.rb#L355
My question is: is it possible to assign formulas to cells after the table has already been filled?
Here is an example of adding a formula to an existing cell.
pk = Axlsx::Package.new
wb = pk.workbook
sheet = wb.add_worksheet(name: 'Test')
sheet.add_row(['First', 'Second', 'Third'])
sheet.add_row([1, 1, 1])
sheet.add_row([2, 2, 2])
sheet.add_row([3, 3, 0]) # we will be updating the last cell in this row (C4)
sheet.add_row([4, 4, 4])
cell = sheet['C4']
cell.type = :string # it is important to ensure the type of the cell is set before adding the formula
cell.value = '=A3+A4'
pk.serialize("example.xlsx")
Here is an example that dynamically updates the formula for a range of cells:
pk = Axlsx::Package.new
wb = pk.workbook
sheet = wb.add_worksheet(name: 'Test')
sheet.add_row(['First', 'Second', 'Third'])
sheet.add_row([1, 1, 1])
sheet.add_row([2, 2, 2])
sheet.add_row([3, 3, 3])
sheet.add_row([4, 4, 4])
cells = sheet["C2:C5"] # select an array of cells
cells.each do |cell|
  row_index = cell.row.index + 1
  cell.type = :string
  cell.value = "=SUM(A#{row_index}:B#{row_index})"
end
pk.serialize("example2.xlsx")

Count from json data Rails

I'm looking to count values from some JSON data, but instead of '2' my view outputs an array in which the 1s mark the values greater than 40:
[0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
View
<%= @overall %>
The JSON data, fetched from a URL, is formatted like this:
{"status": "ok", "data": [{"2014-06-16": 32.1},{"2014-06-17": 30.2},{"2014-06-18": 42.9}]} etc
Controller
@data = JSON.parse(open(@temperature.url).read)
dates = []
temps = []
@overall = []
@data['data'].each do |data|
  dates << data.keys
  temps << data.values
  @overall << data.values.count { |i| i > 40 }
end
Since the JSON "data" key holds an array, I am assuming that multiple dates are represented by multiple hashes (one for each day). Is this correct?
{"status": "ok", "data": [{"2014-06-16": 42.1}, {"2014-06-17": 45.5}]
If that's the case, this should work:
@data = JSON.parse(open(@temperature.url).read)
dates = @data['data'].map { |data| data.keys.first }
temps = @data['data'].map { |data| data.values.first }
@overall = temps.count { |temp| temp > 40 }
OK, this will solve the issue:
@data = JSON.parse(open(@temperature.url).read)
dates = []
temps = []
@overall = []
@data['data'].each do |data|
  dates << data.keys
  temps << data.values
end
forty_count = temps.flatten.count { |i| i > 40 }
The problem with your code above is that you can't do the count on the fly... you can only count once you have the full set of temperatures, which only happens when you get to the end.
Also, the way you are adding data.values to the temps array turns it into an array of arrays, which you can see if you do it this way:
data = [{"2014-06-16" => 32.1},{"2014-06-17" => 30.2},{"2014-06-18" => 42.9}]
data.each do |data|
temps << data.values
end
puts temps.inspect # [[32.1], [30.2], [42.9]]
puts temps.flatten.inspect # [32.1, 30.2, 42.9]
temps.count {|i| i > 40 } # explodes!
temps.flatten.count {|i| i > 40 } # 1
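For completeness, the whole count can also be done in one pass without keeping intermediate arrays (same assumption about the JSON shape as above):
@overall = @data['data'].flat_map(&:values).count { |t| t > 40 }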

After a process finishes, why does memory usage stay up in Ruby?

I'm using Rails 4 with Ruby 2.1.0. I have code that takes about 2 GB of memory to complete its processing, but the problem is that after the processing finishes, the memory usage stays up. Why?
I also assign nil to variables after use, which saves about 300 MB, but the Ruby process still holds a large amount of memory.
Here is sample code to reproduce the problem:
def download_data(params, options = {})
$file_name = ''
portfolio_names = Portfolio.get_names
building_names = Building.get_names
tenant_names = Tenant.get_names
meter_names = Meter.get_meter_names
@portfolio_info, @building_info, @tenant_info, @meter_id, @from_date, @to_date = params[:portfolio_name], params[:building_name], params[:tenant_name], params[:meter_id], params[:from_date], params[:to_date]
@from_date = DateTime.parse(@from_date).to_i
@to_date = DateTime.parse(@to_date).to_i
@building_f_name = building_names["#{@building_info}"]
unless params[:fields].nil?
@fields = params[:fields].split('_')
@columns = []
@fields.each do |f|
if f.length > 0
@columns << MeterData.new.show_real_value_of_meter_data_field_revert(f).to_sym
end
end
@columns << 'date_time'.to_sym
@columns << 'tenant'.to_sym
@columns << 'meter_id'.to_sym
else
@columns = [:id, :date_time, :w3pht, :whintvlrec, :w__3phmaxavgdmd, :tenant, :meter_id]
end
#### database fetch ###########
@meter_data = MeterData.find_by_sql("SELECT tenant, meter_id, SUM(van) as van, SUM(vbn) as vbn,
SUM(vcn) as vcn, SUM(ia) as ia, SUM(ib) as ib, SUM(ic) as ic,
SUM(w3pht) as w3pht, SUM(pf3pht) as pf3pht, SUM(f) as f,
SUM(whrec) as whrec, SUM(whtot) as whtot, SUM(varhrec) as varhrec,
SUM(varhtot) as varhtot, SUM(whintvlrec) as whintvlrec,
SUM(whintvldel) as whintvldel, SUM(w__phavg) as w__phavg,
SUM(var__3phavg) as var__3phavg, SUM(w_3phavg) as w_3phavg,
SUM(var_3phavg) as var_3phavg, SUM(phai) as phai, SUM(phbi) as phbi,
SUM(phci) as phci, SUM(w__3phmaxavgdmd) as w__3phmaxavgdmd,
MAX(w__3phmaxavgdmd) as w__3phmaxavgdmd_max, SUM(var__3phmaxavgdmd) as var__3phmaxavgdmd,
SUM(w_3phmaxavgdmd) as w_3phmaxavgdmd, SUM(var_3phmaxavgdmd) as var_3phmaxavgdmd,
date_time FROM `meter_data` WHERE (`date_time_i` >= #{@from_date} AND `date_time_i` <= #{@to_date}
AND building = '#{@building_info}') GROUP by tenant, meter_id, date_time
ORDER BY `meter_data`.`date_time_i` ASC")
sleep 0.5
@meter_sum = MeterData.find_by_sql("SELECT meter_id, SUM(van) as van, SUM(vbn) as vbn, SUM(vcn) as vcn,
SUM(ia) as ia, SUM(ib) as ib, SUM(ic) as ic, SUM(w3pht) as w3pht,
SUM(pf3pht) as pf3pht, SUM(f) as f, SUM(whrec) as whrec,
SUM(whtot) as whtot, SUM(varhrec) as varhrec, SUM(varhtot) as varhtot,
SUM(whintvlrec) as whintvlrec, SUM(whintvldel) as whintvldel,
SUM(w__phavg) as w__phavg, SUM(var__3phavg) as var__3phavg,
SUM(w_3phavg) as w_3phavg, SUM(var_3phavg) as var_3phavg,
SUM(phai) as phai, SUM(phbi) as phbi, SUM(phci) as phci,
SUM(w__3phmaxavgdmd) as w__3phmaxavgdmd, MAX(w__3phmaxavgdmd) as w__3phmaxavgdmd_max,
SUM(var__3phmaxavgdmd) as var__3phmaxavgdmd, SUM(w_3phmaxavgdmd) as w_3phmaxavgdmd,
SUM(var_3phmaxavgdmd) as var_3phmaxavgdmd, date_time FROM `meter_data`
WHERE (`date_time_i` >= #{@from_date} AND `date_time_i` <= #{@to_date} AND building = '#{@building_info}')
GROUP by meter_id ORDER BY `meter_data`.`date_time_i` ASC")
@meter_data_max = MeterData.find_by_sql("SELECT tenant, meter_id, MAX(w__3phmaxavgdmd) as w__3phmaxavgdmd,
date_time FROM `meter_data` WHERE (`date_time_i` >= #{@from_date} AND `date_time_i` <= #{@to_date}
AND building = '#{@building_info}') GROUP by tenant, meter_id
ORDER BY `meter_data`.`date_time_i` ASC")
sleep 0.5
@uniq_meter_id = MeterData.select(:meter_id).where("`date_time_i` >= #{@from_date} AND `date_time_i` <= #{@to_date} AND building = '#{@building_info}'").uniq(:meter_id)
#### database fetch ###########
p = Axlsx::Package.new
wb = p.workbook
wb.styles do |s|
styles_hash = AxlsxStylesHash.get s
wb.add_worksheet(:name => "Building data details") do |sheet|
$row_num = 0
meter_extra = MeterExtra.new
columns_ = []; @columns.each { |s| columns_ << "#{s}_".to_s }
sheet.add_row []
$row_num += 1
sheet.add_row ["Building report of #{#building_f_name} #{#from_date} to #{#to_date}"], :style => styles_hash[:heading_cell]
$row_num += 1
sheet.add_row []
$row_num += 1
sheet.add_row []
$row_num += 1
sheet.merge_cells("A2:Q2")
new_columns_ = []
new_columns_ << "Date & Time" if columns_.include?('date_time_')
new_columns_ << "Building" if columns_.include?('building_')
new_columns_ << "Tenant" if columns_.include?('tenant_')
new_columns_ << "Meter ID" if columns_.include?('meter_id_')
new_columns_ << "Meter Name" if columns_.include?('meter_id_')
new_columns_ << "van" if columns_.include?('van_')
new_columns_ << "vbn" if columns_.include?('vbn_')
new_columns_ << "vcn" if columns_.include?('vcn_')
new_columns_ << "ia" if columns_.include?('ia_')
new_columns_ << "ib" if columns_.include?('ib_')
new_columns_ << "ic" if columns_.include?('ic_')
new_columns_ << "w3pht" if columns_.include?('w3pht_')
new_columns_ << "pf3pht" if columns_.include?('pf3pht_')
new_columns_ << "f" if columns_.include?('f_')
new_columns_ << "whrec" if columns_.include?('whrec_')
new_columns_ << "whtot" if columns_.include?('whtot_')
new_columns_ << "varhrec" if columns_.include?('varhrec_')
new_columns_ << "varhtot" if columns_.include?('varhtot_')
new_columns_ << "whintvlrec" if columns_.include?('whintvlrec_')
new_columns_ << "whintvldel" if columns_.include?('whintvldel_')
new_columns_ << "w+phavg" if columns_.include?('w__phavg_')
new_columns_ << "var+3phavg" if columns_.include?('var__3phavg_')
new_columns_ << "w-3phavg" if columns_.include?('w_3phavg_')
new_columns_ << "var-3phavg" if columns_.include?('var_3phavg_')
new_columns_ << "phai" if columns_.include?('phai_')
new_columns_ << "phbi" if columns_.include?('phbi_')
new_columns_ << "phci" if columns_.include?('phci_')
new_columns_ << "w+3phmaxavgdmd" if columns_.include?('w__3phmaxavgdmd_')
new_columns_ << "var+3phmaxavgdmd" if columns_.include?('var__3phmaxavgdmd_')
new_columns_ << "w-3phmaxavgdmd" if columns_.include?('w_3phmaxavgdmd_')
new_columns_ << "var-3phmaxavgdmd" if columns_.include?('var_3phmaxavgdmd_')
sheet.add_row new_columns_, :style => styles_hash[:green_bold_border_cell]
$row_num += 1
t = 1; @meter_data.each do |m|
new_columns_data = []
if columns_.include?('date_time_')
m_date_time = m['date_time']
new_columns_data << ( m_date_time.nil? ? '--' : m_date_time)
end
if columns_.include?('building_')
m_building = m['building']
new_columns_data << ( m_building.nil? ? '--' : building_names["#{m_building}"])
end
if columns_.include?('tenant_')
m_tenant = m['tenant']
new_columns_data << ( m_tenant.nil? ? '--' : tenant_names["#{m_tenant}"])
end
if columns_.include?('meter_id_')
m_meter_id = m['meter_id']
new_columns_data << ( m_meter_id.nil? ? '--' : m_meter_id)
end
if columns_.include?('meter_id_')
m_meter_id = m['meter_id']
new_columns_data << ( meter_extra.get_meter_name(m_meter_id.nil? ? '--' : m_meter_id) )
end
if columns_.include?('van_')
m_van = m.van
new_columns_data << ( m_van.nil? ? '--' : m_van )
end
if columns_.include?('vbn_')
m_vbn = m.vbn
new_columns_data << ( m_vbn.nil? ? '--' : m_vbn )
end
if columns_.include?('vcn_')
m_vcn = m.vcn
new_columns_data << ( m_vcn.nil? ? '--' : m_vcn )
end
if columns_.include?('ia_')
m_ia = m.ia
new_columns_data << ( m_ia.nil? ? '--' : m_ia )
end
if columns_.include?('ib_')
m_ib = m.ib
new_columns_data << ( m_ib.nil? ? '--' : m_ib )
end
if columns_.include?('ic_')
m_ic = m.ic
new_columns_data << ( m_ic.nil? ? '--' : m_ic )
end
if columns_.include?('w3pht_')
m_w3pht = m.w3pht
new_columns_data << ( m_w3pht.nil? ? '--' : m_w3pht )
end
if columns_.include?('pf3pht_')
m_pf3pht = m.pf3pht
new_columns_data << ( m_pf3pht.nil? ? '--' : m_pf3pht )
end
if columns_.include?('f_')
m_f = m.f
new_columns_data << ( m_f.nil? ? '--' : m_f )
end
if columns_.include?('whrec_')
m_whrec = m.whrec
new_columns_data << ( m_whrec.nil? ? '--' : m_whrec )
end
if columns_.include?('whtot_')
m_whtot = m.whtot
new_columns_data << ( m_whtot.nil? ? '--' : m_whtot )
end
if columns_.include?('varhrec_')
m_varhrec = m.varhrec
new_columns_data << ( m_varhrec.nil? ? '--' : m_varhrec )
end
if columns_.include?('varhtot_')
m_varhtot = m.varhtot
new_columns_data << ( m_varhtot.nil? ? '--' : m_varhtot )
end
if columns_.include?('whintvlrec_')
m_whintvlrec = m.whintvlrec
new_columns_data << ( m_whintvlrec.nil? ? '--' : m_whintvlrec )
end
if columns_.include?('whintvldel_')
m_whintvldel = m.whintvldel
new_columns_data << ( m_whintvldel.nil? ? '--' : m_whintvldel )
end
if columns_.include?('w__phavg_')
m_w__phavg = m.w__phavg
new_columns_data << ( m_w__phavg.nil? ? '--' : m_w__phavg )
end
if columns_.include?('var__3phavg_')
m_var__3phavg = m.var__3phavg
new_columns_data << ( m_var__3phavg.nil? ? '--' : m_var__3phavg )
end
if columns_.include?('w_3phavg_')
m_w_3phavg = m.w_3phavg
new_columns_data << ( m_w_3phavg.nil? ? '--' : m_w_3phavg )
end
if columns_.include?('var_3phavg_')
m_var_3phavg = m.var_3phavg
new_columns_data << ( m_var_3phavg.nil? ? '--' : m_var_3phavg )
end
if columns_.include?('phai_')
m_phai = m.phai
new_columns_data << ( m_phai.nil? ? '--' : m_phai )
end
if columns_.include?('phbi_')
m_phbi = m.phbi
new_columns_data << ( m_phbi.nil? ? '--' : m_phbi )
end
if columns_.include?('phci_')
m_phci = m.phci
new_columns_data << ( m_phci.nil? ? '--' : m_phci )
end
if columns_.include?('w__3phmaxavgdmd_')
m_w__3phmaxavgdmd = m.w__3phmaxavgdmd
new_columns_data << ( m_w__3phmaxavgdmd.nil? ? '--' : m_w__3phmaxavgdmd )
end
if columns_.include?('var__3phmaxavgdmd_')
m_var__3phmaxavgdmd = m.var__3phmaxavgdmd
new_columns_data << ( m_var__3phmaxavgdmd.nil? ? '--' : m_var__3phmaxavgdmd )
end
if columns_.include?('w_3phmaxavgdmd_')
m_w_3phmaxavgdmd = m.w_3phmaxavgdmd
new_columns_data << (m_w_3phmaxavgdmd.nil? ? '--' : m_w_3phmaxavgdmd )
end
if columns_.include?('var_3phmaxavgdmd_')
m_var_3phmaxavgdmd = m.var_3phmaxavgdmd
new_columns_data << ( m_var_3phmaxavgdmd.nil? ? '--' : m_var_3phmaxavgdmd )
end
if t == 1
sheet.add_row new_columns_data , :style => styles_hash[:simple_green_cell], :widths=>[20]
else
sheet.add_row new_columns_data, :style => styles_hash[:simple_white_cell], :widths=>[20]
end
$row_num += 1
t = 0 if t == 2
t += 1
puts $row_num
m = nil
end
@meter_data = nil
## logo ##
ReportBillLogo.put sheet, $row_num
sheet = nil;
end
## summary calculation ##
wb.add_worksheet(:name => "Summary") do |sheet|
$row_num = 0
sheet.add_row []
$row_num += 1
sheet.add_row ["Summary report of #{#building_f_name} #{#from_date} to #{#to_date}"], :style => styles_hash[:heading_cell]
$row_num += 1
sheet.add_row []
$row_num += 1
sheet.add_row []
$row_num += 1
sheet.merge_cells("A1:Q1")
columns_ = []; @columns.each { |s| columns_ << "#{s}_".to_s }
new_columns_ = []
new_columns_ << "Meter ID"
new_columns_ << "Meter Name"
new_columns_ << "Max KW"
sheet.add_row new_columns_, :style => styles_hash[:green_bold_border_cell]
$row_num += 1
t = 0
@meter_data_max.each do |m|
new_columns_data = []
new_columns_data << m.meter_id
new_columns_data << meter_names["#{m.meter_id}"]
new_columns_data << ( m.w__3phmaxavgdmd.nil? ? '' : m.w__3phmaxavgdmd )
if t == 1
sheet.add_row new_columns_data, :style => styles_hash[:simple_green_cell], :widths=>[25]
else
sheet.add_row new_columns_data, :style => styles_hash[:simple_white_cell], :widths=>[25]
end
$row_num += 1
t = 0 if t == 2
t += 1
m = nil;
end
## logo ##
ReportBillLogo.put sheet, $row_num
sheet = nil
end
## summary calculation ##
## summary calculation ##
wb.add_worksheet(:name => "Energy consumed") do |sheet|
$row_num = 0
sheet.add_row []
$row_num += 1
sheet.add_row ["Energy consumed report of #{#building_f_name} #{#from_date} to #{#to_date}"], :style => styles_hash[:heading_cell]
$row_num += 1
sheet.add_row []
$row_num += 1
sheet.add_row []
$row_num += 1
sheet.merge_cells("A2:Q2")
engery = []
@uniq_meter_id.each do |f|
start = MeterData.find_by_sql("select whtot from meter_data where `date_time_i` >= #{@from_date} AND meter_id = '#{f.meter_id}' ORDER BY date_time_i ASC limit 1")
eand = MeterData.find_by_sql("select whtot from meter_data where `date_time_i` <= #{@from_date} AND meter_id = '#{f.meter_id}' ORDER BY date_time_i DESC limit 1")
sleep 0.009
begin
engery << (eand.last.whtot.to_i - start.first.whtot.to_i)
rescue
engery << '--'
end
start = nil; eand = nil;
f = nil;
end
columns_ = []; @columns.each { |s| columns_ << "#{s}_".to_s }
new_columns_ = []
new_columns_ << "Meter ID"
new_columns_ << "Meter Name"
new_columns_ << "Subtraction result"
new_columns_ << "Aggregate result"
sheet.add_row new_columns_, :style => styles_hash[:green_bold_border_cell]
$row_num += 1
flage = 0
t = 0
@meter_sum.each do |m|
new_columns_data = []
new_columns_data << m.meter_id
new_columns_data << meter_names["#{m.meter_id}"]
new_columns_data << engery[flage]
new_columns_data << m.whintvlrec
if t == 1
sheet.add_row new_columns_data, :style => styles_hash[:simple_green_cell], :widths=>[25]
else
sheet.add_row new_columns_data, :style => styles_hash[:simple_white_cell], :widths=>[25]
end
$row_num += 1
flage += 1
t = 0 if t == 2
t += 1
m = nil
end
## logo ##
ReportBillLogo.put sheet, $row_num
sheet = nil;
end
end
$file_name = "tmp/#{#building_info} . #{#from_date} - #{#to_date} _ #{Time.new.to_i}.xlsx"
p.serialize($file_name)
p = nil; wb = nil;
@meter_data = nil; @meter_data_max = nil; @meter_sum = nil;
return $file_name
end
# total records = 500,000
When the Ruby process runs out of memory it requests a big chunk of memory from the system, called a heap slab. This big chunk is then internally divided into many small slots, which hold your variables and your code. When you assign nil to some variable, the garbage collector will mark its slot as empty, but the whole slab is never returned to the system until the process is terminated.
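A quick way to observe this yourself (purely a diagnostic sketch, not a fix) is to force a full GC after the export and compare what Ruby reports internally with the operating system's view of the process:
require 'objspace'
download_data(params)                               # the expensive export from the question
GC.start(full_mark: true, immediate_sweep: true)    # force a full garbage collection
puts GC.stat.inspect                                # Ruby-level slot counters shrink after the GC...
puts ObjectSpace.memsize_of_all                     # ...as does the memory Ruby tracks internally,
puts `ps -o rss= -p #{Process.pid}`                 # but the OS-reported RSS typically stays high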
I'm not exactly sure if this is actually the problem that you're having, but there's a known issue with Ruby 2.1.x's generational garbage collection where allocated objects in memory that are supposed to be "short-lived" end up being accidentally promoted to "long-lived" instead, causing memory to bloat because the garbage collector doesn't run frequently enough to clean up these objects:
Koichi Sasada, the author of the generational GC in Ruby 2.1:
“Some ‘short-lived’ young objects will be promoted to ‘old-gen’ accidentally….if such ‘short-lived’ objects consume huge memory we need to free such objects.”
A workaround currently being mentioned a lot in the Ruby community is to change a default Ruby setting so that it garbage collects more often (see the usage sketch after the links below):
Expect memory doubling with Ruby 2.1.1. Not happy with that? You have two options:
Tune it down by reducing RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR. At 1 your memory consumption will be on par with 2.0; it ships with a default of 2.
Option 2: wait for a future release of Ruby; this will be fixed in 2.2 and maybe patched a bit more in 2.1.2. See:
https://bugs.ruby-lang.org/issues/9607 and
http://vimeo.com/89491942 and
https://speakerdeck.com/samsaffron/why-ruby-2-dot-1-excites-me
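For example, one way to apply that tuning is to set the variable in the environment before booting the app server; 1.3 here is only an illustrative value (1.0 gives the 2.0-like behaviour mentioned above):
RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.3 bundle exec rails server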
