How to load the data from a .yml file to database? - ruby-on-rails

There is a table questions, and a data file questions.yml. Assume there is no 'Question' model.
'questions.yml' contains some records dumped from the table:
---
questions_001:
  title: ttt1
  content: ccc1
questions_002:
  title: ttt2
  content: ccc2
I want to load the data from the yml file and insert it into the database. But I can't use rake db:fixtures:load, because it treats the content as an ERB template, which is not what I want.
So I want to write another rake task, to load the data manually.
I can read the records by:
File.open("#{RAILS_ROOT}/db/fixtures/#{table_name}.yml", 'r') do |file|
  YAML::load(file).each do |record|
    # how to insert the record??
  end
end
But I don't know how to insert them.
Edit:
I have tried:
Class.new(ActiveRecord::Base).create(record)
and
class Dummy < ActiveRecord::Base {}
Dummy.create(record)
But nothing was inserted into the database.

Try this after loading the data from the yml file into records:
class Question < ActiveRecord::Base
  # Question model just to import the yml file
end

records.each { |record| Question.create(record) }
You can simply create a model just for the import; you don't need to create app/models/question.rb. Just put the code above in the script responsible for the import.
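If you want this in a rake task, as the question mentions, here is a minimal sketch; the file name, task name, and the use of self.table_name= are assumptions, not part of the original answer, and presume a reasonably recent Rails:
# lib/tasks/import_questions.rake (file and task names are illustrative)
namespace :db do
  desc "Load db/fixtures/questions.yml into the questions table"
  task import_questions: :environment do
    # Throwaway model class used only for the import, pointed explicitly
    # at the questions table.
    question_class = Class.new(ActiveRecord::Base) do
      self.table_name = "questions"
    end

    # Unlike db:fixtures:load, YAML.load_file does not run the file through ERB.
    records = YAML.load_file(Rails.root.join("db", "fixtures", "questions.yml"))
    records.each_value { |attrs| question_class.create!(attrs) }
  end
end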
UPDATE:
You can use the following function:
def create_class(class_name, superclass, &block)
  klass = Class.new superclass, &block
  Object.const_set class_name, klass
end
source
File.open("#{RAILS_ROOT}/db/fixtures/#{table_name}.yml", 'r') do |file|
  YAML::load(file).each do |record|
    model_name = table_name.singularize.camelize
    create_class(model_name, ActiveRecord::Base) do
      set_table_name table_name.to_sym
    end
    Kernel.const_get(model_name).create(record)
  end
end
To use the connection directly you can use the following:
ActiveRecord::Base.connection.execute("YOUR SQL CODE")
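For example, a minimal sketch of inserting one row into the questions table this way; the values are illustrative, and connection.quote is used to escape them:
conn = ActiveRecord::Base.connection
title   = conn.quote("ttt1")   # quote escapes and SQL-quotes the value
content = conn.quote("ccc1")
conn.execute("INSERT INTO questions (title, content) VALUES (#{title}, #{content})")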

Got it working thanks to @jigfox's answer. I had to modify it a bit for the full implementation, now with Rails 4.
table_names = Dir.glob(Rails.root + 'app/models/**.rb').map { |s| Pathname.new(s).basename.to_s.gsub(/\.rb$/, '') }

table_names.each do |table_name|
  table_name = table_name.pluralize
  path = "#{Rails.root}/db/fixtures/#{table_name}.yml"
  if File.exists?(path)
    File.open(path, 'r') do |file|
      y = YAML::load(file)
      if !y.nil? and y
        y.each do |record|
          model_name = table_name.singularize.camelize
          rec = record[1]
          rec.tap { |hs| hs.delete("id") }
          Kernel.const_get(model_name).create(rec)
        end
      end
    end
  end
end

This loads fixtures into the current RAILS_ENV, which, by default, is development.
$ rake db:fixtures:load
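To load the fixtures into another environment instead, set RAILS_ENV explicitly, for example:
$ RAILS_ENV=test rake db:fixtures:load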

Related

Creating a task in ruby on rails

I want to create a cron task for a daily report. I need guidance on where to create my class in my project (in which folder) and how to instantiate an object of that class from the rails console. Will that class inherit from ApplicationController? I would also like to know, since I will be querying my database, whether my models will be directly accessible in this file or whether I have to include them somehow, like we do in Django.
I have created a class at /lib/tasks/daily_report.rb, but I am unable to understand how I will use that file to create a task.
module Reports
  class Report
    class << self
      def collect_data
        row_data = []
        headers = ["Mobile", "Buildings", "Owners", "Tenants", "Members", "Total People"]
        row_data.push(*headers)
        puts "in side collect data"
        date = Date.today.to_s
        mobile = ["mobiles"]
        for i in mobile do
          row = []
          row << i
          build_count = Buildings.where(created_at: date, added_by: i).count
          row << build_count
          puts "build_count"
          owners_count = Residents.where(created_at: date, added_by: i, role: "owner").count
          row << owners_count
          puts "owners_count"
          tenants_count = Residents.where(created_at: date, added_by: i, role: "tenant").count
          row << tenants_count
          members_count = MemeberRelations.where(created_at: date, added_by: i).count
          row << members_count
          total_people = owners_count + tenants_count + members_count
          row << total_people
          row_data << row
        end
        puts row_data
        return row_data
      end

      def generate_csv()
        puts "walk away"
        row_data = self.collect_data
        CSV.open('/home/rajdeep/police-api/daily_report.csv', 'w') do |csv|
          row_data.each { |ar| csv << ar }
        end
      end
    end
  end
end
If you wish to manage cron tasks from Rails, try the whenever gem.
Add it to your Gemfile,
Gemfile
gem 'whenever', require: false
Run the initialize task from the root of your app:
$ bundle exec wheneverize .
This will create an initial config/schedule.rb file for you (as long
as the config folder is already present in your project)
(from the gem docs).
After that, set the proper schedule parameters in config/schedule.rb. For example:
config/schedule.rb
every :hour do # Many shortcuts available: :hour, :day, :month, :year, :reboot
  runner "Reports::Report.generate_csv"
end
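Once the schedule is defined, whenever writes it into your crontab; the command from the gem's docs is:
$ bundle exec whenever --update-crontab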
More syntax options for schedule.rb can be found in the gem's documentation.
UPDATE AFTER COMMENTS
Hopefully you're still in a Rails context. Create the file in the public folder at the application root path:
result_file = "#{Rails.root}/public/cards-excluded.csv"

CSV.open(result_file, 'w') do |csv|
  row_data.each { |ar| csv << ar }
end
ANOTHER UPDATE LATER
Okay, although this is not relevant to the original question, let's try to solve your problem.
We'll proceed on the assumption that you have a Rails application, not a custom Ruby library.
First, create a module in the file your_rails_app/lib/reports.rb:
module Reports
  class Report
    class << self
      def collect_data
        # your current code and line below, explicit return
        return row_data
      end

      def generate_csv
        row_data = collect_data # btw unnecessary assignment
        CSV.open('/home/rajdeep/police-api/daily_report.csv', 'w') do |csv|
          row_data.each { |ar| csv << ar }
        end
      end
    end
  end
end
Second, make sure that your lib files are on the autoload path. Check it in your config/application.rb:
config.autoload_paths += %W(#{config.root}/lib/*)
Third, use the Reports module this way (> means that you're at the rails console, rails c):
> Reports::Report.generate_csv

Rails: Helper method behaving differently between console and application

I am trying to write a helper method that can download a CSV file from S3 storage, read the first few rows of the file and then save those first few rows to a new local file.
Everything works well when I include the helper in the rails console and call the methods on the object, but when I call it in exactly the same way through the controller, the local file contains all of the rows from the S3 file rather than just the first few.
My code, in the helper file (I've replaced AWS credentials with comments for the purpose of posting the question):
def download_file(data_source)
  s3 = Aws::S3::Client.new(#API keys etc.)
  File.open(data_source.file.data['id'], 'wb') do |file|
    reap = s3.get_object({ bucket: #Bucket Name, key: 'store/' + data_source.file.data['id'] }, target: file)
  end
end

def reduce_csv(filename)
  data = CSV.open(filename, 'r') { |csv| csv.first(3) }
  csv_string = CSV.generate do |csv|
    data.each do |d|
      csv << d
    end
  end
  File.open('test.csv', 'wb') do |file|
    file << csv_string
  end
end

def make_small_data_source(data_source)
  download_file(data_source)
  reduce_csv(data_source.file.data['id'])
end
And in the controller:
if @data_source.save
  make_small_data_source(@data_source)
Any ideas would be much appreciated!

using rails how do I import CSV onto mongodb

I am trying to import CSV files into my mongodb using Ruby on Rails.
I know how to do it from the shell/terminal, but not how to do it from Rails.
Following the tutorial Using MongoDB to store and retrieve CSV files content in Ruby as an example, you can store all the values as strings, so you only need to read the CSV file; MongoDB dynamically creates all the needed attributes on the objects that represent each row of the given CSV file:
class StoredCSV
  include Mongoid::Document
  include Mongoid::Timestamps

  def self.import!(file_path)
    columns = []
    instances = []
    CSV.foreach(file_path) do |row|
      if columns.empty?
        # We don't want attributes with whitespace
        columns = row.collect { |c| c.downcase.gsub(' ', '_') }
        next
      end
      instances << create!(build_attributes(row, columns))
    end
    instances
  end

  private

  def self.build_attributes(row, columns)
    attrs = {}
    columns.each_with_index do |column, index|
      attrs[column] = row[index]
    end
    attrs
  end
end
Usage
StoredCSV.import!('data.csv')
stored_data = StoredCSV.all
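Reading the values back is then ordinary attribute access; a small sketch, assuming the CSV had a column named "name" and that dynamic fields are enabled for your Mongoid version:
# "name" is an illustrative column; dynamic attributes can be read with the [] accessor.
StoredCSV.all.each do |row|
  puts row["name"]
end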

Ruby, Tempfile, CSV

I have the Resque job below that produces a CSV file and sends it to a mailer. I want to validate that the CSV file has data so I do not email blank files. For some reason, when I write a method outside of the perform method, it will not work. For example, the code below prints "invalid" when I know the CSV file has data on the first line. If I uncomment the line below ensure, it works properly; however, I want to extract this check of the file into a separate method. Is this correct?
class ReportJob
  @queue = :report_job

  def self.perform(application_id, current_user_id)
    user = User.find(current_user_id)
    application = Application.find(application_id)
    transactions = application.transactions
    file = Tempfile.open(["#{Rails.root}/tmp/", ".csv"]) do |csv|
      begin
        csv_file = CSV.new(csv)
        csv_file << ["Application", "Price", "Tax"]
        transactions.each do |transaction|
          csv_file << [application.name, transaction.price, transaction.tax]
        end
      ensure
        ReportJob.email_report(user.email, csv_file)
        #ReportMailer.send_report(user.email, csv_file).deliver
        csv_file.close(unlink=true)
      end
    end
  end

  def self.email_report(email, csv)
    array = csv.to_a
    if array[1].blank?
      puts "invalid"
    else
      ReportMailer.send_report(email, csv).deliver
    end
  end
end
You should invoke your method as such:
ReportJob.email_report(email, csv)
Otherwise, get rid of the self in:
def self.email_report(email, csv)
# your implementation here.
end
and define your method as follows:
def email_report(email, csv)
# your implementation.
end
This is the difference between class methods and instance methods.
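A minimal sketch of that difference, with illustrative names:
class MethodsDemo
  # Class method: invoked on the class itself, e.g. MethodsDemo.class_level
  def self.class_level
    "class method"
  end

  # Instance method: invoked on an instance, e.g. MethodsDemo.new.instance_level
  def instance_level
    "instance method"
  end
end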

Exporting ActiveRecord objects into POROs

I'm developing a "script generator" to automatize some processes at work.
It has a Rails application running on a server that stores all data needed to make the script and generates the script itself at the end of the process.
The problem I am having is how to export the data from the ActiveRecord format to Plain Old Ruby Objects (POROs) so I can deal with them in my script with no database support and a pure-ruby implementation.
I thought about YAML, CSV, or something like that to export the data, but it would be a painful process to update these structures if the process changes. Is there a simpler way?
Thanks!
By "update these structures if the process changes", do you mean changing the code that reads and writes the CSV or YAML data when the fields in the database change?
The following code writes and reads any AR object to/from CSV (requires the FasterCSV gem):
def load_from_csv(csv_filename, poro_class)
  headers_read = []
  first_record = true
  num_headers = 0
  transaction do
    FCSV.foreach(csv_filename) do |row|
      if first_record
        headers_read = row
        num_headers = headers_read.length
        first_record = false
      else
        hash_values = {}
        for col_index in 0...num_headers
          hash_values[headers_read[col_index]] = row[col_index]
        end
        # Assumes that your PORO has a constructor that accepts a hash.
        # If not, you can do something like
        # new_poro_obj.send(headers_read[col_index], row[col_index]) in the loop above.
        new_poro_obj = poro_class.new(hash_values)
        # work with your new_poro_obj
      end
    end
  end
end
# objects is a list of ActiveRecord objects of the same class
def dump_to_csv(csv_filename, objects)
  FCSV.open(csv_filename, 'w') do |csv|
    # get column names and write them as headers
    col_names = objects[0].class.column_names()
    csv << col_names
    objects.each do |obj|
      col_values = []
      col_names.each do |col_name|
        col_values.push obj[col_name]
      end
      csv << col_values
    end
  end
end
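For reference, a minimal sketch of a PORO with a hash-accepting constructor, which is what load_from_csv above assumes; the class and attribute names are illustrative:
class QuestionPoro
  attr_accessor :title, :content

  # Accepts a hash of attribute names to values, e.g. the hash built in load_from_csv.
  def initialize(attrs = {})
    attrs.each { |key, value| send("#{key}=", value) }
  end
end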
