Hi all. I'm currently adding to my seeds.rb file in Rails using the Faker gem. I'm wondering: how do you get the fake data to follow "the rules" I want?
I'm building a basketball statistics application. I want my seeds file to create 300 sets of statistics where all the criteria I have set in the Stat model are met. Right now, only 7-9 of the 300 sets of data end up being created. Is there a way to get seeds to skip the records that fail validation and keep going until 300 valid ones are created?
For instance, I want field goal attempts (fga in my db) to be greater than or equal to field goals made (fg). (I have this "rule" set up in my model.) When I do this in my seeds file:
# seeds.rb snippet
300.times do
  stat = Stat.create(
    fg: Faker::Number.between(0, 15),
    fga: Faker::Number.between(0, 20)
    # more stats below
  )
end
how do I make sure that fga is >= fg every time?
Do I have to say specifically in seeds that fg can't be greater than fga? Or do I set a method in my stat.rb model file that Faker will follow? (I have a few other rules in my model; otherwise I would just set the fake numbers differently.)
Thanks
One way is to keep looping until 300 valid records exist; create (without the bang) simply skips rows that fail the model validations:
until Stat.count >= 300
  Stat.create(
    fg: Faker::Number.between(0, 15),
    fga: Faker::Number.between(0, 20)
    # more stats below
  )
end
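Alternatively, you can make the rule hold by construction instead of retrying: generate fga first, then pick fg from 0 up to fga. Here is a sketch in plain Ruby; random_shooting_line is a hypothetical helper name, and the 0..20 / 0..15 bounds are just the ones from the question. In seeds.rb you would pass the resulting hash to Stat.create!.

```ruby
# Generate fg/fga pairs where fga >= fg is true by construction:
# pick the attempts first, then cap the makes at that number.
def random_shooting_line
  fga = rand(0..20)              # field goal attempts
  fg  = rand(0..[fga, 15].min)   # makes can never exceed attempts (or 15)
  { fg: fg, fga: fga }
end

# In seeds.rb (assuming a Stat model with fg and fga columns):
# 300.times { Stat.create!(random_shooting_line) }
```

This avoids wasting iterations on records the model will reject, so exactly 300 rows are created.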
I have a table called "Scores" which has 4 columns, "first", "second", "third", and "average", for keeping a record of a user's scores.
When a user creates the record initially, he can leave the "average" column blank, then edit all 3 scores later.
After editing, the user can see the computed average (or sum, or any other calculation result) on his show page, since I have:
def show
  @ave = (@score.first + @score.second + @score.third)/3
end
However, @ave is not in the database; how can I save @ave into the "average" column of my database?
Ideally, it would be best if the computation takes place before updating the database, so all 4 values can be saved together. It might have something to do with Active Record callbacks, but I don't know how to do that.
Second approach: I think I need a "trigger" in the database so that it computes and updates the "average" column as soon as the other 3 columns are updated. If this is how you do it, please let me know, along with the advantages compared with solution number 1.
Last approach: since the user already sees the average on his show page, I don't have to update the computed "average" column immediately. I think I can leave this to delayed_job or a background job. If this is how you do it, please let me know how.
Thank you in advance! (Ruby 2.3, Rails 5.0.1, PostgreSQL 9.5)
Unless you really do need the average stored in the database for some reason, I would add a method to the Score model:
def average
  (first + second + third)/3.0
end
If one or more scores might not be present, I would use:
def average
  actual_scores = [first, second, third].compact
  return nil if actual_scores.empty?
  actual_scores.sum / actual_scores.size.to_f
end
If you do need the average saved, then I would add a before_validation callback:
before_validation do
  self.average = (first + second + third)/3.0
end
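The nil-safe computation can be tried outside the model as a plain Ruby method; average_of below is a hypothetical stand-in for the model method, with the same compact-then-divide logic:

```ruby
# Nil-safe average: ignores missing scores, returns nil when none are present.
# .to_f avoids integer division (e.g. 5 / 2 == 2 in Ruby, but 5 / 2.0 == 2.5).
def average_of(first, second, third)
  actual_scores = [first, second, third].compact
  return nil if actual_scores.empty?
  actual_scores.sum / actual_scores.size.to_f
end
```

In the model, a before_validation callback would assign this result to self.average so it is persisted along with the three scores.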
Ideas 1 and 2 are perfectly valid approaches. Idea 3 is overkill and I would strongly recommend against that approach.
In idea 1, all you need to do (in any language) is simply look at each individual value put in (not including average) and generate the average value to be included in your insert statement. It's really as simple as that.
Idea 2 requires making a trigger as follows:
CREATE OR REPLACE FUNCTION update_average()
  RETURNS trigger AS
$BODY$
BEGIN
  NEW.average = (NEW.first + NEW.second + NEW.third) / 3;
  RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
Then assign it to run on update or insert of your table:
CREATE TRIGGER compute_average
  BEFORE INSERT OR UPDATE
  ON scores
  FOR EACH ROW
  EXECUTE PROCEDURE update_average();
I have a Rails app with questions and answers, and each answer has a rating depending on certain parameters, as in the example below.
Note: I don't want this in terms of stars; I want a rating from 1 to 10 so I can take a weighted average of all the fields later and perform calculations.
How do I use write_attribute to set this data?
Error: SystemStackError: stack level too deep
def set_price_rating
  # set_price = 5
  puts case price_today
       when 1..21 then "0".to_i
       when 22..29 then "5".to_i
       when 30..39 then "7".to_i
       when 40..49 then "8".to_i
       when 50..59 then "6".to_i
       else "0".to_i
       end
  write_attribute(:price_rating, set_price_rating)
  save
end
You can try to customize this related gem:
https://github.com/wazery/ratyrate
or devote some time to understanding the concept here:
https://www.sitepoint.com/ratyrate-add-rating-rails-app/
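As for the SystemStackError itself: write_attribute(:price_rating, set_price_rating) calls set_price_rating again, recursing until the stack overflows, and the puts case prints the value instead of storing it. Capture the case result in a local variable instead. Here is a plain-Ruby sketch of just the mapping (price_rating_for is a hypothetical helper name; the model version would end with write_attribute(:price_rating, rating) followed by save):

```ruby
# Map today's price to a rating; the case expression's value is returned
# directly instead of re-calling the method that triggered the recursion.
def price_rating_for(price_today)
  case price_today
  when 1..21  then 0
  when 22..29 then 5
  when 30..39 then 7
  when 40..49 then 8
  when 50..59 then 6
  else 0
  end
end
```

Note the string-to-integer round trips like "5".to_i in the original are unnecessary; plain integer literals do the same thing.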
I use the Faker gem for seeding certain data. How can I set the max. length for a fake Company.name, and how can I set the range for a fake number?
name = Faker::Company.name
Here I would like to include the maximum length, since name has a model restriction for max. 40 characters.
code_id = Faker::Number.number
For code_id I would like a range from 1 to 50. I tried code_id = Faker::Number.number(from=1, to=50), but that seems incorrect, as seeding produced the following error:
ArgumentError: wrong number of arguments (2 for 1)
/usr/local/rvm/gems/ruby-2.1.5/gems/faker-1.4.3/lib/faker/number.rb:4:in 'number'
How should I adjust Faker to my needs?
For the name you can just cut off the extra characters of the generated one (you don't care about half-finished words there, do you?). Note that Ruby ranges are inclusive, so [0..40] would keep 41 characters; use [0, 40] to keep at most 40:
name = Faker::Company.name[0, 40]
And for the number you can use Faker::Number.between or use core ruby rand directly.
rand(1..50)
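Putting both together in plain Ruby (the company-name string below is just a stand-in for Faker::Company.name; also note that newer Faker versions expect keyword arguments for between, e.g. Faker::Number.between(from: 1, to: 50), while plain rand avoids the version issue entirely):

```ruby
# Stand-in for Faker::Company.name, which may exceed 40 characters:
name = "Extremely Long Fake Company Name International Holdings LLC"[0, 40]

# Plain Ruby range, no Faker needed:
code_id = rand(1..50)
```

Both values now satisfy the model restrictions: the name is capped at 40 characters and code_id falls in 1..50.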
You can override the data with I18n and add your own names with a short length:
faker:
  name:
    short_names: [Ben, Ava...]
Faker::Number.between
You can use:
Faker::Lorem.words(50)
It will return an array of 50 words.
https://github.com/Marak/faker.js/wiki/Basic-Random-Data
How can I assign each user in a table a random number between 1 and 9 without needing to store it in the db (to recall it later)?
Is there some way to hash their user_id into a number within a range (and get the same number for that user every time the function is called)?
I know the following is not an optimal way to do this, but it works and will deterministically return the same number between 1 and 9 for a given user, i.e. you won't need to store it in your database:
require 'digest/md5'

def unique_number_for(user)
  hash = (Digest::MD5.new << user.id.to_s).to_s
  hash.split("").map(&:to_i).detect { |a| a > 0 }
end
The obvious solution (note it returns the last digit as a string, "0" through "9", rather than a number in 1..9):
id.to_s[-1, 1]
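A more uniform variant of the same hashing idea, assuming you only need determinism (not secrecy): interpret the hex digest as an integer and reduce it modulo 9 into 1..9. bucket_for is a hypothetical helper name:

```ruby
require 'digest/md5'

# Deterministic 1..9 per user id: the same input always yields the same
# bucket, so nothing needs to be stored in the database.
def bucket_for(user_id)
  Digest::MD5.hexdigest(user_id.to_s).to_i(16) % 9 + 1
end
```

Unlike scanning the digest for the first nonzero digit, this never returns nil and spreads users roughly evenly across the nine buckets.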
I'm using Rails with the oracle_enhanced adapter to create a new interface for a legacy application.
Database migrations work successfully, but take an incredibly long time before rake finishes. The database changes happen pretty quickly (1 or 2 seconds), but the db/schema.rb dump takes over an hour to complete. (See example migration below.)
It's a relatively large schema (about 150 tables), but I'm sure it shouldn't be taking this long to dump out each table description.
Is there any way to speed this up by just taking the last schema.rb and applying the change specified in the migration to it? Or am I able to skip this schema dump altogether?
I understand this schema.rb is used to create the test database from scratch each time, but in this case there's a large chunk of the database logic in table triggers which isn't included in schema.rb anyway, so the rake tests are no good to us in any case. (That's a whole different issue that I need to sort out at some other point.)
dgs@dgs-laptop:~/rails/voyager$ time rake db:migrate
(in /home/dgs/rails/voyager)
== 20090227012452 AddModuleActionAndControllerNames: migrating ================
-- add_column(:modules, :action_name, :text)
-> 0.9619s
-> 0 rows
-- add_column(:modules, :controller_name, :text)
-> 0.1680s
-> 0 rows
== 20090227012452 AddModuleActionAndControllerNames: migrated (1.1304s) =======
real 87m12.961s
user 0m12.949s
sys 0m2.128s
After all migrations are applied to the database, rake db:migrate calls the db:schema:dump task to generate the schema.rb file from the current database schema.
db:schema:dump calls the adapter's "tables" method to get the list of all tables, then for each table calls the "indexes" and "columns" methods. You can find the SQL SELECT statements used in these methods in the activerecord-oracle_enhanced-adapter gem's oracle_enhanced_adapter.rb file. Basically it does selects from ALL% or USER% data dictionary tables to find all the information.
Initially I had issues with the original Oracle adapter when I used it with databases with a lot of different schemas (as performance might be affected by the total number of tables in the database, not just in your schema), and therefore I did some optimizations in the Oracle enhanced adapter. It would be good to find out which methods are slow in your case (I suspect it could be either the "indexes" or "columns" method, which is executed for each table).
One way to debug this issue would be to put some debug messages in the oracle_enhanced_adapter.rb file so that you can identify which method calls are taking so long.
Problem mostly solved after some digging around in oracle_enhanced_adapter.rb.
The problem came down to way too many tables in the local schema (many EBA_%, EVT_%, EMP_%, SMP_% tables had been created in there coincidentally at some point), archive tables being included in the dump, and a select from the data dictionaries taking 14 seconds to execute.
To fix the speed, I did three things:
Dropped all unneeded tables (about 250 out of 500)
Excluded archive tables from the schema dump
Cached the result of the long running query
This improved the time from the migration/schema dump for the remaining 350 tables from about 90 minutes to about 15 seconds. More than fast enough.
My code is as follows (for inspiration, not copying and pasting; this code is fairly specific to my database, but you should be able to get the idea). You need to create the temp table manually. It takes about 2 or 3 minutes to do, which is still too long to generate with each migration, and it's fairly static anyway =)
module ActiveRecord
  module ConnectionAdapters
    class OracleEnhancedAdapter
      def tables(name = nil)
        select_all("select lower(table_name) from all_tables where owner = sys_context('userenv','session_user') and table_name not like 'A!_%' escape '!' ").inject([]) do |tabs, t|
          tabs << t.to_a.first.last
        end
      end

      # TODO think of some way to automatically create the rails_temp_index table
      #
      # Table created by:
      #   create table rails_temp_index_table as
      #     SELECT lower(i.index_name) as index_name, i.uniqueness,
      #            lower(c.column_name) as column_name, i.table_name
      #     FROM all_indexes i, user_ind_columns c
      #     WHERE c.index_name = i.index_name
      #       AND i.owner = sys_context('userenv','session_user')
      #       AND NOT exists (SELECT uc.index_name FROM user_constraints uc
      #                       WHERE uc.constraint_type = 'P' and uc.index_name = i.index_name);
      def indexes(table_name, name = nil) #:nodoc:
        result = select_all(<<-SQL, name)
          SELECT index_name, uniqueness, column_name
          FROM rails_temp_index_table
          WHERE table_name = '#{table_name.to_s.upcase}'
          ORDER BY index_name
        SQL
        current_index = nil
        indexes = []
        result.each do |row|
          if current_index != row['index_name']
            indexes << IndexDefinition.new(table_name, row['index_name'], row['uniqueness'] == "UNIQUE", [])
            current_index = row['index_name']
          end
          indexes.last.columns << row['column_name']
        end
        indexes
      end
    end
  end
end