I have 2 Rails applications. The first one manages user data. The second one has read only access to the first's database to retrieve that user data.
user = FactoryBot.create(:user)
# Test user associations that only exist in the 2nd application
Obviously that line will fail as the user cannot be created due to database-level permissions. My first thought would be to open up the first database's _test database to have full write permissions, but this feels wrong as it would have different permissions than the production environment, potentially hiding/causing other issues.
What is the best approach for using FactoryBot to test aspects of the second application using mock user data from the first?
Update
One thought I had was to pre-load a dummy test database with data and ensure the 2nd application has read only access to it. Then my tests would simply query for an existing user (would need to have prior knowledge about the data inside though) instead of using FactoryBot to create ones. Is this a viable approach?
Loading a fixture would be a good choice here, as it lets you simulate the read-only database with your user data interactions.
You will need to read in a YAML or XML file whose layout mirrors your database, parse out the mappings from the file, and then you can use it.
Quick example:
def fixture_file_path(filename)
  Rails.root.join("spec/fixtures/#{filename}").to_s
end

def read_fixture_file(filename)
  File.read fixture_file_path(filename)
end
let(:fixture) { read_fixture_file("file.xml") }
Then create a subject, which will be useful as you will be calling this in many tests, and pass it the fixture:
subject(:service) { described_class.call fixture }
Then you can go from there and build the interactivity to test how you need.
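As a sketch of what "parse out the mappings" can look like: the fixture below is hypothetical YAML (not from the original question), parsed into plain hashes that stand in for the read-only user rows from the first application's database.

```ruby
require "yaml"

# Hypothetical fixture content; in a real spec this would come from
# read_fixture_file("users.yml") as defined above.
USERS_FIXTURE = <<~YAML
  alice:
    id: 1
    name: Alice
    email: alice@example.com
  bob:
    id: 2
    name: Bob
    email: bob@example.com
YAML

# Parse the fixture into plain hashes that stand in for the
# read-only user rows.
users = YAML.safe_load(USERS_FIXTURE)

puts users["alice"]["email"] # => "alice@example.com"
```

From there, tests can look up "existing" users by key without ever writing to the restricted database.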
Related
This might be a basic misunderstanding on my part. I have a bunch of logic in my app which collects sets of data from several tables and combines them into memory structures. I want to write tests for that logic.
It seems to me that fixtures, factory girl and similar tools build in-memory model instances. If I do activerecord calls, like Model.find(foo: 12) won't those apply only against records that were saved?
In most cases I agree with @mrbrdo's opinion: prefer RSpec's stub method. But as a Rails programmer, I think you should know both fixtures and stubs.
Fixtures, whether YAML files or factory girl data, are saved into the database (see your config/database.yml file for where). This is useful when you want to make sure there is ALWAYS some data in the DB during your tests, such as an "admin user" with a fixed ID.
Stubs are faster than fixtures, since nothing is saved to the DB, and they can be very useful when you want to perform a test that can't be implemented with fixtures.
So I suggest you try both of them in your real coding life, and choose between them according to the actual context.
What you are saying is not true: both fixtures and factory girl use the database. I would avoid fixtures, though; people don't usually use them nowadays.
The proper way to really write your tests would be to stub out activerecord calls, though, because this will make your tests a lot faster. What you want to test is combining data into your structures, not pulling data out of the database - that part is already tested in activerecord's tests.
So stub out the finders like this (if you are using rspec):
Model.should_receive(:find).with(foo: 12) do
Model.new(foo: 12, other_attribute: true)
end
So when the method you are testing calls Model.find(foo: 12) it will get
Model.new(foo: 12, other_attribute: true)
This is much faster than actually creating it in the database and then pulling it out, and there is no point in doing this for what you are testing - it's not important. You can also stub save on the instance and so on depending on what your method is doing. Keep in mind, retrieving data from DB and saving to DB is all already tested in activerecord's tests, there is no point for you to re-do these tests - just focus on your specific logic.
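The answer above uses RSpec's old `should_receive` syntax. The core idea, replacing the finder for the duration of a test so it returns an in-memory instance instead of hitting the database, can be sketched in plain Ruby without RSpec. The `Model` class below is a hypothetical stand-in, not part of the original answer.

```ruby
# A stand-in model class (hypothetical; in a real app this would be
# an ActiveRecord model whose find method queries the database).
class Model
  attr_reader :attributes

  def initialize(attributes)
    @attributes = attributes
  end

  def self.find(conditions)
    raise "would hit the database"
  end
end

# Stub out the finder: return an in-memory instance instead of
# querying the database, mirroring the RSpec stub shown above.
Model.define_singleton_method(:find) do |conditions|
  Model.new(conditions.merge(other_attribute: true))
end

record = Model.find(foo: 12)
puts record.attributes[:foo] # => 12
```

The test code under inspection never notices the difference, which is exactly why the spec runs faster.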
FactoryGirl supports several build strategies, including one where records are saved to the database.
It's straightforward: FactoryGirl.create(:foo) will create a foo and save it to the database, whereas FactoryGirl.build(:foo) will only create the in-memory version of that object.
More information about build strategies is available here: https://github.com/thoughtbot/factory_girl/blob/master/GETTING_STARTED.md, in the "Using factories" section.
I am using Ruby on Rails 3.0.9, RSpec 2, and FactoryGirl, and I would like to know how I should proceed regarding factory usage for testing purposes.
I have seeded the database (using the /db/seeds.rb file) with some data that is necessary to initialize my application and make it work. Now I am in trouble because I implemented/created some factories, and when I instantiate them (e.g. Factory(:user), Factory(:user_authorization), ...) they are "combined" with the seeded data present in the database.
So, the question is: should I use factories exclusively (that is, use only factory data and ignore the test database data), or can I use them in "combination" with the seeded data in the database? That is, should I also consider the database's seeded data, or should I implement/emulate all seeded data with factories?
In my projects I use seeding for the data my application really needs to get started. E.g. the first user (admin), domain tables, ... For these I do not use factories.
Secondly I use factories during testing to simulate any data that should be present.
In my opinion those co-exist perfectly. I use the factories when the contents of the data is actually not that important (there has to be some user, some post, some comment ...), but with the seeding the actual data in the database is very important, and if I would be using factories for that, I would be redefining every attribute anyway, so for me there would be no benefit of using a factory.
I hope this helps.
You can use factory_girl within seed.rb. This post explains it a little more: http://xtargets.com/2011/02/06/seeding-your-rails-db-with-factory-girl/
I am creating a SAAS app using Rails 3. When a user creates a new account, the database needs to populate with a lot of data. Some of the data will be specific to the newly created account. I don't necessarily want to do this all with Models within the controller "sign up" action. What would be the best way to do this?
From the sounds of things you should be using a callback within your User model. Most likely: a before_create or after_create (depending on your exact needs). You can then have the user model handle the creation of the account specific data, rather than your controller and thus adhere to the ideals of 'fat-model, skinny-controller'.
class User < ActiveRecord::Base
after_create :setup_account_data
private
def setup_account_data
# create other data as required
end
end
Pretty simple really: after the user model is created, the setup_account_data method will be called. The other available callbacks are listed in the ActiveRecord callbacks documentation.
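The flow can be sketched without Rails, assuming a hypothetical setup_account_data that seeds a trial plan: create saves the record and then fires the after-create hook, the way ActiveRecord invokes the callback in the model above.

```ruby
# Minimal sketch of the after_create flow, without Rails.
# The trial-plan data below is hypothetical, not from the answer.
class User
  attr_reader :account_data

  def self.create(attrs = {})
    user = new
    user.save(attrs)
    user.send(:setup_account_data) # fires after the save, like after_create
    user
  end

  def save(_attrs)
    @saved = true
  end

  def saved?
    @saved
  end

  private

  def setup_account_data
    # create other data as required (hypothetical seed data)
    @account_data = { plan: "trial" }
  end
end

user = User.create
puts user.account_data[:plan] # => "trial"
```

The controller's sign-up action stays skinny: it only calls `User.create`, and the model owns the account setup.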
A couple of approaches come to mind.
1. Simple Ruby. This is similar to what is done when you run rake db:seed: execution of a Ruby script.
2. Fixtures. If you dump a database to fixtures (http://snippets.dzone.com/posts/show/4468), you can modify the fixtures so that the data that needs to be customized is done in ERB blocks. This is the same technique that is often used in test fixtures.
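The "ERB blocks inside fixtures" idea is a two-step pipeline: render the ERB first, then parse the resulting YAML, which is the same order Rails uses when loading .yml fixtures. A self-contained sketch using only the standard library (the account name and settings are hypothetical):

```ruby
require "erb"
require "yaml"
require "date"

# A fixture template with ERB blocks for per-account customization.
# account_name is a hypothetical variable supplied at render time.
template = <<~YAML
  account:
    name: <%= account_name %>
    created_on: <%= Date.new(2011, 1, 1) %>
  settings:
    max_users: <%= 5 * 2 %>
YAML

account_name = "Acme"

# Render the ERB first, then parse the resulting YAML.
rendered = ERB.new(template).result(binding)
data = YAML.safe_load(rendered, permitted_classes: [Date])

puts data["settings"]["max_users"] # => 10
```

Anything dynamic, such as the new account's name or creation date, lives in the ERB, while the overall shape of the seed data stays declarative.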
I'm working on a Rails app that has one database per account. (I know this is a controversial approach in itself, but I'm confident it's the right one in this case.)
I'd like to automate entirely the process of creating a new user account, which means I need to be able to create a new database and populate it with some seed data programmatically from within a Rails app.
My question, then, is how best to do this? I don't think I can just run migrations from within the app (or, if I can, how?), and just running the straight SQL queries within the app with hardcoded CREATE TABLE statements seems a really unwieldy way of doing things. What approach should I take, then?
Thanks in advance for your help!
David
This is an approach that my application requires. The app provides a web front-end onto a number of remote embedded devices which in turn monitor sensors. Each embedded device runs a ruby client process which reads a config file to determine its setup. There is a need to be able to add a new sensor type.
The approach I have is that each sensor type has its own data table, which is written into by every device that has that sensor. So in order to be able to create a new sensor type, I need to be able to set up new tables.
One initial issue is that the remote embedded devices do not have a rails app on them - therefore table name pluralization is a bad plan, as the pluralization rules are not accessible to the remote devices. Therefore I set
ActiveRecord::Base.pluralize_table_names = false
in config/environment.rb
The data on each sensor device type is held in a SensorType model - which has two fields - the sensor name, and the config file contents.
Within the SensorType model class, there are methods for:
Parsing the config file to extract field names and types
Creating a migration to build a new model
Altering a particular field in the DB from a generic string to char(17) as it is a MAC address used for indexing
Altering the new model code to add appropriate belongs_to relationships
Build partial templates for listing the data in the table (a header partial and a line_item partial)
These methods are all bound together by a create_sensor_tables method which calls all the above, and performs the appropriate require or load statements to ensure the new model is immediately loaded. This is called from the create method in the SensorTypeController as follows:
# POST /device_types
# POST /device_types.xml
def create
  @sensor_type = SensorType.new(params[:sensor_type])

  respond_to do |format|
    if @sensor_type.save
      @sensor_type.create_sensor_tables
      flash[:notice] = 'SensorType was successfully created.'
      # etc.
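The table-building step itself can be sketched as turning the field names and types parsed from a config file into a CREATE TABLE statement. The field names and the type map below are hypothetical; a real app would execute the resulting SQL through ActiveRecord::Base.connection rather than print it.

```ruby
# Hypothetical mapping from parsed config types to SQL column types.
SQL_TYPES = {
  string:  "varchar(255)",
  integer: "int",
  float:   "double",
  mac:     "char(17)", # MAC address column used for indexing, as above
}.freeze

def create_table_sql(sensor_name, fields)
  columns = fields.map { |name, type| "#{name} #{SQL_TYPES.fetch(type)}" }
  # Singular table name: pluralize_table_names is false (see above),
  # so the remote devices and the Rails app agree on the name.
  "CREATE TABLE #{sensor_name} (id int PRIMARY KEY, #{columns.join(', ')})"
end

sql = create_table_sql("temperature_sensor", { device_mac: :mac, reading: :float })
puts sql
```

Keeping the DDL generation in one method also gives the migration-writing step a single place to test.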
I have an initializer which sets a default that is used throughout the app. The value is an ActiveRecord model, I'm essentially caching it for the lifetime of the app:
@@default_region = Region.find_by_uri("whistler")
The record is guaranteed to be in the database: it's fixture data which is referenced by other models. This works fine, except in the test environment where the database is purged before every test run. (I'm running on edge rails and I think that's recent behavior: I used to be able to insert the data manually and keep it between test runs.) I also have the record in my regions.yml fixture file, but fixtures aren't loaded until after the rails initializer is done.
What's the right way to deal with such a dependency on fixture data? Or is there a better way to structure this? I'd rather not use a before_filter because there's no sense reloading this on each request: it will not change except on a different deployment.
I'd put something like this in region.rb:
def self.default_region
  @@default_region ||= Region.find_by_uri("whistler")
end
Then you can access it as Region.default_region wherever you need it, and it's only looked up once - the first time it's called - and by then the fixtures will be in place.
Not really familiar with Ruby or Rails... but why don't you try a "lazy-loading" scenario? Basically, have a global function that would check to see if the data was loaded, and if not, grab it from the database, then cache it. And if it was already cached, just return it.
That way, you won't be attempting to hit the database until that function is called for the first time and everything should be initialized by then.
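In Ruby that lazy-loading pattern is exactly the `||=` memoization shown in the previous answer. A self-contained sketch, with the database call simulated by a counter so the caching is visible (the class and the "whistler" value are stand-ins for the Region lookup):

```ruby
# Lazy loading in plain Ruby: the expensive lookup runs on first
# access only; afterwards the cached value is returned.
class RegionCache
  attr_reader :lookups

  def initialize
    @lookups = 0
  end

  def default_region
    @default_region ||= fetch_from_database
  end

  private

  def fetch_from_database
    @lookups += 1            # count simulated database hits
    "whistler"               # stands in for Region.find_by_uri("whistler")
  end
end

cache = RegionCache.new
cache.default_region
cache.default_region
puts cache.lookups # => 1
```

Because the first access happens during a request, the fixtures (or any test data) are already loaded by then, which sidesteps the initializer-ordering problem.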