Acts_as_tenant, cached data for different tenants?

I have a user with two accounts, and a user's documents should be scoped to the account they belong to. I'm writing tests asserting that @user.documents should differ depending on whether the current tenant is @account_1 or @account_2:
ActsAsTenant.with_tenant(@account_1) do
  @account_1_documents = @user.documents
end
ActsAsTenant.with_tenant(@account_2) do
  @account_2_documents = @user.reload.documents
end
expect(@account_1_documents).not_to eq @account_2_documents
Unfortunately this fails, maybe because it is cached? If I do it in the console and set the current tenant, I get back different documents.

I think my intuition was right: @user.documents is cached, and you can't count on @user.reload or @user.documents.reload alone to get the current objects; you have to reload both at the same time.
So doing @user.reload.documents.reload ended up working and made the tests pass.
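For reference, here is the spec from the question with both reloads applied (same names as above):
ActsAsTenant.with_tenant(@account_1) do
  @account_1_documents = @user.reload.documents.reload
end
ActsAsTenant.with_tenant(@account_2) do
  @account_2_documents = @user.reload.documents.reload
end
expect(@account_1_documents).not_to eq @account_2_documents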

Related

Is there a way I can force a record to not be destroyed when running a feature test in RSpec? (Rails 6)

For context, I have a controller method called delete_cars. Inside the method, I call destroy_all on an ActiveRecord collection of Cars. Below the destroy_all, I call another method, get_car_nums_not_deleted_from_portal, which looks like the following:
def get_car_nums_not_deleted_from_portal(cars_to_be_deleted)
  reloaded_cars = cars_to_be_deleted.reload
  car_nums = reloaded_cars.car_numbers
  if reloaded_cars.any?
    puts "Something went wrong. The following cars were not deleted from the portal: #{car_nums.join(', ')}"
  end
  car_nums
end
Here, I check to see if any cars were not deleted during the destroy_all transaction. If there are any, I just add a puts message. I also return the ActiveRecord::Collection whether there are any records or not, so the code to follow can handle it.
The goal with one of my feature tests is to mimic a user trying to delete three selected cars, but one fails to be deleted. When this scenario occurs, I display a specific notice on the page stating:
'Some selected cars have been successfully deleted from the portal, however, some have not. The '\
"following cars have not been deleted from the portal:\n\n#{some_car_numbers_go_here}"
How can I force just one record to fail when my code executes the destroy_all, WITHOUT adding extra code to my Car model (in the form of a before_destroy or something similar)? I've tried using a spy, but the issue is, when it's created, it's not a real record in the DB, so my query:
cars_to_be_deleted = Car.where(id: params[:car_ids].split(',').collect { |id| id.to_i })
doesn't include it.
For even more context, here's the test code:
context 'when at least one car is not deleted, but the rest are' do
  it "should display a message stating 'Some selected cars have been successfully...' and list out the cars that were not deleted" do
    expect(Car.count).to eq(100)
    visit bulk_edit_cars_path
    select(@location.name.upcase, from: 'Location')
    select(@track.name.upcase, from: 'Track')
    click_button("Search".upcase)
    find_field("cars_to_edit[#{Car.first.id}]").click
    find_field("cars_to_edit[#{Car.second.id}]").click
    find_field("cars_to_edit[#{Car.third.id}]").click
    click_button('Delete cars')
    cars_to_be_deleted = Car.where(id: Car.first(3).map(&:id)).ids
    click_button('Yes')
    expect(page).to have_text(
      'Some selected cars have been successfully deleted from the portal, however, some have not. The '\
      "following cars have not been deleted from the portal:\n\n#{@first_three_cars_car_numbers[0]}".upcase
    )
    expect(Car.count).to eq(98)
    expect(Car.where(id: cars_to_be_deleted).length).to eq(1)
  end
end
Any help with this would be greatly appreciated! It's becoming quite frustrating lol.
One way to "mock" not deleting a record for a test could be to use the block version of .to receive to return a falsy value.
The argument to the block is the instance of the record that would be destroyed.
Since we have this instance, we can single out an arbitrary record to be "not destroyed" and have the block return nil, which indicates a "failure" from the :destroy method.
In this example, we check whether the instance is the first Car record in the database and return nil if it is.
If it is not the first record, we use the :delete method, so as not to cause an infinite loop in the test (the test would keep calling the mocked :destroy).
allow_any_instance_of(Car).to receive(:destroy) { |car|
  # use car.delete to prevent infinite loop with the mocked :destroy method
  if car.id != Car.first.id
    car.delete
  end
  # this will return `nil`, which means failure from the :destroy method
}
You could create a method that accepts a list of records and decide which one you want to :destroy for more accurate testing!
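For illustration, a rough sketch of that helper idea (the method name and the failing-ids argument are made up for this example):
# Hypothetical spec helper: stub :destroy so only the given ids "fail" to delete.
def stub_failed_destroys(failing_ids)
  allow_any_instance_of(Car).to receive(:destroy) do |car|
    if failing_ids.include?(car.id)
      nil          # simulate a failed destroy for this record
    else
      car.delete   # bypass the stubbed :destroy to avoid recursion
    end
  end
end

# in the feature spec, before clicking the delete confirmation:
stub_failed_destroys([Car.first.id])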
I am sure there are other ways to work around this, but this is the best we have found so far :)
If there is a specific reason why the deletion might fail you can simulate that case.
Say you have a RaceResult record that must always refer to a valid Car and you have a DB constraint enforcing this (in Postgres: ON DELETE RESTRICT). You could write a test that creates the RaceResult records for some of your Car records:
it 'Cars prevented from deletion are reported' do
  ...
  do_not_delete_cars = Car.where(id: Car.first(3).map(&:id))
  do_not_delete_cars.each { |car| RaceResult.create(car: car, ...) }
  click_button('Yes')
  expect(page).to have_text(...)
end
Another option would be to use some knowledge of how your controller interacts with the model:
allow(Car).to receive(:destroy_list_of_cars).with(1,2,3).and_return(false) # or whatever your method would return
This would not actually run the destroy_list_of_cars method, so all the records would still be there in the DB. Then you can expect error messages for each of your selected records.
Or since destroy_all calls each record's destroy method, you could mock that method:
allow_any_instance_of(Car).to receive(:destroy).and_return(false) # simulates a callback halting things
allow_any_instance_of makes tests brittle, however.
Finally, you could consider just not anticipating problems before they exist (maybe you don't even need the bulk delete page to be this helpful?). If your users see a more generic error, is there a page they could filter to verify for themselves what is still there? (There are a lot of factors to consider here; it depends on the importance of the feature to the business and what sort of things could go wrong if the data is inconsistent.)

How do I calculate values for the current user in ruby on rails?

I have an application with a series of tests (FirstTest, SecondTest etc.)
Each test has a calculation, defined in its model, that runs before the record is saved to the database, like this:
# first_test.rb
before_save :calculate_total

private

def calculate_total
  ...
end
I then have an index page for each user (welcome/index) which displays the current user's results for each test. This all works fine; however, I want to work out various other things, such as each user's average score overall, etc.
Is it possible to access the current user from the welcome model?
Currently my welcome.rb is accessing the data as follows:
# welcome.rb
def self.total
  FirstTest.last.total
end
This obviously accesses the last test overall, NOT the last test from the current user.
I feel like I may have just laid the whole application out in a fairly unintelligent manner...
Thanks in advance x
Well, you need to save user_id in a column for each record in FirstTest. Then you can find the total for the current user:
FirstTest.where(:user_id => current_user.id).last.total
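For example, assuming you add that column and the usual associations (a sketch, not your exact schema):
# migration (assumed)
add_column :first_tests, :user_id, :integer
add_index :first_tests, :user_id

# user.rb
has_many :first_tests

# first_test.rb
belongs_to :user

# then, anywhere current_user is available (e.g. a controller or helper):
current_user.first_tests.last.total        # latest result for this user
current_user.first_tests.average(:total)   # average score across this user's tests
The average call covers the "average score overall" part of the question without any extra model code.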

Ruby on Rails - ActiveRecord::Relation count method is wrong?

I'm writing an application that allows users to send one another messages about an 'offer'.
I thought I'd save myself some work and use the Mailboxer gem.
I'm following a test driven development approach with RSpec. I'm writing a test that should ensure that only one Conversation is allowed per offer. An offer belongs_to two different users (the user that made the offer, and the user that received the offer).
Here is my failing test:
describe "after a message is sent to the same user twice" do
before do
2.times { sending_user.message_user_regarding_offer! offer, receiving_user, random_string }
end
specify { sending_user.mailbox.conversations.count.should == 1 }
end
So before the test runs, sending_user sends a message to receiving_user twice. The message_user_regarding_offer! method looks like this:
def message_user_regarding_offer! offer, receiver, body
  conversation = offer.conversation
  if conversation.nil?
    self.send_message(receiver, body, offer.conversation_subject)
  else
    self.reply_to_conversation(conversation, body)
    # I put a binding.pry here to examine in console
  end
  offer.create_activity key: PublicActivityKeys.message_received, owner: self, recipient: receiver
end
On the first iteration in the test (when the first message is sent) the conversation variable is nil therefore a message is sent and a conversation is created between the two users.
On the second iteration the conversation created in the first iteration is returned and the user replies to that conversation, but a new conversation isn't created.
This all works, but the test fails and I cannot understand why!
When I place a pry binding in the code in the location specified above I can examine what is going on... now riddle me this:
self.mailbox.conversations[0] returns a Conversation instance
self.mailbox.conversations[1] returns nil
self.mailbox.conversations clearly shows a collection containing ONE object.
self.mailbox.conversations.count returns 2?!
What is going on there? the count method is incorrect and my test is failing...
What am I missing? Or is this a bug?!
EDIT
offer.conversation looks like this:
def conversation
  Conversation.where({subject: conversation_subject}).last
end
and offer.conversation_subject:
def conversation_subject
  "offer-#{self.id}"
end
EDIT 2 - Showing the first and second iteration in pry
Also...
Conversation.all.count returns 1!
and:
Conversation.all == self.mailbox.conversations returns true
and
Conversation.all.count == self.mailbox.conversations.count returns false
How can that be if the arrays are equal? I don't know what's going on here, blown hours on this now. Think it's a bug?!
EDIT 3
From the source of the Mailboxer gem...
def conversations(options = {})
  conv = Conversation.participant(@messageable)

  if options[:mailbox_type].present?
    case options[:mailbox_type]
    when 'inbox'
      conv = Conversation.inbox(@messageable)
    when 'sentbox'
      conv = Conversation.sentbox(@messageable)
    when 'trash'
      conv = Conversation.trash(@messageable)
    when 'not_trash'
      conv = Conversation.not_trash(@messageable)
    end
  end

  if (options.has_key?(:read) && options[:read]==false) || (options.has_key?(:unread) && options[:unread]==true)
    conv = conv.unread(@messageable)
  end

  conv
end
The reply_to_conversation code is available here -> http://rubydoc.info/gems/mailboxer/frames.
Just can't see what I'm doing wrong! Might rework my tests to get around this. Or ditch the gem and write my own.
See this: Rails 3: Difference between Relation.count and Relation.all.count
In short, Rails ignores the select columns (if there is more than one) when you apply count to the query, because SQL's COUNT accepts at most one column as a parameter.
From the Mailboxer code:
scope :participant, lambda { |participant|
  select('DISTINCT conversations.*').
  where('notifications.type' => Message.name).
  order("conversations.updated_at DESC").
  joins(:receipts).merge(Receipt.recipient(participant))
}
self.mailbox.conversations.count ignores the select('DISTINCT conversations.*') and counts the join with receipts, essentially counting the number of receipts, which includes duplicate conversations.
On the other hand, self.mailbox.conversations.all.count first gets the records applying the select, which gets unique conversations and then counts it.
self.mailbox.conversations.all == self.mailbox.conversations since both of them query the db with the select.
To solve your problem you can use sending_user.mailbox.conversations.all.count or sending_user.mailbox.conversations.group('conversations.id').length
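Applied to the failing spec, the expectation would then become something along these lines (using the first workaround; the group version works the same way):
specify { sending_user.mailbox.conversations.all.count.should == 1 }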
I have tended to use the size method in my code. As per the ActiveRecord code, size will use a cached count if available and also returns the correct number when models have been created through relations and have not yet been saved.
# File activerecord/lib/active_record/relation.rb, line 228
def size
  loaded? ? @records.length : count
end
There is a blog on this here.
In Ruby, #length and #size are synonyms and both do the same thing: they tell you how many elements are in an array or hash. Technically #length is the method and #size is an alias to it.
In ActiveRecord, there are several ways to find out how many records are in an association, and there are some subtle differences in how they work.
post.comments.count - Determine the number of elements with an SQL COUNT query. You can also specify conditions to count only a subset of the associated elements (e.g. :conditions => {:author_name => "josh"}). If you set up a counter cache on the association, #count will return that cached value instead of executing a new query.
post.comments.length - This always loads the contents of the association into memory, then returns the number of elements loaded. Note that this won't force an update if the association had been previously loaded and then new comments were created through another way (e.g. Comment.create(...) instead of post.comments.create(...)).
post.comments.size - This works as a combination of the two previous options. If the collection has already been loaded, it will return its length just like calling #length. If it hasn't been loaded yet, it's like calling #count.
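To make the difference concrete, here is a small illustration, assuming a Post model with has_many :comments (hypothetical models, not from the question):
post = Post.first

post.comments.count   # SQL COUNT query (or the counter cache value if one is configured)
post.comments.length  # loads the association into memory and returns the array length
post.comments.size    # length if the association is already loaded, otherwise count

post.comments.to_a                        # association is now loaded
Comment.create!(post: post, body: "new")  # created outside the association
post.comments.length                      # stale: still the old in-memory number
post.comments.count                       # fresh: queries the database again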
It is also worth being careful if you are not creating models through associations, as the related model will not necessarily have those instances in its association proxy/collection.
# do this
mailbox.conversations.build(attrs)
# or this
mailbox.conversations << Conversation.new(attrs)
# or this
mailbox.conversations.create(attrs)
# or this
mailbox.conversations.create!(attrs)
# NOT this
Conversation.new(mailbox_id: some_id, ....)
I don't know if this explains what's going on, but the ActiveRecord count method queries the database for the number of records stored. The length of the Relation could be different, as discussed in http://archive.railsforum.com/viewtopic.php?id=6255, although in that example, the number of records in the database was less than the number of items in the Rails data structure.
Try
self.mailbox.conversations.reload; self.mailbox.conversations.count
or perhaps
self.mailbox.reload; self.mailbox.conversations.count
or, if neither of those work, just try reloading as many of the objects as possible to see if you can get it to work (self, mailbox, conversations, etc.).
My guess is that something is messed up between memory and the DB. This is definitely a really weird error though, might wanna put in an issue on Rails to see why this would be the case.
The result of mailbox.conversations is cached after the first call. To reload it write mailbox.conversations(true)
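Note that the force-reload argument has since been removed in newer Rails versions; there the equivalent is simply:
mailbox.conversations.reload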

Catching errors with Ruby Twitter gem, caching methods using delayed_job: What am I doing wrong?

What I'm doing
I'm using the twitter gem (a Ruby wrapper for the Twitter API) in my app, which is run on Heroku. I use Heroku's Scheduler to periodically run caching tasks that use the twitter gem to, for example, update the list of retweets for a particular user. I'm also using delayed_job so scheduler calls a rake task, which calls a method that is 'delayed' (see scheduler.rake below). The method loops through "authentications" (for users who have authenticated twitter through my app) to update each authorized user's retweet cache in the app.
My question
What am I doing wrong? For example, since I'm using Heroku's Scheduler, is delayed_job redundant? Also, you can see I'm not catching (rescuing) any errors. So, if Twitter is unreachable, or if a user's auth token has expired, everything chokes. This is obviously dumb and terrible because if there's an error, the entire thing chokes and ends up creating a failed delayed_job, which causes ripple effects for my app. I can see this is bad, but I'm not sure what the best solution is. How/where should I be catching errors?
I'll put all my code (from the scheduler down to the method being called) for one of my cache methods. I'm really just hoping for a bulleted list (and maybe some code or pseudo-code) berating me for poor coding practice and telling me where I can improve things.
I have seen this SO question, which helps me a little with the begin/rescue block, but I could use more guidance on catching errors, and on the higher-level "is this a good way to do this?" plane.
Code
Heroku Scheduler job:
rake update_retweet_cache
scheduler.rake (in my app)
task :update_retweet_cache => :environment do
  Tweet.delay.cache_retweets_for_all_auths
end
Tweet.rb, cache_retweets_for_all_auths method:
def self.cache_retweets_for_all_auths
  @authentications = Authentication.find_all_by_provider("twitter")
  @authentications.each do |authentication|
    authentication.user.twitter.retweeted_to_me(include_entities: true, count: 200).each do |tweet|
      # Actually build the cache - this is good - removing to keep this short
    end
  end
end
User.rb, twitter method:
def twitter
  authentication = Authentication.find_by_user_id_and_provider(self.id, "twitter")
  if authentication
    @twitter ||= Twitter::Client.new(:oauth_token => authentication.oauth_token, :oauth_token_secret => authentication.oauth_secret)
  end
end
Note: As I was posting this, I noticed that I'm finding all "twitter" authentications in the "cache_retweets_for_all_auths" method, then calling the "User.twitter" method, which specifically limits to "twitter" authentications. This is obviously redundant, and I'll fix it.
First what is the exact error you are getting, and what do you want to happen when there is an error?
Edit:
If you just want to catch the errors and log them then the following should work.
def self.cache_retweets_for_all_auths
  @authentications = Authentication.find_all_by_provider("twitter")
  @authentications.each do |authentication|
    begin
      authentication.user.twitter.retweeted_to_me(include_entities: true, count: 200).each do |tweet|
        # Actually build the cache - this is good - removing to keep this short
      end
    rescue => e
      # Either create an object where the error is logged, or output it to whatever log you wish.
    end
  end
end
This way when it fails it will keep moving on to the next user, but will still make a note of the error. Most of the time with Twitter it's just better to do something like this than to try to handle each error on its own. I have seen so many weird things and random errors out of the Twitter API that trying to track down every error almost always turns into a wild goose chase, though it is still good to keep track just in case.
Next, for when you should use what.
You should use a scheduler when you need something to happen based on time only, and delayed jobs when it's based on a user action but the 'action' you are going to delay would take too long for a normal response. Sometimes you can just put the thing plainly in the controller as well.
So in other words:
The scheduler will be fine as long as the time between updates, X, is greater than the time it takes for an update to happen, Y.
If X < Y then you might want to look at calling the logic from the controller when each individual entry is accessed, instead of trying to do them all at once. The idea is that you would only update after a certain amount of time has passed. You could store the last update time either on the model itself, in a field like twitter_update_time, or in a Redis or memcached instance at a unique key for the user/auth.
But if the individual update itself still takes too long, that's when you should do the above, but instead of doing the actual update, call a delayed job.
You could even set it up so that it only updates or calls the delayed job after a certain number of views, to further limit things.
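As a rough sketch of that last-updated-threshold idea (the twitter_updated_at column and the per-user cache method are made-up names for illustration):
# Hypothetical: refresh a single user's retweet cache lazily, at most once per hour.
def refresh_retweet_cache_if_stale(user)
  return if user.twitter_updated_at && user.twitter_updated_at > 1.hour.ago
  Tweet.delay.cache_retweets_for_auth(user)   # assumed per-user variant of cache_retweets_for_all_auths
  user.update_attribute(:twitter_updated_at, Time.now)
end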
Possible Fancy Pants
Or if you want to get really fancy you could still do it as a cron job, but have a point system based on views that weights which entries should be updated. The idea being certain actions would add points to certain users, and if their points are over a certain amount you update them, and then remove their points. That way you could target the ones you think are the most important, or have the most traffic or show up in the most search results etc etc.
Next, a nitpicky thing.
http://api.rubyonrails.org/classes/ActiveRecord/Batches.html
You should be using
@authentications.find_each do |authentication|
instead of
@authentications.each do |authentication|
find_each pulls in only 1000 entries at a time, so if you end up with a lot of Authentications you don't end up pulling a crazy number of entries into memory.
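One caveat with the snippet above: find_all_by_provider returns a plain Array rather than a relation, and Array has no find_each, so you would likely switch the lookup to something like:
Authentication.where(provider: "twitter").find_each do |authentication|
  # ...
end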

validates_uniqueness_of failing on heroku?

In my User model, I have:
validates_uniqueness_of :fb_uid (I'm using facebook connect).
However, at times, I'm getting duplicate rows upon user sign up. This is Very Bad.
The creation time of the two records is within 100ms. I haven't been able to determine if it happens in two separate requests or not (heroku logging sucks and only goes back so far and it's only happened twice).
Two things:
Sometimes the request takes some time, because I query FB API for name info, friends, and picture.
I'm using bigint to store fb_uid (backend is postgres).
I haven't been able to replicate in dev.
Any ideas would be extremely appreciated.
The signin function
def self.create_from_cookie(fb_cookie, remote_ip = nil)
  return nil unless fb_cookie
  return nil unless fb_hash = authenticate_cookie(fb_cookie)
  uid = fb_hash["uid"].join.to_i

  # Make user and set data
  fb_user = FacebookUser.new
  fb_user.fb_uid = uid
  fb_user.fb_authorized = true
  fb_user.email_confirmed = true
  fb_user.creation_ip = remote_ip
  fb_name_data, fb_friends_data, fb_photo_data, fb_photo_ext = fb_user.query_data(fb_hash)
  return nil unless fb_name_data
  fb_user.set_name(fb_name_data)
  fb_user.set_photo(fb_photo_data, fb_photo_ext)

  # Save user and friends to the db
  return nil unless fb_user.save
  fb_user.set_friends(fb_friends_data)
  return fb_user
end
I'm not terribly familiar with Facebook Connect, but is it possible to get two of the same uid if two separate requests are posted in very quick succession, before either request has completed? (Otherwise known as a race condition.) validates_uniqueness_of can still suffer from this sort of race condition; details can be found here:
http://apidock.com/rails/ActiveModel/Validations/ClassMethods/validates_uniqueness_of
Because this check is performed outside the database there is still a chance that duplicate values will be inserted in two parallel transactions. To guarantee against this you should create a unique index on the field. See add_index for more information.
You can really make sure this will never happen by adding a database constraint. Add this to a database migration and then run it:
add_index :users, :fb_uid, :unique => true
Now a user would get an error instead of being able to complete the request, which is usually preferable to generating illegal data in your database which you have to debug and clean out manually.
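With the unique index in place you could also rescue the database error in create_from_cookie and fall back to the row the other request created; a rough sketch (find_by_fb_uid is the standard dynamic finder for that column):
begin
  return nil unless fb_user.save
rescue ActiveRecord::RecordNotUnique
  # another request inserted the same fb_uid first; reuse that row instead
  fb_user = FacebookUser.find_by_fb_uid(uid)
  return nil unless fb_user
end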
From Ruby on Rails v3.0.5 Module ActiveRecord::Validations::ClassMethods
http://s831.us/dK6mFQ
Concurrency and integrity
Using this [validates_uniqueness_of] validation method in conjunction with ActiveRecord::Base#save does not guarantee the absence of duplicate record insertions, because uniqueness checks on the application level are inherently prone to race conditions. For example, suppose that two users try to post a Comment at the same time, and a Comment's title must be unique. At the database-level, the actions performed by these users could be interleaved in the following manner: ...
It seems like there is some sort of race condition inside your code. To check this, I would first change the code so that the Facebook values are extracted first, and only then create the new Facebook user object.
Then I would highly suggest that you write a test to check whether your function gets executed once. It seems that it's executed twice.
And on top of this, there seems to be a race condition while waiting for the Facebook results.
