Basically I have this User model with certain attributes, say 'health', and another Battle model which records all the fights between Users. Users can fight one another, and some probability determines who wins. Both lose health after a fight.
So in the Battle controller's 'create' action I did:
@battle = Battle.attempt current_user.id, opponent.id
In the Battle model,
def self.attempt(current_user_id, opponent_id)
  battle = Battle.new({:user_id => current_user_id, :opponent_id => opponent_id})
  # all the math calculation here
  ...
  # Update Health
  ...
  battle.User.health = new_health
  battle.User.save
  battle.save
  return battle
end
Back in the Battle controller, I did:
new_user_health = current_user.health
to get the new health value after the Battle. However the value I got is the old health value (the health value before the Battle).
Has anyone faced this kind of problem before?
UPDATE
I just added
current_user.reload
before the line
new_user_health = current_user.health
and that works. Problem solved. Thanks!
It appears that you are getting current_user, then updating battle.user, and then expecting current_user to automatically have the updated values. This type of thing is possible using Rails' Identity Map, but there are some caveats that you'll want to read up on first.
The problem is that even though the two objects are backed by the same data in the database, you have two objects in memory. To refresh the information, you can call current_user.reload.
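For illustration, here is a minimal sketch of that situation (the id and column values are just placeholders):
user_a = User.find(1)
user_b = User.find(1)                  # a second in-memory object backed by the same row
user_b.update_attribute(:health, 50)   # persists the new value to the database
user_a.health                          # still returns the stale, old value
user_a.reload                          # re-reads the row from the database
user_a.health                          # => 50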
As a side note, this wouldn't be classified as a race condition because you aren't using more than one process to modify/read the data. In this example, you are reading the data, then updating the data on a different object in memory. A race condition could happen if you were using two threads to access the same information at the same time.
Also, you should use battle.user, not battle.User, as Wayne mentioned in the comments.
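For completeness, a sketch of the model method with the association reference corrected (assuming Battle belongs_to :user; the math is still elided as in your original):
def self.attempt(current_user_id, opponent_id)
  battle = Battle.new(:user_id => current_user_id, :opponent_id => opponent_id)
  # ... math that computes new_health ...
  battle.user.health = new_health   # battle.user, not battle.User
  battle.user.save
  battle.save
  battle
end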
Related
When I execute the query
Mymodel.all.each do |model|
  # ..do something
end
It uses a lot of memory, the amount of memory used keeps increasing the whole time, and in the end it crashes. I found out that to fix it I need to disable the identity map, but when I add identity_map_enabled: false to my mongoid.yml file I get the error
Invalid configuration option: identity_map_enabled.
Summary:
An invalid configuration option was provided in your mongoid.yml, or a typo is potentially present. The valid configuration options are: :include_root_in_json, :include_type_for_serialization, :preload_models, :raise_not_found_error, :scope_overwrite_exception, :duplicate_fields_exception, :use_activesupport_time_zone, :use_utc.
Resolution:
Remove the invalid option or fix the typo. If you were expecting the option to be there, please consult the following page with respect to Mongoid's configuration:
I am using Rails 4 and Mongoid 4, Mymodel.all.count => 3202400
How can I fix it? Or maybe someone knows another way to reduce the amount of memory used while executing a .all.each query?
Thank you very much for the help!
I started out just like you, looping through millions of records, and the memory just kept increasing.
Original code:
@portal.listings.each do |listing|
  listing.do_something
end
I've gone through many forum answers and I tried them out.
1st attempt: I tried using a combination of WeakRef and GC.start, but no luck; that failed.
2nd attempt: I added listing = nil to the first attempt, and it still failed.
Successful attempt:
@start_date = 10.years.ago
@end_date = 1.day.ago
while @start_date < @end_date
  @portal.listings.where(created_at: @start_date..@start_date.next_month).each do |listing|
    listing.do_something
  end
  @start_date = @start_date.next_month
end
Conclusion
All the memory allocated for the records is never released while the query request is running. Therefore, fetching only a small number of records per request does the job, and memory stays in good shape since it is released after each request.
Your problem isn't the identity map; I don't think Mongoid 4 even has an identity map built in, hence the configuration error when you try to turn it off. Your problem is that you're using all. When you do this:
Mymodel.all.each
Mongoid will attempt to instantiate every single document in the db.mymodels collection as a Mymodel instance before it starts iterating. You say that you have about 3.2 million documents in the collection; that means Mongoid will try to create 3.2 million model instances before it tries to iterate. Presumably you don't have enough memory to handle that many objects.
Your Mymodel.all.count works fine because that just sends a simple count call into the database and returns a number; it won't instantiate any models at all.
The solution is to not use all (and preferably forget that it exists). Depending on what "do something" does, you could:
Page through all the models so that you're only working with a reasonable number of them at a time (see the sketch below).
Push the logic into the database using mapReduce or the aggregation framework.
Whenever you're working with real data (i.e. something other than a trivially small database), you should push as much work as possible into the database because databases are built to manage and manipulate big piles of data.
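For the paging option, here is a minimal sketch (the batch size is arbitrary, and do_something stands in for your per-document work):
batch_size = 1000
offset = 0
loop do
  batch = Mymodel.skip(offset).limit(batch_size).to_a
  break if batch.empty?
  batch.each { |model| model.do_something }   # work on a bounded slice
  offset += batch_size
end
Note that skip gets slower at large offsets; paging on an indexed field (like the created_at ranges in the earlier answer) scales better for collections this size.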
In my Rails application I would like to record the time a user was last_seen.
Right now, I do this as follows in my SessionsHelper:
def sign_in(user)
  .....
  user.update_column(:last_seen, Time.zone.now)
  self.current_user = user
end
But this is not very precise because a user might log in at 8 a.m. and in the evening the last_seen database column will still contain that time.
So I was thinking to update last_seen whenever the user takes an action:
class ApplicationController
  before_filter :update_last_seen

  private

  def update_last_seen
    current_user.last_seen = Time.zone.now
    current_user.save
  end
end
But I don't like that approach either because the database gets hit with every action that a user takes.
So what might be a better alternative to this?
Rails actually has this sort of behavior built in with touch:
User.last.touch
#=> User's updated_at is updated to the current time
The time it takes in any well-provisioned DB to handle updating a single column like this should be well under 5ms, and very likely under 1ms. Provided you're already going to be establishing that database connection (or, in Rails' case, using a previously established connection from a pool), the overhead is negligible.
To answer your question about whether your code is slower, well, you're thinking about this all wrong. You can optimize an already very fast operation for performance, but I'd rather you worry more about “rightness”. Here is the implementation of ActiveRecord's touch method:
def touch(name = nil)
  attributes = timestamp_attributes_for_update_in_model
  attributes << name if name
  unless attributes.empty?
    current_time = current_time_from_proper_timezone
    changes = {}
    attributes.each do |column|
      changes[column.to_s] = write_attribute(column.to_s, current_time)
    end
    changes[self.class.locking_column] = increment_lock if locking_enabled?
    @changed_attributes.except!(*changes.keys)
    primary_key = self.class.primary_key
    self.class.unscoped.update_all(changes, { primary_key => self[primary_key] }) == 1
  end
end
Now you tell me, which is faster? Which is more correct?
Here, I'll give you a hint: thousands of people have used this implementation of touch and this very code has likely been run millions of times. Your code has been used by you alone, probably doesn't even have a test written, and doesn't have any peer review.
“But just because someone else uses it doesn't make it empirically better,” you argue. You're right, of course, but again it's missing the point: while you could go on building your application and making something other humans (your users) could use and benefit from, you are spinning your wheels here wondering what is better for the machine even though a good solution has been arrived upon by others.
To put a nail in the coffin, yes, your code is slower. It executes callbacks, does dirty tracking, and saves all changed attributes to the database. touch bypasses much of this, focusing on doing exactly the work needed to persist timestamp updates to your models.
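Tying this back to your last_seen column, a minimal sketch of the filter using touch might look like this (assuming users has a last_seen column; note that touch will also bump updated_at):
class ApplicationController < ActionController::Base
  before_filter :touch_last_seen

  private

  def touch_last_seen
    # touch skips validations and callbacks and issues a single UPDATE
    current_user.touch(:last_seen) if current_user
  end
end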
I'm currently trying my hand at developing a simple web-based game using Rails and Mongoid. I've run into some concurrency issues that I'm not sure how to solve.
The issue is that I'm not sure how to atomically do a check and take an action based upon it in Mongoid.
Here is a sample of the relevant parts of the controller code to give you an idea of what I'm trying to do:
battle = current_user.battle
battle.submitted = true
battle.save
if Battle.where(opponent: current_user._id, submitted: true, resolving: false).any?
  battle.update_attribute(:resolving, true)
  #Resolve turn
A battle is between two users, but I only want one of the threads to run the #Resolve turn code. Unless I'm completely off, both threads could check the condition one after another before either sets resolving to true, and therefore both would end up running the '#Resolve turn' code.
I would much appreciate any ideas on how to solve this issue.
I am however getting an increasing feeling that doing user synchronization in this way is fairly impractical and that there's a better way altogether. So suggestions for other techniques that could accomplish the same thing would be greatly appreciated!
Sounds like you want the Mongo findAndModify command, which allows you to atomically retrieve and update a document.
Unfortunately Mongoid doesn't appear to expose this part of the Mongo API, so it looks like you'll have to drop down to the driver level for this one bit:
battle = Battle.collection.find_and_modify(query: {opponent: current_user._id, ...},
                                           update: {'$set' => {resolving: true}})
By default the returned object does not include the modification made, but you can turn this on if you want (pass {:new => true})
The value returned is a raw hash; if my memory is correct, you can do Battle.instantiate(doc) to get a Battle object back.
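Putting it together, a hedged sketch of the whole check-and-claim step (field names follow your example; the exact find_and_modify signature depends on your driver version):
doc = Battle.collection.find_and_modify(
  query:  { opponent: current_user._id, submitted: true, resolving: false },
  update: { '$set' => { resolving: true } },
  new:    true                       # return the document as modified
)
if doc
  battle = Battle.instantiate(doc)   # rehydrate the raw hash into a model
  # resolve the turn; only the thread whose find_and_modify matched gets here
end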
I have the following:
@users = User.all
User has several fields including email.
What I would like to be able to do is get a list of all the @users emails.
I tried:
@users.email.all, but that errors with an undefined method.
Ideas? Thanks
(by popular demand, posting as a real answer)
What I don't like about fl00r's solution is that it instantiates a new User object per record in the DB; which just doesn't scale. It's great for a table with just 10 emails in it, but once you start getting into the thousands you're going to run into problems, mostly with the memory consumption of Ruby.
One can get around this little problem by using connection.select_values on a model, and a little bit of ARel goodness:
User.connection.select_values(User.select("email").to_sql)
This will give you the straight strings of the email addresses from the database. No faffing about with user objects, and it will scale better than a straight User.select("email") query, but I wouldn't say it's the best-scaling option. There are probably better ways to do this that I am not aware of yet.
The point is: a String object will use way less memory than a User object and so you can have more of them. It's also a quicker query and doesn't go the long way about it (running the query, then mapping the values). Oh, and map would also take longer too.
If you're using Rails 2.3...
Then you'll have to construct the SQL manually, I'm sorry to say.
User.connection.select_values("SELECT email FROM users")
It just provides another example of the helpers that Rails 3 provides.
I still find the connection.select_values to be a valid way to go about this, but I recently found a default AR method that's built into Rails that will do this for you: pluck.
In your example, all that you would need to do is run:
User.pluck(:email)
The select_values approach can be faster on extremely large datasets, but that's because it doesn't typecast the returned values. E.g., boolean values will be returned how they are stored in the database (as 1's and 0's) and not as true | false.
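For illustration, assuming a boolean admin column (raw storage varies by database; MySQL, for example, stores booleans as 0/1):
User.pluck(:admin)
# => [true, false, ...]   typecast by ActiveRecord
User.connection.select_values(User.select(:admin).to_sql)
# => ["1", "0", ...]      raw values, as stored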
The pluck method works with ARel, so you can daisy chain things:
User.order('created_at desc').limit(5).pluck(:email)
User.select(:email).map(&:email)
Just use:
User.select("email")
While I visit SO frequently, I only registered today. Unfortunately that means that I don't have enough of a reputation to leave comments on other people's answers.
Piggybacking on Ryan's answer above, you can extend ActiveRecord::Base to create a method that will allow you to use this throughout your code in a cleaner way.
Create a file in config/initializers (e.g., config/initializers/active_record.rb):
class ActiveRecord::Base
  def self.selected_to_array
    connection.select_values(self.scoped)
  end
end
You can then chain this method at the end of your ARel declarations:
User.select('email').selected_to_array
User.select('email').where('id > ?', 5).limit(4).selected_to_array
Use this to get an array of all the e-mails:
@users.collect { |user| user.email }
# => ["test@example.com", "test2@example.com", ...]
Or a shorthand version:
@users.collect(&:email)
You should avoid using User.all.map(&:email) as it will create a lot of ActiveRecord objects which consume large amounts of memory, a good chunk of which will not be collected by Ruby's garbage collector. It's also CPU intensive.
If you simply want to collect only a few attributes from your database without sacrificing performance or incurring high memory usage and wasted CPU cycles, consider using Valium.
https://github.com/ernie/valium
Here's an example for getting all the emails from all the users in your database.
User.all[:email]
Or only for users that subscribed or whatever.
User.where(:subscribed => true)[:email].each do |email|
  puts "Do something with #{email}"
end
Using User.all.map(&:email) is considered bad practice for the reasons mentioned above.
In my User model, I have:
validates_uniqueness_of :fb_uid (I'm using Facebook Connect).
However, at times, I'm getting duplicate rows upon user sign up. This is Very Bad.
The creation time of the two records is within 100 ms. I haven't been able to determine whether it happens in two separate requests or not (Heroku logging sucks and only goes back so far, and it's only happened twice).
Two things:
Sometimes the request takes some time, because I query the FB API for name info, friends, and picture.
I'm using bigint to store fb_uid (the backend is Postgres).
I haven't been able to replicate it in dev.
Any ideas would be extremely appreciated.
The sign-in function:
def self.create_from_cookie(fb_cookie, remote_ip = nil)
  return nil unless fb_cookie
  return nil unless fb_hash = authenticate_cookie(fb_cookie)
  uid = fb_hash["uid"].join.to_i

  # Make user and set data
  fb_user = FacebookUser.new
  fb_user.fb_uid = uid
  fb_user.fb_authorized = true
  fb_user.email_confirmed = true
  fb_user.creation_ip = remote_ip
  fb_name_data, fb_friends_data, fb_photo_data, fb_photo_ext = fb_user.query_data(fb_hash)
  return nil unless fb_name_data
  fb_user.set_name(fb_name_data)
  fb_user.set_photo(fb_photo_data, fb_photo_ext)

  # Save user and friends to the db
  return nil unless fb_user.save
  fb_user.set_friends(fb_friends_data)
  return fb_user
end
I'm not terribly familiar with Facebook Connect, but is it possible to get two of the same uid if two separate users from two separate accounts post a request in very quick succession, before either request has completed? (Otherwise known as a race condition.) validates_uniqueness_of can still suffer from this sort of race condition; details can be found here:
http://apidock.com/rails/ActiveModel/Validations/ClassMethods/validates_uniqueness_of
Because this check is performed outside the database there is still a chance that duplicate values will be inserted in two parallel transactions. To guarantee against this you should create a unique index on the field. See add_index for more information.
You can really make sure this will never happen by adding a database constraint. Add this to a database migration and then run it:
add_index :users, :fb_uid, :unique => true
Now a user would get an error instead of being able to complete the request, which is usually preferable to generating illegal data in your database which you have to debug and clean out manually.
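A sketch of handling that error gracefully (the find_by_fb_uid fallback is illustrative; in the race, the database constraint fires even though validates_uniqueness_of passed):
begin
  fb_user.save!
rescue ActiveRecord::RecordNotUnique
  # the parallel request won the race; reuse the row it created
  fb_user = FacebookUser.find_by_fb_uid(fb_user.fb_uid)
end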
From Ruby on Rails v3.0.5 Module ActiveRecord::Validations::ClassMethods
http://s831.us/dK6mFQ
Concurrency and integrity
Using this [validates_uniqueness_of] validation method in conjunction with ActiveRecord::Base#save does not guarantee the absence of duplicate record insertions, because uniqueness checks on the application level are inherently prone to race conditions. For example, suppose that two users try to post a Comment at the same time, and a Comment’s title must be unique. At the database level, the actions performed by these users could be interleaved in the following manner: ...
It seems like there is some sort of race condition inside your code. To check this, I would first change the code so that the Facebook values are extracted first, and only then create the new Facebook user object. Then I would highly suggest that you write a test to check whether your function gets executed only once; it seems that it's being executed twice. And on top of this, there seems to be a race condition while waiting to get the Facebook results.