I have an array of prices: @price_queue.
It is persisted in PostgreSQL as Prices.find(1).price_list and is seeded.
When a transaction is initiated, it takes the next price in @price_queue and sends it to a payment processor for the charge.
def create
  price_bucket = Prices.find(1)
  price_bucket.with_lock do
    @price_queue = price_bucket.price_list
    @price = @price_queue.shift
    price_bucket.save
  end
  customer = Stripe::Customer.create(
    :email => params[:stripeEmail],
    :card => params[:stripeToken],
  )
  charge = Stripe::Charge.create(
    :customer => customer.id,
    :amount => @price * 100,
  )
  if charge["paid"]
    Pusher['price_update'].trigger('live', {message: @price_queue[0]})
  end
If the transaction succeeds, it should discard the @price it holds.
If it fails, the price should be placed back into @price_queue.
rescue Stripe::CardError => e
  flash[:error] = e.message
  @price_queue.unshift(@price)
  Pusher['price_update'].trigger('live', {message: @price_queue[0]})
  price_bucket.price_list = @price_queue
  price_bucket.save
  redirect_to :back
end
I have found a major bug when testing, at millisecond intervals, two failing transactions followed by a passing one.
price_queue = [100, 101, 102, 103, ...]
User 1 gets 100 (confirmed on Stripe dashboard)
User 2 gets 101 (confirmed on Stripe dashboard)
User 3 gets 102 (confirmed on Stripe dashboard)
Expected:
Assuming no unshift has occurred yet
price_queue = [103, 104, ...]
User 1 fails, puts 100 back
price_queue = [100, 103, ...]
User 2 fails, puts 101 back
price_queue = [101, 100, 103, ...]
User 3 passes, 102 disappears
What really happens:
price_queue = [101, 102, 103, 104, ...]
As we can see, 100 disappears even though it should be back in the queue; 101 is back in the queue (most probably not through the expected behaviour); and 102 is put back in the queue even though it should never traverse the rescue path at all.
I am using Puma on Heroku.
I have tried storing the price in session[:price], in cookie[:price], and assigning it to a local variable price, to no avail.
I've been reading around and figured this could be a scope problem caused by a multithreaded environment, where @price is leaking to other controller actions and being reassigned or mutated.
Any help would be greatly appreciated. (Also feel free to critique my code.)
This has nothing to do with instance variables leaking or anything like that - just some classic race conditions going on here. Two possible timelines:
request 1 fetches prices from the database (the array is [100,101,102])
request 2 fetches prices from the database (the array is [100,101,102] - a separate copy)
request 1 locks prices, removes a price and saves
request 2 locks prices, removes a price and saves
The important thing here is that request 2 is using an old copy of prices that doesn't include the changes made by request 1: both of the instances will shift the same value off the array. (The requests could be different threads on the same Puma worker, different workers, or even different dynos - it doesn't matter.)
Another failure scenario would be
request 1 fetches prices, removes a price and saves. The array in the database is [101,102,103,...], the in memory array is [101,102,103, ...]
request 2 fetches prices, removes a price and saves. The array in the database is [102,103,...]
request 2's stripe transaction succeeds
request 1's stripe transaction fails, so it puts 100 back onto the array and saves. Because you haven't reloaded from the database this overwrites the changes from request 2.
To fix this properly I'd split out the logic for acquiring and replacing a price into its own methods - something like:
class Prices < ActiveRecord::Base
  def self.with_locked_row(id)
    transaction do
      row = lock.find(id)
      result = yield row
      row.save # on older versions of Active Record you need to tell Rails about in-place changes
      result
    end
  end

  def self.get_price(id)
    with_locked_row(id) { |row| row.price_list.shift }
  end

  def self.restore_price(id, value)
    with_locked_row(id) { |row| row.price_list.unshift(value) }
  end
end
You'd then do
Prices.get_price(1) # get a price
Prices.restore_price(1, some_value) # put a price back if the charging failed
The key differences between this and your original code are that:
I acquire a locked row and update it, rather than acquiring a row and then locking it. This eliminates the window in the first scenario I outlined.
I don't keep using the same active record object after I've released the lock - as soon as the lock has been released someone else might be changing it behind your back.
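For completeness, a rough sketch of how the controller's create action might use these helpers around the Stripe calls (hypothetical - adapted from the code in the question, not taken from it):
def create
  @price = Prices.get_price(1) # the shift happens while the row lock is held
  customer = Stripe::Customer.create(
    :email => params[:stripeEmail],
    :card => params[:stripeToken],
  )
  charge = Stripe::Charge.create(
    :customer => customer.id,
    :amount => @price * 100,
  )
  if charge["paid"]
    # re-read for display only; the authoritative queue lives in the database
    Pusher['price_update'].trigger('live', {message: Prices.find(1).price_list[0]})
  end
rescue Stripe::CardError => e
  flash[:error] = e.message
  Prices.restore_price(1, @price) # the unshift happens under a fresh lock
  redirect_to :back
end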
You could also do this via optimistic locking (i.e. without explicit locks). Code-wise, the only change is that with_locked_row no longer takes the explicit lock and instead retries when a conflict is detected:
def self.with_locked_row(id)
  begin
    row = find(id)
    result = yield row
    row.save # on older versions of Active Record you need to tell Rails about in-place changes
    result
  rescue ActiveRecord::StaleObjectError
    retry
  end
end
For this to work you would need to add a non-null integer column named lock_version with a default value of 0.
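A minimal migration sketch for that column (the table name is assumed from the question; pick the migration superclass version matching your Rails version):
class AddLockVersionToPrices < ActiveRecord::Migration[5.2]
  def change
    # lock_version is the magic column name Active Record's optimistic locking looks for
    add_column :prices, :lock_version, :integer, default: 0, null: false
  end
end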
Which performs better depends on how much concurrency you experience, what other accesses there are to this table and so on. Personally I would default to optimistic locking unless I had a compelling reason to do otherwise.
Related
Imagine you have a Site that has one of those "take a number" thingys, where people take a number and then wait their turn. Let's say each number is an Order. The relationship is that a Site has many Orders.
At a given reset point, the manager of the site goes through and replaces the "take a number" thingys with new rolls of numbers; you can imagine this happening daily. However, the "take a number" thingys are manufactured with finite numbers, so each roll consists of, let's say, the numbers 100-999, and the next roll starts again at 100.
I'm trying to model the above behavior. Here's how I thought about approaching it:
On the Site parent, there's a start_number attribute. Hitting reset/swapping out the roll would reset start_number to 100, i.e., the first number of the roll
On the Order child, there's a callback that assigns a number. If the parent Site's start_number is 100, this is the first Order since the reset described in step #1, so this Order's number is 100 as well. The parent Site is then updated so that it's no longer in a reset state (e.g., start_number is no longer 100). For future Orders, the assigned number is just the next number after the previous order's
Here's the code:
class Site < ActiveRecord::Base
  has_many :orders
end

class Order < ActiveRecord::Base
  belongs_to :site
  before_save :assign_number

  def assign_number
    if site.start_number == 100
      self.number = 100
      site.update_column(:start_number, nil)
    else
      self.number = site.orders.where.not(number: nil).last.number + 1
    end
  end
end
But this is crappy because, unlike a real-world "take a number" thingy, 2 orders could get processed at the same time. There is NOT a unique constraint on Order.number, because the numbers do get reused (the roll gets reset), but it's obviously not helpful if, upon reset, 2 orders placed close together are both 100. You only want multiple orders to share a number if there truly has been a reset event, not just coincidental timing.
Another problem with this approach is when 1 order is closely followed by a second. For example, say the last number assigned was 415 and 2 orders come in in fast succession. The first is assigned 416; the second is so close behind that self.site.orders.where.not(number: nil).last.number still returns 415 (i.e., 416 hasn't saved yet), so the second order is also assigned 416.
Would be great to get ideas on how to better model the desired behavior. Thanks!
UPDATE Per @Fernand's comments, I am going to go with a pessimistic lock, which I'm implementing per notes here. So right now the code looks like:
def assign_number
  site = self.site.lock!
  if site.start_number == 100
    self.number = 100
    site.start_number = nil
  else
    last_order = self.site.orders.where.not(number: nil).last.lock!
    self.number = last_order.number + 1
    last_order.save! # releases lock
  end
  site.save! # releases lock, whether or not start_number was updated to nil
end
I'm not entirely sure how to spec this though... since writing a spec is by definition ordered... how do you force 2 orders to save close together to simulate this behavior?
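One idea is to race two threads in the spec, each checking out its own database connection - a rough sketch (this assumes a real database and non-transactional tests, since transactional fixtures would serialize the writes):
it "assigns distinct numbers to orders created at the same time" do
  site = Site.create!(start_number: 100)
  numbers = Queue.new # thread-safe collector
  2.times.map {
    Thread.new do
      # each thread needs its own connection from the pool
      ActiveRecord::Base.connection_pool.with_connection do
        numbers << site.orders.create!.number
      end
    end
  }.each(&:join)
  expect([numbers.pop, numbers.pop].uniq.size).to eq(2)
end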
One way is to create another table that will store the latest number.
#code :string not null
#number :bigint default(0), not null
class Serial < ApplicationRecord
  def self.get_latest_number
    transaction do
      counter = lock.find_or_create_by!(code: 'just_any_identification')
      counter.increment!(:number) # increments and saves the new value
      counter.number
    end
  end
end
then set the number, most probably during a callback:
class Order < ApplicationRecord
  before_create :set_number # or after_create

  private

  def set_number
    self.number = Serial.get_latest_number if number.blank?
  end
end
take note of the transaction lock.
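A possible migration matching the annotated schema above (names are illustrative):
class CreateSerials < ActiveRecord::Migration[5.2]
  def change
    create_table :serials do |t|
      t.string :code, null: false
      t.bigint :number, default: 0, null: false
      t.timestamps
    end
    add_index :serials, :code, unique: true # one counter row per code
  end
end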
I have an accounting system I wrote which follows standard double-entry accounting practices.
There is a feature of double-entry accounting called a 'trial balance', where you can verify the entire system is correct, because when you run it, it will always equal 0.00.
I have written tests and always run my trial balance when the system is 'stopped', but under write-heavy database load while seeding lots of records, I noticed my trial balance is WRONG about 1 out of 10 tries.
When it's at rest (no inserts), it's always correct at 0.00, however.
When I insert transactions they're always in a transaction, like this:
2000.times do |i|
  ActiveRecord::Base.transaction do
    puts "#{i} ==================================================================="
    entry = JournalEntry.create!(description: 'Purchase mower on credit', user: user)
    entry.line_items.create!(amount: Money.from_amount(1551.75).cents, account: property.accounts.find_by(name: 'Equipment'), side: :debit)
    entry.line_items.create!(amount: Money.from_amount(1551.75).cents, account: property.accounts.find_by(name: 'Accounts Payable'), side: :credit)
  end
end
The fact that it breaks under load makes me think I'm not understanding something vital about how Rails transactions work.
What could be causing this?
FWIW my trial balance function (GeneralLedger.new(property).trial_balance) executes the following pseudo-SQL (NOT in a transaction):
SELECT sum(...) WHERE account = 'asset'
SELECT sum(...) WHERE account = 'liability'
SELECT sum(...) WHERE account = 'equity'
SELECT sum(...) WHERE account = 'income'
SELECT sum(...) WHERE account = 'expense'
I then add them together according to the Accounting Formula to arrive at 0.00:
def trial_balance
  balance_category(:asset) - (balance_category(:liability) + balance_category(:equity) + balance_category(:income) - balance_category(:expense))
end
The balance_category function is what triggers each SELECT for a total of 5 times, once for each category.
Because it's returning a non-zero result, it must somehow be SELECTing while a batch is halfway inserted. I have no idea how this is happening.
I could understand it if the creation of the journal entry/line items were not in a transaction and it was SELECTing half-inserted rows, but shouldn't it only see the group as a whole after the transaction ends?
Each of those SELECTs runs as its own statement, so each can see a different committed state of the table while inserts are happening; a single statement sees one consistent snapshot. If you want to avoid repeated statements, collapse them into one, something of this form:
SELECT account, SUM(...) AS amount FROM ...
WHERE account IN ('asset', 'liability', ...)
GROUP BY account
You can fetch these like this:
where(account: ACCOUNT_TYPES).group(:account).pluck('account, SUM(...)')
Where ACCOUNT_TYPES is an array of the account types you need to fetch.
You can always take this pluck form and convert it to a quick look-up hash with .to_h, then use it like this:
balance_category = ...where(...)...pluck(...).to_h
balance_category[:asset]
If you need a default value, consider:
balance_category = Hash.new(0).merge(...where(...)...pluck(...).to_h)
Where that default can be integer (0) or a float (0.0) or anything at all.
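Putting it together, a sketch of trial_balance on top of the single grouped query (LineItem, the amount column, and string account values are assumptions based on the pseudo-SQL above):
ACCOUNT_TYPES = %w[asset liability equity income expense]

def trial_balance
  # one statement sees one consistent snapshot, even during concurrent inserts
  sums = Hash.new(0).merge(
    LineItem.where(account: ACCOUNT_TYPES).group(:account).sum(:amount)
  )
  sums['asset'] - (sums['liability'] + sums['equity'] + sums['income'] - sums['expense'])
end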
I've flirted with learning web dev in the past and haven't had the time, as I am a full-time Business Student.
I started digging back in today and decided to take a break from the learning to practice what I've learned by writing a simple program that allows the user to enter their bills and will eventually calculate how much disposable income they have after their bills are paid each month.
My problem is that the program runs through perfectly, and the loop is continuing/exiting when it should, but either the program is not storing the user's input in the hash like I want it to, or it's not displaying all the bills entered as it should. Here is my program:
# This program allows you to assign monthly payments
# to their respective bills and will automatically
# calculate how much disposable income you have
# after your bills are paid
# Prompts user to see if they have any bills to enter
puts "Do you have any bills you would like to enter, Yes or No?"
new_bill = gets.chomp.downcase
until new_bill == 'no'
  # Creates a hash to store a key/value pair
  # of the bill name and the respective payment amount
  bills = {}
  puts "Enter the bill name: "
  bill_name = gets.chomp
  puts "How much is this bill?"
  pay_amt = gets.chomp
  bills[bill_name] = pay_amt
  puts "Would you like to add another bill, Yes or No?"
  new_bill = gets.chomp.downcase
end

bills.each do |bill_name, pay_amt|
  puts "Your #{bill_name} bill is $#{pay_amt}."
end
My questions are:
Is my hash set up properly to store the key/value pairs from the user's input?
If not, how can I correct it?
I'm getting only the last bill that was entered by the user. I've tried entering several bills at a time, but I only get the last entry.
As I stated, I'm a noob, but I'm extremely ambitious to learn. I've referred to the Ruby docs on hashes to see if there is an error in my code but was unable to locate a solution (still finding my way around the Ruby docs).
Any help is appreciated! Also, if you have any recommendations on ways I can make my code more efficient, could you point me in the direction where I can obtain the appropriate information to do so?
Thank you.
Edit:
The main question has been answered. This is a follow-up question to the same program. I'm getting an error message:
budget_calculator.rb:35:in `-': Hash can't be coerced into Float (TypeError)
        from budget_calculator.rb:35:in `<main>'
From the following code (keep in mind the program above):
# Displays the users bills
bills_hash.each {|key,value| puts "Your #{key} bill is $#{value}."}
# Get users net income
puts "What is your net income?"
net_income = gets.chomp.to_f
#Calculates the disposable income of the user
disposable_income = net_income - bills_hash.each {|value| value}
puts disposable_income
I understand the error is appearing from this line of code:
disposable_income = net_income - bills_hash.each {|value| value}
I'm just not understanding why this is unacceptable. I'm trying to subtract all of the values in the hash (pay_amt) from the net income to derive the disposable income.
This is the part that's getting you:
bills = {}
You're resetting the hash every time the program loops. Try declaring bills at the top of the program.
As to your second question about bills_hash, it's not working because the program is attempting to subtract a hash from a float. You've got the right idea, but the way it's set up, it's not going to just subtract each value from the net_income in turn.
The return value of #each is the original hash that you were looping over. You can see this if you open IRB and type
[1,2,3].each {|n| puts n}
The block is evaluated for each element of the list, but the final return value is the original list:
irb(main):007:0> [1,2,3].each {|n| puts n}
1
2
3
=> [1, 2, 3] # FINAL RETURN VALUE
So according to the order of operations, your #each block iterates, then returns the original bills_hash, and Ruby then tries to subtract that hash from net_income, which looks like this (assuming my net_income is 1000):
1000 - {rent: 200, video_games: 800}
hence the error.
There are a couple ways you could go about fixing this. One would be to sum all of the values in bills_hash as its own variable, then subtract that from the net_income:
total_expenditures = bills_hash.values.inject(&:+) # sum the values
disposable_income = net_income - total_expenditures
Using the same #inject method, this could also be done in one function call:
disposable_income = bills_hash.values.inject(net_income, :-)
# starting with net_income, subtract each value in turn
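One caveat worth flagging (it comes from the loop earlier, not from this answer): gets.chomp stores the amounts as strings, so convert them before doing arithmetic, e.g.:
total_expenditures = bills_hash.values.inject(0.0) { |sum, amt| sum + amt.to_f } # "50" -> 50.0
disposable_income = net_income - total_expenditures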
See the documentation for Enumerable#inject.
It's a very powerful and useful method to know. But make sure you go back and understand how return values work and why the original setup was raising an exception.
I noticed that Rails can have concurrency issues with multiple servers and would like to force my model to always lock. Is this possible in Rails, similar to the way unique constraints force data integrity? Or does it just require careful programming?
Terminal One
irb(main):033:0* Vote.transaction do
irb(main):034:1* v = Vote.lock.first
irb(main):035:1> v.vote += 1
irb(main):036:1> sleep 60
irb(main):037:1> v.save
irb(main):038:1> end
Terminal Two, while sleeping
irb(main):240:0* Vote.transaction do
irb(main):241:1* v = Vote.first
irb(main):242:1> v.vote += 1
irb(main):243:1> v.save
irb(main):244:1> end
DB Start
select * from votes where id = 1;
id | vote | created_at | updated_at
----+------+----------------------------+----------------------------
1 | 0 | 2013-09-30 02:29:28.740377 | 2013-12-28 20:42:58.875973
After execution
Terminal One
irb(main):040:0> v.vote
=> 1
Terminal Two
irb(main):245:0> v.vote
=> 1
DB End
select * from votes where id = 1;
id | vote | created_at | updated_at
----+------+----------------------------+----------------------------
1 | 1 | 2013-09-30 02:29:28.740377 | 2013-12-28 20:44:10.276601
Other Example
http://rhnh.net/2010/06/30/acts-as-list-will-break-in-production
You are correct that transactions by themselves don't protect against many common concurrency scenarios, incrementing a counter being one of them. There isn't a general way to force a lock; you have to ensure you use it everywhere necessary in your code.
For the simple counter incrementing scenario there are two mechanisms that will work well:
Row Locking
Row locking will work as long as you do it everywhere in your code where it matters. Knowing where it matters may take some experience to get an instinct for :/. If, as in your above code, you have two places where a resource needs concurrency protection and you only lock in one, you will have concurrency issues.
You want to use the with_lock form; this does a transaction and takes a row-level lock. (Table locks are obviously going to scale much more poorly than row locks, although for tables with few rows there is no difference, as PostgreSQL will use a table lock anyway - not sure about MySQL.) It looks like this:
v = Vote.first
v.with_lock do
  v.vote += 1
  sleep 10
  v.save
end
The with_lock creates a transaction, locks the row the object represents, and reloads the objects attributes all in one step, minimizing the opportunity for bugs in your code. However this does not necessarily help you with concurrency issues involving the interaction of multiple objects. It can work if a) all possible interactions depend on one object, and you always lock that object and b) the other objects each only interact with one instance of that object, e.g. locking a user row and doing stuff with objects which all belong_to (possibly indirectly) that user object.
Serializable Transactions
The other possibility is to use serializable transactions. Since 9.1, PostgreSQL has had "real" serializable transactions. This can perform much better than locking rows (though it is unlikely to matter in the simple counter-incrementing use case).
The best way to understand what serializable transactions give you is this: if you take all the possible orderings of all the (isolation: :serializable) transactions in your app, what happens when your app is running is guaranteed to always correspond with one of those orderings. With ordinary transactions this is not guaranteed to be true.
However, what you have to do in exchange is to take care of what happens when a transaction fails because the database is unable to guarantee that it was serializable. In the case of the counter increment, all we need to do is retry:
begin
  Vote.transaction(isolation: :serializable) do
    v = Vote.first
    v.vote += 1
    sleep 10 # this is to simulate concurrency
    v.save
  end
rescue ActiveRecord::StatementInvalid => e
  sleep rand/100 # this is NECESSARY in scalable real-world code,
                 # although the amount of sleep is something you can tune.
  retry
end
Note the random sleep before the retry. This is necessary because failed serializable transactions have a non-trivial cost, so if we don't sleep, multiple processes contending for the same resource can swamp the db. In a heavily concurrent app you may need to gradually increase the sleep with each retry. The random is VERY important to avoid harmonic deadlocks -- if all the processes sleep the same amount of time they can get into a rhythm with each other, where they all are sleeping and the system is idle and then they all try for the lock at the same time and the system deadlocks causing all but one to sleep again.
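A sketch of that gradual, randomized backoff (the cap and multiplier are arbitrary knobs to tune, not canonical values):
attempts = 0
begin
  Vote.transaction(isolation: :serializable) do
    v = Vote.first
    v.vote += 1
    v.save
  end
rescue ActiveRecord::StatementInvalid
  attempts += 1
  raise if attempts > 5 # give up eventually instead of spinning forever
  sleep rand * 0.01 * attempts # randomized sleep that grows with each retry
  retry
end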
When the transaction that needs to be serializable involves interaction with a source of concurrency other than the database, you may still have to use row-level locks to accomplish what you need. An example of this would be when a state machine transition determines what state to transition to based on a query to something other than the db, like a third-party API. In this case you need to lock the row representing the object with the state machine while the third party API is queried. You cannot nest transactions inside serializable transactions, so you would have to use object.lock! instead of with_lock.
Another thing to be aware of is that any objects fetched outside the transaction(isolation: :serializable) should have reload called on them before use inside the transaction.
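For example (sketch):
v = Vote.first # fetched outside the transaction, possibly stale
Vote.transaction(isolation: :serializable) do
  v.reload # re-read inside the transaction so we work with its snapshot
  v.vote += 1
  v.save
end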
ActiveRecord always wraps save operations in a transaction.
For your simple case it might be best to just use a SQL update instead of performing logic in Ruby and then saving. Here is an example which adds a model method to do this:
class Vote < ActiveRecord::Base
  def vote!
    # one atomic UPDATE statement, so no lock is needed
    self.class.where(id: id).update_all("vote = vote + 1")
  end
end
This method avoids the need for locking in your example. If you need more general database locking, see David's suggestion.
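Usage would simply be (matching the votes table from the question):
Vote.first.vote! # issues UPDATE votes SET vote = vote + 1 WHERE id = ... in one atomic statement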
You can do the following in your model:
class Vote < ActiveRecord::Base
  validate :handle_conflict, on: :update
  attr_accessible :original_updated_at
  attr_writer :original_updated_at

  def original_updated_at
    @original_updated_at || updated_at
  end

  def handle_conflict
    # If we want to use this across multiple models
    # then extract this to a module
    if @conflict || updated_at.to_f > original_updated_at.to_f
      @conflict = true
      @original_updated_at = nil
      # If two updates are made at the same time, a validation error
      # is displayed and the fields with conflicting changes are listed
      errors.add :base, 'This record changed while you were editing'
      changes.each do |attribute, values|
        errors.add attribute, "was #{values.first}"
      end
    end
  end
end
The original_updated_at is a virtual attribute that gets set from the form. handle_conflict fires when the record is updated and checks whether the updated_at value in the database is later than the hidden one (defined on your page). By the way, you should define the following in your app/views/votes/_form.html.erb:
<%= f.hidden_field :original_updated_at %>
If there is a conflict, a validation error is raised.
And if you are using Rails 4 you won't have attr_accessible, and you will need to add :original_updated_at to the vote_params method in your controller.
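A minimal sketch of that strong-parameters change (attribute names assumed from this example):
def vote_params
  params.require(:vote).permit(:vote, :original_updated_at)
end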
Hopefully this sheds some light.
For simple +1
Vote.increment_counter :vote, Vote.first.id
Because vote was used for both the table name and the field name, here is the general form showing how the two are used:
TableName.increment_counter :field_name, id_of_the_row
How can I update/save multiple instances of a model in one shot, using a transaction block in Rails?
I would like to update values for hundreds of records; the values are different for each record. This is not a mass-update situation for one attribute. Model.update_all(attr: value) is not appropriate here.
MyModel.transaction do
  things_to_update.each do |thing|
    thing.score = rand(100) + rand(100)
    thing.save
  end
end
save seems to issue its own transaction, rather than batching the updates into the surrounding transaction. I want all the updates to go in one big transaction.
How can I accomplish this?
Say you knew that you wanted to set the things with ids 1, 2, and 3 to have scores 2, 8, and 64 (as opposed to just random numbers); you could:
UPDATE
  things AS t
SET
  score = c.score
FROM
  (VALUES
    (1, 2),
    (2, 8),
    (3, 64)
  ) AS c(id, score)
WHERE c.id = t.id;
So with Rails, you'd use ActiveRecord::Base.connection#execute to execute a block similar to the above, but with the correct value string interpolated.
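A rough sketch of that (assuming integer ids and scores; quote anything user-supplied with connection.quote):
values = things_to_update.map { |t| "(#{t.id.to_i}, #{t.score.to_i})" }.join(", ")
ActiveRecord::Base.connection.execute(<<~SQL)
  UPDATE things AS t
  SET score = c.score
  FROM (VALUES #{values}) AS c(id, score)
  WHERE c.id = t.id
SQL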
I'm not sure, but possibly you are confusing multiple transactions with multiple queries.
The code you've posted will create a single transaction (e.g. if an exception occurred then all of the updates would be rolled back), but each save would result in a separate update query.
If it's possible to perform the update using SQL rather than Ruby code then that would probably be the best way to go.
I think you just need to change the method save to save!.
If any of the updates fail, save! raises an exception. When an exception occurs within a transaction block, the transaction rolls back the entire operation.
MyModel.transaction do
  things_to_update.each do |thing|
    thing.score = rand(100) + rand(100)
    thing.save!
  end
end