My API allows users to buy certain unique items, where each item can only be sold to one user. So when multiple users try to buy the same item, one user should get the response ok and the other user should get the response too_late.
Now there seems to be a bug in my code: a race condition. If two users try to buy the same item at the same time, they both get the answer ok. The issue is clearly reproducible in production. I have written a simple test that tries to reproduce it via RSpec:
context "when I try to provoke a race condition" do
  # ...
  before do
    @concurrent_requests = 2.times.map do
      Thread.new do
        Thread.current[:answer] = post "/api/v1/item/buy.json", :id => item.id
      end
    end
    @answers = @concurrent_requests.map do |th|
      th.join
      th[:answer].body
    end
  end

  it "should only sell the item to one user" do
    @answers.sort.should == ["ok", "too_late"].sort
  end
end
It seems that Capybara does not execute the requests at the same time. To test this, I put the following code into my controller action:
puts "Is it concurrent?"
sleep 0.2
puts "Oh Noez."
Expected output would be, if the requests are concurrent:
Is it concurrent?
Is it concurrent?
Oh Noez.
Oh Noez.
However, I get the following output:
Is it concurrent?
Oh Noez.
Is it concurrent?
Oh Noez.
Which tells me that Capybara requests are not run at the same time, but one at a time. How do I make my Capybara requests concurrent?
Multithreading with Capybara does not work, because Capybara uses a separate server thread which handles connections sequentially. But if you fork, it works.
I am using exit codes as an inter-process communication mechanism. If you do more complex stuff, you may want to use sockets.
This is my quick and dirty hack:
before do
  @concurrent_requests = 2.times.map do
    fork do
      # ActiveRecord explodes when you do not re-establish the sockets
      ActiveRecord::Base.connection.reconnect!
      answer = post "/api/v1/item/buy.json", :id => item.id
      # Calling exit! instead of exit so we do not invoke RSpec's `at_exit`
      # handlers, which clean up, measure code coverage and make things explode.
      case JSON.parse(answer.body)["status"]
      when "ok"
        exit! 128
      when "too_late"
        exit! 129
      end
    end
  end
  # Wait for the two requests to finish and get the exit codes.
  @exitcodes = @concurrent_requests.map do |pid|
    Process.waitpid(pid)
    $?.exitstatus
  end
  # Also reconnect in the main process, just in case things go wrong...
  ActiveRecord::Base.connection.reconnect!
  # And reload the item that has been modified by the separate processes,
  # for use in later `it` blocks.
  item.reload
end
it "should only accept one of two concurrent requests" do
  @exitcodes.sort.should == [128, 129]
end
I use rather exotic exit codes like 128 and 129 because processes exit with code 0 if the case block is not reached and 1 if an exception occurs. Neither should happen, so by using higher codes I notice when things go wrong.
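If exit codes feel too limited, a pipe per child is a simple step up before reaching for sockets. This is only a sketch of the mechanism, separate from the spec above, with the API call replaced by a placeholder string (POSIX fork is assumed):

```ruby
# Minimal sketch: pass arbitrary strings back from forked children
# through pipes instead of encoding results in exit codes.
readers = 2.times.map do
  reader, writer = IO.pipe
  fork do
    reader.close
    # In the real spec this would be the response body, e.g.
    #   answer = post "/api/v1/item/buy.json", :id => item.id
    writer.write "ok" # placeholder for answer.body
    writer.close
    exit! 0 # skip at_exit handlers, as in the exit-code version
  end
  writer.close
  reader
end

answers = readers.map do |reader|
  body = reader.read # returns once the child closes its end of the pipe
  reader.close
  body
end
Process.waitall # reap the children

# answers now holds one string per forked request
```

The parent must close its copy of each write end, otherwise `reader.read` would never see EOF.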
You can't make concurrent capybara requests. However, you can create multiple capybara sessions and use them within the same test to simulate concurrent users.
user_1 = Capybara::Session.new(:webkit) # or whatever driver
user_2 = Capybara::Session.new(:webkit)
user_1.visit 'some/page'
user_2.visit 'some/page'
# ... more tests ...
user_1.click_on 'Buy'
user_2.click_on 'Buy'
I'm using the with_advisory_lock gem to try and ensure that a record is created only once. Here's the github url to the gem.
I have the following code, which sits in an operation class that I wrote to handle creating user subscriptions:
def create_subscription_for user
  subscription = UserSubscription.with_advisory_lock("lock_%d" % user.id) do
    UserSubscription.where({ user_id: user.id }).first_or_create
  end
  # do more stuff on that subscription
end
and the accompanying test:
threads = []
user = FactoryBot.create(:user)
rand(5..10).times do
  threads << Thread.new do
    subject.create_subscription_for(user)
  end
end
threads.each(&:join)
expect(UserSubscription.count).to eq(1)
What I expect to happen:
The first thread to get to the block acquires the lock and creates a record.
Any other thread that gets to the block while it's being held by another thread waits indefinitely until the lock is released (as per docs)
As soon as the lock is released by the first thread that created the record, another thread acquires the lock and now finds the record because it was already created by the first thread.
What actually happens:
The first thread to get to the block acquires the lock and creates a record.
Any other thread that gets to the block while it's being held by another thread goes and executes the code in the block anyway and as a result, when running the test, it sometimes fails with a ActiveRecord::RecordNotUnique error (I have a unique index on the table that allows for a single user_subscription with the same user_id)
What is more weird is that if I add a sleep for a few hundred milliseconds in my method just before the first_or_create call, the test never fails:
def create_subscription_for user
  subscription = UserSubscription.with_advisory_lock("lock_%d" % user.id) do
    sleep 0.2
    UserSubscription.where({ user_id: user.id }).first_or_create
  end
  # do more stuff on that subscription
end
My questions are: "Why is adding the sleep 0.2 making the tests always pass?" and "Where do I look to debug this?"
Thanks!
UPDATE: Tweaking the tests a little bit causes them to always fail:
threads = []
user = FactoryBot.create(:user)
rand(5..10).times do
  threads << Thread.new do
    sleep
    subject.create_subscription_for(user)
  end
end
until threads.all? { |t| t.status == 'sleep' }
  sleep 0.1
end
threads.each(&:wakeup)
threads.each(&:join)
expect(UserSubscription.count).to eq(1)
I have also wrapped first_or_create in a transaction, which makes the test pass and everything work as expected:
def create_subscription_for user
  subscription = UserSubscription.with_advisory_lock("lock_%d" % user.id) do
    UserSubscription.transaction do
      UserSubscription.where({ user_id: user.id }).first_or_create
    end
  end
  # do more stuff on that subscription
end
So why is wrapping first_or_create in a transaction necessary to make things work?
Are you turning off transactional tests for this test case? I'm working on something similar and that proved to be important to actually simulating the concurrency.
See uses_transaction https://api.rubyonrails.org/classes/ActiveRecord/TestFixtures/ClassMethods.html
If transactions are not turned off, Rails will wrap the entire test in a transaction and this will cause all the threads to share one DB connection. Furthermore, in Postgres a session-level advisory lock can always be re-acquired within the same session. From the docs:
If a session already holds a given advisory lock, additional requests
by it will always succeed, even if other sessions are awaiting the
lock; this statement is true regardless of whether the existing lock
hold and new request are at session level or transaction level.
Based on that, I suspect that your lock can always be acquired, and therefore the .first_or_create call is always executed, which results in the intermittent RecordNotUnique exceptions.
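If that is the case, here is a sketch of the opt-out for a single example group (assuming rspec-rails; the cleanup model name is illustrative):

```ruby
RSpec.describe "concurrent subscription creation" do
  # rspec-rails >= 3.5; older versions use
  # `self.use_transactional_fixtures = false` instead.
  self.use_transactional_tests = false

  # With no wrapping transaction there is no automatic rollback,
  # so clean up any created rows by hand.
  after { UserSubscription.delete_all }
end
```

With the wrapping transaction gone, each thread can check out its own connection from the pool and the advisory lock is contested across real sessions.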
Inside a Rails application, users visit a page where I show a popup.
I want to update a record every time users see that popup.
To avoid race condition I use optimistic locking (so I added a field called lock_version in the popups table).
The code is straightforward:
# inside pages/show.html.erb
<%= render @popup %>

# and inside the popup partial...
...
<%
  Popup.transaction do
    begin
      popup.update_attributes(:views => popup.views + 1)
    rescue ActiveRecord::StaleObjectError
      retry
    end
  end
%>
The problem is that lots of users access the page, and MySQL exceeds the lock wait timeout.
So the website freezes and I get lots of these errors:
Lock wait timeout exceeded; try restarting transaction
That's because there are lots of pending requests trying to update the record with an outdated lock_version value.
How can I solve my problem?
You can use increment_counter because it produces a single SQL UPDATE query without locking.
But I think it will be better in your case to use a key-value store like Redis to store and update your popup counter, because it can do it faster than a SQL database.
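For reference, a minimal sketch of both suggestions (assuming a Popup ActiveRecord model; the Redis variant additionally assumes the redis gem and a reachable server):

```ruby
# Variant 1: increment_counter issues a single atomic UPDATE
# (UPDATE popups SET views = views + 1 WHERE id = ?), so there is
# no read-modify-write cycle and no lock_version check to fail.
Popup.increment_counter(:views, popup.id)

# Variant 2: keep the counter in Redis; INCR is atomic and fast.
redis = Redis.new
redis.incr("popup:#{popup.id}:views")
# A background job can periodically flush the value back to SQL.
```

Either way, the contended read-then-update that triggers StaleObjectError retries disappears.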
If you cannot go with an approach like @maxd noted in their reply, you can utilize an asynchronous library such as Sidekiq to process these sorts of requests (they'll just get backed up in the job queue).
lib/some_made_up_class.rb
def increment_popup(popup)
  Popup.transaction do
    begin
      popup.update_attributes(:views => popup.views + 1)
    rescue ActiveRecord::StaleObjectError
      retry
    end
  end
end
Then, in another piece of code (your controller, a service, or the view, though it's less ideal to put logic in the view layer):
SomeMadeUpClass.delay.increment_popup(popup)
# OR you can schedule it
SomeMadeUpClass.delay_for(1.second).increment_popup(popup)
This would have the effect of essentially queueing up your updates while freeing your page and, in theory, helping to reduce the timeouts you're hitting.
While there is certainly more to it than just adding a library (gem) like Sidekiq and the sample code I have here, I think asynchronous libraries/tools will help tremendously.
At first glance, I thought the new ruby 2.0 Thread.handle_interrupt was going to solve all my asynchronous interrupt problems, but unless I'm mistaken I can't get it to do what I want (my question is at the end and in the title).
From the documentation, I can see how I can avoid receiving interrupts in a certain block, deferring them to another block. Here's an example program:
duration = ARGV.shift.to_i
t = Thread.new do
  Thread.handle_interrupt(RuntimeError => :never) do
    5.times { putc '-'; sleep 1 }
    Thread.handle_interrupt(RuntimeError => :immediate) do
      begin
        5.times { putc '+'; sleep 1 }
      rescue
        puts "received #{$!}"
      end
    end
  end
end

sleep duration
puts "sending"
t.raise "Ka-boom!"
if t.join(20 + duration).nil?
  raise "thread failed to join"
end
When run with argument 2 it outputs something like this:
--sending-
--received Ka-boom!
That is, the main thread sends a RuntimeError to the other thread after two seconds, but that thread doesn't handle it until it gets into the inner Thread.handle_interrupt block.
Unfortunately, I don't see how this can help me if I don't know where my thread is getting created, because I can't wrap everything it does in a block. For example, in Rails, what would I wrap the Thread.handle_interrupt or begin...rescue...end blocks around? And wouldn't this differ depending on what webserver is running?
What I was hoping for is a way to register a handler, like the way Kernel.trap works. Namely, I'd like to specify handling code that's context-independent that will handle all exceptions of a certain type:
register_handler_for(SomeExceptionClass) do
... # handle the exception
end
What precipitated this question was how the RabbitMQ gem, Bunny, sends connection-level errors to the thread that opened the Bunny::Session using Thread#raise. These exceptions could end up anywhere, and all I want to do is log them, flag that the connection is unavailable, and continue on my way.
Ideas?
Ruby provides for this with the Ruby Queue object (not to be confused with an AMQP queue). It would be nice if Bunny required you to create a Ruby Queue before opening a Bunny::Session and pass it that Queue object, to which it would send connection-level errors instead of using Thread#raise to send them back to wherever. You could then simply provide your own Thread to consume messages through the Queue.
It might be worth looking inside the RabbitMQ gem code to see if you could do this, or asking the maintainers of that gem about it.
In Rails this is not likely to work unless you can establish a server-wide thread to consume from the ruby Queue, which of course would be web server specific. I don't see how you can do this from within a short-lived object, e.g. code for a Rails view, where the threads are reused but Bunny doesn't know that (or care).
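Here is a sketch of that consumer pattern with a plain Ruby Queue. The error-pushing side is hypothetical: Bunny does not expose such a hook today, so the push below stands in for what the library would do instead of calling Thread#raise.

```ruby
error_queue = Queue.new
logged = []

# One long-lived thread drains connection-level errors.
consumer = Thread.new do
  while (error = error_queue.pop)
    break if error == :shutdown # sentinel to stop the consumer
    # Here you would log the error and flag the connection as unavailable.
    logged << error.message
  end
end

# The library would push here instead of raising into some random thread:
error_queue.push(RuntimeError.new("connection reset"))

error_queue.push(:shutdown)
consumer.join
# logged == ["connection reset"]
```

Because Queue#pop blocks, the consumer thread costs nothing while idle, and errors are handled in one known place regardless of which thread produced them.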
I'd like to raise (ha-ha!) a pragmatic workaround. Here be dragons. I'm assuming you're building an application and not a library to be redistributed; if not, don't use this.
You can patch Thread#raise, specifically on your session thread instance.
module AsynchronousExceptions
  @exception_queue = Queue.new

  class << self
    attr_reader :exception_queue
  end

  def raise(*args)
    # We do this dance to capture an actual error instance, because
    # raise may be called with no arguments, a string, 3 arguments,
    # an error, or any object really. We want an actual error.
    # Rescue Exception, not just StandardError, so nothing slips through.
    # NOTE: This might need to be adjusted for proper stack traces.
    error = begin
      Kernel.raise(*args)
    rescue Exception => error
      error
    end
    AsynchronousExceptions.exception_queue.push(error)
  end
end

session_thread = Thread.current
session_thread.singleton_class.prepend(AsynchronousExceptions)
Bear in mind that exception_queue is essentially a global. We also patch raise for every caller on that thread, not just the reader loop. Luckily, there are few legitimate reasons to call Thread#raise, so you might just get away with this safely.
I'm trying to make a unit test to ensure that certain operations do / do not query the database. Is there some way I can watch for queries, or some counter I can check at the very worst?
If your intent is to discern whether or not Rails (ActiveRecord) actually caches queries, you don't have to write a unit test for those - they already exist and are part of Rails itself.
Edit:
In that case, I would probably see if I could adapt one of the strategies the rails team uses to test ActiveRecord itself. Check the following test from my link above:
def test_middleware_caches
  mw = ActiveRecord::QueryCache.new lambda { |env|
    Task.find 1
    Task.find 1
    assert_equal 1, ActiveRecord::Base.connection.query_cache.length
  }
  mw.call({})
end
You may be able to do something like the following:
def check_number_of_queries
  mw = ActiveRecord::QueryCache.new lambda { |env|
    # Assuming this object is set up to perform all its operations already
    MyObject.first.do_something_and_perform_side_operations
    puts ActiveRecord::Base.connection.query_cache.length.to_s
  }
  mw.call({})
end
I haven't tried such a thing, but it might be worth investigating further. If the above actually does return the number of cached queries waiting to happen, it should be trivial to change the puts to an assert for your test case.
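Another angle is to count the queries directly by subscribing to Active Record's "sql.active_record" instrumentation. A sketch (it assumes Rails/ActiveSupport is loaded by the time count_queries is actually called; MyObject is the hypothetical model from above):

```ruby
# Cached and schema queries are reported on the same channel; filter them out.
REAL_QUERY = ->(name) { !%w[CACHE SCHEMA].include?(name) }

def count_queries(&block)
  count = 0
  counter = lambda do |_name, _start, _finish, _id, payload|
    count += 1 if REAL_QUERY.call(payload[:name])
  end
  # subscribed attaches the listener only for the duration of the block
  ActiveSupport::Notifications.subscribed(counter, "sql.active_record", &block)
  count
end

# queries = count_queries { MyObject.first.do_something_and_perform_side_operations }
# assert_equal 2, queries
```

Unlike the query-cache trick, this counts every query executed inside the block, whether or not caching is enabled.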
My basic logic is to have an infinite loop running somewhere and test it as best as possible. The reason for having an infinite loop is not important (main loop for games, daemon-like logic...) and I'm more asking about best practices regarding a situation like that.
Let's take this code for example:
module Blah
  extend self

  def run
    some_initializer_method
    loop do
      some_other_method
      yet_another_method
    end
  end
end
I want to test the method Blah.run using Rspec (also I use RR, but plain rspec would be an acceptable answer).
I figure the best way to do it would be to decompose a bit more, like separating the loop into another method or something:
module Blah
  extend self

  def run
    some_initializer_method
    do_some_looping
  end

  def do_some_looping
    loop do
      some_other_method
      yet_another_method
    end
  end
end
... this allows us to test run and mock the loop... but at some point the code inside the loop needs to be tested.
So what would you do in such a situation?
Simply not testing this logic, meaning test some_other_method and yet_another_method but not do_some_looping?
Have the loop break at some point via a mock?
... something else?
What might be more practical is to execute the loop in a separate thread, assert that everything is working correctly, and then terminate the thread when it is no longer required.
thread = Thread.new do
  Blah.run
end

assert_equal 0, Blah.foo

thread.kill
In RSpec 3.3, adding this line
allow(subject).to receive(:loop).and_yield
to your before hook will simply yield to the block without any looping.
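Fleshed out into a full example group (assuming the Blah module from the question; the stubbed method names are illustrative):

```ruby
describe Blah do
  subject { Blah }

  before do
    # and_yield runs the block passed to `loop` exactly once
    allow(subject).to receive(:loop).and_yield
    allow(subject).to receive(:some_initializer_method)
    allow(subject).to receive(:some_other_method)
    allow(subject).to receive(:yet_another_method)
  end

  it "runs the loop body once" do
    subject.run
    expect(subject).to have_received(:some_other_method).once
  end
end
```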
How about having the body of the loop in a separate method, like calculateOneWorldIteration? That way you can spin the loop in the test as needed. And it doesn’t hurt the API, it’s quite a natural method to have in the public interface.
You cannot test something that runs forever.
When faced with a section of code that is difficult (or impossible) to test, you should:
Refactor to isolate the difficult-to-test part of the code. Make the untestable parts tiny and trivial, and comment them to ensure they are not later expanded to become non-trivial.
Unit test the other parts, which are now separated from the difficult-to-test section.
Cover the difficult-to-test part with an integration or acceptance test.
If the main loop in your game only goes around once, this will be immediately obvious when you run it.
What about mocking the loop so that it gets executed only the number of times you specify?
module Object
  private

  def loop
    3.times { yield }
  end
end
Of course, you mock this only in your specs.
I know this is a little old, but you can also use the yields method to fake a block and pass a single iteration to a loop method. This should allow you to test the methods you're calling within your loop without actually putting it into an infinite loop.
require 'test/unit'
require 'mocha'
class Something
  def test_method
    puts "test_method"
    loop do
      puts String.new("frederick")
    end
  end
end

class LoopTest < Test::Unit::TestCase
  def test_loop_yields
    something = Something.new
    something.expects(:loop).yields.with() do
      String.expects(:new).returns("samantha")
    end
    something.test_method
  end
end
# Started
# test_method
# samantha
# .
# Finished in 0.005 seconds.
#
# 1 tests, 2 assertions, 0 failures, 0 errors
I almost always use a catch/throw construct to test infinite loops.
Raising an error may also work, but that's not ideal, especially if your loop's block rescues all errors, including Exceptions. If your block doesn't rescue Exception (or some other error class), then you can subclass Exception (or another non-rescued class) and rescue your subclass:
Exception example
Setup
class RspecLoopStop < Exception; end
Test
blah.stub!(:some_initializer_method)
blah.should_receive(:some_other_method)
blah.should_receive(:yet_another_method)
# make sure it repeats
blah.should_receive(:some_other_method).and_raise RspecLoopStop

begin
  blah.run
rescue RspecLoopStop
  # all done
end
Catch/throw example:
blah.stub!(:some_initializer_method)
blah.should_receive(:some_other_method)
blah.should_receive(:yet_another_method)
blah.should_receive(:some_other_method).and_throw :rspec_loop_stop

catch :rspec_loop_stop do
  blah.run
end
When I first tried this, I was concerned that using should_receive a second time on :some_other_method would "overwrite" the first one, but this is not the case. If you want to see for yourself, add blocks to should_receive to see if it's called the expected number of times:
blah.should_receive(:some_other_method) { puts 'received some_other_method' }
Our solution to testing a loop that only exits on signals was to stub the exit-condition method to return false the first time and true the second, ensuring the loop is executed only once.
Class with infinite loop:
class Scheduling::Daemon
  def self.run
    loop do
      if daemon_received_stop_signal?
        break
      end
      # do stuff
    end
  end
end
spec testing the behaviour of the loop:
describe Scheduling::Daemon do
  describe ".run" do
    before do
      Scheduling::Daemon.should_receive(:daemon_received_stop_signal?).
        and_return(false, true) # execute loop once then exit
    end

    it "does stuff" do
      Scheduling::Daemon.run
      # assert stuff was done
    end
  end
end
:) I had this query a few months ago.
The short answer is there is no easy way to test that. You test-drive the internals of the loop, then you plonk it into a loop method and do a manual test that the loop works until the terminating condition occurs.
The easiest solution I found is to yield the loop one time and then return. I've used Mocha here.
require 'spec_helper'
require 'blah'

describe Blah do
  it 'loops' do
    Blah.stubs(:some_initializer_method)
    Blah.stubs(:some_other_method)
    Blah.stubs(:yet_another_method)
    Blah.expects(:loop).yields().then().returns()
    Blah.run
  end
end
We expect that the loop is actually executed, and it is guaranteed to exit after one iteration.
Nevertheless, as stated above, it's good practice to keep the looping method as small and simple as possible.
Hope this helps!