For context, this question arose because we are migrating from Rails 5 to Rails 6 and introducing reader/writer database connections via the new replication features.
Our specific problem is with request specs, with an eye towards using transactional fixtures. When we run our request spec files in isolation, they pass. When they run as part of a multiple-file pass (such as a full bundle exec parallel_rspec pass used on CircleCI), they fail. If we turn off transactional fixtures, the tests take far too long to run, but they pass.
Using byebug, we've poked in and determined that the problem is that our test data has been written to, and is accessible by, the writer DB connection, but the route is attempting to use the reader DB connection to read it. That is, ActiveRecord::Base.connected_to(role: :reading) { puts Foo.count } prints 0, while the same code connected to the writing role prints a non-zero count.
The problem from there seems fairly obvious: because we're using transactional tests/fixtures, the data is never committed to the DB. It's only visible on the connection it was written on. The request spec is reading from the 'right' DB for the call (a GET request should use the reader DB), but in the context of tests that produces errors.
It seems like this is a fairly obvious use case that either Rails or RSpec should have a tool for handling; we just don't seem to be able to find the relevant documentation.
You need to tell the test environment that it should be using a single connection for both roles. There are multiple ways of doing this:
You can configure your test environment not to use replicas at all. See "Setting up your application" in the Rails multiple databases guide for examples of configuring with and without a replica, then reproduce the non-replica version in your database.yml for the test environment only.
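For illustration, here is a sketch of what that might look like (adapter and database names are placeholders, and this assumes your connects_to role mapping is only applied in environments that actually define a replica):

production:
  primary:
    adapter: postgresql
    database: myapp_production
  primary_replica:
    adapter: postgresql
    database: myapp_production_replica
    replica: true

test:
  adapter: postgresql
  database: myapp_test

With only a single test database configured, reads and writes go through the same connection, so data written inside the test transaction is visible to the specs.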
You can use connected_to within your specs themselves so that those tests are forced to use the specific connection you want them to use. One way to do this is with around hooks (a global variant is sketched below, after the third option):
describe "around filter" do
around(:each) do |example|
puts "around each before"
ActiveRecord::Base.connected_to(role: :writing) { example.run }
puts "around each after"
end
it "gets run in order" do
puts "in the example"
end
end
You can monkey patch your ActiveRecord configuration in rails_helper so that it doesn't use replicas (but I'd really recommend option #1 over this one).
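If you do go the rails_helper route, a lighter-weight sketch is to apply the around hook from option #2 globally, rather than patching ActiveRecord internals (the hook body is the same connected_to call as above):

# spec/rails_helper.rb
RSpec.configure do |config|
  # Force every example onto the writing connection so that reads issued
  # during a request see the uncommitted data from the test transaction.
  config.around(:each) do |example|
    ActiveRecord::Base.connected_to(role: :writing) { example.run }
  end
end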
I have a fairly complex service that creates and saves a lot of domain instances. Because of the logic, I need to create different instances at different times and, in the middle, check certain conditions: not only that each instance is valid, but also, for example, that certain files exist in the file system, etc.
I'm testing the incorrect cases, where the service throws an exception, and I need to verify that no instances were persisted if an exception is thrown.
One specific test case was failing: even though the expected exception was thrown, a domain instance was still saved to the DB. Then I read that because the integration test is itself transactional, the rollback really occurs at the end of the test rather than right after using the service, which is where I check conditions in the "then" section of the Spock test case.
So if the rollback only occurs after the point where I can test for it, I can't test it :(
Then I read that making the integration test non-transactional might help, so I added:
static transactional = false
After this, all my other tests started to fail!
My question is: what is the right way of testing services that should roll back when an exception is thrown? I just need to verify that after an exception occurs there are no alterations to the database (since this is a healthcare application, data consistency is key).
FYI:
This is the service I need to test: https://github.com/ppazos/cabolabs-ehrserver/blob/master/grails-app/services/com/cabolabs/ehrserver/parsers/XmlService.groovy
This is my current test: https://github.com/ppazos/cabolabs-ehrserver/blob/master/test/integration/com/cabolabs/ehrserver/parsers/XmlServiceIntegrationSpec.groovy
Thanks!
If you can modify your code to rely on the framework, then you don't need to test the framework.
Grails uses Spring, and the Spring transaction manager rolls back automatically for RuntimeExceptions but not for checked exceptions. If you throw a RuntimeException for your various validation scenarios, Spring will handle the rollback for you. If you rely on those facts, your test can then stop at verifying that a RuntimeException is thrown.
If you want to use a checked exception (such as javax.xml.bind.ValidationException), then you can use an annotation to ensure it causes a rollback:
@Transactional(rollbackFor = ValidationException.class)
def processCommit( ....
and then your test need only check
def ex = thrown(ValidationException)
See http://docs.spring.io/spring/docs/current/spring-framework-reference/html/transaction.html#transaction-declarative-attransactional-settings for more information.
Background: I am unit testing a game server built on Rails 4.1.1 with a separate socket.io/node.js server for socket messaging. Messages from node.js to Rails go through RESTful HTTP requests.
A single test case runs as follows:
(1) rake unit test --> (2) rails controller --> (3) node.js/socket.io --> (4) rails controller
Problem description: Some DB entries are created with ActiveRecord at step (2). Then, upon receiving a socket message at step (3), node.js sends an HTTP request back to the Rails controller, and finally(!!) at step (4) the Rails controller tries to access the DB entries from step (2), but the test DB contents are empty at this point.
Question: It seems like cleaning up the test DB is rake's desired behavior, but how can I persist the test DB across test cases and prevent this problem?
Thanks in advance
You should prepare and send the request to the node app inside the test and assert the response there.
But that's not good practice. The better solution would be HTTP mocks (like the webmock gem). This approach will save lots of time in the future.
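For example, with webmock you could stub the node.js endpoint instead of performing a real round-trip (a sketch; the URL and response body are made up for illustration):

# test/test_helper.rb
require 'webmock/minitest'

# Inside a test: stub the socket server's HTTP endpoint so the
# request never leaves the test process.
stub_request(:post, "http://localhost:8080/notify")
  .to_return(status: 200, body: '{"ok":true}',
             headers: { "Content-Type" => "application/json" })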
Luckily, I figured out the solution.
By default, rake wraps each test in a separate DB transaction and rolls it back on cleanup. Moreover, whatever requests/queries come from outside the TestCase are not included in the transaction and are not visible inside the test case.
To avoid such behavior, we have to disable transactional fixtures in test/test_helper.rb:
class ActiveSupport::TestCase
  self.use_transactional_fixtures = false
end
As a downside, we have to clean up the test DB manually. So @Alexander Shlenchack's point stands: avoid such practice in the first place and use HTTP/socket mocks in the future.
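A minimal database_cleaner setup for that manual cleanup might look like this (a sketch assuming the truncation strategy; see the links below for a fuller configuration):

# test/test_helper.rb
require 'database_cleaner'

DatabaseCleaner.strategy = :truncation

class ActiveSupport::TestCase
  self.use_transactional_fixtures = false

  # Truncate the DB around each test instead of relying on transactions.
  setup    { DatabaseCleaner.start }
  teardown { DatabaseCleaner.clean }
end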
Here is a brief summary: http://devblog.avdi.org/2012/08/31/configuring-database_cleaner-with-rails-rspec-capybara-and-selenium/
And a related question: Rails minitest, database cleaner how to turn use_transactional_fixtures = false
I am trying to test an app that uses the gem devise_token_auth, which basically adds a couple of extra DB reads/writes on almost every request (to verify and update user access tokens).
Everything is working fine, except when testing a controller action that includes several additional DB reads/writes. In those cases, the terminal locks up and I'm forced to kill the ruby process via Activity Monitor.
Sometimes I get error messages like this:
ruby /Users/evan/.rvm/gems/ruby-2.1.1/bin/rspec spec/controllers/api/v1/messages_controller_spec.rb(1245,0x7fff792bf310) malloc: *** error for object 0x7ff15fb73c00: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
I have no idea how to interpret that. I'm 90% sure the problem is due to this gem and the extra DB activity it causes on each request, because when I revert to my previous, less intensive auth, all the issues go away. I've also gotten things under control by giving Postgres some extra time on the offending tests:
after :each do
  sleep 2
end
This works fine for all cases except one, which requires a timeout before the expect, otherwise it throws this error:
Failure/Error: expect(@user1.received_messages.first.read?).to eq true
ActiveRecord::StatementInvalid:
PG::UnableToSend: another command is already in progress
: SELECT "messages".* FROM "messages" WHERE "messages"."receiver_id" = $1 ORDER BY "messages"."id" ASC LIMIT 1
which, to me, points to the DB issue again.
Is there anything else I could be doing to track down/control these errors? Any rspec settings I should look into?
If you are running parallel rspec tasks, that could be triggering this. When we've run into issues like this, we have forced those tests to run in a single, non-parallel instance of rspec in our CI using tags.
Try something like this:
context 'when both records get updated in one job', :non_parallel do
  it { is_expected.to eq 2 }
end
And then invoke rspec singularly on the non_parallel tag:
rspec --tag non_parallel
The bulk of your tests (those not tagged with non_parallel) can still be run in parallel in your CI solution (e.g. Jenkins) for performance; you can exclude the tag with rspec --tag ~non_parallel.
Of course, be careful applying this band-aid. It is always better to identify what is not race-safe in your code, since that race could happen in the real world.
Running a rails site right now using SQLite3.
About once every 500 requests or so, I get an
ActiveRecord::StatementInvalid (SQLite3::BusyException: database is locked:...
What's the way to fix this that would be minimally invasive to my code?
I'm using SQLite at the moment because you can store the DB in source control, which makes backing up natural, and you can push changes out very quickly. However, it's obviously not really set up for concurrent access. I'll migrate over to MySQL tomorrow morning.
You mentioned that this is a Rails site. Rails allows you to set the SQLite retry timeout in your database.yml config file:
production:
  adapter: sqlite3
  database: db/mysite_prod.sqlite3
  timeout: 10000
The timeout value is specified in milliseconds. Increasing it to 10 or 15 seconds should decrease the number of BusyExceptions you see in your log.
This is just a temporary solution, though. If your site needs true concurrency then you will have to migrate to another db engine.
By default, SQLite returns immediately with a blocked/busy error if the database is busy and locked. You can ask it to wait and keep trying for a while before giving up. This usually fixes the problem, unless you do have thousands of threads accessing your DB, in which case I agree SQLite would be inappropriate.
// set SQLite to wait and retry for up to 100ms if database locked
sqlite3_busy_timeout( db, 100 );
All of these things are true, but it doesn't answer the question, which is likely: why does my Rails app occasionally raise a SQLite3::BusyException in production?
@Shalmanese: what is the production hosting environment like? Is it on a shared host? Is the directory that contains the sqlite database on an NFS share? (Likely, on a shared host.)
This problem likely has to do with the phenomenon of file locking on NFS shares and SQLite's lack of concurrency.
If you have this issue but increasing the timeout does not change anything, you might have another concurrency issue with transactions. Here it is in summary:
Begin a transaction (acquires a SHARED lock)
Read some data from the DB (we are still using the SHARED lock)
Meanwhile, another process starts a transaction and writes data (acquiring the RESERVED lock)
Then you try to write, so you now request the RESERVED lock
SQLite raises the SQLITE_BUSY exception immediately (independently of your timeout) because your previous reads may no longer be accurate by the time it can get the RESERVED lock
One way to fix this is to patch the ActiveRecord sqlite adapter to acquire the RESERVED lock directly at the beginning of the transaction, by passing the :immediate option to the driver. This will decrease performance a bit, but at least all your transactions will honor your timeout and occur one after the other. Here is how to do this using prepend (Ruby 2.0+); put this in an initializer:
module SqliteTransactionFix
  def begin_db_transaction
    log('begin immediate transaction', nil) { @connection.transaction(:immediate) }
  end
end

module ActiveRecord
  module ConnectionAdapters
    class SQLiteAdapter < AbstractAdapter
      prepend SqliteTransactionFix
    end
  end
end
Read more here: https://rails.lighthouseapp.com/projects/8994/tickets/5941-sqlite3busyexceptions-are-raised-immediately-in-some-cases-despite-setting-sqlite3_busy_timeout
Just for the record: in one application with Rails 2.3.8 we found out that Rails was ignoring the "timeout" option Rifkin Habsburg suggested.
After some more investigation we found a possibly related bug in Rails dev: http://dev.rubyonrails.org/ticket/8811. And after some more investigation we found the solution (tested with Rails 2.3.8):
Edit this ActiveRecord file: activerecord-2.3.8/lib/active_record/connection_adapters/sqlite_adapter.rb
Replace this:
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction }
end
with
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction(:immediate) }
end
And that's all! We haven't noticed a performance drop, and now the app supports many more requests without breaking (it waits for the timeout). SQLite is nice!
bundle exec rake db:reset
It worked for me; it will reset the database and show any pending migrations.
SQLite can allow other processes to wait until the current one finishes.
I use this line to connect when I know I may have multiple processes trying to access the Sqlite DB:
conn = sqlite3.connect('filename', isolation_level = 'exclusive')
According to the Python Sqlite Documentation:
You can control which kind of BEGIN statements pysqlite implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections.
I had a similar problem with rake db:migrate. The issue was that the working directory was on an SMB share.
I fixed it by copying the folder over to my local machine.
Most answers here are for Rails rather than raw Ruby, and the OP's question IS for Rails, which is fine. :)
So I just want to leave this solution here, should any raw Ruby user have this problem and not be using a yml configuration.
After instantiating the connection, you can set it like this:
db = SQLite3::Database.new "#{path_to_your_db}/your_file.db"
# The timeout is in ms: it will retry for up to 15 seconds before raising an
# exception. This can be any number you want; the default value is 0.
db.busy_timeout = 15000
-- Open the database
db = sqlite3.open("filename")

-- Ten attempts are made to proceed if the database is locked
function my_busy_handler(attempts_made)
  if attempts_made < 10 then
    return true
  else
    return false
  end
end

-- Set the new busy handler
db:set_busy_handler(my_busy_handler)

-- Use the database
db:exec(...)
What table is being accessed when the lock is encountered?
Do you have long-running transactions?
Can you figure out which requests were still being processed when the lock was encountered?
Argh, the bane of my existence over the last week. SQLite3 locks the db file when any process writes to the database, i.e. any UPDATE/INSERT-type query (and also select count(*), for some reason). However, it handles multiple reads just fine.
So, I finally got frustrated enough to write my own thread-locking code around the database calls. By ensuring that the application can only have one thread writing to the database at any point, I was able to scale to thousands of threads.
And yeah, it's slow as hell. But it's also fast enough and correct, which is a nice property to have.
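In Ruby, the idea boils down to something like this (a sketch of the pattern, not the author's actual code; db, the table, and the column names are made up for illustration):

require 'sqlite3'

# One process-wide lock: every write goes through it, so SQLite never sees
# two concurrent writers from this application. Reads can stay unlocked.
DB_WRITE_LOCK = Mutex.new

def with_db_write_lock(&block)
  DB_WRITE_LOCK.synchronize(&block)
end

db = SQLite3::Database.new("app.db")
with_db_write_lock do
  db.execute("UPDATE scores SET value = value + 1 WHERE player_id = ?", 42)
end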
I found a deadlock in the sqlite3-ruby extension and fixed it here; have a go with it and see if it fixes your problem:
https://github.com/dxj19831029/sqlite3-ruby
I opened a pull request, but there has been no response from them.
Anyway, some busy exceptions are expected, as described by SQLite itself.
Be aware of this condition, described in the SQLite busy-handler documentation:
The presence of a busy handler does not guarantee that it will be invoked when there is lock contention. If SQLite determines that invoking the busy handler could result in a deadlock, it will go ahead and return SQLITE_BUSY or SQLITE_IOERR_BLOCKED instead of invoking the busy handler. Consider a scenario where one process is holding a read lock that it is trying to promote to a reserved lock and a second process is holding a reserved lock that it is trying to promote to an exclusive lock. The first process cannot proceed because it is blocked by the second and the second process cannot proceed because it is blocked by the first. If both processes invoke the busy handlers, neither will make any progress. Therefore, SQLite returns SQLITE_BUSY for the first process, hoping that this will induce the first process to release its read lock and allow the second process to proceed.
If you hit this condition, the timeout isn't honored anymore. To avoid it, don't put SELECTs inside BEGIN/COMMIT, or use an exclusive lock for BEGIN/COMMIT.
Hope this helps. :)
This is often a consequence of multiple processes accessing the same database, e.g. if the "allow only one instance" flag was not set in RubyMine.
Try running the following; it may help:
ActiveRecord::Base.connection.execute("BEGIN TRANSACTION; END;")
From: Ruby: SQLite3::BusyException: database is locked:
This may clear up any transaction holding up the system.
I believe this happens when a transaction times out. You really should be using a "real" database, something like Drizzle or MySQL. Any reason why you prefer SQLite over those two options?