Rails - Close current DB connection - ruby-on-rails

I am trying to do some DB experiments in which I reset the DB a few times as part of a Rails runner script. Here is a simple example:
`rake db:reset`
puts User.count
`rake db:reset`
This fails with:
PG::ObjectInUse: ERROR: database "my_db" is being accessed by other users
DETAIL: There is 1 other session using the database.
I understand why this happens: the User.count call opens a connection, so the reset cannot happen. I have tried a few things to close said connection, including calling close and reset_active_connections!, to no avail. Any idea how I can achieve this? Other attempts were fruitless as well, such as trying to close the connection by passing the process ID to psql.

The right call is:
ApplicationRecord.connection_pool.connections.map(&:disconnect!)
I simply couldn't find the right method the first time; I had to read the ActiveRecord code in more detail to find it.
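For reference, the whole runner script then looks roughly like this (a sketch; ApplicationRecord is assumed, so use ActiveRecord::Base on apps without it):
# db_experiment.rb -- run with `rails runner db_experiment.rb`
`rake db:reset`
puts User.count

# Hand back every checked-out connection so Postgres sees no other session.
ApplicationRecord.connection_pool.connections.map(&:disconnect!)

`rake db:reset`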

Related

Model being picked up OK by controller but not by rspec

I am following a simple tutorial and running into the following issue:
Task.create task: 'This is my task'
Returns an error when rspec tries to run it:
ActiveRecord::StatementInvalid:
Could not find table 'tasks'
But when I call the exact same line from the rails console or from a controller the task is created and I am able to see the new row from within the rails console.
Initially I thought it was maybe something going weird with Guard, because I have noticed a few odd things (Ctrl+C doesn't kill it, for one), but I decided to run the test directly using rspec and it returns the same result.
Any help would be greatly appreciated.
You have to set up and prepare your test db first; you can do that by running rake db:test:prepare.

Django Polls tutorial. I'm getting a database error 1146

So I am running through the Django polls tutorial, trying to get the admin site to work for me. I can load it up and it looks great; then I click on Polls and then Add a poll. Then I enter the name and the pub_date and time. Then I hit the save button and I get this big long error:
DatabaseError at /admin/pollsTest/poll/add/
(1146, "Table 'mydb.django_admin_log' doesn't exist")
I have been following the tutorial to a T. I have django.contrib.admin in INSTALLED_APPS. I ran a syncdb. I edited the urls.py file to include all the admin code. And still no luck. Am I missing something obvious?
I have opened up my SQL database and in fact the table does not exist. So I think somehow the syncdb command is not going through. Also, when I run the syncdb command, the output from the command line says:
...
File "C:\Python27\lib\site-packages\MySQLdb\connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
django.db.utils.DatabaseError: (1050, "Table 'polls_poll' already exists")
Which makes sense; syncdb is only supposed to add tables that don't currently exist, right? Am I supposed to get that kind of error (like some Django module is catching it and taking care of it)? I feel like it may be hitting this error and then crashing.
This happens if you run manage.py syncdb before adding django.contrib.admin to INSTALLED_APPS. Just run syncdb again to fix it.
Cheers!

Cucumber step to pause and hand control over to the user

I'm having trouble debugging cucumber steps due to unique conditions of the testing environment. I wish there was a step that could pause a selenium test and let me take over.
E.g.
Scenario: I want to take over here
Given: A bunch of steps have already run
When: I'm stuck on an error
Then: I want to take control of the mouse
At that point I could interact with the application exactly as if I had done all the previous steps myself after running rails server -e test
Does such a step exist, or is there a way to make it happen?
You can integrate ruby-debug into your Cucumber tests. Nathaniel Ritmeyer has directions here and here which worked for me. You essentially require ruby-debug, start the debugger in your environment file, and then put "breakpoint" wherever you want to see what's going on. You can both interact with the browser/application and see the values of your ruby variables in the test. (I'm not sure whether it'll let you see the variables in your rails application itself - I'm not testing against a rails app to check that.)
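A minimal sketch of that setup, assuming the ruby-debug gem is available in your test environment (the step text here is illustrative):
# features/support/env.rb
require 'ruby-debug'
Debugger.start # start the debugger once, as described in the posts

# features/step_definitions/debug_steps.rb
When /^I want to take control$/ do
  debugger # ruby-debug's breakpoint: inspect variables or drive the browser, then type 'continue'
end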
I came up with the idea to dump the database. It doesn't let you continue work from the same page, but if you have the app running during the test, you can immediately act on the current state of things in another browser (not the one controlled by Selenium).
Here is the step:
When /I want to take control/i do
  exec "mysqldump -u root --password=* test > #{Rails.root}/support/snapshot.sql"
end
Because it is called by exec, DatabaseCleaner has no chance to truncate tables, so actually it's irrelevant that the command is a database dump. You don't have to import the sql to use the app in its current state, but it's there if you need it.
My teammate has done this using Selenium, Firebug, and a hook (@selenium_with_firebug).
Everything he learned came from this blogpost:
http://www.allenwei.cn/tips-add-firebug-extension-to-capybara/
Add the step
And show me the page
Where you want to interact with it
Scenario: I want to take over here
Given: A bunch of steps have already run
When: I'm stuck on an error
Then show me the page
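The step itself is a one-liner; a sketch, assuming Capybara (plus the launchy gem so the saved page actually opens in your browser):
# features/step_definitions/debug_steps.rb
Then /^show me the page$/ do
  save_and_open_page # writes the current HTML to a temp file and opens it
end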
use http://www.natontesting.com/2009/11/09/debugging-cucumber-tests-with-ruby-debug/
Big thank you to @Reed G. Law for the idea of dumping the database. Then loading it into development allowed me to determine exactly why my cucumber feature was not impacting database state as I had expected. Here's my minor tweak to his suggestion:
When /Dump the database/i do
  `MYSQL_PWD=password mysqldump -u root my_test > #{Rails.root}/snapshot.sql`
  # To replicate state in development run:
  # `MYSQL_PWD=password mysql -u root my_development < snapshot.sql`
end
You can also use the following in features/support/debugging.rb to let you step through the feature one step at a time:
# `STEP=1 cucumber` to pause after each step
AfterStep do |scenario|
  next unless ENV['STEP']
  unless defined?(@counter)
    puts "Stepping through #{scenario.title}"
    @counter = 0
  end
  @counter += 1
  print "At step ##{@counter} of #{scenario.steps.count}. Press Return to"\
    ' execute...'
  STDIN.getc
end

Rails test db doesn't persist record changes

I've been trying to solve a problem for a few weeks now. I am running rspec tests for my Rails app, and they are working fine except for one error that I can't seem to get my head around.
I am using MySQL with the InnoDB engine.
I have set config.use_transactional_fixtures = true in spec_helper.rb
I load my test fixtures manually with the command rake spec:db:fixtures:load.
The rspec test is being written for a BackgrounDRb worker, and it is testing that a record can have its state updated (through the state_machine gem).
Here is my problem:
I have a model called Listings. The rspec test calls the update_sold_items method within a file called listing_worker.rb.
This method calls listing.sell for a particular record, which sets the listing record's 'state' column to 'sold'.
So far, this is all working fine, but when the update_sold_items method finishes, my rspec test fails here:
listing = Listing.find_by_listing_id(listing_id)
listing.state.should == "sold"
expected: "sold",
got: "current" (using ==)
I've been trying to track down why the state change is not persisting, but am pretty much lost. Here is the result of some debugging code that I placed in the update_sold_items method during the test:
pp listing.state # => "current"
listing.sell!
listing.save!
pp listing.state # => "sold"
listing.reload
pp listing.state # => "current"
I cannot understand why it saves perfectly fine, but then reverts back to the original record whenever I call reload, or Listing.find etc.
Thanks for reading this, and please ask any questions if I haven't given enough information.
Thanks for your help,
Nathan B
P.S. I don't have a problem creating new records for other classes, and testing those records. It only seems to be a problem when I am updating records that already exist in the database.
I suspect, like Nathan, transaction issues. Try putting a Listing.connection.execute("COMMIT") right before your first save call to break the transaction and see what changes. That will break you out of the transaction, so any additional rollback calls will have no effect.
Additionally, by running a "COMMIT" command, you could pause the test with a debugger and inspect the database from another client to see what's going on.
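Concretely, the experiment inside update_sold_items might look like this (a sketch; listing comes from the question's worker code, and debugger assumes ruby-debug is loaded):
Listing.connection.execute("COMMIT") # break out of the wrapping test transaction
listing.sell!
listing.save!
debugger # pause here and inspect the listings table from another MySQL client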
The other hypothesis, if the transaction experimentation doesn't yield any results, is that perhaps your model really isn't saving to the database. Check your query logs. (Specifically, find the UPDATE query.)
These kinds of issues really stink! Good luck!
If you want to investigate what you have in DB while running tests you might find this helpful...
I have an rspec test where I save @user (@user.save) and it works like a charm, but then I wanted to see if it's really saved in the DB.
I opened a rails console for the test environment
rails c test
ran
User.all
and as expected got nothing
I ran my spec that contains:
user_attr_hash = FactoryGirl.attributes_for(:user)
@user = User.new user_attr_hash
@user.save
binding.pry
I thought that stopping the test after save would mean that it's persisted, but that's not the case. It seems that the COMMIT on the connection is fired later (I have no idea when :\).
So, as @Tim Harper suggests, you have to fire that commit yourself in the pry console:
pry(#<RSpec::Core::ExampleGroup::Nested_1>)> User.connection.execute("COMMIT")
Now, if you run User.all in your rails console you should see it ;)

SQLite3::BusyException

Running a rails site right now using SQLite3.
About once every 500 requests or so, I get a
ActiveRecord::StatementInvalid (SQLite3::BusyException: database is locked:...
What's the way to fix this that would be minimally invasive to my code?
I'm using SQLite at the moment because you can store the DB in source control, which makes backing up natural, and you can push changes out very quickly. However, it's obviously not really set up for concurrent access. I'll migrate over to MySQL tomorrow morning.
You mentioned that this is a Rails site. Rails allows you to set the SQLite retry timeout in your database.yml config file:
production:
  adapter: sqlite3
  database: db/mysite_prod.sqlite3
  timeout: 10000
The timeout value is specified in milliseconds. Increasing it to 10 or 15 seconds should decrease the number of BusyExceptions you see in your log.
This is just a temporary solution, though. If your site needs true concurrency then you will have to migrate to another db engine.
By default, SQLite returns immediately with a busy error if the database is locked. You can ask it to wait and keep trying for a while before giving up. This usually fixes the problem, unless you do have thousands of threads accessing your db, in which case I agree SQLite would be inappropriate.
// set SQLite to wait and retry for up to 100ms if database locked
sqlite3_busy_timeout( db, 100 );
All of these things are true, but it doesn't answer the question, which is likely: why does my Rails app occasionally raise a SQLite3::BusyException in production?
@Shalmanese: what is the production hosting environment like? Is it on a shared host? Is the directory that contains the sqlite database on an NFS share? (Likely, on a shared host.)
This problem likely has to do with the phenomenon of file locking with NFS shares and SQLite's lack of concurrency.
If you have this issue but increasing the timeout does not change anything, you might have another concurrency issue with transactions. Here it is in summary:
Begin a transaction (acquires a SHARED lock)
Read some data from the DB (we are still using the SHARED lock)
Meanwhile, another process starts a transaction and writes data (acquiring the RESERVED lock).
Then you try to write; you are now trying to request the RESERVED lock
SQLite raises the SQLITE_BUSY exception immediately (independently of your timeout) because your previous reads may no longer be accurate by the time it can get the RESERVED lock.
One way to fix this is to patch the active_record sqlite adapter to acquire a RESERVED lock directly at the beginning of the transaction, by passing the :immediate option to the driver. This will decrease performance a bit, but at least all your transactions will honor your timeout and occur one after the other. Here is how to do this using prepend (Ruby 2.0+); put this in an initializer:
module SqliteTransactionFix
  def begin_db_transaction
    log('begin immediate transaction', nil) { @connection.transaction(:immediate) }
  end
end

module ActiveRecord
  module ConnectionAdapters
    class SQLiteAdapter < AbstractAdapter
      prepend SqliteTransactionFix
    end
  end
end
Read more here: https://rails.lighthouseapp.com/projects/8994/tickets/5941-sqlite3busyexceptions-are-raised-immediately-in-some-cases-despite-setting-sqlite3_busy_timeout
Just for the record. In one application with Rails 2.3.8 we found out that Rails was ignoring the "timeout" option Rifkin Habsburg suggested.
After some more investigation we found a possibly related bug in Rails dev: http://dev.rubyonrails.org/ticket/8811. And after some more investigation we found the solution (tested with Rails 2.3.8):
Edit this ActiveRecord file: activerecord-2.3.8/lib/active_record/connection_adapters/sqlite_adapter.rb
Replace this:
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction }
end
with
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction(:immediate) }
end
And that's all! We haven't noticed a performance drop and now the app supports many more requests without breaking (it waits for the timeout). Sqlite is nice!
bundle exec rake db:reset
It worked for me; it will reset the database and show any pending migrations.
SQLite can make other processes wait until the current one finishes.
I use this line to connect when I know I may have multiple processes trying to access the SQLite DB:
import sqlite3
conn = sqlite3.connect('filename', isolation_level='exclusive')
According to the Python Sqlite documentation:
You can control which kind of BEGIN statements pysqlite implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections.
I had a similar problem with rake db:migrate. The issue was that the working directory was on an SMB share.
I fixed it by copying the folder over to my local machine.
Most answers are for Rails rather than raw Ruby, and the OP's question IS for Rails, which is fine. :)
So I just want to leave this solution here in case any raw Ruby user has this problem and is not using a YAML configuration.
After instantiating the connection, you can set it like this:
db = SQLite3::Database.new "#{path_to_your_db}/your_file.db"
db.busy_timeout = 15000 # in ms, meaning it will retry for 15 seconds before it raises an exception.
                        # This can be any number you want. The default value is 0.
-- Open the database
db = sqlite3.open("filename")

-- Ten attempts are made to proceed, if the database is locked
function my_busy_handler(attempts_made)
  if attempts_made < 10 then
    return true
  else
    return false
  end
end

-- Set the new busy handler
db:set_busy_handler(my_busy_handler)

-- Use the database
db:exec(...)
What table is being accessed when the lock is encountered?
Do you have long-running transactions?
Can you figure out which requests were still being processed when the lock was encountered?
Argh - the bane of my existence over the last week. SQLite locks the db file when any process writes to the database, i.e. any UPDATE/INSERT type query (also SELECT COUNT(*) for some reason). However, it handles multiple reads just fine.
So, I finally got frustrated enough to write my own thread locking code around the database calls. By ensuring that the application can only have one thread writing to the database at any point, I was able to scale to thousands of threads.
And yeah, it's slow as hell. But it's also fast enough and correct, which is a nice property to have.
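The idea is roughly a process-wide mutex around every write; a minimal sketch (the names are illustrative, not the actual code from this answer):
require 'thread'

DB_WRITE_LOCK = Mutex.new # one lock shared by every thread in the process

def with_db_write_lock(&block)
  DB_WRITE_LOCK.synchronize(&block)
end

# Wrap every write so only one thread touches SQLite at a time;
# reads can still run concurrently:
# with_db_write_lock { record.save! }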
I found a deadlock in the sqlite3 ruby extension and fixed it here; have a go with it and see if this fixes your problem.
https://github.com/dxj19831029/sqlite3-ruby
I opened a pull request, but got no response from them.
Anyway, some busy exceptions are expected, as described by SQLite itself.
Be aware of this condition (from the SQLite documentation on busy handlers):
The presence of a busy handler does not guarantee that it will be invoked when there is lock contention. If SQLite determines that invoking the busy handler could result in a deadlock, it will go ahead and return SQLITE_BUSY or SQLITE_IOERR_BLOCKED instead of invoking the busy handler. Consider a scenario where one process is holding a read lock that it is trying to promote to a reserved lock and a second process is holding a reserved lock that it is trying to promote to an exclusive lock. The first process cannot proceed because it is blocked by the second and the second process cannot proceed because it is blocked by the first. If both processes invoke the busy handlers, neither will make any progress. Therefore, SQLite returns SQLITE_BUSY for the first process, hoping that this will induce the first process to release its read lock and allow the second process to proceed.
If you hit this condition, the timeout isn't honored anymore. To avoid it, don't put a SELECT inside BEGIN/COMMIT, or use an exclusive lock for BEGIN/COMMIT.
Hope this helps. :)
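For the exclusive-lock option mentioned above, if you are talking to SQLite directly through the sqlite3 gem it looks roughly like this (a sketch; the table and column names are made up):
require 'sqlite3'

db = SQLite3::Database.new("my.db")
db.busy_timeout = 5000 # still useful for plain write/write contention

# :exclusive (or :immediate) takes the lock up front, so the reads inside
# the block cannot be invalidated by another writer before we commit.
db.transaction(:exclusive) do
  count = db.get_first_value("SELECT COUNT(*) FROM listings")
  db.execute("INSERT INTO listing_counts (n) VALUES (?)", [count])
end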
This is often a consequence of multiple processes accessing the same database, e.g. if the "allow only one instance" flag was not set in RubyMine.
Try running the following; it may help:
ActiveRecord::Base.connection.execute("BEGIN TRANSACTION; END;")
From: Ruby: SQLite3::BusyException: database is locked
This may clear up any transaction holding up the system.
I believe this happens when a transaction times out. You really should be using a "real" database. Something like Drizzle, or MySQL. Any reason why you prefer SQLite over the two prior options?
