When replicating an app to production, my PostGIS table columns started misbehaving, with Rails informing me there was an "unknown OID 26865" and that the fields would be treated as String.
Instead of current_pos yielding e.g.
#<RGeo::Geographic::SphericalPointImpl:0x22fabdc "POINT (13.39318248760133 52.52908798020595)"> I would get 0101000020E6100000FFDD958664C92A403619DEE6B2434A40. It looked like the activerecord-postgis-adapter was not installed, or installed badly, but I eliminated that possibility by testing for the existence of data type RGeo::Feature::Point and by test-assigning
current_pos = "POINT (13.39318248760133 52.52908798020595)"
to the field, which proceeded without error but then yielded another incomprehensible hex string like the one above.
Also, strangely enough, PostGIS was working correctly within the database, e.g. giving correct results for an ST_Distance query. So it was a very narrow problem: writing, write-side parsing (from Point to hex format), manipulating by SQL, and reading all worked; only the parsing upon read didn't.
When I tried to use migrations to ensure the database column would have the correct type, the migrations failed, giving
undefined method `st_point' for #<ActiveRecord::ConnectionAdapters::PostgreSQL::TableDefinition:0x00000005cb80b8>
I spent several hours trying all kinds of solutions: re-installing the server from scratch, double-checking version numbers of everything, installing a slightly newer version of Ruby and a slightly older version of PostGIS (to match my other environment), exporting the database and starting with a clean one, and so on. After I had run the migrations and arrived at the "undefined method st_point" error, I was finally able to find the solution via Google, way down in a GitHub issue, and it's really simple:
In config/database.yml, swap out postgres:// for postgis:// in the database url. If you're using Heroku, this may require some ugly manipulation:
production:
  url: <%= ENV.fetch('DATABASE_URL', '').sub(/^postgres/, "postgis") %>
So silly...
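With the adapter actually picked up, the st_point migration that failed earlier should also go through. A minimal sketch (table and column names here are placeholders, not the real ones from my app):
class AddCurrentPosToUsers < ActiveRecord::Migration
  def change
    change_table :users do |t|
      t.st_point :current_pos, geographic: true  # geographic: true stores lon/lat on a sphere
    end
  end
end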
Do not forget to add activerecord-postgis-adapter to your Gemfile so @Sprachprofi's solution can run.
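For completeness, that Gemfile entry is just one line (pin a version if you need to), followed by a bundle install:
# Gemfile
gem 'activerecord-postgis-adapter'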
We are experiencing a strange problem with a Rails application on Heroku. Just after migrating from Rails 3.2.17 to Rails 4.0.3, our PostgreSQL server's memory usage increases without bound, and then it returns the following error on every request:
ERROR: out of memory
DETAIL: Failed on request of size xxx
Just after releasing the application with Rails 4, PostgreSQL memory starts to increase.
As shown in the monitoring screenshot, it increased from 500 MB to more than 3.5 GB in 3 hours.
Simultaneously, commits per second more than doubled, going from 120 commits per second to 280.
It is worth noting that when we restart the application, memory goes back down to a normal value of around 600 MB before climbing above 3 GB again a few hours later (at which point every SQL request shows the 'out of memory' error). It is as if killing the ActiveRecord connections released memory on the PostgreSQL server.
We may well have a memory leak somewhere.
However :
It was working very well with Rails 3.2. Maybe this problem is a combination of changes we made to adapt our code to Rails 4 and Rails 4 code itself.
The increase in the number of commits per second just after the Rails 4 upgrade seems very odd.
Our stack is :
Heroku, x2 dynos
Postgresql, Ika plan on heroku
Unicorn, 3 workers per instance
Rails 4.0.3
Redis Cache.
Noteworthy gems: Delayed Job (4.0.0), Active Admin (on master branch), Comfortable Mexican Sofa (1.11.2)
Nothing seems really fancy in our code.
Our PostgreSQL config is:
work_mem: 100MB
shared_buffers: 1464MB
max_connections: 500
maintenance_work_mem: 64MB
Has anyone experienced this kind of behaviour when switching to Rails 4? I am also looking for ideas on how to reproduce it.
All help is very welcome.
Thanks in advance.
I don't know which is better: answering my question or updating it... so I chose to answer. Please let me know if it's better to update.
We finally found the problem. Since version 3.1, Rails has used prepared statements for simple queries like User.find(id). Version 4.0 added prepared statements for queries on associations (has_many, belongs_to, has_one).
For example, the following code:
class User
  has_many :addresses
end

user.addresses
generates the query
SELECT "addresses".* FROM "addresses" WHERE "addresses"."user_id" = $1 [["user_id", 1]]
The problem is that Rails only adds prepared statement variables (binds) for foreign keys (here user_id). If you use a custom SQL condition like
user.addresses.where("moved_at < ?", Time.now - 3.month)
it will not add a bind variable to the prepared statement for moved_at, so it generates a new prepared statement every time the query is called. Rails handles prepared statements with a pool of maximum size 1000.
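To make that concrete (the SQL below is approximate, not copied from our logs), the timestamp is inlined as a literal, so every call produces a statement the cache has never seen before:
SELECT "addresses".* FROM "addresses"
WHERE "addresses"."user_id" = $1
  AND (moved_at < '2014-03-10 14:00:00.000000')  -- this literal changes on every call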
However, PostgreSQL prepared statements are not shared across connections, so within one or two hours each connection accumulates 1000 prepared statements, some of them very large. This leads to very high memory consumption on the PostgreSQL server.
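If you cannot avoid such queries, one mitigation worth verifying against your exact Rails version is to disable prepared statements for the PostgreSQL adapter in config/database.yml (or to shrink the per-connection statement pool):
production:
  adapter: postgresql
  prepared_statements: false  # stop ActiveRecord from preparing statements entirely
  # statement_limit: 200      # alternatively, cap the per-connection pool below the 1000 default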
I would very much like to use the mongo geoNear command as discussed here.
Here is the command I entered in my rails console with the accompanying error message.
MongoMapper.database.command({ 'geoNear' => "trips", 'near' => [45,45]})
Mongo::OperationFailure: Database command 'geoNear' failed:
(errmsg: 'more than 1 geo indexes :('; ok: '0.0').
I cannot make sense of the error message; it is supposedly impossible to have more than one geo index, and I am certain that I have only created one.
Based on this Stack Overflow question I believe I am wording the query correctly. Does anyone understand that error message? How would I go about destroying and recreating my indexes?
I am using Rails 3.1 with MongoDB v2.0 and the mongo Ruby gem v1.5.1.
I really asked this too soon; maybe I should delete it? Somehow there were in fact too many geo indexes, because dropping the indexes and recreating them fixed the problem.
MongoMapper.database.collection('trips').drop_indexes
Trip.ensure_index [[:route, '2d']]
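To see what indexes actually exist before and after (and catch an accidental second geo index), the 1.x Ruby driver can list them; a quick sanity check along these lines:
MongoMapper.database.collection('trips').index_information
# => hash of every index on the collection; there should be only one '2d' entry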
I'm using delta indexing for my Thinking Sphinx indexes in my Rails project. On my machine (Mac OS X) it's working fine: I change a record and the search immediately finds it. On the servers (Debian) it doesn't.
I ran a SQL query for delta = true and I got the expected recently changed records, so that part is working. In log/searchd.query.log I see the proper request:
[Fri Oct 22 10:25:29.193 2010] 0.000 sec [all/3/rel 0 (0,20)] [customer_core,customer_delta] Jonas4
Any ideas what else it could be?
Thanks.
I'll answer here, even though you've posted to the support list as well...
Which user is running the TS rake tasks? And which user owns the Rails site on your server? They should be the same.
Also: are you using Passenger? If so, you'll want to make sure the bin_path setting is set in your config/sphinx.yml file. The documentation runs through both points.
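For reference, bin_path in config/sphinx.yml looks roughly like this (the path is only an example; point it at wherever the Sphinx binaries are installed):
production:
  bin_path: '/usr/local/bin'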
Let me know if neither of these things help matters.
Running a rails site right now using SQLite3.
About once every 500 requests or so, I get a
ActiveRecord::StatementInvalid (SQLite3::BusyException: database is locked:...
What's the way to fix this that would be minimally invasive to my code?
I'm using SQLite at the moment because you can store the DB in source control, which makes backing up natural, and you can push changes out very quickly. However, it's obviously not really set up for concurrent access. I'll migrate over to MySQL tomorrow morning.
You mentioned that this is a Rails site. Rails allows you to set the SQLite retry timeout in your database.yml config file:
production:
  adapter: sqlite3
  database: db/mysite_prod.sqlite3
  timeout: 10000
The timeout value is specified in milliseconds. Increasing it to 10 or 15 seconds should decrease the number of BusyExceptions you see in your log.
This is just a temporary solution, though. If your site needs true concurrency then you will have to migrate to another db engine.
By default, SQLite returns immediately with a busy/locked error if the database is busy and locked. You can ask it to wait and keep trying for a while before giving up. This usually fixes the problem, unless you have thousands of threads accessing your db, in which case I agree SQLite would be inappropriate.
// set SQLite to wait and retry for up to 100ms if database locked
sqlite3_busy_timeout( db, 100 );
All of these things are true, but they don't answer the question, which is likely: why does my Rails app occasionally raise a SQLite3::BusyException in production?
@Shalmanese: what is the production hosting environment like? Is it on a shared host? Is the directory that contains the SQLite database on an NFS share? (Likely, on a shared host.)
This problem likely has to do with the phenomenon of file locking on NFS shares and SQLite's lack of concurrency.
If you have this issue but increasing the timeout does not change anything, you might have another concurrency issue with transactions. Here it is in summary:
Begin a transaction (acquires a SHARED lock)
Read some data from the DB (still using the SHARED lock)
Meanwhile, another process starts a transaction and writes data (acquiring the RESERVED lock).
Then you try to write; you are now trying to request the RESERVED lock.
SQLite raises the SQLITE_BUSY exception immediately (independently of your timeout) because your previous reads may no longer be accurate by the time it can get the RESERVED lock.
One way to fix this is to patch the ActiveRecord SQLite adapter to acquire a RESERVED lock directly at the beginning of the transaction, by passing the :immediate option to the driver. This will decrease performance a bit, but at least all your transactions will honor your timeout and occur one after the other. Here is how to do this using prepend (Ruby 2.0+); put this in an initializer:
module SqliteTransactionFix
  def begin_db_transaction
    log('begin immediate transaction', nil) { @connection.transaction(:immediate) }
  end
end

module ActiveRecord
  module ConnectionAdapters
    class SQLiteAdapter < AbstractAdapter
      prepend SqliteTransactionFix
    end
  end
end
Read more here: https://rails.lighthouseapp.com/projects/8994/tickets/5941-sqlite3busyexceptions-are-raised-immediately-in-some-cases-despite-setting-sqlite3_busy_timeout
Just for the record: in one application with Rails 2.3.8 we found out that Rails was ignoring the "timeout" option Rifkin Habsburg suggested.
After some more investigation we found a possibly related bug in Rails dev: http://dev.rubyonrails.org/ticket/8811. And after some more investigation we found the solution (tested with Rails 2.3.8):
Edit this ActiveRecord file: activerecord-2.3.8/lib/active_record/connection_adapters/sqlite_adapter.rb
Replace this:
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction }
end
with
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction(:immediate) }
end
And that's all! We haven't noticed a performance drop, and now the app handles many more requests without breaking (it waits for the timeout). SQLite is nice!
bundle exec rake db:reset
It worked for me; it will reset the database and show the pending migrations.
SQLite can make other processes wait until the current one has finished.
I use this line to connect when I know multiple processes may be trying to access the SQLite DB:
import sqlite3

conn = sqlite3.connect('filename', isolation_level='exclusive')
According to the Python sqlite3 documentation:

You can control which kind of BEGIN statements pysqlite implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections.
I had a similar problem with rake db:migrate. The issue was that the working directory was on an SMB share.
I fixed it by copying the folder over to my local machine.
Most answers are for Rails rather than raw Ruby, and the OP's question IS for Rails, which is fine. :)
So I just want to leave this solution here, should any raw Ruby user have this problem and not be using a YAML configuration.
After instantiating the connection, you can set it like this:
db = SQLite3::Database.new "#{path_to_your_db}/your_file.db"
db.busy_timeout = 15000 # in ms: it will retry for 15 seconds before raising an exception.
                        # This can be any number you want. The default value is 0.
Source: this link
-- Open the database
db = sqlite3.open("filename")

-- Ten attempts are made to proceed if the database is locked
function my_busy_handler(attempts_made)
  if attempts_made < 10 then
    return true
  else
    return false
  end
end

-- Set the new busy handler
db:set_busy_handler(my_busy_handler)

-- Use the database
db:exec(...)
What table is being accessed when the lock is encountered?
Do you have long-running transactions?
Can you figure out which requests were still being processed when the lock was encountered?
Argh, the bane of my existence over the last week. SQLite3 locks the db file when any process writes to the database, i.e. any UPDATE/INSERT type query (also SELECT COUNT(*) for some reason). However, it handles multiple reads just fine.
So, I finally got frustrated enough to write my own thread-locking code around the database calls. By ensuring that the application can only have one thread writing to the database at any point, I was able to scale to thousands of threads.
And yeah, it's slow as hell. But it's also fast enough and correct, which is a nice property to have.
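Roughly, the idea is a single process-wide lock around every write; here is a sketch of the approach (not the exact code from my app, and the Order model is just a placeholder):
require 'thread'

# One lock shared by every thread in the process, so only one writer runs at a time.
DB_WRITE_LOCK = Mutex.new

def with_db_write_lock
  DB_WRITE_LOCK.synchronize { yield }
end

# Wrap any INSERT/UPDATE in the lock; reads can stay outside it.
with_db_write_lock do
  Order.create!(status: "pending")
end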
I found a deadlock in the sqlite3 Ruby extension and fixed it here; have a go with it and see if this fixes your problem:
https://github.com/dxj19831029/sqlite3-ruby
I opened a pull request, but there has been no response from them since.
Anyway, some busy exceptions are expected, as described by SQLite itself.
Be aware of this condition (from the SQLite documentation on busy handlers):
The presence of a busy handler does not guarantee that it will be invoked when there is lock contention. If SQLite determines that invoking the busy handler could result in a deadlock, it will go ahead and return SQLITE_BUSY or SQLITE_IOERR_BLOCKED instead of invoking the busy handler. Consider a scenario where one process is holding a read lock that it is trying to promote to a reserved lock and a second process is holding a reserved lock that it is trying to promote to an exclusive lock. The first process cannot proceed because it is blocked by the second and the second process cannot proceed because it is blocked by the first. If both processes invoke the busy handlers, neither will make any progress. Therefore, SQLite returns SQLITE_BUSY for the first process, hoping that this will induce the first process to release its read lock and allow the second process to proceed.
If you hit this condition, the timeout is no longer honored. To avoid it, don't put SELECTs inside BEGIN/COMMIT, or use an exclusive lock for BEGIN/COMMIT.
Hope this helps. :)
This is often a consequence of multiple processes accessing the same database, e.g. if the "allow only one instance" flag was not set in RubyMine.
Try running the following, it may help:
ActiveRecord::Base.connection.execute("BEGIN TRANSACTION; END;")
From: Ruby: SQLite3::BusyException: database is locked:
This may clear up any transaction holding up the system.
I believe this happens when a transaction times out. You really should be using a "real" database, something like Drizzle or MySQL. Any reason why you prefer SQLite over those two options?