I have a Rails 5.2 app that's configured to use the new, much-touted feature of recyclable cache keys.
I can confirm the setting is enabled in the console:
Rails.application.config.active_record.cache_versioning
=> true
ActiveRecord::Base.cache_versioning
=> true
BlogPost.cache_versioning
=> true
With this setting, blog_post.cache_key now returns a stable string, because the cache_version is actually stored inside the cache entry (as this article details):
blog_post.cache_key
=> "blog_posts/10317"
blog_post.cache_version
=> "20190417193345000000"
But the problem is, even though everything works as expected in the console, I can't see this working when watching the server logs, because Rails keeps generating cache keys that contain the cache_version:
In my view:
<% cache(['blog_post_list_item_v2', blog_post, I18n.locale, browser.device.mobile?]) do %>
...
<% end %>
In the server logs:
Rendered blog/blog_posts/_blog_post_list_item.html.erb (2.5ms) [cache miss]
Read fragment views/blog/blog_posts/_blog_post_list_item:0bdff42a9193ea497e5ed4a9cc2f51e8/blog_post_list_item_v2/blog_posts/10317-20190417193345000000/pt-br/ (0.5ms)
As you can see, the cache key should be .../blog_posts/10317/, but it actually contains the timestamp.
After debugging through the Rails code, I could confirm that the key actually is stable. What gets printed in the server log includes the version for debugging purposes only; the key being stored in your cache doesn't contain the version.
The version is stored inside the serialized object in the cache instead, which is an instance of ActiveSupport::Cache::Entry and has an attr_reader :version. So, if you're like me, you'd assume that the cached content (for instance, raw HTML) was stored directly in memcached, but it is actually stored in the value attribute of that ActiveSupport::Cache::Entry (which also carries the version attribute when cache_versioning is turned on), and that entire object is serialized and saved into the cache.
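You can see the same mechanism through the low-level cache API: in Rails 5.2, Rails.cache.write and Rails.cache.read accept a version: option, and a read with a mismatched version behaves like a miss (a minimal console sketch; the key and values below are just the ones from my example):
Rails.cache.write("blog_posts/10317", "<li>cached html</li>", version: "20190417193345000000")
Rails.cache.read("blog_posts/10317", version: "20190417193345000000")
=> "<li>cached html</li>"
Rails.cache.read("blog_posts/10317", version: "20190418000000000000") # stale version
=> nil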
If you want to confirm it yourself, you can watch your memcached log in real time. If you're on a Mac (and memcached was installed with Homebrew), first stop it with brew services stop memcached, start it in the foreground in verbose mode with memcached -vv, and take a look at the keys requested by Rails. After you finish your investigation, brew services start memcached will re-enable memcached as a daemon.
Also, if you are migrating from the old way (without recyclable cache keys), you should wipe your cache first with Rails.cache.clear in the console. Remember to do that in production as well.
If you want to understand more about how this works, a good read is https://dzone.com/articles/cache-invalidation-complexity-rails-52-and-dalli-c, but debugging through the Rails code with binding.pry was what made things clear to me.
In a nutshell, it's a brilliant implementation in my opinion, and the cache recycling just makes it so much better (the article quotes DHH saying that 'We went from only being able to keep 18 hours of caching to, I believe, 3 weeks. It was the single biggest performance boost that Basecamp 3 has ever seen.')
I built a pretty small Rails 5.1.4 (Ruby 2.3.1) application. Since deploying it to production, I've been getting this particular error from time to time:
RuntimeError: can't add a new key into hash during iteration
Pointing here:
# rack/request.rb, line 67
def set_header(name, v)
  @env[name] = v
end
I understand this error happens when you're attempting to add a new key to a hash while iterating over that hash. Since @env is a hash, that makes sense. But:
in the stack trace I found nothing related to iteration over @env; it's a dead simple chain of app.call(env) calls
this error doesn't happen all the time, just once every hour or two, which is also super weird to me
I can't reproduce it locally: I've tried hitting the server with many concurrent requests, assuming this might be a thread-safety issue, but locally it works like a charm...
The full stack trace consists only of Rack middlewares and can be found here:
https://gist.github.com/Nattfodd/e513122400b4115a653ea38d69917a9a
Gemfile.lock:
https://gist.github.com/Nattfodd/a9015e9204544302bf3959cec466b715
The server is running with Puma, and the config is very simple: just the number of threads and workers:
threads 0, 5
workers 5
My current ideas are:
one of the monitoring gems has a bug (sentry-raven, new_relic)
concurrent-ruby has a bug (I read about one, but it was fixed in 1.0.2, and the actual version I'm using with Puma is 1.0.5)
something super stupid that I missed, but I have no idea where to look, since the controller's action contains 3 lines of code and the application config is mostly default...
this is something config-related, since the backtrace does not contain the controller at all...
Can you paste the full stack trace?
My assumption is that set_header is getting called from a method that is iterating over env.
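For reference, the error itself is trivial to reproduce in plain Ruby, independent of Rack or Rails (a minimal sketch):
env = { "HTTP_HOST" => "example.com" }
env.each do |key, value|
  env["NEW_KEY"] = "boom" # RuntimeError: can't add a new key into hash during iteration
end
So the question is which middleware (or background thread) is still iterating over the same env hash when set_header writes into it.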
We are running two Rails applications against the same database. When we deploy, we typically deploy to App A, then App B, restarting all Rails processes during the deploy. App A runs on 7 servers with at least 20 processes connected to the database. App B runs on 4 servers with at least 8 connections to the database.
Today, when we deployed App A, we added a column to an existing table:
change_table :organizations do |t|
  t.integer :users_count, default: 0
end
We expected this to be fine: it's a new column on an existing table and it has a default. Shortly after the migration ran, a number of errors showed up, from both App A (before it was restarted) and App B (before it was deployed to).
These errors were:
FATAL ActiveRecord::StatementInvalid error: PG::InFailedSqlTransaction:
ERROR: current transaction is aborted, commands ignored until end of transaction block
In the postgres log, I have 58 errors like this:
postgres[12283]: ERROR: cached plan must not change result type
postgres[12283]: STATEMENT: SELECT "organizations".* FROM "organizations" WHERE "organizations"."id" = $1 LIMIT $2
This repeats a number of times and goes away after all deploys have finished and all processes restarted.
It appeared that Rails bug #12330 and Rails PR 22170 addressed this in Rails 5.0, but I have that commit and am still seeing this error.
Relevant software versions
Rails 5.0.2
PG 0.19.0
Postgres 9.5
One comment on Rails bug #12330 suggests that I have to add the columns with null defaults. Another suggests performing multiple deploys: one to disable prepared statements, then another to perform the migration and re-enable prepared statements.
Is there a way to avoid this? It clears up when we restart the servers, but I feel like I'm missing something; perhaps only using nullable columns would avoid these errors altogether. This doesn't happen on every deploy and I don't know how to reproduce it, but this wasn't the first time it has happened.
You have changed the table structure and thus what your prepared SELECT returns, because you use "organizations".*, which returns all columns. PostgreSQL apparently doesn't support updating a prepared statement when its result type changes, so you need to either create a new session (reconnect) or use DEALLOCATE to remove that prepared statement.
EDIT: You could also stop using SELECT *.
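From a Rails console on an affected process, those two options look roughly like this (a sketch of the recovery step, not of a permanent fix):
# Option 1: throw away every prepared statement held by the current session.
ActiveRecord::Base.connection.execute("DEALLOCATE ALL")
# Option 2: drop the pooled connections entirely, so fresh sessions
# (without the stale plans) are opened on the next checkout.
ActiveRecord::Base.connection_pool.disconnect!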
I'm using Rails 4 with Postgres as my database. Because I'm using quite a lot of Postgres-specific features, my schema is stored in .sql format (I'm not sure if it changes anything, but just in case...)
Currently, when I run tests, my console output is flooded with the following logs:
SET
Time: 2.822 ms
CREATE EXTENSION
Time: 11.300 ms
(...)
COMMENT
CREATE TABLE
Time: 1.532 ms
(...)
INSERT 0 1
Time: 0.326 ms
I traced which task adds them, and they are created in the db:structure:load task.
Is there any way to suppress them?
I added min_messages: warning to my database.yml config, but unfortunately it didn’t change anything.
OK, I found it; the root cause was slightly different.
A couple of days ago I stumbled upon thoughtbot's post about improving the Postgres command-line experience. It links to a nice set of psqlrc settings, which I (after cosmetic modifications) applied on my machine. Apparently the very last line (\unset QUIET) was causing this. After commenting that out, it works like a charm. Thank you all for your help!
Running a rails site right now using SQLite3.
About once every 500 requests or so, I get a
ActiveRecord::StatementInvalid (SQLite3::BusyException: database is locked:...
What's the way to fix this that would be minimally invasive to my code?
I'm using SQLite at the moment because you can store the DB in source control, which makes backing up natural, and you can push changes out very quickly. However, it's obviously not really set up for concurrent access. I'll migrate over to MySQL tomorrow morning.
You mentioned that this is a Rails site. Rails allows you to set the SQLite retry timeout in your database.yml config file:
production:
  adapter: sqlite3
  database: db/mysite_prod.sqlite3
  timeout: 10000
The timeout value is specified in milliseconds. Increasing it to 10 or 15 seconds should decrease the number of BusyExceptions you see in your log.
This is just a temporary solution, though. If your site needs true concurrency then you will have to migrate to another db engine.
By default, SQLite returns immediately with a busy/locked error if the database is busy and locked. You can ask it to wait and keep trying for a while before giving up. This usually fixes the problem, unless you have thousands of threads accessing your db, in which case I agree SQLite would be inappropriate.
// set SQLite to wait and retry for up to 100ms if database locked
sqlite3_busy_timeout( db, 100 );
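If you are configuring the connection yourself with the Ruby sqlite3 gem rather than through database.yml, the equivalent call is (a small sketch; the database path is made up):
require "sqlite3"

db = SQLite3::Database.new("db/mysite_prod.sqlite3")
db.busy_timeout = 100 # wait and retry for up to 100 ms before raising SQLite3::BusyException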
All of these things are true, but they don't answer the question, which is likely: why does my Rails app occasionally raise a SQLite3::BusyException in production?
@Shalmanese: what is the production hosting environment like? Is it on a shared host? Is the directory that contains the sqlite database on an NFS share? (Likely, on a shared host.)
This problem likely has to do with the phenomenon of file locking with NFS shares and SQLite's lack of concurrency.
If you have this issue but increasing the timeout does not change anything, you might have another concurrency issue with transactions. Here it is in summary:
Begin a transaction (acquires a SHARED lock)
Read some data from the DB (we are still using the SHARED lock)
Meanwhile, another process starts a transaction and writes data (acquiring the RESERVED lock)
Then you try to write, so you now request the RESERVED lock
SQLite raises the SQLITE_BUSY exception immediately (independently of your timeout) because your previous reads may no longer be accurate by the time it can get the RESERVED lock.
One way to fix this is to patch the ActiveRecord sqlite adapter to acquire a RESERVED lock directly at the beginning of the transaction, by passing the :immediate option to the driver. This will decrease performance a bit, but at least all your transactions will honor your timeout and occur one after the other. Here is how to do this using prepend (Ruby 2.0+); put this in an initializer:
module SqliteTransactionFix
  def begin_db_transaction
    log('begin immediate transaction', nil) { @connection.transaction(:immediate) }
  end
end

module ActiveRecord
  module ConnectionAdapters
    class SQLiteAdapter < AbstractAdapter
      prepend SqliteTransactionFix
    end
  end
end
Read more here: https://rails.lighthouseapp.com/projects/8994/tickets/5941-sqlite3busyexceptions-are-raised-immediately-in-some-cases-despite-setting-sqlite3_busy_timeout
Just for the record. In one application with Rails 2.3.8 we found out that Rails was ignoring the "timeout" option Rifkin Habsburg suggested.
After some more investigation we found a possibly related bug in Rails dev: http://dev.rubyonrails.org/ticket/8811. And after some more investigation we found the solution (tested with Rails 2.3.8):
Edit this ActiveRecord file: activerecord-2.3.8/lib/active_record/connection_adapters/sqlite_adapter.rb
Replace this:
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction }
end
with
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction(:immediate) }
end
And that's all! We haven't noticed a performance drop, and now the app supports many more requests without breaking (it waits for the timeout). SQLite is nice!
bundle exec rake db:reset
It worked for me; it will reset the database and show the pending migrations.
SQLite can make other processes wait until the current one finishes.
I use this line to connect when I know I may have multiple processes trying to access the SQLite DB:
conn = sqlite3.connect('filename', isolation_level = 'exclusive')
According to the Python Sqlite Documentation:
You can control which kind of BEGIN statements pysqlite implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections.
I had a similar problem with rake db:migrate. The issue was that the working directory was on an SMB share.
I fixed it by copying the folder over to my local machine.
Most answers here are for Rails rather than raw Ruby, and the OP's question is indeed for Rails, which is fine. :)
So I just want to leave this solution here in case any raw Ruby user has this problem and is not using a yml configuration.
After instancing the connection, you can set it like this:
db = SQLite3::Database.new "#{path_to_your_db}/your_file.db"
db.busy_timeout = 15000 # in ms, meaning it will retry for 15 seconds before it raises an exception.
                        # This can be any number you want. Default value is 0.
Source: this link
-- Open the database
db = sqlite3.open("filename")

-- Ten attempts are made to proceed, if the database is locked
function my_busy_handler(attempts_made)
  if attempts_made < 10 then
    return true
  else
    return false
  end
end

-- Set the new busy handler
db:set_busy_handler(my_busy_handler)

-- Use the database
db:exec(...)
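The Ruby sqlite3 gem exposes an equivalent busy_handler if you manage the connection yourself (a sketch under that assumption; the database path is made up):
require "sqlite3"

db = SQLite3::Database.new("db/mysite_prod.sqlite3")

# Retry while the database is locked, giving up after ten attempts.
db.busy_handler do |attempts_made|
  attempts_made < 10
end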
What table is being accessed when the lock is encountered?
Do you have long-running transactions?
Can you figure out which requests were still being processed when the lock was encountered?
Argh, the bane of my existence over the last week. SQLite3 locks the db file when any process writes to the database, i.e. any UPDATE/INSERT type query (and also SELECT COUNT(*) for some reason). However, it handles multiple reads just fine.
So, I finally got frustrated enough to write my own thread-locking code around the database calls. By ensuring that the application can only ever have one thread writing to the database at any point, I was able to scale to 1000s of threads.
And yeah, it's slow as hell. But it's also fast enough and correct, which is a nice property to have.
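A minimal sketch of that idea with the raw sqlite3 gem (the path, table, and helper names are made up; in a Rails app the wrapping would live around your write paths instead):
require "sqlite3"

DB_WRITE_LOCK = Mutex.new
db = SQLite3::Database.new("db/mysite_prod.sqlite3")

# Serialize writes: only one thread may write at a time.
# Reads bypass the lock and stay concurrent.
def locked_write(db, sql, binds = [])
  DB_WRITE_LOCK.synchronize { db.execute(sql, binds) }
end

locked_write(db, "INSERT INTO posts (title) VALUES (?)", ["hello"])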
I found a deadlock in the sqlite3-ruby extension and fixed it here; have a go with it and see if it fixes your problem:
https://github.com/dxj19831029/sqlite3-ruby
I opened a pull request, but there has been no response from them.
Anyway, some busy exceptions are expected, as described by SQLite itself.
Be aware with this condition: sqlite busy
The presence of a busy handler does not guarantee that it will be invoked when there is lock contention. If SQLite determines that invoking the busy handler could result in a deadlock, it will go ahead and return SQLITE_BUSY or SQLITE_IOERR_BLOCKED instead of invoking the busy handler. Consider a scenario where one process is holding a read lock that it is trying to promote to a reserved lock and a second process is holding a reserved lock that it is trying to promote to an exclusive lock. The first process cannot proceed because it is blocked by the second and the second process cannot proceed because it is blocked by the first. If both processes invoke the busy handlers, neither will make any progress. Therefore, SQLite returns SQLITE_BUSY for the first process, hoping that this will induce the first process to release its read lock and allow the second process to proceed.
If you hit this condition, the timeout isn't effective anymore. To avoid it, don't put SELECTs inside BEGIN/COMMIT, or use an exclusive lock for BEGIN/COMMIT.
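With the Ruby sqlite3 gem, that second suggestion translates to opening the transaction in exclusive (or immediate) mode, so the write lock is taken before any reads happen (a sketch; the path and table names are made up):
require "sqlite3"

db = SQLite3::Database.new("db/mysite_prod.sqlite3")
db.busy_timeout = 10_000

# Taking the write lock up front avoids the shared-to-reserved lock
# promotion that makes SQLite return SQLITE_BUSY without waiting.
db.transaction(:exclusive) do
  count = db.get_first_value("SELECT count(*) FROM posts")
  db.execute("INSERT INTO post_counts (n) VALUES (?)", [count])
end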
Hope this helps. :)
This is often a consequence of multiple processes accessing the same database, e.g. when the "allow only one instance" flag was not set in RubyMine.
Try running the following; it may help:
ActiveRecord::Base.connection.execute("BEGIN TRANSACTION; END;")
From: Ruby: SQLite3::BusyException: database is locked:
This may clear up any transaction holding up the system.
I believe this happens when a transaction times out. You really should be using a "real" database, something like Drizzle or MySQL. Any reason why you prefer SQLite over those two options?