I've been digging around Stack Overflow trying to find others who get these "prepared statement already exists" errors.
In most cases, configuring Unicorn properly with the before/after fork hooks resolves the issue.
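For reference, the standard fork hooks look roughly like this (a sketch of the commonly recommended unicorn.rb; the exact adapter checks may differ in your setup):

# config/unicorn.rb
before_fork do |server, worker|
  # close the master process's connection so forked workers don't share it
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # each worker opens its own connection after forking
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end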
However, in my case we are still getting errors like this:
ActiveRecord::StatementInvalid: PG::Error: ERROR: prepared statement "a495" already exists: INSERT INTO "user_logins" ("account_id", "created_at", "ip_address", "user_agent", "user_id") VALUES ($1, $2, $3, $4, $5) RETURNING "id"
This error gets thrown in different areas of our app but always seems to have the same statement number, 'a495'.
We are on Rails 3.2.17, using Postgres, and we are on Heroku.
I really have no idea why this is happening, but it's starting to happen more frequently now.
Any help would be greatly appreciated.
In the Rails stack trace this error is thrown from the .prepare call. I'm confused, because it checks for the sql key in the statements collection; if it doesn't exist, it prepares a new one. However, when trying to prepare it, it throws the error:
def prepare_statement(sql)
  sql_key = sql_key(sql)
  unless @statements.key? sql_key
    nextkey = @statements.next_key
    @connection.prepare nextkey, sql
    @statements[sql_key] = nextkey
  end
  @statements[sql_key]
end
We had the same problem, and did a very thorough investigation. We concluded that in our case this error is caused by Rack::Timeout, which very occasionally interrupts code execution after the new statement has already been created, but before the counter is updated on the Rails side. The next prepared statement then tries to use the same name (e.g. a494), and a collision occurs.
My belief is that Rails has not implemented prepared statements correctly. Instead of using an increasing counter (a001, a002, ...), they should have used GUIDs. That way, the race condition described above wouldn't be an issue.
We didn't find a workaround. Improving the performance of the app, and increasing the window for Rack::Timeout, made this problem nearly extinct, but it still happens from time to time.
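For what it's worth, widening the window is just configuration; the exact API depends on the rack-timeout version (a sketch, assuming an older version of the gem with the class-level setting; newer releases read RACK_TIMEOUT_SERVICE_TIMEOUT or middleware options instead):

# config/initializers/rack_timeout.rb
# older rack-timeout API; newer versions use env vars / middleware options
Rack::Timeout.timeout = 20  # seconds (default is 15)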
This is typically not a Postgres issue, but an issue with sharing database connections in something like Unicorn:
Understanding Heroku Postgres Log Statements and Common Errors
Here's my solution for Heroku, which unfortunately is a little involved. On the plus side, though, you don't need to suffer through hundreds of error notifications when this error starts happening. All that's needed is that the app/dyno is restarted.
The basic outline of the procedure is that when we detect an ActiveRecord::StatementInvalid exception whose message contains the words 'prepared statement', we restart the app using Heroku's platform-api gem.
Put the platform-api gem in your Gemfile, and run bundle install
Set the HEROKU_API_KEY to the correct value. (You can generate a key from your Heroku dashboard). Use heroku config:set HEROKU_API_KEY=whatever-the-value-is.
Set the HEROKU_APP_NAME to the correct value. You can get this information from the heroku CLI, but it's just whatever you called your app.
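For reference, the setup boils down to one Gemfile line plus two config vars (a sketch; the values are examples, and the var names just need to match what the controller code below reads):

# Gemfile
gem 'platform-api'

# then, from your shell:
#   bundle install
#   heroku config:set HEROKU_API_KEY=your-generated-key
#   heroku config:set HEROKU_APP_NAME=your-app-name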
Add the following to your ApplicationController (/app/controllers/application_controller.rb):
...
class ApplicationController < ActionController::Base
  rescue_from ActiveRecord::StatementInvalid do |exception|
    # notify your error handler, or send an email, or whatever
    # ...
    if exception.message =~ /prepared statement/
      restart_dyno
    end
  end

  def restart_dyno
    heroku = PlatformAPI.connect_oauth(ENV["HEROKU_API_KEY"])
    heroku.dyno.restart(ENV["HEROKU_APP_NAME"], "web")
  end
end
That's it. Hope this helps.
I am working on a Rails 5.x application, and I use Postgres as my database.
I often run rake db:migrate on my production servers. Sometimes the migration will add a new column to the database, and this causes some controller actions to crash with the following error:
ActiveRecord::PreparedStatementCacheExpired: ERROR: cached plan must not change result type
This is happening in a critical controller action that needs to have zero downtime, so I need to find a way to prevent this crash from ever happening.
Should I catch the ActiveRecord::PreparedStatementCacheExpired error and retry the save? Or should I add some locking to this particular controller action, so that I don't start serving any new requests while a database migration is running?
What would be the best way to prevent this crash from ever happening again?
I was able to fix this issue in some places by using this retry_on_expired_cache helper:
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  class << self
    # Retry automatically on ActiveRecord::PreparedStatementCacheExpired.
    # (Do not use this for transactions with side-effects unless it is acceptable
    # for these side-effects to occasionally happen twice.)
    def retry_on_expired_cache(*_args)
      retried ||= false
      yield
    rescue ActiveRecord::PreparedStatementCacheExpired
      raise if retried
      retried = true
      retry
    end
  end
end
I would use it like this:
MyModel.retry_on_expired_cache do
  @my_model.save
end
Unfortunately this was like playing "whack-a-mole", because the crash just kept happening all over my application during my rolling deploys (I'm not able to restart all the Rails processes at the same time).
I finally learned that I can turn off prepared_statements to completely avoid this issue. (See this other question and answers on StackOverflow.)
I was worried about the performance penalty, but I found many reports from people who had set prepared_statements: false, and they hadn't noticed any problems. e.g. https://news.ycombinator.com/item?id=7264171
I created a file at config/initializers/disable_prepared_statements.rb:
db_configuration = ActiveRecord::Base.configurations[Rails.env]
db_configuration.merge!('prepared_statements' => false)
ActiveRecord::Base.establish_connection(db_configuration)
This allows me to continue setting the database configuration from the DATABASE_URL env variable, and 'prepared_statements' => false will be injected into the configuration.
This completely solves the ActiveRecord::PreparedStatementCacheExpired errors and makes it much easier to achieve high-availability for my service while still being able to modify the database.
Sporadically we get PG::UndefinedTable errors while using ActiveRecord. The association table name is somehow corrupted, and I quite often see Cancelled appended to the end of the table name. E.g.:
ActiveRecord::StatementInvalid: PG::UndefinedTable: ERROR: relation "fooCancell" does not exist
ActiveRecord::StatementInvalid: PG::UndefinedTable: ERROR: relation "Cancelled" does not exist
ActiveRecord::StatementInvalid: PG::UndefinedTable: ERROR: relation "barC" does not exist
In the examples above, I have obfuscated the table names by using foo and bar.
We see these errors when the Rails project is running inside Puma. Queue workers seem to be doing okay.
The table names in the error messages don't correspond to real tables or models. It looks like a case of memory corruption. Has anyone seen such issues? If so, how did you get around it?
puma.rb
on_worker_boot do
  ActiveRecord::Base.establish_connection
end
database.yml
production:
  url: <%= ENV["DATABASE_URL"] %>
  pool: <%= ENV['DB_CONNECTION_POOL_SIZE'] || 5 %>
  reaping_frequency: <%= ENV['DB_CONNECTION_REAPING_FREQUENCY'] || 10 %>
  prepared_statements: false
I'm hazarding a guess here, based on this possibly related error...
But you might be either:
calling fork within your application; OR
calling ActiveRecord routines (which hit the database) before the server (Puma) forks its worker processes (i.e. during app initialization).
Either of these will break ActiveRecord's synchronization and cause multiple processes to share the database connection pool without synchronizing its use (resulting in interleaved and corrupt database commands).
If you are using fork, make sure to close all the ActiveRecord database connections and reinitialize the connection pool (the call is ActiveRecord::Base.connection_pool.disconnect!, or ActiveRecord::Base.clear_all_connections! to close every pool).
Otherwise, before running Puma (either during the initialization process or using Puma's before_fork / on_worker_boot hooks), close all the ActiveRecord database connections and reinitialize the connection pool.
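A rough sketch of that second option, using Puma's standard hooks (adjust to your app's initialization):

# config/puma.rb
before_fork do
  # drop the master process's connections before workers are forked
  ActiveRecord::Base.connection_pool.disconnect!
end

on_worker_boot do
  # each worker establishes its own pool after the fork
  ActiveRecord::Base.establish_connection
end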
It looks like reaping_frequency may be the issue. I found a couple of claims that the reaper may have a threading bug. I would try removing that option, or setting it to nil, and see if that works. The only other thing I can think of is if you are manually calling Thread.new and using Active Record within it.
Here are a few of the claims against reaping:
http://omegadelta.net/2014/03/15/the-rails-grim-reaper/
https://github.com/mperham/sidekiq/issues/1936
Search for "DO fear the Reaper" here:
https://www.google.com/amp/s/bibwild.wordpress.com/2014/07/17/activerecord-concurrency-in-rails4-avoid-leaked-connections/amp/
We are running two Rails applications against the same database. When we deploy, we typically deploy to App A, then App B, restarting all Rails processes during the deploy. App A runs on 7 servers with at least 20 processes connected to the database. App B runs on 4 servers with at least 8 connections to the database.
Today, when we deployed App A, we added a column to an existing table:
change_table :organizations do |t|
  t.integer :users_count, default: 0
end
We expected this to be fine: it's a new column on an existing table and it has a default. Shortly after the migration ran, a number of errors showed up, from both App A (before it was restarted) and App B (before it was deployed to).
These errors were:
FATAL ActiveRecord::StatementInvalid error: PG::InFailedSqlTransaction:
ERROR: current transaction is aborted, commands ignored until end of
transaction block
In the postgres log, I have 58 errors like this:
postgres[12283]: ERROR: cached plan must not change result type
postgres[12283]: STATEMENT: SELECT "organizations".* FROM
"organizations" WHERE "organizations"."id" = $1 LIMIT $2
This repeats a number of times and goes away after all deploys have finished and all processes restarted.
It appeared that Rails bug #12330 and Rails PR 22170 addressed this in Rails 5.0, but I have that commit and am still seeing this error.
Relevant software versions
Rails 5.0.2
PG 0.19.0
Postgres 9.5
One comment on Rails bug #12330 suggests that I have to add the columns with null defaults. Another suggests performing multiple deploys: one to disable prepared statements, then another to perform the migration and re-enable prepared statements.
Is there a way to avoid this? It clears up when we restart the servers, but I feel like I'm missing something, like only using nullable columns, which would perhaps avoid these errors altogether. This doesn't happen on every deploy, and I don't know how to reproduce it, but this wasn't the first time it has happened.
You have changed the table structure, and thus what your prepared SELECT returns, because you use "organizations".*, which returns all columns. PostgreSQL apparently doesn't support updating a prepared statement in place, so you need to either create a new session (reconnect) or use DEALLOCATE to remove that prepared statement.
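If you want to clear the statements without a full reconnect, something like this from a console works (a sketch: clear_cache! empties ActiveRecord's prepared statement pool for the current connection, and DEALLOCATE ALL drops every prepared statement in that session; each process/connection has its own statements, so restarting is usually the practical fix):

conn = ActiveRecord::Base.connection
conn.clear_cache!                # forget ActiveRecord's cached prepared statements
conn.execute("DEALLOCATE ALL")   # drop all prepared statements in this session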
EDIT: You could also stop using SELECT *.
I wrote a helper method for my controller to iterate through an attribute that is represented as an array in PostgreSQL.
def format_cf(array)
  nums = ""
  array.each { |c| nums += "#{c}, " }
  unless nums.blank?
    nums.chop!.chop!
  end
  nums
end
This way, I don't get the messy {} chars in my view. I'm implementing an empty value for this attribute as the string '{}', meaning that's what I set the default value to in my migration. This hasn't been a problem for my development environment, as it interprets that as an empty array. However, now in production, this helper method is throwing an error saying
ActionView::Template::Error (undefined method `each' for "{}":String)
Is my implementation wrong here, or can anyone think of some obscure setting I may have overlooked when comparing my development.rb and production.rb?
EDIT 2013-04-11 9:00:
I'm currently deploying using Capistrano with Unicorn and nginx.
I'm going to guess you may have run into this bug as well, if you're using Rails 4: https://github.com/rails/rails/issues/10432. Basically, there's a bug in the migrations system that turns :string, array: true into a plain :string directive. The joys of using edge stuff, huh?
I guess you used taps to deploy your database to Heroku with
heroku db:push
The problem is that taps doesn't support Postgres arrays, and it ends up casting the column as a string. There are many workarounds for that.
The one I used was to open a console on heroku
heroku run console
Then get a connection to the database
User.connection # or any of your models
And then execute raw SQL with the connection#execute method in order to create a backup column, delete the current string column, and recreate it as an array.
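I no longer have the exact statements, but it was along these lines (a sketch; the users table, cf column, and integer element type here are made up, so adjust to your schema):

conn = User.connection
# keep the stringified values (e.g. "{1,2,3}") around while the column is rebuilt
conn.execute("ALTER TABLE users RENAME COLUMN cf TO cf_backup")
conn.execute("ALTER TABLE users ADD COLUMN cf integer[] DEFAULT '{}'")
# cast the backed-up array literals into the new array column
conn.execute("UPDATE users SET cf = cf_backup::integer[]")
conn.execute("ALTER TABLE users DROP COLUMN cf_backup")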
You'll probably find it more useful to use the import/export approach advised by Heroku.
And if you're not using Heroku, then I'm completely wrong, and I have no idea what your problem is :)
This page I have been developing for my app has been working fine locally (using SQLite3), but when I push it to Heroku, which uses PostgreSQL, I get this error:
NeighborhoodsController# (ActionView::Template::Error) "PGError: ERROR: column \"isupforconsideration\" does not exist\nLINE 1: ... \"photos\" WHERE (neighborhood = 52 AND isUpForCon...\n
From this line of code:
@photos = Photo.where(["neighborhood = ? AND isUpForConsideration = ?", @neighborhood.id, 1])
isUpForConsideration is definitely one of Photo's columns. All my migrations are up to date, and when I pull the db back locally, isUpForConsideration is still there, and the app still works locally.
I've also tried:
@photos = @neighborhood.photos(:conditions => {:isUpForConsideration => 1})
and
@photos = @neighborhood.photos.where(["isUpForConsideration = 1"])
Which gives me this error:
NeighborhoodsController# (ActionView::Template::Error) "PGError: ERROR: column \"isupforconsideration\" does not exist\nLINE 1: ...tos\" WHERE (\"photos\".neighborhood = 52) AND (isUpForCon...\n
Any idea what I could be doing wrong?
Your problem is that table and column names are case sensitive in PostgreSQL. This is normally hidden by automatically converting these to all-lowercase when making queries (hence why you see the error message reporting "isupforconsideration"), but if you managed to dodge that conversion on creation (by double quoting the name, like Rails does for you when you create a table), you'll see this weirdness. You need to enclose "isUpForConsideration" in double quotes when using it in a WHERE clause to fix this.
e.g.
@photos = @neighborhood.photos.where(["\"isUpForConsideration\" = 1"])
Another way to get this error is by modifying a migration and pushing the changes up to Heroku without rebuilding the table.
Caveat: You'll lose data and perhaps references, so what I'm about to explain is a bad idea unless you're certain that no references will be lost. Rails provides ways to modify tables with migrations -- create new migrations to modify tables, don't modify the migrations themselves after they're created, generally.
Having said that, you can run heroku run rake db:rollback until that table you changed pops off, and then run heroku run rake db:migrate to put it back with your changes.
In addition, you can use the taps gem to back up and restore data. Pull down the database tables, munge them the way you need and then push the tables back up with taps. I do this quite often during the prototyping phase. I'd never do that with a live app though.