I am writing a Rails application that uses Solr for full-text search.
In development I used the Sunspot Solr gem, which is really handy, together with a SQLite3 database, and everything went smoothly.
Now it is time to move to the production server. I installed the solr-tomcat package, switched to my production database, which is MySQL, and moved Solr's conf files from my application folder to /usr/share/solr/conf.
Suddenly I cannot reindex; Solr returned this:
rake RAILS_ENV=production sunspot:solr:reindex
[# ] [ 50/7312] [ 0.68%] [00:00] [00:41] [ 175.82/s]
rake aborted!
Mysql::Error: Unknown column 'barangs.' in 'where clause': SELECT `barangs`.* FROM `barangs` WHERE (`barangs`.`` >= 0) ORDER BY `barangs`.`` ASC LIMIT 50
Intrigued, I tried reindexing against the development database: all was well and it reindexed fine. This behavior left me baffled.
Any help will be appreciated.
I found the problem. It was due to me dropping and restoring tables recklessly, which lost some relationship definitions.
Hence the "missing" column on the 'barangs' table: Solr can't find the column it was told to use.
For others who are facing this problem, I recommend inspecting the relationships between your models first.
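For illustration (this is my reading of the empty column name in the query above, not something confirmed beyond the answer): Sunspot batches the reindex by primary key, so a lost primary-key definition would produce exactly that malformed WHERE (`barangs`.`` >= 0) clause. A quick check in a Rails console, plus a plausible repair:

# Sunspot paginates the reindex by the model's primary key;
# nil here yields the empty column name seen in the error.
Barang.primary_key # => "id" when healthy, nil if the definition was lost

# Hypothetical repair, assuming the id column survived the restore
# but lost its PRIMARY KEY constraint:
ActiveRecord::Base.connection.execute("ALTER TABLE barangs ADD PRIMARY KEY (id)")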
We are running two Rails applications against the same database. When we deploy, we typically deploy to App A, then App B, restarting all Rails processes during the deploy. App A runs on 7 servers with at least 20 processes connected to the database. App B runs on 4 servers with at least 8 connections to the database.
Today, when we deployed App A, we added a column to an existing table:
change_table :organizations do |t|
  t.integer :users_count, default: 0
end
We expected this to be fine: it's a new column on an existing table, and it has a default. Shortly after the migration ran, a number of errors showed up, from both App A (before it was restarted) and App B (before it was deployed to).
These errors were:
FATAL ActiveRecord::StatementInvalid error: PG::InFailedSqlTransaction:
ERROR: current transaction is aborted, commands ignored until end of
transaction block
In the postgres log, I have 58 errors like this:
postgres[12283]: ERROR: cached plan must not change result type
postgres[12283]: STATEMENT: SELECT "organizations".* FROM
"organizations" WHERE "organizations"."id" = $1 LIMIT $2
This repeats a number of times and goes away after all deploys have finished and all processes restarted.
It appeared that Rails bug #12330 and Rails PR 22170 addressed this in Rails 5.0, but I have that commit and am still seeing this error.
Relevant software versions
Rails 5.0.2
PG 0.19.0
Postgres 9.5
One comment on Rails bug #12330 suggests that I have to add the columns with null defaults. Another suggests performing multiple deploys: one to disable prepared statements, then another to perform the migration and re-enable prepared statements.
Is there a way to avoid this? It clears up when we restart the servers, but I feel like I'm missing something. Perhaps only using nullable columns would avoid these errors altogether? This doesn't happen on every deploy and I don't know how to reproduce it, but this wasn't the first time it has happened.
You have changed the table structure, and thus what your prepared SELECT returns, because "organizations".* returns all columns. PostgreSQL apparently doesn't support updating prepared statements, so you need to either create a new session (reconnect) or use DEALLOCATE to remove the prepared statement.
EDIT: You could also stop using SELECT *.
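As a sketch of the DEALLOCATE route (the calls below are standard ActiveRecord/PostgreSQL, but treating them as a deploy-time workaround is my suggestion, not the answerer's):

# Drop the client-side statement cache for this connection, then
# remove any server-side prepared statements left in the session.
ActiveRecord::Base.connection.clear_cache!
ActiveRecord::Base.connection.execute("DEALLOCATE ALL")

Alternatively, the postgresql adapter accepts prepared_statements: false in database.yml, trading the plan cache for immunity to this class of error during rolling deploys.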
I am using Ruby on Rails with a PostgreSQL database. A weird thing happened: some of the tables' id_seq sequences do not start from 1 but from 980190963, while others correctly start from 1.
The problem is that when I try to insert data into the tables whose sequences do not start from 1, unique primary key violations are occasionally raised.
--EDIT--
I found that when I do
rake db:migrate test
The problem happens.
If I do
rake db:migrate RAILS_ENV=test
The problem has gone.
If you don't need to keep the data in those tables, you can run the following, which will reset the table (THIS WILL REMOVE ANY DATA THERE) and restart the table's ids at 1. THIS IS ONLY VIABLE IF YOU DON'T NEED THE DATA IN THAT TABLE!
ActiveRecord::Base.connection.execute("TRUNCATE TABLE table_name_here RESTART IDENTITY")
As for why your tables start at that id value, I would begin by checking your migrations. Maybe somewhere in the code there is an ALTER SEQUENCE SQL statement, but again, I'm not sure.
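If you do need to keep the data, the PostgreSQL adapter can realign a sequence non-destructively. As a side note, ids around 980190962 look like the hashed ids Rails generates for fixtures, which would fit the rake db:migrate test observation above, though I can't confirm that from the post alone.

# Non-destructive alternative: point the table's id sequence at the
# current MAX(id) so new inserts stop colliding with existing rows.
ActiveRecord::Base.connection.reset_pk_sequence!("table_name_here")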
I'm developing an information system in Ruby on Rails.
I want to hand out the following uids to users:
0: root
1-499: system services (server-side)
500: system admin
501-999: external apps (apps that connect through API)
1000+: human users
I have the following migration set up:
class SetUsersAutoincrementValue < ActiveRecord::Migration
  def change
    execute("ALTER SEQUENCE users_id_seq RESTART WITH 1000")
  end
end
The migration works as expected. However, it doesn't take effect if triggered by rake db:reset db:migrate.
What to do?
Thanks
I suppose you want rake db:migrate:reset. rake db:reset loads db/schema.rb instead of re-running the migrations, and schema.rb does not capture the ALTER SEQUENCE statement, so your restart never runs.
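If the restart also needs to survive future resets, one option (a sketch, assuming an otherwise standard setup) is switching to the SQL schema format, since structure.sql records sequence state while schema.rb does not:

# config/application.rb: dump structure.sql instead of schema.rb so
# the sequence restart survives tasks that load the dump rather than
# re-running migrations.
config.active_record.schema_format = :sql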
Try restarting the Ruby server (Puma); it may help.
I ran the SQL command ALTER TABLE directly on the DB to change a table's id column from INT to BIGINT.
After updating, when inserting a big number I got an "out of range for ActiveModel::Type::Integer with limit 4" error.
After restarting the Ruby server, everything worked well.
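For reference, a sketch of refreshing the cached column metadata without a full restart (YourModel is a placeholder for whichever model maps to the altered table):

# Hypothetical model name; clears Rails' cached column info so the
# new BIGINT limit is picked up by ActiveModel::Type::Integer.
YourModel.reset_column_information
ActiveRecord::Base.connection.schema_cache.clear!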
This page I have been developing for my app has been working fine locally (using SQLite3), but when I push it to Heroku, which uses PostgreSQL, I get this error:
NeighborhoodsController# (ActionView::Template::Error) "PGError: ERROR: column \"isupforconsideration\" does not exist\nLINE 1: ... \"photos\" WHERE (neighborhood = 52 AND isUpForCon...\n
From this line of code:
@photos = Photo.where(["neighborhood = ? AND isUpForConsideration = ?", @neighborhood.id, 1])
isUpForConsideration is definitely a column on Photo. All my migrations are up to date, and when I pull the db back locally, isUpForConsideration is still there and the app still works locally.
I've also tried:
@photos = @neighborhood.photos(:conditions => {:isUpForConsideration => 1})
and
@photos = @neighborhood.photos.where(["isUpForConsideration = 1"])
Which gives me this error:
NeighborhoodsController# (ActionView::Template::Error) "PGError: ERROR: column \"isupforconsideration\" does not exist\nLINE 1: ...tos\" WHERE (\"photos\".neighborhood = 52) AND (isUpForCon...\n
Any idea what I could be doing wrong?
Your problem is that table and column names are case-sensitive in PostgreSQL. This is normally hidden by automatically converting them to all-lowercase when making queries (which is why the error message reports "isupforconsideration"), but if you managed to dodge that conversion on creation (by double-quoting the name, as Rails does for you when you create a table), you'll see this weirdness. You need to enclose "isUpForConsideration" in double quotes when using it in a WHERE clause to fix this.
e.g.
#photos = #neighborhood.photos.where(["\"isUpForConsideration\" = 1"])
Another way to get this error is by modifying a migration and pushing the changes up to Heroku without rebuilding the table.
Caveat: you'll lose data and perhaps references, so what I'm about to explain is a bad idea unless you're certain that no references will be lost. Rails provides ways to modify tables with migrations: create new migrations to modify tables and, generally, don't modify the migrations themselves after they're created.
Having said that, you can run heroku run rake db:rollback until that table you changed pops off, and then run heroku run rake db:migrate to put it back with your changes.
In addition, you can use the taps gem to back up and restore data. Pull down the database tables, munge them the way you need and then push the tables back up with taps. I do this quite often during the prototyping phase. I'd never do that with a live app though.
I have a migration that runs an SQL script to create a new Postgres schema. When you create a new database in Postgres, it creates a schema called 'public' by default, which is the main schema we use. The migration that creates the new schema seems to run fine. The problem occurs after it has run: when Rails tries to update the 'schema_info' table that it relies on, it says the table does not exist, as if it were looking for it in the new schema instead of the default 'public' schema where the table actually is.
Does anybody know how I can tell rails to look at the 'public' schema for this table?
Example of the SQL being executed:
CREATE SCHEMA new_schema;
COMMENT ON SCHEMA new_schema IS 'this is the new Postgres database schema to sit along side the "public" schema';
-- various tables, triggers and functions created in new_schema
Error being thrown:
RuntimeError: ERROR 42P01: relation "schema_info" does not exist
RangeVarGetRelid: UPDATE schema_info SET version = ??
Thanks for your help
Chris Knight
Well, that depends on what your migration looks like, what your database.yml looks like, and what exactly you are trying to do. More information is needed: change the names if you have to, but post an example database.yml and the migration. Does the migration change the search_path for the adapter, for example?
But know that, in general, Rails and PostgreSQL schemas don't work well together (yet?).
There are a few places that have problems. Try to build an app that uses a single pg database with two non-default schemas, one for dev and one for test, and tell me about it. (From the following I can already tell you that you will get burned.)
Maybe it has been fixed since the last time I played with it, but see http://rails.lighthouseapp.com/projects/8994/tickets/390-postgres-adapter-quotes-table-name-breaks-when-non-default-schema-is-used, http://rails.lighthouseapp.com/projects/8994/tickets/918-postgresql-tables-not-generating-correct-schema-list, or this in postgresql_adapter.rb:
# Drops a PostgreSQL database
#
# Example:
#   drop_database 'matt_development'
def drop_database(name) #:nodoc:
  execute "DROP DATABASE IF EXISTS #{name}"
end
(Yes, this is wrong if you use the same database with different schemas for both dev and test: it would drop everything each time you run the unit tests!)
I actually started writing patches. The first one was for the indexes method in the adapter, which didn't care about the search_path and ended up creating duplicated indexes under some conditions. Then I started getting hurt by the rest and ended up abandoning the idea of using schemas: I wanted to get my app done, and I didn't have the extra time needed to fix the problems I had when using schemas.
I'm not sure I understand exactly what you're asking, but rake will be expecting to update the Rails schema version in the schema_info table. Check your database.yml config file; this is where rake will look to find the table to update.
Is it possible that you are migrating to a new Postgres schema and rake is still pointing at the old one? I'm not sure then that a standard Rails migration is what you need. It might be best to create your own rake task instead.
Edit: If you're referencing two different databases or Postgres schemas, Rails doesn't support this in standard migrations. Rails assumes one database, so migrating from one database to another is usually not possible. When you run "rake db:migrate" it looks at the RAILS_ENV environment variable to find the correct entry in database.yml. If rake starts the migration using the "development" environment and its database config, it will expect to update that same environment at the end of the migration.
So, you'll probably need to do this from outside the Rails stack as you can't reference two databases at the same time within Rails. There are attempts at plugins to allow this, but they're majorly hacky and don't work properly.
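For the narrower problem of Rails looking up schema_info in the wrong schema, the PostgreSQL adapter does honor a schema search path, settable per connection or via a schema_search_path entry in database.yml (the schema name below is from the question; treat the exact setup as illustrative):

# Schemas are searched left to right, so schema_info resolves in
# public while tables in new_schema remain visible.
ActiveRecord::Base.connection.schema_search_path = "public,new_schema"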
You can use pg_power. It provides an additional DSL for migrations to create PostgreSQL schemas, and more.
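A sketch based on pg_power's migration DSL (assuming its create_schema helper; "demography" is an illustrative schema name):

class CreateDemographySchema < ActiveRecord::Migration
  def change
    # pg_power adds schema-aware helpers such as create_schema,
    # which plain Rails migrations lack.
    create_schema "demography"
  end
end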