I have a column in my Postgres DB's table that's a boolean type. I want to change it to an integer because I need more than just true/false in that column.
I also want to change all the true values to 1 and the false values to 2.
Is this easily done in Rails? I was trying to do this via a migration file and migrating my DB.
Yes, you can do this change with a single migration. The only tricky part is that you need to tell the database how to convert the boolean values to your integers.
The way to do this is to use a USING clause in the ALTER TABLE to provide the mapping. The raw SQL version would look like this:
alter table models
  alter column c type integer
  using case when c then 1 else 2 end
and that translates to a migration thusly:
def change
  change_column :models, :c, :integer, :using => 'case when c then 1 else 2 end'
end
A boolean column can only contain TRUE or FALSE so that simple CASE should be sufficient. If you're allowing NULLs and want to preserve them then:
:using => 'case when c is null then null when c then 1 else 2 end'
should do the trick.
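If you'd rather make the migration explicitly reversible, a sketch with separate up and down methods (reusing the table and column names from above; the class name is just a placeholder) could look like this:

class ChangeCFromBooleanToInteger < ActiveRecord::Migration
  def up
    change_column :models, :c, :integer, :using => 'case when c then 1 else 2 end'
  end

  def down
    # Map the integers back to booleans when rolling back
    change_column :models, :c, :boolean, :using => 'case when c = 1 then true else false end'
  end
end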
Of course, you'll have to update your application code by hand so it works properly with the new integer values.
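For example (purely illustrative; the model and scope names are assumptions), code that used to branch on the boolean might become something like:

class Model < ActiveRecord::Base
  # 1 now stands for the old true, 2 for the old false
  scope :enabled,  -> { where(:c => 1) }
  scope :disabled, -> { where(:c => 2) }
end

# Note that the attribute predicate model.c? is no longer a reliable truth
# test: for integer attributes it returns true for any non-zero value, so
# both 1 and 2 would look "true". Compare explicitly instead:
model.c == 1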
Do you really need to modify the database for this? One possible solution is to create wrapper methods that handle this for you. Let's say you have a boolean column named mycol; you could write accessors that transparently handle this logic without modifying the underlying database.
Within your ActiveRecord model:
def mycol
  read_attribute(:mycol) ? 1 : 2
end

def mycol=(value)
  write_attribute(:mycol, value == 1)
end
So, for instance, running u.mycol = 1 followed by u.save would write true to the database, and u.reload.mycol would return 1.
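As a quick illustration (u here is an instance of whatever model defines the wrappers above):

u.mycol = 2         # writes false to the underlying boolean column
u.save
u.reload.mycol      # => 2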
However, if it's really necessary to do a migration, then create a new integer column to supersede the original boolean column. Don't remove or modify the existing column, because you want to make sure you're not corrupting or destroying data.
After creating the new column, create a rake task to iterate through all your existing records (use the find_each method for the iteration) and set the integer value for the new column based on the value of the original boolean column. Once you've verified the integrity of the data you can drop the original boolean column and replace it with the newly created column.
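A minimal sketch of that rake task (the model, column, and task names here are placeholders; adjust them to your schema):

namespace :data do
  desc "Backfill the new integer column from the old boolean column"
  task :backfill_flags => :environment do
    # find_each loads records in batches instead of all at once
    MyModel.find_each do |record|
      record.update_column(:new_int_column, record.old_bool_column ? 1 : 2)
    end
  end
end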
Here's one example where such a requirement comes up:
Suppose we have a table 'sample' in our database which has a column 'col1' and an auto-incrementing 'id' column.
We want the default value of 'col1' to be taken as the value of the 'id' column. How do I do that?
I am using Rails to write the migration script which will create the table in my Postgres database.
def self.up
  Yourmodel.update_all("updated_at = created_at")
end
Something like this in your migration script can help you.
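Applied to the question above (a sketch, assuming the model backing the sample table is called Sample), backfilling col1 from id for existing rows would look like:

def self.up
  # Copy each existing row's id into col1 with a single SQL UPDATE
  Sample.update_all("col1 = id")
end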
I have a Rails app with a custom algorithm for id generation for one table. Now I've decided to use the default incremental ids generated by Postgres as primary keys and move my custom values from id to another column like uid. And I want to regenerate the values in the id column for all records - as normal, from 1 to n.
What is the best way to do this? I have about 1000 records. Records from other tables are associated with these records.
You can keep whatever value is in the id column, but add a new auto-incrementing column named uid (in Postgres, a serial column) and use it as the primary key:
def self.up
  execute "ALTER TABLE mytable ADD COLUMN uid serial"
end
You can then tell your model to use uid as its primary key:
self.primary_key = 'uid'
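In context, that is just (assuming the model is called Mytable):

class Mytable < ActiveRecord::Base
  self.primary_key = 'uid'
end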
You can simply do it by iterating over your records and updating them (and their associated objects). 1000 records is not that much to process.
Let's say that you have a table named "my_objects" with its model named "MyObject". Let's also say that you have another table named "related_objects" with its model "RelatedObject".
You can iterate over all your MyObject records and update their related objects and the record itself at the same time.
records = MyObject.all # Whatever "MyObject" you have.
i = 1
records.each do |record|
  # Update whatever associated objects the record has
  record.related_objects.each do |related_object|
    related_object.update_column("my_object_id", i)
  end
  # Update the record itself
  record.update_column("id", i)
  i += 1
end
I want to generate a value in a column based on the database ID ActiveRecord assigns to my record. Normally, I would just add an after_save callback where this value is generated and then saved to the database.
Unfortunately, I have to deal with a not-null constraint on that column, so it needs to be assigned at the same time as I get an ID. Is there a thread-safe way to combine both?
You could wrap your code in a transaction block, where you first find the maximum value of the id column in the given table and then use that value to generate the value of the other column. Will that work for you?
ActiveRecord::Base.transaction do
  # Assumes the new record will get max(id) + 1 and that no other process
  # inserts a row in between (run this as a class method on your model)
  id = (maximum(:id) || 0) + 1
  other_column = "#{id} some string"
  create(other_column: other_column)
end
I'm working on a project where localisation was done by creating associated *_locales tables with a locale and name field. I am migrating this over to use an hstore column (using Postgres) but I don't quite have the syntax right.
The structure is now like this:
table: thingy_locales
  locale :string
  name :string
  thingy_id :integer

table: thingies
  name_translations :hstore
In a migration I wish to move all the data from the thingy_locales table into the name_translations field with a key of 'en' (as currently there are only 'en' locales in the thingy_locales table).
So I've tried this:
execute "UPDATE thingies t SET name_translations=(select (\"'en'\" => \"'|name|'\")::hstore from thingy_locales where thingy_id = t.id);"
but that gives me
PG::UndefinedColumn: ERROR: column "'en'" does not exist
LINE 1: ...ort_categories loc SET name_translations=(select ("'en'" => ...
^
What have I done wrong?
I don't know of an automatic SQL command to do it, but you probably want to do multiple migrations:
1. Add the hstore column to thingies (sketched below)
2. Iterate over all of the thingy_locales, reading them into a hash/hashes, then do Thingy.create!(name_translations: {'en' => hash})
3. Drop thingy_locales
Migrations in Rails can have any arbitrary code in them. Since this is an instance where you will be transforming the data, doing it in Ruby is probably your safest bet.
If speed is a concern, then you may need to go for an optimized SQL query, but frankly if you aren't worried about speed, don't trouble yourself.
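A rough sketch of step 1 from that list (assuming Rails 4+, where enable_extension is available; skip that line if the hstore extension is already enabled):

class AddNameTranslationsToThingies < ActiveRecord::Migration
  def change
    enable_extension 'hstore'
    add_column :thingies, :name_translations, :hstore
  end
end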
Actually, just looking at your code again, assuming that you have this association:
class Thingy
  has_many :thingy_locales
end

class ThingyLocale
  belongs_to :thingy
end
it seems like what you want to do is something like this:
Thingy.all.each do |thingy|
  name = thingy.thingy_locales.where(locale: 'en').first.name
  thingy.name_translations = {'en' => name}
  thingy.save!
end
Okay I got it to work.
execute "UPDATE thingies t SET name_translations=hstore('en', (select name from thingy_locales where thingy_id = t.id));"
does the job perfectly
Don't try to grind a ton of data through Ruby; do it inside the database. Two approaches immediately come to mind:
Use a correlated subquery in an UPDATE:
connection.execute(%q{
  update thingies
  set name_translations = (
    select hstore(array_agg(locale), array_agg(name))
    from thingy_locales
    where thingy_id = thingies.id
  )
})
JOIN to a derived table in an UPDATE:
connection.execute(%q{
  update thingies
  set name_translations = dt.h
  from (
    select hstore(array_agg(locale), array_agg(name)) as h, thingy_id
    from thingy_locales
    group by thingy_id
  ) as dt
  where id = dt.thingy_id
})
The core of both is to use the hstore(text[], text[]) function to build the hstore and the array_agg function to build the key and value arrays that hstore(text[], text[]) wants.
Don't be afraid to throw SQL into connection.execute calls in your migrations. Pretending that your database is too dumb to do anything interesting may be ideologically pure in a Rails application but it is a non-productive and unprofessional attitude. You'll be better served in the long run by learning SQL and how your databases work.
In a Rails app, I am tinkering with adding full text search (FTS) in Postgres for existing data.
Here is what I have done:
class AddNameFtsIndexToCompanies < ActiveRecord::Migration
  def up
    execute(<<-'eosql'.strip)
      DROP INDEX IF EXISTS index_companies_name;
      CREATE INDEX index_companies_name
        ON companies
        USING gin( (to_tsvector('english', "companies"."name")) );
    eosql

    execute(<<-'eosql'.strip)
      ALTER TABLE companies ADD COLUMN name_tsv tsvector;

      CREATE TRIGGER tsv_name_update
        BEFORE INSERT OR UPDATE ON companies FOR EACH ROW
        EXECUTE PROCEDURE tsvector_update_trigger(name_tsv, 'pg_catalog.english', name);

      CREATE INDEX index_companies_fts_name ON companies USING GIN (name_tsv);
    eosql
  end

  def down
    execute(<<-'eosql'.strip)
      DROP INDEX IF EXISTS index_companies_name
    eosql

    execute(<<-'eosql'.strip)
      DROP INDEX IF EXISTS index_companies_fts_name;
      DROP TRIGGER IF EXISTS tsv_name_update ON companies;
      ALTER TABLE companies DROP COLUMN name_tsv
    eosql
  end
end
The value for name_tsv column is still empty.
But just for a quick test, I tried this:
input_data = "foo"
Company.where(["to_tsvector(companies.name) @@ plainto_tsquery(?)", input_data])
and compared it with this:
input_data = "foo"
Company.where(["companies.name ilike ? ", "%#{input_data}%"])
And the former is slower.
Questions:
1. Why is it slower?
2. What is the best practice to populate the tsvector column for existing data?
Although my question relates to a Rails app, it's really more about PostgreSQL FTS, so any Postgres-specific solution is still welcome.
Why is it slower?
I am willing to bet it is doing a sequential scan in both cases and the tsvector conversion is slower than the pattern matching.
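You can confirm what the planner is actually doing with ActiveRecord::Relation#explain (available since Rails 3.2), for example:

input_data = "foo"
# Prints the query plan for each query; look for "Seq Scan" in the output
puts Company.where(["to_tsvector(companies.name) @@ plainto_tsquery(?)", input_data]).explain
puts Company.where(["companies.name ilike ?", "%#{input_data}%"]).explain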
What is the best practice to populate the tsvector column for existing data?
You need to create indexes that PostgreSQL can actually use for full text search operations. Btree indexes (the default) don't give you that; you need a GIN or GiST index (the big difference in this case is that there is a read/write performance tradeoff in that choice). Also, PostgreSQL won't know that it can use an index in your case because you aren't querying on the indexed column. What you need instead is a functional (expression) index. So you need to do something like:
CREATE INDEX company_name_idx_fts ON companies USING GIN (to_tsvector('english', name));
Then the planner can use that index whenever your query filters on that same expression.
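The query has to use the same to_tsvector('english', name) expression the index was built on for it to be picked up; with the example index above, that would be something like:

input_data = "foo"
Company.where(["to_tsvector('english', companies.name) @@ plainto_tsquery('english', ?)", input_data])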