I have a table already created. I am looking for a Rails migration where I can modify the starting point of the auto_increment number for the id column of my table. Let's say I want it to start from 1000.
I googled a bit and came across this, which says:
:options "string": pass raw options to your underlying database, e.g. auto_increment = 10000. Note that passing options will cause you to lose the default ENGINE=InnoDB statement.
Can this be used for what I want? And how will the migration look, since I am changing an existing column and not creating a new one?
You can use the raw execute method:
execute("ALTER TABLE your_table_name AUTO_INCREMENT = 10000")
I am running a script to update a table structure. My problem is with the primary key column (id): the query creates a new id column each time I run my script.
This is the first time that I am trying to write a script to update a database structure.
The database was created by an old version of the application, and now we want to release a new version. To achieve this I wrote a script which, in summary, creates tables if they don't exist, adds columns to tables if they don't exist, deletes old indexes and creates new ones, etc.
The problem happens when the script adds the primary key to the tables. In the database, the tables already have a primary key column of type integer, but my query does not detect it and creates a new column with the same name and data type; at least that is what I see in pgAdmin 4.8.
Originally the primary key columns were created using the type serial, so PostgreSQL automatically created a sequence for each and uses it as the default for the primary key column.
How can I avoid the duplicated primary key columns?
Originally the column was created as follows:
create table mytable(
    id serial,
    ...
);
And if I look at the table, the column looks like this, which means that PostgreSQL created a sequence mytable_id_seq and used it for the auto-increment value of the primary key column:
id integer NOT NULL DEFAULT nextval('mytable_id_seq'::regclass)
But after I execute the following query and look at the table again, it has a new column with the same name and data type as the one shown above.
ALTER TABLE public.mytable ADD COLUMN IF NOT EXISTS id serial;
I am expecting to see only one column no matter how many times I execute the query.
I have understood the solution for changing the column type from string to text while using PostgreSQL and Rails 3.2 provided here, and I have implemented it. But when I roll back this migration, it fails with a "PG::StringDataRightTruncation: ERROR: value too long" error. How should we tackle this problem?
You have new values that are too long for the old type. PostgreSQL would have to throw away data to change back to varchar(255) if any values are longer than 255 characters. It refuses, because it won't cause data loss unless told very specifically to do so.
If you don't mind truncating these long values, permanently and unrecoverably discarding data, you can use the USING clause of ALTER COLUMN ... TYPE. This is the same approach used when converting string columns to integer.
ALTER TABLE mytable
ALTER COLUMN mycolumn
TYPE varchar(255)
USING (substring(mycolumn from 1 for 255));
I don't think there is any way to express this symmetrically in a Rails migration; you will need separate up and down steps (a sketch of the full migration follows the SQL below), with the up migration being simply:
ALTER TABLE mytable
ALTER COLUMN mycolumn
TYPE text;
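To put that into a migration, here's a minimal sketch assuming the table and column names from the SQL above, with raw SQL for both directions:

class ChangeMycolumnToText < ActiveRecord::Migration
  def up
    # Widening to text is always safe; no data can be lost.
    execute "ALTER TABLE mytable ALTER COLUMN mycolumn TYPE text"
  end

  def down
    # Narrowing back truncates anything beyond 255 characters,
    # permanently discarding that data.
    execute <<-SQL
      ALTER TABLE mytable
        ALTER COLUMN mycolumn
        TYPE varchar(255)
        USING (substring(mycolumn from 1 for 255))
    SQL
  end
end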
Frankly, though, I think it's a terrible idea to do this in a migration. The migration should fail. This situation should require administrator intervention: UPDATE the rows whose values are too long, then run the migration.
Using RoR 2.3.8.
I have two models. It's strange that when I type text and save it in Model A, it shows the exact text, but when I do this in Model B, it shows ???. Most likely one supports UTF-8 but the other doesn't. The thing is, I don't remember setting anything on either. What can I do to fix this?
Using Mac OS 10.6.7, Chrome
MySQL and other database engines set the encoding used for text on several levels: server, database, table and column. Generally the defaults are applied from the top down, from server to database, database to table and so forth, but they can be customized at any given point as required. Sometimes this happens inadvertently and can cause issues.
One way to know what encoding is currently active is to use a client like Sequel Pro, which will expose this information to you, or to investigate using the mysql command-line tool:
SHOW CREATE DATABASE example;
You get a response that contains something like this:
CREATE DATABASE `example` /*!40100 DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci */
In this case it's a UTF-8 database with UTF-8 Unicode collation. The encoding defines how the data is stored and the collation defines how to handle sorting and case conversion.
Investigating further you can examine the table and columns:
SHOW CREATE TABLE example;
This gives a lot more detail, something like this:
CREATE TABLE `example` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`email` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
UNIQUE KEY `index_example_on_email` (`email`)
) ENGINE=InnoDB AUTO_INCREMENT=29 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
In this case the table itself is defaulting to UTF-8 and the email column is likewise flagged. You may have a column that's different.
If you need to alter the encoding or collation you can use the ALTER TABLE statement to effect this.
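For example, if one of your tables turned out to still be latin1 and you wanted it in UTF-8 to match the other, a sketch of such a migration (the example table name is carried over from above; back up first, since this rewrites the stored data) might be:

class ConvertExampleToUtf8 < ActiveRecord::Migration
  def self.up
    # CONVERT TO changes the table's default charset and also
    # re-encodes the contents of every string column.
    execute "ALTER TABLE example CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci"
  end

  def self.down
    # Only meaningful if the original encoding really was latin1.
    execute "ALTER TABLE example CONVERT TO CHARACTER SET latin1 COLLATE latin1_swedish_ci"
  end
end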
I have two tables in MySQL: users and management. The users table has a numeric id, and the management table has a varchar foreign key that references the primary key of the users table. The types are not the same, and this seems to be the main problem when I build an index from the User model and try to include one column from the management table. The join that Thinking Sphinx generates takes too much damn time to execute, and thus the index never gets built.
I know the best solution is to change the management table to use a numeric id, but right now that seems too expensive. Is there a way to just tell Thinking Sphinx that the varchar field is in fact a numeric id, so the index can be generated without altering the tables?
If this is not clear, please ask me to clarify whatever seems too obscure.
Thanks!
I'd make sure you have a database index on your foreign key.
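For instance, a migration along these lines would add it (user_ref is a placeholder; substitute the management table's actual foreign key column):

class AddIndexToManagementForeignKey < ActiveRecord::Migration
  def self.up
    # Index the varchar foreign key so the join Thinking Sphinx
    # generates can seek on the index rather than scan the table.
    add_index :management, :user_ref
  end

  def self.down
    remove_index :management, :user_ref
  end
end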
Also, if you want to edit the generated configuration, you can do so and then process the index using one of the following two options, neither of which automatically regenerates the file:
rake ts:index INDEX_ONLY=true
rake ts:reindex # this was only added the other day
I am trying to run a migration on an existing database to change the column name on a table. When I run the migration, I get an error stating that BLOB/TEXT fields cannot have a default value. The column in question is a text column with a non-null attribute but no default value.
The migration that Rails attempts is:
ALTER TABLE xxxxx CHANGE abcd ABCD text DEFAULT '' NOT NULL
Now, I haven't asked the migration to change the column type; I have only asked it to rename the column. So why is the migration trying to do anything to the column type?
I have Googled the issue, and haven't come up with an explanation or workaround.
Any help appreciated.
Vikram
There does seem to be a longstanding unresolved ticket on this issue, as described here:
rails bug report
Rails' default behavior is to make columns nullable, since this prevents false positives on presence checks, etc., when translating blank strings back into Ruby. Any chance you can work around this by redefining your text column to allow NULL values in the MySQL console?
EDIT
You can do this in your migration file; it's not the Rails way, but it's a lot nicer than sending an email to everyone telling them to change their local copies:
MyModel.connection.execute "ALTER TABLE xxxxx CHANGE abcd ABCD text DEFAULT NULL"
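Wrapped in a proper migration (using the table and column names from your error), a sketch might look like this; note that, per the suggestion above, it also relaxes the column to allow NULLs:

class RenameAbcdColumn < ActiveRecord::Migration
  def self.up
    # Rename via raw SQL so Rails doesn't tack on DEFAULT '',
    # which MySQL forbids on text/blob columns.
    execute "ALTER TABLE xxxxx CHANGE abcd ABCD text DEFAULT NULL"
  end

  def self.down
    execute "ALTER TABLE xxxxx CHANGE ABCD abcd text DEFAULT NULL"
  end
end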