I'm trying to create a model with something like:
jobs title:string companyid:uuid
However, when I run the migration (rake db:migrate), it creates the table with the "title" column but just ignores the "companyid" column. The app of course crashes because ActiveRecord can't find the companyid column. If I add the DB column manually, the app works (so I know RoR knows how to handle this data type).
I'd like my DB provisioning and migration scripts to run correctly. I'm using PostgreSQL 9.0 and the postgres-pg adapter.
Anything special I need to do? Thanks!
This should help:
Ruby on Rails: UUID as your ActiveRecord primary key
Your migration isn't working because UUID isn't a supported migration type. If you look at the link, you'll see that they used "uuid" as a column name with string as its type, disabled the default id column, and made that string column the primary key.
There is also the simplified_type mapping in the Postgres adapter, which may offer some insight into why things work once you create the column manually: it just maps UUID columns to strings.
Fixing migrations so they can handle Postgres UUIDs would be nice low-hanging fruit for a contribution, if someone hitting this issue wanted to contribute to Rails. But given that Postgres UUIDs don't appear to do much here, and your app has to generate the IDs anyway, they might as well just be strings.
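To make that concrete, here is a minimal sketch of the approach from the link, adapted to the jobs example (class and column names are my own; the UUID is stored as a plain 36-character string and declared as the primary key, and the app generates the IDs itself):

class CreateJobs < ActiveRecord::Migration
  def self.up
    # :id => false suppresses the auto-increment integer id column
    create_table :jobs, :id => false do |t|
      t.string :uuid, :limit => 36  # canonical UUIDs are 36 characters
      t.string :title
      t.timestamps
    end
    add_index :jobs, :uuid, :unique => true
  end

  def self.down
    drop_table :jobs
  end
end

class Job < ActiveRecord::Base
  set_primary_key :uuid  # self.primary_key = 'uuid' on newer Rails
  # The app must supply the UUID itself, e.g. via SecureRandom on Ruby 1.9+:
  before_create { |job| job.uuid ||= SecureRandom.uuid }
end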
Related
I'm working on porting a very large Rails project from DataMapper to ActiveRecord. Among the models that have to be ported is a set of User models that used Single Table Inheritance (STI) to distinguish one type from another. Here is a simplified version of what it looks like:
class User < ActiveRecord::Base
...
end
class AdminUser < User
...
end
Generally, the 'type' field is used to tell the difference between Users and AdminUsers by storing the class name of the object being saved (e.g. 'AdminUser'). It works fine in development, but when I try User.create in the test environment, I get:
ActiveRecord::StatementInvalid: Mysql::Error: Column 'type' cannot be null
ActiveRecord tries to insert a new row with the type column set to NULL... what could cause this to happen in the test environment, but not in development?
Turns out it was a slight difference in the database table itself that was causing a change in behavior for ActiveRecord. The test database had no default value for the type column, whereas in development, the default value was 'User'. Apparently ActiveRecord uses the default value when inserting data for an object of the primary class type (the class that inherits from ActiveRecord::Base - in this case, User). Why it doesn't just use the class name is beyond my understanding!
My real confusion came when I updated my dev database to have a default for the type column. I knew it needed one, because the production database somehow already had it, so clearly my dev database was just out of sync. So I did this:
mysql> ALTER TABLE users MODIFY COLUMN type varchar(50) NOT NULL DEFAULT 'User';
...[ok]
mysql> exit
Bye
$> bundle exec rake db:test:prepare # <-- My Mistake
...[ok]
I thought this was all I had to do, but it turns out running db:test:prepare just matches your test database to your schema.rb file, and my schema.rb file hadn't been updated, so User.create now worked in development but broke in testing :D
Eventually, I came to understand all of the above, plus the fact that I needed to run db:migrate (which regenerates schema.rb from the database) BEFORE running db:test:prepare. Once I did that: voila! User.create actually used the default value of the type column when inserting new User rows.
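In hindsight, expressing the change as a migration would have sidestepped the whole mess, because running migrations regenerates schema.rb automatically. A minimal sketch of what that could look like (the class name is my own, and I'm assuming the varchar(50) column from the ALTER above):

class AddDefaultToUsersType < ActiveRecord::Migration
  def self.up
    # Same effect as the manual ALTER TABLE, but schema.rb stays in sync
    change_column :users, :type, :string, :limit => 50,
                  :null => false, :default => 'User'
  end

  def self.down
    change_column :users, :type, :string, :limit => 50
  end
end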
Moral of the story:
NEVER let your development database get out of sync with production. If it is: blow it away with a db:schema:load and start over with new dev data! (Or get a production dump or something)
Choose your ORM wisely. R.I.P. DataMapper - I'll miss your elegant abstractions... but not your bugs.
This has been asked several times before (here and here, and more).
Every time I push my rails app to Heroku (for at least the last few months, I'd say), I have to reset my keys using the familiar
ActiveRecord::Base.connection.tables.each { |t| ActiveRecord::Base.connection.reset_pk_sequence!(t) }
incantation. Otherwise I get PostgreSQL failures like this when I try to create new records:
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "users_clients_pkey" DETAIL: Key (id)=(401) already exists. : INSERT INTO "users_clients" ("user_id", "client_id") VALUES (20, 46) RETURNING "id"
(This is an example; it happens on various tables, depending on what the first action is that's done on the app after a push.)
Once I do the reset-keys incantation, it's fine until my next push to heroku... even when my push does not include any migrations.
I'm a little baffled as to why this is happening and what can be done to prevent it.
No, there's no datatable manipulation code in my deployment tasks.
It's happening because the primary key (id) value already exists. Why? Because the primary key sequence in Postgres is out of sync with the table's data. Without looking at the database or knowing the schema it's difficult to suggest a solution, but if your database can afford a downtime of 10-15 minutes, you can try the following:
If there is just one table that's a problem, export all of its data into a new table, with a new name and without the ID column.
Drop the existing table and rename the newly created table to the old table's name.
Enable writes to your app again.
But if the entire DB is in a mess, then it needs something more elaborate, which I can't suggest without looking at the schema.
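If downtime isn't an option, resetting the sequences in place (which is what the reset_pk_sequence! incantation in the question does) is usually enough. For a single table, the equivalent is roughly this sketch, reusing the table and column from the error message above:

# Point the table's sequence at the current MAX(id). This assumes the
# default serial sequence, which pg_get_serial_sequence looks up for us.
ActiveRecord::Base.connection.execute(<<-SQL)
  SELECT setval(
    pg_get_serial_sequence('users_clients', 'id'),
    COALESCE((SELECT MAX(id) FROM users_clients), 1)
  );
SQL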
I'm currently looking at migrating an existing system (written in spaghetti PHP) over to Rails. The problem is, it has to run off of a live database. A lot of the ID columns on these different tables aren't named id. For instance, the customers table has an ID column called Customer_ID. Upon looking, I just realised that Rails does in fact seem to find records by the primary key rather than by a specific column called id.
Will I face a lot of problems later with the naming of these ID columns, specifically in stuff like relationships?
In newer versions of Rails (3.2+), set_primary_key :col_name is deprecated.
self.primary_key = 'col_name' is recommended.
http://api.rubyonrails.org/classes/ActiveRecord/AttributeMethods/PrimaryKey/ClassMethods.html
Change the primary key attribute in the model by using:
set_primary_key :col_name
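A minimal sketch for the Customer_ID example from the question (the Order model and its association are hypothetical, just to show that relationships need the keys spelled out too):

class Customer < ActiveRecord::Base
  set_primary_key 'Customer_ID'       # pre-3.2 style
  # self.primary_key = 'Customer_ID'  # 3.2+ style
end

# Hypothetical related model: a foreign key that doesn't follow the
# customer_id convention must be declared on the association.
class Order < ActiveRecord::Base
  belongs_to :customer, :foreign_key => 'Customer_ID'
end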
I was using this command to create a model class "Listing". However, I was interested in knowing the relationship between the datatypes of the model and the datatypes of the underlying database. In this case, it is PostgreSQL. So when I type this command:
rails generate scaffold Listing name:string
I want to know what possible values I can use to describe the types. What does that depend on? The underlying database? If so, what happens if the underlying database changes later? Also, where can I get a list of the types I can use here, and their capacity, with PostgreSQL as the underlying DB?
That command actually generates a migration that creates the table in the DB, so the migrations guide is where you should check the supported types.
I would copy/paste the list here, but I think there is no need to.
http://guides.rubyonrails.org/migrations.html#supported-types
UPDATE
The link to the docs does not contain the information anymore. See the question "Rails 4: List of available datatypes" for the full list.
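For reference, the generator writes a migration roughly like the sketch below, so the names you pass are migration types rather than database types. The portable set (string, text, integer, float, decimal, datetime, time, date, binary, boolean) is translated by each adapter into a native column type, which is why the same migration keeps working if the underlying database changes later:

# Roughly what "rails generate scaffold Listing name:string" produces:
class CreateListings < ActiveRecord::Migration
  def change
    create_table :listings do |t|
      t.string :name  # PostgreSQL maps :string to character varying(255)

      t.timestamps    # created_at / updated_at datetime columns
    end
  end
end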
I'm building a Rails application using MongoDB as the back-end and MongoMapper as the ORM tool. Suppose in version 1, I define the following model:
class SomeModel
include MongoMapper::Document
key :some_key, String
end
Later in version 2, I realize that I need a new required key on the model. So, in version 2, SomeModel now looks like this:
class SomeModel
include MongoMapper::Document
key :some_key, String
key :some_new_key, String, :required => true
end
How do I migrate all my existing data to include some_new_key? Assume that I know how to set a reasonable default value for all the existing documents. Taking this a step further, suppose that in version 3, I realize that I really don't need some_key at all. So, now the model looks like this
class SomeModel
include MongoMapper::Document
key :some_new_key, String, :required => true
end
But all the existing records in my database have values set for some_key, and it's just wasting space at this point. How do I reclaim that space?
With ActiveRecord, I would have just created migrations to add the initial values of some_new_key (in the version1 -> version2 migration) and to delete the values for some_key (in the version2 -> version3 migration).
What's the appropriate way to do this with MongoDB/MongoMapper? It seems to me that some method of tracking which migrations have been run is still necessary. Does such a thing exist?
EDITED: I think people are missing the point of my question. There are times where you want to be able to run a script on a database to change or restructure the data in it. I gave two examples above, one where a new required key was added and one where a key can be removed and space can be reclaimed. How do you manage running these scripts? ActiveRecord migrations give you an easy way to run these scripts and to determine what scripts have already been run and what scripts have not been run. I can obviously write a Mongo script that does any update on the database, but what I'm looking for is a framework like migrations that lets me track which upgrade scripts have already been run.
Check out Mongrations... I just finished reading about it and it looks like what you're after.
http://terrbear.org/?p=249
http://github.com/terrbear/mongrations
Cheers! Kapslok
One option is to use the update operation to update all of your data at once. Multi update is new in the development releases, so you'll need to use one of those.
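A hedged sketch of what that could look like from the Ruby driver, reusing the question's SomeModel (the collection name assumes MongoMapper's default pluralization, and the default value is made up; :multi requires a server and driver that support multi-update):

db = MongoMapper.database
db.collection('some_models').update(
  {},                                            # match every document
  { '$set'   => { 'some_new_key' => 'default' }, # backfill the new key
    '$unset' => { 'some_key' => 1 } },           # drop the obsolete key
  :multi => true
)

Note that $unset removes the key from each document, but MongoDB generally won't hand the freed disk space back without a compact or repair pass.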
You can try this contraption I just made, but it only works with Mongoid and Rails 3 (beta 3) at the moment: http://github.com/adacosta/mongoid_rails_migrations . It'll be upgraded when Rails 3 goes final.
Also another gem for MongoMapper migrations https://github.com/alexeypetrushin/mongo_mapper_ext
Mongrations is a super old gem, completely deprecated. I recommend NOT using it.
Exodus is a really cool migration framework for Mongo, that might be what you want:
https://github.com/ThomasAlxDmy/Exodus
We just build this one: https://github.com/eberhara/mongration - it is a regular node module (you can find it on npm).
We needed a good mongodb migration framework, but could not find any - so we built one.
It has lots of features that the usual migration frameworks lack:
Checksums (it raises an error when a previously run migration no longer matches its old version)
Persists migration state to Mongo itself (there is no local state file)
Full support for replica sets
Automatic rollback handling (developers must specify the rollback procedures)
Ability to run multiple migrations (sync or async) at the same time
Ability to run migrations against different databases at the same time
Hope it helps!
Clint,
You can write code to do the updates, though it seems that updating a record based on its own fields is not supported.
In such a case, I did the following and ran it against the server:
------------------------------
# Backfill a most_recent_activity timestamp for every patient: use the
# last encounter's creation time when encounters exist, otherwise fall
# back to the patient's own updated_at.
records = Patient.all
records.each do |p|
  encounters = p.encounters
  if encounters.nil? || encounters.empty?
    mra = p.updated_at
    #puts "\tpatient...#{mra}"
  else
    mra = encounters.last.created_at
    #puts "\tencounter...#{mra}"
  end
  old = p.most_recent_activity
  p.most_recent_activity = mra
  p.save!
  puts "#{p.last_name} mra: #{old} now: #{mra}"
end
------------------------------
I bet you could hook into ActiveRecord::Migration to automate and track your "migration" scripts.
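A hedged sketch of that idea, assuming the app also keeps a relational connection so that rake db:migrate can record versions in schema_migrations (the class name is my own; the body is just plain MongoMapper code):

class BackfillMostRecentActivity < ActiveRecord::Migration
  def self.up
    # Runs once and is then recorded in schema_migrations, so re-running
    # rake db:migrate won't repeat the backfill.
    Patient.all.each do |p|
      p.most_recent_activity =
        p.encounters.blank? ? p.updated_at : p.encounters.last.created_at
      p.save!
    end
  end

  def self.down
    # A data backfill; there is nothing sensible to undo.
  end
end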
MongoDB is a schema-less database. That's why there are no migrations. In the database itself, it doesn't matter whether the objects have the key :some_key or the key :some_other_key at any time.
MongoMapper tries to enforce some restrictions on this, but since the database is so flexible, you will have to maintain those restrictions yourself. If you need a key on every object, make sure you run a script to update those keys on pre-existing objects, or handle the case of an object that doesn't have that key as you come across it.
I am fairly new to MongoDB myself, but as far as I can see, due to the flexibility of the schema-less db this is how you will need to handle it.