I've got a Rails 3 application running on SQL Server against a legacy database with some computed columns. When I try to save a new record, I get the error "The column "ContractPendingDays" cannot be modified because it is either a computed column or is the result of a UNION operator".
I'm not updating this column; it just seems to be one of the columns ActiveRecord tries to write to when adding a new record to the db.
Is there some way to stop this behavior? I even tried changing schema.rb, but that didn't help (and it would be suboptimal anyway, since I'd have to do it every time I change the db).
Is there some way to mark a column as not updatable so ActiveRecord doesn't try to write to it?
Found answer here:
Prevent NULL for active record INSERT?
(Use attributes.delete in a before_create callback)
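A minimal sketch of that approach, assuming a hypothetical Contract model mapped to the legacy table:

class Contract < ActiveRecord::Base
  before_create :drop_computed_column

  private

  # Remove the computed column before ActiveRecord builds the INSERT,
  # following the linked answer. On some Rails versions the public
  # attributes hash is a copy, in which case the record's internal
  # attribute hash (e.g. @attributes) has to be modified instead.
  def drop_computed_column
    attributes.delete("ContractPendingDays")
  end
end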
As upsert is a relatively new method in Rails' ActiveRecord, there is not a lot of documentation on it. I do understand the concept behind it: its purpose is to insert a new record if none exists, and to update the existing record if one is found in the database.
So, for instance, when I call Model.upsert(name: 'a', number: 1), does ActiveRecord look for a preexisting record with name: 'a' or a preexisting record with number: 1?
For example, right now when I try to update a Model with the name 'a' using Model.upsert(name: 'a', number: 1), I get a NotNullViolation because the id column, which I am not specifying, ends up null. Can I update a model with upsert without passing in the id as a parameter?
The implementation differs based on the database targeted. Postgres, for example, natively supports upserts via INSERT ... ON CONFLICT ... DO UPDATE. You should refer to your underlying database's documentation for the gory details. In general, though, ActiveRecord is going to attempt to use all unique indexes on the table to determine whether an existing record conflicts or not. At a minimum this means your primary key, though it may mean others as well. If you are using Postgres or SQLite, you can pass :unique_by to upsert to explicitly indicate which fields should be used to determine whether a matching record already exists (doc). Note that if you use this and attempt to insert a record that conflicts with another record on a unique key not listed in unique_by, your database will throw an exception.
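For example, a minimal sketch of the :unique_by option, assuming a Postgres table with a unique index on name (the index name index_models_on_name is hypothetical):

# Match existing rows by the unique index on name rather than the primary key.
Model.upsert({ name: 'a', number: 1 }, unique_by: :index_models_on_name)

# Equivalent column-list form, assuming name alone is covered by a unique index.
Model.upsert({ name: 'a', number: 1 }, unique_by: [:name])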
I get a NotNullViolation because the id column, which I am not specifying, ends up null.
No, of course not. When performing an upsert you must supply every field that would be needed for an insert, because if the record doesn't exist and has to be inserted, it must contain all required fields. If a column is declared NOT NULL, you can't attempt an upsert that would leave it null.
Since you're not passing an id, Rails can't figure out how to determine record uniqueness from the primary key. It may work with a unique index on one of your other fields (presuming that unique index is appropriate to your application).
I think the NotNullViolation you get for the missing id is caused by the underlying DB definition: the upsert method uses all unique columns of the table definition to find a possible match to update instead of creating a new record. In this case you are missing an important parameter of the method, so the upsert fails.
Is there a way to know when a user updates a particular table column? For example, at what time a user changed their last name?
I'm not interested in when the table was last updated, only the column. Is this possible using Rails 5 and PostgreSQL?
If you are including timestamps in your models (.created_at and .updated_at) then .updated_at will tell you the last time that the record (i.e. the database row) was updated.
But that will not tell you which attribute of the record was changed (i.e. which database column). Nor will it tell you which user changed it, or if it was changed automatically by something in your system, etc.
You would need a schema change to do that. You can add a new table called user_logs and implement a database trigger which stores the old_record and new_record whenever a row changes. This will give you the desired change log.
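A hedged sketch of that trigger-based approach as a Rails migration; the user_logs table layout, the users table, and the last_name column are assumptions taken from the question:

class AddUserAuditTrigger < ActiveRecord::Migration[5.1]
  def up
    create_table :user_logs do |t|
      t.bigint :user_id
      t.jsonb  :old_record
      t.jsonb  :new_record
      t.timestamps
    end

    execute <<~SQL
      CREATE OR REPLACE FUNCTION log_user_changes() RETURNS trigger AS $$
      BEGIN
        INSERT INTO user_logs (user_id, old_record, new_record, created_at, updated_at)
        VALUES (OLD.id, to_jsonb(OLD), to_jsonb(NEW), now(), now());
        RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER users_audit
      AFTER UPDATE ON users
      FOR EACH ROW
      -- Narrow it to a single column if that is all you care about:
      -- WHEN (OLD.last_name IS DISTINCT FROM NEW.last_name)
      EXECUTE PROCEDURE log_user_changes();
    SQL
  end

  def down
    execute "DROP TRIGGER IF EXISTS users_audit ON users"
    execute "DROP FUNCTION IF EXISTS log_user_changes()"
    drop_table :user_logs
  end
end

Comparing old_record and new_record in user_logs tells you which column changed and when; knowing which user made the change still has to come from the application layer.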
Normally we use Model.where(name: 'John') to find records. Let's say I have a method inside the model named status which does some calculations and outputs a string. How can I use that in a where clause? Right now, if I use Model.where(status: 'active'), it says PG::UndefinedColumn: ERROR: column model.status does not exist.
No, you can't. where builds SQL conditions; if you calculate your status in Ruby, you can't use it in where.
If what you ask is how to have status applied as a filter by the SQL server before retrieving the records, you can't.
One option, indeed, is to retrieve all the records and then use Ruby's select to get what you want. That is very inefficient.
Another option is to write a fragment of SQL logic that expresses the same rule and plug that straight into a where. That is not always easy, but it is very efficient.
One last solution is to have that status be computed for you in a before_save callback and stored in the table. You can then use a regular where to filter which records you want. The downside is that you need an extra column.
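A hedged sketch of the three options; Model, status, and the expires_at column used to derive the status are assumptions for illustration:

# Option 1: load everything and filter in Ruby (simple but inefficient).
Model.all.select { |record| record.status == 'active' }

# Option 2: express the same rule in SQL so the database does the filtering.
Model.where('expires_at > ?', Time.current)

# Option 3: persist the computed value in a before_save callback, then query
# the stored column as usual (requires adding a status column to the table).
class Model < ApplicationRecord
  before_save { self.status = expires_at.future? ? 'active' : 'inactive' }
end
Model.where(status: 'active')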
I have a Join table in Rails which is just a 2 column table with ids.
In order to mass insert into this table, I use
ActiveRecord::Base.connection.execute("INSERT INTO myjointable (first_id, second_id) VALUES #{values}")
Unfortunately this gives me errors when there are duplicates. I don't need to update any values, simply move on to the next insert if a duplicate exists.
How would I do this?
As an FYI, I have searched Stack Overflow and most of the answers are a bit advanced for me to understand. I've also checked the PostgreSQL documentation and played around in the Rails console, but still to no avail. I can't figure this one out, so I'm hoping someone else can tell me what I'm doing wrong.
The closest statement I've tried is:
INSERT INTO myjointable (first_id, second_id)
SELECT 1, 2
WHERE NOT EXISTS (
  SELECT first_id FROM myjointable
  WHERE first_id = 1 AND second_id IN (...)
)
Part of the problem with this statement is that I am only inserting 1 value at a time whereas I want a statement that mass inserts. Also the second_id IN (...) section of the statement can include up to 100 different values so I'm not sure how slow that will be.
Note that for the most part there should not be many duplicates so I am not sure if mass inserting to a temporary table and finding distinct values is a good idea.
Edit to add context:
The reason I need a mass insert is that I have a many-to-many relationship between two models, where one of the models is never populated by a form. I have stocks, and stock price histories. The stock price histories are never created in a form; they are mass inserted by pulling the data from Yahoo Finance with its API. I use the activerecord-import gem to mass insert stock price histories (i.e. Model.import columns, values), but I can't write jointable.import columns, values because the join table has no model, so jointable is an undefined local variable.
I ended up using the WITH clause to select my values and give it a name. Then I inserted those values and used WHERE NOT EXISTS to effectively skip any items that are already in my database.
So far it looks like it is working...
WITH withqueryname(first_id, second_id) AS (VALUES (1,2),(3,4),(5,6)...etc)
INSERT INTO jointablename (first_id, second_id)
SELECT * FROM withqueryname
WHERE NOT EXISTS (
  SELECT first_id FROM jointablename
  WHERE first_id = 1
    AND second_id IN (1,2,3,4,5,6..etc)
)
You can replace the VALUES list with a variable; mine was VALUES #{values}.
You can also replace the second_id IN list with a variable; mine was second_id IN #{variable}.
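A hedged sketch of how that interpolation might look from Ruby; pairs, first_id and second_ids are assumed variables, and the query mirrors the answer's structure rather than being a general-purpose solution:

# Build the literal lists from Ruby data (to_i as a basic injection guard).
values_sql     = pairs.map { |f, s| "(#{f.to_i},#{s.to_i})" }.join(",")
second_ids_sql = second_ids.map(&:to_i).join(",")

sql = <<-SQL
  WITH withqueryname(first_id, second_id) AS (VALUES #{values_sql})
  INSERT INTO jointablename (first_id, second_id)
  SELECT * FROM withqueryname
  WHERE NOT EXISTS (
    SELECT first_id FROM jointablename
    WHERE first_id = #{first_id.to_i}
      AND second_id IN (#{second_ids_sql})
  )
SQL

ActiveRecord::Base.connection.execute(sql)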
Here's how I'd tackle it: Create a temp table and populate it with your new values. Then lock the old join values table to prevent concurrent modification (important) and insert all value pairs that appear in the new table but not the old one.
One way to do this is by doing a left outer join of the old values onto the new ones and filtering for rows where the old join table values are null. Another approach is to use an EXISTS subquery. The two are highly likely to result in the same query plan once the query optimiser is done with them anyway.
Example, untested (since you didn't provide an SQLFiddle or sample data) but should work:
BEGIN;
CREATE TEMPORARY TABLE newjoinvalues(
  first_id  integer,
  second_id integer,
  PRIMARY KEY (first_id, second_id)
);
-- Now populate `newjoinvalues` with multi-valued inserts or COPY
COPY newjoinvalues(first_id, second_id) FROM stdin;
LOCK TABLE myjoinvalues IN EXCLUSIVE MODE;
INSERT INTO myjoinvalues
SELECT n.first_id, n.second_id
FROM newjoinvalues n
LEFT OUTER JOIN myjoinvalues m ON (n.first_id = m.first_id AND n.second_id = m.second_id)
WHERE m.first_id IS NULL AND m.second_id IS NULL;
COMMIT;
This won't update existing values, but you can do that fairly easily too with a second query that does an UPDATE ... FROM while still holding the write table lock.
Note that the lock mode specified above will not block SELECTs, only writes like INSERT, UPDATE and DELETE, so queries can continue to be made against the table while the process is ongoing; you just can't write to it.
If you can't accept that, an alternative is to run the update at SERIALIZABLE isolation (which only works properly for this purpose in Pg 9.1 and above). This will cause the query to fail whenever a concurrent write occurs, so you have to be prepared to retry it repeatedly. For that reason it's likely better to just live with locking the table for a while.
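For completeness, a hedged sketch of driving the same sequence from Rails; pairs is an assumed array of [first_id, second_id] arrays and the table names match the example above:

ActiveRecord::Base.transaction do
  conn = ActiveRecord::Base.connection

  # Temp table dropped automatically when the transaction commits.
  conn.execute(<<-SQL)
    CREATE TEMPORARY TABLE newjoinvalues(
      first_id  integer,
      second_id integer,
      PRIMARY KEY (first_id, second_id)
    ) ON COMMIT DROP
  SQL

  # Populate the temp table in batches.
  pairs.uniq.each_slice(500) do |slice|
    values = slice.map { |f, s| "(#{f.to_i},#{s.to_i})" }.join(",")
    conn.execute("INSERT INTO newjoinvalues (first_id, second_id) VALUES #{values}")
  end

  # Block concurrent writes (reads still allowed), then insert the missing pairs.
  conn.execute("LOCK TABLE myjoinvalues IN EXCLUSIVE MODE")
  conn.execute(<<-SQL)
    INSERT INTO myjoinvalues
    SELECT n.first_id, n.second_id
    FROM newjoinvalues n
    LEFT OUTER JOIN myjoinvalues m
      ON n.first_id = m.first_id AND n.second_id = m.second_id
    WHERE m.first_id IS NULL
  SQL
end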
I'm trying to seed a Rails application with some SQL statements in the seed.rb file. There are 12 values supplied; however, the table has 15 columns. The extra three columns are the automatically generated id, created_at and updated_at columns that Rails includes by default. If I run a custom SQL statement in the seed.rb file in the following manner...
connection = ActiveRecord::Base.connection()
query = "random sql"
connection.execute(query)
Rails doesn't create those columns for me in the way it would if I did
Employee.create!(name: "Joe")
Is there any way to indicate to Rails that I need the id and timestamp columns filled with values when I run an SQL statement in seed.rb?
No, because Rails has no way of knowing whether your "random sql" even creates any records for it to fill in ids/timestamps.
When you use connection.execute, you are on your own; you have forsaken your ORM and given in to the temptation of SQL.
If you can do it using ActiveRecord, then do so! If not, well, that is why Rails lets you drop down to SQL (but think again: can you really not write it in Ruby?).
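A hedged sketch of both routes, using the Employee example from the question (the employees table and its name column are assumptions):

# Route 1: stay in ActiveRecord and let it fill in id, created_at and updated_at.
Employee.create!(name: "Joe")

# Route 2: stay in raw SQL, supply the timestamps yourself, and let the
# database default (e.g. a serial/identity column) supply the id.
ActiveRecord::Base.connection.execute(<<-SQL)
  INSERT INTO employees (name, created_at, updated_at)
  VALUES ('Joe', NOW(), NOW())
SQL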