I have a Rails app that uses PostgreSQL.
I recently did a backup of production and restored it to development.
When I try to add a Payment record in development, I get:
ERROR: duplicate key value violates unique constraint "payments_pkey"
DETAIL: Key (id)=(1) already exists.
Yet, there is only one record in the table with id=1 and the payments_id_seq has Current value = 1.
So, why isn't Rails trying to add id=2?
Thanks for the help!
PS - is there a script or command in pgadmin to force the id_seq to be correct?
If you receive a PostgreSQL unique key violation error message ("duplicate key value violates unique constraint..."), your primary key sequence is probably out of sync with the table, e.g. after populating the database.
Use
ActiveRecord::Base.connection.reset_pk_sequence!('[table_name]')
to fix the sequence for the given table; for the question above, that would be reset_pk_sequence!('payments').
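As for the PS: you can confirm the sequence really is behind the table with a quick check in psql or pgAdmin. This uses payments_id_seq, the sequence name already quoted in the question; the sequence should be at or past the table's maximum id:
SELECT last_value FROM payments_id_seq;  -- where the sequence stands now
SELECT max(id) FROM payments;            -- the highest id actually used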
Presumably whatever method you used to copy your database didn't update your sequences along the way. A standard dump/restore should have taken care of that, but if you copied things row by row by hand then you'll have to fix things using setval.
If you only need to fix the sequence for a table T, then you could do this from the console:
ActiveRecord::Base.connection.execute(%q{
  select setval('T_id_seq', m)
  from (
    select max(id) from T
  ) as dt(m)
})
or you could feed that SQL to pgadmin. You'd repeat that for each table T.
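For the payments table from the question, that becomes (assuming the table is not empty, so max(id) is not null):
select setval('payments_id_seq', m)
from (select max(id) from payments) as dt(m);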
Related
I have an old application I am supporting that uses a Microsoft Access database. The original table design did not add primary keys to every table. I am working on a migration program that among other things is adding and filling in a new primary key field (GUID) when needed.
This is happening in three steps:
Add a new GUID field with no constraints
Fill the field with new unique GUIDs
Add the primary key constraints
My problem is setting the unique GUIDs when the table has duplicate rows. Here is my code to set the GUIDs:
Query.SQL.Add('SELECT * FROM ' + TableName);
Query.Open;
while not Query.Eof do
begin
  Query.Edit;
  Query.FieldByName(NewPrimaryKeyFieldName).AsGuid := TGuid.NewGuid;
  Query.Post;
  Query.Next;
end;
FireDAC generates an update statement that contains a where clause with all the original fields/values in the row (since there is no unique field for it to use). However, because the rows are complete duplicates, the statement still updates two rows.
FireDAC correctly errors with this message:
Update command updated [2] instead of [1] record.
I can open up the database in Access and delete the duplicate records or assign them a unique guid by editing the table. I would like my conversion tool to automatically do this.
Is there some way to work with these duplicate rows in FireDAC? Either to update just one at a time, or to delete just one of them?
In my opinion there is no way to do it with just one SQL statement.
I would do this (a sketch in Access SQL follows the notes):
1. Copy the whole table, without duplicates, into a new temp table:
SELECT DISTINCT * FROM <TABLENAME>
2. Add the keys.
3. Delete the old table content and copy the new content back from the temp table.
Notes:
The DB should be unavailable to everyone else during that operation.
Make a backup before.
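A rough sketch of those steps in Access SQL. TableName stands for your real table, and TempTable is a hypothetical scratch table that the SELECT ... INTO (make-table query) creates:
SELECT DISTINCT * INTO TempTable FROM TableName;
DELETE FROM TableName;
INSERT INTO TableName SELECT * FROM TempTable;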
I'm trying to come up with a PostgreSQL schema for host data that's currently in an LDAP store. Part of that data is the list of hostnames a machine can have, and that attribute is generally the key that most people use to find the host records.
One thing I'd like to get out of moving this data to an RDBMS is the ability to set a uniqueness constraint on the hostname column so that duplicate hostnames can't be assigned. This would be easy if hosts could only have one name, but since they can have more than one it's more complicated.
I realize that the fully-normalized way to do this would be to have a hostnames table with a foreign key pointing back to the hosts table, but I'd like to avoid having everybody need to do joins for even the simplest query:
select hostnames.name,hosts.*
from hostnames,hosts
where hostnames.name = 'foobar'
and hostnames.host_id = hosts.id;
I figured using PostgreSQL arrays could work for this, and they certainly make the simple queries simple:
select * from hosts where names #> '{foobar}';
When I set a uniqueness constraint on the hostnames attribute, though, it of course treats the entire list of names as the unique value instead of each name. Is there a way to make each name unique across every row instead?
If not, does anyone know of another data-modeling approach that would make more sense?
The righteous path
You might want to reconsider normalizing your schema. It is not necessary for everyone to "join for even the simplest query". Create a VIEW for that.
Table could look like this:
CREATE TABLE hostname (
hostname_id serial PRIMARY KEY
, host_id int REFERENCES host(host_id) ON UPDATE CASCADE ON DELETE CASCADE
, hostname text UNIQUE
);
The surrogate primary key hostname_id is optional. I prefer to have one. In your case hostname could be the primary key. But many operations are faster with a simple, small integer key. Create a foreign key constraint to link to the table host.
Create a view like this:
CREATE VIEW v_host AS
SELECT h.*
, array_agg(hn.hostname) AS hostnames
-- , string_agg(hn.hostname, ', ') AS hostnames -- text instead of array
FROM host h
JOIN hostname hn USING (host_id)
GROUP BY h.host_id; -- works in v9.1+
Starting with pg 9.1, the primary key in the GROUP BY covers all columns of that table in the SELECT list. From the release notes for version 9.1:
Allow non-GROUP BY columns in the query target list when the primary
key is specified in the GROUP BY clause
Queries can use the view like a table. Searching for a hostname will be much faster this way:
SELECT *
FROM host h
JOIN hostname hn USING (host_id)
WHERE hn.hostname = 'foobar';
Provided you have an index on host(host_id), which should be the case as it should be the primary key. Plus, the UNIQUE constraint on hostname(hostname) implements the other needed index automatically.
In Postgres 9.2+ a multicolumn index would be even better if you can get an index-only scan out of it:
CREATE INDEX hn_multi_idx ON hostname (hostname, host_id);
Starting with Postgres 9.3, you could use a MATERIALIZED VIEW, circumstances permitting. Especially if you read much more often than you write to the table.
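A minimal sketch of that variant, reusing the view from above (mv_host is just a name I picked):
CREATE MATERIALIZED VIEW mv_host AS
SELECT * FROM v_host;

REFRESH MATERIALIZED VIEW mv_host;  -- rerun after writes to host / hostname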
The dark side (what you actually asked)
If I can't convince you of the righteous path, here is some assistance for the dark side:
Here is a demo of how to enforce uniqueness of hostnames. I use a table hostname to collect hostnames and a trigger on the table host to keep it up to date. Unique violations raise an exception and abort the operation.
CREATE TABLE host(hostnames text[]);
CREATE TABLE hostname(hostname text PRIMARY KEY); -- pk enforces uniqueness
Trigger function:
CREATE OR REPLACE FUNCTION trg_host_insupdelbef()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
BEGIN
   -- split UPDATE into DELETE & INSERT
   IF TG_OP = 'UPDATE' THEN
      IF OLD.hostnames IS DISTINCT FROM NEW.hostnames THEN  -- keep going
      ELSE
         RETURN NEW;  -- exit, nothing to do
      END IF;
   END IF;

   IF TG_OP IN ('DELETE', 'UPDATE') THEN
      DELETE FROM hostname h
      USING  unnest(OLD.hostnames) d(x)
      WHERE  h.hostname = d.x;

      IF TG_OP = 'DELETE' THEN
         RETURN OLD;  -- exit, we are done
      END IF;
   END IF;

   -- control only reaches here for INSERT or UPDATE (with actual changes)
   INSERT INTO hostname(hostname)
   SELECT h
   FROM   unnest(NEW.hostnames) h;

   RETURN NEW;
END
$func$;
Trigger (EXECUTE FUNCTION requires Postgres 11+; on older versions use EXECUTE PROCEDURE):
CREATE TRIGGER host_insupdelbef
BEFORE INSERT OR DELETE OR UPDATE OF hostnames ON host
FOR EACH ROW EXECUTE FUNCTION trg_host_insupdelbef();
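A quick test run (a sketch; the second insert should abort because 'bar' is already registered):
INSERT INTO host VALUES ('{foo,bar}');
INSERT INTO host VALUES ('{bar,baz}');  -- ERROR: duplicate key value violates unique constraint "hostname_pkey"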
Use a GIN index on the array column host.hostnames and array operators to work with it (a sketch follows the related links):
Why isn't my PostgreSQL array index getting used (Rails 4)?
Check if any of a given array of values are present in a Postgres array
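A minimal sketch of such an index and a query whose array operator can use it (the index name is my own choice):
CREATE INDEX host_hostnames_gin_idx ON host USING gin (hostnames);

SELECT * FROM host WHERE hostnames @> '{foobar}';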
In case anyone still needs what was in the original question:
CREATE TABLE testtable(
  id serial PRIMARY KEY,
  refs integer[],
  EXCLUDE USING gist( refs WITH && )
);
INSERT INTO testtable( refs ) VALUES( ARRAY[100,200] );
INSERT INTO testtable( refs ) VALUES( ARRAY[200,300] );
and this would give you:
ERROR: conflicting key value violates exclusion constraint "testtable_refs_excl"
DETAIL: Key (refs)=({200,300}) conflicts with existing key (refs)=({100,200}).
Checked in Postgres 9.5 on Windows.
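If the CREATE TABLE fails on your install with an error like "data type integer[] has no default operator class for access method gist", the intarray extension (assuming it is available on your server) supplies a suitable GiST operator class:
CREATE EXTENSION intarray;  -- provides a GiST opclass for integer[]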
Note that this would create an index using the && operator. So when you are working with testtable, checking ARRAY[x] && refs would be many times faster than x = ANY( refs ).
P.S. Generally I agree with the above answer. In 99% of cases you'd prefer a normalized schema. Please try to avoid "hacky" stuff in production.
If my question title is not clear, let me detail it more:
In my Rails application, in local (development) I am using Oracle (PL/SQL) as my database, and there is a Customer table where the primary ID starts from 1, 2, 3... and so on. Until I started using the same database in both local (development) and production there was no problem creating customers, but after I started running local (development) and production in parallel against the same DB, customer creation often fails because the IDs it tries to create are duplicates. So how can I set, in local/production, a different number for the next ID to start from, to avoid this conflict?
E.g. I want to continue using the IDs (1, 2, ... etc.) locally, and in production I want the next ID to start from 50000 onwards.
If I understand your question correctly, you have a development environment and a live environment both linked to the same database and the same table (say the table name is customer_tab, and the primary id column is cus_id).
Although this is a highly discouraged practice, if you want the primary id to be in sequence regardless of where the insert is coming from (live or dev), then you can use a sequence and a trigger. That is, the insert statement will not supply the primary id; rather, it will leave it null. On insert, the trigger fills it in using the sequence numbers. Something like this:
/*Create sequence*/
create sequence customer_tab_seq
start with 5000
increment by 1;
/*Create trigger*/
CREATE OR REPLACE TRIGGER customer_tab_tri
BEFORE INSERT ON customer_tab
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    IF :NEW.cus_id IS NULL THEN
      SELECT customer_tab_seq.NEXTVAL INTO :NEW.cus_id FROM DUAL;
    END IF;
  END IF;
END;
/
In this case, first row inserted will be given 5000 (regardless if it is from live or dev), the second will be given 5001 (regardless if it is from live or dev). And it will save you future conflicts.
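For illustration, with a hypothetical cus_name column, an insert that omits the id would then be numbered by the trigger:
INSERT INTO customer_tab (cus_name) VALUES ('Alice');
-- cus_id is NULL here, so the trigger fills it from customer_tab_seq (5000, 5001, ...)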
I'm using a SQLite DB in a sample Rails application. From my users table, I have cleared all records using User.delete_all. But now whenever I insert a new record into the table using User.create, the id starts at a value one more than the id of the last record that was in the table. For example, if my table had 5 records and I cleared them all, then when I do User.create, it starts at id 6.
Is there any way I can make the id start from 1 again?
Thank you
Similar question: How to reset a single table in rails?. We can run the following at the Rails console to reset the id column to 1 for a SQLite table:
ActiveRecord::Base.connection.execute("DELETE from sqlite_sequence where name = '<table_name>'")
You seem to have autoincrement turned on for the id column.
SQLite handles these values in an internal table called sqlite_sequence. You could reset the id for a particular autoincrement-enabled table by querying:
UPDATE "sqlite_sequence" SET "seq" = 0 WHERE "name" = $YOURTABLENAME
However, this is not a good idea because the autoincrement functionality is intended to be used in a way that the user does not influence its algorithm. Ideally, you should not care about the actual value of your id but consider it only as a unique identifier for a record.
For one reason or another the pre-existing Postgres schema I'm using with my Rails app doesn't have a default sequence set for a table's primary key, so I am required to query for it every time I want to create a new row.
I have set_sequence_name "seq_people_id" in my model, but whenever I call Person.new Postgres complains to me because Rails is executing the insert query without the ID (which is marked as NOT NULL in the schema).
How do I tell Rails to always use the sequence when creating new records?
Postgres 8.1.4
ActiveRecord 3.0.3
Rails 2.3.10
Here's what I get when I run psql and \d foo:
Table "public.foo"
Column | Type | Modifiers
--------+---------------+------------------------------------------------------
id | integer | not null default nextval('foo_id_seq'::regclass)
(etc.)
I'd check the following:
Verify the actual sequence name is the same as what you reference (people_id_seq vs. seq_people_id)
Verify the table's default is similar to what I have above (if it's missing, see the sketch at the end)
(just checking) is the primary key's field named "id" ?
Did you create the table using a migration or by hand? If the latter, try creating a table with a migration, specifying the same fields as in your people table. Does it work properly? Compare the tables.
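If the default really is missing, one way to restore it is a sketch like the following, using the sequence name from the question and assuming the table is called people with an id primary key column:
ALTER TABLE people ALTER COLUMN id SET DEFAULT nextval('seq_people_id');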