I am working with an SQLite DB in iOS. I have two tables named LEVEL and SUBJECT.
Now I need to keep the two tables in sync: TOTALCREDITS in the LEVEL table should be updated automatically when the user adds a new record to the SUBJECT table (which uses LEVELID as a foreign key).
You need a trigger:
CREATE TRIGGER update_totalcredits
AFTER INSERT ON Subject
BEGIN
    UPDATE Level
    SET TotalCredits = (SELECT SUM(Credits)
                        FROM Subject
                        WHERE LevelID = NEW.LevelID)
    WHERE LevelID = NEW.LevelID;
END;
However, it might be a better idea to compute the total credits dynamically (with that SELECT SUM(...) query) whenever you need them.
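Note that the trigger above only fires on INSERT; if subjects can also be deleted or have their Credits changed, matching AFTER DELETE and AFTER UPDATE triggers would be needed. A minimal sketch of the on-demand alternative, assuming the same LEVEL/SUBJECT schema:
SELECT SUM(Credits) AS TotalCredits
FROM Subject
WHERE LevelID = ?;  -- bind the LevelID you are displaying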
I have an old application I am supporting that uses a Microsoft Access database. The original table design did not add primary keys to every table. I am working on a migration program that, among other things, adds and fills in a new primary key field (a GUID) when needed.
This happens in three steps:
Add a new GUID field with no constraints
Fill the field with new unique GUIDs
Add the primary key constraint
My problem is setting the unique GUIDs when the table has duplicate rows. Here is my code to set the GUIDs:
Query.SQL.Add('SELECT * FROM ' + TableName);
Query.Open;
while not Query.Eof do
begin
  Query.Edit;
  Query.FieldByName(NewPrimaryKeyFieldName).AsGuid := TGuid.NewGuid;
  Query.Post;
  Query.Next;
end;
FireDAC generates an UPDATE statement whose WHERE clause lists all the original fields/values of the row (since there is no unique field for it to use). However, because the rows are complete duplicates, the statement still updates two rows.
FireDAC correctly errors with this message:
Update command updated [2] instead of [1] record.
I can open the database in Access and delete the duplicate records, or assign them a unique GUID by editing the table. I would like my conversion tool to do this automatically.
Is there some way to work with these duplicate rows in FireDAC, either to update them one at a time or to delete just one of them?
In my opinion there is no way to do it with just one SQL statement.
I would do this:
1. Copy the whole table, without duplicates, into a new temp table:
SELECT DISTINCT * FROM <TableName>
2. Add the keys.
3. Delete the old table's content and copy the new content back from the temp table.
Notes:
1. The DB should be unavailable to everyone else during that operation.
2. Make a BACKUP before.
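A minimal sketch of those steps in Access SQL, assuming a hypothetical table named MyTable (Access runs one statement per command, so execute these individually):
SELECT DISTINCT * INTO MyTable_tmp FROM MyTable;
DELETE FROM MyTable;
INSERT INTO MyTable SELECT * FROM MyTable_tmp;
DROP TABLE MyTable_tmp;
SELECT ... INTO is Access's make-table query, so MyTable_tmp gets the same column types without having to declare them.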
I want to update a whole table from one query. Here is how my update works:
The database (Database A) is stored on the iDevice.
A temporary database (Database B) is downloaded to the device and stored in a temp folder on the device. (Both DBs have the same structure.)
First I attach the temp DB to the device database. The attached DB's name is SECOND1.
Then I insert new records from the temp DB into the device DB with the following INSERT code. It is working fine:
INSERT INTO main.fList SELECT * FROM SECOND1.fList WHERE not exists (SELECT 1 FROM main.fList WHERE main.fList.GUID = SECOND1.fList.GUID)
But when I use the following code to update, it does not work correctly. It updates every row of the device DB table with the same top value:
UPDATE fList SET Notes = (SELECT SECOND1.fList.Notes FROM SECOND1.fList WHERE SECOND1.fList.GUID = fList.GUID) WHERE EXISTS (SELECT * FROM SECOND1.fList WHERE SECOND1.fList.GUID = fList.GUID)
I found an SQL query for bulk updates. Here is the code:
UPDATE fTempRank7
SET
fTempRank7.int_key = fRank7.int_key,
fTempRank7.int_rank6 = fRank7.int_rank6,
fTempRank7.title = fRank7.title,
fTempRank7.sequence = fRank7.sequence,
fTempRank7.lastupdated = fRank7.lastupdated
FROM
fTempRank7 INNER JOIN fRank7 ON
fTempRank7.int_key = fRank7.int_key
But in SQLite this code does not work.
Does anyone know how to do a bulk update in SQLite?
SQLite does not support joins in an UPDATE statement.
When you use a table name without a database name, it refers to the first matching table in the innermost query. In other words, if you use SECOND1.fList in a subquery, any other occurrence of fList refers to the table in SECOND1.
To ensure that you always refer to the correct table, use the database name in all table references.
The main database is always named main, so all table references should be either main.fList or SECOND1.fList.
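Applied to the statement above, a fully qualified version would look like this (a sketch under the same schema assumptions):
UPDATE main.fList
SET Notes = (SELECT SECOND1.fList.Notes
             FROM SECOND1.fList
             WHERE SECOND1.fList.GUID = main.fList.GUID)
WHERE EXISTS (SELECT 1
              FROM SECOND1.fList
              WHERE SECOND1.fList.GUID = main.fList.GUID);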
Anyway, if you are updating all columns, you can simplify the update by deleting the rows that would be updated, so that all new data can just be inserted:
BEGIN;
DELETE FROM fList WHERE GUID IN (SELECT GUID FROM SECOND1.fList);
INSERT INTO fList SELECT * FROM SECOND1.fList;
COMMIT;
When you have a UNIQUE constraint on the GUID column, this can be simplified into a single statement:
INSERT OR REPLACE INTO fList SELECT * FROM SECOND1.fList;
And here I don't use main. because I know what I'm doing. ☺
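If the GUID column does not already have a UNIQUE constraint, a sketch of adding one via an index (the name idx_fList_GUID is made up):
CREATE UNIQUE INDEX IF NOT EXISTS idx_fList_GUID ON fList(GUID);
INSERT OR REPLACE resolves conflicts against any UNIQUE index, so this is enough to make the single-statement version work.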
I have a database I would like to convert to use UUIDs as the primary keys in PostgreSQL.
I have roughly 30 tables with deep multi-level associations. Is there an 'easy' way to convert all the current IDs to UUIDs?
From this: https://coderwall.com/p/n_0awq, I can see that I could alter the table in a migration. I was thinking something like this:
for client in Client.all
  # Retrieve children
  underwritings = client.underwritings
  # Change primary key
  execute 'ALTER TABLE clients ALTER COLUMN id TYPE uuid;'
  execute 'ALTER TABLE clients ALTER COLUMN id SET DEFAULT uuid_generate_v1();'
  # Get new id - is this already generated?
  client_id = client.id
  for underwriting in underwritings
    locations = underwriting.locations
    other_records = underwriting.other_records...
    execute 'ALTER TABLE underwritings ALTER COLUMN id TYPE uuid;'
    execute 'ALTER TABLE underwritings ALTER COLUMN id SET DEFAULT uuid_generate_v1();'
    underwriting.client_id = client_id
    underwriting.save
    underwriting_id = underwriting.id
    for location in locations
      buildings = location.buildings
      execute 'ALTER TABLE locations ALTER COLUMN id TYPE uuid;'
      execute 'ALTER TABLE locations ALTER COLUMN id SET DEFAULT uuid_generate_v1();'
      location.underwriting_id = underwriting_id
      location.save
      location_id = location.id
      for building in buildings
        ...
      end
    end
    for other_record in other_records
      ...
    end
    ...
  end
end
Questions:
Will this work?
Is there an easier way to do this?
Will child records be retrieved properly as long as they are retrieved before the primary key is changed?
Will the new primary key be already generated as soon as the alter table is called?
Thanks very much for any help or tips in doing this.
I found these approaches to be quite tedious. It is possible to use direct queries to PostgreSQL to convert a table with existing data.
For primary key:
ALTER TABLE students
ALTER COLUMN id DROP DEFAULT,
ALTER COLUMN id SET DATA TYPE UUID USING (uuid(lpad(replace(text(id),'-',''), 32, '0'))),
ALTER COLUMN id SET DEFAULT uuid_generate_v4()
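Note that uuid_generate_v4() comes from the uuid-ossp extension, which has to be enabled once per database:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";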
For other references:
ALTER TABLE students
ALTER COLUMN city_id SET DATA TYPE UUID USING (uuid(lpad(replace(text(city_id),'-',''), 32, '0')))
The above left-pads the integer value with zeros and converts it to a UUID. This approach does not require ID mapping, and if needed the old ID can be retrieved.
As there is no data copying, this approach works quite fast.
To handle these, and the more complicated case of polymorphic associations, you can use https://github.com/kreatio-sw/webdack-uuid_migration. This gem adds helpers to ActiveRecord::Migration to ease these migrations.
I think trying to do something like this through Rails would just complicate matters. I'd ignore the Rails side of things completely and just do it in SQL.
Your first step is to grab a complete backup of your database. Then restore that backup into another database to:
Make sure that your backup works.
Give you a realistic playpen where you can make mistakes without consequence.
First you'd want to clean up your data by adding real foreign keys to match all your Rails associations. There's a good chance that some of your FKs will fail; if they do, you'll have to clean up your broken references.
Now that you have clean data, rename all your tables to make room for the new UUID versions. For a table t, we'll refer to the renamed table as t_tmp. For each t_tmp, create another table to hold the mapping from the old integer ids to the new UUID ids, something like this:
create table t_id_map (
old_id integer not null,
new_id uuid not null default uuid_generate_v1()
)
and then populate it:
insert into t_id_map (old_id)
select id from t_tmp
And you'll probably want to index t_id_map.old_id while you're here.
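For example (the index name is arbitrary):
create index t_id_map_old_id_idx on t_id_map (old_id);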
This gives us the old tables with integer ids and a lookup table for each t_tmp that maps the old id to the new one.
Now create the new tables with UUIDs replacing all the old integer and serial columns that held ids; I'd add real foreign keys at this point as well; you should be paranoid about your data: broken code is temporary, broken data is usually forever.
Populating the new tables is pretty easy at this point: simply use insert into ... select ... from constructs and JOIN to the appropriate t_id_map tables to map the old ids to the new ones. Once the data has been mapped and copied, you'll want to do some sanity checking to make sure everything still makes sense. Then you can drop your t_tmp and t_id_map tables and get on with your life.
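As a sketch, for a hypothetical child table with a parent_id foreign key (all names here are made up, following the t_tmp/t_id_map convention above), the copy step would look something like:
insert into child (id, parent_id, payload)
select cm.new_id, pm.new_id, t.payload
from child_tmp t
join child_id_map cm on cm.old_id = t.id
join parent_id_map pm on pm.old_id = t.parent_id;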
Practice that process on a copy of your database, script it up, and away you go.
You would of course want to shut down any applications that access your database while you're doing this work.
I didn't want to add foreign keys, and I wanted to use a Rails migration. Anyway, here is what I did, in case others are looking to do this (example for 2 tables; I did 32 in total):
def change
  execute 'CREATE EXTENSION "uuid-ossp";'

  execute <<-SQL
    ALTER TABLE buildings ADD COLUMN guid uuid DEFAULT uuid_generate_v1() NOT NULL;
    ALTER TABLE buildings ALTER COLUMN guid SET DEFAULT uuid_generate_v1();
    ALTER TABLE buildings ADD COLUMN location_guid uuid;
    ALTER TABLE clients ADD COLUMN guid uuid DEFAULT uuid_generate_v1() NOT NULL;
    ALTER TABLE clients ALTER COLUMN guid SET DEFAULT uuid_generate_v1();
    ALTER TABLE clients ADD COLUMN agency_guid uuid;
    ALTER TABLE clients ADD COLUMN account_executive_guid uuid;
    ALTER TABLE clients ADD COLUMN account_representative_guid uuid;
  SQL

  for record in Building.all
    location = record.location
    record.location_guid = location.guid
    record.save
  end

  for record in Client.all
    agency = record.agency
    record.agency_guid = agency.guid
    account_executive = record.account_executive
    record.account_executive_guid = account_executive.guid unless account_executive.blank?
    account_representative = record.account_representative
    record.account_representative_guid = account_representative.guid unless account_representative.blank?
    record.save
  end

  execute <<-SQL
    ALTER TABLE buildings DROP CONSTRAINT buildings_pkey;
    ALTER TABLE buildings DROP COLUMN id;
    ALTER TABLE buildings RENAME COLUMN guid TO id;
    ALTER TABLE buildings ADD PRIMARY KEY (id);
    ALTER TABLE buildings DROP COLUMN location_id;
    ALTER TABLE buildings RENAME COLUMN location_guid TO location_id;
    ALTER TABLE clients DROP CONSTRAINT clients_pkey;
    ALTER TABLE clients DROP COLUMN id;
    ALTER TABLE clients RENAME COLUMN guid TO id;
    ALTER TABLE clients ADD PRIMARY KEY (id);
    ALTER TABLE clients DROP COLUMN agency_id;
    ALTER TABLE clients RENAME COLUMN agency_guid TO agency_id;
    ALTER TABLE clients DROP COLUMN account_executive_id;
    ALTER TABLE clients RENAME COLUMN account_executive_guid TO account_executive_id;
    ALTER TABLE clients DROP COLUMN account_representative_id;
    ALTER TABLE clients RENAME COLUMN account_representative_guid TO account_representative_id;
  SQL
end
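As a side note, the per-record Ruby loops above get slow on large tables; the same backfill can be done set-based in PostgreSQL. A sketch for the buildings step, under the same schema (it would go in an execute <<-SQL block in the migration):
UPDATE buildings
SET location_guid = locations.guid
FROM locations
WHERE locations.id = buildings.location_id;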
I'm searching for a way to simulate "create table as select" in Firebird from an SP.
We use this statement frequently in another product, because it makes it very easy to build smaller, indexable sets, and it provides very fast results server-side.
create temp table a select * from xxx where ...
create indexes on a ...
create temp table b select * from xxx where ...
create indexes on b ...
select * from a
union
select * from b
Or to avoid three or more levels of subqueries:
select *
from a
where id in (select id
             from b
             where ... and id in (select id from c where ...))
The "create table as select" is very good cos it's provide correct field types and names so I don't need to predefine them.
I can simulate "create table as" in Firebird with Delphi as:
Make select with no rows, get the table field types, convert them to create table SQL, run it, and make "insert into temp table " + selectsql with rows (without order by).
It's ok.
But can I create same thing in a common stored procedure which gets a select sql, and creates a new temp table with the result?
So: can I get query result's field types to I can create field creator SQL from them?
I'm just asking if is there a way or not (then I MUST specify the columns).
Executing DDL inside a stored procedure is not supported by Firebird. You could do it using EXECUTE STATEMENT, but it is not recommended (see the warning at the end of the "No data returned" topic).
One way to have your "temporary sets" would be to use a (transaction-level) Global Temporary Table. Create the GTT as part of the database, with the correct datatypes but without constraints (those would probably get in the way when you fill only some columns, not all); each transaction then only sees its own version of the table and data...
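A minimal sketch of such a GTT, with made-up columns (ON COMMIT DELETE ROWS makes it transaction-level):
CREATE GLOBAL TEMPORARY TABLE gtt_a (
    id    INTEGER,
    title VARCHAR(100)
) ON COMMIT DELETE ROWS;
A stored procedure can then fill it with INSERT INTO gtt_a SELECT ... and query it like any other table; the rows disappear when the transaction ends.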
I need to CREATE a new table from a query on existing tables using an ADO query.
The DB is MS Access 2003. Is there a simple way to recreate this?
DROP TABLE IF EXISTS tmp_report;
CREATE TABLE tmp_report
SELECT Userid, Name,
DATE(CheckTime) AS date,
MIN(CheckTime) AS first_login,
MAX(checktime) AS last_login,
COUNT(CheckTime) AS No_logins,
IF(COUNT(CheckTime) = 1, 'ERROR',
TIME_TO_SEC(TIMEDIFF(MAX(CheckTime), MIN(CheckTime)))) AS total_sec
FROM
Checkinout LEFT JOIN Userinfo USING(Userid)
GROUP BY
Userid, DATE(CheckTime)
ORDER BY
Userid, DATE(CheckTime);
To CREATE a new table from a query on existing tables, you can use a SELECT INTO statement (this creates a new table) or an INSERT INTO ... SELECT statement (this inserts into an existing table).
Check this MSDN page; it has the examples you need.
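As a sketch, an Access-SQL translation of the query above might look like this (untested; IIf, DateValue and DateDiff replace the MySQL functions, Name has to appear in the GROUP BY, and the date alias is renamed because Date is reserved in Access):
SELECT c.Userid, u.Name,
       DateValue(c.CheckTime) AS CheckDate,
       MIN(c.CheckTime) AS first_login,
       MAX(c.CheckTime) AS last_login,
       COUNT(c.CheckTime) AS No_logins,
       IIf(COUNT(c.CheckTime) = 1, 'ERROR',
           DateDiff('s', MIN(c.CheckTime), MAX(c.CheckTime))) AS total_sec
INTO tmp_report
FROM Checkinout AS c LEFT JOIN Userinfo AS u ON c.Userid = u.Userid
GROUP BY c.Userid, u.Name, DateValue(c.CheckTime)
ORDER BY c.Userid, DateValue(c.CheckTime);
Access has no DROP TABLE IF EXISTS, so the DROP TABLE tmp_report would have to be run as a separate statement beforehand (ignoring the error if the table does not exist).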