How to use an inner join inside an UPDATE query in SQLite in Objective-C (iOS)?

I want to update a whole table from one query. Here is how my update process works:
Database (Database A) is stored on the iDevice.
A temporary database (Database B) is downloaded to the device and stored in a temp folder on the device. (Both DBs have the same structure.)
First I attach the temp DB to the device database. The attached DB name is SECOND1.
Then I insert the new records from the temp DB into the device DB with the following INSERT statement. It works fine.
INSERT INTO main.fList SELECT * FROM SECOND1.fList WHERE not exists (SELECT 1 FROM main.fList WHERE main.fList.GUID = SECOND1.fList.GUID)
But when I use the following statement to update, it does not work correctly. It writes the same value (from the first matching row) into every row of the device table.
UPDATE fList SET Notes = (SELECT SECOND1.fList.Notes FROM SECOND1.fList WHERE SECOND1.fList.GUID = fList.GUID) WHERE EXISTS (SELECT * FROM SECOND1.fList WHERE SECOND1.fList.GUID = fList.GUID)
I found an SQL query for a bulk update. Here is the code:
UPDATE fTempRank7
SET
fTempRank7.int_key = fRank7.int_key,
fTempRank7.int_rank6 = fRank7.int_rank6,
fTempRank7.title = fRank7.title,
fTempRank7.sequence = fRank7.sequence,
fTempRank7.lastupdated = fRank7.lastupdated
FROM
fTempRank7 INNER JOIN fRank7 ON
fTempRank7.int_key = fRank7.int_key
But this code does not work in SQLite.
Does anyone know how to do a bulk update in SQLite?

SQLite does not support joins in an UPDATE statement. (Much later, SQLite 3.33.0 added UPDATE ... FROM, but older versions do not support it.)
When you use a table name without a database name, it refers to the first matching table in the innermost query. In other words, if you use SECOND1.fList in a subquery, any other occurrence of fList refers to the table in SECOND1.
To ensure that you always refer to the correct table, use the database name in all table references.
The main database is always named main, so all table references should be either main.fList or SECOND1.fList.
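For example, the failing UPDATE from the question becomes the following once every reference is qualified (a sketch assuming the target table is main.fList):
UPDATE main.fList
SET Notes = (SELECT SECOND1.fList.Notes
             FROM SECOND1.fList
             WHERE SECOND1.fList.GUID = main.fList.GUID)
WHERE EXISTS (SELECT 1
              FROM SECOND1.fList
              WHERE SECOND1.fList.GUID = main.fList.GUID);
Now the correlated subquery compares each SECOND1 row against the row currently being updated in main, instead of comparing SECOND1 against itself.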
Anyway, if you are updating all columns, you can simplify the update by deleting the rows that would be updated, so that all new data can just be inserted:
BEGIN;
DELETE FROM fList WHERE GUID IN (SELECT GUID FROM SECOND1.fList);
INSERT INTO fList SELECT * FROM SECOND1.fList;
COMMIT;
When you have a UNIQUE constraint on the GUID column, this can be simplified into a single statement:
INSERT OR REPLACE INTO fList SELECT * FROM SECOND1.fList;
And here I don't use main. because I know what I'm doing. ☺

Related

How to insert CDC Data from a stream to another table with dynamic column names

I have a Snowflake stored procedure and I want to use "insert into" without hard coding column names.
INSERT INTO MEMBERS_TARGET (ID, NAME)
SELECT ID, NAME
FROM MEMBERS_STREAM;
This is what I have and column names are hardcoded. The query should copy data from MEMBERS_STREAM to MEMBERS_TARGET. The stream has more columns such as
METADATA$ACTION | METADATA$ISUPDATE | METADATA$ROW_ID
which I am not intending to copy.
I don't know of a way to skip the METADATA columns without hardcoding them. However, if you don't want that data, maybe the easiest thing to do is to add those columns to your target, INSERT using a SELECT *, and later in the SP set them to NULL.
Alternatively, earlier in your SP, run ALTER TABLE ... ADD COLUMN to add the columns, INSERT using SELECT *, and then afterwards run ALTER TABLE ... DROP COLUMN to remove them. That way your table structure stays the same, albeit briefly it will have some extra columns. A sketch of this follows below.
A SELECT * is not usually recommended, but it's the easiest alternative I can think of.
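Roughly, using the names from the question (the METADATA$ column types here are an assumption; check your stream's actual output):
-- Temporarily widen the target so SELECT * lines up with the stream's columns
ALTER TABLE MEMBERS_TARGET ADD COLUMN
  "METADATA$ACTION" VARCHAR,
  "METADATA$ISUPDATE" BOOLEAN,
  "METADATA$ROW_ID" VARCHAR;
-- SELECT * on a stream returns the base columns plus the METADATA$ columns,
-- so the column lists now match without hardcoding any names
INSERT INTO MEMBERS_TARGET SELECT * FROM MEMBERS_STREAM;
-- Restore the original structure
ALTER TABLE MEMBERS_TARGET DROP COLUMN
  "METADATA$ACTION", "METADATA$ISUPDATE", "METADATA$ROW_ID";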

Using FireDac to update only 1 of a duplicate row (no primary key or unique field)

I have an old application I am supporting that uses a Microsoft Access database. The original table design did not add primary keys to every table. I am working on a migration program that among other things is adding and filling in a new primary key field (GUID) when needed.
This is happening in three steps:
Add a new guid field with no constraints
Fill the field with new unique guids
Add the primary key constraints
My problem is setting the unique guids when the table has duplicate rows. Here is my code to set the guids.
Query.SQL.Add('SELECT * FROM ' + TableName);
Query.Open;
while not Query.Eof do
begin
Query.Edit;
Query.FieldByName(NewPrimaryKeyFieldName).AsGuid := TGuid.NewGuid;
Query.Post;
Query.Next;
end;
FireDac generates an update statement that contains a where clause with all the original fields/values in the row (since there is no unique field for it to use). However, because the rows are complete duplicates the statement still updates two rows.
FireDac correctly errors with this message
Update command updated [2] instead of [1] record.
I can open up the database in Access and delete the duplicate records or assign them a unique guid by editing the table. I would like my conversion tool to automatically do this.
Is there some way to work with these duplicate rows in FireDac? Either to update just one at a time, or to delete just one of them?
In my opinion there is no way to do it with just one SQL statement.
I would do this:
1. Copy the whole table without duplicates into a new temp table: SELECT DISTINCT * FROM <TABLENAME>
2. Add the keys.
3. Delete the old table's content and copy the temp table's content back.
Notes:
The DB should be unavailable to everyone else during this operation.
Make a backup first.
A rough sketch of the idea follows below.
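In Access/Jet SQL it could look roughly like this, run one statement at a time (MyTable and TempCopy are placeholder names):
SELECT DISTINCT * INTO TempCopy FROM MyTable;
DELETE FROM MyTable;
INSERT INTO MyTable SELECT * FROM TempCopy;
DROP TABLE TempCopy;
After that, the migration can continue with the existing steps: add the GUID field, fill it, and add the primary key constraint.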

Firebird: simulating create table as?

I'm searching for a way to simulate "create table as select" in Firebird from an SP.
We use this statement frequently in another product, because it makes it easy to build smaller, indexable sets that produce very fast results on the server side.
create temp table a select * from xxx where ...
create indexes on a ...
create temp table b select * from xxx where ...
create indexes on b ...
select * from a
union
select * from b
Or to avoid three or more levels of subqueries:
select *
from a where id in (select id
from b
where ... and id in (select id from c where ...))
The "create table as select" is very good cos it's provide correct field types and names so I don't need to predefine them.
I can simulate "create table as" in Firebird with Delphi as:
Make select with no rows, get the table field types, convert them to create table SQL, run it, and make "insert into temp table " + selectsql with rows (without order by).
It's ok.
But can I create same thing in a common stored procedure which gets a select sql, and creates a new temp table with the result?
So: can I get query result's field types to I can create field creator SQL from them?
I'm just asking if is there a way or not (then I MUST specify the columns).
Executing DDL inside a stored procedure is not supported by Firebird. You could do it using EXECUTE STATEMENT, but it is not recommended (see the warning at the end of the "No data returned" topic).
One way to have your "temporary sets" would be to use a (transaction-level) Global Temporary Table. Create the GTT as part of the database, with the correct datatypes but without constraints (those would probably get in the way when you fill only some columns, not all); each transaction then sees only its own version of the table and data. A sketch follows below.
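The DDL is created once, as part of the database, not inside the SP; names and columns here are placeholders for whatever your selects return:
CREATE GLOBAL TEMPORARY TABLE GTT_A (
  ID INTEGER,
  TITLE VARCHAR(100)
) ON COMMIT DELETE ROWS;
CREATE INDEX IDX_GTT_A_ID ON GTT_A (ID);
Inside the stored procedure you then fill and read it like any other table; ON COMMIT DELETE ROWS keeps the rows private to the transaction and discards them at commit:
INSERT INTO GTT_A SELECT ID, TITLE FROM XXX WHERE ...;
SELECT * FROM GTT_A;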

optimize query for last-auto-inc value

Sybase Advantage Database
I am doing a query
INSERT INTO nametable
SELECT * FROM nametable WHERE [indexkey]=32;
UPDATE nametable Set FieldName=1
WHERE [IndexKey]=(SELECT max([indexKey]) FROM nametable);
The purpose is to copy a given record into a new record, and then update the newly created record with some new values. The "indexKey" is declared as autoinc and is the primary key to the table.
I am not sure if this can be achieved in a single statement with better speed; suggestions appreciated.
It can be achieved with a single statement but it will make the code more susceptible to schema changes. Suppose that there are 2 additional columns in the table besides the FieldName and the indexKey columns. Then the following statement will achieve your objective.
INSERT INTO nametable ( FieldName, Column2, Column3 )
SELECT 1, Column2, Column3 FROM nametable WHERE [indexkey]=32
However, if the table structure changes, this statement will need to be updated accordingly.
BTW, your original implementation is not safe in multi-user scenarios. The max( [indexKey] ) in the UPDATE statement may not be the one generated by the INSERT statement. Another user could have inserted another row between the two statements. To use your original approach, you should use the LastAutoInc() scalar.
UPDATE nametable Set FieldName=1
WHERE [IndexKey] = LastAutoInc( STATEMENT )

how to update a selected record in a dataset and update another datatable in another Adoconnection?

I have 2 ADOConnections with 2 datatables in each connection (Local: Table1_Master, Table1_Detail; Network: Table1_Master, Table1_Detail). I show them in a DBGrid, and now I would like to update the local Table1_Master and Table1_Detail from the tables in the network database. How can I update the selected records?
I have tried many ways, but normally it inserts more records and doesn't update the existing record.
I use a .MDB database.
You could use an old-master -> new-master approach. Return both datasets sorted the same way, and run down each list simultaneously. If table1.key > table2.key, then you have a record in table2 that doesn't exist in table1: you can delete the record in table2 or advance that cursor. If table1.key < table2.key, then you are missing a record in table2, so insert the new record. If table1.key = table2.key, then you can perform your update logic. If table1 is at its end but table2 is not, then the rest of table2 doesn't exist in table1 (so: possible deletes). If you're at the end of table2 but not at the end of table1, then the rest of table1 are inserts. A sketch follows below.
The nice thing about this approach is that you only walk each table once, and it's all in the same loop.
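A minimal Delphi sketch of that walk (hypothetical names throughout; it assumes both datasets are open, share the same columns, are sorted ascending on the same integer key field, and that the target repositions on a posted record, as e.g. a TFDMemTable with IndexFieldNames does):
// uses Data.DB (TDataSet)
procedure MergeDatasets(Source, Target: TDataSet; const KeyField: string);

  procedure CopyFields;
  var
    i: Integer;
  begin
    for i := 0 to Source.FieldCount - 1 do
      Target.FieldByName(Source.Fields[i].FieldName).Value :=
        Source.Fields[i].Value;
  end;

begin
  Source.First;
  Target.First;
  while not (Source.Eof and Target.Eof) do
  begin
    if Target.Eof or ((not Source.Eof) and
       (Source.FieldByName(KeyField).AsInteger <
        Target.FieldByName(KeyField).AsInteger)) then
    begin
      // Key exists only in Source: insert the record into Target.
      Target.Append;
      CopyFields;
      Target.Post;
      Target.Next; // step past the record we just posted
      Source.Next;
    end
    else if Source.Eof or
      (Target.FieldByName(KeyField).AsInteger <
       Source.FieldByName(KeyField).AsInteger) then
    begin
      // Key exists only in Target: delete it (or call Next to keep it).
      Target.Delete; // Delete positions on the following record by itself
    end
    else
    begin
      // Keys match: perform the update logic.
      Target.Edit;
      CopyFields;
      Target.Post;
      Source.Next;
      Target.Next;
    end;
  end;
end;
Note that some dataset types (e.g. plain ADO result sets) append new records at the end rather than in key order; there it is safer to collect the inserts during the walk and apply them afterwards.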
