Can I add extra parameters to an IBDataset - Delphi

Is it possible to add extra params to an IBDataset?
I would like to add a couple of extra values to the insert SQL for fields that don't exist in the select SQL.
I don't think Delphi supports this, but I wondered if anyone had any unique workarounds.

You can apply the insert manually. For example, if you have this SelectSQL:
select somefield
from sometable
You can write an OnUpdateRecord event handler and perform the insert manually for the ukInsert kind of update, using an insert query like:
insert into sometable (somefield, someotherfield)
values (:somefield, :someotherfield)
After you have applied the insert, set the UpdateAction var parameter to uaApplied.
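A minimal sketch of such a handler, assuming an IBX TIBDataSet and a separate prepared query component (the names IBDataSet1 and IBQueryInsert, and the string field types, are assumptions; adapt to your components):

```pascal
// Sketch of an OnUpdateRecord handler. IBQueryInsert is assumed to be
// prepared with the insert statement shown above.
procedure TForm1.IBDataSet1UpdateRecord(DataSet: TDataSet;
  UpdateKind: TUpdateKind; var UpdateAction: TIBUpdateAction);
begin
  if UpdateKind = ukInsert then
  begin
    // Copy the field that exists in the SelectSQL from the dataset
    IBQueryInsert.ParamByName('somefield').AsString :=
      DataSet.FieldByName('somefield').AsString;
    // someotherfield is not part of the SelectSQL, so supply it here
    IBQueryInsert.ParamByName('someotherfield').AsString := 'extra value';
    IBQueryInsert.ExecSQL;
    UpdateAction := uaApplied; // tell the dataset the insert was handled
  end;
end;
```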

Related

How to insert CDC Data from a stream to another table with dynamic column names

I have a Snowflake stored procedure and I want to use "insert into" without hard coding column names.
INSERT INTO MEMBERS_TARGET (ID, NAME)
SELECT ID, NAME
FROM MEMBERS_STREAM;
This is what I have and column names are hardcoded. The query should copy data from MEMBERS_STREAM to MEMBERS_TARGET. The stream has more columns such as
METADATA$ACTION | METADATA$ISUPDATE | METADATA$ROW_ID
which I am not intending to copy.
I don't know of a way to avoid copying the METADATA columns without hardcoding column names. However, if you don't want the data, maybe the easiest thing to do is to add those columns to your target, INSERT using a SELECT *, and later in the sp set them to NULL.
Alternatively, earlier in your sp, run an ALTER TABLE ADD COLUMN to add the columns, INSERT using SELECT *, and then run an ALTER TABLE DROP COLUMN to remove them. That way your table structure stays the same, although briefly it will have some extra columns.
A SELECT * is not usually recommended, but it's the easiest alternative I can think of.
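The add-then-drop approach could look like this (a sketch: the column types are guesses, and a SELECT * insert assumes the stream's column order matches the target's after the ALTER):

```sql
-- Temporarily widen the target so SELECT * lines up with the stream.
-- METADATA$... names contain '$', so they must be double-quoted.
ALTER TABLE MEMBERS_TARGET ADD COLUMN
    "METADATA$ACTION" varchar,
    "METADATA$ISUPDATE" boolean,
    "METADATA$ROW_ID" varchar;

INSERT INTO MEMBERS_TARGET SELECT * FROM MEMBERS_STREAM;

-- Restore the original table structure.
ALTER TABLE MEMBERS_TARGET DROP COLUMN
    "METADATA$ACTION", "METADATA$ISUPDATE", "METADATA$ROW_ID";
```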

Insert into Identical tables without using dynamic sql

I am using SQL Server 2012 Enterprise and I have a stored procedure that accepts two parameters:
@pkgId varchar(16), @siteId varchar(2)
The stored procedure will then do an INSERT like this:
IF @siteId = '01'
BEGIN
INSERT INTO dbo.table01 (pkgId)
VALUES (@pkgId)
END
IF @siteId = '02'
BEGIN
INSERT INTO dbo.table02 (pkgId)
VALUES (@pkgId)
END
IF @siteId = '03'
BEGIN
INSERT INTO dbo.table03 (pkgId)
VALUES (@pkgId)
END
Now we are looking to add 10 more sites, so I would have to add 10 more IF statements. But I DO NOT want to use dynamic SQL, as I need the query plans to be cached, because speed is a must. Also, I have many more tables that already end in '01', '02' and '03', so there is a lot more code for me to update.
Also, it is a business requirement that these tables be separate. Meaning, I cannot just have one table with siteId as a column.
So the question is: is there some other way I can perform this INSERT by using some other alternative and keep my coding at a minimum? Meaning, I would like to call the INSERT only once, if possible, without the use of dynamic SQL.
FYI - I have seen some other alternatives, like setting a synonym at runtime, but this would cause concurrency issues.
Query plans for dynamic SQL used in the manner you need are cached; see the links here and here.
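Concretely, parameterized dynamic SQL via sp_executesql gets a cached, reusable plan per distinct statement text. A sketch (the table name still has to be spliced in, so @siteId should be validated against a whitelist first):

```sql
-- One cached plan per distinct @sql text, i.e. one per site table.
-- @pkgId is passed as a real parameter, so its value doesn't fragment
-- the plan cache.
DECLARE @sql nvarchar(200) =
    N'INSERT INTO dbo.table' + @siteId + N' (pkgId) VALUES (@pkgId)';
EXEC sp_executesql @sql, N'@pkgId varchar(16)', @pkgId = @pkgId;
```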

Can I use a parameter for the IN clause in a MySQL query/stored proc

I want to retrieve data in Delphi using a stored procedure. I used the below SQL statement and Initial as a parameter:
SELECT * FROM "PackUser" where Initials in (:Initial)
It didn't select any records when the user types A,B in my Edit box, because it sends a single string 'A,B' to the stored procedure. I want to add extra quotes in the middle: 'A','B'.
How can I do this?
This can be done like this (note the leading comma on the input string, and the comparison against LOCATE's result):
input_string = ',A,B,C,D'
SELECT * FROM "PackUser" WHERE LOCATE(CONCAT(',', Initials), input_string) > 0;
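Alternatively, MySQL has FIND_IN_SET, which is built for exactly this comma-separated-list test and needs no leading-comma trick (a sketch, assuming Initials holds a single value per row):

```sql
-- FIND_IN_SET returns the 1-based position of Initials within the
-- comma-separated list, or 0 if it is not present.
SELECT * FROM "PackUser" WHERE FIND_IN_SET(Initials, 'A,B') > 0;
```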

Making record fields case sensitive in PostgreSQL stored procedure

I am having problems defining the record's fields as case sensitive in my PostgreSQL stored procedure. I define my record as:
CREATE TYPE json_record AS (
"objectType" text,
"objectSubtype" text
);
The reason why I need the fields to be case sensitive is that the record is populated from JSON in the stored procedure, and I have no control over the JSON content.
My stored procedure is:
CREATE OR REPLACE FUNCTION record(id uuid, json_in json) RETURNS void AS $$
DECLARE
raw_record json_record;
BEGIN
SELECT json_populate_record( NULL::json_record, json_in) INTO raw_record;
INSERT INTO my_resource (uuid, type, subtype)
SELECT (id, raw_record.objectType, raw_record.objectSubtype);
END;
$$ LANGUAGE plpgsql VOLATILE;
When I execute the procedure I am getting the error:
ERROR: record "raw_record" has no field "objecttype"
I understand the cause of the error: json_record is defined with case-sensitive fields, but PostgreSQL folds the unquoted identifiers raw_record.objectType and raw_record.objectSubtype to raw_record.objecttype and raw_record.objectsubtype. The question is how to avoid this and force the record's fields to be treated as case sensitive. The only solution I can think of is dynamic SQL: build the query piece by piece and wrap the field names in quote_ident(). Is there a simpler way to enforce case sensitivity for a record?
force record's fields to be treated as case sensitive
The same way you always use case-sensitive identifiers: you need to enclose them in double quotes. You also don't really need a SELECT clause for the insert:
INSERT INTO my_resource (uuid, type, subtype)
VALUES (id, raw_record."objectType", raw_record."objectSubtype");
As a side note: select (a,b,c) does not do what you think it does. It selects a single column that is an anonymous record (with three fields), not three columns. So if you do want to stick with the SELECT version instead of VALUES, you have to remove the parentheses:
INSERT INTO my_resource (uuid, type, subtype)
SELECT id, raw_record."objectType", raw_record."objectSubtype";
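Putting the fixes together, the function from the question would look like this (a sketch; assumes the json_record type and my_resource table defined above):

```sql
CREATE OR REPLACE FUNCTION record(id uuid, json_in json) RETURNS void AS $$
DECLARE
    raw_record json_record;
BEGIN
    SELECT json_populate_record(NULL::json_record, json_in) INTO raw_record;
    -- Double-quoting preserves the mixed-case field names of json_record
    INSERT INTO my_resource (uuid, type, subtype)
    VALUES (id, raw_record."objectType", raw_record."objectSubtype");
END;
$$ LANGUAGE plpgsql VOLATILE;
```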

optimize query for last-auto-inc value

Sybase Advantage Database
I am doing a query
INSERT INTO nametable
SELECT * FROM nametable WHERE [indexkey]=32;
UPDATE nametable Set FieldName=1
WHERE [IndexKey]=(SELECT max([indexKey]) FROM nametable);
The purpose is to copy a given record into a new record, and then update the newly created record with some new values. The "indexKey" is declared as autoinc and is the primary key to the table.
I am not sure if this can be achieved in a single statement, or with better speed; suggestions appreciated.
It can be achieved with a single statement but it will make the code more susceptible to schema changes. Suppose that there are 2 additional columns in the table besides the FieldName and the indexKey columns. Then the following statement will achieve your objective.
INSERT INTO nametable ( FieldName, Column2, Column3 )
SELECT 1, Column2, Column3 FROM nametable WHERE [indexkey]=32
However, if the table structure changes, this statement will need to be updated accordingly.
BTW, your original implementation is not safe in multi-user scenarios. The max( [indexKey] ) in the UPDATE statement may not be the one generated by the INSERT statement; another user could have inserted another row between the two statements. To use your original approach, you should use the LastAutoInc() scalar function:
UPDATE nametable Set FieldName=1
WHERE [IndexKey] = LastAutoInc( STATEMENT )
