Inserting a record into a table results in an error message - delphi-2010

When I try to insert records into my table in the database, I get this error message: "There are fewer columns in the insert statement than specified in the values clause." This is my SQL statement:
dm.qr_emp.SQL.Add('insert into emp_per(nom, prenom, date_naiss1, cle_ccp,nomf,prenomf,' +
  'lieu_naiss,sex,adr,id_corps,id_fonction,id_categ,montant_resp,' +
  'compte_ccp,etat,sal_base,id_grade,id_banque,id_degree,sal_unique,' +
  'pfc,iep,num_social,sal_principale,ind,salaire,montant_irg,' +
  'ss,nbr_enfant,nbr_enfant_sup,bource_f,irg,retenue) ');
dm.qr_emp.SQL.Add('values(' + quotedstr(edit1.Text) + ', ' + quotedstr(edit2.Text) + ',' +
  quotedstr(formatdatetime('yyyy-dd-mm', datetimepicker1.DateTime)) + ', ' +
  edit4.Text + ',' + quotedstr(edit5.Text) + ',' + quotedstr(edit3.Text) + ',' +
  quotedstr(edit6.Text) + ',' + quotedstr(ComboBox3.Text) + ',' + quotedstr(edit8.Text) + ',' +
  inttostr(id_corps) + ',' + inttostr(id_fonction) + ',' +
  quotedstr(edit10.Text) + ',' + floattostr(salair_responsabilite) + ',' + quotedstr(edit7.Text) + ',' +
  quotedstr(ComboBox5.Text) + ',' + floattostr(salaire_base) + ',' + inttostr(id_grade) + ',' +
  inttostr(id_banque) + ',' + inttostr(id_degree) + ',' + quotedstr(ComboBox8.Text) + ',' +
  floattostr(pfc) + ',' + quotedstr(edit15.Text) + ',' + quotedstr(edit13.Text) + ',' +
  floattostr(sal_principale) + ',' + floattostr(ind) + ',' + quotedstr(edit20.text) + ',+' +
  floattostr(montant_irg) + ',' + floattostr(ss) + ', ' + (edit23.Text) + ',' +
  quotedstr(edit26.Text) + ',' + floattostr(boursef) + ',' + quotedstr(edit21.Text) + ',' +
  floattostr(retenu) + ')');
dm.qr_emp.ExecSQL;

When you want to insert a record, the number of columns has to be the same as the number of inserted values. For instance:
INSERT INTO Employees(Name, City, AccountNumber)
VALUES ('John', 'London', '0098737602230')
The following statements, for example, would fail because the number of column names does not match the number of inserted values:
INSERT INTO Employees(Name, City)
VALUES ('John', 'London', '0098737602230')
INSERT INTO Employees(Name, City, AccountNumber)
VALUES ('John', 'London')
I would advise structuring your code a bit. The current statement is very hard for a human to read, and that will lead to errors.
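One way to make that structure visible is to use named parameters instead of concatenated quotedstr calls. This is only a sketch of the SQL text, not a drop-in fix; on the Delphi side each value would then be assigned through the query's parameters (e.g. ParamByName) instead of being built into the string:
-- sketch only: the same columns as above, with one named parameter per column
insert into emp_per
  (nom, prenom, date_naiss1, cle_ccp, nomf, prenomf, lieu_naiss, sex, adr,
   id_corps, id_fonction, id_categ, montant_resp, compte_ccp, etat, sal_base,
   id_grade, id_banque, id_degree, sal_unique, pfc, iep, num_social,
   sal_principale, ind, salaire, montant_irg, ss, nbr_enfant, nbr_enfant_sup,
   bource_f, irg, retenue)
values
  (:nom, :prenom, :date_naiss1, :cle_ccp, :nomf, :prenomf, :lieu_naiss, :sex, :adr,
   :id_corps, :id_fonction, :id_categ, :montant_resp, :compte_ccp, :etat, :sal_base,
   :id_grade, :id_banque, :id_degree, :sal_unique, :pfc, :iep, :num_social,
   :sal_principale, :ind, :salaire, :montant_irg, :ss, :nbr_enfant, :nbr_enfant_sup,
   :bource_f, :irg, :retenue)
With both lists laid out side by side like this, a count mismatch is immediately visible, and quoting and formatting of dates and numbers is handled by the parameters rather than by string concatenation.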

Related

Is there any way to do error handling in snowflake?

I am currently loading data from one Snowflake table to another table in Snowflake, and also doing some datatype conversions during the load.
But when there is any error, my load fails. I need to capture the error rows in a table and continue the load even if errors occur.
I have tried this using a stored procedure, as below, but I am only able to capture the error information:
Please let me know if there is any way to achieve this in Snowflake.
CREATE OR REPLACE PROCEDURE LOAD_TABLE_A()
RETURNS varchar
NOT NULL
LANGUAGE javascript
AS
$$
var result;
var sql_command = "insert into TABLE A";
sql_command += " select";
sql_command += " migration_status,to_date(status_date,'ddmmyyyy') as status_date,";
sql_command += " to_time(status_time,'HH24MISS') as status_time,unique_unit_of_migration_number,reason,";
sql_command += " to_timestamp_ntz(current_timestamp) as insert_date_time";
sql_command += " from TABLE B";
sql_command += " where insert_date_time>(select max(insert_date_time) from TABLE A);";
try {
    snowflake.execute({ sqlText: sql_command });
    result = "Succeeded";
}
catch (err) {
    result = "Failed";
    snowflake.execute({
        sqlText: `insert into mcs_error_log VALUES (?,?,?,?)`,
        binds: [err.code, err.state, err.message, err.stackTraceTxt]
    });
}
return result;
$$;
I worked through an example of how to send good rows from one table to another while sending bad ones to a separate table. It should be on the Snowflake blog shortly. The key is using multi-table inserts, like so:
-- Create a staging table with all columns defined as strings.
-- This will hold all raw values from the load files.
create or replace table SALES_RAW
( -- Actual Data Type
SALE_TIMESTAMP string, -- timestamp
ITEM_SKU string, -- int
PRICE string, -- number(10,2)
IS_TAXABLE string, -- boolean
COMMENTS string -- string
);
-- Create the production table with actual data types.
create or replace table SALES_STAGE
(
SALE_TIMESTAMP timestamp,
ITEM_SKU int,
PRICE number(10,2),
IS_TAXABLE boolean,
COMMENTS string
);
-- Simulate adding some rows from a load file. Two rows are good.
-- Four rows generate errors when converting to the data types.
insert into SALES_RAW
(SALE_TIMESTAMP, ITEM_SKU, PRICE, IS_TAXABLE, COMMENTS)
values
('2020-03-17 18:21:34', '23289', '3.42', 'TRUE', 'Good row.'),
('2020-17-03 18:21:56', '91832', '1.41', 'FALSE', 'Bad row: SALE_TIMESTAMP has the month and day transposed.'),
('2020-03-17 18:22:03', '7O242', '2.99', 'T', 'Bad row: ITEM_SKU has a capital "O" instead of a zero.'),
('2020-03-17 18:22:10', '53921', '$6.25', 'F', 'Bad row: PRICE should not have a dollar sign.'),
('2020-03-17 18:22:17', '90210', '2.49', 'Foo', 'Bad row: IS_TAXABLE cannot be converted to true or false'),
('2020-03-17 18:22:24', '80386', '1.89', '1', 'Good row.');
-- Make sure the rows inserted okay.
select * from SALES_RAW;
-- Create a table to hold the bad rows.
create or replace table SALES_BAD_ROWS like SALES_RAW;
-- Insert good rows into SALES_STAGE and
-- bad rows into SALES_BAD_ROWS
insert first
when SALE_TIMESTAMP_X is null and SALE_TIMESTAMP is not null or
ITEM_SKU_X is null and ITEM_SKU is not null or
PRICE_X is null and PRICE is not null or
IS_TAXABLE_X is null and IS_TAXABLE is not null
then
into SALES_BAD_ROWS
(SALE_TIMESTAMP, ITEM_SKU, PRICE, IS_TAXABLE, COMMENTS)
values
(SALE_TIMESTAMP, ITEM_SKU, PRICE, IS_TAXABLE, COMMENTS)
else
into SALES_STAGE
(SALE_TIMESTAMP, ITEM_SKU, PRICE, IS_TAXABLE, COMMENTS)
values
(SALE_TIMESTAMP_X, ITEM_SKU_X, PRICE_X, IS_TAXABLE_X, COMMENTS)
select try_to_timestamp (SALE_TIMESTAMP) as SALE_TIMESTAMP_X,
try_to_number (ITEM_SKU, 10, 0) as ITEM_SKU_X,
try_to_number (PRICE, 10, 2) as PRICE_X,
try_to_boolean (IS_TAXABLE) as IS_TAXABLE_X,
COMMENTS,
SALE_TIMESTAMP,
ITEM_SKU,
PRICE,
IS_TAXABLE
from SALES_RAW;
-- Examine the two good rows
select * from SALES_STAGE;
-- Examine the four bad rows
select * from SALES_BAD_ROWS;
Load error information is captured by Snowflake and can be accessed by querying the COPY_HISTORY table function.
https://docs.snowflake.net/manuals/sql-reference/functions/copy_history.html
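For example (the table name and time window here are illustrative), something along these lines returns the per-file load errors from the last 24 hours:
-- recent load errors for one table (names illustrative)
select file_name, status, error_count, first_error_message, first_error_line_number
from table(information_schema.copy_history(
       table_name => 'TABLE_A',
       start_time => dateadd(hours, -24, current_timestamp())));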
Within the COPY INTO command you can decide how to proceed with a file if one or more rows fail the load process by using the ON_ERROR parameter.
https://docs.snowflake.net/manuals/sql-reference/sql/copy-into-table.html#copy-options-copyoptions
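For instance (the stage and file format below are placeholders), ON_ERROR = 'CONTINUE' loads the good rows and skips the ones that fail:
copy into TABLE_A
  from @my_stage/migration/
  file_format = (type = 'CSV')
  on_error = 'CONTINUE';  -- alternatives include 'SKIP_FILE' and 'ABORT_STATEMENT' (the default)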
I recommend you check out try_cast.
https://docs.snowflake.net/manuals/sql-reference/functions/try_cast.html
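A quick illustration: unlike a plain CAST, TRY_CAST returns NULL instead of raising an error when a value cannot be converted.
select try_cast('3.14'  as number(10,2)) as good_value,  -- 3.14
       try_cast('$3.14' as number(10,2)) as bad_value;   -- NULL rather than a conversion error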
Also for your query, I would just use a view and if performance is an issue a materialized view.
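As a rough sketch of that suggestion (the view name is made up; the expressions mirror the conversions in the procedure above, switched to the try_ variants so bad values surface as NULLs rather than failures):
create or replace view TABLE_B_CONVERTED as
select migration_status,
       try_to_date(status_date, 'ddmmyyyy') as status_date,
       try_to_time(status_time, 'HH24MISS') as status_time,
       unique_unit_of_migration_number,
       reason
from TABLE_B;  -- "TABLE B" in the question; written with an underscore here so the identifier is valid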
I think a nice solution is to wrap your SQL calls with a helper method.
For example, let's say that instead of calling snowflake.execute({}) directly,
you use something like:
EXEC("select * from table1 where x > ?", [param1]);
Inside the EXEC method you can have a try/catch, and you can easily add things like a continue handler or an exit handler where you can put logic to log your errors to a table.
I have assembled a repo with tools and some snippets. Maybe take a look at: https://github.com/orellabac/SnowJS-Helpers

Use computed columns in related table in power query on SQLITE via odbc

In Power Query (the version included with Excel 2016, PC), is it possible to refer to a computed column of a related table?
Say I have an sqlite database as follow
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE products (
iddb INTEGER NOT NULL,
px FLOAT,
PRIMARY KEY (iddb)
);
INSERT INTO "products" VALUES(0,0.0);
INSERT INTO "products" VALUES(1,1.1);
INSERT INTO "products" VALUES(2,2.2);
INSERT INTO "products" VALUES(3,3.3);
INSERT INTO "products" VALUES(4,4.4);
CREATE TABLE sales (
iddb INTEGER NOT NULL,
quantity INTEGER,
product_iddb INTEGER,
PRIMARY KEY (iddb),
FOREIGN KEY(product_iddb) REFERENCES products (iddb)
);
INSERT INTO "sales" VALUES(0,0,0);
INSERT INTO "sales" VALUES(1,1,1);
INSERT INTO "sales" VALUES(2,2,2);
INSERT INTO "sales" VALUES(3,3,3);
INSERT INTO "sales" VALUES(4,4,4);
INSERT INTO "sales" VALUES(5,5,0);
INSERT INTO "sales" VALUES(6,6,1);
INSERT INTO "sales" VALUES(7,7,2);
INSERT INTO "sales" VALUES(8,8,3);
INSERT INTO "sales" VALUES(9,9,4);
COMMIT;
Basically we have products (iddb, px) and sales of those products (iddb, quantity, product_iddb).
I load this data into Power Query by:
A. creating an ODBC data source using the SQLite3 driver: testDSN
B. in Excel: Data / New Query, feeding it this connection string: Provider=MSDASQL.1;Persist Security Info=False;DSN=testDSN;
Now in Power Query I add a computed column, say px10 = px * 10, to the products table.
In the sales table, I can expand the product table into product.px, but not product.px10. Shouldn't that be doable? (In this simplified example I could first expand product.px and then create the px10 column in the sales table, but then any new table needing px10 from products would require me to repeat the work...)
Any inputs appreciated.
I would add a Merge step from the sales query to connect it to the product query (which will include your calculated column). Then I would expand the Table returned to get your px10 column.
This is instead of expanding the Value column representing the product SQL table, which gets generated using the SQL foreign key.
You will have to come back and add any further columns added to the product query to the expansion list, but at least the column definition is only in one place.
In functional programming you don't modify existing values, only create new values. When you add the new column to product it creates a new table, and doesn't modify the product table that shows up in related tables. Adding a new column over product can't show up in Odbc's tables unless you apply that transformation to all related tables.
What you could do is generalize the "add a computed column" into a function that takes a table or record and adds the extra field. Then just apply that over each table in your database.
Here's an example against Northwind in SQL Server
let
    Source = Sql.Database(".", "Northwind_Copy"),
    AddAColumn = (data) =>
        if data is table then Table.AddColumn(data, "UnitPrice10x", each [UnitPrice] * 10)
        else if data is record then Record.AddField(data, "UnitPrice10x", data[UnitPrice] * 10)
        else data,
    TransformedSource = Table.TransformColumns(Source,
        {"Data", (data) => if data is table
            then Table.TransformColumns(data, {"Products", AddAColumn}, null, MissingField.Ignore)
            else data}),
    OrderDetails = TransformedSource{[Schema="dbo",Item="Order Details"]}[Data],
    #"Expanded Products" = Table.ExpandRecordColumn(OrderDetails, "Products",
        {"UnitPrice", "UnitPrice10x"}, {"Products.UnitPrice", "Products.UnitPrice10x"})
in
    #"Expanded Products"

Create stored procedure to load text file into SQL database with predetermined offset values and create insert

I have the following 3 records stored in a text file...
But the SQL script will need to process 4.1 million records in good time...
//Instruction:
Copy this to a text file named: Deceased.txt
000101001118 IDENTITY NUMBER NOT NUMERIC
0001010061181PERSON DECEASED 19990101OBSTRUCTIVE AIRWAYS SYNDROME BABA NOWEZILE
0001010077097 COERTZEN AZIL CUBITT JONO
I need to write a query (it could be a function or a stored procedure) that does the following:
Takes each record and copies it to a SQL table.
The table will have the following columns:
National_id Errmsg DeceasedDTE DeceasedReason Surname FirstNames
First_Initial Second_Initial Third_Initial FName1 FName2 FName3
Herewith the offset values for only the first 6 columns...
,LTRIM(SUBSTRING([TABLE],1,13)) --National_id
,LTRIM(SUBSTRING([TABLE],14,43)) --Errmsg
,LTRIM(SUBSTRING([TABLE],57,8)) --DeceasedDTE
,LTRIM(SUBSTRING([TABLE],65,50)) --DeceasedReason
,LTRIM(SUBSTRING([TABLE],115,45)) --Surname
,LTRIM(SUBSTRING([TABLE],158,50)) --FirstNames
Note: I need to use the FirstNames column to help populate the remaining columns.
Also, FirstNames is separated by spaces, and I need it split up into FName1, FName2 and FName3, each with its corresponding first letter making up the initial.
I then need to create a script, sent to a .txt file, that creates an insert statement for each record with the following columns: national_id, surname, First_Initial, Second_Initial, Third_Initial, First_Name, Second_Name, Third_Name
E.G.
Set #Insert = "insert into prodmgr.t_unverified (national_id, surname, First_Initial, Second_Initial,Third_Initial, First_Name, Second_Name, Third_Name) values ('"
I used BULK INSERT to make this process more efficient... but without success as yet. The script will need to process millions of records.
Please help :)
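For what it's worth, here is a minimal sketch of the bulk-insert route described above (the table names are illustrative, and only the first six columns from the offsets given earlier are populated): load each raw line into a one-column staging table, then split it with SUBSTRING.
-- Staging table: one wide column holding each raw line (names are illustrative)
CREATE TABLE dbo.Deceased_Raw ([TABLE] VARCHAR(300));

-- Load the whole file; each line becomes one row (the data contains no tabs,
-- so the default field terminator leaves the line intact)
BULK INSERT dbo.Deceased_Raw
FROM 'C:\Data\Deceased.txt'
WITH (ROWTERMINATOR = '\n', TABLOCK);

-- Split each line into the first six columns using the offsets above
INSERT INTO dbo.Deceased (National_id, Errmsg, DeceasedDTE, DeceasedReason, Surname, FirstNames)
SELECT LTRIM(SUBSTRING([TABLE], 1, 13))    -- National_id
      ,LTRIM(SUBSTRING([TABLE], 14, 43))   -- Errmsg
      ,LTRIM(SUBSTRING([TABLE], 57, 8))    -- DeceasedDTE
      ,LTRIM(SUBSTRING([TABLE], 65, 50))   -- DeceasedReason
      ,LTRIM(SUBSTRING([TABLE], 115, 45))  -- Surname
      ,LTRIM(SUBSTRING([TABLE], 158, 50))  -- FirstNames
FROM dbo.Deceased_Raw;
Splitting FirstNames into FName1/FName2/FName3 and the initials would then be a second pass over the populated table.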

SQLite multiple INSERT in iOS

I want to insert thousands of records in a table. Right now I am using
INSERT INTO myTable ("id", "values") VALUES ("1", "test")
INSERT INTO myTable ("id", "values") VALUES ("2", "test")
INSERT INTO myTable ("id", "values") VALUES ("3", "test")
queries to insert records one by one, but it takes a long time to execute.
Now I want to insert all the records with one query...
INSERT INTO myTable ("id", "values") VALUES
("1", "test"),
("2", "test"),
("3", "test"),
.....
.....
("n", "test")
But this query is not working with SQLite. Can you please give me some guidance to solve this problem?
Thanks,
Please refer to my answer here.
There is no query in SQLite that can support your structure; this is what I use to insert thousands of records in the DB. Performance is good. You can give it a try. :)
Insert into table_name (col1,col2)
SELECT 'R1.value1', 'R1.value2'
UNION SELECT 'R2.value1', 'R2.value2'
UNION SELECT 'R3.value1', 'R3.value2'
UNION SELECT 'R4.value1', 'R4.value2'
You can follow this link on SO for more information. But be careful about the number of insertions you want to make, because there is a limitation on this kind of usage (see here).
INSERT INTO 'tablename'
SELECT 'data1' AS 'column1', 'data2' AS 'column2'
UNION SELECT 'data3', 'data4'
UNION SELECT 'data5', 'data6'
UNION SELECT 'data7', 'data8'
As a further note, SQLite only seems to support up to 500 such UNION SELECTs per query, so if you are trying to throw in more data than that you will need to break it up into 500-element blocks, as sketched below.
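A minimal sketch of that batching, using the table and values from the question (the batch boundary is shown after just a few rows for brevity; in practice each statement would carry up to 500 SELECTs). Wrapping the whole load in one transaction also speeds SQLite up considerably, since it avoids a commit per statement:
BEGIN TRANSACTION;

-- batch 1: up to 500 rows per INSERT ... SELECT ... UNION statement
INSERT INTO myTable ("id", "values")
SELECT '1', 'test'
UNION SELECT '2', 'test'
UNION SELECT '3', 'test';

-- batch 2: the next block of up to 500 rows
INSERT INTO myTable ("id", "values")
SELECT '4', 'test'
UNION SELECT '5', 'test';

COMMIT;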

Grails GORM to return random rows from table?

In my grails application I have:
keywords = Keyword
.findAll("from Keyword where locale = '$locale' order by rand() ", [max:20])
Assume there are thousands of rows in the table that match the above criteria. But it seems the rows returned are not random: they come back in the order the rows are stored in the DB, even though the 20 returned rows are shuffled among themselves. For my application to work, I want this query to return completely random rows from the whole table, e.g. row id 203, row id 3789, row id 9087, row id 789, and so on. How is that possible?
I use the following style:
Keyword.executeQuery('from Keyword order by rand()', [max: 9])
and it returns random rows from the entire table (we're using MySQL).
I'm not sure why executeQuery would behave differently from findAll, though.
If you want to use .withCriteria you can use this workaround:
User.withCriteria {
eq 'name', 'joseph'
sqlRestriction " order by rand()"
}
It's important to note that sometimes (depending on the Criteria query generated) it's necessary to add a 1=1 to the sqlRestriction, because it is appended with an "and" condition in the generated query.
So if you get a SQL exception, use:
sqlRestriction " 1=1 order by rand()"
