Informix 12.10
tblItems
(
Type SMALLINT, {Precious Metal = 1, Other = 2}
Description VARCHAR,
Quantity SMALLINT,
Name VARCHAR,
Weight DECIMAL(5,1),
Purity SMALLINT,
Brand VARCHAR,
Model VARCHAR,
SerialNum VARCHAR
);
EDIT/UPDATE: The sample data below is stored in tblItems.Type and tblItems.Description. Please note that the contents of the Description column are all uppercase and may also include punctuation characters.
2|1LAPTOP APPLE 15.5" MODEL MACKBOOK PRO,S/N W80461WCAGX, WITH CHARGER||||||||
1|1RING 2.3PW 14K||||||||
2|DRILL RIOBY, MODEL D5521 S/N77720||||||||
2|TRIMMER TORO, MODEL 0242 S/N 66759||||||||
2|CELL SAMSUNG NOTE3, MODEL SM-N900T S/N RV8F90YLZ9W||||||||
I need to parse the sample item descriptions into the columns below, using the rules mentioned in the comments:
Quantity, {if description string does not start with a number, then Quantity = 1}
Name, {Always the first element if description has no quantity, second element if quantity present}
Weight, {Always before "PW" if Type = 1, Default to zero if Type = 2}
Purity, {Always before "K" if Type = 1, Default to NULL if Type = 2}
Brand, {Always the second element in description, if present}
Model, {Always after "MODEL", with or without a space}
Serial Number {Always after "S/N", with or without a space}
I would like to do this with an UPDATE statement, but if Informix has an import utility tool like SQL-Server's SSIS, then that could be a better option.
UPDATE: Expected results, one row per source item:
Quantity  Name     Weight  Purity  Brand    Model         SerialNum
1         LAPTOP   0.0             APPLE    MACKBOOK PRO  W80461WCAGX
1         RING     2.3     14
1         DRILL    0.0             RIOBY    D5521         77720
1         TRIMMER  0.0             TORO     0242          66759
1         CELL     0.0             SAMSUNG  SM-N900T      RV8F90YLZ9W
Assuming you are using Informix 12.10.xC8 or above, you can try using regular expressions to parse the description string (see the Informix regex functions in the online documentation).
For the serial number, for example, you can do:
UPDATE tblitems
SET serialnum =
    DECODE
    (
        regex_match(description, '(.*)(S\/N)(.*)', 3)
        , 't'::BOOLEAN, regex_replace(description, '(.*)(S\/N)([[:blank:]]?)([[:alnum:]]*)(.*)', '\4', 0, 3)
        , 'f'::BOOLEAN, ''
    );
So in the previous example I am testing whether the description contains the S/N string; if it does, I use regex_replace to return the value after it, in this case the 4th matching group in the regular expression. (I am not using regex_extract to get the value because it seems to return multiple values and I get error -686.)
You can extend this approach to the rest of the columns and see if regular expressions are enough to parse the description column.
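For example, here is a sketch of the same DECODE/regex pattern applied to the Model column (the pattern and the [[:alnum:]-] character class are my assumptions, and a multi-word model such as "MACKBOOK PRO" would need a wider capture group):
-- sketch only: extract the token after "MODEL" (with or without a space), empty string when absent
UPDATE tblitems
SET model =
    DECODE
    (
        regex_match(description, '(.*)(MODEL)(.*)', 3)
        , 't'::BOOLEAN, regex_replace(description, '(.*)(MODEL)([[:blank:]]?)([[:alnum:]-]*)(.*)', '\4', 0, 3)
        , 'f'::BOOLEAN, ''
    );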
If you're looking for a SQL Server option and are open to a split/parse function which maintains the sequence, here is one approach.
Example
Select A.Type
,A.Description
,C.*
From YourTable A
Cross Apply (values ( replace(
replace(
replace(
replace(A.Description,',',' ')
,'  ',' ')
,'Model ','Model')
,'S/N ','S/N')
)
)B(CleanString)
Cross Apply (
Select Quantity = IsNull(left(max(case when RetSeq=1 then RetVal end),NullIf(patindex('%[^0-9]%',max(case when RetSeq=1 then RetVal end)) -1,0)),1)
,Name = substring(max(case when RetSeq=1 then RetVal end),patindex('%[^0-9]%',max(case when RetSeq=1 then RetVal end)),charindex(' ',max(case when RetSeq=1 then RetVal end)+' ')-1)
,Weight = IIF(A.Type=2,null,try_convert(decimal(5,1),replace(max(case when RetVal like '%PW' then RetVal end),'PW','')))
,Purity = try_convert(smallint ,replace(max(case when RetVal like '%K' then RetVal end),'K',''))
,Brand = IIF(A.Type=1,null,max(case when RetSeq=2 then RetVal end))
,Model = replace(max(case when RetVal Like 'Model[0-9,A-Z]%' then RetVal end),'Model','')
,SerialNum = replace(max(case when RetVal Like 'S/N[0-9,A-Z]%' then RetVal end),'S/N','')
From [dbo].[tvf-Str-Parse](CleanString,' ') B1
) C
Returns
The TVF, if interested:
CREATE FUNCTION [dbo].[tvf-Str-Parse] (@String varchar(max),@Delimiter varchar(10))
Returns Table
As
Return (
Select RetSeq = Row_Number() over (Order By (Select null))
,RetVal = LTrim(RTrim(B.i.value('(./text())[1]', 'varchar(max)')))
From (Select x = Cast('<x>' + replace((Select replace(@String,@Delimiter,'§§Split§§') as [*] For XML Path('')),'§§Split§§','</x><x>')+'</x>' as xml).query('.')) as A
Cross Apply x.nodes('x') AS B(i)
);
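If you want to sanity-check the splitter on its own, an ad-hoc call like the one below (the literal is a hand-written stand-in for one cleaned-up description, not part of the solution above) shows how RetSeq preserves the element order:
Select RetSeq, RetVal
From [dbo].[tvf-Str-Parse]('DRILL RIOBY ModelD5521 S/N77720', ' ')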
EDIT - If you don't want to, or can't, use a TVF:
dbFiddle
Select A.Type
,A.Description
,C.*
From YourTable A
Cross Apply (values ( replace(
replace(
replace(
replace(A.Description,',',' ')
,'  ',' ')
,'Model ','Model')
,'S/N ','S/N')
)
)B(CleanString)
Cross Apply (
Select Quantity = IsNull(left(max(case when RetSeq=1 then RetVal end),NullIf(patindex('%[^0-9]%',max(case when RetSeq=1 then RetVal end)) -1,0)),1)
,Name = substring(max(case when RetSeq=1 then RetVal end),patindex('%[^0-9]%',max(case when RetSeq=1 then RetVal end)),charindex(' ',max(case when RetSeq=1 then RetVal end)+' ')-1)
,Weight = IIF(A.Type=2,null,try_convert(decimal(5,1),replace(max(case when RetVal like '%PW' then RetVal end),'PW','')))
,Purity = try_convert(smallint ,replace(max(case when RetVal like '%K' then RetVal end),'K',''))
,Brand = IIF(A.Type=1,null,max(case when RetSeq=2 then RetVal end))
,Model = replace(max(case when RetVal Like 'Model[0-9,A-Z]%' then RetVal end),'Model','')
,SerialNum = replace(max(case when RetVal Like 'S/N[0-9,A-Z]%' then RetVal end),'S/N','')
From (
Select RetSeq = row_number() over (Order By (Select null))
,RetVal = ltrim(rtrim(B.i.value('(./text())[1]', 'varchar(max)')))
From (Select x = Cast('<x>' + replace((Select replace(CleanString,' ','§§Split§§') as [*] For XML Path('')),'§§Split§§','</x><x>')+'</x>' as xml).query('.')) as A
Cross Apply x.nodes('x') AS B(i)
) B1
) C
I want to create a master view combining all the tables passed as input to a Snowflake stored procedure. Please help with how the code can be framed for this.
create or replace procedure TEST_PROC("SRC_DB" VARCHAR(30),
"SRC_SCHEMA" VARCHAR(30), "TGT_DB" VARCHAR(30), "TGT_SCHEMA"
VARCHAR(30))
RETURNS varchar
LANGUAGE JAVASCRIPT
EXECUTE AS OWNER
as
$$
var result = '';
var tab = 'TABLE1,TABLE2'
var get_tables = `
with cte as(select value from table(SPLIT_TO_TABLE
(('${tab}'),','))
) select value from cte;`
var tables_name_master=snowflake.execute ({sqlText: get_tables});
var lcols_agg = '';
while(tables_name_master.next()){
var table_value = tables_name_master.getColumnValue(1);
var column_list = `
WITH cte2 as (select COLUMN_NAME , listagg(TABLE_NAME, ', ')
within group (order by COLUMN_NAME) A
from ${SRC_DB}.information_schema.COLUMNS
where TABLE_SCHEMA= '${SRC_SCHEMA}' and TABLE_NAME in (select
value from table(SPLIT_TO_TABLE (('${tab}'),',')))
group by COLUMN_NAME order by COLUMN_NAME
),
cte3 as (select 'x' x, COLUMN_NAME,iff(contains(A,'${table_value}'),COLUMN_NAME,CONCAT('NULL AS \"',COLUMN_NAME,'\"')) valuess from cte2 order by COLUMN_NAME
)select listagg(valuess,',') final FROM cte3 GROUP BY x
`;
var rs = snowflake.execute({ sqlText: column_list });
while(rs.next()){
lcols_agg += "SELECT " + rs.getColumnValue(1) + " FROM "+ SRC_DB+"."+SRC_SCHEMA+"."+tables_name_master.getColumnValue(1) + "\n" +"UNION " +"\n"
}
}
var count1 = 0 ;
count1 = lcols_agg.length
result = lcols_agg.substring(0,(count1-7));
const create_union_view = `
create or replace view abcd AS ${result}
;`
var view_create = snowflake.execute({ sqlText: create_union_view });
view_create.next()
return result
$$
;
call TEST_PROC('ABC','DEF','PQR','STU');
This generates my final view statement as:
CREATE OR REPLACE VIEW ABCD AS
COL1,COL2,COL3
UNION
NULL AS "COL2",NULL AS "COL3",COL1
Now, due to the mismatch in column order in the UNION, the view throws an error when we do select * from abcd. Is there any way we can have the columns of both tables in the same order, or any other workaround?
You need to add an ORDER BY to the listagg function after the cte3 expression:
instead of
select listagg(valuess,',') final FROM cte3 GROUP BY x
change it to:
select listagg(valuess,',') within group (order by valuess) final FROM cte3 GROUP BY x
See if that helps to resolve the order issue.
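If sorting on the generated expression text ever leaves the NULL AS placeholders out of position for some table, an alternative sketch is to order by COLUMN_NAME (which cte3 already selects), so every per-table select list comes out in the same column order:
select listagg(valuess,',') within group (order by COLUMN_NAME) final FROM cte3 GROUP BY x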
I am getting a count(*) after joining two Snowflake tables. This is done inside a stored procedure. If the count is greater than zero, I need to pass a value. My stored procedure gets called from a NiFi Processor and I have to return the value to NiFi so that an email can be sent from NiFi.
I am getting 'NaN' as output for the below code.
CREATE OR REPLACE PROCEDURE test_Delete_excep()
returns float not null
language javascript
as
$$
var rs;
var return_value = 0;
var SQL_JOIN = "select count(*) from (Select GT.VARIANTDATA from GOV_TEST GT inner join GOV_TEST_H GTH on GT.VARIANTDATA:COL1::String = GTH.VARIANTDATA:COL1::String where to_char(GT.VARIANTDATA) != to_char(GTH.VARIANTDATA));";
var stmt = snowflake.createStatement({sqlText: SQL_JOIN});
rs = stmt.execute();
rs.next();
return_value += JSON.stringify(rs.getColumnValue(1));
if (return_value > 0) { return 'email required';}
$$;
Here is the result:
Row TEST_DELETE_EXCEP
1 NaN
How can I do the arithmetic calculation and return a value to NiFi processor?
You are never returning a float value, which the SP defines as the return type. If return_value is greater than 0, it tries to return the string 'email required', which is not a float; that generates a NaN. If return_value is not greater than 0, the code never returns a value of any kind, which returns NULL. Since you specify NOT NULL for the return, that forces it to NaN.
Also, I'm not sure why you're trying to stringify the rs.getColumnValue(1). The select count(*) will produce an integer value, which you can read directly.
You probably want something like this:
CREATE OR REPLACE PROCEDURE test_Delete_excep()
returns float not null
language javascript
as
$$
var rs;
var return_value = 0;
var SQL_JOIN = "select count(*) from (Select GT.VARIANTDATA from GOV_TEST GT inner join GOV_TEST_H GTH on GT.VARIANTDATA:COL1::String = GTH.VARIANTDATA:COL1::String where to_char(GT.VARIANTDATA) != to_char(GTH.VARIANTDATA));";
var stmt = snowflake.createStatement({sqlText: SQL_JOIN});
rs = stmt.execute();
if(rs.next()) {
return_value = rs.getColumnValue(1);
} else {
return -1;
}
return return_value;
$$;
This will return the row count produced by your join SQL. If you want something different, please clarify the desired output.
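A quick way to verify the returned value from a Snowflake worksheet before wiring it into NiFi (the count comes back as a single-row, single-column float result):
CALL test_Delete_excep();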
I want to create a BEFORE INSERT trigger in an Informix database.
When we insert a record into the table, it should insert a random alphanumeric string into one of the fields. Are there any built-in functions for this?
The table consists of the following fields:
empid serial NOT NULL
age int
empcode varchar(10)
and I am running
insert into employee(age) values(10);
The expected output should be something as below:
id age empcode
1, 10, asf123*
Any help is appreciated.
As already commented, there is no existing function to create a random string; however, it is possible to generate random numbers and then convert these to characters. To create the random numbers you can either create a UDR wrapper for a C function such as random(), or register the excompat datablade and use the dbms_random_random() function.
Here is an example of a user-defined function that uses the dbms_random_random() function to generate a string of ASCII alphanumeric characters:
create function random_string()
returning varchar(10)
    define s varchar(10);
    define i, n int;
    let s = "";
    for i = 1 to 10
        -- random value in the range 0..61: 10 digits + 26 uppercase + 26 lowercase letters
        let n = mod(abs(dbms_random_random()), 62);
        if (n < 10) then
            let n = n + 48;  -- maps to '0'..'9'
        elif (n < 36) then
            let n = n + 55;  -- maps to 'A'..'Z'
        else
            let n = n + 61;  -- maps to 'a'..'z'
        end if
        let s = s || chr(n);
    end for
    return s;
end function;
This function can then be called from an insert trigger to populate the empcode column of your table.
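As a rough sketch (the trigger name is made up and the exact triggered-action syntax should be checked against your Informix version), the trigger can use the INTO clause of the triggered action to write the generated value into the new row:
-- call the SPL function above for each inserted row and store the result in empcode
CREATE TRIGGER trg_employee_empcode
    INSERT ON employee
    FOR EACH ROW
    (
        EXECUTE FUNCTION random_string() INTO empcode
    );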
I have a table Products with around 58000 records in it. I run two queries, as follows.
Product.where(description: 'swiss').count
It returns 0 products out of 58000. But when I run the query below:
Product.where.not(description: 'swiss').count
It returns 2932 products out of 58000. I think it should return all 58000 products, because it is the reverse of the first query.
I do not understand why it returns only 2932 products.
If you have NULL values in your columns, this can happen, because NULL always compares as NULL, even to itself, and a WHERE expression must be true for a row to be included in the result.
'a' = 'a'; True
'a' = 'b'; False
'a' = NULL; Results in NULL (not false!)
NULL = NULL; Results in NULL (not true!)
'a' != 'a'; False
'a' != 'b'; True
'a' != NULL; NULL
e.g. consider the table null_test containing
id | str
----+-----
1 | <NULL>
2 | a
3 | b
When looking for a column equal to some value, the NULL is never equal, so this will just return row 2.
SELECT id FROM null_test WHERE str = 'a';
But the same applies when looking for a column not equal to some value: the NULL is never not-equal (it's NULL), so this will just return row 3.
SELECT id FROM null_test WHERE str != 'a';
Thus the total of your = and != is not all the rows, because you never included the NULL rows. This is where IS NULL and IS NOT NULL come in (they work like you might expect = NULL and != NULL to work).
SELECT id FROM null_test WHERE str != 'a' OR str IS NULL;
Now you get rows 1 and 3, making this the opposite of str = 'a'.
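To see all three buckets at once against the example table (a sketch; the three counts should add up to the total row count):
SELECT COUNT(*)                                 AS total_rows,
       COUNT(CASE WHEN str = 'a'   THEN 1 END)  AS equal_a,
       COUNT(CASE WHEN str != 'a'  THEN 1 END)  AS not_equal_a,
       COUNT(CASE WHEN str IS NULL THEN 1 END)  AS null_rows
FROM null_test;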
ActiveRecord treats the Ruby nil value like SQL NULL and generates IS NULL checks in certain situations.
NullTest.where(str: 'a')
SELECT * FROM "null_test" WHERE "str" = 'a'
NullTest.where.not(str: 'a')
SELECT * FROM "null_test" WHERE "str" != 'a'
NullTest.where(str: nil)
SELECT * FROM "null_test" WHERE "str" IS NULL
NullTest.where.not(str: nil)
SELECT * FROM "null_test" WHERE "str" IS NOT NULL
NullTest.where(str: nil).or(NullTest.where.not(str: 'a'))
SELECT * FROM "null_test" WHERE "str" IS NULL OR "str" != 'a'
You likely have many records where description is nil, and those are not included in the where.not query.
A way to include the 'nil' records as well would be...
Product.where('description <> ? OR description IS NULL', 'swiss')
Or alternatively
Product.where.not(description: 'swiss').or(Product.where(description: nil))
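Either form should generate SQL along these lines (a sketch; the exact quoting and operator depend on your adapter):
SELECT "products".* FROM "products" WHERE ("products"."description" != 'swiss' OR "products"."description" IS NULL)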
I have a question about the performance of stored procedures in ADS (Advantage Database Server). I created a simple database with the following structure:
CREATE TABLE MainTable
(
Id INTEGER PRIMARY KEY,
Name VARCHAR(50),
Value INTEGER
);
CREATE UNIQUE INDEX MainTableName_UIX ON MainTable ( Name );
CREATE TABLE SubTable
(
Id INTEGER PRIMARY KEY,
MainId INTEGER,
Name VARCHAR(50),
Value INTEGER
);
CREATE INDEX SubTableMainId_UIX ON SubTable ( MainId );
CREATE UNIQUE INDEX SubTableName_UIX ON SubTable ( Name );
CREATE PROCEDURE CreateItems
(
MainName VARCHAR ( 20 ),
SubName VARCHAR ( 20 ),
MainValue INTEGER,
SubValue INTEGER,
MainId INTEGER OUTPUT,
SubId INTEGER OUTPUT
)
BEGIN
    DECLARE @MainName VARCHAR ( 20 );
    DECLARE @SubName VARCHAR ( 20 );
    DECLARE @MainValue INTEGER;
    DECLARE @SubValue INTEGER;
    DECLARE @MainId INTEGER;
    DECLARE @SubId INTEGER;
    @MainName = (SELECT MainName FROM __input);
    @SubName = (SELECT SubName FROM __input);
    @MainValue = (SELECT MainValue FROM __input);
    @SubValue = (SELECT SubValue FROM __input);
    @MainId = (SELECT MAX(Id)+1 FROM MainTable);
    @SubId = (SELECT MAX(Id)+1 FROM SubTable );
    INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, @MainName, @MainValue);
    INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, @SubName, @MainId, @SubValue);
    INSERT INTO __output SELECT @MainId, @SubId FROM system.iota;
END;
CREATE PROCEDURE UpdateItems
(
MainName VARCHAR ( 20 ),
MainValue INTEGER,
SubValue INTEGER
)
BEGIN
    DECLARE @MainName VARCHAR ( 20 );
    DECLARE @MainValue INTEGER;
    DECLARE @SubValue INTEGER;
    DECLARE @MainId INTEGER;
    @MainName = (SELECT MainName FROM __input);
    @MainValue = (SELECT MainValue FROM __input);
    @SubValue = (SELECT SubValue FROM __input);
    @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName);
    UPDATE MainTable SET Value = @MainValue WHERE Id = @MainId;
    UPDATE SubTable SET Value = @SubValue WHERE MainId = @MainId;
END;
CREATE PROCEDURE SelectItems
(
MainName VARCHAR ( 20 ),
CalculatedValue INTEGER OUTPUT
)
BEGIN
    DECLARE @MainName VARCHAR ( 20 );
    @MainName = (SELECT MainName FROM __input);
    INSERT INTO __output SELECT m.Value * s.Value FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId WHERE m.Name = @MainName;
END;
CREATE PROCEDURE DeleteItems
(
MainName VARCHAR ( 20 )
)
BEGIN
    DECLARE @MainName VARCHAR ( 20 );
    DECLARE @MainId INTEGER;
    @MainName = (SELECT MainName FROM __input);
    @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName);
    DELETE FROM SubTable WHERE MainId = @MainId;
    DELETE FROM MainTable WHERE Id = @MainId;
END;
Actually, the problem I have is that even such lightweight stored procedures run very slowly (about 50-150 ms) relative to plain queries (0-5 ms). To test the performance, I created a simple test (in F#, using the ADS ADO.NET provider):
open System;
open System.Data;
open System.Diagnostics;
open Advantage.Data.Provider;
let mainName = "main name #";
let subName = "sub name #";
// INSERT
let cmdTextScriptInsert = "
DECLARE @MainId INTEGER;
DECLARE @SubId INTEGER;
@MainId = (SELECT MAX(Id)+1 FROM MainTable);
@SubId = (SELECT MAX(Id)+1 FROM SubTable );
INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, :MainName, :MainValue);
INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, :SubName, @MainId, :SubValue);
SELECT @MainId, @SubId FROM system.iota;";
let cmdTextProcedureInsert = "CreateItems";
// UPDATE
let cmdTextScriptUpdate = "
DECLARE @MainId INTEGER;
@MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName);
UPDATE MainTable SET Value = :MainValue WHERE Id = @MainId;
UPDATE SubTable SET Value = :SubValue WHERE MainId = @MainId;";
let cmdTextProcedureUpdate = "UpdateItems";
// SELECT
let cmdTextScriptSelect = "
SELECT m.Value * s.Value FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId WHERE m.Name = :MainName;";
let cmdTextProcedureSelect = "SelectItems";
// DELETE
let cmdTextScriptDelete = "
DECLARE @MainId INTEGER;
@MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName);
DELETE FROM SubTable WHERE MainId = @MainId;
DELETE FROM MainTable WHERE Id = @MainId;";
let cmdTextProcedureDelete = "DeleteItems";
let cnnStr = @"data source=D:\DB\test.add; ServerType=local; user id=adssys; password=***;";
let cnn = new AdsConnection(cnnStr);
try
    cnn.Open();
    let cmd = cnn.CreateCommand();

    let parametrize ix prms =
        cmd.Parameters.Clear();
        let addParam = function
            | "MainName"  -> cmd.Parameters.Add(":MainName" , mainName + ix.ToString()) |> ignore;
            | "SubName"   -> cmd.Parameters.Add(":SubName"  , subName + ix.ToString() ) |> ignore;
            | "MainValue" -> cmd.Parameters.Add(":MainValue", ix * 3 ) |> ignore;
            | "SubValue"  -> cmd.Parameters.Add(":SubValue" , ix * 7 ) |> ignore;
            | _ -> ()
        prms |> List.iter addParam;

    let runTest testData =
        let (cmdType, cmdName, cmdText, cmdParams) = testData;
        let toPrefix cmdType cmdName =
            let prefix = match cmdType with
                         | CommandType.StoredProcedure -> "Procedure-"
                         | CommandType.Text -> "Script -"
                         | _ -> "Unknown -"
            in prefix + cmdName;
        let stopWatch = new Stopwatch();
        let runStep ix prms =
            parametrize ix prms;
            stopWatch.Start();
            cmd.ExecuteNonQuery() |> ignore;
            stopWatch.Stop();
        cmd.CommandText <- cmdText;
        cmd.CommandType <- cmdType;
        let startId = 1500;
        let count = 10;
        for id in startId .. startId+count do
            runStep id cmdParams;
        let elapsed = stopWatch.Elapsed;
        Console.WriteLine("Test '{0}' - total: {1}; per call: {2}ms", toPrefix cmdType cmdName, elapsed, Convert.ToInt32(elapsed.TotalMilliseconds)/count);

    let lst = [
        (CommandType.Text, "Insert", cmdTextScriptInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]);
        (CommandType.Text, "Update", cmdTextScriptUpdate, ["MainName"; "MainValue"; "SubValue"]);
        (CommandType.Text, "Select", cmdTextScriptSelect, ["MainName"]);
        (CommandType.Text, "Delete", cmdTextScriptDelete, ["MainName"])
        (CommandType.StoredProcedure, "Insert", cmdTextProcedureInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]);
        (CommandType.StoredProcedure, "Update", cmdTextProcedureUpdate, ["MainName"; "MainValue"; "SubValue"]);
        (CommandType.StoredProcedure, "Select", cmdTextProcedureSelect, ["MainName"]);
        (CommandType.StoredProcedure, "Delete", cmdTextProcedureDelete, ["MainName"])];

    lst |> List.iter runTest;
finally
    cnn.Close();
And I'm getting the following results:
Test 'Script -Insert' - total: 00:00:00.0292841; per call: 2ms
Test 'Script -Update' - total: 00:00:00.0056296; per call: 0ms
Test 'Script -Select' - total: 00:00:00.0051738; per call: 0ms
Test 'Script -Delete' - total: 00:00:00.0059258; per call: 0ms
Test 'Procedure-Insert' - total: 00:00:01.2567146; per call: 125ms
Test 'Procedure-Update' - total: 00:00:00.7442440; per call: 74ms
Test 'Procedure-Select' - total: 00:00:00.5120446; per call: 51ms
Test 'Procedure-Delete' - total: 00:00:01.0619165; per call: 106ms
The situation with the remote server is much better, but there is still a large gap between plain queries and stored procedures:
Test 'Script -Insert' - total: 00:00:00.0709299; per call: 7ms
Test 'Script -Update' - total: 00:00:00.0161777; per call: 1ms
Test 'Script -Select' - total: 00:00:00.0258113; per call: 2ms
Test 'Script -Delete' - total: 00:00:00.0166242; per call: 1ms
Test 'Procedure-Insert' - total: 00:00:00.5116138; per call: 51ms
Test 'Procedure-Update' - total: 00:00:00.3802251; per call: 38ms
Test 'Procedure-Select' - total: 00:00:00.1241245; per call: 12ms
Test 'Procedure-Delete' - total: 00:00:00.4336334; per call: 43ms
Is there any chance to improve the SP performance? Please advise.
ADO.NET driver version - 9.10.2.9
Server version - 9.10.0.9 (ANSI - GERMAN, OEM - GERMAN)
Thanks!
The Advantage v10 beta includes a variety of performance improvements directly targeting stored procedure performance. Here are some things to consider with the current shipping version, however:
In your CreateItems procedure it would likely be more efficient to replace
@MainName = (SELECT MainName FROM __input);
@SubName = (SELECT SubName FROM __input);
@MainValue = (SELECT MainValue FROM __input);
@SubValue = (SELECT SubValue FROM __input);
with the use of a single cursor to retrieve all parameters:
DECLARE input CURSOR;
OPEN input as SELECT * from __input;
FETCH input;
@MainName = input.MainName;
@SubName = input.SubName;
@MainValue = input.MainValue;
@SubValue = input.SubValue;
CLOSE input;
That will avoid 3 statement parse/semantic/optimize/execute operations just to retrieve the input parameters (I know, we really need to eliminate the __input table altogether).
The SelectItems procedure is rarely going to be as fast as a select from the client, especially in this case where it really isn't doing anything except abstracting a parameter value (which can easily be done on the client). Remember that because it is a JOIN, the SELECT that fills the __output table is going to be a static cursor (meaning an internal temporary file for the server to create and fill). On top of that, the __output table is yet another temporary file for the server, plus there is additional overhead to populate the __output table with data that has already been placed in the static cursor temp table, just for the sake of duplicating it. (The server could do a better job of detecting this and replacing __output with the existing static cursor reference, but it currently doesn't.)
I will try to make some time to try your procedures on version 10. If you have the test tables you used in your testing, feel free to zip them up and send them to Advantage@iAnywhere.com with attn:JD in the subject.
There is one change that would help with the CreateItems procedure. Change the following two statements:
@MainId = (SELECT MAX(Id)+1 FROM MainTable);
@SubId = (SELECT MAX(Id)+1 FROM SubTable );
To this:
@MainId = (SELECT MAX(Id) FROM MainTable);
@MainId = @MainId + 1;
@SubId = (SELECT MAX(Id) FROM SubTable );
@SubId = @SubId + 1;
I looked at the query plan information (in Advantage Data Architect) for the first version of that statement. It looks like the optimizer does not break that MAX(Id)+1 into its component pieces. The statement select max(id) from maintable can be optimized using the index on the Id field; it appears that max(id)+1 is not. So making that change would be fairly significant, particularly as the table grows.
Another thing that might help is to add a CACHE PREPARE ON; statement to the top of each script. This can help with certain procedures when running them multiple times.
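Putting the two procedure-level suggestions together (the cursor-based read of __input from the earlier answer plus the split MAX(Id) assignments), a revised CreateItems might look roughly like this; this is only a sketch and is untested:
CREATE PROCEDURE CreateItems
(
    MainName VARCHAR ( 20 ),
    SubName VARCHAR ( 20 ),
    MainValue INTEGER,
    SubValue INTEGER,
    MainId INTEGER OUTPUT,
    SubId INTEGER OUTPUT
)
BEGIN
    DECLARE @MainName VARCHAR ( 20 );
    DECLARE @SubName VARCHAR ( 20 );
    DECLARE @MainValue INTEGER;
    DECLARE @SubValue INTEGER;
    DECLARE @MainId INTEGER;
    DECLARE @SubId INTEGER;
    DECLARE input CURSOR;

    /* one pass over __input instead of four separate SELECTs */
    OPEN input AS SELECT * FROM __input;
    FETCH input;
    @MainName = input.MainName;
    @SubName = input.SubName;
    @MainValue = input.MainValue;
    @SubValue = input.SubValue;
    CLOSE input;

    /* MAX(Id) alone can use the index on Id; the +1 is applied separately */
    @MainId = (SELECT MAX(Id) FROM MainTable);
    @MainId = @MainId + 1;
    @SubId = (SELECT MAX(Id) FROM SubTable);
    @SubId = @SubId + 1;

    INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, @MainName, @MainValue);
    INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, @SubName, @MainId, @SubValue);
    INSERT INTO __output SELECT @MainId, @SubId FROM system.iota;
END;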
Edit: The Advantage v10 beta was released today, so I ran your CreateItems procedure with both v9.1 and the new beta version, 1000 iterations against the remote server. The speed difference was significant:
v9.1: 101 seconds
v10 beta: 2.2 seconds
Note that I ran a version with the select max(id) change I described above. This testing was on my fairly old development PC.