How to submit a query via a stored procedure in MySQL? - stored-procedures

Hello, I am trying to automate my history-tracking procedure in MySQL.
The procedure should update a table and create another one, using the uid as its name.
CREATE PROCEDURE `InsertQueryStore`( u VARCHAR(128), ID INT, q VARCHAR(1024) )
BEGIN
INSERT INTO querystore(`qID`, `qstring`, `user`) VALUES(ID, q, u); # this works
# DROP TABLE IF EXISTS ID; // Can I do something like this?
# CREATE TABLE ID q; // q is a query string which should return its results into the table named ID
END;
Then I would like to call it as:
Call InsertQueryStore("myname", 100, "select * from mydb.table limit 10")
What is the proper way to use the varchar variable in the procedure?
Thank you beforehand.
Arman.

I think the way to go with that would be using Dynamic SQL.
MySQL does not support dynamic SQL in the way some DBMS do, but it does have the PREPARE/EXECUTE methods for creating a query and executing it. See if you can use them within your stored procedure.
Something like:
CREATE PROCEDURE `InsertQueryStore`( u VARCHAR(128), ID INT, q VARCHAR(1024) )
BEGIN
INSERT INTO querystore(`qID`, `qstring`, `user`) VALUES(ID, q, u);
PREPARE stmt FROM "DROP TABLE IF EXISTS ?";
EXECUTE stmt USING ID;
DEALLOCATE PREPARE stmt;
/* etc */
END;
If you find you can't use the parameterised version with '?' in that context, just use CONCAT() to assemble the statement with the actual value embedded in the string, since it is already known at that stage.
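For example, here is a rough (untested) sketch of that CONCAT() variant; the t_ prefix on the generated table name is just my own convention, since a purely numeric identifier would otherwise have to be quoted:
CREATE PROCEDURE `InsertQueryStore`( u VARCHAR(128), ID INT, q VARCHAR(1024) )
BEGIN
INSERT INTO querystore(`qID`, `qstring`, `user`) VALUES(ID, q, u);
-- identifiers (table names) cannot be bound with '?', so build the full
-- statement text in a user variable and PREPARE from that
SET @drop_sql = CONCAT('DROP TABLE IF EXISTS `t_', ID, '`');
PREPARE stmt FROM @drop_sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
SET @create_sql = CONCAT('CREATE TABLE `t_', ID, '` AS ', q);
PREPARE stmt FROM @create_sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END;
Note that PREPARE only accepts a string literal or a user variable (@var), not a routine parameter, which is why the statement text goes into @drop_sql / @create_sql first.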
There is a reasonable article about it here, mentioned in a previous SO post.

Related

BigQuery - parametrize tables and columns in a stored procedure

Consider an enterprise that captures sensor data for different production facilities. Per facility, we create an aggregation query that averages the values into 5-minute timeslots. This query consists of a long list of WITH clauses and writes data to a table (called aggregation_table).
Now my problem: we currently have n queries running that execute exactly the same logic; the only things that differ are the table names (and sometimes the column names, but let's ignore that for now).
Instead of managing n different scripts that are basically the same, I would like to put the logic in a stored procedure that works like this:
CALL aggregation_query(facility_name) -> resolve the different tables for that facility and then use them in the different WITH clauses
On top of that, instead of having one long set of clauses that gives me the end result, I would like to chunk them up into logical, parametrizable blocks. So, for example, if I call the aforementioned stored procedure for facility A, I want to be able to pass / use this table name in these different functions, where the output can be reused in the next statement (as you would with WITH clauses).
Another argument for chunking this up into reusable blocks is that we have many "derivatives" of this aggregation query, for example to manage historical data, to correct data, or to produce the sensor data at another aggregation level. As these become overly complex, it is much easier to manage them without having to copy-paste and adjust them every time.
In the current set-up, it is useful to know that I am only entitled to use plain BigQuery, as my team is not allowed to access CI/CD, scheduling, or the repository (meaning that I cannot solve the issue by having CI/CD deploy the n different versions of the procedure and functions).
So in the end, I would like to end up with something like this, using only BigQuery:
CREATE OR REPLACE PROCEDURE `aggregation_function`()
BEGIN
  DECLARE tablename STRING;
  DECLARE active_table_name STRING;

  ## get list of tables
  CREATE TEMP TABLE tableNames AS
  SELECT table_catalog, table_schema, table_name
  FROM `catalog.schema.INFORMATION_SCHEMA.TABLES`
  WHERE table_name = tablename;

  WHILE (SELECT COUNT(*) FROM tableNames) >= 1 DO
    ## build dataset + table name
    SET active_table_name = CONCAT('`', table_catalog, '.', table_schema, '.', table_name, '`');

    ## use CONCAT to build the string and execute
    EXECUTE IMMEDIATE '''
      INSERT INTO `aggregation_table_for_facility` (timeslot, sensor_name, AVG_VALUE)
      WITH
        STEP_1 AS (
          SELECT *
          FROM my_table_function_step_1(active_table_name, parameter1, parameter2)
        ),
        STEP_2 AS (
          SELECT *
          FROM my_table_function_step_2(STEP_1, parameter1, parameter2)
        )
      SELECT * FROM STEP_2
    '''
    USING active_table_name AS active_table_name;

    DELETE FROM tableNames
    WHERE table_name = tablename;
  END WHILE;
END;
I was hoping someone could provide a snippet of how to do this in standard SQL / BigQuery, so basically:
a stored procedure that takes in a string variable and is able to use it as a table (partly solved in the approach above, but I'm not sure whether there are better ways)
a (table) function that is able to take this table_name parameter as well and return a table that can be used in the next WITH clause (or alternatively writes to a temp table)
I think the code snippets below should give you some insight into dealing with procedures, inserts and EXECUTE IMMEDIATE statements.
Here I'm creating a procedure which inserts values into a table that exists in the information schema. Also, because I want to return a value, I declare the parameter OUT active_table_name, which returns the value I assigned inside the procedure.
CREATE OR REPLACE PROCEDURE `project-id.dataset`.custom_function(tablename STRING,OUT active_table_name STRING)
BEGIN
DECLARE query STRING;
SET active_table_name= (SELECT CONCAT('`',table_catalog,'.',table_schema,'.' ,table_name,'`')
FROM `project-id.dataset.INFORMATION_SCHEMA.TABLES`
WHERE table_name = tablename);
# a multiline query can be written using ''' or """
SET query =
'''
insert into %s (string_field_0,string_field_1,string_field_2,string_field_3,string_field_4,int64_field_5)
with custom_query as (
select string_field_0,string_field_2,'169 BestCity',string_field_3,string_field_4,55677 from %s limit 1
)
select * from custom_query;
''';
# the dynamic query does the actual work and is the last statement the procedure runs
# pass parameters into the query string using FORMAT()
EXECUTE IMMEDIATE (FORMAT(query, active_table_name, active_table_name));
END
You can also use a loop to iterate through the records of a working table, so it executes the procedure for each record and is also able to get the value back from the procedure to use somewhere else, e.g. in a second procedure that performs a delete operation.
DECLARE tablename STRING;
DECLARE out_value STRING;
FOR record IN
(SELECT tablename from `my-project-id.dataset.table`)
DO
SET tablename = record.tablename;
# call the procedure and read back the OUT value for this row
call `project-id.dataset`.custom_function(tablename,out_value);
select out_value;
END FOR;
To recap, there are some restrictions, such as not being able to call procedures inside an EXECUTE IMMEDIATE or to nest an EXECUTE IMMEDIATE inside another EXECUTE IMMEDIATE, to name a few. I think these snippets should help you deal with your current situation.
For this sample I use the following documentation:
Data Manipulation Language
Dealing with outputs
Information Schema Tables
Execute Immediate
For...In
Loops

How to write to a dynamically created table in a Redshift procedure

I need to write a procedure in Redshift that will write to a table, but the table name comes from the input string. Then I declare a variable that puts together the table name.
CREATE OR REPLACE PROCEDURE my_schema.data_test(current "varchar")
LANGUAGE plpgsql
AS $$
declare new_table varchar(50) = 'new_tab' || '_' || current;
BEGIN
select 'somestring' as colname into new_table;
commit;
END;
$$
This code runs but it doesn't create a new table, and there are no errors. If I remove the DECLARE statement then it works, creating a table called "new_table"; it's just not using the declared variable name.
It's hard to find good examples because Redshift is based on PostgreSQL, and all the PostgreSQL pages say it only has functions, not procedures. But Redshift procedures were introduced last year and I don't see many examples.
Well, when you declare a variable "new_table" and perform a SELECT ... INTO "new_table", the value gets assigned to the variable "new_table". You would see that if you returned the variable using an OUT parameter.
And when you remove the declaration, it simply works as the SELECT ... INTO syntax of Redshift SQL and creates a table.
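For illustration, here is a minimal (untested) sketch that makes the assignment visible with RAISE INFO rather than an OUT parameter; the procedure name data_test_demo is made up:
CREATE OR REPLACE PROCEDURE my_schema.data_test_demo(current "varchar")
LANGUAGE plpgsql
AS $$
declare new_table varchar(50) := 'new_tab' || '_' || current;
BEGIN
-- assigns 'somestring' to the variable; no table named new_tab_<current> is created
select 'somestring' into new_table;
RAISE INFO 'new_table now holds %', new_table;
END;
$$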
Now to the solution:
Create a table using the CREATE TABLE AS...syntax.
Also, you need to substitute in the value of the declared variable, so use the EXECUTE command.
CREATE OR REPLACE PROCEDURE public.ct_tab (vname varchar)
AS $$
DECLARE tname VARCHAR(50):='public.swap_'||vname;
BEGIN
execute 'create table ' || tname || ' as select ''name''';
END;
$$ LANGUAGE plpgsql
;
Now if you call the procedure passing 'abc', a table named "swap_abc" will be created in the public schema.
call public.ct_tab('abc');
Let me know if it helps :)

Is it possible to pass in a variable amount of parameters to a stored procedure in redshift?

I am trying to write a stored procedure in AWS Redshift SQL, and one of my parameters needs to be able to hold an integer list (it will be used as 'IN(0,100,200,...)' inside the WHERE clause). How would I write the input parameter in the header of the procedure so that this is possible (if it is possible at all)?
I've tried passing them in as a VARCHAR "integer list" of sorts, but then wasn't sure how to parse that back into ints.
Update: I found a way to parse the string and loop through it using the SPLIT_PART function, storing all of the values into a table. Then I just use a SELECT FROM that table inside the IN() call.
What I ended up doing was as follows. I took in the integers that I was expecting as a comma-separated string. I then ran the following on it.
CREATE OR REPLACE PROCEDURE test_string_to_int(VARCHAR)
AS $$
DECLARE
split_me ALIAS FOR $1;
loop_var INT;
BEGIN
DROP TABLE IF EXISTS int_list;
CREATE TEMPORARY TABLE int_list (
integer_to_store INT
);
FOR loop_var IN 1..(REGEXP_COUNT(split_me,',') + 1) LOOP
INSERT INTO int_list VALUES (CAST(SPLIT_PART(split_me,',',loop_var) AS INT));
END LOOP;
END;
$$ LANGUAGE plpgsql;
So I would call the procedure with something like:
CALL test_string_to_int('1,2,3');
and could do a SELECT on int_list to see all the values stored in the table. Then, in the queries that need this parameter, I ran:
.........................
WHERE num_items IN(SELECT integer_to_store FROM int_list);

DB2 iSeries v6.1 vs Linux v9.7 functionality on xmltable, xml doc

Techies--
I'm getting SQL0204 "XML in *LIBL type *SQLUDT not found" on a DB2 for i 6.1 install when I try to deploy a stored proc I know works on Linux v9.7. The reason I am attempting to get this to work is that I really need to pass a table (or array) variable. I couldn't find a way to send a multi-dimensional array to a sproc on the 6.1 version of i, so I thought I'd try getting around that with an XML doc. But that failed too... Does anyone have any advice for me on how to solve this issue?
Here's the sproc that works on v9.7, Linux:
CREATE PROCEDURE HCMDEV.EMP_MULTIPLE_XML (IN DOC XML)
DYNAMIC RESULT SETS 1
READS SQL DATA
LANGUAGE SQL SPECIFIC EMP_MULTIPLE_XML
P1: BEGIN
DECLARE CSR1 CURSOR WITH RETURN FOR
SELECT emp.EMPID,
emp.FIRSTNAME,
emp.LASTNAME,
emp.DIVISION,
emp.DISTRICT,
emp.LOCATION,
emp.OPERATIONALAREA,
emp.TERMDATE,
emp.REHIREDATE,
emp.HIREDATE,
emp.ADDRESSLINE1,
emp.ADDRESSLINE2,
emp.CITY,
emp.STATE,
emp.ZIPCODE,
emp.TELEPHONE1,
emp.POSITIONCODE,
emp.POSITIONTITLE,
emp.HIRECODE
FROM HCMDEV.EMPLOYEE emp
WHERE EMP.EMPID IN
(SELECT X.EMPID
FROM XMLTABLE('$d/EMPLOYEE/EMPID' PASSING DOC AS "d" COLUMNS EMPID CHAR(9) PATH '.') AS X);
OPEN CSR1;
END P1
For those following this thread, XMLTABLE is NOT available until V7R1. The workaround is to create a stored proc that accepts a CLOB datatype as the IN parameter. In my case, I pass a long string with commas separating the values for the empid. That works just fine.
Here's a sampling:
CREATE PROCEDURE HCMDEV.EMP_MULT (IN emp_array CLOB(2M))
DYNAMIC RESULT SETS 1
READS SQL DATA
LANGUAGE SQL SPECIFIC EMP_MULT
P1: BEGIN
-- #######################################################################
-- # Returns specific employees based on incoming clob array
-- #######################################################################
DECLARE v_dyn varchar(10000);
DECLARE v_sql varchar(10000);
DECLARE cursor1 CURSOR WITH RETURN FOR v_dyn;
SET v_sql =
'SELECT
emp.EMPID,
emp.FIRSTNAME,
emp.LASTNAME
FROM HCMDEV.EMPLOYEE emp
WHERE emp.EMPID IN (' || emp_array || ')';
PREPARE v_dyn FROM v_sql;
OPEN cursor1;
END P1

Use Recursive CTE in DB2 stored proc

I have a need to run a recursive CTE within a stored proc, but I can't get it past this:
SQL0104N An unexpected token "with" was found following "SET count=count+1;
". Expected tokens may include: "". LINE NUMBER=26.
My google-fu showed a couple of similar topics, but none with resolution.
The query functions as expected outside of the stored proc, so I'm hoping that there's some syntactic sugar I'm missing that'll let this work. Similarly, the proc compiles and works without the query.
Here's a contrived example:
--setup
create table tree (id integer, name varchar(50), parent_id integer);
insert into tree values (1, 'Alice', null);
insert into tree values (2, 'Bob', 1);
insert into tree values (3, 'Charlie', 2);
-- the proc
create or replace procedure testme() RESULT SETS 1 LANGUAGE SQL
BEGIN
DECLARE SQLSTATE CHAR(5);
DECLARE SQLCODE integer default 0;
DECLARE count INTEGER;
DECLARE sum INTEGER;
DECLARE total INTEGER;
DECLARE id INTEGER;
DECLARE curs CURSOR WITH RETURN FOR
select count,sum from sysibm.sysdummy1;
DECLARE hiercurs CURSOR FOR
select id from tree order by id;
SET bomQuery='';
PREPARE stmt FROM bomQuery;
SET count = 0;
SET sum = 0;
set total = 0;
OPEN hiercurs;
FETCH hiercurs INTO id;
WHILE (SQLCODE <> 100) DO
SET count=count+1;
with org (level,id,name,parent_id) as
(select 1 as level,root.id,root.name,root.parent_id from tree root where root.id=id
union all
select level+1,employee.id,employee.name,employee.parent_id from org boss, tree employee
where level < 5 and employee.parent_id=boss.id)
select count(1) into sum from org;
SET total=total+sum;
FETCH hiercurs INTO id;
END WHILE;
CLOSE hiercurs;
OPEN curs;
END
The CTE in DB2 doesn't seem to recognize the scalar result of the query, and so it won't let the SELECT ... INTO work (not a problem on Oracle or SQL Server). The solution is to open a cursor over the CTE query and FETCH INTO the variable instead of using SELECT INTO.
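A rough, untested sketch of that change against the contrived example above (subtree_count and cte_curs are names I made up): the DECLAREs go at the top of the block with the other declarations, and the OPEN/FETCH/CLOSE replaces the inline "with ... select count(1) into sum" inside the WHILE loop.
DECLARE subtree_count INTEGER DEFAULT 0;
DECLARE cte_curs CURSOR FOR
with org (level,id,name,parent_id) as
(select 1 as level,root.id,root.name,root.parent_id from tree root where root.id=id
union all
select level+1,employee.id,employee.name,employee.parent_id from org boss, tree employee
where level < 5 and employee.parent_id=boss.id)
select count(1) from org;
-- then, inside the WHILE loop:
OPEN cte_curs;
FETCH cte_curs INTO subtree_count;  -- FETCH INTO instead of SELECT INTO
CLOSE cte_curs;
SET total=total+subtree_count;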
In addition to rjb's suggestion of enclosing the CTE query inside a cursor, you can also stuff the CTE into a user-defined function or a view, and then code a straight select against that object in your stored procedure.
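For instance, an untested sketch of the view variant (org_subtree_counts is a made-up name; the CTE is reshaped so the root id becomes a column you can filter on, since a view cannot take a parameter):
CREATE VIEW org_subtree_counts (root_id, subtree_count) AS
with org (level, id, root_id) as
(select 1, root.id, root.id from tree root
union all
select level+1, employee.id, boss.root_id from org boss, tree employee
where level < 5 and employee.parent_id=boss.id)
select root_id, count(1) from org group by root_id;
Inside the procedure you could then use a plain SELECT subtree_count INTO sum FROM org_subtree_counts WHERE root_id = id;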
