Insert into Identical tables without using dynamic sql - stored-procedures

I am using SQL Server 2012 Enterprise and I have a stored procedure that accepts two parameters:
@pkgId varchar(16), @siteId varchar(2)
The stored procedure will then do an INSERT like this:
IF @siteId = '01'
BEGIN
    INSERT INTO dbo.table01 (pkgId)
    VALUES (@pkgId)
END
IF @siteId = '02'
BEGIN
    INSERT INTO dbo.table02 (pkgId)
    VALUES (@pkgId)
END
IF @siteId = '03'
BEGIN
    INSERT INTO dbo.table03 (pkgId)
    VALUES (@pkgId)
END
Now we are looking to add 10 more sites, so I would have to add 10 more IF statements. But I DO NOT want to use dynamic SQL, as I need the query plans to be cached, because speed is a must. Also, I have many more tables that already end in '01', '02' and '03', so there are a lot more code updates for me to do.
Also, it is a business requirement that these tables be separate. Meaning, I cannot just have one table with siteId as a column.
So the question is: is there some other way I can perform this INSERT and keep my coding to a minimum? Meaning, I would like to call the INSERT only once, if possible, without the use of dynamic SQL.
FYI - I have seen some other alternatives, like setting a synonym at runtime, but this would cause concurrency issues.

Query plans for dynamic SQL, used in the manner you need, are cached; see the links here and here.
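For illustration, here is a minimal sketch of what that could look like with sp_executesql (the procedure name and the QUOTENAME guard are assumptions, not taken from the question):

CREATE PROCEDURE dbo.InsertPkg
    @pkgId varchar(16),
    @siteId varchar(2)
AS
BEGIN
    -- Build the one statement that varies only by table suffix.
    DECLARE @sql nvarchar(max) =
        N'INSERT INTO dbo.' + QUOTENAME('table' + @siteId) + N' (pkgId) VALUES (@pkgId);';
    -- sp_executesql caches one plan per distinct statement text, so each of the
    -- site tables ends up with its own reusable cached plan.
    EXEC sp_executesql @sql, N'@pkgId varchar(16)', @pkgId = @pkgId;
END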

Related

Bigquery - parametrize tables and columns in a stored procedure

Consider an enterprise that captures sensor data for different production facilities. Per facility, we create an aggregation query that averages the values into 5-minute timeslots. This query consists of a long list of WITH clauses and writes data to a table (called aggregation_table).
Now my problem: currently we have n queries running that execute exactly the same logic; the only thing that differs is the table names (and sometimes column names, but let's ignore that for now).
Instead of managing n different scripts that are basically the same, I would like to put it in a stored procedure that is able to work like this:
CALL aggregation_query(facility_name) -> resolve the different tables for that facility and then use them in the different with clauses
On top of that, instead of having this long set of clauses that gives me the end result, I would like to chunk them up into logical blocks that are parametrizable. So for example, if I call the aforementioned stored procedure for facility A, I want to be able to pass / use this table name in these different functions, where the output can be re-used in the next statement (like you would do with WITH clauses).
Another argument for chunking this up into re-usable blocks is that we have many "derivatives" of this aggregation query, for example to manage historical data, to correct data or to have the sensor data on another aggregation level. As these become overly complex, it is much easier to manage them without having to copy, paste and adjust them every time.
In the current set-up, it is useful to know that I am only entitled to use plain BigQuery, as my team is not allowed to access the CI/CD / scheduling and repository (meaning that I cannot solve the issue by having CI/CD deploy the n different versions of the procedure and functions).
So in the end, I would like to end up with something like this, using only BigQuery:
CREATE OR REPLACE PROCEDURE `aggregation_function`()
BEGIN
  DECLARE tablename STRING;
  DECLARE active_table_name STRING;

  # get list of tables
  CREATE TEMP TABLE tableNames AS
  SELECT table_catalog, table_schema, table_name
  FROM `catalog.schema.INFORMATION_SCHEMA.TABLES`
  WHERE table_name = tablename;

  WHILE (SELECT COUNT(*) FROM tableNames) >= 1 DO
    # build dataset + table name
    SET active_table_name = CONCAT('`', table_catalog, '.', table_schema, '.', table_name, '`');

    # use CONCAT to build the string and execute
    EXECUTE IMMEDIATE '''
    INSERT INTO `aggregation_table_for_facility` (timeslot, sensor_name, avg_value)
    WITH STEP_1 AS (
      SELECT *
      FROM my_table_function_step_1(active_table_name, parameter1, parameter2)
    ),
    STEP_2 AS (
      SELECT *
      FROM my_table_function_step_2(STEP_1, parameter1, parameter2)
    )
    SELECT * FROM STEP_2
    '''
    USING active_table_name AS active_table_name;

    DELETE FROM tableNames
    WHERE table_name = tablename;
  END WHILE;
END;
I was hoping someone could make a snippet of how I can do this in standard SQL / BigQuery, so basically:
a stored procedure that takes in a string variable and is able to use that as a table (partly solved in the approach above, but I'm not sure if there are better ways)
a (table) function that is able to take this table_name parameter as well and return a table that can be used in the next WITH clause (or alternatively writes to a temp table)
I think the code snippets below should give you some insight into dealing with procedures, INSERTs and EXECUTE IMMEDIATE statements.
Here I'm creating a procedure which will insert values into a table that is looked up through the information schema. Also, to return a value, I use OUT active_table_name to hand back the value assigned inside the procedure.
CREATE OR REPLACE PROCEDURE `project-id.dataset`.custom_function(tablename STRING, OUT active_table_name STRING)
BEGIN
  DECLARE query STRING;
  SET active_table_name = (
    SELECT CONCAT('`', table_catalog, '.', table_schema, '.', table_name, '`')
    FROM `project-id.dataset.INFORMATION_SCHEMA.TABLES`
    WHERE table_name = tablename);
  # a multiline query can be written by quoting it with ''' or """
  SET query = '''
  INSERT INTO %s (string_field_0, string_field_1, string_field_2, string_field_3, string_field_4, int64_field_5)
  WITH custom_query AS (
    SELECT string_field_0, string_field_2, '169 BestCity', string_field_3, string_field_4, 55677 FROM %s LIMIT 1
  )
  SELECT * FROM custom_query;
  ''';
  # the query must be executed as the last operation;
  # pass parameters into the string using FORMAT
  EXECUTE IMMEDIATE (FORMAT(query, active_table_name, active_table_name));
END
You can also use a loop to iterate through the records of a working table, so it will execute the procedure and also be able to get the value from the procedure to use somewhere else, e.g. in a second procedure that performs a delete operation.
DECLARE tablename STRING;
DECLARE out_value STRING;
FOR record IN
  (SELECT tablename FROM `my-project-id.dataset.table`)
DO
  SET tablename = record.tablename;
  CALL `project-id.dataset`.custom_function(tablename, out_value);
  SELECT out_value;
END FOR;
To recap, there are some restrictions, such as not being able to call a procedure inside an EXECUTE IMMEDIATE, or to nest one EXECUTE IMMEDIATE inside another, to name a few. I think these snippets should help you deal with your current situation.
For this sample I used the following documentation:
Data Manipulation Language
Dealing with outputs
Information Schema Tables
Execute Immediate
For...In
Loops

TFDQuery failing to update?

I'm having a problem with a synchronisation issue... I have a source table (mtAllowanceCategory) which I want to sync into a copy of it (qryAllowanceCategory). To make sure records in the copy are deleted if they are no longer present in the source, the copy has a "StillHere" boolean field, which is set to on when the record is added or updated and otherwise stays off. Afterwards, all records with StillHere=false are deleted.
That's the idea, anyway... in practice, the flag field isn't turned on when posting updates. When I trace the code, the statement is executed; when I look in Access, it stays off. Hence the delete SQL afterwards clears the entire table.
Been trying to figure this out for hours now; what am I missing??
mtAllowanceCategory:TFDMemTable (filled from an API call, this works fine)
qryAllowanceCategory:TFDQuery
conn:TFDConnection to a local Access database (also used for qryAllowanceCategory)
conn.ExecSQL('UPDATE AllowanceCategory SET StillHere=false;');
while not mtAllowanceCategory.eof do
begin
if qryAllowanceCategory.locate('WLPid',mtAllowanceCategory.FieldByName('Id').AsString,[loCaseInsensitive]) then
begin
Updating:=true;
qryAllowanceCategory.Edit;
end
else
begin
Updating:=false;
qryAllowanceCategory.Insert;
end;
qryAllowanceCategory.FieldByName('createdBy').AsString := mtAllowanceCategory.FieldByName('createdBy').AsString;
qryAllowanceCategory.FieldByName('createdOn').AsString := mtAllowanceCategory.FieldByName('createdOn').AsString;
qryAllowanceCategory.FieldByName('description').AsString := mtAllowanceCategory.FieldByName('description').AsString;
qryAllowanceCategory.FieldByName('WLPid').AsString := mtAllowanceCategory.FieldByName('id').AsString;
qryAllowanceCategory.FieldByName('isDeleted').AsBoolean := mtAllowanceCategory.FieldByName('isDeleted').AsBoolean;
qryAllowanceCategory.FieldByName('isInUse').AsBoolean := mtAllowanceCategory.FieldByName('isInUse').AsBoolean;
qryAllowanceCategory.FieldByName('modifiedBy').AsString := mtAllowanceCategory.FieldByName('modifiedBy').AsString;
qryAllowanceCategory.FieldByName('modifiedOn').AsString := mtAllowanceCategory.FieldByName('modifiedOn').AsString;
qryAllowanceCategory.FieldByName('WLPname').AsString := mtAllowanceCategory.FieldByName('name').AsString;
qryAllowanceCategory.FieldByName('number').AsInteger := mtAllowanceCategory.FieldByName('number').AsInteger;
qryAllowanceCategory.FieldByName('percentage').AsFloat := mtAllowanceCategory.FieldByName('number').AsFloat;
qryAllowanceCategory.FieldByName('remark').AsString := mtAllowanceCategory.FieldByName('remark').AsString;
qryAllowanceCategory.FieldByName('LocalEdited').AsBoolean := False;
qryAllowanceCategory.FieldByName('LocalInserted').AsBoolean := False;
qryAllowanceCategory.FieldByName('LocalDeleted').AsBoolean := False;
qryAllowanceCategory.FieldByName('StillHere').AsBoolean := True;
qryAllowanceCategory.Post;
mtAllowanceCategory.next;
end;
conn.commit;
conn.ExecSQL('DELETE FROM AllowanceCategory WHERE StillHere=false;');
When I read your question, I was struck by two thoughts: one was that I couldn't immediately see the cause of your problem, and the other that you could probably avoid the problem anyway if you used SQL rather than table traversals in code.
It seemed to me that you might be able to do most, if not all, of what you need, in terms of synchronising the two tables, using Access SQL rather than traversing the qryAllowanceCategory table with a while not EOF loop.
(Btw, in the following I'm going to use 'mtAC' and 'qryAC' to reduce typing & typos.)
Using Access SQL
Initially, I did not have much luck, as Access rejected my attempts to refer to both tables in an UPDATE statement against the qryAC one using a Join or Outer Join, but then I came across a reference showing that Access does support an Inner Join syntax. These SQL statements execute successfully when passed to ExecSQL on the FireDAC connection to the database:
update qryAC set qryAC.StillHere = True
where exists(select mtAC.* from mtAC inner join qryAC on mtAC.WLPid = qryAC.WLPid)
and
update qryAC inner join mtAC on mtAC.WLPid = qryAC.WLPid set qryAC.AValue = mtAC.AValue
The first of these obviously provides a way to set the StillHere field to True, or to False with a trivial modification.
The second shows a way to update a set of fields in qryAC from the matching rows in mtAC, and this could, of course, be limited to a subset of rows with a suitable Where clause.
Access SQL also supports checking whether a row in one table exists in the other, as in
select * from qryAC q where exists (select * from mtac m where q.wlpid = m.wlpid)
and for deleting rows in one table which do not exist in the other
delete from qryAC q where not exists (select * from mtac m where q.wlpid = m.wlpid)
Using FireDAC's LocalSQL
I also mentioned LocalSQL in a comment. This supports a far broader range of SQL statements than native Access SQL and can operate on any TDataSet descendant, so if you find something that Access SQL syntax doesn't support, it is worth considering using LocalSQL instead. Its main downside is that it operates on the datasets using traversals, so it is not quite as "instant" as native SQL. It can be a bit tricky to set up, so here are the settings from the DFM which show how the components need connecting up. You would use it by feeding what you want to FDQuery1.
object AccessConnection: TFDConnection
Params.Strings = (
'Database=D:\Delphi\Code\FireDAC\LocalSQL\Allowance.accdb'
'DriverID=MSAcc')
Connected = True
LoginPrompt = False
end
object mtAC: TFDQuery
AfterOpen = mtACAfterOpen
Connection = AccessConnection
SQL.Strings = (
'select * from mtAC')
end
object qryAC: TFDQuery
Connection = AccessConnection
end
object LocalSqlConnection: TFDConnection
Params.Strings = (
'DriverID=SQLite')
Connected = True
LoginPrompt = False
end
object FDLocalSQL1: TFDLocalSQL
Connection = LocalSqlConnection
DataSets = <
item
DataSet = mtAC
end
item
DataSet = qryAC
end>
end
object FDGUIxWaitCursor1: TFDGUIxWaitCursor
Provider = 'Forms'
end
object FDPhysSQLiteDriverLink1: TFDPhysSQLiteDriverLink
end
object FDQuery1: TFDQuery
Connection = LocalSqlConnection
end
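By way of illustration, a minimal sketch of feeding a statement to FDQuery1 (the SQL text is an example of my own, assuming the mtAC/qryAC datasets registered above; LocalSQL parses SQLite syntax):

// Delete rows from qryAC that no longer exist in mtAC; the LocalSQL engine
// runs this against the two registered datasets rather than the database.
FDQuery1.SQL.Text :=
  'delete from qryAC where not exists ' +
  '(select * from mtAC where mtAC.WLPid = qryAC.WLPid)';
FDQuery1.ExecSQL;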
If anyone is interested:
The problem was in not refreshing qryAllowanceCategory after the initial SQL setting StillHere to false. The in-memory version of the record in qryAllowanceCategory didn't get that update, so as far as the dataset was concerned, the flag was still on; after the field assignments there appeared to be no changes (all the other fields were unchanged as well), so the Post was ignored. In the actual table the flag was off, though, so the final delete SQL removed everything.
The problem was solved by adding a refresh after the first UPDATE SQL statement.
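In code, the fix amounts to something like this (a sketch based on the snippets above):

// Clear the flags in the database, then refresh the in-memory dataset so
// its copy of StillHere matches the table before the Edit/Post loop runs.
conn.ExecSQL('UPDATE AllowanceCategory SET StillHere=false;');
qryAllowanceCategory.Refresh;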

Return 2 resultset from cursor based on one query (nested cursor)

I'm trying to obtain 2 different resultsets from a stored procedure, based on a single query. What I'm trying to do is:
1.) return the query result into an OUT cursor;
2.) from this cursor's results, get the longest value in each column and return that as a second OUT
resultset.
I'm trying to avoid doing the same thing twice with this - getting the data, and after that getting the longest column values of that same data. I'm not sure if this is even possible, but if it is, can somebody show me how?
This is an example of what I want to do (just for illustration):
CREATE OR REPLACE PROCEDURE MySchema.Test(RESULT OUT SYS_REFCURSOR,MAX_RESULT OUT SYS_REFCURSOR)
AS
BEGIN
OPEN RESULT FOR SELECT Name,Surname FROM MyTable;
OPEN MAX_RESULT FOR SELECT Max(length(Name)),Max(length(Surname)) FROM RESULT; --error here
END Test;
This example fails to compile with "ORA-00942: table or view does not exist".
I know it's a silly example, but I've been investigating and testing all sorts of things (implicit cursors, fetching cursors, nested cursors, etc.) and found nothing that would help me, especially when working with stored procedures returning multiple resultsets.
My overall goal with this is to shorten data-export time for Excel. Currently I have to run the same query twice - once for calculating the data size to autofit Excel columns, and then for writing the data into Excel.
I believe that manipulating the first resultset in order to get the second one would be much faster - with fewer DB cycles.
I'm using Oracle 11g. Any help much appreciated.
Each row of data from a cursor can be read exactly once; once the next row (or set of rows) has been read from the cursor, the previous row (or set of rows) cannot be returned to, and the cursor cannot be re-used. So what you are asking is impossible: if you read the cursor to find the maximum values (ignoring that you can't use a cursor as a source in a SELECT statement but, instead, could read it using a PL/SQL loop), then the cursor's rows would have been "used up" and the cursor closed, so it could not be read from when it is returned from the procedure.
You would need to use two separate queries:
CREATE PROCEDURE MySchema.Test(
RESULT OUT SYS_REFCURSOR,
MAX_RESULT OUT SYS_REFCURSOR
)
AS
BEGIN
OPEN RESULT FOR
SELECT Name,
Surname
FROM MyTable;
OPEN MAX_RESULT FOR
SELECT MAX(LENGTH(Name)) AS max_name_length,
MAX(LENGTH(Surname)) AS max_surname_length
FROM MyTable;
END Test;
/
Just for theoretical purposes, it is possible to read from the table only once if you bulk collect the data into a collection and then select from a table-collection expression. However, it is going to be more complicated to code and maintain, it requires that the rows from the table are stored in memory (which your DBA might not appreciate if the table is large), and it may not perform any better than just querying the table twice, as you'll end up with three SELECT statements instead of two.
Something like:
CREATE TYPE test_obj IS OBJECT(
name VARCHAR2(50),
surname VARCHAR2(50)
);
CREATE TYPE test_obj_table IS TABLE OF test_obj;
CREATE PROCEDURE MySchema.Test(
RESULT OUT SYS_REFCURSOR,
MAX_RESULT OUT SYS_REFCURSOR
)
AS
t_names test_obj_table;
BEGIN
SELECT Name,
Surname
BULK COLLECT INTO t_names
FROM MyTable;
OPEN RESULT FOR
SELECT * FROM TABLE( t_names );
OPEN MAX_RESULT FOR
SELECT MAX(LENGTH(Name)) AS max_name_length,
MAX(LENGTH(Surname)) AS max_surname_length
FROM TABLE( t_names );
END Test;
/

Count occurrence of values in a serialized attribute(array) in Active Admin dashboard (Rails, Active admin 1.0, Postgresql database, postgres_ext gem)

I'd like to have a basic table summing up the number of occurrences of values inside arrays.
My app is a Daily Deal app built to learn more Ruby on Rails.
I have a model Deal, which has an attribute called deal_goal. It's a multiple select which is serialized into an array.
Here is deal_goal taken from schema.rb:
t.string "deal_goal", :array => true
So deal A can have deal_goal = [traffic, qualification] and another deal can have deal_goal = [branding, traffic, acquisition].
What I'd like to build is a table in my dashboard which would take each type of goal (each value in the array) and count the number of deals whose deal_goal array contains that goal.
My objective is to have this table:
How can I achieve this? I think I would need to group the deal_goal arrays by each type of value and then count the number of times each goal appears in the arrays. I'm quite new to RoR and can't manage to do it.
Here is my code so far:
column do
panel "top of Goals" do
table_for Deal.limit(10) do
column ("Goal"), :deal_goal ????
# add 2 columns:
'nb of deals with this goal'
'Share of deals with this goal'
end
end
Any help would be much appreciated!
I can't think of any clean way to get the results you're after through ActiveRecord but it is pretty easy in SQL.
All you're really trying to do is open up the deal_goal arrays and build a histogram based on the opened arrays. You can express that directly in SQL this way:
with expanded_deals(id, goal) as (
select id, unnest(deal_goal)
from deals
)
select goal, count(*) n
from expanded_deals
group by goal
And if you want to include all four goals even if they don't appear in any of the deal_goals then just toss in a LEFT JOIN to say so:
with
all_goals(goal) as (
values ('traffic'),
('acquisition'),
('branding'),
('qualification')
),
expanded_deals(id, goal) as (
select id, unnest(deal_goal)
from deals
)
select all_goals.goal goal,
count(expanded_deals.id) n
from all_goals
left join expanded_deals using (goal)
group by all_goals.goal
SQL Demo: http://sqlfiddle.com/#!15/3f0af/20
Throw one of those into a select_rows call and you'll get your data:
Deal.connection.select_rows(%q{ SQL goes here }).each do |row|
goal = row.first
n = row.last.to_i
#....
end
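For instance, combining that with the first query above (a sketch; what you do with each goal/count pair is up to your panel code):

sql = %q{
  with expanded_deals(id, goal) as (
    select id, unnest(deal_goal)
    from deals
  )
  select goal, count(*) n
  from expanded_deals
  group by goal
}

Deal.connection.select_rows(sql).each do |row|
  goal = row.first
  n = row.last.to_i
  # e.g. collect { goal => n } pairs to feed the table_for in the dashboard
end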
There's probably a lot going on here that you're not familiar with so I'll explain a little.
First of all, I'm using WITH and Common Table Expressions (CTE) to simplify the SELECTs. WITH is a standard SQL feature that allows you to produce SQL macros or inlined temporary tables of a sort. For the most part, you can take the CTE and drop it right in the query where its name is:
with some_cte(colname1, colname2, ...) as ( some_pile_of_complexity )
select * from some_cte
is like this:
select * from ( some_pile_of_complexity ) as some_cte(colname1, colname2, ...)
CTEs are the SQL way of refactoring an overly complex query/method into smaller and easier to understand pieces.
unnest is an array function which unpacks an array into individual rows. So if you say unnest(ARRAY[1,2]), you get two rows back: 1 and 2.
VALUES in PostgreSQL is used to, more or less, generate inlined constant tables. You can use VALUES anywhere you could use a normal table, it isn't just some syntax that you throw in an INSERT to tell the database what values to insert. That means that you can say things like this:
select * from (values (1), (2)) as dt
and get the rows 1 and 2 out. Throwing that VALUES into a CTE makes things nice and readable and makes it look like any old table in the final query.

Conditionally inserting from a stored procedure

I am trying to teach myself SQL. I have a WebMatrix project I am working on to edit and display posts, backed by a SQL Server database. A work colleague suggested I use a stored procedure to commit the post rather than writing the SQL inline.
So far the procedure looks OK, but I would like to check whether the URL slug already exists and, if so, return something to say so (the URL slug should be unique). I'm struggling with how I am supposed to check before the insert. I have also read that it is bad practice to return from a stored procedure, but I thought it would be a good idea to return something to let the caller know the insert did not go ahead.
Any help would be very much appreciated.
-- =============================================
-- Author: Dean McDonnell
-- Create date: 05/12/2011
-- Description: Commits an article to the database.
-- =============================================
CREATE PROCEDURE CommitPost
@UrlSlug VARCHAR(100),
@Heading VARCHAR(100),
@SubHeading VARCHAR(300),
@Body VARCHAR(MAX)
AS
INSERT INTO Posts(UrlSlug, Heading, SubHeading, Body, Timestamp)
VALUES(@UrlSlug, @Heading, @SubHeading, @Body, GETDATE())
This is what I have so far.
CREATE PROCEDURE CommitPost
@UrlSlug VARCHAR(100),
@Heading VARCHAR(100),
@SubHeading VARCHAR(300),
@Body VARCHAR(MAX)
AS
IF NOT EXISTS (SELECT * FROM Posts WHERE UrlSlug = @UrlSlug)
INSERT INTO Posts(UrlSlug, Heading, SubHeading, Body, Timestamp)
VALUES(@UrlSlug, @Heading, @SubHeading, @Body, GETDATE())
SELECT @@ROWCOUNT
To check for existence, do a SELECT COUNT like so:
CREATE PROCEDURE CommitPost
@UrlSlug VARCHAR(100),
@Heading VARCHAR(100),
@SubHeading VARCHAR(300),
@Body VARCHAR(MAX)
AS
DECLARE @count INT
SELECT @count = COUNT(*) FROM Posts WHERE UrlSlug = @UrlSlug
IF @count = 0
BEGIN
INSERT INTO Posts(UrlSlug, Heading, SubHeading, Body, Timestamp)
VALUES(@UrlSlug, @Heading, @SubHeading, @Body, GETDATE())
END
You may put a unique index on UrlSlug to make the database reject insertions of URLs already in the database, but nonetheless you should check before inserting.
If your caller wants to know whether the row was inserted, return the @count value: if it's 0, the row was inserted; otherwise not. I'm not aware of any "bad practice" regarding returning values from an SP. As an SP does not have a result as such, though, you need to use an OUT parameter.
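For reference, a minimal sketch of that unique index (the index name is my own):

CREATE UNIQUE INDEX UX_Posts_UrlSlug ON Posts (UrlSlug)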
If you only do one SQL statement like this INSERT, you could just use a parameterized query (I assume that you are using .NET).
If you want to return values I would suggest that you use a FUNCTION instead of a STORED PROCEDURE. You can return either tables or whatever you want from a function.
There are some limitations, though. You can dig a little deeper into the differences to see which is used when. Here's a link that can help you get started:
Function vs. Stored Procedure in SQL Server
If you want to use a stored procedure anyway, you can either return a single-row, single-column result set using SELECT, or just use an output parameter.
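A sketch of the output-parameter variant, building on the procedure above (the @Inserted name is an assumption):

CREATE PROCEDURE CommitPost
    @UrlSlug VARCHAR(100),
    @Heading VARCHAR(100),
    @SubHeading VARCHAR(300),
    @Body VARCHAR(MAX),
    @Inserted BIT OUTPUT
AS
BEGIN
    -- Report back whether the insert actually happened.
    SET @Inserted = 0
    IF NOT EXISTS (SELECT * FROM Posts WHERE UrlSlug = @UrlSlug)
    BEGIN
        INSERT INTO Posts(UrlSlug, Heading, SubHeading, Body, Timestamp)
        VALUES(@UrlSlug, @Heading, @SubHeading, @Body, GETDATE())
        SET @Inserted = 1
    END
END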
If you want to perform actions depending on whether the row exists or not, I would suggest that you look into the MERGE statement. That way you perform only one query against the database instead of two or more (doing a SELECT and then an INSERT).
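A minimal sketch of the MERGE approach, assuming the same Posts table as above:

CREATE PROCEDURE CommitPost
    @UrlSlug VARCHAR(100),
    @Heading VARCHAR(100),
    @SubHeading VARCHAR(300),
    @Body VARCHAR(MAX)
AS
    MERGE Posts AS target
    USING (SELECT @UrlSlug AS UrlSlug) AS source
    ON target.UrlSlug = source.UrlSlug
    WHEN NOT MATCHED THEN
        INSERT (UrlSlug, Heading, SubHeading, Body, Timestamp)
        VALUES (@UrlSlug, @Heading, @SubHeading, @Body, GETDATE());
    -- @@ROWCOUNT is 1 if the row was inserted, 0 if the slug already existed.
    SELECT @@ROWCOUNT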
There are also other ways to access the database, like the various ORMs that sit on top of it and will make your life easier, such as LINQ to SQL. There are a lot of possibilities out there; you need to determine what's best in a given situation.