I have a problem with a view that selects from a procedure containing dynamic SQL.
Here is some code to reproduce my problem (note that this is a simplified version of what I am actually trying to handle).
CREATE TABLE [dbo].[DataTable]
(
[id] [uniqueidentifier] NOT NULL CONSTRAINT [DF_DataTable_id] DEFAULT (newid()),
[Order_ID] [uniqueidentifier] NOT NULL,
[DataType] [varchar](50) NOT NULL,
[DataValue] [varchar](50) NOT NULL,
CONSTRAINT [PK_DataTable]
PRIMARY KEY CLUSTERED ([id] ASC)
) ON [PRIMARY]
GO
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('F8BB99BB-FE94-4161-B449-00C5E81FA991', 'Order_No', '123')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('E0C7C86E-74B9-4F37-AF41-5BD82A0CE2FF', 'Order_No', '124')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('988E3B5A-B9FA-486C-9755-E4F7AF1A9964', 'Order_No', '125')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('2690A51B-5CBC-42F9-9F76-94D7F9BF2FD4', 'Order_No', '126')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('F8BB99BB-FE94-4161-B449-00C5E81FA991', 'order_Status', 'OK')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('E0C7C86E-74B9-4F37-AF41-5BD82A0CE2FF', 'order_Status', 'OK')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('988E3B5A-B9FA-486C-9755-E4F7AF1A9964', 'order_Status', 'NOK')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('2690A51B-5CBC-42F9-9F76-94D7F9BF2FD4', 'order_Status', 'pending')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('F8BB99BB-FE94-4161-B449-00C5E81FA991', 'Level1_Status', '2')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('E0C7C86E-74B9-4F37-AF41-5BD82A0CE2FF', 'Level1_Status', '5')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('988E3B5A-B9FA-486C-9755-E4F7AF1A9964', 'Level2_Status', '3')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('2690A51B-5CBC-42F9-9F76-94D7F9BF2FD4', 'Level2_Status', '1')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('F8BB99BB-FE94-4161-B449-00C5E81FA991', 'customer', 'John')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('E0C7C86E-74B9-4F37-AF41-5BD82A0CE2FF', 'customer', 'Nancy')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('988E3B5A-B9FA-486C-9755-E4F7AF1A9964', 'location', 'Germany')
insert into dbo.DataTable (Order_ID, DataType, DataValue)
VALUES ('2690A51B-5CBC-42F9-9F76-94D7F9BF2FD4', 'information', 'very important')
GO
CREATE PROCEDURE [dbo].[sp_order_List]
AS
BEGIN
    DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX);

    -- build the quoted column list from the distinct DataType values
    SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(DataType)
                          FROM dbo.DataTable
                          FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

    -- pivot the key/value rows into one row per Order_ID
    SET @query =
        'select * from
         (SELECT Order_ID, DataType, DataValue FROM dbo.DataTable) ps
         PIVOT ( MIN(DataValue) FOR DataType IN (' + @cols + ')) AS pvt';

    EXECUTE (@query);
END
GO
In SQL Server 2008 R2 it was possible to put a view over the procedure, so I could access the data in the format I needed and everything was fine.
Please note that normally I don't just select * in my view; I join, sub-select and convert the data together with some other tables to get my final result, depending on my needs.
exec sp_serveroption @server = 'YOURSERVER\SQL2012'
,@optname = 'DATA ACCESS'
,@optvalue = 'TRUE'
go
create view [dbo].[vw_order_list]
as
SELECT a.*
FROM openquery([YourServer\SQL2012], 'SET FMTONLY OFF;exec TestDatabase.dbo.sp_order_List') AS a
GO
Now my company wants to upgrade to SQL Server 2012, and this is where my problems start.
When I try to create the view I get this error message:
Error 11514 – The metadata could not be determined because statement
in procedure contains dynamic SQL. Consider using the WITH RESULT SETS
clause to explicitly describe the result set.
The problem is that the columns are generated automatically depending on the values in DataType, so I can't give the view a fixed result set.
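For reference, this is roughly what the suggested WITH RESULT SETS clause would look like if the column list were known up front; the column names below are only illustrative (they are the DataType values from the sample data), which is exactly the problem: the list changes as soon as a new DataType value shows up.
EXEC TestDatabase.dbo.sp_order_List
WITH RESULT SETS
(
    (
        Order_ID      uniqueidentifier,
        customer      varchar(50),
        information   varchar(50),
        Level1_Status varchar(50),
        Level2_Status varchar(50),
        location      varchar(50),
        Order_No      varchar(50),
        order_Status  varchar(50)
    )
);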
I have already tried everything that came to mind and tried to find a solution with Google, without success.
Please feel free to ask if something is unclear or if you have any suggestions.
Using a view is not required, but it would make my life a lot easier.
Just keep in mind that I need to join different tables depending on the user's request in the software.
Ideas I already had, but had issues with:
something with a temp table (didn't get it to work; see the sketch after this list)
creating the view dynamically from a procedure. The problem is that there is a chance somebody requests the view while I am recreating it with the procedure, and I don't know how to prevent that.
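For completeness, the temp-table idea from the list above would look roughly like this (the question notes it could not be made to work); in any case a view cannot be defined on a temporary table, so this shape would have to live in another procedure or an ad-hoc batch:
-- SELECT ... INTO creates #order_list with whatever columns the procedure returns
SELECT a.*
INTO #order_list
FROM openquery([YourServer\SQL2012], 'SET FMTONLY OFF;exec TestDatabase.dbo.sp_order_List') AS a;

SELECT * FROM #order_list;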
Related
I create a stored procedure as follows:
REPLACE PROCEDURE mydb.sp_Insert_Values (
IN lastExecDate timestamp
)
SQL SECURITY CREATOR
BEGIN
CREATE MULTISET VOLATILE TABLE vt_ref_table_1
(
Ref_Id integer,
Ref_Unit_Type varchar(50)
) ON COMMIT PRESERVE ROWS;
insert into mydb.vt_ref_table_1
select Ref_Id, Ref_Unit_Type
from mydb.ref_table_1;
INSERT INTO mydb.Time_Series_Table
select
t1.TD_TIMECODE
, t1.Time_Series_Meas
, t1.Time_Series_Curve_Type_CD
, t1.Ref_Unit_Type
, t1.Ref_Id
, t1.created_on
from
(
select
meas_ts as TD_TIMECODE
, ref_table_1.Ref_Id as Ref_Id
, meas as Time_Series_Meas
, Time_Series_Curve_Type_CD
, Ref_Unit_Type
, current_timestamp as created_on
from mydb_stg.Time_Series_Table_Stg as stg
left join mydb.ref_table_1 as ref_table_1
on ref_table_1.Ref_Id = stg.Ref_Id
where stg.created_on >= :lastExecDate
) as t1
left join mydb.Time_Series_Table as t2
on t1.TD_TIMECODE=t2.TD_TIMECODE
and t1.Time_Series_Curve_Type_CD = t2.Time_Series_Curve_Type_CD
and t1.Ref_Id=t2.Ref_Id
where t2.Ref_Id is null
;
END;
It compiles but when I call it this error is thrown:
Only a COMMIT WORK or null statement is legal after a DDL Statement.
I know the error is related to the volatile table but I don't know how to correct it.
Why I need the volatile table:
The reference table has a row-level-security constraint. If I use it directly I get another error:
A multi-table operation is executed and the tables do not have the
same security constraints.
Teradata Version: 16.20
Mode: ANSI
Workaround:
create the VOLATILE table first in your session
create the procedure in the same session
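A rough sketch of that workaround: the DDL is run (and, since the session is in ANSI mode, committed) in the session first, and the procedure is then replaced without any DDL in its body:
CREATE MULTISET VOLATILE TABLE vt_ref_table_1
(
Ref_Id integer,
Ref_Unit_Type varchar(50)
) ON COMMIT PRESERVE ROWS;

COMMIT WORK;  -- ANSI mode: the DDL must be followed by a COMMIT

REPLACE PROCEDURE mydb.sp_Insert_Values (
IN lastExecDate timestamp
)
SQL SECURITY CREATOR
BEGIN
-- no CREATE VOLATILE TABLE here any more, only DML;
-- the volatile table is session-local, so it is referenced without the mydb. qualifier
insert into vt_ref_table_1
select Ref_Id, Ref_Unit_Type
from mydb.ref_table_1;

-- the INSERT INTO mydb.Time_Series_Table from the original procedure follows unchanged
END;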
I have a MySQL DB with Rails, and a column "shorthand" (string) that I'd like to make unique across multiple tables. Is there a way I can do this without making a third table?
Expression
    id
    shorthand
    ...
    etc

Variable
    id
    shorthand
    ...
    etc
I want the values in the 'shorthand' columns of both tables to be unique between each other, i.e. a record with shorthand value "xyz" in Expression would be rejected if a Variable with shorthand value "xyz" already existed in the DB.
Any thoughts appreciated, even "you have to use a third table" :)
Here is an example using a third table:
-- TEMP SCHEMA for testing
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
CREATE TABLE shorthand
( shorthand varchar NOT NULL PRIMARY KEY
, one_or_two varchar NOT NULL
);
CREATE TABLE table_one
( one_id INTEGER NOT NULL PRIMARY KEY
, shorthand varchar NOT NULL REFERENCES shorthand(shorthand)
ON DELETE CASCADE ON UPDATE CASCADE
DEFERRABLE INITIALLY DEFERRED
, etc_one varchar
);
CREATE TABLE table_two
( two_id INTEGER NOT NULL PRIMARY KEY
, shorthand varchar NOT NULL REFERENCES shorthand(shorthand)
ON DELETE CASCADE ON UPDATE CASCADE
DEFERRABLE INITIALLY DEFERRED
, etc_two varchar
);
-- Trigger function for BOTH tables
CREATE FUNCTION set_one_or_two( ) RETURNS TRIGGER
AS $func$
BEGIN
IF (TG_OP = 'INSERT') THEN
INSERT INTO shorthand (shorthand, one_or_two)
VALUES(new.shorthand, TG_TABLE_NAME)
;
ELSEIF (TG_OP = 'UPDATE') THEN
UPDATE shorthand SET shorthand = new.shorthand
WHERE shorthand = old.shorthand
;
ELSEIF (TG_OP = 'DELETE') THEN
DELETE FROM shorthand
WHERE shorthand = old.shorthand
;
END IF;
RETURN NULL;
END
$func$ LANGUAGE plpgsql
;
-- Triggers for I/U/D
CREATE CONSTRAINT TRIGGER check_one
AFTER INSERT OR UPDATE OR DELETE
ON table_one
FOR EACH ROW
EXECUTE PROCEDURE set_one_or_two ( )
;
CREATE CONSTRAINT TRIGGER check_two
AFTER INSERT OR UPDATE OR DELETE
ON table_two
FOR EACH ROW
EXECUTE PROCEDURE set_one_or_two ( )
;
-- Some tests (incomplete)
INSERT INTO table_one (one_id,shorthand,etc_one) VALUES (1, 'one' , 'one' );
INSERT INTO table_two (two_id,shorthand,etc_two) VALUES (1, 'two' , 'two' );
SELECT * FROM shorthand;
\echo this should fail
INSERT INTO table_one (one_id,shorthand,etc_one) VALUES (11, 'two' , 'eleven' );
SELECT * FROM shorthand;
UPDATE table_one SET shorthand = 'eleven' WHERE one_id = 1;
SELECT * FROM shorthand;
I think this older article does exactly what you are looking for (simulating multi-table constraints):
http://classes.soe.ucsc.edu/cmps180/Winter04/constraints.html
You might also like to investigate postgres CREATE CONSTRAINT TRIGGER using a function similar to the check_nojoin() function in the article.
http://www.postgresql.org/docs/9.0/static/sql-createconstraint.html
Once you have the exact SQL you need you can put it in your rails migration with execute "the required SQL"
An alternative approach is to use a third table 'shorthands' with columns 'shorthand' and 'src'. Define 'shorthand' as the unique primary key on that table. On each of your other two tables define 'src' as a single-character field defaulting to 'A' and 'B' respectively. Then add a foreign key constraint on each of your two tables consisting of both 'shorthand' and 'src', referencing table 'shorthands'. When inserting or updating rows in either of your two tables you need to ensure the 'shorthands' table is updated as well, either explicitly as part of your transaction or via a trigger, setting 'shorthand' and 'src' to the value of the respective table, i.e. 'A' or 'B'.
What the foreign key constraints do is ensure that the shorthand value exists in the 'shorthands' table for the respective source table. But because of the uniqueness constraint on just the 'shorthand' column in the 'shorthands' table, if the other table has already claimed that shorthand value, a key violation occurs, which guarantees uniqueness across the two (or even more) tables.
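A rough sketch of that alternative, assuming MySQL/InnoDB and illustrative table names and lengths (note the extra unique key on (shorthand, src), which the composite foreign keys need as their target):
CREATE TABLE shorthands (
  shorthand VARCHAR(50) NOT NULL,
  src       CHAR(1)     NOT NULL,
  PRIMARY KEY (shorthand),                       -- guarantees global uniqueness
  UNIQUE KEY uq_shorthand_src (shorthand, src)   -- target for the composite FKs below
) ENGINE=InnoDB;

CREATE TABLE expressions (
  id        INT NOT NULL PRIMARY KEY,
  shorthand VARCHAR(50) NOT NULL,
  src       CHAR(1)     NOT NULL DEFAULT 'A',
  FOREIGN KEY (shorthand, src) REFERENCES shorthands (shorthand, src)
) ENGINE=InnoDB;

CREATE TABLE variables (
  id        INT NOT NULL PRIMARY KEY,
  shorthand VARCHAR(50) NOT NULL,
  src       CHAR(1)     NOT NULL DEFAULT 'B',
  FOREIGN KEY (shorthand, src) REFERENCES shorthands (shorthand, src)
) ENGINE=InnoDB;

-- registering 'xyz' for an Expression; a later INSERT INTO shorthands ('xyz', 'B')
-- for a Variable fails on the primary key, so the Variable row can never be created
INSERT INTO shorthands  (shorthand, src) VALUES ('xyz', 'A');
INSERT INTO expressions (id, shorthand, src) VALUES (1, 'xyz', 'A');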
Whatever you do, it is best to put the referential integrity into the database, not in orm/active record validations.
I have to do a big update script - not an SPL (stored procedure).
It's to be written for an Informix db.
It involves inserting rows into multiple tables, each of which relies on the serial of the insert into the previous table.
I know I can get access to the serial by doing this:
SELECT DISTINCT dbinfo('sqlca.sqlerrd1') FROM systables
but I can't seem to define a local variable to store this before the insert into the next table.
I want to do this:
insert into table1 (serial, data1, data2) values (0, 'newdata1', 'newdata2');
define serial1 as int;
let serial1 = SELECT DISTINCT dbinfo('sqlca.sqlerrd1') FROM systables;
insert into table2 (serial, data1, data2) values (0, serial1, 'newdata3');
But of course Informix chokes on the define line.
Is there a way to do this without having to create this as a stored procedure, run it once and then delete the procedure?
If the number of columns in the tables involved is as few as in your example, then you could make the SPL permanent and use it to insert your data, e.g.:
EXECUTE PROCEDURE insert_related_tables('newdata1','newdata2','newdata3');
Obviously that doesn't scale terribly well, but is OK for your example.
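A minimal sketch of what such a permanent SPL might look like, reusing the table layout and the DBINFO trick from the question (the procedure and parameter names are made up):
CREATE PROCEDURE insert_related_tables(p_data1 VARCHAR(32), p_data2 VARCHAR(32), p_data3 VARCHAR(32))
    DEFINE serial1 INTEGER;

    -- parent row; the literal 0 lets the SERIAL column generate the next value
    INSERT INTO Table1(serial, data1, data2) VALUES (0, p_data1, p_data2);

    -- capture the serial that was just generated
    SELECT DISTINCT DBINFO('sqlca.sqlerrd1') INTO serial1 FROM systables;

    -- child row referencing the parent's serial
    INSERT INTO Table2(serial, data1, data2) VALUES (0, serial1, p_data3);
END PROCEDURE;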
Another thought that expands on Jonathan's example and solves any concurrency issues that might arise from the use of MAX() would be to include DBINFO('sessionid') in Table3:
DELETE FROM Table3 WHERE sessionid = DBINFO('sessionid');
INSERT INTO Table1 (...);
INSERT INTO Table3 (sessionid, value)
VALUES (DBINFO('sessionid'), DBINFO('sqlca.sqlerrd1'));
INSERT INTO Table2
VALUES (0, (SELECT value FROM Table3
            WHERE sessionid = DBINFO('sessionid')), 'newdata3');
...
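For that variant, Table3 itself needs the extra column, something like:
CREATE TABLE Table3
(
    sessionid INTEGER NOT NULL,
    value     INTEGER NOT NULL
);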
You could also make Table3 a TEMP table:
INSERT INTO Table1 (...);
SELECT DISTINCT DBINFO('sqlca.sqlerrd1') AS serial_value
FROM some_dummy_table_like_systables
INTO TEMP Table3 WITH NO LOG;
INSERT INTO Table2 (...);
Informix does not provide a mechanism outside of stored procedures for 'local variables' of the type you want. However, in the limited example you provide, this works:
CREATE TABLE Table1
(
serial SERIAL(123) NOT NULL,
data1 VARCHAR(32) NOT NULL,
data2 VARCHAR(32) NOT NULL
);
CREATE TABLE Table2
(
serial SERIAL NOT NULL,
data1 INTEGER NOT NULL,
data2 VARCHAR(32) NOT NULL
);
INSERT INTO Table1(Serial, Data1, Data2)
VALUES(0, 'newdata1', 'newdata2');
INSERT INTO Table2(Serial, Data1, Data2)
VALUES(0, DBINFO('sqlca.sqlerrd1'), 'newdata3');
SELECT * FROM Table1;
123 newdata1 newdata2
SELECT * FROM Table2;
1 123 newdata3
However, this works only because you need to insert one row into Table2. If you needed to insert more, the technique would not work well. You could, I suppose, use:
CREATE TEMP TABLE Table3
(
value INTEGER NOT NULL
);
INSERT INTO Table1(Serial, Data1, Data2)
VALUES(0, 'newdata1', 'newdata2');
INSERT INTO Table3(Value)
VALUES(DBINFO('sqlca.sqlerrd1'));
INSERT INTO Table2(Serial, Data1, Data2)
VALUES(0, (SELECT MAX(value) FROM Table3), 'newdata3');
INSERT INTO Table2(Serial, Data1, Data2)
VALUES(0, (SELECT MAX(value) FROM Table3), 'newdata4');
And so on. The temporary table for Table3 avoids problems with concurrency and MAX().
create table test_tables (
id number(38) primary key,
name varchar2(50),
age number(5),
gender varchar2(10),
point number(5,2)
);
insert into test_tables values (1,'name1',20,'male',80.5);
insert into test_tables values (2,'name2',21,'female',60.5);
insert into test_tables values (3,'name3',23,'male',90.5);
insert into test_tables values (4,'name4',19,'male',79.5);
insert into test_tables values (5,'name5',18,'female',80.5);
The result of TestTable.sum(:point) is 391.
Why not 391.5?
I checked the rdoc, and there is this description:
The value is returned with the same data type of the column
But in my case the type of 'point' is number(5,2).
Isn't that a float?
I'm not especially familiar with SQLite but I think you should be using a DECIMAL type in your create table statement. More details:
Active Record SQLite Type Mapping
SQLite Types
Be careful of this issue though. It seems that the sqlite3 ruby driver insists on creating Float ruby objects rather than BigDecimal for DECIMAL query results.
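A minimal sketch of the suggested change, with the point column declared as DECIMAL (the other columns are translated to generic types only for illustration):
CREATE TABLE test_tables (
  id     INTEGER PRIMARY KEY,
  name   VARCHAR(50),
  age    INTEGER,
  gender VARCHAR(10),
  point  DECIMAL(5,2)   -- the declared type is what ActiveRecord maps to a column type
);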
I have a table called messages. I don't want id to be an auto-increment field, but it should still be the primary key of that table.
Here is the table structure for messages:
CREATE TABLE `messages` (
`id` INT(11) NOT NULL,
`user_id` INT(11) NOT NULL,
`text` VARCHAR(255) NOT NULL,
`source` VARCHAR(100),
`created_at` DATETIME DEFAULT NULL,
`updated_at` DATETIME DEFAULT NULL,
PRIMARY KEY (`id`)
);
While inserting data into the table I am using the hash object below:
msg['id'] = 12345;
msg['user_id'] = 1;
msg['text'] = 'Hello world';
If I save this hash into the messages table, the id is not inserted:
message = Message.new(msg);
message.save!
Rails is building the insert SQL without the id, so the id value is not inserted into the messages table.
How can I insert the id value into the table? This is the insert SQL Rails builds, without the id field:
INSERT INTO `users` (`updated_at`, `user_id `, `text`, `created_at`) VALUES('2010-06-18 12:01:05', '1', 'Hello world', '2010-06-18 12:01:05');
Setting the ID value is often useful when migrating legacy data or, as I am doing right now, merging two apps while preserving FK integrity.
I just scratched my head for a while and it seems you have to set the PK value before calling save. After the record is saved, ActiveRecord ignores #id= or update_attribute. So while setting up the record from an attribute hash I use:
article = Article.new(attrs)
article.id = attrs["id"]
article.save!
You're working against the way Rails works. ActiveRecord reserves the use of the id column and manages it for you.
Why should id not be an auto-incrementing column if it's the primary key?
Why do you need to control its value?
If you need an id column you can control yourself, add another one. It won't be the primary key, but you can make it a unique index too.
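A minimal sketch of that last suggestion, with a hypothetical legacy_id column added alongside the auto-managed primary key:
-- nullable so existing rows are unaffected; MySQL still enforces uniqueness
-- for all non-NULL values under the unique index
ALTER TABLE `messages` ADD COLUMN `legacy_id` INT(11) NULL;
CREATE UNIQUE INDEX `index_messages_on_legacy_id` ON `messages` (`legacy_id`);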