I have to do a big update script - not an SPL (stored procedure).
It's to be written for an Informix db.
It involves inserting rows into multiple tables, each of which relies on the serial of the insert into the previous table.
I know I can get access to the serial by doing this:
SELECT DISTINCT dbinfo('sqlca.sqlerrd1') FROM systables
but I can't seem to define a local variable to store this before the insert into the next table.
I want to do this:
insert into table1 (serial, data1, data2) values (0, 'newdata1', 'newdata2');
define serial1 as int;
let serial1 = SELECT DISTINCT dbinfo('sqlca.sqlerrd1') FROM systables;
insert into table2 (serial, data1, data2) values (0, serial1, 'newdata3');
But of course Informix chokes on the define line.
Is there a way to do this without having to create this as a stored procedure, run it once and then delete the procedure?
If the number of columns in the tables involved is as small as in your example, then you could make the SPL permanent and use it to insert your data, i.e.:
EXECUTE PROCEDURE insert_related_tables('newdata1','newdata2','newdata3');
Obviously that doesn't scale terribly well, but is OK for your example.
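For reference, such a permanent procedure might look like this; a minimal sketch only, where the parameter names and VARCHAR sizes are assumptions (the tables and columns come from the question):
-- Sketch of a permanent SPL; table/column names are from the question,
-- parameter names and types are assumed for illustration.
CREATE PROCEDURE insert_related_tables(p_data1 VARCHAR(32),
                                       p_data2 VARCHAR(32),
                                       p_data3 VARCHAR(32))
    DEFINE serial1 INT;

    INSERT INTO table1 (serial, data1, data2) VALUES (0, p_data1, p_data2);
    -- DBINFO('sqlca.sqlerrd1') returns the SERIAL value just generated
    LET serial1 = DBINFO('sqlca.sqlerrd1');
    INSERT INTO table2 (serial, data1, data2) VALUES (0, serial1, p_data3);
END PROCEDURE;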
Another thought that expands on Jonathan's example and solves any concurrency issues that might arise from the use of MAX() would be to include DBINFO('sessionid') in Table3:
DELETE FROM Table3 WHERE sessionid = DBINFO('sessionid');
INSERT INTO Table1 (...);
INSERT INTO Table3 (sessionid, value)
VALUES (DBINFO('sessionid'), DBINFO('sqlca.sqlerrd1'));
INSERT INTO Table2
VALUES (0, (SELECT value FROM Table3
            WHERE sessionid = DBINFO('sessionid')), 'newdata3');
...
You could also make Table3 a TEMP table:
INSERT INTO Table1 (...);
SELECT DISTINCT DBINFO('sqlca.sqlerrd1') AS serial_value
FROM some_dummy_table_like_systables
INTO TEMP Table3 WITH NO LOG;
INSERT INTO Table2 (...);
Informix does not provide a mechanism outside of stored procedures for 'local variables' of the type you want. However, in the limited example you provide, this works:
CREATE TABLE Table1
(
serial SERIAL(123) NOT NULL,
data1 VARCHAR(32) NOT NULL,
data2 VARCHAR(32) NOT NULL
);
CREATE TABLE Table2
(
serial SERIAL NOT NULL,
data1 INTEGER NOT NULL,
data2 VARCHAR(32) NOT NULL
);
INSERT INTO Table1(Serial, Data1, Data2)
VALUES(0, 'newdata1', 'newdata2');
INSERT INTO Table2(Serial, Data1, Data2)
VALUES(0, DBINFO('sqlca.sqlerrd1'), 'newdata3');
SELECT * FROM Table1;
123 newdata1 newdata2
SELECT * FROM Table2;
1 123 newdata3
However, this works only because you need to insert one row into Table2. If you needed to insert more, the technique would not work well. You could, I suppose, use:
CREATE TEMP TABLE Table3
(
value INTEGER NOT NULL
);
INSERT INTO Table1(Serial, Data1, Data2)
VALUES(0, 'newdata1', 'newdata2');
INSERT INTO Table3(Value)
VALUES(DBINFO('sqlca.sqlerrd1'));
INSERT INTO Table2(Serial, Data1, Data2)
VALUES(0, (SELECT MAX(value) FROM Table3), 'newdata3');
INSERT INTO Table2(Serial, Data1, Data2)
VALUES(0, (SELECT MAX(value) FROM Table3), 'newdata4');
And so on... using a temporary table for Table3 avoids the concurrency problems that MAX() would have on a shared table.
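One practical note on the TEMP table approach: a temp table is private to the session but keeps its rows until it is dropped, so if the script can run more than once per session, clear it between batches:
-- Table3 persists for the session, so empty it before reusing it
DELETE FROM Table3;
-- or, once the script is completely finished:
DROP TABLE Table3;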
It seems that reordering columns in an SQLite3 table is not straightforward. At least the SQLite Manager in Firefox does not support this feature. For example, move column2 to column3 and move column5 to column2. Is there a way to reorder columns in an SQLite table, either with SQLite management software or a script?
This isn't a trivial task in any DBMS. You would almost certainly have to create a new table with the columns in the order you want and move your data from the old table to the new one. There is no ALTER TABLE statement to reorder columns, so neither SQLite Manager nor any other tool will offer a way of doing this within the same table.
If you really want to change the order, you could do:
Assuming you have tableA:
create table tableA(
col1 int,
col3 int,
col2 int);
You could create a tableB with the columns sorted the way you want:
create table tableB(
col1 int,
col2 int,
col3 int);
Then move the data to tableB from tableA:
insert into tableB
SELECT col1,col2,col3
FROM tableA;
Then remove the original tableA and rename tableB to tableA:
DROP table tableA;
ALTER TABLE tableB RENAME TO tableA;
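Since the sequence involves a DROP, it is safer to run it as one transaction so that a failure partway through cannot leave you without tableA (same table names as above):
BEGIN TRANSACTION;
-- tableB has the columns in the desired order
CREATE TABLE tableB(col1 int, col2 int, col3 int);
INSERT INTO tableB SELECT col1, col2, col3 FROM tableA;
DROP TABLE tableA;
ALTER TABLE tableB RENAME TO tableA;
COMMIT;
Keep in mind that any indexes or triggers that existed on the old tableA are dropped with it and must be recreated afterwards.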
sqlfiddle demo
You can always order the columns however you want to in your SELECT statement, like this:
SELECT column1,column5,column2,column3,column4
FROM mytable
WHERE ...
You shouldn't need to "order" them in the table itself.
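If you want that ordering to be reusable, you can wrap it in a view; a small sketch, where mytable_ordered is a hypothetical name:
-- A view presents the columns in any order without touching the table itself
CREATE VIEW mytable_ordered AS
SELECT column1, column5, column2, column3, column4
FROM mytable;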
The order in sqlite3 does matter. Conceptually, it shouldn't, but try this experiment to prove that it does:
CREATE TABLE SomeItems (
identifier INTEGER PRIMARY KEY NOT NULL,
filename TEXT NOT NULL, path TEXT NOT NULL,
filesize INTEGER NOT NULL, thumbnail BLOB,
pickedStatus INTEGER NOT NULL,
deepScanStatus INTEGER NOT NULL,
basicScanStatus INTEGER NOT NULL,
frameQuanta INTEGER,
tcFlag INTEGER,
frameStart INTEGER,
creationTime INTEGER
);
Populate the table with about 20,000 records where thumbnail is a small jpeg. Then do a couple of queries like this:
time sqlite3 Catalog.db 'select count(*) from SomeItems where filesize = 2;'
time sqlite3 Catalog.db 'select count(*) from SomeItems where basicScanStatus = 2;'
It does not matter how many records are returned: on my machine, the first query takes about 0m0.008s and the second query takes 0m0.942s. That is a massive difference, and the reason is the BLOB; filesize comes before the BLOB column and basicScanStatus comes after it.
We've now moved the Blob into its own table, and our app is happy.
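For anyone wanting to do the same, a rough sketch of the split; the companion table name is made up, and SomeItems would then lose its thumbnail column:
-- Hypothetical companion table holding only the BLOB, keyed by the same id
CREATE TABLE SomeItemThumbnails (
    identifier INTEGER PRIMARY KEY NOT NULL REFERENCES SomeItems(identifier),
    thumbnail BLOB
);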
You can reorder them using SQLite Browser.
I am trying to copy over one row from my archive table to my original table.
Without my WHERE clause, the whole table of table2 gets copied to table1.
I don't want this, of course. Based on the gridview's listed ID value, only the row whose ID matches should be copied over.
When I debug the lines I get the correct ID listed for DisplaySup.Rows(0).Cells(2).Text.
(
val_ID table2.V_ID%type
)
is
begin
execute immediate 'insert into table1 (select * from table2 where V_ID = val_ID)';
end;
Yet I get the error
ORA-00904: "VAL_ID": invalid identifier
Table2 and Table1 have identical columns, so they both have a column titled V_ID. I am unsure why val_ID is flagging an error.
VB.NET line of code:
SupArchive.Parameters.Add("val_ID", OleDbType.VarChar).Value = DisplaySup.Rows(0).Cells(2).Text
So I tried to follow this reference: EXECUTE IMMEDIATE with USING clause giving errors
Like so, to fix the WHERE clause:
(
val_ID table2.V_ID%type
)
is
begin
execute immediate 'insert into table1 (select * from table2 where V_ID = '||val_ID||')';
end;
but I get error:
ORA-00904: "val_ID": invalid identifier
Any suggestions on how to fix my stored procedure?
UPDATE:
Tried to do the suggested:
(
val_ID table2.V_ID%type
)
AS
BEGIN
execute immediate 'insert into table1 (col1, col2, col3...)(select col1, col2, col3... from table2 where V_ID = :val_ID)' using val_ID;
end;
but get error:
ORA-00904: "col72": invalid identifier
for col72 after Select statement
EXAMPLE OF MY TABLES (both are identical). The purpose of table2 is that when a row is deleted from table1, table2 can be used to re-create the user that was deleted.
Table1
ID CompanyName FirstName LastName ....(72 cols)
Table2
ID CompanyName FirstName LastName... (72 cols)
You would do best to use a bind variable in your insert statement. Also, you need to list the columns you're inserting into as well as those you're selecting, to avoid the "too many values" error.
Eg:
declare
val_ID table2.V_ID%type := 1;
begin
execute immediate 'insert into table1 (col1, col2, ...) (select col1, col2, ... from table2 where V_ID = :val_ID)' using val_id;
end;
/
Although in this instance there is absolutely no need to use dynamic sql at all, so you could just do:
declare
val_id table2.v_id%type := 1;
begin
insert into table1 (col1, col2, ...)
select col1, col2, ...
from table2
where v_id = val_id;
end;
/
Don't forget to commit after you've run the procedure!
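With 72 columns, typing the column list out twice is painful; you can generate it from the data dictionary instead. A sketch, assuming the table is in your own schema and you are on 11gR2 or later (for LISTAGG):
-- Builds a comma-separated column list for TABLE2, in column order
SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id) AS col_list
FROM user_tab_columns
WHERE table_name = 'TABLE2';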
In Power Query (the version included with Excel 2016, PC), is it possible to refer to a computed column of a related table?
Say I have an sqlite database as follow
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE products (
iddb INTEGER NOT NULL,
px FLOAT,
PRIMARY KEY (iddb)
);
INSERT INTO "products" VALUES(0,0.0);
INSERT INTO "products" VALUES(1,1.1);
INSERT INTO "products" VALUES(2,2.2);
INSERT INTO "products" VALUES(3,3.3);
INSERT INTO "products" VALUES(4,4.4);
CREATE TABLE sales (
iddb INTEGER NOT NULL,
quantity INTEGER,
product_iddb INTEGER,
PRIMARY KEY (iddb),
FOREIGN KEY(product_iddb) REFERENCES products (iddb)
);
INSERT INTO "sales" VALUES(0,0,0);
INSERT INTO "sales" VALUES(1,1,1);
INSERT INTO "sales" VALUES(2,2,2);
INSERT INTO "sales" VALUES(3,3,3);
INSERT INTO "sales" VALUES(4,4,4);
INSERT INTO "sales" VALUES(5,5,0);
INSERT INTO "sales" VALUES(6,6,1);
INSERT INTO "sales" VALUES(7,7,2);
INSERT INTO "sales" VALUES(8,8,3);
INSERT INTO "sales" VALUES(9,9,4);
COMMIT;
Basically we have products (iddb, px) and sales of those products (iddb, quantity, product_iddb).
I load this data in power query by:
A. creating an ODBC data source using the SQLite3 driver: testDSN
B. in Excel: Data / New Query, feeding it this connection string: Provider=MSDASQL.1;Persist Security Info=False;DSN=testDSN;
Now in Power Query I add a computed column, say px10 = px * 10, to the products table.
In the sales table, I can expand the product table into product.px, but not product.px10. Shouldn't this be doable? (In this simplified example I could expand product.px first and then create the px10 column in the sales table, but then any new table needing px10 from product would require me to repeat the work...)
Any inputs appreciated.
I would add a Merge step from the sales query to connect it to the product query (which will include your calculated column). Then I would expand the Table returned to get your px10 column.
This is instead of expanding the Value column representing the product SQL table, which gets generated using the SQL foreign key.
You will have to come back and add any further columns added to the product query to the expansion list, but at least the column definition is only in one place.
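In M, that merge-and-expand looks roughly like this; a sketch only, which assumes your two queries are named sales and product:
// Join sales to product on the foreign key, then pull in the computed column
let
    Source = sales,
    Merged = Table.NestedJoin(Source, {"product_iddb"}, product, {"iddb"},
                              "productTbl", JoinKind.LeftOuter),
    Expanded = Table.ExpandTableColumn(Merged, "productTbl",
                                       {"px10"}, {"product.px10"})
in
    Expanded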
In functional programming you don't modify existing values, only create new ones. When you add the new column to product, that creates a new table; it doesn't modify the product table that shows up in related tables. A column added over product can't show up in the ODBC tables unless you apply that transformation to all related tables.
What you could do is generalize the "add a computed column" into a function that takes a table or record and adds the extra field. Then just apply that over each table in your database.
Here's an example against Northwind in SQL Server
let
Source = Sql.Database(".", "Northwind_Copy"),
AddAColumn = (data) => if data is table then Table.AddColumn(data, "UnitPrice10x", each [UnitPrice] * 10)
else if data is record then Record.AddField(data, "UnitPrice10x", data[UnitPrice] * 10)
else data,
TransformedSource = Table.TransformColumns(Source, {"Data", (data) => if data is table then Table.TransformColumns(data, {"Products", AddAColumn}, null, MissingField.Ignore) else data}),
OrderDetails = TransformedSource{[Schema="dbo",Item="Order Details"]}[Data],
#"Expanded Products" = Table.ExpandRecordColumn(OrderDetails, "Products", {"UnitPrice", "UnitPrice10x"}, {"Products.UnitPrice", "Products.UnitPrice10x"})
in
#"Expanded Products"
I have two Hive tables and I am trying to join both of them. The tables are not clustered or partitioned by any field. Though the tables contain records for common key fields, the join query always returns 0 records. All the data types are 'string' data types.
The join query is simple and looks something like below
select count(*) cnt
from
fsr.xref_1 A join
fsr.ipfile_1 B
on
(
A.co_no = B.co_no
)
;
Any idea what could be going wrong? I have just one record (with the same value) in both tables.
Below are my table definitions
CREATE TABLE xref_1
(
co_no string
)
clustered by (co_no) sorted by (co_no asc) into 10 buckets
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
CREATE TABLE ipfile_1
(
co_no string
)
clustered by (co_no) sorted by (co_no asc) into 10 buckets
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
Hi, you are using a star schema join. Please write your query like this:
SELECT COUNT(*) cnt FROM A a JOIN B b ON (a.key1 = b.key1);
If you still have the issue, then use a MAPJOIN:
set hive.auto.convert.join=true;
select count(*) from A a join B b on (a.key1 = b.key1);
Please see Link for more detail.
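If the join still returns nothing after that, it's worth checking the key values themselves: with comma-delimited text files, stray whitespace around the key is a common reason a join matches zero rows. A quick diagnostic against one of the question's tables:
-- A length difference between the raw and trimmed value reveals hidden padding
SELECT co_no, length(co_no) AS raw_len, length(trim(co_no)) AS trim_len
FROM fsr.xref_1;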
I wrote a stored procedure with a table name as a parameter that checks whether there are duplicate rows in that table. The statements are built dynamically, of course:
INSERT INTO tmpTable
SELECT col1, col2, ... FROM tablename GROUP BY col1, col2, ... HAVING COUNT(*) > 1;
DELETE FROM tablename FROM tablename
INNER JOIN tmpTable ON ISNULL(tablename.col1, 0) = ISNULL(tmpTable.col1, 0)
AND ISNULL(tablename.col2, 0) = ISNULL(tmpTable.col2, 0)
AND ...;
INSERT INTO tablename SELECT * FROM tmpTable;
This should work so far, but the problem is that it fails when the table has blob columns, such as text. Those cannot be compared in the JOIN. I also tried
DELETE FROM tablename GROUP BY col1, col2, ... HAVING COUNT(*) > 1;
but GROUP BY is not supported directly in a DELETE statement without a self-join.
Also, it's not possible to query information_schema for the primary key of the table, since none of these tables has one.
Any ideas? Thanks.
Since the statement is already built dynamically, add a cast of the relevant columns to varchar(max) for the purposes of the join. It's not difficult to figure out which columns those are:
-- Lists the columns of the given table (raw and quoted names) whose types
-- cannot be compared directly in a join
select c.name, quotename(c.name, '[')
from
    sys.columns c
    inner join sys.types t on c.system_type_id = t.system_type_id
where
    c.object_id = object_id(#TABLE_NAME)
    and c.is_computed = 0
    and t.name in ('text', 'image', 'timestamp', 'xml')
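Spliced into the dynamic DELETE, the join then compares casts for those columns. A sketch for a table with a single text column, where col3 is illustrative (an image column would need varbinary(max) instead):
-- Same shape as the original DELETE, but the text column is compared
-- through a varchar(max) cast so the join is legal
DELETE FROM tablename
FROM tablename
INNER JOIN tmpTable
    ON ISNULL(tablename.col1, 0) = ISNULL(tmpTable.col1, 0)
   AND ISNULL(tablename.col2, 0) = ISNULL(tmpTable.col2, 0)
   AND CAST(tablename.col3 AS varchar(max)) = CAST(tmpTable.col3 AS varchar(max));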