TypeORM - await queryRunner.query with multiple SQL statements does not work

I am trying to run the following query as part of a TypeORM migration. It needs to use the inserted_id of the first statement to feed into the subsequent statements. However, it does not work and fails with the error:
Error during migration run:
QueryFailedError: syntax error at or near ";"
Any ideas on how to fix it?
await queryRunner.query(
`
WITH inserted_id AS (
INSERT INTO "public"."collection" ("created_by", "updated_by")
VALUES ('system') RETURNING id
);
INSERT INTO "public"."metadata" ("name",
"slug",
"description",
"genre_id",
"collection_id",
"splash_src_url",
"logo_src_url",
"video_src_url",
"website_url",
"discord_url",
"telegram_url",
"twitter_url",
"created_by",
"updated_by")
VALUES ('Githubverse',
'Githubverse',
'Githubverse is a game based on Github World, the popular anime franchise with over 60+ million fans.',
3,
(select id from inserted_id),
'https://media.disgaming.com/githubverse/splash.jpg',
'https://media.disgaming.com/githubverse/logo.png',
'https://media.disgaming.com/githubverse/video.mp4',
'https://githubverse.io/',
'https://discord.gg/githubverse',
'https://t.me/githubverse',
'https://twitter.com/githubverse',
'system',
'system');
INSERT INTO "public"."collection_tag_mapping" ("collection_id", "tag_id")
VALUES ((select id from inserted_id), 3);
INSERT INTO "public"."collection_tag_mapping" ("collection_id", "tag_id")
VALUES ((select id from inserted_id), 4);
INSERT INTO "public"."collection_tag_mapping" ("collection_id", "tag_id")
VALUES ((select id from inserted_id), 10);
INSERT INTO "public"."collection_tag_mapping" ("collection_id", "tag_id")
VALUES ((select id from inserted_id), 19);
`,
);
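The immediate problem is the SQL rather than TypeORM: in PostgreSQL a WITH clause must be attached to exactly one statement, so terminating the CTE with a semicolon is itself the reported syntax error, and inserted_id would in any case not be visible to the statements after that semicolon. (Note too that the first INSERT lists two columns but supplies only one value.) Either run each statement through its own await queryRunner.query(...) call and carry the returned id along in JavaScript, or fold everything into one statement with chained CTEs. A sketch of the latter, untested against the real schema:
WITH inserted_collection AS (
    INSERT INTO "public"."collection" ("created_by", "updated_by")
    VALUES ('system', 'system')  -- one value per listed column
    RETURNING id
),
inserted_metadata AS (
    INSERT INTO "public"."metadata" ("name", "slug", "description", "genre_id",
        "collection_id", "splash_src_url", "logo_src_url", "video_src_url",
        "website_url", "discord_url", "telegram_url", "twitter_url",
        "created_by", "updated_by")
    SELECT 'Githubverse',
           'Githubverse',
           'Githubverse is a game based on Github World, the popular anime franchise with over 60+ million fans.',
           3,
           ic.id,  -- the id returned by the first CTE
           'https://media.disgaming.com/githubverse/splash.jpg',
           'https://media.disgaming.com/githubverse/logo.png',
           'https://media.disgaming.com/githubverse/video.mp4',
           'https://githubverse.io/',
           'https://discord.gg/githubverse',
           'https://t.me/githubverse',
           'https://twitter.com/githubverse',
           'system',
           'system'
    FROM inserted_collection ic
)
INSERT INTO "public"."collection_tag_mapping" ("collection_id", "tag_id")
SELECT ic.id, t.tag_id
FROM inserted_collection ic
CROSS JOIN (VALUES (3), (4), (10), (19)) AS t(tag_id);
PostgreSQL executes every data-modifying CTE in the statement exactly once, so the metadata insert runs even though nothing reads from it, and the whole thing fits in a single queryRunner.query call.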

Related

Informix one to many format issue

Trying to fix my Informix query result format for a one-to-many relationship. My current query uses a JOIN but creates a new line each time there is a match on the JOIN ON condition. I should add that the below is only an example; the real data is thousands of entries with about 100 unique "category" entries, so I can't hard-code WHERE statements; it needs to read each entry and add it if there is a match. I tried GROUP_CONCAT, but it just returned an error; I guess it's not an Informix function. I also tried reading this thread but have not been able to get it working: Show a one to many relationship as 2 columns - 1 unique row (ID & comma separated list)
Any help will be appreciated.
IBM/Informix-Connect Version 3.70.UC4
IBM/Informix LIBGLS LIBRARY Version 5.00.UC5
IBM Informix Dynamic Server Version 11.70.FC8W1
Tables
movie
name rating movie_id
rio g 1
horton g 2
blade r 3
lotr_1 pg13 4
lotr_2 pg13 5
paul_blart pg 6
category
cat_name id
kids 1
comedy 2
action 3
fantasy 4
category_member
movie_name cat_name catmem_id
lotr_1 action 1
lotr_1 fantasy 2
rio kids 3
rio comedy 4
When I use
#!/bin/bash
echo "SET isolation dirty read;
UNLOAD to /export/home/movie/movieDetail.unl DELIMITER ','
SELECT a.name, a.rating, b.cat_name
FROM movie a
LEFT JOIN category b ON b.movie_name = a.name
;" | dbaccess thedb;
What I get is
rio,g,kids
rio,g,comedy
lotr_1,pg13,action
lotr_1,pg13,fantasy
What I would like is
rio,g,kids,comedy
lotr_1,pg13,action,fantasy
Install the GROUP_CONCAT user-defined aggregate
You must install the GROUP_CONCAT user-defined aggregate from SO 715350 (referenced in your question) into your database. The GROUP_CONCAT aggregate is not defined by Informix, but can be added if you use the SQL from that question. One difference between that and a normal built-in function is that you need to install the aggregate in each database in the server where you need to use it. There might be a way to do a 'global install' (for all databases in a given server), but I've forgotten (or, more accurately, never learned) how to do it.
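The definition itself lives in that linked question; for convenience, here is a sketch of the kind of SQL involved (reconstructed from memory, so treat the linked answer as authoritative):
-- INIT: start each group with an empty string
CREATE FUNCTION gc_init(dummy VARCHAR(255)) RETURNING LVARCHAR;
    RETURN '';
END FUNCTION;
-- ITER: append each value, comma-separated
CREATE FUNCTION gc_iter(result LVARCHAR, value VARCHAR(255)) RETURNING LVARCHAR;
    IF result = '' THEN
        RETURN TRIM(value);
    ELSE
        RETURN result || ',' || TRIM(value);
    END IF;
END FUNCTION;
-- COMBINE: merge two partial results (used by parallel queries)
CREATE FUNCTION gc_comb(partial1 LVARCHAR, partial2 LVARCHAR) RETURNING LVARCHAR;
    IF partial1 = '' THEN
        RETURN partial2;
    ELIF partial2 = '' THEN
        RETURN partial1;
    ELSE
        RETURN partial1 || ',' || partial2;
    END IF;
END FUNCTION;
-- FINAL: the accumulated string is the result
CREATE FUNCTION gc_fini(final LVARCHAR) RETURNING LVARCHAR;
    RETURN final;
END FUNCTION;
CREATE AGGREGATE group_concat
    WITH (INIT = gc_init, ITER = gc_iter, COMBINE = gc_comb, FINAL = gc_fini);
Once the aggregate exists in a database, GROUP_CONCAT(col) behaves like any built-in aggregate there.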
Writing your queries
With the sample database listed at the bottom:
The query in the question does not run:
SELECT a.name, a.rating, b.cat_name
FROM movie a
LEFT JOIN category b ON b.movie_name = a.name;
SQL -217: Column (movie_name) not found in any table in the query (or SLV is undefined).
This can be fixed by changing category to category_member. This produces:
SELECT a.name, a.rating, b.cat_name
FROM movie a
LEFT JOIN category_member b ON b.movie_name = a.name;
rio g kids
rio g comedy
horton g
blade r
lotr_1 pg13 action
lotr_1 pg13 fantasy
lotr_2 pg13
paul_blart pg
The LEFT JOIN appears to be unwanted. And using GROUP_CONCAT produces approximately the desired answer:
SELECT a.name, a.rating, GROUP_CONCAT(b.cat_name)
FROM movie a
JOIN category_member b ON b.movie_name = a.name
GROUP BY a.name, a.rating;
rio g kids,comedy
lotr_1 pg13 action,fantasy
If you specify the UNLOAD delimiter as ',', any commas in the data produced by the GROUP_CONCAT operator will be escaped to avoid ambiguity:
SELECT a.NAME, a.rating, GROUP_CONCAT(b.cat_name)
FROM movie a
JOIN category_member b ON b.movie_name = a.NAME
GROUP BY a.NAME, a.rating;
rio,g,kids\,comedy
lotr_1,pg13,action\,fantasy
Within standard Informix utilities, there isn't a way to avoid that; they don't leave the selected/unloaded data in an ambiguous format.
I'm not convinced that the database schema is very well organized. The Movie table is OK; the Category table is OK; but the Category_Member table would be more orthodox if it used the schema:
DROP TABLE IF EXISTS category_member;
CREATE TABLE category_member
(
movie_id INTEGER NOT NULL REFERENCES Movie(Movie_id),
category_id INTEGER NOT NULL REFERENCES Category(Id),
PRIMARY KEY(movie_id, category_id)
);
INSERT INTO category_member VALUES(4, 3);
INSERT INTO category_member VALUES(4, 4);
INSERT INTO category_member VALUES(1, 1);
INSERT INTO category_member VALUES(1, 2);
-- Use GROUP_CONCAT
SELECT a.NAME, a.rating, GROUP_CONCAT(c.cat_name)
FROM movie a
JOIN category_member b ON b.movie_id = a.movie_id
JOIN category c ON b.category_id = c.id
GROUP BY a.NAME, a.rating;
The output from this query is the same as from the previous one, but the joining is more orthodox.
Sample database
DROP TABLE IF EXISTS movie;
CREATE TABLE movie
(
name VARCHAR(20) NOT NULL UNIQUE,
rating CHAR(4) NOT NULL,
movie_id SERIAL NOT NULL PRIMARY KEY
);
INSERT INTO movie VALUES("rio", "g", 1);
INSERT INTO movie VALUES("horton", "g", 2);
INSERT INTO movie VALUES("blade", "r", 3);
INSERT INTO movie VALUES("lotr_1", "pg13", 4);
INSERT INTO movie VALUES("lotr_2", "pg13", 5);
INSERT INTO movie VALUES("paul_blart", "pg", 6);
DROP TABLE IF EXISTS category;
CREATE TABLE category
(
cat_name VARCHAR(10) NOT NULL UNIQUE,
id SERIAL NOT NULL PRIMARY KEY
);
INSERT INTO category VALUES("kids", 1);
INSERT INTO category VALUES("comedy", 2);
INSERT INTO category VALUES("action", 3);
INSERT INTO category VALUES("fantasy", 4);
DROP TABLE IF EXISTS category_member;
CREATE TABLE category_member
(
movie_name VARCHAR(20) NOT NULL,
cat_name VARCHAR(10) NOT NULL,
catmem_id SERIAL NOT NULL PRIMARY KEY
);
INSERT INTO category_member VALUES("lotr_1", "action", 1);
INSERT INTO category_member VALUES("lotr_1", "fantasy", 2);
INSERT INTO category_member VALUES("rio", "kids", 3);
INSERT INTO category_member VALUES("rio", "comedy", 4);

Use computed columns in related table in power query on SQLITE via odbc

In Power Query (the version included with Excel 2016, PC), is it possible to refer to a computed column of a related table?
Say I have an sqlite database as follow
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE products (
iddb INTEGER NOT NULL,
px FLOAT,
PRIMARY KEY (iddb)
);
INSERT INTO "products" VALUES(0,0.0);
INSERT INTO "products" VALUES(1,1.1);
INSERT INTO "products" VALUES(2,2.2);
INSERT INTO "products" VALUES(3,3.3);
INSERT INTO "products" VALUES(4,4.4);
CREATE TABLE sales (
iddb INTEGER NOT NULL,
quantity INTEGER,
product_iddb INTEGER,
PRIMARY KEY (iddb),
FOREIGN KEY(product_iddb) REFERENCES products (iddb)
);
INSERT INTO "sales" VALUES(0,0,0);
INSERT INTO "sales" VALUES(1,1,1);
INSERT INTO "sales" VALUES(2,2,2);
INSERT INTO "sales" VALUES(3,3,3);
INSERT INTO "sales" VALUES(4,4,4);
INSERT INTO "sales" VALUES(5,5,0);
INSERT INTO "sales" VALUES(6,6,1);
INSERT INTO "sales" VALUES(7,7,2);
INSERT INTO "sales" VALUES(8,8,3);
INSERT INTO "sales" VALUES(9,9,4);
COMMIT;
Basically, we have products (iddb, px) and sales of those products (iddb, quantity, product_iddb).
I load this data in Power Query by:
A. creating an ODBC data source using the SQLite3 driver: testDSN
B. in Excel: Data / New Query, feeding it this connection string: Provider=MSDASQL.1;Persist Security Info=False;DSN=testDSN;
Now in Power Query I add a computed column, say px10 = px * 10, to the product table.
In the sales table, I can expand the product table into product.px, but not product.px10. Shouldn't that be doable? (In this simplified example I could expand product.px first and then create the px10 column in the sales table, but then any new table needing px10 from product would require me to repeat the work...)
Any inputs appreciated.
I would add a Merge step from the sales query to connect it to the product query (which will include your calculated column). Then I would expand the Table returned to get your px10 column.
This is instead of expanding the Value column representing the product SQL table, which gets generated using the SQL foreign key.
You will have to come back and add any further columns added to the product query to the expansion list, but at least the column definition is only in one place.
In functional programming you don't modify existing values, you only create new values. When you add the new column to product, it creates a new table and doesn't modify the product table that shows up in related tables. A column added over product can't show up in the ODBC source's tables unless you apply that transformation to all related tables.
What you could do is generalize the "add a computed column" into a function that takes a table or record and adds the extra field. Then just apply that over each table in your database.
Here's an example against Northwind in SQL Server
let
Source = Sql.Database(".", "Northwind_Copy"),
AddAColumn = (data) => if data is table then Table.AddColumn(data, "UnitPrice10x", each [UnitPrice] * 10)
else if data is record then Record.AddField(data, "UnitPrice10x", data[UnitPrice] * 10)
else data,
TransformedSource = Table.TransformColumns(Source, {"Data", (data) =>
    if data is table
    then Table.TransformColumns(data, {"Products", AddAColumn}, null, MissingField.Ignore)
    else data}),
OrderDetails = TransformedSource{[Schema="dbo",Item="Order Details"]}[Data],
#"Expanded Products" = Table.ExpandRecordColumn(OrderDetails, "Products", {"UnitPrice", "UnitPrice10x"}, {"Products.UnitPrice", "Products.UnitPrice10x"})
in
#"Expanded Products"

SQL Join based on three keys

Database is Teradata
I have two tables which I am trying to join. Following are the table structures. When I join these tables I expect to get two rows as output, but I am getting 4 rows. What is the reason for this behavior? A join based on three keys should uniquely identify a row, yet I am still getting 4 rows as output. Any help is appreciated.
TableA
Weekkey|segment|type|users
201501|1|A|100
201501|1|B|100
TableB
Weekkey|segment|type|revenue
201501|1|A|200
201501|1|B|200
When I join these two tables using the following query, I get the following result:
select a.* ,b.user
from tablea a left join tableb b on a.weekkey=b.weekkey
and a.segment=b.segment
and a.type=b.type
Weekkey|segment|type|revenue|users
201501|1|A|200|100
201501|1|B|200|100
201501|1|A|200|100
201501|1|B|200|100
Using SQL Server, here is DDL and sample data along with the query you posted. The output you state you are getting doesn't happen here:
create table #tablea
(
Weekkey int
, segment int
, type char(1)
, users int
)
insert #tablea
select 201501, 1, 'A', 100 union all
select 201501, 1, 'B', 100
create table #TableB
(
Weekkey int
, segment int
, type char(1)
, revenue int
)
insert #TableB
select 201501, 1, 'A', 200 union all
select 201501, 1, 'B', 200
select a.*
, b.revenue
from #tablea a
left join #tableb b on a.weekkey = b.weekkey
and a.segment = b.segment
and a.type = b.type
drop table #tablea
drop table #TableB
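If the real tables do return 4 rows where 2 are expected, the usual explanation is that one of them holds duplicate rows for the same (weekkey, segment, type) key, so each row on the other side matches twice. A quick check, sketched with the table and column names from the question:
-- any key appearing more than once multiplies the join output
select weekkey, segment, type, count(*) as key_count
from tableb
group by weekkey, segment, type
having count(*) > 1;
Run the same check against tablea; whichever table reports rows here is the source of the fan-out.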

SQLite multiple INSERT in iOS

I want to insert thousands of records into a table. Right now I am using
INSERT INTO myTable ("id", "values") VALUES ("1", "test")
INSERT INTO myTable ("id", "values") VALUES ("2", "test")
INSERT INTO myTable ("id", "values") VALUES ("3", "test")
These queries insert the records one by one, but they take a long time to execute.
Now I want to insert all the records with one query:
INSERT INTO myTable ("id", "values") VALUES
("1", "test"),
("2", "test"),
("3", "test"),
.....
.....
("n", "test")
But this query does not work with SQLite. Can you please give me some guidance on how to solve this problem?
Thanks,
Please refer to my answer here.
SQLite did not support the multi-row VALUES structure until version 3.7.11, so on older versions this is what I use to insert thousands of records in the db. Performance is good. You can give it a try. :)
Insert into table_name (col1,col2)
SELECT 'R1.value1', 'R1.value2'
UNION SELECT 'R2.value1', 'R2.value2'
UNION SELECT 'R3.value1', 'R3.value2'
UNION SELECT 'R4.value1', 'R4.value2'
You can follow this link on SO for more information. But be careful about the number of insertions you want to make, because there is a limitation on this kind of usage (see here):
INSERT INTO 'tablename'
SELECT 'data1' AS 'column1', 'data2' AS 'column2'
UNION SELECT 'data3', 'data4'
UNION SELECT 'data5', 'data6'
UNION SELECT 'data7', 'data8'
As a further note, SQLite only seems to support up to 500 such UNION SELECTs per query, so if you are trying to insert more data than that you will need to break it up into 500-element blocks.
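A further aside not covered above: much of the cost of row-by-row inserts is that each statement commits its own transaction. Wrapping the batch in a single transaction is often the bigger win, and on SQLite 3.7.11 or later the multi-row VALUES form from the question works directly. A sketch, assuming the myTable schema from the question:
-- one transaction around the whole batch: one commit instead of thousands
BEGIN TRANSACTION;
INSERT INTO myTable (id, "values") VALUES (1, 'test');
INSERT INTO myTable (id, "values") VALUES (2, 'test');
INSERT INTO myTable (id, "values") VALUES (3, 'test');
COMMIT;
-- SQLite 3.7.11+ also accepts the multi-row form directly
INSERT INTO myTable (id, "values") VALUES (4, 'test'), (5, 'test'), (6, 'test');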

Local variables in an Informix script

I have to do a big update script - not an SPL (stored procedure).
It's to be written for an Informix db.
It involves inserting rows into multiple tables, each of which relies on the serial of the insert into the previous table.
I know I can get access to the serial by doing this:
SELECT DISTINCT dbinfo('sqlca.sqlerrd1') FROM systables
but I can't seem to define a local variable to store this before the insert into the next table.
I want to do this:
insert into table1 (serial, data1, data2) values (0, 'newdata1', 'newdata2');
define serial1 as int;
let serial1 = SELECT DISTINCT dbinfo('sqlca.sqlerrd1') FROM systables;
insert into table2 (serial, data1, data2) values (0, serial1, 'newdata3');
But of course Informix chokes on the define line.
Is there a way to do this without having to create this as a stored procedure, run it once and then delete the procedure?
If the number of columns in the tables involved is as few as in your example, then you could make the SPL permanent and use it to insert your data, i.e.:
EXECUTE PROCEDURE insert_related_tables('newdata1','newdata2','newdata3');
Obviously that doesn't scale terribly well, but it is OK for your example.
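The procedure itself is not shown above; a minimal sketch of what it might look like, using the table1/table2 layout from the question:
CREATE PROCEDURE insert_related_tables(d1 VARCHAR(32), d2 VARCHAR(32), d3 VARCHAR(32))
    DEFINE serial1 INTEGER;
    -- insert the parent row; the SERIAL column is assigned automatically
    INSERT INTO table1(serial, data1, data2) VALUES (0, d1, d2);
    -- capture the serial value just generated in this session
    LET serial1 = DBINFO('sqlca.sqlerrd1');
    INSERT INTO table2(serial, data1, data2) VALUES (0, serial1, d3);
END PROCEDURE;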
Another thought that expands on Jonathan's example and solves any concurrency issues that might arise from the use of MAX() would be to include DBINFO('sessionid') in Table3:
DELETE FROM Table3 WHERE sessionid = DBINFO('sessionid');
INSERT INTO Table1 (...);
INSERT INTO Table3 (sessionid, value)
VALUES (DBINFO('sessionid'), DBINFO('sqlca.sqlerrd1'));
INSERT INTO Table2
VALUES (0, (SELECT value FROM Table3
            WHERE sessionid = DBINFO('sessionid')), 'newdata3');
...
You could also make Table3 a TEMP table:
INSERT INTO Table1 (...);
SELECT DISTINCT DBINFO('sqlca.sqlerrd1') AS serial_value
FROM some_dummy_table_like_systables
INTO TEMP Table3 WITH NO LOG;
INSERT INTO Table2 (...);
Informix does not provide a mechanism outside of stored procedures for 'local variables' of the type you want. However, in the limited example you provide, this works:
CREATE TABLE Table1
(
serial SERIAL(123) NOT NULL,
data1 VARCHAR(32) NOT NULL,
data2 VARCHAR(32) NOT NULL
);
CREATE TABLE Table2
(
serial SERIAL NOT NULL,
data1 INTEGER NOT NULL,
data2 VARCHAR(32) NOT NULL
);
INSERT INTO Table1(Serial, Data1, Data2)
VALUES(0, 'newdata1', 'newdata2');
INSERT INTO Table2(Serial, Data1, Data2)
VALUES(0, DBINFO('sqlca.sqlerrd1'), 'newdata3');
SELECT * FROM Table1;
123 newdata1 newdata2
SELECT * FROM Table2;
1 123 newdata3
However, this works only because you need to insert one row into Table2. If you needed to insert more, the technique would not work well. You could, I suppose, use:
CREATE TEMP TABLE Table3
(
value INTEGER NOT NULL
);
INSERT INTO Table1(Serial, Data1, Data2)
VALUES(0, 'newdata1', 'newdata2');
INSERT INTO Table3(Value)
VALUES(DBINFO('sqlca.sqlerrd1'));
INSERT INTO Table2(Serial, Data1, Data2)
VALUES(0, (SELECT MAX(value) FROM Table3), 'newdata3');
INSERT INTO Table2(Serial, Data1, Data2)
VALUES(0, (SELECT MAX(value) FROM Table3), 'newdata4');
And so on. The temporary table for Table3 avoids problems with concurrency and MAX().
