How to get last inserted record Id in Gupta SQLBase

I am new to Gupta SQLBase. I would like to know how to get the Id of the last inserted record in Gupta SQLBase.

If you are using SYSDBSequence.NextVal to generate your primary key, either within the Insert statement or prior to the Insert, then you can retrieve it back immediately after the Insert by selecting where [Primary Key] = SYSDBSequence.Currval, e.g.
Select Name from Patient Where Patient_Id = SYSDBSequence.Currval
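Putting the two steps together, a minimal sketch of the sequence-based flow (the Patient table and column names are illustrative, and Currval is assumed to return the value NextVal generated on this connection):
Insert Into Patient (Patient_Id, Name)
Values (SYSDBSequence.NextVal, 'Jane Doe');
-- Retrieve the row using the key that NextVal just generated
Select Name from Patient Where Patient_Id = SYSDBSequence.Currval;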
Alternatively, if your primary key column has been defined as AUTO_INCREMENT, you can select it back after the Insert using MAX([Primary Key]), e.g.
Select Name from Patient Where Patient_Id = (Select MAX( Patient_Id) from Patient )
Alternatively, if neither of the above applies, write an Insert trigger to either return the key, or to store it in a table so you will always have the latest PK recorded for you.
You may also like to join the Gupta users forum, where there is much archived information available.

Related

Updating existing records with IDs of new rows while using a "with" clause

Platform: Ruby on Rails with PostgreSQL database.
Problem:
We are doing some backfilling to migrate our data to a new structure. It's created a rather convoluted situation, and we'd like to handle it as efficiently as possible. It's partially addressed with SQL similar to this:
with rows as (
  insert into responses (prompt_id, answer, received_at, user_id, category_id)
  select prompt_id, null as answer, received_at, user_id, category_id
  from prompts
  where user_status = 0 and skipped is not true
  returning id, category_id
)
insert into category_responses (category_id, response_id)
select category_id, id as response_id
from rows;
The tables and columns have been obfuscated/simplified so the reasoning behind it may not be as clear, but category_responses is a many-to-many join table. What we're doing is grabbing existing prompts, and creating a set of empty responses (answer is NULL) for each.
The piece that's missing is to then associate the records in prompts with the newly created responses. Is there a way to do this within the query? I would like to avoid adding a prompt_id column to answers if possible. I am guessing one way to handle that would be to include prompt_id in the returning clause and then issue a second query to update the prompts table; in any case, I'm not even sure you can run more than one query with the results of a single with clause.
What's the best way to accomplish this?
I have settled on adding the needed column, and have updated the query as follows:
with tab1 as (
  insert into responses (prompt_id, answer, received_at, user_id, category_id)
  select prompt_id, null as answer, received_at, user_id, category_id
  from prompts
  where user_status = 0 and skipped is not true
  returning id, category_id, prompt_id
),
tab2 as (
  update prompts
  set response_id = tab1.id,
      category_id = tab1.category_id
  from tab1
  where prompts.id = tab1.prompt_id
  returning prompts.response_id as response_id, prompts.category_id as category_id
)
insert into category_responses (category_id, response_id)
select category_id, response_id
from tab2;

Checking existence of a record before adding

This is a small part of the data in my table PLANT in the database...
id   name     code
123  OFFICE1  A1234
456  OFFICE2  B4567
789  OFFICE3  C8989
When I get all the data from an api, before inserting them into the database, I want to check if any records are present already.
This is how I'm checking whether a record is present:
let isExists = sharedInstance.plantExists(thePlantObject, id: 123)

func plantExists(_ items: plant, id: Int) -> Bool {
    var isExists = false
    sharedInstance.database!.open()
    isExists = sharedInstance.database!.executeUpdate("EXISTS(SELECT * FROM PLANT WHERE PLANT.id = ?)", withArgumentsIn: [id])
    sharedInstance.database!.close()
    return isExists
}
But if I print isExists, then this message is printed... (Bool) isExists = <variable not available>
What am I doing wrong here?
If you can change the schema of the db, then make sure the id attribute is set as unique; insertion will then fail if you insert an id which already exists in the column.
Or just do a select query with the id and see whether any results come back to you:
Select * from PLANT where id = ?
As per the comment from @Joakim Danielson, EXISTS should be part of a WHERE clause.
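For reference, EXISTS is an expression rather than a standalone statement; one valid SQLite form (a sketch, to be run with a query call such as FMDB's executeQuery rather than executeUpdate) is:
SELECT EXISTS(SELECT 1 FROM PLANT WHERE id = ?)
-- returns a single row containing 1 if a matching row exists, 0 otherwise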
You can achieve this feature in two ways:
1. The way you are approaching it (check for existence and, based on that, insert).
Instead of using EXISTS, use the following query:
SELECT COUNT(id) FROM PLANT WHERE PLANT.id = ?
This way you get the count for a specific id. Here I assume that you didn't put a UNIQUE constraint on the id column. If you did set a UNIQUE constraint on the id column, then the second approach is the best one for you.
2. Let SQLite handle it itself (based on a constraint).
While creating the db schema, make the id unique. The assumed schema creation query is:
CREATE TABLE plant (
  id INT(11) UNIQUE NOT NULL,
  name VARCHAR(255)
);
Use the following query while inserting:
INSERT OR IGNORE INTO plant (`id`, `name`) VALUES (?, ?)
This way, it will skip the insert if any sort of constraint fails.
NB: It won't report any failure.
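If you do need to know whether the row was actually inserted, one option (a sketch; SQLite's changes() function returns the number of rows modified by the most recent statement on the connection) is:
INSERT OR IGNORE INTO plant (id, name) VALUES (?, ?);
SELECT changes();
-- 1 if the row was inserted, 0 if the insert was ignored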

Delete all records that are not the latest

I have a table that deliberately has duplicates in it. In this instance the things that will be duplicated are a deviceId and the datetime. Sometimes the customer updates their data. The table has three columns, deviceId, datetime and value (there is also an incremental primary key). Sometimes, when the customer re-evaluates their data, they notice that the value is incorrect; they then update it and send the data for re-processing. As a consequence, I need to be able to delete records that are not the very latest records. I can't do it by datetime, as this will also be duplicated in some cases, and I can't truncate the staging table.
To delete the dupes I have the following:
;WITH DupeData AS (
  SELECT ROW_NUMBER() OVER (PARTITION BY tblMeterData_Id, fldDateTime, fldValue, [fldBatchId], [fldProcessed] ORDER BY fldDateTime) AS ROW
  FROM [Stage.tblMeterData])
DELETE FROM DupeData
WHERE ROW > 1
The problem with this is that it seems to delete a random duplicate.
I want to keep the latest record that is in the staging area and delete any others that are not the latest record. I can then update the relevant row with the latest data when I take it from staging into prod.
Is there any primary or unique key on the table?
If there's a unique id, the easiest way is the query below.
Not sure about performance, but it should work OK on small amounts of data.
DELETE FROM [Stage.tblMeterData]
WHERE id IN
  (SELECT id FROM
    (SELECT id,
            ROW_NUMBER() OVER (PARTITION BY tblMeterData_Id, fldDateTime, fldValue, [fldBatchId], [fldProcessed] ORDER BY fldDateTime) AS ROW
     FROM [Stage.tblMeterData]) q
   WHERE q.ROW > 1)
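Note that ORDER BY fldDateTime is arbitrary within a partition, since every row in it shares the same fldDateTime. To keep the latest row deterministically, a sketch (assuming tblMeterData_Id is the meter/device identifier, id is the incremental primary key, and a larger id means a later insert):
;WITH DupeData AS (
  SELECT id,
         ROW_NUMBER() OVER (PARTITION BY tblMeterData_Id, fldDateTime
                            ORDER BY id DESC) AS ROW
  FROM [Stage.tblMeterData]
)
DELETE FROM [Stage.tblMeterData]
WHERE id IN (SELECT id FROM DupeData WHERE ROW > 1);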

Use computed columns in related table in power query on SQLITE via odbc

In Power Query (the version included with Excel 2016, PC), is it possible to refer to a computed column of a related table?
Say I have an SQLite database as follows:
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE products (
  iddb INTEGER NOT NULL,
  px FLOAT,
  PRIMARY KEY (iddb)
);
INSERT INTO "products" VALUES(0,0.0);
INSERT INTO "products" VALUES(1,1.1);
INSERT INTO "products" VALUES(2,2.2);
INSERT INTO "products" VALUES(3,3.3);
INSERT INTO "products" VALUES(4,4.4);
CREATE TABLE sales (
  iddb INTEGER NOT NULL,
  quantity INTEGER,
  product_iddb INTEGER,
  PRIMARY KEY (iddb),
  FOREIGN KEY(product_iddb) REFERENCES products (iddb)
);
INSERT INTO "sales" VALUES(0,0,0);
INSERT INTO "sales" VALUES(1,1,1);
INSERT INTO "sales" VALUES(2,2,2);
INSERT INTO "sales" VALUES(3,3,3);
INSERT INTO "sales" VALUES(4,4,4);
INSERT INTO "sales" VALUES(5,5,0);
INSERT INTO "sales" VALUES(6,6,1);
INSERT INTO "sales" VALUES(7,7,2);
INSERT INTO "sales" VALUES(8,8,3);
INSERT INTO "sales" VALUES(9,9,4);
COMMIT;
Basically we have products (iddb, px) and sales of those products (iddb, quantity, product_iddb).
I load this data into Power Query by:
A. creating an ODBC data source using the SQLite3 driver: testDSN
B. in Excel: Data / New Query, feeding it this connection string: Provider=MSDASQL.1;Persist Security Info=False;DSN=testDSN;
Now in Power Query I add a computed column, say px10 = px * 10, to the products table.
In the sales table, I can expand the products table into product.px, but not product.px10. Shouldn't that be doable? (In this simplified example I could expand product.px first and then create the px10 column in the sales table, but then any new table needing px10 from products would require me to repeat the work...)
Any inputs appreciated.
I would add a Merge step from the sales query to connect it to the product query (which will include your calculated column). Then I would expand the Table returned to get your px10 column.
This is instead of expanding the Value column representing the product SQL table, which gets generated using the SQL foreign key.
You will have to come back and add any further columns added to the product query to the expansion list, but at least the column definition is only in one place.
In functional programming you don't modify existing values, you only create new values. When you add the new column to product, it creates a new table value and doesn't modify the product table that shows up in related tables. A column added over product can't show up in Odbc's tables unless you apply that transformation to all related tables.
What you could do is generalize the "add a computed column" into a function that takes a table or record and adds the extra field. Then just apply that over each table in your database.
Here's an example against Northwind in SQL Server
let
  Source = Sql.Database(".", "Northwind_Copy"),
  AddAColumn = (data) =>
    if data is table then Table.AddColumn(data, "UnitPrice10x", each [UnitPrice] * 10)
    else if data is record then Record.AddField(data, "UnitPrice10x", data[UnitPrice] * 10)
    else data,
  TransformedSource = Table.TransformColumns(Source, {"Data", (data) => if data is table then Table.TransformColumns(data, {"Products", AddAColumn}, null, MissingField.Ignore) else data}),
  OrderDetails = TransformedSource{[Schema="dbo",Item="Order Details"]}[Data],
  #"Expanded Products" = Table.ExpandRecordColumn(OrderDetails, "Products", {"UnitPrice", "UnitPrice10x"}, {"Products.UnitPrice", "Products.UnitPrice10x"})
in
  #"Expanded Products"

using a table in an sql condition without incorporating that table's contents into the results

Say I have four tables, users, contacts, files, and userfiles.
Users can upload files and have contacts. They can choose to share their uploaded files with their contacts.
When a user selects one or more of their uploaded files, I want to show a list of their contacts that they are not already sharing all of the selected files with. So if they selected one file, it'd show the contacts that can't already see that file. If they selected multiple files, it'd show the contacts that can't already see all of the files.
Right now I'm trying a query like this (using sqlite3):
select users.user_id, users.display_name
from users, contacts, userfiles
where contacts.user_id = :user_id
and contacts.contact_id = users.user_id
and (
userfiles.user_id != users.user_id
and userfiles.file_id != :file_id
);
Note that the last line is auto-generated in a loop in the case of multiple selected files.
Where :user_id is the user trying to share the file, and :file_id is the file which, if a user can already see that file, they are omitted from the result. What I end up with is a list of contacts which are sharing any files other than the selected one, so if the user is sharing multiple files with any one contact, that contact shows up in the list multiple times.
How can I avoid the duplicates? I just want to check if the file is already being shared, not grab all of the contents of userfiles that don't involve a particular file or files.
select users.user_id, users.display_name
from users, contacts as c
where c.user_id = :user_id
and c.contact_id = users.user_id
and not exists (
select user_id
from userfiles as uf
where uf.user_id = c.contact_id
and uf.file_id in (:file_ids)
);
Note that :file_ids is all your file_ids, separated with commas. No more looping to run multiple queries!
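One caveat, as an assumption about your driver rather than part of the answer above: most SQLite bindings treat each placeholder as a single scalar value, so the in (:file_ids) list usually has to be generated with one placeholder per selected file, e.g. for three files:
select users.user_id, users.display_name
from users, contacts as c
where c.user_id = ?
  and c.contact_id = users.user_id
  and not exists (
    select 1
    from userfiles as uf
    where uf.user_id = c.contact_id
      and uf.file_id in (?, ?, ?) -- one ? per selected file, bound individually
  );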
EDIT:
This is the data I'm running as a test:
create table users (user_id integer primary key, display_name text);
insert into users values (1,"bob");
insert into users values (2,"jim");
insert into users values (3,"bill");
insert into users values (4,"martin");
insert into users values (5,"carson");
create table contacts (user_id integer, contact_id integer);
insert into contacts select u1.user_id, u2.user_id from users u1, users u2 where u1.user_id != u2.user_id;
create table userfiles (user_id integer, file_id integer);
insert into userfiles values (1,10);
insert into userfiles values (2,10);
insert into userfiles values (3,10);
insert into userfiles values (4,10);
insert into userfiles values (1,20);
insert into userfiles values (2,30);
Then, if I run my query with :user_id = 5 and :file_ids = 20,30, I get:
select users.user_id, users.display_name
from users, contacts as c
where c.user_id = 5
and c.contact_id = users.user_id
and not exists (
select user_id
from userfiles as uf
where uf.user_id = c.contact_id
and uf.file_id in (20,30)
);
UserID|Display_Name
3     |bill
4     |martin
That seems like what you want, as I understand it: that is, only the users who do not have any of the file IDs. If I misunderstood something, please let me know.
This seemed to work; I'm not sure if it's optimal, but it's the only way I could figure it out:
select users.user_id, users.display_name
from users, contacts
where contacts.user_id = :user_id
and contacts.contact_id = users.user_id
and (
select count(*)
from userfiles
where userfiles.user_id = users.user_id
and userfiles.file_id in (:file_ids)
) < :number_of_files;
It selects all contacts except the ones that match all of the file_ids. It still selects contacts which match only some of the file_ids, since it grabs the count of matching files for the specified IDs and then checks whether that count is less than the number of ids that were provided.
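For what it's worth, an equivalent formulation is possible with an explicit join and HAVING; this is an untested sketch under the same schema assumptions (count(distinct ...) guards against duplicate rows in userfiles):
select u.user_id, u.display_name
from users u
join contacts c on c.user_id = :user_id and c.contact_id = u.user_id
left join userfiles uf on uf.user_id = u.user_id and uf.file_id in (:file_ids)
group by u.user_id, u.display_name
having count(distinct uf.file_id) < :number_of_files;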
