Deleting rows in joined tables using ADO

I have seen this question in another forum, but it didn't have an acceptable answer.
Suppose I have two tables, the Groups table and the Elements table. The tables have no defined relationships. The Elements table has an IdGroup field that refers to the IdGroup (PK) field of the Groups table.
I use the following query through an ADO recordset to populate a datagrid with the tables' values:
SELECT Elements.*, Groups.GroupName
FROM Elements
INNER JOIN Groups ON Elements.IdGroup = Groups.IdGroup
From that grid I want to press Delete to remove an Element. Here is my problem: when I used DAO, the DAO Delete() function deleted only the record in the Elements table. This was the expected behavior.
When I changed to ADO, the Delete() function deleted records in both tables: the element record and the group to which the element belonged!
Is there any way to reproduce the DAO behavior in ADO without having to define relationships between the tables?
Note: I know there are alternatives (executing DELETE queries could do the job). Just show me a way to do this in ADO, or say it cannot be done.

Rewrite your query to:
replace the INNER JOIN with a WHERE clause consisting of an EXISTS;
use a subquery in the SELECT clause to return the value of Groups.GroupName.
Example:
SELECT Elements.*,
       (SELECT Groups.GroupName
        FROM Groups
        WHERE Elements.IdGroup = Groups.IdGroup) AS GroupName
FROM Elements
WHERE EXISTS (
    SELECT *
    FROM Groups
    WHERE Elements.IdGroup = Groups.IdGroup
);
I've tested this using SQL Server 2008, with an ADO recordset set as the DataSource property of a Microsoft OLEDB Datagrid Control (MSDATGRD.OCX), then deleting the row via the grid (I assume you are doing something similar). The row is indeed deleted from table Elements only (i.e. the row in Groups remains undeleted).
Note that the revised query may have a negative impact on performance when fetching rows.

Related

PowerBI - Join DirectQuery and Imported tables in DAX

I have a DirectQuery table (Weather) which is sourced from an Azure SQL server. I would like to join this with an Imported table (Buckles) from an Excel sheet sourced from SharePoint Online.
Both tables have a UID field that is a concatenation of a SiteID and a timestamp. The UID field is named differently in each table.
I have created a One-To-Many relationship between the two tables.
I have tried to create a new DAX table using a NATURALINNERJOIN on Weather and Buckles but I get this error:
"No common join columns detected. The join function 'NATURALINNERJOIN' requires at-least one common join column."
I am confident it is not a problem with the underlying data because I've created a new imported Excel table (Test) with a selection of the data from Weather and I'm able to successfully create the join on Test and Buckles.
Is the joining of DirectQuery and Imported tables supported? I feel like this may be a type casting issue, but as far as I can see, both UID fields are set as Text.
"The UID field is named differently for each table."
I suspect this is the issue. NATURALINNERJOIN looks for matching column names, and if the two tables have no common column name, it returns this error. Renaming the UID column on one side so the names match (for example by building the join input with SELECTCOLUMNS) should make the join work.
Note that if you create a calculated DAX table using a DirectQuery source, I don't think that table will still act like DirectQuery. If I understand correctly, it will materialize the calculated table into your model and DAX that references that calculated table no longer points back to the SQL server (and consequently will only update when the calculated table gets rebuilt).

Filter similar records in a table using Dblookupcombobox

I have a TDBLookupComboBox that is connected to an Access table named feeds through a TDataSource and a TADOQuery. I have used the ListSource, KeyField, and ListField properties to connect it to the feeds table's feedtype field. The problem is that the TDBLookupComboBox shows repeated records. Is it possible to filter the records and show each one only once? Or how else can I achieve this?
The ListSource and ListField should really come from another table, even if it's a constructed one such as
SELECT DISTINCT feedtype FROM feeds AS lookup.
Create another TADOQuery and use it to get the list of DISTINCT values, then use that as your ListSource.
I always create a specific lookup table where users can manage the valid entries.
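If you go the dedicated-table route, a minimal sketch in Access (Jet) SQL — the feedtypes table name, constraint name and TEXT(50) size are assumptions:
CREATE TABLE feedtypes (feedtype TEXT(50) CONSTRAINT pk_feedtypes PRIMARY KEY);
INSERT INTO feedtypes (feedtype)
SELECT DISTINCT feedtype FROM feeds;
Then point the ListSource's query at feedtypes instead of feeds.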

Delphi 7 updating joined tables

I'm working with Delphi 7 and a Firebird database. I'm using TIBDatabase, TIBTransaction, TIBQuery, TIBDataSet and a DBGrid to establish the connection and provide a user interface for working with the tables. In my database I have two tables:
Ships
fields
Id integer
Name varchar(20)
Type_Id(Fk) integer
Longth integer
Ship_types
fields
Id(Pk) integer
Ship_type varchar(10)
So the resulting dataset, which I get through a "join" query, has these fields:
Name
Type
Longth
Type is the Ship_type field from the Ship_types table, joined in the query via the Type_Id foreign key in the Ships table.
The data displays properly.
Then I need to edit my data directly through the DBGrid. For this purpose I use a TIBUpdateSQL component. For displaying the Type (lookup) field I chose the DBGrid.Columns.PickList property.
So my question is: how can I make TIBUpdateSQL work with this kind of field? I know that if it were a single table without foreign keys, I would write an UPDATE statement into the ModifySQL property of the update component. But what do I do with FK fields? Can I write an UPDATE join statement in the UpdateSQL component or, if not, how else can I do it?
I don't need to update two tables; I just need to update the Ships table. But there is a varchar (word representation) field in the displayed dataset, and in the updating dataset it must be an integer (the corresponding id) to suit the table structure.
The editor in TIBUpdateSQL isn't a solution for me because I'm assigning the query to the TIBQuery at runtime.
You can't update tables through a SELECT with a JOIN, only through subselects.
Sub-select example:
SELECT TABLE_NAME.*,
       (SELECT TABLE_NAME2.NAME FROM TABLE_NAME2
        WHERE TABLE_NAME2.ID = TABLE_NAME.ID) AS NAME2
FROM TABLE_NAME
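Adapted to the Ships / Ship_types schema from the question, a sketch might look like this (the Ship_type alias is an assumption; use whatever field name your grid expects):
SELECT Ships.Id, Ships.Name, Ships.Longth, Ships.Type_Id,
       (SELECT st.Ship_type FROM Ship_types st
        WHERE st.Id = Ships.Type_Id) AS Ship_type
FROM Ships
Because the dataset now selects from Ships alone, the update component only has to write back to Ships; the subselect column stays read-only.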

2 column table, ignore duplicates on mass insert postgresql

I have a Join table in Rails which is just a 2 column table with ids.
In order to mass insert into this table, I use
ActiveRecord::Base.connection.execute("INSERT INTO myjointable (first_id,second_id) VALUES #{values}")
Unfortunately this gives me errors when there are duplicates. I don't need to update any values, simply move on to the next insert if a duplicate exists.
How would I do this?
FYI, I have searched Stack Overflow and most of the answers are a bit advanced for me to understand. I've also checked the PostgreSQL documentation and played around in the Rails console, but still to no avail. I can't figure this one out, so I'm hoping someone else can tell me what I'm doing wrong.
The closest statement I've tried is:
INSERT INTO myjointable (first_id,second_id) SELECT 1,2
WHERE NOT EXISTS (
SELECT first_id FROM myjointable
WHERE first_id = 1 AND second_id IN (...))
Part of the problem with this statement is that I am only inserting one row at a time, whereas I want a statement that mass inserts. Also, the second_id IN (...) section of the statement can include up to 100 different values, so I'm not sure how slow that will be.
Note that for the most part there should not be many duplicates so I am not sure if mass inserting to a temporary table and finding distinct values is a good idea.
Edit to add context:
The reason I need a mass insert is that I have a many-to-many relationship between two models, where one of the models is never populated by a form. I have stocks, and stock price histories. The stock price histories are never created in a form; they mass insert themselves by pulling the data from Yahoo Finance via its API. I use the activerecord-import gem to mass insert the stock price histories (i.e. Model.import columns, values), but I can't write jointable.import columns, values because jointable is an undefined local variable.
I ended up using the WITH clause to select my values and give it a name. Then I inserted those values and used WHERE NOT EXISTS to effectively skip any items that are already in my database.
So far it looks like it is working...
WITH withqueryname(first_id, second_id) AS (
    VALUES (1,2), (3,4), (5,6)  -- ...etc
)
INSERT INTO jointablename (first_id, second_id)
SELECT first_id, second_id
FROM withqueryname
WHERE NOT EXISTS (
    SELECT 1
    FROM jointablename
    WHERE jointablename.first_id = withqueryname.first_id
      AND jointablename.second_id = withqueryname.second_id
);
You can interchange the VALUES list with a variable; mine was VALUES #{values}.
Note that the NOT EXISTS subquery has to be correlated with withqueryname's columns, as above. Hard-coding a filter like first_id = 1 AND second_id IN (...) (as in my earlier attempt) skips the whole batch as soon as any one of those pairs already exists.
Here's how I'd tackle it: Create a temp table and populate it with your new values. Then lock the old join values table to prevent concurrent modification (important) and insert all value pairs that appear in the new table but not the old one.
One way to do this is by doing a left outer join of the old values onto the new ones and filtering for rows where the old join table values are null. Another approach is to use an EXISTS subquery. The two are highly likely to result in the same query plan once the query optimiser is done with them anyway.
Example, untested (since you didn't provide an SQLFiddle or sample data) but should work:
BEGIN;
CREATE TEMPORARY TABLE newjoinvalues(
first_id integer,
second_id integer,
primary key(first_id,second_id)
);
-- Now populate `newjoinvalues` with multi-valued inserts or COPY
COPY newjoinvalues(first_id, second_id) FROM stdin;
LOCK TABLE myjoinvalues IN EXCLUSIVE MODE;
INSERT INTO myjoinvalues
SELECT n.first_id, n.second_id
FROM newjoinvalues n
LEFT OUTER JOIN myjoinvalues m ON (n.first_id = m.first_id AND n.second_id = m.second_id)
WHERE m.first_id IS NULL AND m.second_id IS NULL;
COMMIT;
This won't update existing values, but you can do that fairly easily too with a second query that does an UPDATE ... FROM while still holding the write table lock.
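A minimal sketch of that follow-up update, assuming the join table also carried a hypothetical payload column named weight (a pure two-column key table has nothing else to update):
UPDATE myjoinvalues m
SET    weight = n.weight
FROM   newjoinvalues n
WHERE  m.first_id  = n.first_id
  AND  m.second_id = n.second_id;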
Note that the lock mode specified above will not block SELECTs, only writes like INSERT, UPDATE and DELETE, so queries can continue to run against the table while the process is ongoing; you just can't write to it.
If you can't accept that, an alternative is to run the update in SERIALIZABLE isolation (which only works properly for this purpose in Pg 9.1 and above). This will result in the query failing whenever a concurrent write occurs, so you have to be prepared to retry it over and over again. For that reason it's likely better to just live with locking the table for a while.
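A sketch of the SERIALIZABLE variant; the retry loop has to live in application code, rerunning the whole transaction whenever it fails with SQLSTATE 40001 (serialization_failure):
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO myjoinvalues
SELECT n.first_id, n.second_id
FROM newjoinvalues n
WHERE NOT EXISTS (
    SELECT 1 FROM myjoinvalues m
    WHERE m.first_id = n.first_id AND m.second_id = n.second_id
);
COMMIT;
No LOCK TABLE is needed in this variant; conflicting concurrent writes surface as serialization failures instead.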

Delphi - TUpdateObject versus OnUpdateRecord for a join SQL statement

I have a TpFIBDataSet (which works much like a BDE dataset) in which I need to make the following join selection:
select table.Name as name,
       table_1.Name as name_1,
       table_2.Name as name_2
from table
left join table table_1 on table.id = table_1.id
left join table table_2 on table.id = table_2.id
Fields name, name_1 and name_2 are linked to some data-aware edits. After I modify the data (update, delete, insert operations), I want the name, name_1 and name_2 fields to be updated in the underlying tables. Based on the wiki page Using_Multiple_Update_Objects_Index, I can use UpdateObjects or the OnUpdateRecord event.
The problem is that I don't understand how this needs to be implemented. I have the join select in the query; what I'm missing is how to define and work with the name_1 and name_2 fields. Can someone provide an example of this?
I know how to use subqueries to accomplish this. I need to see how I can make it work using UpdateObjects or OnUpdateRecord.
TpFIBUpdateObject works like a client-side trigger. To make it work, set the following properties:
DataSet - dataset (master) to monitor
KindUpdate - Insert/Update/Delete - action to monitor
SQL - command to execute when action is fired, params are taken from DataSet
ExecuteOrder - AfterDefault/BeforeDefault - you probably need AfterDefault, on the master
BUT, instead of using a lot of UpdateObject components and such a tangled approach, I recommend two alternative (read: better) ways:
Updatable view. It will work like a "virtual table". Create a view that joins these three tables and write Before Insert/Update/Delete triggers (see the sketch after this list). In Delphi, use it as a regular table: select from the view, insert into the view, update the view and delete from the view. Anyway, I suppose you need these tables linked together in many places.
Use EXECUTE BLOCK statements in your TpFIBDataSet SQLs to Insert / Update / Delete all the tables in one batch.
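A minimal sketch of the updatable-view idea in Firebird SQL; the view, trigger and table names are illustrative, not taken from the question:
CREATE VIEW v_names (id, name, name_1, name_2) AS
SELECT m.id, m.name, d1.name, d2.name
FROM master_tbl m
LEFT JOIN detail1_tbl d1 ON d1.id = m.id
LEFT JOIN detail2_tbl d2 ON d2.id = m.id;

SET TERM ^ ;
CREATE TRIGGER v_names_bu FOR v_names
ACTIVE BEFORE UPDATE AS
BEGIN
  UPDATE master_tbl  SET name = NEW.name   WHERE id = OLD.id;
  UPDATE detail1_tbl SET name = NEW.name_1 WHERE id = OLD.id;
  UPDATE detail2_tbl SET name = NEW.name_2 WHERE id = OLD.id;
END^
SET TERM ; ^
Matching Before Insert/Delete triggers complete the picture; the dataset then selects from and writes to v_names as if it were a single table.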
Solution: with OnUpdateRecord, a TUpdateObject must be created for each field from the joined table:
UpdateObjectVariable.DataSet := DataSet;
fill in the SQL text;
apply.
After all update objects are set, UpdateAction := uaApplied; must be called.
