In Firebird can you use a cursor with EXECUTE STATEMENT? - stored-procedures

I would like to pass a table and field name to a stored procedure, then do a FOR SELECT on the table and, for some records, change the field value. I just learned about cursors, which are a very convenient way to make changes in a FOR SELECT loop. But can I use cursors with EXECUTE STATEMENT?
I want this stored procedure to work for a variety of tables with a similar structure, so I want it to be generic, hence passing in the table name and field name.
I'm pretty sure the answer is "No." so how could I do this?
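For what it's worth, Firebird does support a FOR EXECUTE STATEMENT ... INTO ... DO loop, even though you cannot open a named cursor on a dynamically executed statement (so no WHERE CURRENT OF). A minimal sketch of that style, assuming every target table has an integer ID primary key and using a placeholder condition and value:

CREATE PROCEDURE CHANGE_FIELD (TABLE_NAME VARCHAR(63), FIELD_NAME VARCHAR(63))
AS
DECLARE VARIABLE REC_ID BIGINT;
DECLARE VARIABLE REC_VALUE VARCHAR(100);
BEGIN
  FOR EXECUTE STATEMENT
      'SELECT ID, ' || FIELD_NAME || ' FROM ' || TABLE_NAME
      INTO :REC_ID, :REC_VALUE
  DO
  BEGIN
    -- placeholder condition; replace with the real per-record rule
    IF (REC_VALUE IS NULL) THEN
      EXECUTE STATEMENT
        'UPDATE ' || TABLE_NAME || ' SET ' || FIELD_NAME ||
        ' = ''fixed'' WHERE ID = ' || REC_ID;
  END
END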

Related

How to use schema-mapping in a stored procedure when you have more than one schema?

I have a stored procedure on my HANA database where I need to join two tables from different schemas. These schemas are named differently in the development, staging and production systems.
The obvious solution in this situation would be to use schema mapping. But unfortunately schema mapping only seems to work for the default schema of a stored procedure. When trying to reference an authoring schema in a stored procedure (e.g. JOIN "AUTHORING_SCHEMA"."SOME_TABLE" ON ...) you get the error message "invalid schema name". So it seems like I can only use schema mapping for one of the tables but not for both.
I know I can read the schema mappings in my stored procedure by querying the table "_SYS_BI"."M_SCHEMA_MAPPING", but I can't find out how to query from a schema when I have the schema name in a variable.
I would try to work around this limitation by defining two synonyms using .hdbsynonym.
For details on how to create design time synonyms using .hdbsynonym check https://help.sap.com/saphelp_hanaplatform/helpdata/en/52/78b5979128444cb6fffe0f8c2bf1e3/content.htm and https://help.sap.com/saphelp_hanaplatform/helpdata/en/4c/94a9b68b434d26af6d878e5f51b2aa/content.htm
There you can also find a description of how schema mapping works with .hdbsynonym.
For details on synonyms in general see https://blogs.sap.com/2016/12/05/using-synonyms-in-sap-hana/
I solved this with a workaround which I am not entirely happy about, but which works for now.
I created a second stored procedure with the second schema as its default schema. This procedure does nothing but SELECT the contents of the second database table.
The first stored procedure calls the second one to load the data into a local table variable and then performs a JOIN between the first database table and the table variable.
This works reasonably well because the second table is rather small (16 rows at the moment, unlikely to grow beyond 100). But I wouldn't want to do it with a larger table.
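Roughly what that workaround could look like in SQLScript; this is only a sketch with invented schema, procedure, table and column names (in a repository .hdbprocedure the default schema would come from the schema mapping rather than a hard-coded DEFAULT SCHEMA clause):

CREATE PROCEDURE "READ_SECOND_TABLE" (
    OUT et_rows TABLE (id INTEGER, descr NVARCHAR(100)))
  LANGUAGE SQLSCRIPT DEFAULT SCHEMA "SECOND_SCHEMA" READS SQL DATA AS
BEGIN
  -- its only job: expose the table that lives in the second schema
  et_rows = SELECT id, descr FROM "SECOND_TABLE";
END;

CREATE PROCEDURE "JOIN_BOTH_TABLES" (
    OUT et_result TABLE (id INTEGER, amount DECIMAL(15,2), descr NVARCHAR(100)))
  LANGUAGE SQLSCRIPT DEFAULT SCHEMA "FIRST_SCHEMA" READS SQL DATA AS
BEGIN
  -- load the second table into a local table variable ...
  CALL "READ_SECOND_TABLE" (lt_second);
  -- ... and join it with the table from the first schema
  et_result = SELECT f.id, f.amount, s.descr
              FROM "FIRST_TABLE" AS f
              INNER JOIN :lt_second AS s ON s.id = f.second_id;
END;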

How to get IQueryable.ToList() with database fieldnames instead of entity properties

The scenario:
I have a considerable number of entities as Code First models, mapped to the database field names with the [Column("str")] attribute.
I have a bunch of Reporting Service Reports (in local-mode) with the DataSets mapped to the database field names.
I can't pass the direct results of LINQ queries to those reports with the ToList() method because of the field names. What I could do (and am trying to avoid) is write a select new for each object, or run each query through a different data source.
Question:
I would like to know if there is any trick to get an IQueryable object with the original field names instead of the property names. Something like a dynamic select new.
Any suggestions will be appreciated.
No, there isn't. The database column names either have to match the property names, or you have to use the Column attribute to make them line up. Those are your only choices.

Tweaking a TDbGrid

I am taking my first stumbling steps into DB-aware controls (any good tutorials?).
I have a MySql table with 6 columns and have managed to load it into a TDbGrid.
One of the columns however is an index into another table. It is a bar code and, rather than display that, I would like to display the product name associated with it.
How do I do that?
(and can I hide the "gutter" (?) down the left which shows the current row?)
Thanks
You should always perform a join from the SQL side; it's much easier than doing it programmatically.
Such as:
SELECT mytable.id, mytable.column1, another_table.name
FROM mytable
JOIN another_table ON another_table.id = mytable.barcode_id
To remove the gutter you need to uncheck the dgIndicator flag in the DBGrid's Options property.
As for "DB-aware controls", you should try the Delphi help.
Instead of a table, make use of a query. Then, use a join to select the product name with it, like this:
SELECT
t.*,
p.name
FROM
YourTable t
INNER JOIN Product p on p.barcode = t.barcode
I use t.* because I don't know the exact columns. In practice, I would not use select *, but specify specific columns instead. If you are going to use * anyway, you can hide specific columns by setting the Visible property of the TField object in the dataset/query to False.
I don't know which components you are using to connect to the table, but most of them do have a query-counterpart that allows you to insert SQL instead of a table name.
The gutter can be hidden by going to the Options property in the Object Inspector, expanding it, and setting dgIndicator to False.
Just for the record: with ISAM databases like Paradox and DBF the typical solution would be a so-called master-detail table relation, and it might still work with SQL, though it would be very inefficient and slow. You should definitely read some books about SQL.
Use a TQuery component instead of a TTable and set its SQL property using the select statements suggested above. If you add only the columns you want to display to your SQL statement, you get the result as expected. As for the "gutter", you would have to hack the grid in some way at runtime.

Delphi - TUpdateObject versus OnUpdateRecord for a join SQL statement

I have a pFIBDataSet (which works similarly to a BDEDataSet) in which I need to make the following join selection:
select table.Name as name,
       table_1.Name as name_1,
       table_2.Name as name_2
from table
left join table table_1 on table.id = table_1.id
left join table table_2 on table.id = table_2.id
Fields name, name_1 and name_2 are linked to some data-aware edits. Now, after I modify the name, name_1 and name_2 fields (update, delete, insert operations), I want the changes to be written back to the underlying tables. Based on the wiki page Using_Multiple_Update_Objects_Index I can use UpdateObjects or the OnUpdateRecord event.
The problem is that I don't understand how this needs to be implemented. I have the join select on the query; how do I need to define and work with the name_1 and name_2 fields? Can someone provide an example of this?
I know how to accomplish this with subqueries. I need to see how I can do it using UpdateObjects or OnUpdateRecord.
TpFibUpdateObject works like a trigger on the client side. To make it work, set the following properties:
DataSet - dataset (master) to monitor
KindUpdate - Insert/Update/Delete - action to monitor
SQL - command to execute when action is fired, params are taken from DataSet
ExecuteOrder - AfterDefault/BeforeDefault - probably you need after / master
BUT, instead of using a lot of UpdateObject components and such a tangled approach, I recommend two alternative (read: better) ways:
An updatable view. It will work like a "virtual table": create a view that joins these three tables and write BEFORE INSERT/UPDATE/DELETE triggers. In Delphi use it as a regular table: select from the view, insert into the view, update the view and delete from the view. I suppose you need these tables linked together in many places anyway.
Use EXECUTE BLOCK statements in your TpFIBDataSet SQLs to insert / update / delete all the tables in one batch (see the sketch below).
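For illustration only, a hedged sketch of what such an EXECUTE BLOCK could look like as the dataset's update command; the table, column and parameter names are assumed, and the :ID, :NAME, :NAME_1 and :NAME_2 parameters would have to be bound from the dataset's fields:

EXECUTE BLOCK (
  p_id     INTEGER      = :ID,
  p_name   VARCHAR(100) = :NAME,
  p_name_1 VARCHAR(100) = :NAME_1,
  p_name_2 VARCHAR(100) = :NAME_2)
AS
BEGIN
  -- update all three tables in one round trip
  UPDATE base_table SET Name = :p_name   WHERE id = :p_id;
  UPDATE table_1    SET Name = :p_name_1 WHERE id = :p_id;
  UPDATE table_2    SET Name = :p_name_2 WHERE id = :p_id;
END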
Solution: in OnUpdateRecord, a TUpdateObject must be created for each field from the joined tables:
UpdateObjectvariable.DataSet := Dataset;
fill the SQL text
Apply.
After all the update objects have been executed, UpdateAction := uaApplied; must be set.

How to make sure that it is possible to update a database table column only in one way?

I am using Ruby on Rails v3.2.2 and I would like to "protect" a class/instance attribute so that a database table column value can be updated only one way. That is, for example, given I have two database tables:
table1
- full_name_column
table2
- name_column
- surname_column
and I manage the table1 so that the full_name_column is updated by using a callback stated in the related table2 class/model, I would like to make sure that it is possible to update the full_name_column value only through that callback.
In other words, I should ensure that the table1.full_name_column value is always
"#{table2.name_column} #{table2.surname_column}"
and that it can't be another value. So, for example, if I try to "directly" update the table1.full_name_column, it should raise something like an error. Of course, that value must be readable.
Is it possible? What do you advise for handling this situation?
Reasons for this approach...
I want to use that approach because I am planning to perform database searches on table1 columns, where table1 contains other values related to a "profile"/"person" object... otherwise I would probably have to make some hack (maybe a complex one) to direct those searches to table2 so as to look for "#{table2.name_column} #{table2.surname_column}" strings.
So, I think that a simple way is to denormalize the data as explained above, but it requires implementing an "uncommon" way of handling that data.
BTW: an answer should either aim to "solve" the related process or suggest a better approach to handling the search functionality.
Here are two approaches for maintaining the data at the database level...
Views and materialized tables.
If possible, table1 could be a VIEW or, for example, a MATERIALIZED QUERY TABLE (MQT). The terminology differs slightly depending on the RDBMS; I think Oracle has MATERIALIZED VIEWs whereas DB2 has MATERIALIZED QUERY TABLEs.
A VIEW is simply access to data that physically lives in a different table, whereas a MATERIALIZED VIEW/QUERY TABLE is a physical copy of the data and is therefore, for example, not in sync with the source data in real time.
Anyway, these approaches would provide read-only access to data that is owned by table2 but accessible through table1.
Example of very simple view:
CREATE VIEW table1 AS
SELECT surname||', '||name AS full_name
FROM table2;
Triggers
Sometimes views are not convenient, as you might actually want to have some data in table1 that is not available from anywhere else. In these cases you could consider using database triggers, i.e. create a trigger so that when table2 is updated, table1 is also updated within the same database transaction.
With triggers the problem might be that you then have to give the client privileges to update table1 as well. Some RDBMSs might provide ways to tune the access control of triggers, i.e. the operations performed by the TRIGGER run with different privileges from the operations that initiate the TRIGGER.
In this case the TRIGGER could look something like this:
CREATE TRIGGER UPDATE_NAME
AFTER UPDATE OF NAME, SURNAME ON TABLE2
REFERENCING NEW AS NEWNAME
FOR EACH ROW
BEGIN ATOMIC
  UPDATE TABLE1 SET FULL_NAME = NEWNAME.SURNAME||', '||NEWNAME.NAME
  WHERE SOME_KEY = NEWNAME.SOME_KEY;
END;
By replicating the data from table2 into table1 you've already de-normalized it. As with any de-normalization, you must be disciplined about maintaining sync. This means not updating things you're not supposed to.
Although you can wall off things with attr_accessible to prevent accidental assignment, the way Ruby works means there's no way to guarantee that value will never be modified. If someone's determined enough, they will find a way. This is where the discipline comes in.
The best approach is to document that the column should not be modified directly, block mass-assignment with attr_accessible, and leave it at that. There's no concept of a write-protected attribute, really, as far as I know.
