How to refresh query on client when server has update, without disconnecting and reconnecting - Delphi

I developed a client/server database application using Firebird with IBDatabase and IBQuery. I need to know how to refresh the data on the server and the client when one of them runs an update/insert query. The reason being that when I run a query on the client after inserting records into a table, the new records do not show up in the query results until I disconnect and reconnect.
I'm using a Firebird DB with the InterBase VCL components, developing in Delphi XE2.

You don't have to disconnect, but you will have to refresh (or close and reopen) the IBQuery. This is the case for most databases.
If you do not want this, you will have to send a notification from the database to all clients. Firebird can do this with its event mechanism (a trigger or stored procedure calls POST_EVENT and registered clients get notified), but such a push facility is not common at all among databases.
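As a hedged sketch of the client's listening side with the IBX TIBEvents component (the component names and the event name 'customers_changed' are assumptions, not from your project):
procedure TForm1.FormCreate(Sender: TObject);
begin
  // Register for the event a server-side trigger posts via
  // POST_EVENT 'customers_changed'.
  IBEvents1.Database := IBDatabase1;
  IBEvents1.Events.Add('customers_changed'); // hypothetical event name
  IBEvents1.Registered := True;
end;

procedure TForm1.IBEvents1EventAlert(Sender: TObject; EventName: string;
  EventCount: Integer; var CancelAlerts: Boolean);
begin
  // Close and reopen the query so freshly committed rows become visible.
  IBQuery1.Close;
  IBQuery1.Open;
end;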

The transaction type for your select query is probably snapshot. You can either start a new snapshot transaction each time you want to refresh, or use transaction type read committed.
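As a minimal sketch (assuming an IBX TIBTransaction named IBTransaction1 that your select query runs in), switching it to read committed looks like this:
IBTransaction1.Params.Clear;
IBTransaction1.Params.Add('read_committed'); // see rows committed by others
IBTransaction1.Params.Add('rec_version');
IBTransaction1.Params.Add('nowait');
// A plain close/reopen of the query then picks up the new rows:
IBQuery1.Close;
IBQuery1.Open;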

Related

How do you change your active database connection in SQL Workbench/J?

Is it possible to change your active/default database connection in SQL Workbench/J while still under a single connection profile? There are times I am connected to a database server with multiple databases and I would like to switch my active database without having to use a USE statement, specify the full 3 part naming convention, or switch connection profiles entirely. In SSMS, there is a simple drop-down menu to easily switch between different databases. Just wondering if there is something similar in SQL Workbench/J that I'm just missing.
There is an experimental feature to enable a dropdown with the available databases in the main window.
If you run
WbSetDbConfig gui.enable.dbswitcher=true;
in a SQL editor tab when connected to a SQL Server database, then you should have a dropdown to switch the current database after restarting SQL Workbench/J.
It will essentially issue a USE in the background for the current connection when using SQL Server.

Copying remote Firebird table to local database

I have a remote Firebird 3.0 server with a database. In this database there is a big table that clients query very often during their work. There are many clients and the internet connection is poor, so working with this table is terrible. I made a local copy of this table via IBExpert into a temporary database, which is distributed with the client application.
But now some values in this table need to change (new values added, some old ones edited). So I need some kind of synchronization - copying the modified remote table to the client's local database.
The client application was built with Delphi 10.1 Berlin, so the synchronization should be done in Delphi code.
Can you give me an idea of how to synchronize such a big table correctly, please?
You could fire POST_EVENT on the master database (from insert, update, and delete triggers) to notify client applications that there are changes.
Your client would then need to run a procedure (on the local DB) to do the sync. This could be done with EXECUTE STATEMENT ... ON EXTERNAL, for example (:last_sync standing for the timestamp of the previous successful sync):
FOR EXECUTE STATEMENT ('SELECT ... FROM tablename WHERE tablename.modifiedon >= :last_sync')
ON EXTERNAL 'SERVER/PORT:DBPATH'
You should include the date of insert/modification/deletion for each row in the master DB.
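On the Delphi side, here is a hedged sketch of the pull step; the component names (qryRemote and qryLocal, assumed to be IBX TIBQuery instances on the remote and local connections) and the table/column names are assumptions for illustration. It fetches rows changed since the last sync and upserts them locally with Firebird's UPDATE OR INSERT:
procedure TSyncModule.PullChanges(const LastSync: TDateTime);
begin
  // Fetch only the rows modified on the server since the last sync.
  qryRemote.SQL.Text :=
    'SELECT id, name, modifiedon FROM big_table WHERE modifiedon >= :since';
  qryRemote.ParamByName('since').AsDateTime := LastSync;
  qryRemote.Open;
  try
    while not qryRemote.Eof do
    begin
      // UPDATE OR INSERT ... MATCHING is Firebird's built-in upsert.
      qryLocal.SQL.Text :=
        'UPDATE OR INSERT INTO big_table (id, name, modifiedon) ' +
        'VALUES (:id, :name, :modifiedon) MATCHING (id)';
      qryLocal.ParamByName('id').AsInteger :=
        qryRemote.FieldByName('id').AsInteger;
      qryLocal.ParamByName('name').AsString :=
        qryRemote.FieldByName('name').AsString;
      qryLocal.ParamByName('modifiedon').AsDateTime :=
        qryRemote.FieldByName('modifiedon').AsDateTime;
      qryLocal.ExecSQL;
      qryRemote.Next;
    end;
  finally
    qryRemote.Close;
  end;
end;
Note that deletions cannot be detected from a modified-on column alone; a deletion log table (or a soft-delete flag) on the master would be needed for those.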

Most efficient way to pass SQL Login credentials to Delphi Datasnap servers?

Using Delphi XE to build a relatively straightforward database app with Datasnap.
Since some security in my application is handled at the database level, I need to pass a user's SQL credentials from my client app to my Datasnap server.
(I'm trying to make the Datasnap server stateless if possible, so recognise that I will have to do this for every call.)
I'm using ClientDatasets (CDS) on the client side so I could use OnBeforeGetRecords to pass the data in the OwnerData OleVariant from the CDS on the client to the corresponding TDataSetProvider on the server. But that means every single CDS on every data module has to have an event that does this, which seems messy and unwieldy. I can't help feeling there must be a way to pass messages to the server at a higher level than that.
What I'd really like is something like this at the DSServerClass level on the server side:
procedure TMyServerContainer.MyServerClassCreateInstance(DSCreateInstanceEventObject: TDSCreateInstanceEventObject);
begin
  // Server detects request for data from client app
  fUsername := GetUsernameFromClientSomehow;
  fPassword := GetPasswordFromClientSomehow;
  // Create data modules and initialise
  MyDataModule := TMyDataModule.Create(nil);
  MyDataModule.InitialiseWithSQLCredentials(fUsername, fPassword);
  DSCreateInstanceEventObject.ServerClassInstance := MyDataModule;
end;
Could the Authentication Manager component help me here? Any other ideas? Or am I stuck with OnBeforeGetRecords?
Many thanks.
You can use the SQL credentials as UserName and Password for connecting to the DataSnap server. These values can be verified in the Authentication Manager and/or simply forwarded to the underlying SQLConnection component for connecting to the SQL server.
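As a hedged sketch of that approach (component and control names are assumptions): the client passes the SQL credentials as the DataSnap login, and the server's authentication event forwards them to its database connection:
// Client side: a TSQLConnection using the DataSnap driver.
ClientConnection.Params.Values['DSAuthenticationUser'] := edtUser.Text;
ClientConnection.Params.Values['DSAuthenticationPassword'] := edtPassword.Text;

// Server side: forward the caller's credentials to the dbExpress connection;
// the SQL server itself decides whether they are valid.
procedure TServerContainer1.DSAuthenticationManager1UserAuthenticate(
  Sender: TObject; const Protocol, Context, User, Password: string;
  var valid: Boolean; UserRoles: TStrings);
begin
  DBConnection.Params.Values['User_Name'] := User;
  DBConnection.Params.Values['Password'] := Password;
  try
    DBConnection.Open;
    valid := True;
  except
    valid := False; // bad credentials: reject the DataSnap login as well
  end;
end;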
The most secure way would be to pass along the user's security token (encrypted) and then use integrated security on the server side, impersonating the calling user's security context in a thread. This way no user/password would ever be sent across the wire. Unfortunately, while MS/DCE RPC can do this for every call (and so can DCOM, being built on top of RPC), Datasnap can't (SPNEGO/GSSAPI/SSPI looks too complex for the guys at Embarcadero; they like simple, insecure protocols). Otherwise, be very careful how you send credentials across the network: they can easily be sniffed unless properly protected.
I would advise you anyway to send them only once, if you need to (and in the most protected way you can), then store them protected on the server side (using the Windows protected storage facilities), and send back to the client a handle/session token (tied to the originating IP) to be used in subsequent calls instead of resending the credentials each time. The stored information is cleared when the user logs off or the session times out.
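A hedged sketch of that handle/session-token idea; everything here is hypothetical (CredentialsValid stands for whatever validation you use, and FSessions would be a TDictionary<string, string> field whose contents should themselves be stored protected):
// Server method: validate the credentials once, hand back an opaque token
// that the client sends on subsequent calls instead of the password.
function TAuthServerModule.Login(const UserName, Password: string): string;
var
  G: TGUID;
begin
  if not CredentialsValid(UserName, Password) then // hypothetical check
    raise Exception.Create('Invalid credentials');
  CreateGUID(G);
  Result := GUIDToString(G);
  FSessions.Add(Result, UserName); // remember who this token belongs to
end;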

Oracle ODP.Net and connection pooling

this is really two questions in one I guess.
We've developed a .Net app that accesses an Oracle database, and have noticed that after changing the user's Oracle password, the app continues to work for a short time with the old password in the connection string. Presumably this is something to do with the way existing connections are pooled?
When first investigating this we tried turning off pooling in the connection string, however the app wouldn't work, throwing the error "Unable to enlist in a distributed transaction" at the point it tries to open a connection. While we probably wouldn't want to turn off connection pooling in a production app, I'm curious as to why MSDTC seems to need it?
We are using Oracle 11g (11.1.2) and latest ODP.Net (11.2 I think).
Thanks in advance
Andy
Please see some of the findings below:
For Question One: (application still connected with old DB password)
If we connect to the database with the connection pooling option, the connection pool manager creates and maintains a number of connection sessions when Open or Close is first called on the OracleConnection object (the number of sessions depends on the "min" and "max" pool size in the connection string). In Oracle, you can check the active sessions like this:
SELECT s.inst_id,
s.sid,
s.serial#,
p.spid,
s.username,
s.program
FROM gv$session s
JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
WHERE s.type != 'BACKGROUND';
And according to the Oracle documentation, the connection pooling service closes connection sessions after 3 minutes of inactivity. [ http://docs.oracle.com/html/E10927_01/featConnecting.htm ]
So the most likely reason is that your application was still connected to the database through this pool, and stayed connected for a short time even after you changed the database password.
It could also possibly be the "Oracle Client Cache" feature in ODP.net, but I'm not quite sure; you can check [ http://www.oracle.com/technetwork/issue-archive/2008/08-jul/o48odpnet-098170.html ]
For Question Two: (why MSDTC needed)
If you are using nested database connections in your code, the transaction will be promoted to DTC. [ http://petermeinl.wordpress.com/2011/03/13/avoiding-unwanted-escalation-to-distributed-transactions/ ] There is actually the Oracle Services for Microsoft Transaction Server (OraMTS) acting as the bridge between ODP.net, the DTC, and the Oracle database.
But you didn't hit this problem (MSDTC) before you disabled connection pooling. It seems your code was reusing the same connection from the underlying connection pool, which can eliminate the need to promote to DTC. There is a similar question on Stack Overflow. [ Why isn't my transaction escalating to DTC? ]

Blackberry | Keeping local Persistent Storage up to date with remote database

I'm developing a BlackBerry application to remotely access an external customer database.
Selected employees can change customer entries via a web interface accessible on our intranet.
I don't want the BlackBerry to contact the database on every request, so I built in local storage, which holds the BlackBerry user's top 50 selected customers.
What's the best practice to keep both record sets in sync? I thought about creating a hashcode of each record to reduce the data size to transfer (and thus the energy necessary to transmit it). Can anyone here tell me what they do to reduce requests from a mobile device?
Thanks,
rAyt
In a couple of different situations I've added a created/modified timestamp to each record. On a successful sync with the server, you note the last server time, store it on the client, and on the next sync only get the records (if any) that have changed since the last one. This will reduce data but you may still have to deal with records that were changed on both client and server since the last sync.
