I am using Delphi 7 and ZeosLib 6.6.6 to access a SQLite3 database.
What is the best practice for sharing the database between users?
I plan to put the database file (data.db3) in a shared location, and the Delphi application runs on each user's local desktop computer.
I want to know how to manage database locking, for example how to detect whether the database is currently locked by a particular user, and similar concerns.
Thanks.
SQLite3 handles database sharing by default, locally on the same computer. You have nothing to do: just open the database several times on your hard drive. Of course, this has some overhead, and locking will make it slower than access from one unique process.
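The question is Delphi-specific, but the locking behavior is the same from any client. As a minimal illustration (in Java, using the xerial sqlite-jdbc driver; the table and timeout value are arbitrary choices), each process simply opens the same file and tells SQLite how long to wait on a competing lock:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SharedDbAccess {
    public static void main(String[] args) throws Exception {
        // Each process opens its own connection to the same local file.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:data.db3");
             Statement st = conn.createStatement()) {
            // Wait up to 5 seconds for a competing lock to clear instead of
            // failing immediately with SQLITE_BUSY.
            st.execute("PRAGMA busy_timeout = 5000");
            st.execute("CREATE TABLE IF NOT EXISTS log(msg TEXT)");
            st.execute("INSERT INTO log(msg) VALUES ('hello from this process')");
        }
    }
}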
But if by "in a shared location" you mean a network drive, as your question suggests, it probably won't work as expected.
Locking files over a network is not safe (at least in the Windows world). See http://www.sqlite.org/cvstrac/wiki?p=SqliteNetwork
You should instead rely on a true client/server approach, which is still possible with SQLite3 on the server and clients accessing it over the network. See e.g. our RESTful server using JSON and several protocols.
You can put a SQLite database on a shared network resource, but according to the SQLite documentation that is not recommended. The main reason: SQLite cannot reliably manage locking on a shared resource.
If you need multi-user access to a SQLite database, then you may consider using middleware, like Data Abstract. As a driver for Data Abstract you can use our library AnyDAC. Some articles: Using SQLite with AnyDAC and Using Data Abstract with AnyDAC. In the first article, check "Connecting to SQLite database from Delphi application" for usage cases, including how to set up concurrent access.
Related
Are there major disadvantages to using the embedded Firebird 3 in a multi-user application server (Delphi WebBroker) instead of the full-blown server install?
The application usually has very short transactions with low data volume.
As far as I know, accessing one database file from multiple threads through the embedded server is not problematic, but user security is not available. Since the application server handles access rights itself, I do not need Firebird security.
But will I lose performance or features like garbage collection?
Firebird Embedded provides all the features (except network access and authentication) that a normal Firebird server provides. However, because it runs in-process, any problem that causes your application to crash will take Firebird down with it, and vice versa.
Other possible downsides:
Garbage collection will, as far as I know, always use the 'cooperative' model (the connection that finds old record versions is the one that cleans them up).
You can't use other tools to access your database remotely, which may make administration harder.
You can't put your database on a separate server from your web application (think of security requirements).
Personally, I would only choose Firebird Embedded if the situation calls for it. In all other situations, I will use Firebird Server.
What is the difference between ADOTable and ClientDataSet?
Both components are capable of performing batch updates, so why add the overhead of two additional components, ClientDataSet and DataSetProvider?
The main difference is that a ClientDataSet can operate without a connection to an external database. You can use it as an in-memory table or load its contents from a file.
In combination with DataSetProvider it is frequently used to overcome limits of unidirectional datasets and as a cache.
A ClientDataSet is an in-memory dataset with a lot of useful additional functionality.
One big advantage compared to InterBase/Firebird tables and queries is that you don't need to keep a transaction alive, for example while you display the data in a grid.
Have a look at this article:
A ClientDataSet in Every Database Application
ClientDataSet is a generic implementation that works regardless of the underlying DB access library. Through the provider it can work with any TCustomDataset descendant, be it a dbExpress dataset, a BDE one, an ADO one, or any of the many libraries available for Delphi that allow direct database access using the native client (e.g. ODAC, Direct Oracle Access, etc.).
It can also work in a multi-tier mode, where the data access dataset and provider sit in a remote server application and the TClientDataset sits in the client application. This allows for "thin client" deployment that doesn't require a database client or data access library like ADO installed on the client (the required midas.dll code can be linked into the application when using recent versions of Delphi; otherwise only midas.dll itself is required).
On top of that, it can be used as an in-memory table able to store data in a local file. It also allows for the "briefcase" model, where a thin client can still work when not connected to the database and then "sync" when a connection becomes available. That was more useful in the past, when wireless access was not common.
As you can see, TClientDataset offers a lot more than a TADODataset.
The most important difference I can think of is resolving update conflicts. TClientDataSet exposes the handy ReconcileErrorForm dialog, which wraps up the process of showing the user the old and new records and lets them specify what action to take, while with TADOTable, for instance, you're basically on your own.
I want to know the best architecture to adopt for this case:
I have many shops that connect to a web application developed using Ruby on Rails.
The internet is not reachable all the time.
The solution was to develop an offline system which requires installing a local copy of the distant database.
All this was already developed.
Now, what I want to do:
Always work on the local copy of the database.
Any change to the local database should be synchronized with the distant database.
All the local copies should hold the same data as the other local copies.
To solve this problem I thought about using JMS-like messaging software, possibly RabbitMQ.
This consists of pushing every SQL request into a queue that is consumed by the distant instance of the application, which applies it to the distant DB and pushes the statement onto another queue read by all the local instances (a rough sketch of that fanout step is below). This seems complicated and could slow down the application.
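A rough sketch of that fanout step (using the RabbitMQ Java client; the host, exchange name, and statement are only illustrative):

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class SqlFanout {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("central.example.com"); // illustrative host name
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // A fanout exchange copies each statement to every shop's queue.
            channel.exchangeDeclare("sql-replication", "fanout", true);
            String sql = "INSERT INTO orders(shop_id, total) VALUES (3, 19.90)";
            channel.basicPublish("sql-replication", "", null,
                    sql.getBytes(StandardCharsets.UTF_8));
        }
    }
}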
Is there a design or recommendation I should follow to solve this kind of problem?
You can do that, but essentially you are developing your own replication engine. Those things can be tricky to get right (what happens if m1 and m3 are executed on replica r1, but m2 isn't?). I wouldn't want to develop something like that unless you are sure you have the resources to make it work.
I would look into an existing off-the-shelf replication solution. If you are already using a SQL DB, it probably has some support for this; look here for more details if you are using MySQL.
Alternatively, if you are willing to explore other backends, I heard that CouchDB has great support for replication. I also heard of people using git libraries to do that sort of thing.
Update: After your comment, I realize you already use MySQL replication and are looking for a solution for re-syncing the databases after being offline.
Even in that case RabbitMQ doesn't help you at all, since it requires a constant connection to work, so you are back to square one. The easiest solution would be to just write all the changes (SQL commands) into a text file at the remote location; when you get the connection back, copy that file (scp, ftp, email or whatever) to the master server, run all the commands there, and then resync all the replicas.
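A minimal sketch of that journal-and-replay step (in Java; the file name, JDBC URL, and credentials are placeholders):

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;

public class JournalReplay {
    public static void main(String[] args) throws Exception {
        // Each line of the journal is one SQL command recorded while offline.
        List<String> commands = Files.readAllLines(Path.of("offline-changes.sql"));
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://master.example.com/shops", "user", "secret");
             Statement st = conn.createStatement()) {
            conn.setAutoCommit(false);  // replay the whole journal atomically
            for (String sql : commands) {
                if (!sql.isBlank()) st.execute(sql);
            }
            conn.commit();              // then let normal replication resync the replicas
        }
    }
}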
Depending on your specific project, you may also need to make sure there are no conflicts when running commands from different remote locations, but there is no general technical solution to this. Again, depending on the project, you may want to cancel one of the transactions, notify the users that it happened, and so on.
I would recommend taking a look at CouchDB. It's a non-SQL database that does exactly what you are describing automatically. It's used especially in phone applications that often don't have internet or data connectivity. The idea is that you have a local copy of a CouchDB database and one or more remote CouchDB databases. The CouchDB server then takes care of the replication between the distributed systems, and you always work off your local database. This approach is nice because you don't have to build your own distributed replication engine. For more details, take a look at the 'Distributed Updates and Replication' section of their documentation.
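For illustration, kicking off such a replication is one HTTP call to CouchDB's _replicate endpoint; a sketch (host names and database name are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CouchReplicate {
    public static void main(String[] args) throws Exception {
        // Ask the local CouchDB to continuously push "shopdb" to the central server.
        String body = "{\"source\":\"shopdb\"," +
                "\"target\":\"http://central.example.com:5984/shopdb\"," +
                "\"continuous\":true}";
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5984/_replicate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
    }
}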
What I want to do: My application has a full connection to a Derby DB, and I want to poke around in the DB (read-only) in parallel (using a different tool).
I'm not sure how Derby actually works internally, but I understand that I can have only 1 active connection to a Derby DB.
However, since the DB is only consisting of files on my HDD, shouldn't I be able to open additional connections to it, in read-only mode?
Are there any tools to do just that?
There are two ways to run Apache Derby DB.
Embedded: you run the DB within your application → only one connection possible
Client: you start the DB as a server in a separate process → a classic DB with many connections
You can recognize the type by the driver JAR size: if the driver is larger than 2 MB, you are using the embedded version.
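Another way to tell them apart is the JDBC URL; for example (paths and host are examples):

import java.sql.Connection;
import java.sql.DriverManager;

// Embedded: the engine runs inside this JVM, which takes exclusive access to the files.
Connection embedded = DriverManager.getConnection("jdbc:derby:/data/mydb;create=true");
// Client: the engine runs in a separate server process (default port 1527).
Connection client = DriverManager.getConnection("jdbc:derby://localhost:1527/mydb");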
Update
When you start up the Derby engine (server or embedded), it takes exclusive access to the database files.
If you need to access a single database from more than one Java Virtual Machine (JVM), you will need to put a server solution in place. You can allow applications from multiple JVMs that need to access that database to connect to the server.
For details see Double-booting system behavior.
I realize this is an old question, but I thought I might add a little more detail on a solution since links in the currently accepted answer are broken.
It is possible to run the Derby Network Server within a JVM that is already using the embedded database. The code using the embedded Derby database doesn't need to change and can keep using the DB as is, but with the Derby Network Server started, other programs can connect to Derby and access the database.
All you need to do is ensure that derbynet.jar is on the classpath, and then do one of the following:
Include the following line in the derby.properties file: derby.drda.startNetworkServer=true
Specify it as a system property when starting Java:
java -Dderby.drda.startNetworkServer=true
You can use the NetworkServerControl API to start the Network Server from a separate thread within a Java application:
import java.io.PrintWriter;
import org.apache.derby.drda.NetworkServerControl;
NetworkServerControl server = new NetworkServerControl(); // listens on the default port, 1527
server.start(new PrintWriter(System.out)); // server log output goes to the console
More details here: http://db.apache.org/derby/docs/10.9/adminguide/tadminconfig814963.html
Keep in mind that doing this does not enable any security on this connection, so it is not a good idea on a production system. It is possible to add security, though, and that is documented here: http://db.apache.org/derby/docs/10.9/adminguide/cadminnetservsecurity.html
Two other ideas:
In your application, shut down the database and close the connection when the database is not actively in use. Then your application won't interfere with another tool which is trying to open the database.
Make a copy of your database by taking a backup (you can do this while the database is open in your application), then restore that backup to a separate place on your disk. You can then use another tool to access the copied database at your leisure.
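Derby supports exactly that kind of online backup via a system procedure; a minimal sketch (database name and backup directory are examples):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

// While the application still has the database open, copy a consistent
// snapshot of it into the given directory.
Connection conn = DriverManager.getConnection("jdbc:derby:mydb");
try (CallableStatement cs = conn.prepareCall("CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)")) {
    cs.setString(1, "/tmp/derby-backup");
    cs.execute();
}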
If you can afford the memory and do not need up-to-date data, then you can access read-only databases from multiple JVMs by creating in-memory copies:
ij> connect 'jdbc:derby:memory:memdb;restoreFrom=mydb';
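The same restoreFrom connection attribute works from plain JDBC too, e.g.:

import java.sql.Connection;
import java.sql.DriverManager;

// Loads a private in-memory copy of the on-disk database 'mydb';
// reads never touch (or lock) the original files.
Connection conn = DriverManager.getConnection("jdbc:derby:memory:memdb;restoreFrom=mydb");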
In the application I am designing, I have to communicate with a device and store a history of data readings in a database. The device is essentially a sensor that spits out numbers via the serial port. The user end of the application is a RubyOnRails interface that allows the user to view this data and configure the device.
I am wondering what kind of connection between the device and the database you would recommend for this kind of setup.
Up to this point, I had a custom application running on a host computer (a computer with the device connected directly through a serial port) that would serve as a bridge to a MySQL database. The application would connect directly to the MySQL database and execute queries. It works fairly well, but I am not sure if this is the best solution.
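In outline, the bridge does something like the following hypothetical sketch (Java here only for illustration; the device path, table, and credentials are placeholders, and the serial port is assumed to be already configured at the OS level):

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SerialBridge {
    public static void main(String[] args) throws Exception {
        try (BufferedReader serial = new BufferedReader(new FileReader("/dev/ttyUSB0"));
             Connection db = DriverManager.getConnection(
                     "jdbc:mysql://dbhost/sensors", "bridge", "secret");
             PreparedStatement insert = db.prepareStatement(
                     "INSERT INTO readings(value) VALUES (?)")) {
            String line;
            while ((line = serial.readLine()) != null) {
                // One line from the sensor = one numeric reading.
                insert.setDouble(1, Double.parseDouble(line.trim()));
                insert.executeUpdate();
            }
        }
    }
}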
The only other alternative I see is to have an intermediate application that my custom application could connect to, instead of directly going to the database. This could be a part of the main application, or something separate. Would this be a better solution?
Would you recommend another approach?
Thank you,
I have a similar structure, although I fetch my data from a web service. The way I organize it:
Create classes in lib/imports, e.g. DailyDataImport, DailyDataSummarize (you can organize the hierarchy and names as you wish).
Create a rake task under a new namespace, say import, and schedule it with cron at whatever frequency you need. Take a look at Cron in Ruby; it's helpful.
This allows me to have better control over what goes into my database.
Some questions to consider:
What schedule does the device follow to populate the data?
Do you need the data as-is, or do you want some control over it, or do you need to process it (summarizing, aggregating, etc.)?
MS SQL Server 2008 has great data synchronisation support.
SQL Server 2008 Express is free and can act as a replication subscriber (but not publisher) for clients.
Microsoft Sync Framework