Are there major disadvantages to using embedded Firebird 3 in a multi-user application server (Delphi WebBroker) instead of the full-blown server install?
The application usually has very short transactions with low data volume.
As far as I am informed, accessing one database file from multiple threads through the embedded server is not a problem, but user security is not available. Since the application server handles the access rights itself, I do not need Firebird security.
But will I lose performance or features such as garbage collection?
Firebird Embedded provides all the features (except network access and authentication) that a normal Firebird server provides. However, because it runs in-process, any problem that causes your application to crash will take Firebird down with it, and vice versa.
Other possible downsides:
Garbage collection will - as far as I know - always use the 'cooperative' model (where the connection that encounters old record versions is the one that cleans them up),
You can't use other tools to access your database remotely, which may make administration harder,
You can't put your database on a separate server from your web application (think of security requirements).
Personally, I would only choose Firebird Embedded if the situation calls for it. In all other situations, I would use Firebird Server.
Looking at the node-postgres documentation on connecting to a database server, it looks like the Client and Pool constructors are functionally equivalent.
My understanding is that using the Pool constructor provides you with the same functionality as using the Client constructor except that connections are made from a connection pool.
Isn't this always desirable? What are the conditions that I would choose to use the Client constructor over the Pool constructor?
One fairly good explanation can be found here: https://gist.github.com/brianc/f906bacc17409203aee0. From that post:
I would definitely use a single pool of clients throughout the application. node-postgres ships with a pool implementation that has always met my needs, but it's also fine to just use the require('pg').Client prototype and implement your own pool if you know what you're doing & have some custom requirements on the pool.
The drawback to using a pool for each piece of middleware or using multiple pools in your application is you need to control how many open clients you have connected to the backend, it's more code to maintain, and likely won't improve performance over using a single pool. If you find requests often waiting on available clients from the pool you can increase the size of the built in pool with pg.defaults.poolSize = 100 or something. I've set the default at 20 which is a sane default I think. If you have long running queries during web requests you probably have bigger problems than increasing your pool size is going to solve.
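To make the difference concrete, here is a minimal TypeScript sketch of the single-shared-pool approach contrasted with a bare Client. It is only an illustration: the connection string and the users table are placeholders, and in current pg releases the pool size is set via the Pool constructor's max option rather than the older pg.defaults.poolSize mentioned above.

```typescript
// pool-vs-client.ts - a minimal sketch, not the library's official example.
// The connection string and table are placeholders.
import { Pool, Client } from 'pg';

const connectionString = 'postgres://user:secret@localhost:5432/mydb'; // placeholder

// One pool for the whole application, sized for the expected concurrency.
const pool = new Pool({ connectionString, max: 20 });

// Typical request handler: pool.query() checks a client out of the pool,
// runs the query, and releases the client back automatically.
export async function getUser(id: number) {
  const res = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  return res.rows[0];
}

// A bare Client is a single dedicated connection you manage yourself -
// handy for one-off scripts, LISTEN/NOTIFY, or a hand-rolled pool.
export async function oneOffQuery() {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    const res = await client.query('SELECT now()');
    return res.rows[0];
  } finally {
    await client.end(); // the physical connection is closed here
  }
}
```

For multi-statement transactions you would still check a single client out with pool.connect() and release() it when done, so the whole transaction runs on one connection; that is the main case where pool.query() alone is not enough.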
Delphi XE2 + ZeosLib 7.0.3 Stable + Firebird 1.0
I am updating several tables, but the data seems to stay in memory: the changes are not reflected in the database in a way that other applications can see them.
I have tried using auto-commit only, and it did not work.
I have also used explicit transaction control with ZConnection.StartTransaction and ZConnection.Commit, and that did not work either.
I am updating the data in a web server written in Delphi with the Indy HTTP server. I receive POST requests and then read from or update the database. The connection is stateless; however, I maintain a list of connected client apps and an instance of ZConnection for each client, to get isolation since the requests are threaded.
Besides that, the data is not saved even with only one client connected, making one request at a time with no overlapping or re-entrancy.
I would like advice on using Firebird in this scenario: what should I do to make the commit work?
I have a Client/Server application written in Delphi. Essentially all the application does is transfer XML data streams between a server application and connected clients. I am currently using the Indy TIdTCPServer component, but the server-side application keeps crashing on some of my installations, and it has been extremely difficult to debug. So I am wondering if there is some "architecture" I should be using that handles all the TCP/IP connection management and database connection pooling, allowing me to concentrate on the business logic.
Here are more details:
clients must maintain a "persistent" connection. There are times when the server must notify and send data to all connected clients.
clients are connecting from laptop computers using wireless aircards. So network "drops" are pretty common.
The backend database is SQL Server.
There can be upward of 100 computers simultaneously connected at a time.
When the server gets a new connection (TCPServer.OnConnect), I instantiate my own object containing its own SQL Server database connection. When TCP connections are dropped, I in turn free these objects (and their associated database connections).
Client applications have a TTimer built into them. They routinely send heartbeats to the server, and if they "drop"/"lose" their connection they automatically establish a new one once the network is back.
Anyone have any suggestions on the best approach/architecture here?
I presume the Indy components would work, but at the same time I feel I am "reinventing the wheel" with respect to managing the connections.
Three component sets I am aware of that will take care of the nitty gritty technical aspects of client server applications for you:
kbmMW: http://components4developers.com/
Asta: http://www.astatech.com/index.asp
RemObjects: http://www.remobjects.com/
You may have to rework your applications to take advantage of the way these component sets work, but assuming you have properly separated layers that shouldn't be too much of a hassle, and it will buy you the advantage of well-tested and widely used code for your client/server work.
If you want some light TCP/IP components, take a look at our SynCrtSock unit.
You'll find low-level classes to create IP Client and Servers.
We implemented both TCP/IP and UDP/IP in one of our applications.
There is also a THttpServer class, which implements an HTTP/1.1 server and therefore follows HTTP/1.1 connection management. Optional compression is available, and using HTTP/1.1 on a port other than 80 is not a bad idea. What is good about HTTP/1.1 is that it can pass through firewalls and can easily be tunneled over a VPN or hosted behind another HTTP server (like IIS or Apache) acting as a proxy. There is even a FastCGI class if you need such a server for a Linux-based solution.
Of course, a THttpClientSocket class does the same on the client side.
We use these classes to add HTTP/1.1 connection to our Open Source SQLite3 RESTful framework - http://synopse.info/forum/viewforum.php?id=2
See http://synopse.info/fossil/artifact?name=722e896e3d7aad1fe217b0e2e7903483e66d66d1 for the SynCrtSock unit. Open source; it works from Delphi 7 to Delphi 2010.
Misha Charrett's CSI Application Framework covers pretty much exactly what you're asking for.
It's an open source Delphi framework that at its heart is a distributed message passing and threading framework that allows XML message passing from both client to server and server to client.
It can handle disconnections/reconnections and high client numbers, and there's an optional virtual database library that will handle SQL Server (or you could just use the same SQL Server access you're using now).
It's not particularly well known yet but I can tell you that it's been actively developed over the last few years and that the author Misha is very keen to assist anyone who's interested in using it in their application.
Well, it would probably require a complete rewrite of much of your C/S code, but instead of using the Indy components, you could try a COM+ solution. Basically, you would create a COM+ component that is installed on the server, and your client applications would connect to this component and call its functions directly. Transaction management would be handled by Windows itself. It's also technically possible to create events, which would allow the server to do callbacks to the client, although that would make things a bit more complicated.
I don't think this solution would work out for you, though, unless you have a lot of experience with COM development in Windows and/or you're brave enough to try something different.
In the past, I had a similar problem where hundreds of clients had to connect to a single server, doing all kinds of database transactions. It has a steep learning curve, but my team and I managed to get things working, and once we understood the technique it resulted in a very stable and reliable solution that managed to support up to 500 users simultaneously doing updates and other actions in a one-time extreme stress test. But again, the learning curve is steep, so it might not be the solution you're looking for.
(Still, COM+ will use a lot of functionality that's built into Windows, like transaction management, database connection pooling and more.)
If you use Indy, each connection will get its own thread.
Anyway, for connecting to MSSQL I suggest SDAC from Devart (http://www.devart.com/sdac/), and for the connection layer HPScktSrvr, based on I/O completion ports, from http://www.torry.net/authorsmore.php?id=7131 (though I don't know what changes it will need to cope with the TThread changes in newer VCL versions).
You build your client class around THPServerClient, set your new class as the server's ClientClass, and the framework will automatically create new client instances for you.
You may also want to have a look at the ICS/Midware combo: http://www.overbyte.be/
So I have a connection pool set up, which is great since I have an application that really needs it. However, what I would like to know is whether it is possible to share this connection pool with other J2SE apps. Would this even be worth it, as opposed to creating a connection pool based on each app's needs? If it would be prudent, how can I accomplish this?
It is not hard to have connection pools in a single JVM doing multiple things - that is what application servers do every day (using JNDI to pass objects across classloaders).
The interesting part is when the connection pool lives in a separate JVM from the client code that needs it, because you can no longer simply ask the pool for a connection, use it, and return it afterwards.
Basically you have two options:
Doing remote requests for all your JDBC commands over the network. This will most likely mean that the data will travel over the network twice, from the database to the connection pool, and then from the connection pool to your application. If the database connections are very expensive objects then this might be a viable solution.
Use RMI to get the connection object from the connection pool JVM to your own machine. This is a very expensive operation, but as far as I know it can include the actual driver classes, allowing your connection pool to provide connections to databases not known to your application JVM. To me this would only make sense if the database connections were ridiculously expensive, or if it was a requirement to be able to support additional databases after deployment without changing the original deployments.
Note that the primary reason for having connection pools at all is that connections are expensive to create, use briefly and then discard. This is truer for some databases than others; MySQL connections, for example, are (or were when I tried) very cheap, so it might be simplest just to create them directly.
So, first of all: measure what your connection pool buys you in time, and then consider whether it is worth your while to centralize this further.
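The advice to measure first applies to any stack. Purely as an illustration, reusing the node-postgres client from the earlier question, here is a rough TypeScript sketch of that measurement: time a brand-new connection against a checkout from an already-warm pool and compare the two. The connection string is a placeholder; in a J2SE application you would time the equivalent calls against your own pool.

```typescript
// measure-pooling.ts - rough sketch; absolute numbers depend entirely on
// your database and network, which is exactly why you should measure.
import { Pool, Client } from 'pg';

const connectionString = 'postgres://user:secret@localhost:5432/mydb'; // placeholder

async function timeFreshConnection(): Promise<number> {
  const start = Date.now();
  const client = new Client({ connectionString });
  await client.connect();               // full TCP + auth handshake every time
  await client.query('SELECT 1');
  await client.end();
  return Date.now() - start;
}

async function timePooledCheckout(pool: Pool): Promise<number> {
  const start = Date.now();
  const client = await pool.connect();  // usually just hands back an idle socket
  try {
    await client.query('SELECT 1');
  } finally {
    client.release();
  }
  return Date.now() - start;
}

async function main() {
  const pool = new Pool({ connectionString, max: 5 });
  await pool.query('SELECT 1');         // warm the pool before measuring

  console.log('fresh connection:', await timeFreshConnection(), 'ms');
  console.log('pooled checkout :', await timePooledCheckout(pool), 'ms');

  await pool.end();
}

main().catch(console.error);
```

If the fresh-connection figure is only a few milliseconds (as the answer notes is often the case with MySQL), centralizing a pool across applications is probably not worth the extra moving parts; if it is hundreds of milliseconds, the remote-pool options above start to look more attractive.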