How many connections can Oracle Express Edition (XE) handle? - oracle-xe

I'm building a web application that uses Oracle Database 10g as the database backend. I realize the Express Edition has limitations, but I just wanted to make sure that the number of connections wasn't one of them.
Does Oracle Express Edition (XE) limit the amount of concurrent connections (for example, the number of users viewing the site)?

According to:
http://forums.oracle.com/forums/thread.jspa?messageID=1673347
(among others)
There's no hard-coded limit, but there is an effective default limit of roughly 20 concurrent connections. You can extend that with something like:
ALTER SYSTEM SET processes=200 scope=spfile
(and restart the DB)
In practice you'll probably either hit other XE limits well before 200 connections, or you should have been using a simpler DB to begin with.
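To see how close you are before changing anything, you can query the current limit and its high-water mark (v$resource_limit is a standard view; the values shown are whatever your instance reports):

SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit
 WHERE resource_name IN ('processes', 'sessions');

Note that in 10g the sessions parameter is derived from processes (roughly 1.1 * processes + 5 by default), so raising processes also raises the effective session cap.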

There are some benchmarks for XE 10g here:
http://ora-00001.blogspot.com/2011/03/stress-testing-oracle-xe-10g.html

The number of connections you can sustain depends on RAM and other system resources.

Related

Firedac multiple pools of connections

I'm in the process of upgrading our client-server ERP app to multi-tier. We want to offer our customers the possibility of having their databases in the cloud (hosted on our server). So, the clients are written in Delphi, the server is an HTTP IOCP server also written in Delphi (from the mORMot framework), and for the DBs we use Firebird Embedded.
Our customers (let's say 200) can have 25-30 Firebird databases each (5000-6000 databases in total), accessed by 4-5 users per customer. This does not all happen at once: one user can work in one DB while two other users work in another, but all the DBs should be available and online. So I can have 800-1000 users working in 700-900 DBs. The databases are not big, typically 20-30 MB each, but they can grow to 200 MB.
This is not data sharding, so please don't suggest merging all the databases together; I really need them individually, with the possibility to backup/restore/replace each one of them.
So, I need multiple pools of connections: for every database, a pool of, let's say, 2 connections. I have read about FireDAC connection pooling, and it seems that TFDManager should be perfect for me: I define multiple "ConnectionDef"s with "Pooled=true" and it can maintain multiple pools of connections (each connection being released after some minutes of inactivity).
Questions:
Do I have to create all "ConnectionDef"s before the server starts serving requests?
Can TFDManager "handle" requests (and time out connections on inactivity) while, in another thread, I create a new database, meaning I need to create a new pool of connections and start serving requests from the newly created database? Practically: can I call FDManager.AddConnectionDef(..) while other pools are in use?
AFAIK Firebird Embedded does not have any "connection". As its name states, it is embedded within the same process, so no connection pool is needed. Connection pools are needed when several clients connect/disconnect to the same DB over the network, whereas here everything is embedded and you have direct access to the Firebird engine.
As such, you may:
Define one "connection" per Firebird embedded database;
Protect your SOA code via a mutex (aka critical section) per DB, as in the sketch after this list. In fact, mORMot's HTTP IOCP server runs incoming requests from a thread pool, so you must ensure that all DB access is made thread-safe.
Ensure you use at least Firebird 2.5, since the embedded version is reported to be thread-safe only from version 2.5 onward (see the release notes).
Instead of FireDAC, consider using ZDBC/Zeos (in the latest 7.2/7.3 branch), which has nice features, together with mORMot's native SynDB libraries.
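As an illustration of the mutex-per-DB idea, here is a minimal sketch (the names are mine, not mORMot's): one lazily created TCriticalSection per database file, with the map itself guarded by a master lock.

uses
  System.Generics.Collections, System.SyncObjs;

var
  DBLocks: TObjectDictionary<string, TCriticalSection>; // DB file -> its lock
  DBLocksGuard: TCriticalSection;                        // protects the map itself

function LockFor(const ADBFile: string): TCriticalSection;
begin
  DBLocksGuard.Acquire;
  try
    if not DBLocks.TryGetValue(ADBFile, Result) then
    begin
      Result := TCriticalSection.Create;
      DBLocks.Add(ADBFile, Result); // the dictionary owns the lock objects
    end;
  finally
    DBLocksGuard.Release;
  end;
end;

// in each service method:
LockFor(DBFile).Acquire;
try
  // execute the SQL against the embedded engine
finally
  LockFor(DBFile).Release;
end;

initialization
  DBLocks := TObjectDictionary<string, TCriticalSection>.Create([doOwnsValues]);
  DBLocksGuard := TCriticalSection.Create;

Note the caveat further down in this thread about a bug that affects many TCriticalSections allocated at once.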
Looking at the FireDAC sources, it seems that everything involved in adding connection definitions and acquiring connections in pooled mode is thread-safe.
Adding a connection definition (or matching one) is guarded by a TMultiReadExclusiveWriteSynchronizer, and acquiring a connection from the pool is guarded by a TCriticalSection.
So, answers:
I don't have to create all "ConnectionDef"s before the server starts serving requests.
Yes, I can safely call FDManager.AddConnectionDef(..) while other pools are in use.
Using FireDAC, acquiring a connection for any of those databases will be guarded by one TCriticalSection. The solution proposed by @Arnaud Bouchez offers finer-grained access by creating one TCriticalSection per database, and I think it will scale better, but you should be aware of a bug affecting multiple TCriticalSections, especially when they are all allocated at once:
https://www.delphitools.info/2011/11/30/fixing-tcriticalsection/
That article presents a very simple fix for this bug.
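To make the second answer concrete, here is a minimal sketch of registering a new pooled definition at runtime (the pooling parameter names are FireDAC's; the helper name and values are my own assumptions):

uses
  System.Classes, FireDAC.Comp.Client, FireDAC.Stan.Def,
  FireDAC.Stan.Pool, FireDAC.Phys.FB;

procedure AddPooledDB(const ADefName, ADBFile: string);
var
  Params: TStringList;
begin
  Params := TStringList.Create;
  try
    Params.Add('DriverID=FB');           // Firebird driver
    Params.Add('Database=' + ADBFile);
    Params.Add('Pooled=True');           // enable pooling for this definition
    Params.Add('POOL_MaximumItems=2');   // ~2 connections per database, as above
    // Safe to call while other pools are serving requests
    // (guarded internally by the multi-read/exclusive-write lock noted above):
    FDManager.AddConnectionDef(ADefName, 'FB', Params);
  finally
    Params.Free;
  end;
end;

A request handler then simply creates a TFDConnection, sets its ConnectionDefName to ADefName and connects; closing the connection returns it to that definition's pool.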

Weird three minutes delay for TSQLConnection.Connect with InterBase 7.5

Environment:
Delphi 2009 client applications (and one Java application), running on a Windows 2003 server,
connecting to InterBase 7.5.1 (on another Windows 2003 server) over dbExpress
The Delphi applications log the time to open the TSQLConnection using the AfterConnect event handler of the TSQLConnection object.
At random intervals, the connect needs three minutes of "extra time". I first suspected a problem with the SQL query, but more detailed logging today showed that it is SQLConnection.Connect that hangs.
I am not sure whether this could be a problem with the network, the InterBase server, or the Delphi/dbExpress layer.
Has anybody experienced a similar three-minute "hang"?
p.s. the Java application does not log connect time, so I cannot say whether it is affected by this problem.
This phenomenon has appeared in the log files ever since we started logging in 2012, but the rate has sharply increased in the last month. The only environment change has been the addition of new Windows servers (for RDP services, mail, and fax), so it could be a network-related problem.
Aside from a possible network problem, the cause of the delay can be that, from time to time, your query triggers a garbage collection in one of the table(s) it is querying.
I don't know InterBase 7.5 internals in detail, but in my experience (with Firebird) this usually happens when a SELECT is made on a table from which many records have been deleted or updated recently.
This doc at IBExpert.net explains it:
A garbage collection is only performed during a database sweep, database backup or when a SELECT query is made on a table (and not by INSERT, ALTER or DELETE). Whenever Firebird/InterBase® touches a row, such as during a SELECT operation, the versioning engine sweeps out any versions of the row where the transaction number is older than the Oldest Interesting Transaction (OIT). This helps to keep the version history small and manageable and also keeps performance reasonable.
A periodic sweep or backup, made during low-usage hours, can increase performance and minimize the risk of being hit by an inconvenient garbage collection. See Sweep interval and automated housekeeping (page 6-20) and Facilitating garbage collection (page 11-19) in the InterBase 7.5 Operations Guide for more info on this.
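For example, with the command-line tools (the gfix syntax below is shared by InterBase and Firebird; credentials and paths are placeholders):

gfix -sweep -user SYSDBA -password masterkey server:/path/to/db.gdb
gfix -h 20000 -user SYSDBA -password masterkey server:/path/to/db.gdb

The first command runs an immediate manual sweep; the second sets the automatic sweep interval to 20,000 transactions.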
Please check whether hard disk power saving is activated on any disk of the mentioned servers. That would explain a delay on the first connect and no delay on the following connections: after a while, power saving kicks in again and the same problem arises.
Since the rate has increased with the addition of new servers on the network, you could have packet loss and a long retry timeout. To test that hypothesis, you can change the connection timeout to a small value. You can also monitor the network traffic between the servers using wireshark or tcpdump.
Monitoring
To monitor the TCP handshake only, you can use:
tcpdump -i eth0 'tcp[13] & 2 = 2'
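To watch only traffic on the InterBase port (3050 by default; adjust the interface and port to your setup), a filter along these lines should work:

tcpdump -i eth0 'port 3050 and tcp[tcpflags] & (tcp-syn|tcp-rst) != 0'

This shows connection attempts and resets, which makes retries caused by packet loss stand out.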

Is DataSnap Optimized for responding to more than 1k users at the same time?

We want to start a big multi-tier application. The server-side application must respond to more than 1000 users at the same time. We want to build the server application with the 64-bit compiler and the client side with the 32-bit one. In this case, we don't know whether DataSnap can serve all the clients without any problem.
The server computer is very powerful (multi-processor, more than 16 GB of RAM) and the database management system is Firebird 2.5.
You need a way to perform realistic load tests.
For the Firebird database, you can simulate concurrent users with the free Apache JMeter tool. It can run SQL statements and record their execution-time statistics (average, min/max, etc.). So you could, for example, create a thread group with twenty different SQL queries, and then run twenty threads, each of which performs these queries sequentially.
JMeter allows you to define time limits on a SQL query and treats it as an error if the query exceeds the limit. You can then try to find the maximum client count at which the overall error rate is still below (for example) five percent.
But you also need to know how high the expected database load will be, and you will need a test database of realistic size, not just a couple of records. Also, some database queries, such as reports, may cause higher load; these should be included in the simulation too, as they can affect overall performance. In JMeter, you can create a second thread group, running in parallel with the first, for these long-running statements with different settings (fewer simulated clients).
Testing the database will show whether there is already a bottleneck in this area. For example, the test result could be that the database can serve twenty clients with a total average transaction rate of 20 TPS (transactions per second), meaning each client executes one transaction per second. But this TPS value will decrease as the user count grows.
Related question: Firebird usage in big projects, which also has a link to http://www.firebirdsql.org/en/case-studies-catalog/
Regarding DataSnap client load simulation: this can be done with a scripted client, which runs a predefined set of statements / commands over the connection.
To run a high number of load-test clients simultaneously, you could use a service like Amazon Elastic Compute Cloud (EC2) to launch clones of your test machine image, saving hardware costs. But of course I would start with a small client machine which simply runs ten or twenty scripted clients.
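A minimal sketch of such a scripted-client driver, assuming Delphi XE or later (RunScriptedSession is a placeholder for your own DataSnap connect-and-call sequence; the rest is standard RTL):

uses
  System.SysUtils, System.Classes, System.SyncObjs, System.Diagnostics;

procedure RunLoadTest(AClientCount: Integer);
var
  Done: TCountdownEvent;
  i: Integer;
begin
  Done := TCountdownEvent.Create(AClientCount);
  try
    for i := 1 to AClientCount do
      TThread.CreateAnonymousThread(
        procedure
        var
          SW: TStopwatch;
        begin
          SW := TStopwatch.StartNew;
          try
            RunScriptedSession; // placeholder: connect + predefined calls
          finally
            WriteLn(Format('client done in %d ms', [SW.ElapsedMilliseconds]));
            Done.Signal;
          end;
        end).Start;
    Done.WaitFor; // block until all simulated clients have finished
  finally
    Done.Free;
  end;
end;

Collecting the per-client timings into percentiles then gives you the same kind of pass/fail criteria as the JMeter approach above.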
As far as I know, DataSnap is based on Indy, and Indy's connection handling model is not very scalable: one thread per connection, which is very resource-consuming. Even using Indy's thread pools is not an option, I think. Also, in 32-bit Windows there is a limit on the maximum number of threads you can create (2000, IIRC). Anyway, using many threads is not good and hurts server performance (for reference: the Windows Internals book, the Windows Performance Team blog, etc.).
A scalable, robust and professional application server would use I/O completion ports (IOCP) for data processing, but I don't know whether DataSnap can take advantage of them.
UPDATE:
At CodeRage 7 I asked similar scalability questions. Here are the answers:
Q: Recently there was a question on StackOverflow about DataSnap's scalability/performance. So can DS handle, for example, 2000 or more concurrent user requests at the network and application level?
A: The scalability is based on the scalability of TCP/HTTP/HTTPS and the number of connections allowed by your server operating system, and also on the memory and hardware you employ. There is no specific limit in DataSnap.
My comment: while this is true, Indy's connection handling model, i.e. one thread per connection, introduces a bottleneck, especially on 32-bit Windows (2000 threads max). On Win64 it should be less of a problem, but this way of handling data flow still leads to performance degradation.
Q: Does DataSnap support some kind of load balancing?
A: Not directly. You can do this in code in your DataSnap server(s).
My comment: I've found a very good paper on implementing failover/load balancing in DataSnap on Andreano Lanusse's blog.
Q: Does DataSnap support I/O completion ports for better scalability?
This question of mine was left unanswered.
Hope this helps!
UPDATE2:
I found a very interesting post on DataSnap performance: DataSnap analysis based on Speed & Stability tests.
UPDATE3:
DataSnap, Deployment, Performance, and More (Marco Cantu)
Monitoring and control of connections in DataSnap XE2 - translated into English
Monitoring and control of connections in DataSnap XE2 - original
When the specifications for a system are drawn up, you need to be very precise when it comes to multiple users.
For example: you create a website, and the client expects 15,000 unique users.
Then the client usually comes up with the requirement that the system must support 15,000 simultaneous users, which is very naive.
You'll need a more detailed specification than that.
Usually it's more sensible to specify something like: for 99% of the requests, 99% of the users get a response within an average of 5 seconds.
In normal usage, you'll never see all users send a request within the same second. If at some point they all arrive within the same minute (also very unlikely), you'll have far fewer concurrent users.
Even for websites with tens of thousands of users, where most of them connect on a daily basis, the web server is idle most of the time, and once in a while the load jumps to 5%, or in extreme cases to 20%. If we really had to serve all of these users at once we'd be screwed, but that never happens, and it's not realistic to prepare a server for such loads.
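A rough, illustrative calculation (all numbers assumed): if 15,000 users each make 30 requests per day, that is 450,000 requests per day, or about 5 per second on average; even a peak hour carrying 20% of the daily traffic averages only about 25 requests per second. With a 200 ms service time, Little's law (concurrency = arrival rate x service time) puts that at roughly 5 requests actually in flight at once, orders of magnitude below 15,000 "simultaneous users".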

experiences with firebird server over the internet with multiple clients?

Does somebody have real experience with Firebird databases over the internet?
I have a typical Windows accounting/ERP application (done with Delphi) that works pretty well with the Firebird database server. Now my users (approx. 300 at the moment, but the number should grow) also want to work "in the cloud" (connecting from the office, from the laptop, from home, etc.). It is a lot of work to recreate everything as a standard web application (say, HTML+CSS+JS+PHP+MySQL), so I'm considering keeping the Windows client (I don't care about other OSes) but, instead of the server living in the clients' LANs, moving it to a pair of dedicated servers that I will contract (one primary and one secondary against failures, to start with).
Searching, I've come across this FAQ http://www.firebirdfaq.org/faq53/ which explains that the FB protocol isn't ideal for working over the internet; still, all my users today have at least a 1 Mbit/s ADSL connection (which I don't consider slow in the sense the FAQ describes).
Has somebody done this? What was the experience? How secure are FB servers when open to the internet? How well do they scale?
I know that building a "middleware" with SOAP, for example, would be more conventional, but the solution I'm evaluating here is much faster and easier (I still have some work to do with replication, backup, and heartbeat services, but it's much less than redoing everything for the web).
Thanks! Edit: FB version: 2.5.
I have been trying to "push" the Firebird core developers to improve the Firebird protocol to get better speed on high-latency networks (a.k.a. the internet). Recently, Dmitry Yemanov published some articles on this subject in his blog (dyemanov.blogspot.com). It seems there is room for optimization, and I would really like to see it arrive in FB 2.5.3 and FB 3.0, although there is no guarantee of it happening in those versions or anytime soon. You can vote for such an improvement here: http://tracker.firebirdsql.org/browse/CORE-2530
Safety? You may try to set up a VPN. It may also help with speed, since most of the VPN software out there (Zebedee, etc.) can compress the data being transferred, which helps to speed up data transfer in some cases.
Some of my customers do use traditional Firebird C/S over the internet. It is much slower compared to a local network and, of course, how much slower depends basically on the link speed and latency. You can do some optimization on the client side too, using a metadata cache, etc., but don't expect miracles from the current protocol. I would say that for whole-day working, using Terminal Services would be a better option for now.
Regarding the scaling question: Firebird runs in production on big-iron servers (512 GB of RAM, 100,000 concurrent users):
"We run Firebird to power larger systems (for 12 government agencies and 3 banks). It has approximately 100,000 end users multiplexed through 2,500 (max) pooled connections."
https://plus.google.com/111558763769231855886/posts/Q1ACy1yyTgP
The protocol in Firebird 2.5 is improved (there is still room left for 3.0), but you can check what has already been done:
http://asfernandes.blogspot.com/2009/07/network-latency-influence-on-firebird.html
And the future enhancements in 3.0
http://www.firebirdnews.org/?p=6953
To protect your connection, I guess the best bet is an SSL/SSH tunnel (it can be OpenVPN) with the high-compression option:
http://mapopa.blogspot.com/2010/11/securing-firebird-using-ssh-tunnel.html
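As a plain illustration (host names and paths are placeholders; 3050 is Firebird's default port), an SSH port forward with compression enabled could look like:

ssh -f -N -C -L 3050:127.0.0.1:3050 user@db.example.com

The client then connects to localhost:/path/to/db.fdb as if the server were local, with the tunnel handling encryption and compression.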
The FB protocol problem isn't about bandwidth, but latency. In my experience, some operations can be very slow over the internet/VPN compared to a LAN or a local connection. I haven't examined the issue further, since I don't really run applications over internet connections.
However, I suggest a three-tier model for the application: create your own application server, which runs on the database server or on the same network, and let the clients talk to the application server; that way you get maximum performance.
There are some N-tier application/middleware frameworks for Delphi:
RemObjects SDK and DataAbstract
RealThinClient
kbmMW
Delphi's own DataSnap
MidWare
With those you can get data compression, encryption, binary messages (faster than SOAP), etc.
You can implement TCP/IP packet encryption/decryption directly in the Firebird engine itself.
Personally, I downloaded the Firebird 2.5 source code and injected secure tunneling code directly into its low-level communication layer (the INET socket layer). Now encryption/decryption is done directly by the Firebird engine for each TCP/IP packet, both on the server side and on the client side (fbclient.dll).
There is then no need to restructure the client application, apart from adding one line of code that provides the secret key you chose for encrypting communication to the fbclient.dll. The same secret key must be declared in the firebird.conf file of your server installation.
I have also implemented a proxy negotiation solution in the fbclient.dll, in order to allow the TCP/IP packets to pass through any proxy server (like Microsoft ISA Server, for example).
For us, this architecture has been working in a real production system for more than a year.
kbmMW CodeGear Edition is free, but comes without source. It can be used for commercial apps.
Download it after registering at: https://portal.components4developers.com
In case you see certificate errors (you shouldn't, but I know some people actually do), accept and ignore them. The site is valid despite the certificate error.
kbmMW CodeGear Edition contains a subset of kbmMW Professional Edition, but supports the following Delphi database API's:
Borland Database Engine
DBExpress
kbmMemTable
SQLite3
It supports binary, binary over HTTP, XML, and SOAP protocols when communicating with clients.
It contains everything you need, including:
unified remote custom method invocation
unified remote dataset query, execute and data change resolving
unified database meta data handling and creation (tables, fields, indexes, generators/sequencers)
optional automatic proxying of requests to another server and proxying results back to original requester
full native XML DOM and SAX support
full dataset briefcase support as CSV, or binary data
advanced but simple to use wizard for creating new application server services
There is one caveat, though: the newest version of kbmMW CodeGear Edition always supports only the newest Delphi version. You can still download older kbmMW CodeGear Editions matching older Delphi releases.
kbmMW Professional Edition and kbmMW Enterprise Edition do not have such limitations, and currently support D7, D2006, D2007, D2010, DXE, and DXE2, along with their Embarcadero C++ counterparts.
best regards
Kim Madsen
www.components4developers.com

What size is considered 'big' for the tbl_Version table in the TFS_Main database

We are currently experiencing significant waits on the TFS database and are trying to understand whether they are a consequence of the size of the tbl_Version version-history table.
Currently this table contains just over 20 million records and takes up approximately 6 GB of storage (total index space is just over 10 GB). Looking at the queries SQL Server has to deal with, we see high PAGEIOLATCH_SH waits whenever this table is accessed. Obviously we have no control over the queries being thrown at the database (that's all part of TFS).
Currently we have TFS on a virtual machine, and in essence we want to understand whether we should (a) move to a physical machine, (b) attempt to reduce the size of tbl_Version, or (c) do a combination of both.
In our organisation it would be non-trivial to move to a physical server, so I'd like to get a feel for whether our table sizes are 'normal' before making any such decision.
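(For anyone wanting to compare: figures like these can be obtained with the standard stored procedure, e.g.
EXEC sp_spaceused 'tbl_Version';
run against the TFS_Main database.)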
PAGEIOLATCH_SH typically indicates a wait for a page to be loaded from disk into memory. From the sound of it, tbl_Version is not being kept in memory. There are two things you can do to improve the situation:
a. Get more RAM (not sure how much RAM you have on the server).
b. Get a faster disk subsystem.
In TFS 2010 we enable page compression if you have the Enterprise Edition of SQL Server. This should help with the problem.
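To check whether compression is actually in effect on the table, a read-only query against the standard catalog views should suffice (don't modify the TFS schema yourself unless Microsoft guides you to):

SELECT o.name, p.index_id, p.data_compression_desc
  FROM sys.partitions p
  JOIN sys.objects o ON o.object_id = p.object_id
 WHERE o.name = 'tbl_Version';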
Based on some 2007 stats from Microsoft (http://blogs.msdn.com/b/bharry/archive/2007/03/13/march-devdiv-dogfood-statistics.aspx), probably not the biggest.
But MS (as documented on that blog) had done some DB tuning; this, I believe, is in TFS 2010, but for earlier versions you'll probably need to talk to MS directly.
Caveat: We're using TFS 2008.
We're currently sitting at about 9 GB of data (18 GB of index) with 31M rows. This is after about a year and a half of usage in an IS shop with 50-60 active developers.
Part of our problem, which we still need to address, is large binaries stored in the version control system. The answer to my question here may provide some information as to whether a few major offenders are causing the size of that table to be bigger than you want.
