I'm searching for information on database connectivity (Firebird in my case) with IntraWeb Applications.
I especially need to know the difference(s) between using the database on a TDataModule accessed through the LockDataModule function, and using the database on the UserSessionUnit.
For example, I need the database to be completely disconnected when no users are using the server; at most 30 users will be connected.
In the worst case, I may have to connect to an old Paradox database, and I need a structure that can handle that (I know I'll have to generate a folder based on WebApplication.AppID to handle sessions).
Thanks in advance for any information or useful links you can provide ^^
Scenario 1 - You leave "Pool Data Connections" unchecked in the IntraWeb Application Wizard
In this scenario the wizard creates a ServerController, a UserSession but not a DataModule. You place your database, session and dataset components on the UserSession.
Whenever a new user connects to your website a new instance of the UserSession is created and a connection to the database is made. When the ServerController.SessionTimeOut expires due to user inactivity the UserSession is destroyed and that particular connection to the database is severed.
For 30 concurrent users this model will probably be fine for you and should guarantee that all database connections will be severed when the website is not in use.
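As a rough sketch (assuming IBX components dropped on the wizard-generated UserSession; the component and event names below are illustrative, not prescribed), the per-session connection amounts to:

procedure TIWUserSession.IWUserSessionBaseCreate(Sender: TObject);
begin
  // One connection per user session, opened when the session starts.
  IBDatabase1.DatabaseName := 'myserver:C:\data\mydb.fdb'; // placeholder path
  IBDatabase1.Connected := True;
end;

procedure TIWUserSession.IWUserSessionBaseDestroy(Sender: TObject);
begin
  // Closed when the session times out and is destroyed, so nothing
  // stays connected while the site is idle.
  IBDatabase1.Connected := False;
end;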
Scenario 2 - You check "Pool Data Connections" in the IntraWeb Application Wizard
As well as the ServerController and the UserSession, the wizard will create an empty DataModule. You place your database, session and dataset components on the DataModule.
The ServerController has a TIWDataModulePool component on it, which has a PoolCount property.
When your application starts, it creates PoolCount instances of the DataModule, each of which makes a connection to the database. As your pages require database access, they call LockDataModule and UnlockDataModule to temporarily make use of one of the DataModule instances from the pool, as sketched below.
When your application closes, the DataModule instances in the pool are destroyed and their connections to the database are closed.
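A rough sketch of how a page uses the pool (the Lock/Unlock helper names follow the IW wizard's generated DataModule unit; your generated signatures may differ slightly):

procedure TIWForm1.IWButton1Click(Sender: TObject);
var
  lDataModule: TDataModule1; // the wizard-generated pooled module class
begin
  lDataModule := LockDataModule; // borrow an instance from the pool
  try
    lDataModule.IBQuery1.Open;   // use its already-open connection
    // ... read results, bind to controls ...
    lDataModule.IBQuery1.Close;
  finally
    UnlockDataModule(lDataModule); // always return it, even on exceptions
  end;
end;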
This model is appropriate when having an open database connection per user would exceed the capabilities of your database server. For just 30 users connecting to a Firebird database, I don't believe it would be required.
You may want to consider a component set like kbmMW from http://www.components4programmers.com/. I have used it for years with desktop apps and now with IW apps. I deploy my apps as services, though I'm currently having a few issues deploying as an ISAPI. kbmMW is well suited to an app with lots of connections since it offers connection pooling, among many other features and benefits. Check out the site for yourself.
I use the Enterprise version, though I think the free version may be useful to you.
Cheers!
-Lou
I have a Delphi-based client-server application. The server is a fairly simple application with a TRemoteDataModule. The client application connects to it via a TSocketConnection.
Everything is working fine on that front.
Is there a way to access the list of currently active connections for the TRemoteDataModule?
The reason I'm asking is that I would like to prevent users from connecting to the same server from more than one machine at a time, so I figured I would keep a list of usernames per connection and check against it whenever a new connection is attempted (or beforehand, client-side, via a COM function).
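Something along these lines is what I have in mind (an untested sketch, with all names mine):

uses
  System.Classes, System.SyncObjs;

var
  Lock: TCriticalSection;
  ActiveUsers: TStringList;

// Returns False if the user is already connected from another machine.
function TryRegisterUser(const UserName: string): Boolean;
begin
  Lock.Acquire;
  try
    Result := ActiveUsers.IndexOf(UserName) < 0;
    if Result then
      ActiveUsers.Add(UserName);
  finally
    Lock.Release;
  end;
end;

// Called when a connection is closed or dropped.
procedure UnregisterUser(const UserName: string);
var
  I: Integer;
begin
  Lock.Acquire;
  try
    I := ActiveUsers.IndexOf(UserName);
    if I >= 0 then
      ActiveUsers.Delete(I);
  finally
    Lock.Release;
  end;
end;

initialization
  Lock := TCriticalSection.Create;
  ActiveUsers := TStringList.Create;

finalization
  ActiveUsers.Free;
  Lock.Free;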
Or should I be using some other mechanism altogether?
As an aside, is there a way to see which threading model and instancing model were used when creating the RDM? I'm not the one who created it, and I don't know where to find that.
Cheers!
Using Delphi XE5, if that makes a difference
I have an application that has been programmed with MVC/EF Code First. It does a lot of server-side processing and is pretty resource-intensive.
I know how to set up load balancing, but I want to know whether scaling an EF application is as simple as provisioning a new server, deploying the application, and pointing it at the DB cluster, or whether there are issues I will face with regards to multiple EF applications hitting the same database server.
I can't seem to find any advice/guides for this, and I am worried I made the wrong choice by choosing EF over something simpler/more straightforward!
... issues ... regards to multiple EF applications hitting the same database server?
Rewind a bit to the fact that your application is an ASP.NET MVC-based application. Having multiple instances of it is probably going to raise the spectre of state management.
MSDN has a pretty good introduction to why this is an issue:
HTTP is a stateless protocol. This means that a Web server treats each HTTP request for a page as an independent request. The server retains no knowledge of variable values that were used during previous requests. ASP.NET session state identifies requests from the same browser during a limited time window as a session, and provides a way to persist variable values for the duration of that session. By default, ASP.NET session state is enabled for all ASP.NET applications.
Alternatives to session state include the following:
Application state, which stores variables that can be accessed by all users of an ASP.NET application.
This is an extremely common way of storing state, but it breaks down when there are multiple instances of an application involved (the state is "visible" to only one of the instances).
Typically this is worked around by using either the StateServer or SQLServer value of SessionStateMode. The same article provides a pretty good summary of each option.
StateServer mode, which stores session state in a separate process called the ASP.NET state service. This ensures that session state is preserved if the Web application is restarted and also makes session state available to multiple Web servers in a Web farm.
SQLServer mode stores session state in a SQL Server database. This ensures that session state is preserved if the Web application is restarted and also makes session state available to multiple Web servers in a Web farm.
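For reference, switching to either out-of-process mode is a web.config change along these lines (host and server names here are placeholders):

<!-- inside <system.web> -->
<sessionState mode="StateServer"
              stateConnectionString="tcpip=statehost:42424" />

<!-- or, backed by SQL Server: -->
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI" />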
If your application is stateless, this is a moot point.
I am worried I made the wrong choice by choosing EF
As far as issues with multiple instances of your application accessing a database go, you would face them with any sort of data access technology.
Here's the basic scenario: let's say your application sends welcome emails to users on a schedule.
Given the table Users:
UserId | Email           | WelcomeLetterSent
-------+-----------------+------------------
1      | user@domain.com | 0
And some pseudo-code:
foreach (var user in _context.Users.Where(u => !u.WelcomeLetterSent).ToList())
{
    SendEmailForUser(user);
    user.WelcomeLetterSent = true;  // changed in memory only at this point
}
_context.SaveChanges();             // nothing is persisted until here
There's a race condition here: instance one and instance two of your application might both evaluate _context.Users.Where(...) before either has had the chance to set WelcomeLetterSent = true and call SaveChanges. In that case, each user might be sent two welcome emails instead of one.
Concurrency can be an insidious thing. There's a primer on managing concurrency with the Entity Framework over here, but this is only the tip of the iceberg.
The answer to your question? It depends on what your application does :)
On top of that, I ideally want to build some "extra" support applications that hook in to the same DB... and, I am just not sure how EF will handle multiple apps to the same DB....
If your application can tolerate multiple instances of itself accessing one database, then it's usually not a stretch to make these "support applications" play nicely. It's not much different whether the concurrency is from multiple instances of one application or multiple applications with one instance each.
UPDATED 2010-11-25
A legacy stand-alone application (A1) is being re-created as a web application (A2).
A1 is written in Delphi 7 and uses a MS Access database to store the data. A1 has been distributed to ~1000 active users that we have no control over during the build of A2.
The database has ~50 tables, some of which contain user data and some of which contain template data (which does not need to be copied); 3-4 of the user tables are larger (<5000 records), the rest are small (<100).
Once A2 is 'live', users of A1 should be able to migrate to A2. I'm looking for a comparison of scenario's to do so.
One option (scenario 1) is to develop a stand-alone 'update' tool for these users, and have this update tool talk to the A2 database through web services.
Another option (scenario 2) is to allow users to upload their Access database (~15 MB) to our server, run some kind of SSIS package (overnight, perhaps) to get the data into A2 for that user, and delete the Access database afterward.
Am I missing options? Which option is 'best'? (I understand this may be somewhat subjective, but hopefully the pros and cons of the scenarios can at least be made clear.)
I'll gladly make this a community wiki if so demanded.
UPDATE 2010-11-23: it has been suggested that a variant of scenario 1 would be to have the update tool/application talk directly to the production database. Is this feasible?
UPDATE 2011-11: By now, this has been taken into production. Users upload the .zip file containing the .mdb, which is unpacked and placed in a secure location. A nightly scheduled SSIS job comes along and moves the data to staging tables, which are then moved into production through stored procedures.
I would lean toward uploading the complete database and running the conversion on the server.
In either case you need to write a conversion program. The real question is how much of the conversion you deploy and run on the customers' computers. I would keep that part as simple as possible, i.e. just the upload. That way, if you find any bugs or unexpected data during the conversion, you can simply update the server and not need to re-deploy your conversion program.
The total amount of data you are talking about is not too large to upload, and it sounds like the majority of it would need to be uploaded in any case.
If you install a conversion program locally, it would need a way to recover from a conversion that stopped partway through. That can be a lot more complicated than simply restarting an upload of the Access database.
Also, you don't indicate there would be any need for the web services after the conversions are done. The effort to put those services together, and to keep them running and secure during the conversions, would be far more than a simple upload application or web form.
Another factor is how quickly your customers will convert. If some of them will run the current application for some time period, you may need to update your conversion application as the server database changes over time. If you upload the database and run the conversion on the server, then only the server conversion program would need to be updated. There would also be no risk of a customer downloading the conversion program but not running it until after the server databases were updated.
We have a similar case where we chose to run the conversion on the server. We built a web page for the user to upload their files; in that case there is nothing to deploy for the new application. The only downside we found is getting the user to select the correct file: if you use a web form for the upload, you can't pre-select the file name for the user because of security restrictions. In our case we knew where the file was located, but the customers did not, so we provide directions on the upload page to help them out. You could avoid this by writing a small desktop application to perform the upload for the users.
The only downside I see to writing a server-based conversion is that some of your template data will be uploaded even though it is not needed. That is a small amount of data anyway.
Server Pros:
- No need to re-deploy the conversion due to bugs, unexpected data, or changes to the server database
- Easier to secure (possibly), since there is only one access point: the upload. Of course you are accepting customer data in the form of an Access database, so you still can't trust anything in it.
Server Cons:
- Unneeded template data is uploaded
Desktop Pros:
- ? I'm having trouble coming up with any
Desktop Cons:
- May need multiple versions deployed
As for talking to a server database directly: I have one application that talks to a hosted database directly to avoid creating web services. It works OK, but given the chance I would not take that route again. The internet connection drops on a regular basis and the SQL providers do not recover very well; we have trained our clients to simply try again when that happens. We did this to avoid creating web services for our desktop application, and we just reference the IP address in the server connection string. There is an entire list of security reasons not to take this route; we were comfortable with our security setup and the possible risks. In the end, the trade-off of using the desktop application with no modifications was not worth having an unstable product.
Since the new database server is likely to be one of the standard database engines in the industry, why not consider linking the Access application to this database server? That way you can simply send your data up to SQL Server.
I'm not really sure why you'd even suggest using a set of web services to talk to a database engine when Access supports an ODBC link to that engine. One potential upgrade path would be to simply issue a new Access application that is placed in the same directory as the users' current data file (and application). On startup, this application can re-link all of its tables to the existing database, and it can come with a pre-linked set of tables pointing to the database server. This is going to be far less work than building up some type of web services approach. I suppose part of this centers around where the database server is going to be hosted, but in most cases, during the migration period, you can have the database server running somewhere everyone can reach, and a good many web providers allow external links to their databases now.
It's also not clear whether you're going to create a separate database on the server for each user, or, as your title suggests, place everything into one database. If it's all going into one database, then during the upsizing you'll need to add a column that identifies the user, location, or however you plan to distinguish each user's set of data.
How easy this type of migration will be depends on the schema and database layout the developers are using for the new system; hopefully it has provisions for each user or location or however you plan to distinguish each individual user of the system. So I don't suggest web services, but I do suggest linking tables from the Access application to the instance of SQL Server (or whatever server you run).
How best to do this will depend on the referential integrity and business rules that must be enforced, if there are any. For example, is there the possibility of duplicates when the databases are merged? I gather they are being merged from your somewhat cryptic statement: "And yes, one database for all, aspnet membership for user id's".
If you have no control of the 1000+ users of A1, how are you going to get them all to convert to A2?
Have you considered giving them an SQL Server Express DB to upgrade to, and letting them host the Web App on their own servers?
I’m researching EF4 for a new in-house application development project using .NET v4, EF4, & SQL Server 2008 R2. To date, our small dev team has done very little .NET development and only demonstration EF applications. Our current applications use DB app-roles for security and that's worked out well for us.
From reading and some basic experimentation, my understanding is:
EF can open and close DB connections as needed. However it is possible to manually open and close an EntityConnection for use by the EF ObjectContext.
SQL Server app-role security requires running sp_setapprole on a DB connection to set the application role context. sp_unsetapprole can be used to revert the connection to its original context; it requires the cookie that sp_setapprole returns when called with @fCreateCookie = true.
By default, DB connections are pooled. Using sp_setapprole with connection pooling can be problematic if the connections are not restored to their original context before being returned to the pool.
If all the above is correct then the obvious way to use EF4 with app-roles is to manually open & close the EntityConnection, being sure to execute sp_setapprole after opening and sp_unsetapprole before closing.
Is there a better way? I'm mostly concerned about accidentally closing the connection without first calling sp_unsetapprole. Seems like an error that may not be noticed immediately.
You can just add "Pooling=false;" to your store connection in app.config (in the Provider Connection String part). If you don't actually need pooling, this seems to be the simplest solution.
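For example, a hypothetical EF store connection in app.config with pooling disabled (model, server and database names are placeholders):

<add name="MyEntities" providerName="System.Data.EntityClient"
     connectionString="metadata=res://*/Model.csdl|res://*/Model.ssdl|res://*/Model.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;Pooling=false&quot;" />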
I hate to re-invent the wheel, so I'm looking for an existing solution to create a simple authentication system for my application. I've experimented for a while with using CardSpace or OpenID inside the application, but I can't convince management that these would be working solutions.
Of course, I could just build a simple login dialog where username, domain and (hashed) password are stored inside a database table; I've done such a thing many times already. I hate this solution since I feel it's just a weak option. And I don't want to spend too much time trying to make the whole logon system as secure as possible, especially since I suspect that existing solutions for this should already be available.
So, next to OpenID/OpenAuth and CardSpace, are there any other Authentication solutions that can be used from a Delphi/WIN32 application?
Right now, the application will be used by many customers. Most are single-user environments, although it's likely that some of those will grow to two to five users once this authentication system is added. But we also want to support a customer who needs to allow about 500 different users on the same application. These are spread over about 100 offices, but they all connect to the same SQL Server database. (MS Access right now, but we're making it possible for this customer to use SQL Server instead.) To make matters even more interesting, the customer uses Citrix to centralize the user systems, and the application has direct access to the SQL Server database.
It's not an ideal setup, but then again, the customer isn't really paying for this. We're just setting up a test environment, a proof-of-concept which the customer will test for us. Flaws will be solved later on. But right now I need quick solutions, and one of them is a practical authentication system where I don't have to write a lot of code.
Have you considered using SQL Server authentication, and not allowing this authentication for those using an Access database?
If you use the new SQL Server Native Client and SQL Server 2005, you can have passwords expire and let users change them from your client application. All of the tools to create and manage user accounts are built into SQL Server Management Studio. And if you decide later to support Windows Authentication, you just need to modify your connection string.
We have a system where users on the network use Windows Authentication, so they don't need to worry about another user name and password. Users that access the system via a VPN or from non-domain-joined machines use SQL authentication.
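For illustration, the two cases differ only in the ADO connection string. A rough Delphi sketch (server and database names are placeholders; SQLNCLI is the SQL Server 2005 Native Client provider):

uses
  ADODB, SysUtils;

procedure ConnectToDatabase(AConn: TADOConnection; const AUser, APass: string);
begin
  if AUser = '' then
    // Windows Authentication for domain-joined machines
    AConn.ConnectionString :=
      'Provider=SQLNCLI;Data Source=MyServer;Initial Catalog=MyDb;' +
      'Integrated Security=SSPI;'
  else
    // SQL Server Authentication for VPN and non-domain machines
    AConn.ConnectionString := Format(
      'Provider=SQLNCLI;Data Source=MyServer;Initial Catalog=MyDb;' +
      'User ID=%s;Password=%s;', [AUser, APass]);
  AConn.Open;
end;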
Here is the MSDN Page that talks about dealing with passwords programmatically in SQL Server 2005
You do need to make sure that SQL Server Native Client is installed, but that is simple compared to the rest of ADO.
I would suggest, then:
- Delphi - since you are using Delphi :)
- Open source - since you need to be able to figure out what is wrong if there is a problem, and you probably want it cheap.
So, here are some solutions:
http://www.torry.net/pages.php?id=313
CoWindowsAccount v.1.0
SSecurity v.1.2.1.3
http://free-password-manager-plus.software.informer.com/1.6/
Those might work for your purposes, but why not ask Windows for the current domain and user name, and use them as a unique ID? Windows has already done the authentication, and it saves the users from making up new passwords or anything. I've used this to good effect. I also made it optional to include the machine name in the ID, so that the same user on different computers would also be unique; a sketch follows.
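A minimal sketch of that idea in Delphi (untested; the function name is mine and error handling is kept simple):

uses
  Windows, SysUtils;

// Builds an ID like DOMAIN\user, or DOMAIN\user@machine when requested.
function CurrentUserId(IncludeMachine: Boolean): string;
var
  Buf: array[0..255] of Char;
  Len: DWORD;
begin
  Len := Length(Buf);
  Win32Check(GetUserName(Buf, Len));
  Result := GetEnvironmentVariable('USERDOMAIN') + '\' + Buf;
  if IncludeMachine then
  begin
    Len := Length(Buf);
    Win32Check(GetComputerName(Buf, Len));
    Result := Result + '@' + Buf;
  end;
end;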