How to apply migrations for a website across all tenants? - asp.net-mvc

I am about to build a multi-tenant web application, where each client (tenant) is in a separate database, but the web application is the same across tenants.
Therefore I am looking for a good strategy for applying Entity Framework migrations to the tenants when the web application gets updated.
I can't figure out whether it's best to create a service which upgrades all the clients at once, or to have every client get upgraded on the fly when they sign in to the web application for the first time. Or there might be other, simpler solutions?

I have a piece of code that does something like this.
In my case it runs as part of our CD process, which triggers a job (Hangfire) that updates all our databases for our multi-tenant scenario.
In our case, this enforces that code updates get deployed every time along with the database migrations.
The overhead of running a migration job over already-migrated databases is minimal, so there's no harm in running it after each deployment.
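For reference, a minimal sketch of what such a job can look like with EF6 code-first migrations (the Configuration class and the source of the tenant connection strings are assumptions about your setup):

    // Hedged sketch: apply pending EF6 migrations to every tenant database.
    // Assumes a scaffolded DbMigrationsConfiguration<YourContext> named
    // Configuration, and tenant connection strings read from a master store.
    using System.Collections.Generic;
    using System.Data.Entity.Infrastructure;
    using System.Data.Entity.Migrations;

    public class TenantMigrationJob
    {
        // Enqueue from your CD pipeline, e.g.:
        // BackgroundJob.Enqueue<TenantMigrationJob>(j => j.MigrateAll(connectionStrings));
        public void MigrateAll(IEnumerable<string> tenantConnectionStrings)
        {
            foreach (var connectionString in tenantConnectionStrings)
            {
                var configuration = new Configuration
                {
                    TargetDatabase = new DbConnectionInfo(connectionString, "System.Data.SqlClient")
                };

                // Update() is a no-op when the database is already migrated,
                // which is why re-running after every deployment is cheap.
                new DbMigrator(configuration).Update();
            }
        }
    }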

Related

Update many Heroku apps with one Rails project

I'd like to know if it is possible to have a single Rails app where many different clients use this same app, but every single client has their own PostgreSQL DB on Heroku, so that the one project is updated for all of these clients when I push to Heroku.
Do you know if this can be done?
And how can I ignore the database.yml file in updates, since every single client has their own DB?
Thanks!
You can but you probably shouldn't!
You can attach any number of Heroku Postgres instances to a Heroku app. You'll see that each instance you create adds a connection string to the list of environment variables - listed under the App's settings tab.
You can map the key string to a customer via some unique identifier. You would then need an interceptor to bind the connection to the relevant database - or choose the relevant connection from a pre-bound list - and add it into the request context for each request.
It's a messy approach and not recommended. What would be slightly less messy is creating a separate schema per customer instead. This way you bind to a single database instance and retain your database.yml config, but each customer has their own dedicated schema. However, these are more architectural concerns than Heroku capabilities. From a Heroku perspective, both the multi-database and the multi-schema approach are possible.
It should be noted that neither approach gives you any more separation at the application-logic level than standard roles and permissions with adequate auth mechanisms... All schemas and/or databases will be visible to the same application regardless of the separation at the database level. So, really, there's little tangible benefit to it.
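The per-request lookup described above is language-agnostic; a rough sketch in C# (the convention tying a customer key to a Heroku config var is an illustrative assumption):

    // Illustrative: resolve a customer's connection string from the
    // Heroku-provided environment variables (e.g. HEROKU_POSTGRESQL_RED_URL).
    using System;

    public static class TenantConnections
    {
        public static string ForCustomer(string customerKey)
        {
            var variable = "HEROKU_POSTGRESQL_" + customerKey.ToUpperInvariant() + "_URL";
            var connectionString = Environment.GetEnvironmentVariable(variable);
            if (string.IsNullOrEmpty(connectionString))
                throw new InvalidOperationException("No database attached for " + customerKey);
            return connectionString;
        }
    }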

bulk create user accounts to asp.net mvc3 membership tables in production environment

In the dev environment I am using the ASP.NET configuration tool in Visual Studio to create a few users for testing. As I move closer to QA and Production, I'm wondering what the best way is to automate the creation of a large number (1000s) of users after application deployment.
I have a CSV with all the usernames, passwords, roles etc., and I want to avail of the encryption and password salting that is built in. I do not want to manually "Register" all these users.
I'm just not sure if this is something I can do (or instruct a db admin to perform for me).
Does anyone know of a way to achieve this?
Any assistance would be greatly appreciated.
Regards
The simplest solution would be to set up a "CSV Upload" form. The CSV would be processed by an MVC action calling Membership.CreateUser in a loop.
The performance of this will probably be good enough.
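A rough sketch of that loop (the CSV column order is an assumption; the configured membership provider handles the hashing and salting mentioned in the question):

    // Sketch of the CSV import; assumes columns username,password,email,role.
    // Membership.CreateUser delegates to the configured provider, so the
    // built-in encryption/salting applies automatically.
    using System.IO;
    using System.Web.Security;

    public static class UserImporter
    {
        public static void Import(Stream csvStream)
        {
            using (var reader = new StreamReader(csvStream))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    var fields = line.Split(',');

                    MembershipCreateStatus status;
                    Membership.CreateUser(
                        fields[0], fields[1], fields[2],
                        null, null,   // question/answer (if your provider allows null)
                        true, null,   // approved; auto-generated provider key
                        out status);

                    if (status == MembershipCreateStatus.Success)
                        Roles.AddUserToRole(fields[0], fields[3]);
                }
            }
        }
    }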
There are a few ways that I know of to approach a batch-processing problem on an ASP.NET site.
Because of the wonky way an ASP.NET site's application pool can get recycled, batch processing is usually done in an external process.
Windows service
One way is a separate Windows service, which picks up the new Excel file and pumps that data in, with a timer that keeps going around. I've seen this used often, and it is quite a pain, because it takes extra work to make it easily deployable.
Update ASP.Net membership from windows service
CacheItem
A second way is to use CacheItems and their expiration timers to do batch processing: you define a cache object with a long timer, and when that expires and the Removed callback gets called, you do your database work. This is good because it deploys with your ASP.NET site, and you have your code in one logical place (a sketch of the pattern is below).
https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
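In outline, the trick from that post looks something like this (the interval and key name are illustrative):

    // Sketch of the cache-expiration background task: insert an item with an
    // absolute expiry, do the work in the removed-callback, then re-insert so
    // the cycle repeats for the lifetime of the application.
    using System;
    using System.Web;
    using System.Web.Caching;

    public static class BackgroundBatch
    {
        private const string CacheKey = "batch-task";

        public static void Start()
        {
            HttpRuntime.Cache.Insert(CacheKey, DateTime.UtcNow, null,
                DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration,
                CacheItemPriority.NotRemovable, OnRemoved);
        }

        private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
        {
            DoBatchWork(); // e.g. process the pending CSV rows
            Start();       // re-register so the task fires again
        }

        private static void DoBatchWork() { /* your database work here */ }
    }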
Workflow Foundation
A third way is to make a Workflow Foundation service. That service gets a call from your ASP.NET site, which instantiates a WF service that does some db work with your Excel file and then goes into a while-loop with a delay of a month in it. This is good because it is not tied to the lifespan of your ASP.NET application pool - you get more control, and this logic can be separated into a different IIS-hosted WCF service.
http://msdn.microsoft.com/en-us/library/dd489452.aspx
Integrating with data is always a pain, though. Remember that the solution that gives you the least work and the least chance of failure when deploying is the best solution.

Windows Azure Multi Tenancy

I am starting a new enterprise web application. It will be hosted on Windows Azure and will be an ASP.NET MVC application talking to a SQL database.
My question relates to multi-tenancy and the correct way to accomplish it. In the past I've created a multi-tenant application by having a tenant table and then putting a TenantID column in every table. This worked fine (but it was only on a smaller scale, so it didn't really exercise it to the nth degree). Looking into the multi-tenant material on Azure, it doesn't seem to recommend this way; it talks about subdomains, splitting tenants, etc. To me, that just seems like a management nightmare. I would like the user to hit a website, enter their tenant login details and boom, they are off.
Is there a simpler way to implement multi-tenancy in Azure that still allows me to use Azure's scalability strengths?
Should I just use the simple TenantID method? Will the Azure framework still scale well to suit?
Should I worry about tenancy at the start or just leave it till the end?
Advice needed.
Thanks
I have done it both ways on Azure. I have done it the way you did previously, where users enter a tenant code upon logging in, and this works fine; I don't see any reason to do it differently. You can use SQL Azure Federations to manage the tenants, so you can easily have multiple databases for scalability.
I have also used the subdomain approach to identify the tenant, but all it did was map the subdomain to a tenant code. I used this in a system where users didn't have to log on, so it was easier for the user.
Worry about it at the start if only to design the database to cope with it.
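A bare-bones sketch of the tenant-code lookup described above (all names are illustrative; a master database maps each code to that tenant's connection string):

    // Illustrative: resolve the tenant database from the code entered at login.
    using System.Data.Entity;
    using System.Linq;

    public class Tenant
    {
        public int Id { get; set; }
        public string Code { get; set; }             // entered at login
        public string ConnectionString { get; set; } // that tenant's database
    }

    public class MasterContext : DbContext
    {
        public DbSet<Tenant> Tenants { get; set; }
    }

    public class TenantContext : DbContext
    {
        public TenantContext(string connectionString) : base(connectionString) { }
        // tenant-scoped DbSets go here
    }

    public static class TenantResolver
    {
        public static TenantContext ForCode(string tenantCode)
        {
            using (var master = new MasterContext())
            {
                var tenant = master.Tenants.Single(t => t.Code == tenantCode);
                return new TenantContext(tenant.ConnectionString);
            }
        }
    }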

Best practice for deploying a website

I would like to know the best practice for deploying a website.
What I usually do is minify all the JavaScript and CSS files, clean the HTML code of comments and publish my solution with Visual Studio.
Are there other, better ways to put a lighter-weight website online?
Try to execute Production deployments when there are few if any users online, such as at night or weekends. Notify users that there will be a scheduled outage.
When deploying to the production environment, you can create an "App_Offline.htm" file and place it in the root of the ASP.NET website. ASP.NET recognises that this file has a special meaning: all dynamic page requests are shown this page instead of the page requested by the user. Typically this page displays a friendly message such as "The server is down for routine maintenance. Please try again in 30 minutes."
Another tip to make deployments less painful is to keep your web.config as similar as possible between your various environments, such as Development, Test and Production. For the things that really have to change between environments, such as connection strings, you can extract these into their own connectionStrings.config file by setting the configSource attribute in web.config.
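For example (the file name is the conventional one; the extracted file is then kept out of the deployment):

    <!-- web.config: identical across environments -->
    <connectionStrings configSource="connectionStrings.config" />

    <!-- connectionStrings.config: differs per environment, not deployed -->
    <connectionStrings>
      <add name="Default"
           connectionString="..."
           providerName="System.Data.SqlClient" />
    </connectionStrings>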
For database deployments, there are some great third party tools (such as Teratrax Database Compare for SQL Server) which allow you to compare the schema and/or data between 2 databases and produce a SQL script that will migrate the target database to the schema of the other database. Whether this works for you will depend on your exact development practices. If you cannot use such tools, you could script every database change, then replay those scripts when deploying to a different environment.
And of course you should ideally have a Test environment which is exactly like Production, which enables you to do all your acceptance testing and to ensure your release is stable and your deployment is going to work before you do the real thing.

How to migrate multiple users' Access dbs to a single SQL Server db

UPDATED 2010-11-25
A legacy stand-alone application (A1) is being re-created as a web application (A2).
A1 is written in Delphi 7 and uses a MS Access database to store the data. A1 has been distributed to ~1000 active users that we have no control over during the build of A2.
The database has ~50 tables, some of which contain user data and some of which contain template data (which does not need to be copied); 3-4 of these user tables are larger (<5000 records), the rest are small (<100).
Once A2 is 'live', users of A1 should be able to migrate to A2. I'm looking for a comparison of scenarios for doing so.
One option is to develop a stand-alone 'update' tool for these users, and have this update tool talk to the A2 database through webservices.
Another option is to allow users to upload their Access db (~15 MB) database to our server, run some kind of SSIS package (overnight, perhaps) to get this into A2 for that user, and delete the Access db afterward.
Am I missing options? Which option is 'best'? (I understand this may be somewhat subjective, but hopefully the pros and cons of the scenarios can at least be made clear.)
I'll gladly make this a community wiki if so demanded.
UPDATE 2010-11-23: it has been suggested that a variant of scenario 1 would be to have the update tool/application talk directly to the production database. Is this feasible?
UPDATE 2011-11: By now, this has been taken into production. Users upload a .zip file containing the .mdb, which is unpacked and placed in a secure location. A nightly scheduled SSIS job comes along and moves the data to staging tables, which are then moved into production through SPs.
I would lean toward uploading the complete database and running the conversion on the server.
In either case you need to write a conversion program. The real question is how much of the conversion you deploy and run on the customers' computers. I would keep that part as simple as possible, i.e. just the upload. That way, if you find any bugs or unexpected data during the conversion, you can simply update the server and not need to re-deploy your conversion program.
The total amount of data you are talking about is not too large to upload, and it sounds like the majority of it would need to be uploaded in any case.
If you install a conversion program locally, it would need a way to recover from a conversion that stopped partway through. That can be a lot more complicated than simply restarting an upload of the Access database.
Also you don't indicate there would be any need for the web services after the conversions are done. The effort to put those services together, and keep them running and secure during the conversions would be far more than a simple upload application or web form.
Another factor is how quickly your customers would convert. If some of them will run the current application for some time period you may need to update your conversion application as the server database changes over time. If you upload the database and run the conversion on the server then only the server conversion program would need to be updated. There would not be any risk of a customer downloading the conversion program but not running it until after the server databases were updated.
We have a similar case where we chose to run the conversion on the server. We built a web page for the user to upload their files, so there was nothing to deploy for the new application. The only downside we found is getting the user to select the correct file: if you use a web form for the upload, you can't pre-select the file name for the user because of security restrictions. In our case we knew where the file was located but the customers did not, so we provide directions on the upload page to help them out. You could avoid this by writing a small desktop application to perform the upload for the users.
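The upload page itself can be very small; a sketch in ASP.NET MVC (controller, action and folder names are illustrative):

    // Sketch: receive the uploaded Access database and drop it in a secure
    // pickup folder for the overnight conversion job.
    using System.IO;
    using System.Web;
    using System.Web.Mvc;

    public class MigrationController : Controller
    {
        [HttpPost]
        public ActionResult Upload(HttpPostedFileBase database)
        {
            if (database == null || database.ContentLength == 0)
                return new HttpStatusCodeResult(400, "No file received");

            var fileName = Path.GetFileName(database.FileName);
            var target = Path.Combine(Server.MapPath("~/App_Data/Uploads"), fileName);
            database.SaveAs(target);

            return RedirectToAction("Thanks");
        }
    }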
The only downside I see to writing a server-based conversion is that some of your template data will be uploaded unnecessarily. That is a small amount of data anyway.
Server Pros:
- No need to re-deploy the conversion due to bugs, unexpected data, or changes to the server database
- Easier to secure (possibly): there is only one access point, the upload. Of course, you are accepting customer data in the form of an Access database, so you still can't trust anything in it.
Server Cons:
- Upload un-needed template data
Desktop Pros:
- ? I'm having trouble coming up with any
Desktop Cons:
- May need multiple versions deployed
As to talking to a server database directly: I have one application that talks to a hosted database directly to avoid creating web services. It works OK, but given the chance I would not take that route again. The internet connection drops on a regular basis, and the SQL providers do not recover very well; we have trained our clients just to try again when that happens. We did this to avoid creating web services for our desktop application - we just reference the IP address in the server connection string. There is an entire list of security reasons not to take this route; we were comfortable with our security setup and the possible risks. In the end, the trade-off of using the desktop application with no modifications was not worth having an unstable product.
Since the new database server is likely to be one of the standard database engines in the industry, why not consider linking the Access application to this database server? That way you can simply send your data up to SQL Server.
I'm not really sure why you'd even consider a set of web services to a database engine when Access supports an ODBC link to that database engine. So one potential upgrade path would be to simply issue a new Access application that is placed in the same directory as the current existing data file (and application). On startup, this application can simply re-link all of its tables to the existing database, and it can ship with a pre-linked set of tables to the database server. This is going to be far less work than building up some type of web-services approach. I suppose part of this centers around where the database server is going to be hosted, but in most cases during the migration period you would have the database server running somewhere everyone can access. And a good many web providers allow external links to their databases now.
It's also not clear whether you're going to create a separate database for each user on the database server, or whether, as your title suggests, it's all going to be placed into one database. If it is all going to be placed into one database, then during the upsizing an additional column that identifies the user location (or however you plan to distinguish each user's data) will need to be added.
How easy this type of migration will be depends on the schema and database layout that the developers are using for the new system. Hopefully (and obviously) it has provisions for each user or location, or however you plan to distinguish each individual user of the system. So I don't suggest web services, but I do suggest linking tables from the Access application to the instance of SQL Server (or whatever server you run).
How best to do this will depend on the referential integrity and business rules that must be enforced, if there are any. For example, is there the possibility of duplicates when the databases are merged? I gather they are being merged from your somewhat cryptic statement: "And yes, one database for all, aspnet membership for user id's".
If you have no control of the 1000+ users of A1, how are you going to get them all to convert to A2?
Have you considered giving them an SQL Server Express DB to upgrade to, and letting them host the Web App on their own servers?
