Linq2DB Parameterized Schema

Ok, this is definitely my last question about Linq2DB! Maybe...
I've gotten through some of the learning curve with Linq2DB for a project that will work against DB2/iSeries data. One problem though is that while my code works against my test database just fine, in production it will need to point at different schemas for the same objects. For instance, a particular user class in one environment would have a table mapping like:
[Table(Schema="ABC", Name="USERS")]
in another environment it might look like:
[Table(Schema="XYZ", Name="USERS")]
I haven't quite figured out how I will approach this in production. Has anyone dealt with this before? Is there a way to do this with a DataContext? Or possibly by digging into the internals of the mapping? Any thoughts or ideas are appreciated!

I would recommend using fluent mapping or configurations for your case.
For fluent mapping, pass the schema name to your fluent mapping builder function:
void ConfigureMappings(MappingSchema ms, string schema)
{
    ms.GetFluentMappingBuilder()
        .Entity<User>()
        .HasSchemaName(schema);
    // configure columns and other entities the same way
}
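For completeness, here is a rough sketch of how the mapping schema might then be attached to a connection. This is only an illustration: the "DB2" connection configuration name and the isProduction flag are hypothetical, not from the question.
// Sketch only: "DB2" is a placeholder connection configuration name.
bool isProduction = false; // hypothetical environment flag
var ms = new MappingSchema();
ConfigureMappings(ms, isProduction ? "XYZ" : "ABC");

using (var db = new DataConnection("DB2"))
{
    db.AddMappingSchema(ms); // the fluent mappings now use the chosen schema
    var users = db.GetTable<User>().ToList();
}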
Configuration-based solution: use different configurations for the test and production environments and pass the configuration name to the data connection constructor:
[Table(Schema="ABC", Name="USERS", Configuration="test")]
[Table(Schema="XYZ", Name="USERS", Configuration="production")]
public class User
{...}
// or make the test configuration the default and override it for production where it differs from the default
[Table(Schema="ABC", Name="USERS")]
[Table(Schema="XYZ", Name="USERS", Configuration="production")]
public class User
{...}
This approach (with configurations) can also be used with the fluent mapper.
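As a rough sketch of how the configuration name is used at runtime (assuming "test" and "production" connection configurations are defined in your linq2db settings):
// "production" picks both the production connection settings and the
// [Table] attributes marked Configuration="production".
using (var db = new DataConnection("production"))
{
    var users = db.GetTable<User>().ToList(); // maps to XYZ.USERS
}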

Related

How to switch between test and production databases with Entity Framework 5

I've got separate test and production databases for my ASP.NET MVC 4 application. Using Entity Framework 5, how can I have Entity Framework switch between these two databases programmatically? The application knows which database should be used at which time, so I just need for the application to be able to change Entity Framework so that it utilizes the correct database at the correct time.
Anyone know how to accomplish this and/or have a good example available?
EDIT: due to how my app is arranged, I am going to need to utilize GvM's answer. The last remaining thing, however, is that I don't know how to send an argument to the base class. Here is what I mean:
public class YourContext : DbContext {
    public YourContext() : base("yourNameOrConnectionString")
    {
    }
}
The problem is that this code does not work:
string dbConnectionString = "MyDBTest";
using (var db = new YourContext(dbConnectionString))
{
    //code to use the db
}
How does one send an argument to a base class, in this case YourContext() : base()?
Use the config transforms. That's what they're there for. Admittedly, though, the config transforms are a little confusing because the debugger doesn't actually use them. Let me elaborate.
By default, you get three web.configs: Web.config, Web.Debug.config and Web.Release.config. The latter two are combined with the first in the Solution Explorer, but you can expand Web.config to see them. Web.Debug.config and Web.Release.config are the transforms. They use a special XML syntax that lets you alter, or transform, settings in the main Web.config file. Each corresponds to a "Configuration", namely "Debug" and "Release", the two built into Visual Studio by default. You can add additional configurations as you need them.
Now, here's where things are confusing. The Debug configuration, despite its name, is never actually used when debugging. A better name would perhaps be Development or Staging; it's the configuration intended for when you deploy the site in a testing capacity, rather than to production. The Release config is for production. So, what you need to do is specify the connection string for your local development database in your main Web.config. Then, in the Debug and Release configs, you add a transform to change it to the staging and production database connection strings, respectively. When you're debugging locally, the one in the main Web.config will be used; when you publish your application, you'll choose either Debug or Release, based on the environment you'll be deploying to, and the transforms will be run to alter the published web.config appropriately.
For more information on transforms and how to write them, see: http://msdn.microsoft.com/en-us/library/dd465326(v=vs.110).aspx
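For illustration, a minimal connection-string transform in Web.Release.config might look like this (the connection name and server are placeholders, not from the question):
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replaces the attributes of the entry whose name matches -->
    <add name="DefaultConnection"
         connectionString="Data Source=ProdServer;Initial Catalog=ProdDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>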
Provide your connection name or connection string to the DbContext constructor.
public class YourContext : DbContext {
    public YourContext(string connection) : base(connection)
    {
    }
}
The string can be a variable your application modifies based on what database you want to connect to.
EDIT: updated the example so that now you have a constructor that accepts a string parameter which then passes it to the base class when instantiated.
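With that constructor in place, the snippet from the question works as written:
string dbConnectionString = "MyDBTest"; // or your production name/connection string
using (var db = new YourContext(dbConnectionString))
{
    //code to use the db
}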

How to serialize domain classes in Grails

I tried to serialize Grails domain classes to Maps or similar in order to be able to store them in memcached.
I want to be able to read the objects only; I don't need GORM CRUD, only to read them without breaking the kind of interfaces they have.
For instance, I could convert domains to maps, because it wouldn't break the interface for access like .<property> or .findAll or similar.
First I tried to do a manual serialization, but it was very error-prone, so I needed something more general.
Then I tried to serialize as a map with a grails codec.
Here is the testing repo.
Here is the snippet.
But I get a StackOverflowError.
I also tried to mark all the domains as Serializable, but then I need to reattach every domain when I bring them back from memcached to avoid Hibernate errors like org.hibernate.LazyInitializationException: could not initialize proxy - no Session.
Do you know a way to achieve this?
It's very frustrating to search Google for something like "storing domain classes in memcached" and find out it's not a common problem.
I haven't seen an out-of-the-box solution for doing this, but if you want to keep it generic you could do it manually (and consistently) like this:
def yourDomainInst = new DefaultGrailsDomainClass(YourDomainClazz)
List listOfOnlyPersistentProperties = yourDomainInst.persistentProperties

def listOfMaps = []
yourObjects.each { obj ->
    def yourNewMap = [:]   // one map per domain instance
    listOfOnlyPersistentProperties.each { prop ->
        def propName = prop.name
        yourNewMap[propName] = obj."$propName"
    }
    listOfMaps << yourNewMap
}
Something like that should work. Note there's probably a hundred errors because I can't try it out right now, but that is the general idea.
Have a look at: Retrieving a list of GORM persistent properties for a domain

EF 4.3 Auto-Migrations with multiple DbContexts in one database

I'm trying to use EF 4.3 migrations with multiple code-first DbContexts. My application is separated into several plugins, which may each have their own DbContext for their domain. The application should use one single SQL database.
When I try to auto-migrate the contexts in an empty database, this only succeeds for the first context. Every other context needs the AutomaticMigrationDataLossAllowed property set to true, but then it tries to drop the tables of the previous one.
So my question is:
How can I tell each migration configuration to look only after the tables defined in its corresponding context and leave all the others alone?
What is the right workflow to deal with multiple DbContexts with auto-migration in a single database?
Thank you!
Here is what you can do. It's very simple.
You can create a configuration class for each of your contexts, e.g.:
internal sealed class Configuration1 : DbMigrationsConfiguration<Context1>
{
    public Configuration1()
    {
        AutomaticMigrationsEnabled = false;
        MigrationsNamespace = "YourProject.Models.ContextNamespace1";
    }
}

internal sealed class Configuration2 : DbMigrationsConfiguration<Context2>
{
    public Configuration2()
    {
        AutomaticMigrationsEnabled = false;
        MigrationsNamespace = "YourProject.Models.ContextNamespace2";
    }
}
Now you add a migration. You don't need to run Enable-Migrations, since you already did the equivalent with the two classes above.
Add-Migration -configuration Configuration1 Context1Init
This will create the migration script for Context1. You can repeat this for the other contexts:
Add-Migration -configuration Configuration2 Context2Init
To update your database:
Update-Database -configuration Configuration1
Update-Database -configuration Configuration2
This can be done in any order, except that you need to make sure each configuration is called in sequence.
Code First Migrations assumes that there is only one migrations configuration per database (and one context per configuration).
I can think of two possible solutions:
Create an aggregate context that includes all the entities of each context and reference this "super" context from your migrations configuration class. This way all the tables will be created in the user's database, but data will only be in the ones that they've installed plugins for.
Use separate databases for each context. If you have shared entities between the contexts, add a custom migration and replace the CreateTable(...) call with a Sql("CREATE VIEW ...") call to get the data from the entity's "originating" database.
I would try #1, since it keeps everything in a single database. You could create a separate project in your solution to contain your migrations and this "super" context. Just add the project, reference all of your plugins' projects, create a context that includes all of the entities, then call Enable-Migrations on this new project. Things should work as expected after that.
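A minimal sketch of option #1 (the context and entity names here are hypothetical):
// Aggregate "super" context referenced only by the migrations
// configuration; each plugin project contributes its entity types.
public class SuperContext : DbContext
{
    public DbSet<PluginAEntity> PluginAEntities { get; set; }
    public DbSet<PluginBEntity> PluginBEntities { get; set; }
}

internal sealed class SuperConfiguration : DbMigrationsConfiguration<SuperContext>
{
}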
I have a working site with multiple contexts using migrations. However, you do need to use a separate database per context, and it's all driven off a *Configuration class in the Migrations namespace of your project. So, for example, CompanyDbContext points to Company.sdf using CompanyConfiguration: update-database -configurationtypename CompanyConfiguration. Another LogDbContext points to Log.sdf using LogConfiguration, etc.
Given this works, have you tried creating 2 contexts pointing at the same database and telling the modelbuilder to ignore the other context's list of tables?
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Ignore<OtherContextsClass>();
    // more of these
}
Since the migrations work with the ModelBuilder, this might do the job.
The crappy alternative is to avoid using Automatic Migrations, generate a migration each time and then manually sift through and remove unwanted statements, then run them, although there's nothing stopping you from creating a simple tool that looks at the Contexts and generated statements and does the migration fixups for you.
Ok, I have been struggling with this for a day now, and here is solution for those seeking the answer...
I am assuming that most people reading this post are here because they have a large DbContext class with a lot of DbSet<> properties and it takes a long time to load. You probably thought to yourself: gee, that makes sense, I should split up the context, since I won't be using all of the DbSets at once, and I will only load a "partial" context based on the situation where I need it. So you split them up, only to find out that Code First migrations don't support your way of revolutionary thinking.
So your first step must have been splitting up the contexts; then you added a MigrationConfiguration class for each of the new contexts and added connection strings named exactly the same as your new context classes.
Then you tried running the newly split-up contexts one by one, doing Add-Migration Context1 and then Update-Database -Verbose...
Everything seemed to work fine, but then you noticed that every subsequent migration deleted all tables from the previous migration and only kept the tables from the very last one.
This is because the current migrations model expects a single DbContext per database, and it has to be a mirror match.
What I also tried (and someone here suggested doing that) was to create a single SuperContext which has all the DbSets in it, create a single migration configuration class and run that in, leave the partial context classes in place, and try to instantiate and use them. EF complains that the backing model has changed. Again, this is because EF compares your partial DbContext to the all-sets context signature that was left over from your SuperContext migration.
This is a major flaw in my opinion.
In my case, I decided that PERFORMANCE is more important than migrations. So what I ended up doing, after I ran in the super context and had all the tables in place, was going into the database and manually deleting the __MigrationHistory table.
Now I can instantiate and use my partial contexts without EF complaining about it. It doesn't find the __MigrationHistory table and just moves on, allowing me to have a "partial" view of the database.
The trade-off, of course, is that any changes to the model will have to be manually propagated to the database, so be careful.
It worked for me though.
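If you'd rather script that manual step than click through the database, a sketch (run once against the fully migrated database; SuperContext is the aggregate context described above):
using (var db = new SuperContext())
{
    // EF 4.3 keeps its model snapshot in __MigrationHistory; dropping it
    // disables the backing-model check for the partial contexts.
    db.Database.ExecuteSqlCommand("DROP TABLE dbo.__MigrationHistory");
}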
As mentioned above by Brice, the most practical solution is to have one super DbContext per application/database.
Having to use only one DbContext for an entire application seems to be a crucial technical and methodological disadvantage, because it affects modularity, among other things. Also, if you are using WCF Data Services, you can only use one DataService per application, since a DataService can map to only one DbContext. So this alters the architecture considerably.
On the plus side, a minor advantage is that all database-related migration code is centralized.
I just came across this problem and realised the reason I had split them into different contexts was purely to group related models into manageable chunks, not for any other technical reason. Instead, I have declared my context as a partial class, and now different code files with different models in them can add their DbSets to the DbContext.
This way the automigration magic still works.
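As an illustration of that layout (the file and entity names are hypothetical):
// File: AppContext.cs
public partial class AppContext : DbContext
{
}

// File: AppContext.Billing.cs - one file per group of related models
public partial class AppContext
{
    public DbSet<Invoice> Invoices { get; set; }
}

// File: AppContext.Users.cs
public partial class AppContext
{
    public DbSet<User> Users { get; set; }
}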
I've got it working with manual migrations, but you can't downgrade, as it can't discriminate between configurations in the __MigrationHistory table. If I try to downgrade, it treats the migrations from the other configurations as automatic, and since I don't allow data loss, it fails. We will only ever be using it to upgrade, though, so it works for our purposes.
It does seem like quite a big omission, though; I'm sure it wouldn't be hard to support, provided there was no overlap between DbContexts.
Surely the solution should be a modification by the Entity Framework team to change the API to support directing the __MigrationHistory table to a table name of your choice, like __MigrationHistory_Context1, so that it can handle the migration of independent DbContext entities. That way they're all treated separately, and it's up to the developer to ensure that the names of entities don't collide.
Seems like there are a lot of people who share my opinion that a duplicate DbContext with references to the superset of entities is a bogus, non-enterprise-friendly way to go about things. Duplicate DbContexts fail miserably for modular (Prism or similar) solutions.
I want people to know that the answer below is what worked for me, but with one caveat: don't use the MigrationsNamespace line.
internal sealed class Configuration1 : DbMigrationsConfiguration<Context1>
{
    public Configuration1()
    {
        AutomaticMigrationsEnabled = false;
        MigrationsNamespace = "YourProject.Models.ContextNamespace1";
    }
}

internal sealed class Configuration2 : DbMigrationsConfiguration<Context2>
{
    public Configuration2()
    {
        AutomaticMigrationsEnabled = false;
        MigrationsNamespace = "YourProject.Models.ContextNamespace2";
    }
}
However, I already had the two databases established with their own contexts defined, so I found myself getting an error saying "YourProject.Models namespace already has ContextNamespace1 defined". This was because the MigrationsNamespace line was causing the DbContext to be defined under the YourProject.Models namespace twice after I tried the Init (once in the Context1Init migration file and once where I had it defined before).
So, what I found I had to do at that point was start my database and migrations from scratch (thankfully I did not have data I needed to keep) by following the directions here:
http://pawel.sawicz.eu/entity-framework-reseting-migrations/
Then I changed the code to NOT include the MigrationsNamespace line.
internal sealed class Configuration1 : DbMigrationsConfiguration<Context1>
{
    public Configuration1()
    {
        AutomaticMigrationsEnabled = false;
    }
}

internal sealed class Configuration2 : DbMigrationsConfiguration<Context2>
{
    public Configuration2()
    {
        AutomaticMigrationsEnabled = false;
    }
}
Then I ran the Add-Migration -configuration Configuration1 Context1Init command again and the Update-Database -configuration Configuration1 command again (for my second context too), and finally everything seems to be working great now.

Is it possible to use a JNDI dataSource read only in Grails?

I need to add a JNDI datasource from a legacy database to my Grails (1.2.2) application.
So far, the resource is added to my Tomcat (5.5) and DataSource.groovy contains:
development {
    dataSource {
        jndiName = "jdbc/lrc_legacy_db"
    }
}
I also created some domain objects mapping the different tables to comfortably load and handle data from the DB with GORM. But I now want to ensure that every connection to this DB is really read-only. My biggest concern here is the dbCreate property and the automatic database manipulation through GORM and the GORM classes.
Is it enough to just skip dbCreate?
How do I assure that the database will only be read and never ever manipulated in any way?
You should use the validate option for dbCreate.
EDIT: The documentation is quite a bit different from when I first posted this answer, so the link doesn't quite get you to where the validate option is explained. A quick find will get you to the right spot.
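Concretely, with the DataSource.groovy from the question, that would look like:
development {
    dataSource {
        jndiName = "jdbc/lrc_legacy_db"
        // "validate" only checks the mappings against the existing
        // schema; it never creates, updates, or drops tables
        dbCreate = "validate"
    }
}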
According to the Grails documentation:
If your application needs to read but never modify instances of a persistent class, a read-only cache may be used
A read-only cache for a domain class can be configured as follows:
1. Enable Caching
Add something like the following to DataSource.groovy
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = true
    cache.provider_class = 'org.hibernate.cache.EhCacheProvider'
}
2. Make Cache Read-Only
For each domain class, you will need to add the following to the mapping closure:
static mapping = {
    cache usage: 'read-only', include: 'non-lazy'
}

SharpArchitecture - FluentNHibernate Schema Generation?

I'm trying out SharpArchitecture and want to have FluentNHibernate generate my database schema for my MVC WebSite.
I'm a bit lost on where to do this. I can do it by adding the SchemaUpdate thingy in the Global.asax.cs file right after NHibernateInitializer.Instance().InitializeNHibernateOnce(InitializeNHibernateSession); in Application_BeginRequest. (If I place it before that call, SharpArch throws an exception.)
This doesn't seem right and it smells bad. It feels like I'm missing something basic in Sharp Architecture that allows for automatic schema generation to my DB (MSSQL 2005). Or am I not? If not, please fill me in on best practices for schema generation with Fluent NHibernate and Sharp Architecture.
Thanks in advance!
Edit: I might add that I'm looking at the Northwind sample project in SharpArch, but want to make FNH generate the schema instead.
You don't want to do it in Application_BeginRequest.
To auto-gen the DDL, do it in your TDD classes: create a special class that you can manually call when you need to generate the DDL for your development database.
Something like:
private static void CreateDatabaseFromFluentNHibernateMappings()
{
    var mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies();
    SchemaExport schema = new SchemaExport(
        NHibernateSession.Init(new SimpleSessionStorage(), mappingAssemblies, NHIBERNATE_CFG_XML));
    schema.Execute(true, true, false);
}
This will generate and execute the DDL based on your mappings to the database you specify in your NHibernate config file (in the NHIBERNATE_CFG_XML). The database, albeit empty, should already exist.
You can also create another method in your class that can update the schema of the development database as you develop in case you have added new entities, properties, etc.
private static void UpdateExistingDatabaseFromFluentNHibernateMappings()
{
    var mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies();
    SchemaUpdate schema = new SchemaUpdate(
        NHibernateSession.Init(new SimpleSessionStorage(), mappingAssemblies, NHIBERNATE_CFG_XML));
    schema.Execute(true, true);
}
This will update an existing database with the changes you have made in FNH without destroying the existing database. Very useful, especially when you might have test data already in the database.
And finally, you can use NDbUnit to preload a database based on test data defined in XML in your project and kept under SCM. Great when you have a team working on the same database and you want to preload it with data, so everyone starts with the same blank slate.
Using NDbUnit:
private static void LoadTheTestDataIntoDb()
{
    const string connectionstring = "..."; // your connection string to your db
    NDbUnit.Core.INDbUnitTest sqlDb = new NDbUnit.Core.SqlClient.SqlDbUnitTest(connectionstring);
    sqlDb.ReadXmlSchema("..."); // your XML schema file defining your database (XSD)
    sqlDb.ReadXml("...");       // your XML file that has your test data in it (XML)
    // Delete all from existing db and then load test data allowing for identity inserts
    sqlDb.PerformDbOperation(NDbUnit.Core.DbOperationFlag.CleanInsertIdentity);
}
This requires you to use NDbUnit. Thanks to Stephen Bohlen for it!
I hope this helps; I wrote this kinda quickly, so if I confused you, let me know.
