I have the following code:
using (MySqlConnection conn = new MySqlConnection(connStr))
{
    conn.Open();
    MySqlCommand cmd = conn.CreateCommand();
    cmd.CommandText = "SELECT * FROM Events";
    cmd.CommandType = CommandType.Text;
    cmd.ExecuteNonQuery();
}
From reading a few articles on this site, it is suggested that the DbCommand should be in a using block; however, I can't see why this is needed.
The connection is closed by the outer using block, so what is the DbCommand holding on to that requires a using block of its own?
Is it really the case that if a class implements IDisposable you must use a using block or manually call Dispose?
I ran a simulation with 100 threads against the code above, and against a version with a using block around the DbCommand, and I could see no real difference in memory usage.
DbCommand is abstract and makes no assumptions about which native resources the vendor-specific subclass will hold, or in what order they should be released. Properly nesting using blocks seems a reasonable implicit coding convention.
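For what it's worth, the nested pattern is short. Here is a minimal sketch, assuming the MySql.Data provider and swapping in a reader since the statement is a SELECT:
using (MySqlConnection conn = new MySqlConnection(connStr))
using (MySqlCommand cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = "SELECT * FROM Events";

    using (MySqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process each row here
        }
    }
}
Whether MySqlCommand currently holds anything unmanaged is an implementation detail of the provider; the using blocks simply guarantee Dispose runs on the connection, command and reader even when an exception is thrown.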
My problem is probably best explained with code.
Consider the snippet below:
// First read
OntModel m1 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m1,uri0);
m1.loadImports();
// Second read (from the same URI)
OntModel m2 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m2,uri0);
m2.loadImports();
where uri0 points to a valid RDF file describing an ontology model with n imports.
and the following custom ReadHook (which has been set in advance):
@Override
public String beforeRead(Model model, String source, OntDocumentManager odm) {
    System.out.println("BEFORE READ CALLED: " + source);
    return source;
}
Global FileManager and OntDocumentManager are used with the following settings:
processImports = true;
caching = true;
If I run the snippet above, the model will be read from uri0 and beforeRead will be invoked exactly n times (once for each import).
However, in the second read, beforeRead won't be invoked even once.
What should I reset, and how, so that Jena invokes beforeRead during the second read as well?
What I have tried so far:
At first I thought it was due to caching being on, but turning it off or clearing it between the first and second read didn't do anything.
I have also tried removing all ignoredImport records from m1. Nothing changed.
I finally managed to solve this. The problem was in ModelFactory.createOntologyModel(). Ultimately, this call is translated to ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RDFS_INF, null).
All ontology models created with the static OntModelSpec.OWL_MEM_RDFS_INF share its ImportsModelMaker and some of its other objects, which results in shared state. Apparently, this state prevented the read hook from being invoked a second time for the same imports.
This can be prevented by creating a custom, independent and non-static OntModelSpec instance and using it when creating an OntModel, for example:
new OntModelSpec( ModelFactory.createMemModelMaker(), new OntDocumentManager(), RDFSRuleReasonerFactory.theInstance(), ProfileRegistry.OWL_LANG );
By day I'm a C# programmer, but an F# enthusiast.
Whilst doing a tutorial (Suave) I stumbled upon this error:
System.NotSupportedException
HResult=0x80131515
Message=Enlisting in Ambient transactions is not supported.
Source=System.Data.SqlClient
StackTrace:
at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
at System.Data.SqlClient.SqlConnection.Open()
at FSharp.Data.Sql.Providers.MSSqlServerProvider.FSharp-Data-Sql-Common-ISqlProvider-ProcessUpdates(IDbConnection con, ConcurrentDictionary`2 entities, TransactionOptions transactionOptions, FSharpOption`1 timeout)
at <StartupCode$FSharp-Data-SqlProvider>.$SqlRuntime.DataContext.f#1-52(SqlDataContext __, IDbConnection con, Unit unitVar0)
at FSharp.Data.Sql.Runtime.SqlDataContext.FSharp-Data-Sql-Common-ISqlDataContext-SubmitPendingChanges()
at Program.main(String[] argv) in C:\Users\M_R_N\source\repos\ConsoleApp2\ConsoleApp2\Program.fs:line 34
Yet the code seems so trivial that I can't believe it doesn't work. We seem to be able to read data from a (SQL Express) database, but not write to it (or at least not delete; I've not tried adding). I don't really know what an ambient transaction is, and I'm not too concerned about transactional behaviour; I simply want to select some data and update or delete it.
This is all the code....
open System
open FSharp.Data.Sql
[<Literal>]
let ConnectionString =
"Data Source=(localdb)\ProjectsV13;Initial Catalog=suavemusicstore;Integrated Security=SSPI;Connect Timeout=30;Encrypt=False;TrustServerCertificate=True;ApplicationIntent=ReadWrite;MultiSubnetFailover=False"
type Sql =
SqlDataProvider<
ConnectionString = ConnectionString,
DatabaseVendor = Common.DatabaseProviderTypes.MSSQLSERVER>
type DbContext = Sql.dataContext
type Album = DbContext.``dbo.AlbumsEntity``
type Genre = DbContext.``dbo.GenresEntity``
let getAlbum id (ctx : DbContext) : Album option =
query {
for album in ctx.Dbo.Albums do
where (album.AlbumId = id)
select album
} |> Seq.tryHead
[<EntryPoint>]
let main argv =
let ctx = Sql.GetDataContext()
match (getAlbum 2 ctx) with
| Some(album) ->
album.Delete()
ctx.SubmitUpdates() // EXCEPTION thrown here
0
| _ -> 0
Is there a workaround? It's the first time I've used type providers and .NET Core, yet it seems you cannot write a simple CRUD app.
This HAS been reported elsewhere, mostly for C# EF apps, where I think there is more scope to work around the problem (maybe).
Any ideas how to work around it? I've tried upgrading/downgrading various NuGet packages, to no avail.
I had the same issue not long ago in my toy F# project. I did not find a proper solution and ended up ignoring transactions entirely. This doesn't solve the underlying issue, but for me it was enough (the project is mainly for learning purposes).
let TransactionOptions = {IsolationLevel = IsolationLevel.DontCreateTransaction; Timeout = TimeSpan.FromSeconds(1.0)}
let dbContext = Sql.GetDataContext(TransactionOptions)
I don't know if the above works, but it seems like a reasonable workaround.
I DID fix my problem, but by reverting to a .NET Framework-based console app template rather than the .NET Core one.
Basically, I use the Any23 distiller to extract RDF statements from files with embedded RDFa (the actual files were created by DBpedia Spotlight using the xhtml+xml output option). Using the Any23 RDFa distiller I can extract the RDF statements (I also tried Java-RDFa, but with that I could only extract the prefixes!). However, when I try to pass the statements to a Jena model and print the results to the console, nothing happens!
This is the code I am using:
File myFile = new File("T1");
Any23 runner= new Any23();
DocumentSource source = new FileDocumentSource(myFile);
ByteArrayOutputStream outA = new ByteArrayOutputStream();
InputStream decodedInput=new ByteArrayInputStream(outA.toByteArray()); //convert the output stream to input so i can pass it to jena model
TripleHandler writer = new NTriplesWriter(outA);
try {
runner.extract(source, writer);
} finally {
writer.close();
}
String ttl = outA.toString("UTF-8");
System.out.println(ttl);
System.out.println();
System.out.println();
Model model = ModelFactory.createDefaultModel();
model.read(decodedInput, null, "N-TRIPLE");
model.write(System.out, "TURTLE"); // prints nothing!
Can anyone tell me what I have done wrong? Probably multiple things!
Is there any easy way I can extract the subjects of the RDF statements directly from Any23 (bypassing Jena)?
As I am quite inexperienced in programming any help would be really appreciated!
You are calling
InputStream decodedInput = new ByteArrayInputStream(outA.toByteArray());
before Any23 has extracted any triples, so at that point outA is still empty.
Move this line to after the try/finally block.
Background:
As described in another question here on SO, I have a WinForms solution (Finance) with many projects (fixed projects for the solution).
Now one of my customers has asked me to "upgrade" the solution and add projects/modules that will come from another WinForms solution (HR).
I really don't want to keep these projects as fixed projects in the existing Finance solution. Instead, I'm trying to create plugins that load the GUI, business logic and data layer, all using MEF.
Question:
I have a context (a DbContext built to implement the Generic Repository pattern) with a list of external contexts (loaded using MEF - these contexts represent the contexts from each plugin, also built with the Generic Repository pattern).
Let's say I have this:
public class MainContext : DbContext
{
    public List<IPluginContext> ExternalContexts { get; set; }

    // other stuff here
}
and
public class PluginContext_A : DbContext, IPluginContext
{ /* Items from this context */ }
public class PluginContext_B : DbContext, IPluginContext
{ /* Items from this context */ }
Within the MainContext class, I already have both external contexts (from the plugins) loaded.
With that in mind, let's say I have a transaction that will impact both the MainContext and the PluginContext_B.
How do I perform updates/inserts/deletes on both contexts within one transaction (unit of work)?
Using IUnityOfWork I can defer SaveChanges() to the last item, but as far as I know I must have a single context for it to work as a single transaction.
There's a way using MSDTC (TransactionScope), but this approach is terrible and I'd rather not use it at all (also because I'd need to enable MSDTC on clients and the server, and I've had crashes and leaks with it all the time).
Update:
Systems are using SQL Server 2008 R2, never below.
If it's possible to use TransactionScope in a way that won't escalate to MSDTC, that's fine, but I've never achieved it; every time I've used TransactionScope it escalated to MSDTC. According to another post on SO there are some cases where TransactionScope will not escalate to MSDTC: check here. But I'd really prefer some approach other than TransactionScope...
If you are using multiple contexts, each with a separate connection, and you want to save data to those contexts in a single transaction, you must use TransactionScope with a distributed transaction (MSDTC).
Your linked question is not that case, because in that scenario the first connection does not modify data, so it can be closed before the connection that modifies data is opened. In your case data are modified concurrently on multiple connections, which requires a two-phase commit and MSDTC.
You can try to solve it by sharing a single connection among multiple contexts, but that can be quite tricky. I'm not sure how reliable the following sample is, but you can give it a try:
using (var connection = new SqlConnection(connectionString))
{
    var c1 = new Context(connection);
    var c2 = new Context(connection);

    c1.MyEntities.Add(new MyEntity() { Name = "A" });
    c2.MyEntities.Add(new MyEntity() { Name = "B" });

    connection.Open();

    using (var scope = new TransactionScope())
    {
        // This is necessary because DbContext doesn't contain the necessary methods
        ObjectContext obj1 = ((IObjectContextAdapter)c1).ObjectContext;
        obj1.SaveChanges(SaveOptions.DetectChangesBeforeSave);

        ObjectContext obj2 = ((IObjectContextAdapter)c2).ObjectContext;
        obj2.SaveChanges(SaveOptions.DetectChangesBeforeSave);

        scope.Complete();

        // Only after a successful commit of both save operations can we accept changes;
        // otherwise, on a rollback caused by the second context, the changes from the first
        // context would already be accepted = lost
        obj1.AcceptAllChanges();
        obj2.AcceptAllChanges();
    }
}
The Context constructor is defined as:
public Context(DbConnection connection) : base(connection, false) { }
The sample itself worked for me but it has multiple problems:
The first use of the contexts must happen while the connection is still closed. That is why I add the entities before opening the connection.
I prefer to open the connection manually, outside of the transaction, but perhaps that is not needed.
Both SaveChanges calls run successfully, and Transaction.Current has an empty distributed transaction Id, so it should still be local.
Saving is much more complicated, and you must use ObjectContext because DbContext doesn't expose all the necessary methods.
It may not work in every scenario. Even MSDN says this:
Promotion of a transaction to a DTC may occur when a connection is closed and reopened within a single transaction. Because the Entity Framework opens and closes the connection automatically, you should consider manually opening and closing the connection to avoid transaction promotion.
The problem with the DbContext API is that it closes and reopens the connection even if you open it manually, so it is an open question whether the API always correctly detects that it is running inside a transaction and leaves the connection open.
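For reference, the manual open the MSDN quote describes would look roughly like the sketch below. This is only an illustration: MyContext and MyEntities are hypothetical names, and, as noted above, the DbContext API may still close and reopen the connection behind your back, so it is not a guaranteed fix.
using (var scope = new TransactionScope())
using (var ctx = new MyContext())
{
    // Open the underlying connection ourselves so EF does not open/close it
    // per operation inside the transaction, which is what can trigger
    // promotion to a distributed transaction.
    ctx.Database.Connection.Open();

    ctx.MyEntities.Add(new MyEntity { Name = "A" });
    ctx.SaveChanges();

    scope.Complete();
}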
@Ladislav Mrnka
You were right from the start: I have to use MSDTC.
I've tried multiple things here including the sample code I've provided.
I've tested it many times with changes here and there, but it won't work. The problem goes deep into how EF and DbContext work, and to change that I'd end up writing my very own ORM tool, which is not the point.
I've also talked to a friend (an MVP) who knows a lot about EF.
We tested some other things here, but it won't work the way I want it to. I'll end up with multiple isolated transactions (my sample code was an attempt to tie them together), and with this approach I have no way to enforce a full rollback automatically. I'd have to write a lot of generic/custom code to manually roll back changes, which raises another question: what if this sort of rollback fails (it's not a real rollback, just a compensating update)?
So the only way we found is to use MSDTC and build some tools to help debug/test whether DTC is enabled, whether client/server firewalls are OK, and so on.
Thanks anyway.
=)
So, any chance this has changed by October 19th? All over the intertubes, people suggest the following code, and it doesn't work:
(_contextA as IObjectContextAdapter).ObjectContext.Connection.Open();
(_contextB as IObjectContextAdapter).ObjectContext.Connection.Open();

using (var transaction = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted, Timeout = TimeSpan.MaxValue }))
{
    _contextA.SaveChanges();
    _contextB.SaveChanges();

    // Commit the transaction
    transaction.Complete();
}

// Close open connections
(_contextA as IObjectContextAdapter).ObjectContext.Connection.Close();
(_contextB as IObjectContextAdapter).ObjectContext.Connection.Close();
This is a serious drag for implementing a single Unit of Work class across repositories. Any new way around this?
To avoid using MSDTC (distributed transaction):
This forces you to use one connection within the transaction, as well as just one transaction; it should throw an exception otherwise.
Note: At least EF6 is required
class TransactionsExample
{
    static void UsingExternalTransaction()
    {
        using (var conn = new SqlConnection("..."))
        {
            conn.Open();

            using (var sqlTxn = conn.BeginTransaction(System.Data.IsolationLevel.Snapshot))
            {
                try
                {
                    var sqlCommand = new SqlCommand();
                    sqlCommand.Connection = conn;
                    sqlCommand.Transaction = sqlTxn;
                    sqlCommand.CommandText =
                        @"UPDATE Blogs SET Rating = 5" +
                        " WHERE Name LIKE '%Entity Framework%'";
                    sqlCommand.ExecuteNonQuery();

                    using (var context =
                        new BloggingContext(conn, contextOwnsConnection: false))
                    {
                        context.Database.UseTransaction(sqlTxn);

                        var query = context.Posts.Where(p => p.Blog.Rating >= 5);
                        foreach (var post in query)
                        {
                            post.Title += "[Cool Blog]";
                        }
                        context.SaveChanges();
                    }

                    sqlTxn.Commit();
                }
                catch (Exception)
                {
                    sqlTxn.Rollback();
                }
            }
        }
    }
}
Source:
http://msdn.microsoft.com/en-us/data/dn456843.aspx#existing
In EF4, reading multiple result sets from a stored procedure was not easily possible. You either had to drop down to classic ADO.NET (a DataReader), use ObjectContext.Translate, or use the EFExtensions project.
Has this been implemented off the shelf in EF CTP5?
If not, what is the recommended way of doing this?
Do we have to cast the DbContext<T> as an IObjectContextAdapter and access the underlying ObjectContext in order to get to this method?
Can someone point me to a good article on doing this with EF CTP5?
So I got this working; here's what I have:
internal SomeInternalPOCOWrapper FindXXX(string xxx)
{
    Condition.Requires(xxx).IsNotNullOrEmpty();

    var someInternalPokey = new SomeInternalPOCOWrapper();
    var ctx = (this as IObjectContextAdapter).ObjectContext;

    using (var con = new SqlConnection("xxxxx"))
    {
        con.Open();

        DbCommand cmd = con.CreateCommand();
        cmd.CommandText = "exec dbo.usp_XXX @xxxx";
        cmd.Parameters.Add(new SqlParameter("@xxxx", xxx));

        using (var rdr = cmd.ExecuteReader())
        {
            // -- RESULT SET #1
            someInternalPokey.Prop1 = ctx.Translate<InternalPoco1>(rdr);

            // -- RESULT SET #2
            rdr.NextResult();
            someInternalPokey.Prop2 = ctx.Translate<InternalPoco2>(rdr);

            // -- RESULT SET #3
            rdr.NextResult();
            someInternalPokey.Prop3 = ctx.Translate<InternalPoco3>(rdr);

            // RESULT SET #4
            rdr.NextResult();
            someInternalPokey.Prop4 = ctx.Translate<InternalPoco4>(rdr);
        }

        con.Close();
    }

    return someInternalPokey;
}
Essentially, it's like classic ADO.NET: you read from the data reader, advance to the next result set, and so on.
But at least we have the Translate method, which seemingly does a left-to-right mapping between the result set fields and the supplied entity.
Note the method is internal.
My Repository calls this method, then hydrates the DTO into my domain objects.
I'm not 100% happy with it for 3 reasons:
We have to cast the DbContext to IObjectContextAdapter. The Translate method should be on the DbContext<T> class, IMO.
We have to use classic ADO.NET objects. Why? Stored procedures are a must-have for any ORM. My main gripe with EF is the lack of stored procedure support, and this does not seem to have been rectified in EF CTP5.
You have to open a new SqlConnection. Why can't it use the same connection as the one opened by the EF context? (A possible workaround is sketched below.)
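For the third point, one possible workaround (a sketch only, not something I have verified against this code) is to reuse the store connection the context already holds instead of opening a second SqlConnection. The names mirror the snippet above; EntityConnection.StoreConnection is how the underlying DbConnection is reached from an ObjectContext in EF 4.x:
var ctx = (this as IObjectContextAdapter).ObjectContext;

// The ObjectContext's connection is an EntityConnection wrapping the real store connection.
var storeConnection = ((EntityConnection)ctx.Connection).StoreConnection;

bool wasClosed = storeConnection.State == ConnectionState.Closed;
if (wasClosed)
    storeConnection.Open();

try
{
    using (DbCommand cmd = storeConnection.CreateCommand())
    {
        cmd.CommandText = "dbo.usp_XXX";
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SqlParameter("@xxxx", xxx));

        using (var rdr = cmd.ExecuteReader())
        {
            // Same Translate / NextResult pattern as in the method above.
        }
    }
}
finally
{
    if (wasClosed)
        storeConnection.Close();
}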
Hope this both helps someone and sends a message to the EF team: we need multiple result set support for sprocs off the shelf. You can map a stored proc to a complex type, so why can't we map a stored proc to multiple complex types?